Tag Archives: twitter

The Copyright Directive: the state of the revision, and why to act now

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/26/copyright-5/

There is new development in the EU copyright revision, as is clear from the announcements of the Bulgarian Presidency, from participants in the revision, and from Julia Reda, who has had a very clear view of what she wants changed in the legal framework (a common regime of exceptions, and modernization so that the legal framework keeps pace with technological development) and who is now following the legislative process closely.

The governments of the EU member states have adopted a position on the copyright reform without substantive changes to Article 11 (the new right for publishers) and Article 13 (upload filters). The draft is on Reda's site, and Politico shows the amendments affecting the publishers' right in color.

Now Parliament has to stop them, Reda writes.

You now have a chance to exert influence, a chance that will disappear in two years, when everyone will "suddenly" face the challenge of deploying filters and the link tax. Experts agree almost unanimously that the draft copyright reform is genuinely bad.

Update: Member State governments have just adopted their position on #copyright, with no significant changes to the #CensorshipMachines and #LinkTax provisions. It is now up to Parliament to stop them and #FixCopyright. https://t.co/1JwNvQn24n pic.twitter.com/KAgqV3YYG1


Two charts from Reda's site on the two provisions against which support is being gathered (see also the academics), showing where the member states and the parties in the European Parliament stand:

Social Media Sites Are Full of Pirate Champions League Streamers

Post Syndicated from Ernesto original https://torrentfreak.com/social-media-sites-are-full-of-pirate-champions-league-streamers-180526/

This evening, Liverpool and Real Madrid will go head to head in the Champions League final, one of the biggest sports events of the year.

Hundreds of millions of football fans from around the world will be glued to their televisions to follow the spectacle, while the hashtags #RMALIV and #UCLfinal are trending on social media.

While Twitter, Facebook and other social media are great ways to keep fans engaged and generate traction, they also present a threat. According to data released by the global anti-piracy outfit Irdeto, social media rivals traditional pirate streaming sites.

The company analyzed the number of pirated streams it ran into during the knockout stages of the Champions League and found 5,100 unique illegal streams that were rebroadcasting the matches.

Roughly 40 percent of these unauthorized broadcasts came from ‘social’ platforms including Periscope, Facebook and Twitch. Irdeto found 2,093 streams on these sites with an estimated 4,893,902 viewers.

Regular web-based streams on traditional sports pirate sites were the most popular (2,121), followed by ones found through Kodi-addons (886).

“These viewing figures combined with the number of UEFA Champions League streams detected across a variety of channels suggests that more needs to be done to stop the illegal distribution of high profile live European football matches,” the company writes.

Red card…

Rory O’Connor, Irdeto’s Senior Vice President of Cybersecurity Services, notes that criminals are “earning a fortune” from these activities. At the same time, he stresses that people who stream the matches on social media could face criminal action.

“The criminals who profit from these illegal streams have little regard for their viewers and are exposing them to cybercrime, inappropriate content and malware infection. Also, viewers of illegal content can face criminal penalties if they decide to share content with friends on social media,” O’Connor says.

Besides sharing infographics and reporting noteworthy statistics, such as Real Madrid being the most-viewed team with 2,856,011 viewers of illegal social media streams during the knockout stage, Irdeto can also take action.

Whether Irdeto already works for UEFA, or whether this is an unsolicited application, is not known to us, but the company does work for other rightsholders.

So instead of tuning into the final tonight, they will probably be busy tracking down pirate broadcasts on social media and elsewhere, hoping to shut them down as soon as possible.

The game is on.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Transparency of political advertising

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/26/polit_ad_fb_twitter/

How do you solve a problem like Russian propaganda? How do you protect elections from interference? c|net reports on new steps by the internet companies.

Facebook's and Twitter's answer is more transparency about political advertising: the two companies are taking measures to make visible who pays for political ads. Google is also preparing a similar transparency policy.

Facebook is also creating an archive of political ad data at facebook.com/politicalcontentads. A similar Ads Transparency Center is to be created at Twitter as well.

In the US there is a draft law, the Honest Ads Act; if it is adopted, transparency of online political advertising will become a legal obligation.

The text and an explanation are available on the Congress website; from the rationale: the Honest Ads Act would prevent foreign actors from influencing our elections by ensuring that political ads sold online are covered by the same rules as ads sold on television, radio, and satellite. It introduces:

  • a requirement that digital platforms with at least 50,000,000 monthly viewers maintain a public archive; each file would contain a digital copy of the ad, a description of the audience the ad targets, the number of views generated, the dates and times of publication, the rates charged, and the buyer's contact information;
  • a requirement that online platforms make all reasonable efforts to ensure that foreign individuals and entities are not buying political ads in order to influence the American electorate.

 

Fully-Loaded Kodi Box Sellers Receive Hefty Jail Sentences

Post Syndicated from Andy original https://torrentfreak.com/fully-loaded-kodi-box-sellers-receive-hefty-jail-sentences-180524/

While users of older peer-to-peer based file-sharing systems have to work relatively hard to obtain content, users of the Kodi media player have things an awful lot easier.

As standard, Kodi is perfectly legal. However, when augmented with third-party add-ons it becomes a media discovery powerhouse, providing most of the content anyone could desire. A system like this can be set up by the user but for many, buying a so-called “fully-loaded” box from a seller is the easier option.

As a result, hundreds – probably thousands – of cottage industries have sprung up to service this hungry market in the UK, with regular people making a business out of setting up and selling such devices. Until three years ago, that’s what Michael Jarman and Natalie Forber of Colwyn Bay, Wales, found themselves doing.

According to reports in local media, Jarman was arrested in January 2015 when police were called to a disturbance at Jarman and Forber’s home. A large number of devices were spotted and an investigation was launched by Trading Standards officers. The pair were later arrested and charged with fraud offenses.

While 37-year-old Jarman pleaded guilty, 36-year-old Forber initially denied the charges and was due to stand trial. However, she later changed her mind and like Jarman, pleaded guilty to participating in a fraudulent business. Forber also pleaded guilty to transferring criminal property by shifting cash from the scheme through various bank accounts.

The pair attended a sentencing hearing before Judge Niclas Parry at Caernarfon Crown Court yesterday. According to local reporter Eryl Crump, the Court heard that the couple had run their business for about two years, selling around 1,000 fully-loaded Kodi-enabled devices for £100 each via social media.

According to David Birrell for the prosecution, the operation wasn’t particularly sophisticated but it involved Forber programming the devices as well as handling customer service. Forber claimed she was forced into the scheme by Jarman but that claim was rejected by the prosecution.

Between February 2013 and January 2015 the pair banked £105,000 from the business, money that was transferred between bank accounts in an effort to launder the takings.

Reporting from Court via Twitter, Crump said that Jarman’s defense lawyer accepted that a prison sentence was inevitable for his client but asked for the most lenient sentence possible.

Forber’s lawyer pointed out she had no previous convictions. The mother-of-two broke up with Jarman following her arrest and is now back in work and studying at college.

Sentencing the pair, Judge Niclas Parry described the offenses as a “relatively sophisticated fraud” carried out over a significant period. He jailed Jarman for 21 months and Forber for 16 months, suspended for two years. She must also carry out 200 hours of unpaid work.

The pair will also face a Proceeds of Crime investigation which could see them paying large sums to the state, should any assets be recoverable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

AWS GDPR Data Processing Addendum – Now Part of Service Terms

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-gdpr-data-processing-addendum/

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that use cloud services to process personal data will need to have a data processing agreement in place with their cloud services provider if they are to comply with the GDPR. As early as April 2017, we announced that AWS had a GDPR-ready DPA available for its customers. In this way, we started offering our GDPR DPA to customers over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online Service Terms, no extra engagement is needed by our customers and partners to be compliant with the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey towards GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center at: https://aws.amazon.com/compliance/gdpr-center/

-Chad

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Internet Association Blasts MPAA’s ‘Crony Politics’

Post Syndicated from Ernesto original https://torrentfreak.com/internet-association-blasts-mpaas-crony-politics-180516/

Last month, MPAA Chairman and CEO Charles Rivkin used the Facebook privacy debacle to voice his concern about the current state of the Internet.

“The Internet is no longer nascent – and people around the world are growing increasingly uncomfortable with what it’s becoming,” Rivkin wrote in his letter to several Senators, linking Internet-related privacy breaches to regulation, immunities, and safe harbors.

“The moment has come for a national dialogue about restoring accountability on the internet. Whether through regulation, recalibration of safe harbors, or the exercise of greater responsibility by online platforms, something must change.”

While it’s good to see that the head of Hollywood’s main lobbying group is concerned about Facebook users, not everyone is convinced of his good intentions. Some suggest that the MPAA is hijacking the scandal to further its own, unrelated, interests.

This is exactly the position taken by the Internet Association, a US-based organization representing the country's leading Internet-based businesses, with prominent members including Google, Twitter, Amazon, Reddit, Yahoo, and Facebook.

Several of these companies were the target of the MPAA’s criticism, named or not, which prompted the Internet Association to respond.

In an open letter to House Energy and Commerce Committee Chairman Greg Walden, the group’s president and CEO, Michael Beckerman, lashes out against the MPAA and similar lobbying groups. These groups hijack the regulatory debate with anti-internet lobbying efforts, he says.

“Look no further than the gratuitous letter Motion Picture Association of America, Inc. Chairman & CEO Charles Rivkin submitted to the Energy and Commerce Committee during your recent Zuckerberg hearing,” Beckerman writes.

“The hearing had nothing to do with the Motion Picture industry, but Mr. Rivkin demonstrated shameless rent-seeking by calling for regulation on internet companies simply in an effort to protect his clients’ business interest.”

These rent-seeking efforts are part of the “crony politics” used by “pre-internet” companies to protect their old business models, the Internet Association’s CEO adds.

“This blatant display of crony politics is not unique to the big Hollywood studios, but rather emblematic of a broader anti-consumer lobbying campaign. Many other pre-internet industries —telcos, legacy tech firms, hotels, and others — are looking to defend old business models by regulating a rising competitor to the clear detriment of consumers.”

These harsh words show that the rift between Silicon Valley and Hollywood is still wide open.

It’s clear that the MPAA and other copyright industry groups are still hoping for stricter regulation to ensure that Internet companies are held accountable. Privacy is generally not their main focus though.

They mostly want companies such as Google and Facebook to prevent piracy and compensate rightsholders. Whether using the Facebook privacy scandal was a good way to bring this message to the forefront depends on which camp one is in.

While the Internet Association bashes the MPAA’s efforts, they don’t discount the idea that more can be done to prevent and stop abuse.

“As technology and services evolve to better meet user needs, bad actors will find ways to take advantage. Our members are ever vigilant and work hard to stop them. The task is never done, and we pledge to work harder and do even better,” Beckerman notes.

The Internet Association’s full letter, spotted by Variety, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Roku Displays FBI Anti-Piracy Warning to Legitimate YouTube & Netflix Users

Post Syndicated from Andy original https://torrentfreak.com/roku-displays-fbi-anti-piracy-warning-to-legitimate-youtube-netflix-users-180516/

In 2018, dealing with copyright infringement claims is a daily issue for many content platforms. The law in many regions demands swift attention and in order to appease copyright holders, most platforms are happy to oblige.

While it’s not unusual for ‘pirate’ content and services to suddenly disappear in response to a DMCA or similar notice, the same is rarely true for entire legitimate services.

But that’s what appeared to happen on the Roku platform during the night, when YouTube, Netflix and other channels disappeared only to be replaced with an ominous anti-piracy warning.

As tweets from affected users showed, the message caused confusion among Roku owners who were only using their devices to access legal content. Messages replacing Netflix and YouTube seemed to have caused the greatest number of complaints, but many other services were affected.

FoxSportsGo, FandangoNow, and India-focused YuppTV and Hotstar were also blacked out. As were the yoga and transformational videos specialists over at Gaia, the horror buffs at ChillerFlix, and UK TV service BritBox.

But while users scratched their heads, with some misguidedly blaming Roku for not being diligent enough against piracy, Roku took to Twitter to reveal that rather than anti-piracy complaints against the channels in question, a technical hitch was to blame.

However, a subsequent statement to CNET suggested that while blacking out Netflix and YouTube might have been accidental, Roku appears to have been taking anti-piracy action against another channel or channels at the time, with the measures inadvertently spilling over to innocent parties.

“We use that warning when we detect content that has violated copyright,” Roku said in a statement.

“Some channels in our Channel Store displayed that message and became inaccessible after Roku implemented a targeted anti-piracy measure on the platform.”

The precise nature of the action taken by Roku is unknown but it’s clear that copyright infringement is currently a hot topic for the platform.

Roku is currently fighting legal action in Mexico, where a court ordered its products off the shelves following complaints that its platform is used by pirates. That led to an FBI warning being shown against XTV and other channels last year, for what was believed to be the first time.

This March, Roku took action against the popular USTVNow channel following what was described as a “third party” copyright infringement complaint. Just a couple of weeks later, Roku followed up by removing the controversial cCloud channel.

With Roku currently fighting to have sales reinstated in Mexico against a backdrop of claims that up to 40% of its users are pirates, it’s unlikely that Roku is suddenly going to go soft on piracy, so more channel outages can be expected in the future.

In the meantime, the scary FBI warnings of last evening are beginning to fade away (for legitimate channels at least) after the company issued advice on how to fix the problem.

“The recent outage which affected some channels has been resolved. Go to Settings > System > System update > Check now for a software update. Some channels may require you to log in again. Thank you for your patience,” the company wrote in an update.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Seven myths about the GDPR

Post Syndicated from Bozho original https://blog.bozho.net/blog/3105

The GDPR, the new General Data Protection Regulation, is a hot topic, since it takes effect on May 25. Naturally, the public sphere is full of opinions and conclusions on the subject. Unfortunately, most of them are wrong. Based on my observations over the past few months, I decided to pull out seven myths about the Regulation.

Since the end of last year I have been actively consulting small and large companies on the Regulation, running trainings and seminars, and writing technical explanations. And no, I am not a lawyer, but the Regulation requires knowledge of both the legal and the technological aspects of data protection.

1. “The GDPR is clear to me, I've understood it”

The most dangerous thing is to think you understand something after merely having heard of it or having read a couple of articles on a news site (this goes for the GDPR and in general). I myself still don't claim to know every corner of the Regulation. But at conferences, round tables, trainings, meetings, forums and Facebook groups I have heard and read far too much nonsense about the GDPR, the kind that can be refuted with “Not true, see Article X.” Sadly, this category includes lawyers, IT specialists, and people in management positions.

From the myth that we know the GDPR flow all the remaining myths. Part of the blame for this lies with the Regulation itself. It is long, it reads with difficulty, it contains poor legislative practice (three different hypotheses in one sentence??), and neither the European Commission nor any other European institution has taken the trouble to explain it to the people it applies to, which is nearly everyone. The so-called “Article 29 Working Party (under the previous Directive)” has issued clarifications on some questions, but they are just as long and just as hard to read if one lacks context. For legislation of such broad scope, leaving it unexplained is a big mistake. Yes, it contains many nuances and many conditionalities (another of its shortcomings), but the general principles at least ought to be set out clearly, and from a practical point of view.

So no, let's not imagine we have understood the GDPR.

2. “Personal data are secret”

The Regulation's definition of personal data perhaps characterizes the whole Regulation: hard to read and convoluted:

“personal data” means any information relating to an identified or identifiable natural person (“data subject”); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

In fact, personal data is everything that relates to us, including entirely obvious things such as eye and hair color, height, and so on. And no, personal data are not secret. Our names are not secret; our height is not secret. Our national identification number (EGN) is not secret (yes, it isn't). There are special categories of personal data that can be secret (medical data, for example), but those are subject to a special regime.

A distinction the GDPR does not draw clearly, unlike one clarification from NIST, is that there are personal data on whose basis people can be identified, and data which cannot identify them but still relate to them. We cannot be identified by hair color, but hair color is personal data. We cannot be identified by profession (by full name plus profession, however, we possibly could be). And here is something very important, stated in the last sentences of Recital 26: data that are personal but cannot be attributed to a specific person, and on whose basis no person can be identified, fall outside the scope of the Regulation. And they are by no means secret: “we have 120 customers aged 32 who bought a Sony phone between April and July” is perfectly fine.

So, personal data are not secrets; some are even entirely public and visible. The aim of the GDPR is to regulate their processing by automated means (or semi-automated processing in a structured form, i.e., in notebooks). In other words: who has the right to store them, what they may be used for, and how they must be stored and used.

3. “The GDPR doesn't apply to me”

There are almost no exceptions in the Regulation. Companies with fewer than 250 employees are not obliged to keep certain records, and companies that do not perform large-scale processing and monitoring of data subjects have no obligation to appoint a data protection officer (DPO; this point is debatable in light of the proposed amendments to the Bulgarian Personal Data Protection Act, which broaden the DPO requirements far too much). Everything else applies to everyone who processes personal data. And all EU citizens have all the rights set out in the Regulation.

4. “They will fine us 20 million euro”

These fines are the only reason the GDPR is popular. Without them, nobody would care about yet another piece of European legislation. But because of the frightening fines, all sorts of consultants go around explaining that “well, the fines, you know, go up to 20 million.”

But however often those 20 million are repeated (or, as some over-salt the dish, “fines of over 20 million euro”), that does not make them realistic. First, there is a process that all regulators will follow, which includes several rounds of “recommendations” before a fine is imposed. The commission comes, finds non-compliance, makes recommendations, comes again, and checks whether measures have been taken. Only if you are thoroughly negligent and do nothing will the fines arrive. And those fines are proportionate to the risk and to the volume of data. It is not “good morning, 20 million.” In my view, the 20 million will be reserved for huge international companies like Google and Facebook, which process the data of millions of people. For the notebook of IOUs at the corner shop there will be no fine (the right to be forgotten is exercised by crossing the entry out, but only if the shopkeeper has no legitimate interest in keeping it, namely, that you pay the money back :)).

A parenthesis here about Bulgarian legislation: it provides for rather high minimum fines (10,000 leva). This is being contested within the public consultation, it is disproportionate to the minimums in other European countries, and I hope it will drop significantly.

5. “We have to stop processing personal data”

By no means. The GDPR does not prohibit the processing of personal data; it simply regulates how and when they are to be processed. You have the right to process all the data you need in order to do your job.

Some internet companies recently announced that they are shutting down because of the GDPR, since it supposedly would not let them process data. In the general case, that is nonsense. Either they were already operating at a loss and are now looking for an excuse, or they were such a free-for-all, selling your data left and right without your knowledge or consent, that the GDPR poses a real risk to them. But that is the whole idea: that such practices should not exist, because (as the Regulation puts it) they pose a risk to the rights and freedoms of data subjects (“data subject”, now that has a proud ring to it).

6. “We must ask for consent for everything”

The user's consent is only one of the grounds for processing data. There are quite a few others, and they are actually more common in real business. As I noted above, if you can demonstrate a legitimate interest in processing the data in order to do your job, you may do so without consent. Do you have the right to collect a customer's address and phone number if you deliver food? Of course; otherwise you cannot deliver it. No consent is needed in that case (consent would be needed if, beyond the delivery, you used the data for other purposes as well). Is consent needed to process personal data within an employment relationship? No, because the Labor Code requires the employer to keep an employment file. Does the bank need to ask your consent in order to process your personal data for a loan? No, because the data are needed for the performance of the loan contract (and no, you cannot tell the bank to “forget” your loan; the right to be forgotten applies only in certain cases).

My feeling, though, is that declarations and consent checkboxes will spread everywhere, even where they are entirely unnecessary… but see myth no. 1. And even where they do need to exist, they will be far too general instead of tied to specific purposes (I consent to you processing my data, but for what exactly?).

7. “GDPR compliance is difficult and expensive”

…and accordingly the Regulation is a huge administrative burden, a needless load on business, and so on. Well, no, it is not. GDPR compliance requires conscious processing of personal data. Yes, it also requires a few papers, policies and procedures with which to demonstrate that you know what personal data you process and that you process them conscientiously, and that you know that citizens have certain rights in connection with their data (and that in fact they, not you, are the owners of that data), but beyond that, compliance is not burdensome. Well, if you have no clue what data and business processes you have, it may take time to put them in order, but that is something that is good to happen anyway, with or without the GDPR.

If, for example, a hospital has until now kept patient data on a server that was not protected in any way, which everyone could access without leaving a trace, and there were another three or four servers that nobody knew held data (because “the IT person” left two years ago), then yes, some effort will be needed.

But almost everything in the GDPR is “good practice” anyway: things that are useful for the business itself, not only for citizens.

Of course, the “holier than the Pope” syndrome is starting to show. Besides the companies that have poured millions into lawyers, consultants, and vendors (which in the end has had lamentable results, and it has turned out that a few people could have done all that work in a month), there are also those who read the Regulation as “better not to give any data anywhere, just in case.” The over-caution of big companies such as Twitter and Facebook risks “hitting” companies that depend on their data. But again: see myth no. 1.


In conclusion, the GDPR is nothing scary, nothing bad, and not “an invention of the bureaucrats in Brussels.” Its clarity leaves much to be desired, and I suppose its application will too, but “in principle” it is okay.

And as always happens with legislation that covers many people and businesses, in the beginning there will be not 7 but 77 myths, which will get cleared up with time and practice. There will be growing pains, and there is a risk (especially in smaller and more corrupt countries) that someone will be made an example of, but looking at the big picture, I think that five years from now, thanks to this Regulation, we will be better off both in terms of data protection and in terms of the consequences of lacking such protection.

Spring 2018 AWS SOC Reports are Now Available with 11 Services Added in Scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2018-aws-soc-reports-are-now-available-with-11-services-added-in-scope/

Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized based on customer requests. Today, we're happy to report that 11 services are newly SOC compliant, a 21 percent increase in the last six months.

With the addition of the following 11 new services, you can now select from a total of 62 SOC-compliant services. To see the full list, go to our Services in Scope by Compliance Program page:

• Amazon Athena
• Amazon QuickSight
• Amazon WorkDocs
• AWS Batch
• AWS CodeBuild
• AWS Config
• AWS OpsWorks Stacks
• AWS Snowball
• AWS Snowball Edge
• AWS Snowmobile
• AWS X-Ray

Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.

Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.

Want more AWS Security news? Follow us on Twitter.

Do You Take Your VPN Security Seriously?

Post Syndicated from Ernesto original https://torrentfreak.com/do-you-take-your-vpn-security-seriously-180506/

In recent years there has been a massive boom in VPN usage, spurred on by security breaches and privacy leaks.

While prospective VPN users pay a lot of attention to the various policies VPN providers have when it comes to logging or leak protection, the user’s own responsibility is often entirely ignored.

When there’s a leak of sorts, such as the common WebRTC, IPv6, DNS or torrent client leaks, people are quick to point their finger at the VPN provider, even though they could have easily prevented the issues themselves.

It’s clear that a good VPN provider should do everything in its power to prevent leaks. At the very minimum, they should inform users about possible risks. Better yet, they should regularly test for vulnerabilities.

Still, VPN users themselves can also take a more proactive approach. The problem is that many people don’t take their own VPN security very seriously.

After signing up at a VPN service, many assume that they are perfectly protected. Aside from checking whether their IP-address has changed, they expend very little effort to make sure that this is the case.

What new VPN users should do instead is a series of VPN leak tests. Not just one, but at least a couple. Also, this should be repeated on all devices and in all browsers that are used, just to make sure.

It would also be smart to redo these tests on a regular basis, as devices and applications change. If there are any problems, fix them, with or without help from the VPN provider.
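
For illustration, here is a minimal sketch of such a manual check, assuming a Unix-like shell with curl and dig installed, and using ifconfig.co and whoami.akamai.net as example diagnostic endpoints (any equivalent services will do):

# Run each command twice: once with the VPN disconnected, once connected.
# If any output is identical across both runs, something may be leaking.
curl -4 -s https://ifconfig.co   # public IPv4 address as seen by websites
curl -6 -s https://ifconfig.co   # public IPv6 address, a common leak vector
dig +short whoami.akamai.net     # shows the resolver that actually performs your DNS lookups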

Aside from testing how leak-safe the setup is, VPN users might want to read the documentation and setup guides their VPN service provides. What is the most secure protocol? Does the software have built-in leak protection? What about a kill-switch?

If you use a custom VPN application offered by the provider, it may come with built-in leak protection, but that’s not always the case.

Also, some providers offer these features but don’t have them enabled by default, as it may lead to various connectivity issues. Others leave it up to the user to secure their browsers and apps. These are all things that should be taken into account.

If there are any leaks, let your VPN provider know. They should fix them if they can, after all.

Similarly, torrent users should not forget to test whether their torrent client is set up correctly, and test for leaks there as well. This is easily overlooked by many.

While checking for leaks is crucial, things get even more complicated when it comes to anonymity.

Some people are extremely focused on choosing a “zero log” VPN to maintain their privacy, but then use the same VPN to log in to Google, Twitter, Facebook and other services. This links the VPN address to their personal account, creating extensive logs there. And that’s without mentioning the other privacy-sensitive and tracking data these services collect and store.

While most are not too worried about that, it shows that full privacy or anonymity is hard to accomplish, even if a VPN is secure.

The bottom line is, however, that both VPNs and their users should be vigilant. VPN providers should take responsibility to prevent or warn against possible leaks, but people should remember that a “zero-log” VPN really is worthless if the user hasn’t set it up correctly, or uses it the wrong way.

Do I leak offers a comprehensive and independent VPN leak test, but Google should be able to find dozens more.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Bad Software Is Our Fault

Post Syndicated from Bozho original https://techblog.bozho.net/bad-software-is-our-fault/

Bad software is everywhere. One can even claim that all software is bad. Cool companies, tech giants, established companies, all produce bad software. And no, yours is not an exception.

Who’s to blame for bad software? It’s all complicated and many factors are intertwined – there’s business requirements, there’s organizational context, there’s lack of sufficient skilled developers, there’s the inherent complexity of software development, there’s leaky abstractions, reliance on 3rd party software, consequences of wrong business and purchase decisions, time limitations, flawed business analysis, etc. So yes, despite the catchy title, I’m aware it’s actually complicated.

But in every “it’s complicated” scenario, there are always one or two factors that are decisive. All of them contribute somehow, but the major drivers are usually a handful of things. And in the case of bad software, I think it’s the fault of technical people. Developers, architects, ops.

We don’t seem to care about best practices. And I’ll do some nasty generalizations here, but bear with me. We can spend hours arguing about tabs vs spaces, curly bracket on new line, git merge vs rebase, which IDE is better, which framework is better and other largely irrelevant stuff. But we tend to ignore the important aspects that span beyond the code itself. The context in which the code lives, the non-functional requirements – robustness, security, resilience, etc.

We don’t seem to get security. Even trivial stuff such as user authentication is almost always implemented wrong. These days Twitter and GitHub realized they had been logging plain-text passwords, for example, but that’s just the tip of the iceberg. Too often we ignore the security implications.

“But the business didn’t request the security features”, one may say. The business never requested 2-factor authentication, encryption at rest, PKI, secure (or any) audit trail, log masking, crypto shredding, etc., etc. Because the business doesn’t know these things – we do and we have to put them on the backlog and fight for them to be implemented. Each organization has its specifics and tech people can influence the backlog in different ways, but almost everywhere we can put things there and prioritize them.

The other aspect is testing. We should all be well aware by now that automated testing is mandatory. We have all the tools in the world for unit, functional, integration, performance and whatnot testing, and yet many software projects lack the necessary test coverage to be able to change stuff without accidentally breaking things. “But testing takes time, we don’t have it.” We are perfectly aware that testing saves time, as we’ve all had those “not again!” recurring bugs. And yet we think of all sorts of excuses: “let the QAs test it”, “we have to ship that now, we’ll test it later”, “this is too trivial to be tested”, etc.

And you may say it’s not our job. We don’t define what has to be done, we just do it. We don’t define the budget, the scope, the features. We just write whatever has been decided. And that’s plain wrong. It’s not our job to make money out of our code, and it’s not our job to define what customers need, but apart from that, everything is our job. The way the software is structured, the security aspects and security features, the stability of the code base, the way the software behaves in different environments. The non-functional requirements are our job, and putting them on the backlog is our job.

You’ve probably heard that every software becomes “legacy” after 6 months. And that’s because of us, our sloppiness, our inability to mitigate external factors and constraints. Too often we create a mess through “just doing our job”.

And of course that’s a generalization. I happen to know a lot of great professionals who don’t make these mistakes, who strive for excellence and implement things the right way. But our industry as a whole doesn’t. Our industry as a whole produces bad software. And it’s our fault, as developers – as the only people who know why a certain piece of software is bad.

In a talk of his, Bob Martin warns us of the risks of our sloppiness. We have been building websites so far, but we are more and more building stuff that interacts with the real world, directly and indirectly. Ultimately, lives may depend on our software (like the recent unfortunate death caused by a self-driving car). And I’ll agree with Uncle Bob that it’s high time we self-regulate as an industry, before some technically incompetent politician decides to do that.

How, I don’t know. We’ll have to think more about it. But I’m pretty sure it’s our fault that software is bad, and no amount of blaming the management, the budget, the timing, the tools or the process can eliminate our responsibility.

Why do I insist on bashing my fellow software engineers? Because if we start looking at software development with more responsibility; with the fact that if it fails, it’s our fault, then we’re more likely to get out of our current bug-ridden, security-flawed, fragile software hole and really become the experts of the future.

The post Bad Software Is Our Fault appeared first on Bozho's tech blog.

Enhanced Domain Protections for Amazon CloudFront Requests

Post Syndicated from Colm MacCarthaigh original https://aws.amazon.com/blogs/security/enhanced-domain-protections-for-amazon-cloudfront-requests/

Over the coming weeks, we’ll be adding enhanced domain protections to Amazon CloudFront. The short version is this: the new measures are designed to ensure that requests handled by CloudFront are handled on behalf of legitimate domain owners.

Using CloudFront to receive traffic for a domain you aren’t authorized to use is already a violation of our AWS Terms of Service. When we become aware of this type of activity, we deal with it behind the scenes by disabling abusive accounts. Now we’re integrating checks directly into the CloudFront API and Content Distribution service, as well.

Enhanced Protection against Dangling DNS entries
To use CloudFront with your domain, you must configure your domain to point at CloudFront. You may use a traditional CNAME, or an Amazon Route 53 “ALIAS” record.

A problem can arise if you delete your CloudFront distribution, but leave your DNS still pointing at CloudFront, popularly known as a “dangling” DNS entry. Thankfully, this is very rare, as the domain will no longer work, but we occasionally see customers who leave their old domains dormant. This can also happen if you leave this kind of “dangling” DNS entry pointing at other infrastructure you no longer control. For example, if you leave a domain pointing at an IP address that you don’t control, then there is a risk that someone may come along and “claim” traffic destined for your domain.

In an even more rare set of circumstances, an abuser can exploit a subdomain of a domain that you are actively using. For example, if a customer left “images.example.com” dangling and pointing to a deleted CloudFront distribution which is no longer in use, but they still actively use the parent domain “example.com”, then an abuser could come along and register “images.example.com” as an alternative name on their own distribution and claim traffic that they aren’t entitled to. This also means that cookies may be set and intercepted for HTTP traffic potentially including the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution.

Of course, the best fix for this kind of risk is not to leave dangling DNS entries in the first place. In February 2018, we added a new warning to our systems: if you remove an alternate domain name from a distribution, you are reminded to delete any DNS entries that may still be pointing at CloudFront.
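
As a rough self-audit, you can check whether a name still resolves to CloudFront; a sketch assuming the dig utility, with images.example.com standing in for each hostname you have ever pointed at CloudFront:

# A CNAME that still answers with *.cloudfront.net after its distribution
# was deleted is a dangling entry and should be removed from your zone.
dig +short images.example.com CNAME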

We also have long-standing checks in the CloudFront API that ensure this kind of domain claiming can’t occur when you are using wildcard domains. If you attempt to add *.example.com to your CloudFront distribution, but another account has already registered www.example.com, then the attempt will fail.

With the new enhanced domain protection, CloudFront will now also check your DNS whenever you remove an alternate domain. If we determine that the domain is still pointing at your CloudFront distribution, the API call will fail and no other accounts will be able to claim this traffic in the future.

Enhanced Protection against Domain Fronting
CloudFront will also soon be implementing enhanced protections against so-called “Domain Fronting”. Domain Fronting is when a non-standard client makes a TLS/SSL connection to a certain name, but then makes an HTTPS request for an unrelated name. For example, the TLS connection may connect to “www.example.com” but then issue a request for “www.example.org”.
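
For a concrete picture of that mismatch, here is a hedged sketch using curl and placeholder domains; the TLS connection (and its SNI value) goes to one name while the HTTP Host header names another:

# The TLS handshake and SNI say www.example.com, but the HTTP request
# inside the tunnel asks for www.example.org.
curl -s -H "Host: www.example.org" https://www.example.com/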

In certain circumstances this is normal and expected. For example, browsers can re-use persistent connections for any domain that is listed in the same SSL Certificate, and these are considered related domains. But in other cases, tools including malware can use this technique between completely unrelated domains to evade restrictions and blocks that can be imposed at the TLS/SSL layer.

To be clear, this technique can’t be used to impersonate domains. The clients are non-standard and are working around the usual TLS/SSL checks that ordinary clients impose. But clearly, no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain. Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties.

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).

You associate the DNS-VPC with the unique hosted zone in each of the participating accounts. This allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in private hosted zones in participating accounts.

The following diagram shows how the various services work together:

Figure 1: Diagram showing the relationship between all the various services

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:

Figure 2: How domain resolution across accounts works

  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server. The request is forwarded to the DNS server defined in the DHCP option set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
  • 1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: The on-premises DNS server, using a conditional forwarder, forwards the lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
  • 1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In this post, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew's post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS and to forward example.com queries to the on-premises DNS server.
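
If you prefer to script the forwarder configuration, AWS Directory Service exposes it through the CLI as well; a sketch with a hypothetical directory ID and on-premises DNS addresses:

# Forward example.com queries from AWS Managed Microsoft AD to the
# on-premises DNS servers.
aws ds create-conditional-forwarder \
    --directory-id d-1234567890 \
    --remote-domain-name example.com \
    --dns-ip-addrs 192.0.2.10 192.0.2.11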

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53. Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate VPC(s) in each participating account with the local private hosted zone.
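
A hedged CLI sketch of steps 3 and 4 above, with hypothetical IDs throughout (passing --vpc at creation time makes the zone private and performs the first association):

# Step 3: create this account's unique private hosted zone.
aws route53 create-hosted-zone \
    --name pa1.awscloud.com \
    --vpc VPCRegion=eu-central-1,VPCId=vpc-0123456789abcdef0 \
    --caller-reference pa1-zone-2018-05-26

# Step 4: associate any additional local VPCs with the same zone.
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z1D633PJN98FT9 \
    --vpc VPCRegion=eu-central-1,VPCId=vpc-0fedcba9876543210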

The next step is to change the default DNS servers on each VPC using DHCP option set:

  1. Follow these steps to create a new DHCP options set. In the DNS Servers field, make sure to put the private IP addresses of the two AWS Managed Microsoft AD servers that were created in DNS-VPC:

    Figure 3: The “Create DHCP options set” dialog box

  2. Follow these steps to assign the DHCP options set to your VPC(s) in each participating account.
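
The same two steps can be scripted; a sketch with hypothetical values, where the two IPs are the private addresses of the AWS Managed Microsoft AD domain controllers in DNS-VPC:

# Create a DHCP options set that hands out the central DNS servers.
aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.0.11"

# Associate the returned options set with a participating VPC.
aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0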

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.

Step 4: Setting up on-premises DNS servers

This step is necessary if you would like to resolve AWS private domains from on-premises servers. It comes down to configuring on-premises conditional forwarders that forward DNS queries for all domains in the awscloud.com zone to AWS Managed Microsoft AD in DNS-VPC.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that can also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Easier way to control access to AWS regions using IAM policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/easier-way-to-control-access-to-aws-regions-using-iam-policies/

We made it easier for you to comply with regulatory standards by controlling access to AWS Regions using IAM policies. For example, if your company requires users to create resources in a specific AWS region, you can now add a new condition to the IAM policies you attach to your IAM principal (user or role) to enforce this for all AWS services. In this post, I review conditions in policies, introduce the new condition, and review a policy example to demonstrate how you can control access across multiple AWS services to a specific region.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element that lets you specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.
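
For illustration, a service-specific condition inside a policy statement looks like the following snippet (the key and value here are examples only):

"Condition": {
    "StringEquals": {
        "ec2:InstanceType": "t2.micro"
    }
}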

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

aws:RequestedRegion condition key

The new global condition key, aws:RequestedRegion, supports all actions across all AWS services. You can use any string operator and specify any AWS region for its value.

Condition key: aws:RequestedRegion
Description: Allows you to specify the region to which the IAM principal (user or role) can make API calls
Operator(s): All string operators (for example, StringEquals)
Value: Any AWS region (for example, us-east-1)

I’ll now demonstrate the use of the new global condition key.

Example: Policy with region-level control

Let’s say a group of software developers in my organization is working on a project using Amazon EC2 and Amazon RDS. The project requires a web server running on an EC2 instance using Amazon Linux and a MySQL database instance in RDS. The developers also want to test AWS Lambda, an event-driven compute platform, to retrieve data from the MySQL DB instance in RDS for future use.

My organization requires all the AWS resources to remain in the Frankfurt, eu-central-1, region. To make sure this project follows these guidelines, I create a single IAM policy for all the AWS services that this group is going to use and apply the new global condition key aws:RequestedRegion for all the services. This way I can ensure that any new EC2 instances launched or any database instances created using RDS are in Frankfurt. This policy also ensures that any Lambda functions this group creates for testing are also in the Frankfurt region.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "rds:Describe*",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:GetRole",
                "iam:ListInstanceProfiles",
                "iam:AttachRolePolicy",
                "lambda:GetAccountSettings"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "lambda:CreateFunction",
                "lambda:InvokeFunction"
            ],
            "Resource": "*",
      "Condition": {"StringEquals": {"aws:RequestedRegion": "eu-central-1"}}

        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::account-id:role/*"
        }
    ]
}

The first statement in the above example contains all the read-only actions that let my developers use the console for EC2, RDS, and Lambda. The IAM-related permissions are required to launch EC2 instances with a role, to enable enhanced monitoring in RDS, and to let AWS Lambda assume the IAM execution role that executes the function. I’ve combined all the read-only actions into a single statement for simplicity. The second statement is where I grant my developers write access to the three services and restrict that access to the Frankfurt region using the aws:RequestedRegion condition key. You can also list multiple AWS regions with the condition key if your developers are allowed to create resources in more than one region, as shown below. The third statement grants permissions for the IAM action iam:PassRole, which AWS Lambda requires. For more information on allowing users to create a Lambda function, see Using Identity-Based Policies for AWS Lambda.
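As a variation on the second statement above (illustrative only), allowing resource creation in more than one region just requires turning the condition value into a list, for example Frankfurt plus Ireland:

    "Condition": {"StringEquals": {"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]}}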

Summary

You can now use the aws:RequestedRegion global condition key in your IAM policies to specify the region to which the IAM principal (user or role) can make API calls. This capability makes it easier to restrict the AWS regions your IAM principals can use, helping you comply with regulatory standards and improve account security. For more information about this global condition key and policy examples using aws:RequestedRegion, see the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Continued: the answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-2/

Last week, we shared the first half of our Q&A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. Today we follow up with all your other questions, including your expectations for a Raspberry Pi 4, Eben’s dream add-ons, and whether we really could go smaller than the Zero.

Live Q&A with Eben Upton, creator of the Raspberry Pi


With internet security becoming more necessary, will there be automated versions of VPN on an SD card?

There are already third-party tools which turn your Raspberry Pi into a VPN endpoint. Would we do it ourselves? Like the power button, it’s one of those cases where there are a million things we could do and so it’s more efficient to let the community get on with it.

Just to give a counterexample, while we don’t generally invest in optimising for particular use cases, we did invest a bunch of money into optimising Kodi to run well on Raspberry Pi, because we found that very large numbers of people were using it. So, if we find that we get half a million people a year using a Raspberry Pi as a VPN endpoint, then we’ll probably invest money into optimising it and feature it on the website as we’ve done with Kodi. But I don’t think we’re there today.

Have you ever seen any Pis running and doing important jobs in the wild, and if so, how does it feel?

It’s amazing how often you see them driving displays, for example in radio and TV studios. Of course, it feels great. There’s something wonderful about the geographic spread as well. The Raspberry Pi desktop is quite distinctive, both in its previous incarnation with the grey background and logo, and the current one where we have Greg Annandale’s road picture.

The PIXEL desktop on Raspberry Pi

And so it’s funny when you see it in places. Somebody sent me a video of them teaching in a classroom in rural Pakistan and in the background was Greg’s picture.

Raspberry Pi 4!?!

There will be a Raspberry Pi 4, obviously. We get asked about it a lot. I’m sticking to the guidance that I gave people that they shouldn’t expect to see a Raspberry Pi 4 this year. To some extent, the opportunity to do the 3B+ was a surprise: we were surprised that we’ve been able to get 200MHz more clock speed, triple the wireless and wired throughput, and better thermals, and still stick to the $35 price point.

We’re up against the wall from a silicon perspective; we’re at the end of what you can do with the 40nm process. It’s not that you couldn’t clock the processor faster, or put a larger processor which can execute more instructions per clock in there, it’s simply about the energy consumption and the fact that you can’t dissipate the heat. So we’ve got to go to a smaller process node and that’s an order of magnitude more challenging from an engineering perspective. There’s more effort, more risk, more cost, and all of those things are challenging.

With 3B+ out of the way, we’re going to start looking at this now. For the first six months or so we’re going to be figuring out exactly what people want from a Raspberry Pi 4. We’re listening to people’s comments about what they’d like to see in a new Raspberry Pi, and I’m hoping by early autumn we should have an idea of what we want to put in it and a strategy for how we might achieve that.

Could you go smaller than the Zero?

The challenge with Zero is that we’re periphery-limited. If you run your hand around the unit, there is no edge of that board that doesn’t have something there. So the question is: “If you want to go smaller than Zero, what feature are you willing to throw out?”

It’s a single-sided board, so you could certainly halve the PCB area if you fold the circuitry and use both sides, though you’d have to lose something. You could give up some GPIO and go back to 26 pins like the first Raspberry Pi. You could give up the camera connector, you could go to micro HDMI from mini HDMI. You could remove the SD card and just do USB boot. I’m inventing a product live on air! But really, you could get down to two thirds and lose a bunch of GPIO – it’s hard to imagine you could get to half the size.

What’s the one feature that you wish you could outfit on the Raspberry Pi that isn’t cost effective at this time? Your dream feature.

Well, more memory. There are obviously technical reasons why we don’t have more memory on there, but there are also market reasons. People ask “why doesn’t the Raspberry Pi have more memory?”, and my response is typically “go and Google ‘DRAM price’”. We’re used to the price of memory going down. And currently, we’re going through a phase where this has turned around and memory is getting more expensive again.

Machine learning would be interesting. There are machine learning accelerators which would be interesting to put on a piece of hardware. But again, they are not going to be used by everyone, so according to our method of pricing what we might add to a board, machine learning gets treated like a $50 chip. But that would be lovely to do.

Which citizen science projects using the Pi have most caught your attention?

I like the wildlife camera projects. We live out in the countryside in a little village, and we’re conscious of being surrounded by nature but we don’t see a lot of it on a day-to-day basis. So I like the nature cam projects, though, to my everlasting shame, I haven’t set one up yet. There’s a range of them, from very professional products to people taking a Raspberry Pi and a camera and putting them in a plastic box. So those are good fun.

The Raspberry Shake seismometer

And there’s Meteor Pi from the Cambridge Science Centre, that’s a lot of fun. And the seismometer Raspberry Shake – that sort of thing is really nice. We missed the recent South Wales earthquake; perhaps we should set one up at our Californian office.

How does it feel to go to bed every day knowing you’ve changed the world for the better in such a massive way?

What feels really good is that when we started this in 2006 nobody else was talking about it, but now we’re part of a very broad movement.

We were in a really bad way: we’d seen a collapse in the number of applicants applying to study Computer Science at Cambridge and elsewhere. In our view, this reflected a move away from seeing technology as ‘a thing you do’ to seeing it as a ‘thing that you have done to you’. It is problematic from the point of view of the economy, industry, and academia, but most importantly it damages the life prospects of individual children, particularly those from disadvantaged backgrounds. The great thing about STEM subjects is that you can’t fake being good at them. There are a lot of industries where your dad can get you a job based on who he knows and then you can kind of muddle along. But if your dad gets you a job building bridges and you suck at it, after the first or second bridge falls down, then you probably aren’t going to be building bridges anymore. So access to STEM education can be a great driver of social mobility.

By the time we were launching the Raspberry Pi in 2012, there was this wonderful movement going on. Code Club, for example, and CoderDojo came along. Lots of different ways of trying to solve the same problem. What feels really, really good is that we’ve been able to do this as part of an enormous community. And some parts of that community became part of the Raspberry Pi Foundation – we merged with Code Club, we merged with CoderDojo, and we continue to work alongside a lot of these other organisations. So in the two seconds it takes me to fall asleep after my face hits the pillow, that’s what I think about.

We’re currently advertising a Programme Manager role in New Delhi, India. Did you ever think that Raspberry Pi would be advertising a role like this when you were bringing together the Foundation?

No, I didn’t.

But if you told me we were going to be hiring somewhere, India probably would have been top of my list because there’s a massive IT industry in India. When we think about our interaction with emerging markets, India, in a lot of ways, is the poster child for how we would like it to work. There have already been some wonderful deployments of Raspberry Pi, for example in Kerala, without our direct involvement. And we think we’ve got something that’s useful for the Indian market. We have a product, we have clubs, we have teacher training. And we have a body of experience in how to teach people, so we have a physical commercial product as well as a charitable offering that we think are a good fit.

It’s going to be massive.

What is your favourite BBC type-in listing?

There was a game called Codename: Druid. There is a famous game called Codename: Droid which was the sequel to Stryker’s Run, which was an awesome, awesome game. And there was a type-in game called Codename: Druid, which was at the bottom end of what you would consider a commercial game.


And I remember typing that in. And what was really cool about it was that the next month, the guy who wrote it did another article that talks about the memory map and which operating system functions used which bits of memory. So if you weren’t going to do disc access, which bits of memory could you trample on and know the operating system would survive.


See the full listing for Babbage versus Bugs in the Raspberry Pi 2018 Annual

I still like type-in listings. The Raspberry Pi 2018 Annual has a type-in listing that I wrote for a Babbage versus Bugs game. I will say that’s not the last type-in listing you will see from me in the next twelve months. And if you download the PDF, you could probably copy and paste it into your favourite text editor to save yourself some time.

The post Continued: the answers to your questions for Eben Upton appeared first on Raspberry Pi.