Tag Archives: IP

Substantial amendments to the Copyright and Neighbouring Rights Act have been promulgated

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/23/zid-zapsp-2/

On 29 March 2018, amendments to the Copyright and Neighbouring Rights Act were promulgated in an extraordinary issue of the State Gazette. The adopted amendments transpose Directive 2014/26/EU, which should have been implemented in Bulgarian law by April 2016. In January 2018 the European Commission referred the matter to the Court of Justice of the EU and asked that Bulgaria be fined around EUR 19,000 per day for failing to comply with EU law.

The Directive aims to coordinate national rules concerning access to the activity of managing copyright and related rights by collective management organisations, the modalities of their governance, and the supervisory framework (recital 8); to lay down requirements applicable to collective management organisations in order to ensure a high standard of governance, financial management, transparency and reporting (recital 9); and to ensure the necessary minimum quality of the cross-border services provided by collective management organisations, in particular as regards the transparency of the repertoire represented and the accuracy of the financial flows related to the use of the rights, and to facilitate the provision of multi-territorial, multi-repertoire services (recital 40).

The amendments are substantial and extensive; detailed analyses will probably appear soon.

And an excerpt from the transitional and final provisions of the amending act:
§ 27. Within six months of the entry into force of this Act, the Minister of Culture shall submit to the European Commission a report on the situation and development of multi-territorial licensing of online rights on the territory of the Republic of Bulgaria. The report shall contain information on the availability of multi-territorial licences, on compliance with Chapter Eleven "и" by the collective management organisations, as well as an assessment of the development of multi-territorial licensing of the online use of musical works by users, rightholders and other interested parties.
§ 28. The Minister of Culture shall provide the European Commission with a list of the registered collective management organisations and shall notify it of changes to the list within three months of their occurrence.
§ 29. The following additions are made to the Electronic Communications Act:
1. In Art. 73, para. 3, a new item 15 is created:
"15. obligations to provide true and accurate information about the number of users and subscribers of the undertakings providing electronic communications networks and/or services."
2. In Art. 231, para. 2, the following is added at the end: "the contracts with content providers, as well as an up-to-date database of subscribers, in compliance with the requirements for the protection of personal data".
§ 30. In the Protection and Development of Culture Act, in Art. 31, para. 1, item 1, after the words "para. 2" the words "and Art. 98в1, para. 6" are added.
§ 31. The Act enters into force on the day of its promulgation in the State Gazette, with the exception of § 18 and § 30, which enter into force nine months after its promulgation."

Amendment to the Rules of Procedure of the National Assembly concerning the preparation of the programme on EU affairs

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/23/parl_eu/

The State Gazette of 20 April 2018 promulgates a decision amending the Rules of Organisation and Procedure of the National Assembly (SG No. 35 of 2017):
The National Assembly, on the grounds of Art. 86, para. 1 of the Constitution of the Republic of Bulgaria and § 1, para. 1 of the additional provisions of the Rules of Organisation and Procedure of the National Assembly,
RESOLVED:
Sole paragraph. Paragraph 10 of the final provisions is amended as follows:
"§ 10. (1) Article 118 shall not apply from 1 July 2017 to 31 December 2018.*
(2) The standing committees, with the exception of the Committee on European Affairs and Oversight of the European Funds, shall prepare, within their competence, their proposals for the Annual Working Programme of the National Assembly on European Union Affairs for 2018, taking into account the published Work Programme of the European Commission for 2018.
The Committee on European Affairs and Oversight of the European Funds, taking into account the proposals of the other standing committees, shall draw up a draft Annual Working Programme of the National Assembly on European Union Affairs for 2018.
The Annual Working Programme shall contain a list of the draft acts of the institutions of the European Union over which the National Assembly exercises monitoring and scrutiny.
The draft Annual Working Programme shall be debated and adopted by the National Assembly. The Chair of the National Assembly shall send the adopted Annual Working Programme to the Council of Ministers.
In the event of newly arisen circumstances, the Committee on European Affairs and Oversight of the European Funds may, on its own initiative or at the proposal of other standing committees, propose additions to the Annual Working Programme of the National Assembly on European Union Affairs for 2018, which shall be adopted under the procedure of the second, third and fourth sentences. Articles 119, 120 and 121 shall apply accordingly.
(3) The Council of Ministers shall submit to the National Assembly, for information, other documents related to the conduct of the Bulgarian Presidency of the Council of the European Union in 2018."
*
Art. 118 of the Rules of Organisation and Procedure of the National Assembly reads:
Art. 118. (1) The Council of Ministers shall submit to the National Assembly the Annual Programme for the Participation of the Republic of Bulgaria in the Decision-Making Process of the European Union, as adopted by it, within seven days of its adoption.
(2) The Chair of the National Assembly shall distribute the annual programme under para. 1 to the standing committees. Within three weeks of receiving it, the standing committees, with the exception of the Committee on European Affairs and Oversight of the European Funds, shall prepare their proposals for the Annual Working Programme of the National Assembly on European Union Affairs, also taking into account the Work Programme of the European Commission for the respective year.
(3) Within 14 days of the expiry of the time limit under para. 2, the Committee on European Affairs and Oversight of the European Funds, taking into account the proposals of the other standing committees, shall draw up a draft Annual Working Programme of the National Assembly on European Union Affairs. The Annual Working Programme shall contain a list of the draft acts of the institutions of the European Union over which the National Assembly exercises monitoring and scrutiny. The draft annual working programme shall be debated and adopted by the National Assembly.
(4) The Chair of the National Assembly shall send the adopted annual working programme under para. 3 to the Council of Ministers.
(5) In the event of newly arisen circumstances, the Committee on European Affairs and Oversight of the European Funds may, on its own initiative or at the proposal of other standing committees, propose additions to the Annual Working Programme of the National Assembly on European Union Affairs, which shall be adopted under the procedure of para. 3.

OMG The Stupid It Burns

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html

This article, pointed out by @TheGrugq, is stupid enough that it’s worth rebutting.

The article starts with the question “Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?“. It then proceeds to ignore the lessons of those things.
Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.
But this article doesn’t cover the technical lessons. Instead, it thinks the lesson should be the moral one: that we should take these things more seriously. But that’s stupid. It’s the sort of lesson taught by people who know nothing about the topic. When you have nothing of value to contribute to a topic, you can always take the moral high road and criticize everyone for being morally weak for not taking it more seriously. Obviously, since doctors haven’t cured cancer yet, it’s because they don’t take the problem seriously.
The article continues to ignore the lessons of these cyber attacks and instead regales us with a list of military lessons from WWI and WWII. This makes the same mistake that many in the military make: trying to understand cyber through analogies with the real world. It’s not that such lessons could have no value; it’s that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have any bearing on cybersecurity.
Then, in case we don’t get the point, the article bullies us with hyperbole, cliches, buzzwords, bombastic language, famous quotes, and citations. It’s hard to see how most of them actually apply to the text. Rather, they seem to be included simply because the author really, really likes them.
The article invests much effort in discussing the buzzword “OODA loop”. Most attacks in cyberspace don’t have one. Instead, attackers flail around, trying lots of random things, overcoming defense with brute-force rather than an understanding of what’s going on. That’s obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody’s ability to predict.
You might claim that this is just the first stage, that they’ll loop around, observe Wannacry’s effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It’s essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It’s still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that’s vulnerable, it’ll be taken offline within a few hours, before any other evildoer can take advantage of it.
See what I’m doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??
The article has a humorous paragraph on “defense in depth”, misunderstanding the term. To be fair, it’s the cybersecurity industry’s fault: it adopted and then redefined the term. That’s why there are two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.
As used in the cybersecurity industry, “defense in depth” means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside a network. The idea of “defense in depth” is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer support computers doesn’t spread to sales and marketing computers.
The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This conflates different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving “defense in depth”, such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely, the WiFi, Bluetooth, and cellular baseband chips) can’t directly compromise the host system. In other words, once exploited with “Broadpwn”, a hacker would need to extend the exploit chain with another vulnerability in the host’s Broadcom WiFi driver rather than immediately exploiting a DMA attack across PCIe. This suggests that if PCIe is used to interface with peripherals in the phone, an IOMMU should be used, for “defense in depth”.
Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well-received.
But that’s not this story. Instead, this story is by an outsider telling us we don’t know what we are doing, that they do, and who then proceeds to prove they don’t know what they are doing. Their argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor, but which is in fact devoid of anything smart.
My fear, here, is that I’m going to be in a meeting where somebody has read this pretentious garbage, explaining to me why “defense in depth” is wrong and how we need to OODA faster. I’d rather nip this in the bud, pointing out that if you found anything interesting in that article, you are wrong.

Netflix, Amazon and Hollywood Sue “SET TV” Over IPTV Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-amazon-and-hollywood-sue-set-tv-over-iptv-piracy-180422/

In recent years, piracy streaming tools and services have become a prime target for copyright enforcers.

This is particularly true for the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership forged between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

After taking action against Kodi-powered devices Tickbox and Dragonbox, key ACE members have now filed a similar lawsuit against the Florida-based company Set Broadcast, LLC, which sells the popular IPTV service SET TV.

The complaint, filed at a California federal court on Friday, further lists company owner Jason Labbosiere and employee Nelson Johnson among the defendants.

According to the movie companies, the Set TV software is little more than a pirate tool, allowing buyers to stream copyright infringing content.

“Defendants market and sell subscriptions to ‘Setvnow,’ a software application that Defendants urge their customers to use as a tool for the mass infringement of Plaintiffs’ copyrighted motion pictures and television shows,” the complaint reads.

In addition to the software, the company also offers a preloaded box. Both allow users to connect to live streams of TV channels and ‘on demand’ content. The latter includes movies that are still in theaters, which SET TV allegedly streams through third-party sources.

“For its on-demand options, Setvnow relies on third-party sources that illicitly reproduce copyrighted works and then provide streams of popular content such as movies still exclusively in theaters and television shows.”

From the complaint

The intended use of SET TV is clear, according to the movie companies. They frame it as a pirate service and believe that this is the main draw for consumers.

“Defendants promote the use of Setvnow for overwhelmingly, if not exclusively, infringing purposes, and that is how their customers use Setvnow,” the complaint reads.

Interestingly, the complaint also states that SET TV pays for sponsored reviews to reach a broader audience. The videos, posted by popular YouTubers such as Solo Man, who is quoted in the complaint, advertise the IPTV service.

“[The] sponsored reviewer promotes Setvnow as a quick and easy way to access on demand movies: ‘You have new releases right there and you simply click on the movie … you click it and click on play again and here you have the movie just like that in 1 2 3 in beautiful HD quality’.”

The lawsuit aims to bring an end to this. The movie companies ask the California District Court for an injunction to shut down the infringing service and impound all pre-loaded devices. In addition, they’re requesting statutory damages, which could run to several million dollars.

At the time of writing, the SET TV website is still online and selling subscriptions. The company itself has yet to comment on the allegations.

A copy of the complaint is available here (pdf), courtesy of GeekWire.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

US State Department report on the state of human rights in 2017

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/22/hr2017/

The US State Department has published its latest report on the state of human rights in 2017.

The report has a section on freedom of expression and the media.

The legislation provides for the relevant rights, the report finds.

But corporate and political pressure, combined with the growing and non-transparent concentration of media ownership and distribution networks, as well as government regulation of resources for the media, seriously affect media freedom and media pluralism.

The 2017 Media Sustainability Index of the International Research & Exchanges Board (IREX) points to growing political pressure and the use of the media by oligarchs to “exert influence, destroy the reputation of political and business opponents, and manipulate public opinion” as major threats to public trust in the media. IREX notes that the government actively obstructs free media development. Reports of intimidation and violence against journalists continue.

  • Freedom of expression:

Criticism of the government does not usually lead to reprisals, but several such cases have been reported (e.g. Vasil Kotsev, summoned for questioning over a post about the prime minister).

The report recalls the fine imposed on Economedia by the Financial Supervision Commission under Stoyan Mavrodiev: let us recall his link to a figure from organised crime.

With regard to hate speech, the report notes that the presence of nationalist parties in the government emboldens some to resort to hate speech as the norm rather than the exception.

  • Press and media freedom:

According to Reporters Without Borders (2017), the press is “dominated by corruption and collusion between media, politicians, and oligarchs”. The report notes a lack of transparency in the government’s allocation of funding, which has a bribing effect: outlets are lenient in their coverage of politicians or refrain from covering problematic stories.

Domestic and international organisations criticise both print and electronic media for the lack of transparency of ownership and finances, as well as for susceptibility to economic and political influence.

On 21 March, the publishers of Pras Pres filed a complaint with the Commission for the Protection of Competition, stating that the National Distribution Company had abused its dominant position on the press distribution market and had withheld the first issue of Pras Pres from sale in its shops.

  • Violence and harassment:

The report notes the assault on Ivo Nikodimov of Bulgarian National Television (BNT).

Anton Todorov, an MP from the ruling party, told the journalist Viktor Nikolaev on air that he “would fire him” for a question. Deputy Prime Minister Valeri Simeonov threatened the journalist in a similar way. Simeonov later called (via the government’s website!) for an apology from the media that had interpreted his words as a threat.

  • Censorship or content restrictions:

Journalists continue to report self-censorship, editorial bans on covering specific persons and topics, and the imposition of political views by corporate leaders. In March, the businessman and publisher Sasho Donchev stated at a business forum that he had been invited to a private meeting with the Prosecutor General, at which the Prosecutor General accused him of supporting a particular political party and warned him that his communications were being monitored. The Prosecutor General offered a different version: that Donchev was seeking influence over the prosecutors working on a case related to him.

The government does not restrict the internet (63.5% of households had internet access in 2016, according to the ITU).

 

How Many Piracy Warnings Would Get You to Stop?

Post Syndicated from Andy original https://torrentfreak.com/how-many-piracy-warnings-would-get-you-to-stop-180422/

For the past several years, copyright holders in the US and Europe have been trying to reach out to file-sharers in an effort to change their habits.

Whether via high-profile publicity lawsuits or a simple email, it’s hoped that by letting people know they aren’t anonymous, they’ll stop pirating and buy more content instead.

Traditionally, most ISPs haven’t been that keen on passing infringement notices on. However, the BMG v Cox lawsuit seems to have made a big difference, with a growing number of ISPs now visibly warning their users that they operate a repeat infringer policy.

But perhaps the big question is how seriously users take these warnings because – let’s face it – that’s the entire point of their existence.

There can be little doubt that a few recipients will be scurrying away at the slightest hint of trouble, intimidated by the mere suggestion that they’re being watched.

Indeed, a father in the UK – who received a warning last year as part of the Get it Right From a Genuine Site campaign – confidently and forcefully assured TF that there would be no more illegal file-sharing taking place on his ten-year-old son’s computer again – ever.

In France, where the HADOPI anti-piracy scheme received much publicity, people receiving an initial notice are most unlikely to receive additional ones in future. A December 2017 report indicated that of nine million first warning notices sent to alleged pirates since 2012, ‘just’ 800,000 received a follow-up warning on top.

The suggestion is that people either stop their piracy after getting a notice or two, or choose to “go dark” instead, using streaming sites for example or perhaps torrenting behind a decent VPN.

But for some people, the message simply doesn’t sink in early on.

A post on Reddit this week by a TWC Spectrum customer revealed that despite a wealth of readily available information (including masses in the specialist subreddit where the post was made), even several warnings fail to have an effect.

“Was just hit with my 5th copyright violation. They halted my internet and all,” the self-confessed pirate wrote.

There are at least three important things to note from this opening sentence.

First, the first four warnings did nothing to change the user’s piracy habits. Second, Spectrum apparently had enough at five warnings and applied a repeat-infringer suspension, presumably to avoid the same fate as Cox in the BMG case. Third, the account suspension seems to have changed the game.

Notably, rather than some huge blockbuster movie, that fifth warning came due to something rather less prominent.

“Thought I could sneak in a random episode of Rosanne. The new one that aired LOL. That fast. Under 24 hours I got shut off. Which makes me feel like [ISPs] do monitor your traffic and its not just the people sending them notices,” the post read.

Again, some interesting points here.

Any content can be monitored by rightsholders but if it’s popular in the US then a warning delivered via an ISP seems to be more likely than elsewhere. However, the misconception that the monitoring is done by ISPs persists, despite that not being the case.

ISPs do not monitor users’ file-sharing activity, anti-piracy companies do. They can grab an IP address the second someone enters a torrent swarm, or even connects to a tracker. It happens in an instant, at a time of their choosing. Quickly jumping in and out of a torrent is no guarantee and the fallacy of not getting caught due to a failure to seed is just that – a fallacy.

But perhaps the most important thing is that after five warnings and a disconnection, the Reddit user decided to take action. Sadly for the people behind Roseanne, it’s not exactly the reaction they’d have hoped for.

“I do not want to push it but I am curious to what happens 6th time, and if I would even be safe behind a VPN,” he wrote.

“Just want to learn how to use a VPN and Sonarr and have a guilt free stress free torrent watching.”

Of course, there was no shortage of advice.

“If you have gotten 5 notices, you really should of learnt [sic] how to use a VPN before now,” one poster noted, perhaps inevitably.

But curiously, or perhaps obviously given the number of previous warnings, the fifth warning didn’t come as a surprise to the user.

“I knew they were going to hit me for it. I just didn’t think a 195mb file would do it. They were getting me for Disney movies in the past,” he added.

So how do you grab the attention of a persistent infringer like this? Five warnings and a suspension apparently. But clearly, not even that is a guarantee of success. Perhaps this is why most ‘strike’ schemes tend to give up on people who can’t be rehabilitated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Steam Censors MEGA.nz Links in Chats and Forum Posts

Post Syndicated from Ernesto original https://torrentfreak.com/steam-censors-mega-nz-links-in-chats-and-forum-posts-180421/

With more than 150 million registered accounts, Steam is much more than just a game distribution platform.

For many people, it’s also a social hangout and a communication channel.

Steam’s instant messaging tool, for example, is widely used for chats with friends. About games of course, but also to discuss lots of other stuff.

While Valve doesn’t mind people socializing on its platform, there are certain things the company doesn’t want Steam users to share. This includes links to the cloud hosting service Mega.

Users who’d like to show off some gaming footage, or even a collection of cat pictures they stored on Mega, are unable to do so. As it turns out, Steam actively censors these types of links from forum posts and chats.

In forum posts, these offending links are replaced by the text {LINK REMOVED} and private chats get the same treatment. Instead of the Mega link, people on the other end only get a mention that a link was removed.

Mega link removed from chat

While Mega operates as a regular company that offers cloud hosting services, Steam notes on its own site that Mega is “potentially malicious.”

“The site could contain malicious content or be known for stealing user credentials,” Steam’s link checker warns.

Potentially malicious…

It’s unclear what malicious means in this context. Mega has never been flagged by Google’s Safe Browsing program, which is regarded as one of the industry standards for malware and other unwanted software.

What’s more likely is that Mega’s piracy stigma has something to do with the censoring. As it turns out, Steam also censors 4shared.com, as well as Pirate Bay’s former .se domain name.

Other “malicious sites” which get the same treatment are more game oriented, such as cheathappens.com and the CSGO Skin Screenshot site metjm.net. While it’s understandable some game developers don’t like these, malicious is a rather broad term in this regard.

Mega refutes any suggestion that it is doing something wrong. Mega Chairman Stephen Hall tells TorrentFreak that the company swiftly removes any malicious content once it receives an abuse notice.

“It is crazy for sites to block Mega links as we respond very quickly to disable any links that are reported as malware, generally much quicker than our competitors,” Hall says.

Valve did not immediately reply to our request for clarification so the precise reason for the link censoring remains unknown.

That said, when something’s censored the public tends to work around any restrictions. Mega links are still being shared on Steam, with a slightly altered URL. In addition, Mega’s backup domain Mega.co.nz still works fine too.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Implement continuous integration and delivery of serverless AWS Glue ETL applications using AWS Developer Tools

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-serverless-aws-glue-etl-applications-using-aws-developer-tools/

AWS Glue is an increasingly popular way to develop serverless ETL (extract, transform, and load) applications for big data and data lake workloads. Organizations that transform their ETL applications to cloud-based, serverless ETL architectures need a seamless, end-to-end continuous integration and continuous delivery (CI/CD) pipeline: from source code, to build, to deployment, to product delivery. Having a good CI/CD pipeline can help your organization discover bugs before they reach production and deliver updates more frequently. It can also help developers write quality code and automate the ETL job release management process, mitigate risk, and more.

AWS Glue is a fully managed data catalog and ETL service. It simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, and job scheduling. AWS Glue crawls your data sources and constructs a data catalog using pre-built classifiers for popular data formats and data types, including CSV, Apache Parquet, JSON, and more.

When you are developing ETL applications using AWS Glue, you might come across some of the following CI/CD challenges:

  • Iterative development with unit tests
  • Continuous integration and build
  • Pushing the ETL pipeline to a test environment
  • Pushing the ETL pipeline to a production environment
  • Testing ETL applications using real data (live test)
  • Exploring and validating data

In this post, I walk you through a solution that implements a CI/CD pipeline for serverless AWS Glue ETL applications supported by AWS Developer Tools (including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild) and AWS CloudFormation.

Solution overview

The following diagram shows the pipeline workflow:

This solution uses AWS CodePipeline, which lets you orchestrate and automate the test and deploy stages for ETL application source code. The solution consists of a pipeline that contains the following stages:

1.) Source Control: In this stage, the AWS Glue ETL job source code and the AWS CloudFormation template file for deploying the ETL jobs are both committed to version control. I chose to use AWS CodeCommit for version control.

To get the ETL job source code and AWS CloudFormation template, download the gluedemoetl.zip file. This solution is developed based on a previous post, Build a Data Lake Foundation with AWS Glue and Amazon S3.

2.) LiveTest: In this stage, all resources—including AWS Glue crawlers, jobs, S3 buckets, roles, and other resources that are required for the solution—are provisioned, deployed, live tested, and cleaned up.

The LiveTest stage includes the following actions:

  • Deploy: In this action, all the resources that are required for this solution (crawlers, jobs, buckets, roles, and so on) are provisioned and deployed using an AWS CloudFormation template.
  • AutomatedLiveTest: In this action, all the AWS Glue crawlers and jobs are executed, and data exploration and validation tests are performed. These validation tests include, but are not limited to, record counts in both the raw tables and the transformed tables in the data lake, and any other business validations (a minimal sketch of such a check appears right after this list). I used AWS CodeBuild for this action.
  • LiveTestApproval: This action is included for the cases in which a pipeline administrator approval is required to deploy/promote the ETL applications to the next stage. The pipeline pauses in this action until an administrator manually approves the release.
  • LiveTestCleanup: In this action, all the LiveTest stage resources, including test crawlers, jobs, roles, and so on, are deleted using the AWS CloudFormation template. This action helps minimize cost by ensuring that the test resources exist only for the duration of the AutomatedLiveTest and LiveTestApproval actions.
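As a rough illustration of the kind of check the AutomatedLiveTest action can run, here is a minimal Python sketch using boto3 and Amazon Athena. It compares the record counts of the raw and transformed tables created by the demo; the database and table names come from the queries shown later in this walkthrough, while the region and the results bucket are placeholder assumptions:

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption

def count_rows(database, table):
    # Run a COUNT(*) query in Athena and return the result as an integer.
    query = athena.start_query_execution(
        QueryString=f'SELECT count(*) FROM "{database}"."{table}"',
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},  # placeholder bucket
    )
    qid = query["QueryExecutionId"]
    while True:  # poll until the query finishes
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row

raw = count_rows("nycitytaxi_gluedemocicdtest", "data")
transformed = count_rows("nytaxiparquet_gluedemocicdtest", "datalake")
assert raw == transformed, f"Record counts differ: raw={raw}, transformed={transformed}"
print(f"Validation passed: {raw} records in both tables")

A non-zero exit from a script like this is enough to fail the CodeBuild build and stop the pipeline before the approval step.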

3.) DeployToProduction: In this stage, all the resources are deployed using the AWS CloudFormation template to the production environment.

Try it out

This code pipeline takes approximately 20 minutes to complete the LiveTest stage (up to the LiveTestApproval action, in which manual approval is required).

To get started with this solution, choose Launch Stack:

This creates the CI/CD pipeline with all of its stages, as described earlier. It performs an initial commit of the sample AWS Glue ETL job source code to trigger the first release change.

In the AWS CloudFormation console, choose Create. After the template finishes creating resources, you see the pipeline name on the stack Outputs tab.

After that, open the CodePipeline console and select the newly created pipeline. Initially, your pipeline’s CodeCommit stage shows that the source action failed.

Allow a few minutes for your new pipeline to detect the initial commit applied by the CloudFormation stack creation. As soon as the commit is detected, your pipeline starts. You will see the successful stage completion status as soon as the CodeCommit source stage runs.

In the CodeCommit console, choose Code in the navigation pane to view the solution files.

Next, you can watch the pipeline go through the Deploy and AutomatedLiveTest actions of the LiveTest stage, until it finally reaches the LiveTestApproval action.

At this point, if you check the AWS CloudFormation console, you can see that a new template has been deployed as part of the LiveTest deploy action.

At this point, make sure that the AWS Glue crawlers and the AWS Glue job ran successfully. Also check whether the corresponding databases and external tables have been created in the AWS Glue Data Catalog. Then verify that the data is validated using Amazon Athena, as shown following.

Open the AWS Glue console, and choose Databases in the navigation pane. You will see the following databases in the Data Catalog:

Open the Amazon Athena console, and run the following queries. Verify that the record counts are matching.

SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data";
SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake";

The following shows the raw data:

The following shows the transformed data:

The pipeline pauses the action until the release is approved. After validating the data, manually approve the revision on the LiveTestApproval action on the CodePipeline console.

Add comments as needed, and choose Approve.

The LiveTestApproval stage now appears as Approved on the console.

After the revision is approved, the pipeline proceeds to use the AWS CloudFormation template to destroy the resources that were deployed in the LiveTest deploy action. This helps reduce cost and ensures a clean test environment on every deployment.

Production deployment is the final stage. In this stage, all the resources—AWS Glue crawlers, AWS Glue jobs, Amazon S3 buckets, roles, and so on—are provisioned and deployed to the production environment using the AWS CloudFormation template.

After successfully running the whole pipeline, feel free to experiment with it by changing the source code stored on AWS CodeCommit. For example, if you modify the AWS Glue ETL job to generate an error, it should make the AutomatedLiveTest action fail. Or if you change the AWS CloudFormation template to make its creation fail, it should affect the LiveTest deploy action. The objective of the pipeline is to ensure that all changes deployed to production work as expected.

Conclusion

In this post, you learned how easy it is to implement CI/CD for serverless AWS Glue ETL solutions with AWS developer tools like AWS CodePipeline and AWS CodeBuild at scale. Implementing such solutions can help you accelerate ETL development and testing at your organization.

If you have questions or suggestions, please comment below.

 


Additional Reading

If you found this post useful, be sure to check out Implement Continuous Integration and Delivery of Apache Spark Applications using AWS and Build a Data Lake Foundation with AWS Glue and Amazon S3.

 


About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

 
Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

 

 

 

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to use any DNS server a customer chooses. These releases enabled HTTP, TCP, and SMTP communication originating from RDS for Oracle instances – limited to non-secure (non-SSL) channels.

To overcome this limitation for SSL connections, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By making use of such wallets, you can now extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL (see the boto3 sketch below)
  • Extend Oracle database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES, or any other generic SMTPS provider that can be integrated, to send emails from your database instances

These are just a few high-level examples of the new use cases that the whitepaper opens up. As a reminder, always ensure that security best practices are in place when making use of Outbound Network Access (as detailed in the whitepaper).
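The S3 use case above relies on a presigned URL, which is generated outside the database. A minimal sketch using boto3 (the bucket name and object key are hypothetical) might look like this:

import boto3

s3 = boto3.client("s3")

# Generate a time-limited HTTPS URL for a private object (bucket and key are placeholders).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-data-bucket", "Key": "exports/report.csv"},
    ExpiresIn=3600,  # URL stays valid for one hour
)
print(url)

The RDS for Oracle instance can then fetch that URL over SSL with utl_http, provided the Oracle wallet contains the appropriate certificate chain, as described in the whitepaper.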

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

Cloudflare Kicks Out Torrent Site For Abuse Reporting Interference

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-kicks-out-torrent-site-for-abuse-reporting-interference-180420/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

The company’s clients include billion dollar companies and national governments, but also personal blogs, and even pirate sites.

Copyright holders are not happy with the latter category and are pressuring Cloudflare to cut their ties with sites like The Pirate Bay, both in and out of court.

Cloudflare, however, maintains that it’s a neutral service provider. They forward copyright infringement notices to their customers, for example, but deny any liability for these sites.

Generally speaking, the company only disconnects a customer in response to a court order, as it did with Sci-Hub earlier this year. That’s why it came as a surprise when the anime torrent site NYAA.si was disconnected this week.

The site, which is a replacement for the original NYAA, has millions of users and is particularly popular in Japan. Without prior warning, it became unavailable for several hours this week, after Cloudflare removed it from its services. So what happened?

TorrentFreak spoke to the operator, who said that the exact reason for the termination remains a mystery to him. He reached out to Cloudflare looking for answers, but the company simply stated that it’s about “avoiding measures taken to avoid abuse complaints,” as can be seen below.

One of Cloudflare’s messages

The operator says he hasn’t done anything out of the ordinary and showed his willingness to resolve any possible issues. However, that hasn’t changed Cloudflare’s stance.

“We asked multiple times for clarification. We also expressed that we were willing to attempt to work with them on whatever the problem actually was, if they would explain what they even mean.

“Naturally, I have been stonewalled by them at every stage. I’ve contacted numerous persons at Cloudflare and nobody will talk about this,” NYAA’s operator adds.

TorrentFreak asked Cloudflare for more details and the company confirmed that the matter was related to interference with its abuse reporting systems, without providing further detail.

“We determined that the customer had taken steps specifically intended to interfere with and thwart the operation of our abuse reporting systems,” Cloudflare’s General Counsel Doug Kramer informed us.

Cloudflare’s statement suggests that the site took active steps to interfere with the abuse process. The company added that it can’t go into detail, but says that the reason for the termination was shared with the website owner.

The website owner, on the other hand, informs us that he has no clue what the exact problem is. NYAA.si occasionally swaps IP addresses and has recently set up some mirror domains, but these were all under the same account. So he has no idea why that would interfere with any abuse reports.

“I’m honestly unsure of what we could have done that ‘circumvents’ their abuse system,” NYAA’s operator says, adding that the only abuse reports received were copyright related.

It’s unlikely, however, that copyright takedown notices alone would warrant account termination, as most of the largest torrent sites use Cloudflare.

NYAA’s operator says he can do little more than speculate at this point. Some have hinted at a secret court order, while Japan’s recent crackdown on manga and anime piracy also came to mind, all without a shred of evidence, of course.

Whatever the reason, NYAA.si now has to move on without Cloudflare, while the mystery remains.

“Frankly, this whole thing is a joke. I don’t understand why they would willingly host much bigger sites like ThePirateBay without any issue, or even ISIS, or the various hacking groups that have used them over time,” the operator says.

If more information about the abuse reporting interference becomes available, we’ll definitely follow it up.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.

Inside Hackspace magazine 6

Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.

HackSpace magazine 6 Alec Steele

Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!
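The article doesn’t include Andrew’s code, but as a rough sketch of the pipeline he describes (camera frame in, recognised text read aloud), something along these lines could run on a Raspberry Pi. It assumes the picamera, Pillow, pytesseract, and pyttsx3 packages plus the Tesseract OCR engine; none of these specific choices are confirmed by the article:

import time
from picamera import PiCamera
from PIL import Image
import pytesseract
import pyttsx3

camera = PiCamera()
engine = pyttsx3.init()           # text-to-speech engine

camera.start_preview()
time.sleep(2)                     # let the sensor adjust to the light
camera.capture("/tmp/page.jpg")   # grab a frame of the text being read
camera.stop_preview()

text = pytesseract.image_to_string(Image.open("/tmp/page.jpg"))  # OCR the frame
if text.strip():
    engine.say(text)              # speak the recognised text
    engine.runAndWait()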

Radio

60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.

Tutorials

If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.


We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.

HackSpace magazine 6

All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

The post Hackspace magazine 6: Paper Engineering appeared first on Raspberry Pi.

CEM: a lack of balance between freedom of expression and the protection of private life

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/20/cem-30/

The Council for Electronic Media (CEM) has published on its website the conclusions of an ad hoc monitoring exercise on the television coverage of the mid-April road accident in which people were killed and injured.

The Council received 12 complaints from citizens, mainly alleging violations of professional standards.

CEM first recalls that it has repeatedly stressed that “the right to private life and the right to freedom of expression do not take precedence over one another.”

The regulator then mentions another pair of rights: the coverage lacked a balance between “the right of citizens to receive full information and the right to non-interference in their private life.”

In its conclusions, CEM speaks of moral and ethical norms and self-regulatory mechanisms; the conclusions contain no findings of violations of the law or of the licences.

*

 

Art. 10. (1) of the Radio and Television Act (amended, SG No. 12 of 2010): In carrying out their activities, media service providers shall be guided by the following principles:
1. guaranteeing the right to free expression of opinion;
2. guaranteeing the right to information;
3. preserving the confidentiality of the source of information;
4. protection of the personal inviolability of citizens [...]

 

Regulating the internet: to be or not to be

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/20/internet_reg/

The internet: to regulate or not?

That is the title of a public consultation in the United Kingdom, quite obviously after Shakespeare. The question is not that dichotomous, but for reasons known only to itself, Parliament has chosen precisely this title.

The parliamentary Communications Committee is asking how the regulation of the internet should be improved, including through better self-regulation and governance, and whether a new regulatory framework for the internet is needed or whether the general law of the United Kingdom is adequate. It will also examine whether online platforms have sufficient accountability and transparency, adequate governance, and whether they provide effective behavioural standards for users.

Here are the nine questions; the deadline for responses is 11 May.

Countries fall into two kinds: those that consult, and those that already know it all.

Facebook excludes a billion and a half users from the scope of the GDPR

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/19/facebook-13/

Под това заглавие (ексклузивно) Reuters информира за следното:

Потребителите на Facebook (извън Съединените щати и Канада), независимо дали   знаят или не,  сега имат договор за услугата с компанията Facebook със седалище в Ирландия. Както Google,  LinkedIn и други компании,  Facebook също работи чрез калифорнийска и ирландска компания –  Facebook Inc/Калифорния, Менло Парк  u Facebook Ireland – като последното е под ирландска юрисдикция.

Facebook plans for the contract with Facebook Ireland to remain in force only for European users, which means that 1.5 billion users in Africa, Asia, Australia and Latin America will fall outside the scope of the European Union’s General Data Protection Regulation (GDPR), which takes effect on 25 May 2018. The world’s largest online social network is thus narrowing the reach of the GDPR, the regulation that allows European regulators to penalise companies for collecting or using personal data without users’ consent.

This avoids an enormous risk, Reuters writes, since the new regulation allows fines of up to 4% of global annual revenue for violations, which in Facebook’s case would mean billions of dollars.

Meanwhile, Zuckerberg spoke yesterday at a conference in San Jose, California, and said that Facebook is introducing new privacy and personal-data settings in Europe which, he claimed, will eventually extend to users around the world: “We not only want to comply with the law, but also to go beyond our obligations and build new and better privacy practices for everyone on Facebook.”

Get Started with Blockchain Using the new AWS Blockchain Templates

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/get-started-with-blockchain-using-the-new-aws-blockchain-templates/

Many of today’s discussions around blockchain technology remind me of the classic Shimmer Floor Wax skit. According to Dan Aykroyd, Shimmer is a dessert topping. Gilda Radner claims that it is a floor wax, and Chevy Chase settles the debate and reveals that it actually is both! Some of the people that I talk to see blockchains as the foundation of a new monetary system and a way to facilitate international payments. Others see blockchains as a distributed ledger and immutable data source that can be applied to logistics, supply chain, land registration, crowdfunding, and other use cases. Either way, it is clear that there are a lot of intriguing possibilities and we are working to help our customers use this technology more effectively.

We are launching AWS Blockchain Templates today. These templates will let you launch an Ethereum (either public or private) or Hyperledger Fabric (private) network in a matter of minutes and with just a few clicks. The templates create and configure all of the AWS resources needed to get you going in a robust and scalable fashion.

Launching a Private Ethereum Network
The Ethereum template offers two launch options. The ecs option creates an Amazon ECS cluster within a Virtual Private Cloud (VPC) and launches a set of Docker images in the cluster. The docker-local option also runs within a VPC, and launches the Docker images on EC2 instances. The template supports Ethereum mining, the EthStats and EthExplorer status pages, and a set of nodes that implement and respond to the Ethereum RPC protocol. Both options create and make use of a DynamoDB table for service discovery, along with Application Load Balancers for the status pages.
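If you would rather script the launch than click through the console, a minimal sketch using the AWS SDK for JavaScript might look like the one below. The template URL, stack name, VPC ID, and parameter keys are illustrative placeholders rather than the template's real parameter names, so check the template documentation before relying on them:

'use strict';

// Sketch: launch a blockchain template as a CloudFormation stack.
// Every name and parameter key below is a placeholder for illustration only.
var AWS = require('aws-sdk');
var cloudformation = new AWS.CloudFormation({region: 'us-east-1'});

var params = {
  StackName: 'MyEthereumNetwork',
  TemplateURL: 'https://s3.amazonaws.com/example-bucket/ethereum-network.template.yaml', // placeholder URL
  Capabilities: ['CAPABILITY_IAM'], // the template creates IAM resources
  Parameters: [
    // Placeholder keys; the real template defines its own parameter names.
    {ParameterKey: 'ContainerPlatform', ParameterValue: 'ecs'},
    {ParameterKey: 'VPCID', ParameterValue: 'vpc-0123456789abcdef0'}
  ]
};

cloudformation.createStack(params, function(err, data) {
  if (err) console.log(err, err.stack);                      // request failed
  else console.log('Stack creation started:', data.StackId); // stack is now building
});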

Here are the AWS Blockchain Templates for Ethereum:

I start by opening the CloudFormation Console in the desired region and clicking Create Stack:

I select Specify an Amazon S3 template URL, enter the URL of the template for the region, and click Next:

I give my stack a name:

Next, I enter the first set of parameters, including the network ID for the genesis block. I’ll stick with the default values for now:

I will also use the default values for the remaining network parameters:

Moving right along, I choose the container orchestration platform (ecs or docker-local, as I explained earlier) and the EC2 instance type for the container nodes:

Next, I choose my VPC and the subnets for the Ethereum network and the Application Load Balancer:

I configure my keypair, EC2 security group, IAM role, and instance profile ARN (full information on the required permissions can be found in the documentation):

The Instance Profile ARN can be found on the summary page for the role:

I confirm that I want to deploy EthStats and EthExplorer, choose the tag and version for the nested CloudFormation templates that are used by this one, and click Next to proceed:

On the next page I specify a tag for the resources that the stack will create, leave the other options as-is, and click Next:

I review all of the parameters and options, acknowledge that the stack might create IAM resources, and click Create to build my network:

The template makes use of three nested templates:

After all of the stacks have been created (mine took about 5 minutes), I can select JeffNet and click the Outputs tab to discover the links to EthStats and EthExplorer:

Here’s my EthStats:

And my EthExplorer:

If I am writing apps that make use of my private network to store and process smart contracts, I would use the EthJsonRpcUrl.
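As a rough sketch of how an application might use that endpoint, the snippet below assumes the web3 package (version 1.x) and substitutes a placeholder for the EthJsonRpcUrl value shown in the stack outputs:

'use strict';

// Sketch: connect to the private network's RPC endpoint and read the latest block number.
// The URL below is a placeholder for the EthJsonRpcUrl output of the CloudFormation stack.
var Web3 = require('web3');

var web3 = new Web3('http://internal-ethrpc-example.us-east-1.elb.amazonaws.com:8545'); // placeholder

web3.eth.getBlockNumber()
  .then(function(blockNumber) { console.log('Latest block:', blockNumber); })
  .catch(function(err) { console.log(err); });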

Stay Tuned
My colleagues are eager to get your feedback on these new templates and plan to add new versions of the frameworks as they become available.

Jeff;

 

The End of Google Cloud Messaging, and What it Means for Your Apps

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/the-end-of-google-cloud-messaging-and-what-it-means-for-your-apps/

On April 10, 2018, Google announced the deprecation of its Google Cloud Messaging (GCM) platform. Specifically, the GCM server and client APIs are deprecated and will be removed as soon as April 11, 2019.  What does this mean for you and your applications that use Amazon Simple Notification Service (Amazon SNS) or Amazon Pinpoint?

First, nothing will break now or after April 11, 2019. GCM device tokens are completely interchangeable with the newer Firebase Cloud Messaging (FCM) device tokens. If you have existing GCM tokens, you’ll still be able to use them to send notifications. This statement is also true for GCM tokens that you generate in the future.

On the back end, we’ve already migrated Amazon SNS and Amazon Pinpoint to the server endpoint for FCM (https://fcm.googleapis.com/fcm/send). As a developer, you don’t need to make any changes as a result of this deprecation.

We created the following mini-FAQ to address some of the questions you may have as a developer who uses Amazon SNS or Amazon Pinpoint.

If I migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. Your ability to connect to your applications and send messages through both Amazon SNS and Amazon Pinpoint doesn’t change. We’ll update the documentation for Amazon SNS and Amazon Pinpoint soon to reflect these changes.

If I don’t migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. If you do nothing, your existing credentials and GCM tokens will still be valid. All applications that you previously set up to use Amazon Pinpoint or Amazon SNS will continue to work normally. When you call the API for Amazon Pinpoint or Amazon SNS, we initiate a request to the FCM server endpoint directly.
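For example, a direct publish to a platform endpoint through Amazon SNS looks the same before and after the deprecation. The sketch below uses the AWS SDK for JavaScript; the endpoint ARN is a placeholder for one of your own:

'use strict';

// Sketch: publish a push notification to an existing GCM/FCM platform endpoint via Amazon SNS.
// The endpoint ARN is a placeholder; SNS continues to accept the "GCM" message key for FCM delivery.
var AWS = require('aws-sdk');
var sns = new AWS.SNS({region: 'us-east-1'});

var params = {
  TargetArn: 'arn:aws:sns:us-east-1:123456789012:endpoint/GCM/MyApp/example-endpoint-id', // placeholder
  MessageStructure: 'json',
  Message: JSON.stringify({
    default: 'Hello from SNS',
    GCM: JSON.stringify({notification: {title: 'Hello', body: 'Delivered via FCM'}})
  })
};

sns.publish(params, function(err, data) {
  if (err) console.log(err, err.stack);           // publish failed
  else console.log('MessageId:', data.MessageId); // publish succeeded
});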

What are the differences between Amazon SNS and Amazon Pinpoint?

Amazon SNS makes it easy for developers to set up, operate, and send notifications at scale, affordably and with a high degree of flexibility. Amazon Pinpoint has many of the same messaging capabilities as Amazon SNS, with the same levels of scalability and flexibility.

The main difference between the two services is that Amazon Pinpoint provides both transactional and targeted messaging capabilities. By using Amazon Pinpoint, marketers and developers can not only send transactional messages to their customers, but can also segment their audiences, create campaigns, and analyze both application and message metrics.

How do I migrate from GCM to FCM?

For more information about migrating from GCM to FCM, see Migrate a GCM Client App for Android to Firebase Cloud Messaging on the Google Developers site.

If you have any questions, please post them in the comments section, or in the Amazon Pinpoint or Amazon SNS forums.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly roll back and limit the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, a new requirement arrives: the application must count only the buckets whose names begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version. This function invokes the new Lambda function and checks the results. If you’re satisfied with the tests, instruct CodeDeploy to proceed with the rollout via an API call to: codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.

The preTrafficHook function, shown next, invokes the newly published version of returnS3Buckets and reports the result of that validation back to CodeDeploy:
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting the Enabled property of its DeploymentPreference to false.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written, so you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. Replace “sam” with “aws cloudformation” to use the AWS CLI instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API named safeDeployStack has also been set up. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function and it returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 (the number of buckets whose names begin with “a”) instead of 51 (the total number of buckets).
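One way to do that, sketched below in Node.js, is to poll the invoke URL in a loop and tally the responses you receive; the URL is a placeholder for your own API Gateway stage:

'use strict';

// Sketch: poll the API during the rollout and count how often each response body is returned.
// Replace the URL with your own API Gateway invoke URL for the /test resource.
var https = require('https');

var url = 'https://example123.execute-api.us-east-1.amazonaws.com/Prod/test'; // placeholder
var counts = {};

function poll(remaining) {
  if (remaining === 0) {
    console.log('Responses seen:', counts);
    return;
  }
  https.get(url, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
      counts[body] = (counts[body] || 0) + 1;
      setTimeout(function() { poll(remaining - 1); }, 5000); // wait 5 seconds between requests
    });
  }).on('error', function(err) { console.log(err); });
}

poll(20);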

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:
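If you prefer to check from code rather than from the console, a quick sketch using the AWS SDK for JavaScript looks like this; the function name is a placeholder for the one CloudFormation actually generated:

'use strict';

// Sketch: confirm which version the "live" alias points to.
// Replace the function name with the one CloudFormation created for returnS3Buckets.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({region: 'us-east-1'});

lambda.getAlias({FunctionName: 'mySafeDeployStack-returnS3Buckets-EXAMPLE', Name: 'live'}, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log('Alias live points to version:', data.FunctionVersion); // expect "2" after the rollout
});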

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.