Tag Archives: bing

ETTV: How an Upload Bot Became a Pirate Hero

Post Syndicated from Ernesto original https://torrentfreak.com/ettv-how-an-upload-bot-became-a-pirate-hero-171210/

Earlier this year, the torrent community was hit hard when another major torrent site suddenly shut its doors.

Just a few months after celebrating its tenth anniversary, ExtraTorrent’s operator threw in the towel. While an official explanation was never provided, it’s likely that he was pressed to make this decision.

The ExtraTorrent site was a safe harbor for millions of regular users, who became homeless overnight. But it was more than that. It was also the birthplace of several popular releasers and distribution groups.

ETTV and ETHD turned into well-known brands in their own right. While the “ET” prefix is derived from ExtraTorrent, the groups have shared TV and movie torrents on several other large torrent sites, and they still do. They even have their own site now.

With millions of people sharing their uploads every week, they’ve become icons and heroes to many. But how did this all come to be? We sat down with the team, virtually, to find out more.

“The idea for ettv/ethd was brought up by ExtraTorrent users,” the ETTV team says.

There was demand for a new group that would upload scene releases faster than the original EZTV, which was the dominant TV-torrent distribution group around 2011, when it all started.

“At the time the real EZTV was still active. They released stuff hours after it was released from the scene, leaving sites to wait very long for shows to arrive in public. In no way was ettv intended for competitive purposes. We had a lot of respect for Nova and the original EZTV operators.”

While ETTV is regularly referred to as a “group,” it was a one-person operation initially. Just a guy with a seedbox, grabbing scene releases and posting them on torrent sites.

It didn’t take long before people got wind of the new distribution ‘group,’ and interest in the torrents quickly exploded. This meant that a single seedbox was no longer sufficient, but help was not far away.

“It started off with one operator and a seedbox, but it became popular too fast. That’s when former ExtraTorrent owners stepped in to give ETTV the support and funding it needed to keep the story going.”

One of the earliest ETTV uploads on ExtraTorrent

In addition to the extra disk space and bandwidth, the team itself expanded as well. At its height, a handful of people were working in the group. However, as things became more and more automated, this number shrank again.

What many people don’t realize is that ETTV and ETHD are mostly run by lines of code. The entire distribution process is automated and requires minimal intervention from the people behind it.

“Ettv/Ethd is a bot, it doesn’t require human attention. It grabs what you tell the script to,” the team tells us.

The bot is set up to grab the latest copies of predefined shows from private servers where the latest scene releases are posted. These are transferred to the seedbox and the torrents are then pushed out to the public – on ETTV.tv, but also on The Pirate Bay and elsewhere. Everything is automated.

Even most of the maintenance is taken care of by the ‘bot’ itself. When disk space is running out, older content is purged, allowing fresh releases to come through.
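The team hasn’t published the bot’s code, but the housekeeping step it describes maps onto only a few lines. The following Python sketch is purely illustrative – the download path and free-space threshold are invented – and simply deletes the oldest releases until enough space is free:

import os
import shutil

DOWNLOAD_DIR = "/srv/seedbox/downloads"   # hypothetical path
MIN_FREE_BYTES = 50 * 1024**3             # hypothetical threshold: keep 50 GB free

def purge_oldest_until_free():
    # Oldest releases first, by modification time
    releases = sorted(
        (os.path.join(DOWNLOAD_DIR, name) for name in os.listdir(DOWNLOAD_DIR)),
        key=os.path.getmtime,
    )
    for path in releases:
        if shutil.disk_usage(DOWNLOAD_DIR).free >= MIN_FREE_BYTES:
            break  # enough room for fresh releases
        shutil.rmtree(path, ignore_errors=True)  # purge the old release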

“The only persons involved with the bots are the bill payers of our new home, ettv.tv. All they do is check the bot logs to see if there are any errors, and correct them,” the team explains.

One problem that couldn’t be easily solved with some code was the shutdown of ExtraTorrent. While the bills for the seedboxes were paid in advance until the end of 2017, the groups had to find a new home.

“The shutdown of ExtraTorrent didn’t stop the bots from running; it just left ettv/ethd homeless and caused fans to lose their way trying to find us. Not many knew where else we uploaded, or didn’t like the other sites we uploaded to.”

After a few months had passed it became clear that they were not going anywhere. Quite the contrary, they started their very own site, ETTV.tv, where all the latest releases are published.

ETTV.tv

In the near future, the team will focus on turning the site into a new home for its followers. Just a few weeks ago, for example, it launched a new release “tag,” ETMovies, which specializes in lower-resolution films with smaller file sizes.

“We recently introduced ETMovies which is basically for SD Movies, other than that the only plan ettv/ethd has is to give a home to the members that suffered from the sudden shut down of ExtraTorrent.”

Just this week, the site also expanded its reach by adding new categories such as music, games, software, and books, where approved uploaders will publish content.

While they are doing their best to keep the site up and running, it’s not a given that ETTV will be around forever. As long as there are plenty of funds and no concrete legal pressure, it might be. But if recent history has shown us anything, it’s that there are no guarantees.

“No one is here seeking to be a millionaire, if the traffic pays the bills we keep going, if not then all we can say is (sorry we tried) we will not be the heroes that saved the day.

“Again and again, the troublesome history of torrent sites is clear. It’s a war no site owner can win. If we are ever in danger, we will choose freedom. It’s not like followers can bail you out if the worst were to happen,” the ETTV team concludes.

For now, however, the bot keeps on running.


Three Billboards Outside Ebbing, Missouri – according to McLuhan

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/12/09/three_billboards/

 


Martin McDonagh’s latest film to date, Three Billboards Outside Ebbing, Missouri, has now screened in Sofia as well. The film has already taken the audience awards in San Sebastián and Toronto, the award for best screenplay in Venice, and two BIFA awards.

The protagonist is Mildred Hayes. She lives in a small town in Missouri; divorced, mother of a son and a daughter, she suffered an enormous personal tragedy months earlier. Her daughter Angela, at a difficult high-school age, was raped and murdered one evening. For months the killer has remained unknown; the grief does not recede, but the anger at the absence of justice and due process grows. At the edge of town, along the highway, stand three abandoned billboards – and the film opens with Mildred’s decision to rent them, so as to put her questions directly and visibly to the people she expects action from – and personally to the chief of police: what are you doing to solve this crime? The questions on the billboards – as one would expect – draw the attention of the local police, of television, and of the townspeople.

What does the law say, Mildred Hayes asks at the outset – what can and can’t you put on a billboard? The billboards are a character in their own right: they drive the action, aggression toward the mother spills over into aggression toward the billboards, support for the mother is expressed as support for her questions on the billboards; the medium is the message*. If in Annie Hall McLuhan appears on screen in person, Three Billboards Outside Ebbing, Missouri is a cinematic realization of McLuhan’s ideas: “Every medium stands between us and reality and affects the way we perceive the world.”

The role of the mother is entrusted to Frances McDormand, an Oscar winner for her role in Fargo by Joel and Ethan Coen. McDormand has been married to Joel Coen since 1984, and that somehow seems to bear on her performance in this McDonagh film. Perhaps not – but either way, it has already been written that her concentrated devotion, combined with a complete lack of actorly vanity, makes Frances McDormand a contender for a second Oscar.

As the awards suggest, the foundation of the film’s success is McDonagh’s script. The story is full of twists, and by the end of the film it is hard to point to a villain – a wholly negative character is missing; the characters show different sides of themselves in unexpected developments. The conflict played out through the billboards is with the local police chief – who has in practice stopped looking for the killer, yet had searched in good faith, by his own lights; there is simply no result yet – and who, sadly, is about to die of a grave illness. What can he do? – If it were up to me, I would set up a DNA database for every man in this country from the moment of his birth, and when something happens – you establish it with certainty – and death, says the mother. – That is certainly not allowed under human rights law, the policeman explains, and at that precise moment he seems to be right. Mildred Hayes has her own mistakes – perhaps her own guilt – in the difficult mother-daughter relationship; perhaps Angela would have remained alive had her mother patiently found a way to reach her: it emerges that their relationship was strained, and that shortly before the murder the daughter wanted to move in with her father. Strained not for lack of love, but through momentary carelessness or inattention – no one is perfect, yet only some pay so high a price.

The character of officer Dixon – a textbook racist and abuser, authority clothed in force – develops along similar lines. Still torturing black folks, Dixon, the mother asks early in the film. – Not “blacks” – persons of color, Dixon replies (a line familiar from A Behanding in Spokane) – and after a pause adds: and I don’t torture them. He does; the film shows scenes of violence. But then Dixon happens to overhear a stranger bragging that he took part in a rape – and from that moment he changes: he shows ingenuity and takes risks to obtain DNA from the suspect – and begins to behave like a real policeman after all; the commandment, in the end, is to serve and protect.

The film offers no answers; we do not know whether justice for the murdered girl is visible in the distance, but in the finale the camera follows a remarkable pair – Mildred Hayes and Dixon – setting out to establish the truth, if not about Mildred’s daughter, then perhaps about another act of violence. How will the story continue? We’ll see; we’ll decide along the way.

It also sounds like an open ending for America, for American reality. We’ll decide along the way.

A wonderful film – and the best part is that other viewers will surely find other messages important, and will discover something else in it. Time for McLuhan again: “How is it possible for a poet like Eliot to declare: I never thought of it that way, but that is what I meant to say, since you have found it there. That is my way of thinking about the reactions of the critics.”

*

Marshall McLuhan – I wanted to add a link to the article on McLuhan in the Bulgarian Wikipedia, and was astonished to find that there is no article, though at least there are a couple of lines on “The medium is the message.”


The Pi Towers Secret Santa Babbage

Post Syndicated from Mark Calleja original https://www.raspberrypi.org/blog/secret-santa-babbage/

Tired of pulling names out of a hat for office Secret Santa? Upgrade your festive tradition with a Raspberry Pi, thermal printer, and everybody’s favourite microcomputer mascot, Babbage Bear.

Raspberry Pi Babbage Bear Secret Santa

The name’s Santa. Secret Santa.

It’s that time of year again, when the cosiness gets turned up to 11 and everyone starts thinking about jolly fat men, reindeer, toys, and benevolent home invasion. At Raspberry Pi, we’re running a Secret Santa pool: everyone buys a gift for someone else in the office. Obviously, the person you buy for has to be picked in secret and at random, or the whole thing wouldn’t work. With that in mind, I created Secret Santa Babbage to do the somewhat mundane task of choosing gift recipients. This could’ve just been done with some names in a hat, but we’re Raspberry Pi! If we don’t make a Python-based Babbage robot wearing a jaunty hat and programmed to spread Christmas cheer, who will?

Secret Santa Babbage

Ho ho ho!

Mecha-Babbage Xmas shenanigans

The script the robot runs is pretty basic: a list of names entered as comma-separated strings is shuffled at the press of a GPIO button, then a name is popped off the end and stored as a variable. The name is matched to a photo of the person stored on the Raspberry Pi, and a thermal printer pinched from Alex’s super awesome PastyCam (blog post forthcoming, maybe) prints out the picture and name of the person you will need to shower with gifts at the Christmas party. (Well, OK — with one gift. No more than five quid’s worth. Nothing untoward.) There’s also a redo function, just in case you pick yourself: press another button and the last picked name — still stored as a variable — is appended to the list again, which is shuffled once more, and a new name is popped off the end.
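The real script is in the GitHub repo linked below, but the core logic described here – shuffle, pop, print, redo – fits in a short Python sketch. The pin numbers, names, and the choice of the gpiozero library are assumptions for illustration only:

import random
from gpiozero import Button

pick_button = Button(17)   # hypothetical GPIO pins
redo_button = Button(27)

names = ["Alice", "Bob", "Carol", "Dave"]  # entered as comma-separated strings
random.shuffle(names)                      # shuffled at the press of a GPIO button

def print_recipient(name):
    # Stand-in for the thermal printer: the real build prints the
    # person's photo and name via CUPS
    print("You are buying for:", name)

while names:
    pick_button.wait_for_press()
    picked = names.pop()          # a name is popped off the end
    print_recipient(picked)

    # Redo function: if you drew yourself, the last picked name is
    # appended to the list again and the list is reshuffled
    redo_button.wait_for_press(timeout=10)
    if redo_button.is_pressed:
        names.append(picked)
        random.shuffle(names)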

Secret Santa Babbage prototyping

Prototyping!

As the build was a bit of a rush job undertaken at the request of our ‘Director of Vibe’ Emily, there are a few things I’d like to improve about this functionality that I didn’t get around to — more on that later. To add some extra holiday spirit to the project at the last minute, I used Pygame to play a WAV file of Santa’s jolly laugh while Babbage chooses a name for you. The file is included in the GitHub repo along with everything else, because ‘tis the season, etc., etc.
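The Pygame part really is only a couple of lines; a minimal version (with a placeholder filename standing in for the WAV in the repo) looks like this:

import pygame

pygame.mixer.init()
laugh = pygame.mixer.Sound("santa_laugh.wav")  # placeholder filename
laugh.play()  # plays asynchronously while Babbage picks a name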

Secret Santa Babbage prototyping

Editor’s note: Considering these desk adornments, Mark’s Secret Santa gift-giver has a lot to go on.

Writing the code for Xmas Mecha-Babbage was fairly straightforward, though it uses some tricky bits for managing the thermal printer. You’ll need to install the drivers to make it go, as well as the CUPS package for managing the print hosting. You can find instructions for these things here, thanks to the wonderful Adafruit crew. Also, for reasons I couldn’t fathom, this will all only work on a Pi 2 and not a Pi 3, as there are some compatibility issues with the thermal printer otherwise. (I also tested the script on a Pi Zero W…no dice.)

Building a Christmassy throne

The hardest (well, fiddliest) parts of making the whole build were constructing the throne and wiring the bear. Using MakerCase, Inkscape, a bit of ingenuity, and a laser cutter, I was able to rig up a Christmassy plywood throne which has a hole through the seat so I could run the wires down from Babbage and to the Pi inside. I finished the throne by rubbing a couple of fingers of beeswax into it; as well as making the wood shine just a little bit and protecting it against getting wet, this had the added bonus of making it smell awesome.

Secret Santa Babbage inside

Next year’s iteration will be mulled wine–scented.

I next soldered two LEDs to some lengths of wire, and then ran the wires through holes at the top of the throne and down the back along a small channel I had carved with a narrow chisel to connect them to the Pi’s GPIO pins. The green LED will remain on as long as Babbage is running his program, and the red one will light up while he is processing your request. Once the red LED goes off again, the next person can have a go. I also laser-cut a final piece of wood to overlay the back of Babbage’s Xmas throne and cover the wiring a bit.

Creating a Xmas cyborg bear

Taking two 6 mm tactile buttons, I clipped the spiky metal legs off one side of each (the buttons were going into a stuffed Christmas toy, after all) and soldered a length of wire to each of the remaining legs. Next, I made a small incision into Babbage with my trusty Swiss army knife (in a place that actually made me cringe a little) and fed the buttons up into his paws. At some point in this process I was standing in the office wrestling with the bear and muttering to myself, which elicited some very strange looks from my colleagues.

Secret Santa Babbage throne

Poor Babbage…

One thing to note here: make sure the wires remain attached at the solder points while you push them up into Babbage’s paws. The first time I tried it, I snapped one of my connections and had to start again. It helped to remove some of the stuffing to form a tunnel, replacing it afterwards, and to support the joints with a fingertip as I poked the wire in. Finally, a couple of squirts of hot glue to keep Babbage’s furry cheeks firmly on the seat, and done!

Secret Santa Babbage

Next year: Game of Thrones–inspired candy cane throne

The Secret Santa Babbage masterpiece

The whole build process was the perfect holiday mix of cheerful and macabre, and while getting the thermal printer to work was a little time-consuming, the finished product definitely raised some smiles around the office and added a bit of interesting digital flavour to a staid office tradition. And it also helped people who are new to the office or from other branches of the Foundation to know for whom they will be buying a gift.

Secret Santa Babbage

Ready to dispense Christmas cheer!

There are a few ways in which I’ll polish this project before next year, such as having the script write the names to external text files to create a record that will persist in case of a reboot, and maybe having Secret Santa Babbage play you a random Christmas carol when you squeeze his paw instead of just laughing merrily every time. (I also thought about adding electric shocks for those people who are on the naughty list, but HR said no. Bah, humbug!)

Make your own

The code and laser cut plans for the whole build are available here. If you plan to make your own, let us know which stuffed toy you will be turning into a Secret Santa cyborg! And if you’ve been working on any other Christmas-themed Raspberry Pi projects, we’d like to see those too, so tag us on social media to share the festive maker cheer.

The post The Pi Towers Secret Santa Babbage appeared first on Raspberry Pi.

Coalition Against Piracy Wants Singapore to Block Streaming Piracy Software

Post Syndicated from Andy original https://torrentfreak.com/coalition-against-piracy-wants-singapore-to-block-streaming-piracy-software-171204/

Earlier this year, major industry players including Disney, HBO, Netflix, Amazon and NBCUniversal formed the Alliance for Creativity and Entertainment (ACE), a huge coalition set to tackle piracy on a global scale.

Shortly after, the Coalition Against Piracy (CAP) was announced. With a focus on Asia and backed by CASBAA, CAP counts Disney, Fox, HBO Asia, NBCUniversal, the Premier League, Turner Asia-Pacific, A&E Networks, BBC Worldwide, the National Basketball Association, Viacom International, and others among its members.

In several recent reports, CAP has homed in on the piracy situation in Singapore. Describing the phenomenon as “rampant”, the group says that around 40% of locals engage in the practice, many of them through unlicensed streaming. Now CAP, in line with its anti-streaming stance, wants the government to do more – much more.

Since a large proportion of illicit streaming takes place through set-top devices, CAP’s 21 members want the authorities to block the software inside them that enables piracy, the Straits Times reports.

“Within the Asia-Pacific region, Singapore is the worst in terms of availability of illicit streaming devices,” said CAP General Manager Neil Gane.

“They have access to hundreds of illicit broadcasts of channels and video-on-demand content.”

There are no precise details on CAP’s demands but it is far from clear how any government could effectively block software.

Blocking access to the software package itself would prove all but impossible, so that would leave blocking the infrastructure the software relies on. While that would be relatively straightforward technically, the job would be large and fast-moving, particularly when dozens of apps and add-ons would need to be targeted.

However, CAP is also calling on the authorities to block pirate streams from entering Singapore. The country already has legislation in place that can be used for site-blocking, so that is not out of the question. It’s notable that the English Premier League is part of the CAP coalition and following legal action taken in the UK earlier this year, now has plenty of experience in blocking streams, particularly of live broadcasts.

While that is a game of cat-and-mouse, TorrentFreak sources that have been monitoring the Premier League’s actions over the past several months report that the soccer outfit has become more effective over time. Its blocks can still be evaded but it can be hard work for those involved. That kind of expertise could prove invaluable to CAP.

“The Premier League is currently engaged in its most comprehensive global anti-piracy programme,” a spokesperson told ST. “This includes supporting our broadcast partners in South-east Asia with their efforts to prevent the sale of illicit streaming devices.”

In common with other countries around the world, the legality of using ‘pirate’ streaming boxes is somewhat unclear in Singapore. A Bloomberg report cites a local salesman who reports sales of 10 to 20 boxes on a typical weekend, rising to 300 a day during electronic fairs. He believes the devices are legal, since they don’t download full copies of programs.

While that point is yet to be argued in court (previously an Intellectual Property Office of Singapore spokesperson said that copyright owners could potentially go after viewers), it seems unlikely that those selling the devices will be allowed to continue completely unhindered. The big question is how current legislation can be successfully applied.


Glenn’s Take on re:Invent 2017 Part 1

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-2017-part-1/

GREETINGS FROM LAS VEGAS

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We have a lot of exciting announcements this week. I’m going to post to the AWS Architecture blog each day with my take on what’s interesting about some of the announcements from a cloud architectural perspective.

Why not start at the beginning? At the Midnight Madness launch on Sunday night, we announced Amazon Sumerian, our platform for VR, AR, and mixed reality. The hype around VR/AR has existed for many years, though for me, it is a perfect example of how a working end-to-end solution often requires innovation from multiple sources. For AR/VR to be successful, we need many components to come together in a coherent manner to provide a great experience.

First, we need lightweight, high-definition goggles with motion tracking that are comfortable to wear. Second, we need to track movement of our body and hands in a 3-D space so that we can interact with virtual objects in the virtual world. Third, we need to build the virtual world itself and populate it with assets and define how the interactions will work and connect with various other systems.

There has been rapid development of the physical devices for AR/VR, ranging from iOS devices to Oculus Rift and HTC Vive, which provide excellent capabilities for the first and second components defined above. With the launch of Amazon Sumerian we are solving for the third area, which will help developers easily build their own virtual worlds and start experimenting and innovating with how to apply AR/VR in new ways.

Already, within 48 hours of Amazon Sumerian being announced, I have had multiple discussions with customers and partners around some cool use cases where VR can help in training simulations, remote-operator controls, or with new ideas around interacting with complex visual data sets, which starts bringing concepts straight out of sci-fi movies into the real (virtual) world. I am really excited to see how Sumerian will unlock the creative potential of developers and where this will lead.

Amazon MQ
I am a huge fan of distributed architectures where asynchronous messaging is the backbone of connecting the discrete components together. Amazon Simple Queue Service (Amazon SQS) is one of my favorite services due to its simplicity, scalability, performance, and the incredible flexibility of how you can use Amazon SQS in so many different ways to solve complex queuing scenarios.

While Amazon SQS is easy to use when building cloud-native applications on AWS, many of our customers running existing applications on-premises require support for different messaging protocols, such as Java Message Service (JMS), .NET Messaging Service (NMS), Advanced Message Queuing Protocol (AMQP), MQ Telemetry Transport (MQTT), Simple (or Streaming) Text Oriented Messaging Protocol (STOMP), and WebSockets. One of the most popular on-premises message brokers is Apache ActiveMQ. With the release of Amazon MQ, you can now run Apache ActiveMQ on AWS as a managed service, similar to what we did with Amazon ElastiCache back in 2012. For me, there are two compelling, major benefits that Amazon MQ provides:

  • Integrate existing applications with cloud-native applications without having to change a line of application code if you use one of the supported messaging protocols. This removes one of the biggest blockers to integration between the old and the new (a minimal sketch follows this list).
  • Remove the complexity of configuring Multi-AZ resilient message broker services as Amazon MQ provides out-of-the-box redundancy by always storing messages redundantly across Availability Zones. Protection is provided against failure of a broker through to complete failure of an Availability Zone.
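To make the first point concrete, here is a minimal, unofficial sketch of an application talking to an ActiveMQ broker over STOMP using the stomp.py library. Pointed at an Amazon MQ endpoint (the hostname and credentials below are placeholders, and TLS setup is omitted for brevity), the application code itself stays the same:

import stomp

# Placeholder endpoint and credentials; an Amazon MQ broker exposes the
# same STOMP listener a self-managed ActiveMQ broker would.
conn = stomp.Connection(host_and_ports=[("broker-id.mq.us-east-1.amazonaws.com", 61614)])
conn.connect("mq-user", "mq-password", wait=True)

# Existing producers and consumers keep speaking the protocol they always have
conn.send(destination="/queue/orders", body="hello from on-premises")
conn.disconnect()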

I believe that Amazon MQ is a major component in the toolkit required to help you migrate your existing applications to AWS. Having set up cross-data center Apache ActiveMQ clusters in the past myself, and having tested them to ensure they work as expected during critical failure scenarios, I know the value technical staff working on migrations to AWS will find in being able to deploy a fully redundant, managed Apache ActiveMQ cluster within minutes.

Who would have thought I would have been so excited to revisit Apache ActiveMQ in 2017 after using SQS for many, many years? Choice is a wonderful thing.

Amazon GuardDuty
Maintaining application and information security in the modern world is increasingly complex and is constantly evolving and changing as new threats emerge. This is due to the scale, variety, and distribution of services required in a competitive online world.

At Amazon, security is our number one priority. Thus, we are always looking at how we can increase security detection and protection while simplifying the implementation of advanced security practices for our customers. As a result, we released Amazon GuardDuty, which provides intelligent threat detection by using a combination of multiple information sources, transactional telemetry, and the application of machine learning models developed by AWS. One of the biggest benefits of Amazon GuardDuty that I appreciate is that enabling this service requires zero software, agents, sensors, or network choke points, all of which can impact the performance or reliability of the service you are trying to protect. Amazon GuardDuty works by monitoring your VPC Flow Logs, AWS CloudTrail events, and DNS logs, as well as combing through other sources of threat intelligence that AWS aggregates from our own internal and external sources.

The use of machine learning in Amazon GuardDuty allows it to identify changes in behavior that could be suspicious and require additional investigation. Amazon GuardDuty works across all of your AWS accounts, allowing for aggregated analysis and centralized management of detected threats. This is important for our larger customers, who can be running many hundreds of AWS accounts across their organization, as a single, common view of threats across their organizational use of AWS is critical to ensuring they protect themselves.

Detection, though, is only the beginning of what Amazon GuardDuty enables. When a threat is identified, you can configure remediation scripts or trigger Lambda functions with custom responses, enabling you to build automated responses to a variety of common threats. Speed of response is essential when a security incident may be taking place. For example, suppose Amazon GuardDuty detects that an Amazon Elastic Compute Cloud (Amazon EC2) instance might be compromised due to traffic from a known set of malicious IP addresses. Upon detection of the compromised EC2 instance, we could apply an access control entry restricting outbound traffic for that instance, which stops the loss of data until a security engineer can assess what has occurred.
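As a sketch of that response pattern – assuming a CloudWatch Events rule that forwards GuardDuty findings to a Lambda function, and a pre-created quarantine security group; all names here are hypothetical – the remediation might look like this in Python:

import boto3

ec2 = boto3.client("ec2")

QUARANTINE_SG = "sg-0123456789abcdef0"  # hypothetical: a security group with no rules

def handler(event, context):
    # A GuardDuty finding delivered via CloudWatch Events carries the
    # affected instance ID in its resource details
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]

    # Swap the instance onto the quarantine security group, stopping
    # loss of data until a security engineer can assess what occurred
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    print("Quarantined instance " + instance_id + " pending investigation")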

Whether you are a customer running a single service in a single account, a global customer with hundreds of accounts and thousands of applications, or a startup with hundreds of microservices and an hourly release cycle in a DevOps world, I recommend enabling Amazon GuardDuty. We have a 30-day free trial available for all new customers of this service. As GuardDuty only monitors events, no change to your architecture within AWS is required.

Stay tuned for tomorrow’s post on AWS Media Services and Amazon Neptune.

 

Glenn during the Tour du Mont Blanc

Netflix Is Not Going to Kill Piracy, Research Suggests

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-not-going-kill-piracy-research-suggests-171129/

There is little doubt that, in many countries, Netflix has become the standard for watching movies on the Internet.

Generally speaking, on-demand streaming services are convenient alternatives to piracy. However, millions of people stick to their old pirate habits, Netflix subscription or not.

Intrigued by this interplay of legal and unauthorized viewing, researchers from Carnegie Mellon University and Universidade Católica Portuguesa carried out an extensive study. They partnered with a major telco, which is not named, to analyze whether BitTorrent downloading habits can be changed by offering legal alternatives.

The researchers used a piracy-tracking firm to get a sample of thousands of BitTorrent pirates at the associated ISP. Half of them were offered a free 45-day subscription to a premium TV and movies package, allowing them to watch popular content on demand.

To measure the effects of video-on-demand access on piracy, the researchers then monitored the legal viewing activity and BitTorrent transfers of the people who received the free offer, comparing it to a control group. The results show that piracy is harder to beat than some would expect.

Subscribers who received the free subscription watched more TV, but overall their torrenting habits didn’t change significantly.

“We find that, on average, households that received the gift increased overall TV consumption by 4.6% and reduced Internet downloads and uploads by 4.2% and 4.5%, respectively. However, and also on average, treated households did not change their likelihood of using BitTorrent during the experiment,” the researchers write.

One of the main problems was that these ‘pirates’ couldn’t get all their favorite shows and movies on the legal service, which is a common problem. For the small portion of subscribers who had access to their preferred content, the researchers did find an effect on torrent traffic.

“Households with preferences aligned with the gifted content reduced their probability of using BitTorrent during the experiment by 18% and decreased their amount of upload traffic by 45%,” the paper reads.

The video-on-demand service in the study had an average “fit” of just 12% with people’s viewing preferences, which means that they were missing a lot of content. But even Netflix, which has a library of thousands of titles, only has a fit of roughly 50%.

The researchers show that the lack of availability is partly caused by licensing windows, which makes it hard for legal video streaming services to compete with piracy.

“We show that licensing windows impose significant restrictions on the content that can be included in SVoD catalogs, which hampers the ability of content distributors to offer catalogs that cater to the preferences of pirates,” they write.

However, even if more content became available, piracy wouldn’t magically disappear. In the experiment, subscribers were offered free access to a video on demand service. In the real world, they would have to pay, which presents another barrier.

In this study, the pirate households were willing to pay at most $3.25 per month to access a service with a library as large as Netflix’s in the United States. That’s not enough.

This leads the researchers to the grim conclusion that video on demand services such as Netflix can’t significantly lower piracy rates. They could make a dent if they increase their content libraries while lowering the price at the same time, but that’s not going to happen.

“Together, our results show that, as a stand-alone strategy, using legal SVoD to curtail piracy will require, at the minimum, offering content much earlier and at much lower prices than those currently offered in the marketplace, changes that are likely to reduce industry revenue and that may damage overall incentives to produce new content while, at the same time, curbing only a small share of piracy,” the researchers conclude.

While Hollywood maintains that people can get pretty much anything they want legally, the current research shows that it’s not as simple as that. Most people are not going to pay for 22 separate subscriptions. Instead of more streaming services, it would be better to make more content available at the ones that are already out there.

The research was partially funded by Carnegie Mellon University’s IDEA (Initiative for Digital Entertainment Analytics), which receives an unrestricted gift from the MPAA, so Hollywood will likely be clued in on the results.


Sky’s Pirate Site-Blocking Move is Something For North Korea, ISPs Say

Post Syndicated from Andy original https://torrentfreak.com/skys-pirate-site-blocking-move-is-something-for-north-korea-isps-say-171129/

Entertainment companies have been taking legal action to have pirate sites blocked for more than a decade so it was only a matter of time before New Zealand had a taste of the action.

It’s now been revealed that Sky Network Television, the country’s biggest pay-TV service, filed a complaint with the High Court in September, demanding that four local Internet service providers block subscriber access to several ‘pirate’ sites.

At this point, the sites haven’t been named, but it seems almost inevitable that the likes of The Pirate Bay will be present. The ISPs are known, however. Spark, Vodafone, Vocus and Two Degrees control around 90% of the Kiwi market so any injunction handed down will affect almost the entire country.

In its application, Sky states that pirate sites make available unauthorized copies of its entertainment works, something which not only infringes its copyrights but also undermines its business model. But while this is standard fare in such complaints, the Internet industry backlash today is something out of the ordinary.

ISPs in other jurisdictions have fought back against blocking efforts but few have deployed the kind of language being heard in New Zealand this morning.

Vocus Group – which runs the Orcon, Slingshot and Flip brands – is labeling Sky’s efforts as “gross censorship and a breach of net neutrality”, adding that they’re in direct opposition to the idea of a free and open Internet.

“SKY’s call that sites be blacklisted on their say so is dinosaur behavior, something you would expect in North Korea, not in New Zealand. It isn’t our job to police the Internet and it sure as hell isn’t SKY’s either, all sites should be equal and open,” says Vocus Consumer General Manager Taryn Hamilton.

But in response, Sky said Vocus “has got it wrong”, highlighting that site-blocking is now common practice in places such as Australia and the UK.

“Pirate sites like Pirate Bay make no contribution to the development of content, but rather just steal it. Over 40 countries around the world have put in place laws to block such sites, and we’re just looking to do the same,” the company said.

The broadcaster says it will only go to court to have dedicated pirate sites blocked, ones that “pay nothing to the creators” while stealing content for their own gain.

“We’re doing this because illegal streaming and content piracy is a major threat to the entertainment, creative and sporting industries in New Zealand and abroad. With piracy, not only is the sport and entertainment content that we love at risk, but so are the livelihoods of the thousands of people employed by these industries,” the company said.

“Illegally sharing or viewing content impacts a vast number of people and jobs including athletes, actors, artists, production crew, customer service representatives, event planners, caterers and many, many more.”

ISP Spark, which is also being targeted by Sky, was less visibly outraged than some of its competitors. However, the company still feels that controlling what people can see on the Internet is a slippery slope.

“We have some sympathy for this given we invest tens of millions of dollars into content ourselves through Lightbox. However, we don’t think it should be the role of ISPs to become the ‘police of the internet’ on behalf of other parties,” a Spark spokesperson said.

Perhaps unsurprisingly, Sky’s blocking efforts haven’t been well received by InternetNZ, the non-profit organization which protects and promotes Internet use in New Zealand.

Describing the company’s application for an injunction as an “extreme step”, InternetNZ Chief Executive Jordan Carter said that site-blocking works against the “very nature” of the Internet and is a measure that’s unlikely to achieve its goals.

“Site blocking is very easily evaded by people with the right skills or tools. Those who are deliberate pirates will be able to get around site blocking without difficulty,” Carter said.

“If blocking is ordered, it risks driving content piracy further underground, with the help of easily-deployed and common Internet tools. This could well end up making the issues that Sky are facing even harder to police in the future.”

What most of the ISPs and InternetNZ are also agreed on is the need to fight piracy with competitive, attractive legal offerings. Vocus says that local interest in The Pirate Bay has halved since Netflix launched in New Zealand, with traffic to the torrent site sitting at just 23% of its peak 2013 levels.

“The success of Netflix, iTunes and Spotify proves that people are willing to pay to access good-quality content. It’s pretty clear that SKY doesn’t understand the internet, and is trying a Hail Mary to turnaround its sunset business,” Vocus Consumer General Manager Taryn Hamilton said.

The big question now is whether the High Court has the ability to order these kinds of blocks. InternetNZ has its doubts, noting that it should only happen following a parliamentary mandate.


UI Testing at Scale with AWS Lambda

Post Syndicated from Stas Neyman original https://aws.amazon.com/blogs/devops/ui-testing-at-scale-with-aws-lambda/

This is a guest blog post by Wes Couch and Kurt Waechter from the Blackboard Internal Product Development team about their experience using AWS Lambda.

One year ago, one of our UI test suites took hours to run. Last month, it took 16 minutes. Today, it takes 39 seconds. Here’s how we did it.

The backstory:

Blackboard is a global leader in delivering robust and innovative education software and services to clients in higher education, government, K12, and corporate training. We have a large product development team working across the globe in at least 10 different time zones, with an internal tools team providing support for quality and workflows. We have been using Selenium Webdriver to perform automated cross-browser UI testing since 2007. Because we are now practicing continuous delivery, the automated UI testing challenge has grown due to the faster release schedule. On top of that, every commit made to each branch triggers an execution of our automated UI test suite. If you have ever implemented an automated UI testing infrastructure, you know that it can be very challenging to scale and maintain. Although there are services that are useful for testing different browser/OS combinations, they don’t meet our scale needs.

It used to take three hours to synchronously run our functional UI suite, which revealed the obvious need for parallel execution. Previously, we used Mesos to orchestrate a Selenium Grid Docker container for each test run. This way, we were able to run eight concurrent threads for test execution, which took an average of 16 minutes. Although this setup is fine for a single workflow, the cracks started to show when we reached the scale required for Blackboard’s mature product lines. Going beyond eight concurrent sessions on a single container introduced performance problems that impact the reliability of tests (for example, issues in Webdriver or the browser popping up frequently). We tried Mesos and considered Kubernetes for Selenium Grid orchestration, but the answer to scaling a Selenium Grid was to think smaller, not larger. This led to our breakthrough with AWS Lambda.

The solution:

We started using AWS Lambda for UI testing because it doesn’t require costly infrastructure or countless man hours to maintain. The steps we outline in this blog post took one work day, from inception to implementation. By simply packaging the UI test suite into a Lambda function, we can execute these tests in parallel on a massive scale. We use a custom JUnit test runner that invokes the Lambda function with a request to run each test from the suite. The runner then aggregates the results returned from each Lambda test execution.

Selenium is the industry standard for testing UI at scale. Although there are other options to achieve the same thing in Lambda, we chose this mature suite of tools. Selenium is backed by Google, Mozilla, and others, who help the industry drive their browsers with code. This makes Lambda and Selenium a compelling stack for achieving UI testing at scale.

Making Chrome Run in Lambda

Currently, Chrome for Linux will not run in Lambda due to an absent mount point. By rebuilding Chrome with a slight modification, as Marco Lüthy originally demonstrated, you can run it inside Lambda anyway! It took about two hours to build the current master branch of Chromium on a c4.4xlarge. Unfortunately, the current version of ChromeDriver, 2.33, does not support any version of Chrome above 62, so we’ll be using Marco’s modified build of version 60 for the near future.

Required System Libraries

The Lambda runtime environment comes with a subset of common shared libraries. This means we need to include some extra libraries to get Chrome and ChromeDriver to work. Anything that exists in the java resources folder during compile time is included in the base directory of the compiled jar file. When this jar file is deployed to Lambda, it is placed in the /var/task/ directory. This allows us to simply place the libraries in the java resources folder under a folder named lib/ so they are right where they need to be when the Lambda function is invoked.

To get these libraries, create an EC2 instance and choose the Amazon Linux AMI.

Next, use ssh to connect to the server. After you connect to the new instance, search for the libraries to find their locations.

sudo find / -name libgconf-2.so.4
sudo find / -name libORBit-2.so.0

Now that you have the locations of the libraries, copy these files from the EC2 instance and place them in the java resources folder under lib/.

Packaging the Tests

To deploy the test suite to Lambda, we used a simple Gradle tool called ShadowJar, which is similar to the Maven Shade Plugin. It packages the libraries and dependencies inside the jar that is built. Usually test dependencies and sources aren’t included in a jar, but for this instance we want to include them. To include the test dependencies, add this section to the build.gradle file.

shadowJar {
   from sourceSets.test.output
   configurations = [project.configurations.testRuntime]
}

Deploying the Test Suite

Now that our tests are packaged with their dependencies in a jar, we need to get them into a running Lambda function. We use simple SAM templates to upload the packaged jar to S3, and then deploy it to Lambda with our settings.

{
   "AWSTemplateFormatVersion": "2010-09-09",
   "Transform": "AWS::Serverless-2016-10-31",
   "Resources": {
       "LambdaTestHandler": {
           "Type": "AWS::Serverless::Function",
           "Properties": {
               "CodeUri": "./build/libs/your-test-jar-all.jar",
               "Runtime": "java8",
               "Handler": "com.example.LambdaTestHandler::handleRequest",
               "Role": "<YourLambdaRoleArn>",
               "Timeout": 300,
               "MemorySize": 1536
           }
       }
   }
}

We use the maximum timeout available to ensure our tests have plenty of time to run. We also use the maximum memory size because this ensures our Lambda function can support Chrome and other resources required to run a UI test.

Specifying the handler is important because this class executes the desired test. The test handler should be able to receive a test class and method. With this information it will then execute the test and respond with the results.

public LambdaTestResult handleRequest(TestRequest testRequest, Context context) {
   LoggerContainer.LOGGER = new Logger(context.getLogger());
  
   BlockJUnit4ClassRunner runner = getRunnerForSingleTest(testRequest);
  
   Result result = new JUnitCore().run(runner);

   return new LambdaTestResult(result);
}

Creating a Lambda-Compatible ChromeDriver

We provide developers with an easily accessible ChromeDriver for local test writing and debugging. When we run tests on AWS, we configure ChromeDriver to run them in Lambda instead.

To configure ChromeDriver, we first need to tell ChromeDriver where to find the Chrome binary. Because we know that ChromeDriver is going to be unzipped into the root task directory, we should point the ChromeDriver configuration at that location.

The settings for getting ChromeDriver running are mostly related to Chrome itself, which must have its working directories pointed at the /tmp/ folder.

Start with the default DesiredCapabilities for ChromeDriver, and then add the following settings to enable your ChromeDriver to start in Lambda.

public ChromeDriver createLambdaChromeDriver() {
   ChromeOptions options = new ChromeOptions();

   // Set the location of the chrome binary from the resources folder
   options.setBinary("/var/task/chrome");

   // Include these settings to allow Chrome to run in Lambda
   options.addArguments("--disable-gpu");
   options.addArguments("--headless");
   options.addArguments("--window-size=1366,768");
   options.addArguments("--single-process");
   options.addArguments("--no-sandbox");
   options.addArguments("--user-data-dir=/tmp/user-data");
   options.addArguments("--data-path=/tmp/data-path");
   options.addArguments("--homedir=/tmp");
   options.addArguments("--disk-cache-dir=/tmp/cache-dir");
  
   DesiredCapabilities desiredCapabilities = DesiredCapabilities.chrome();
   desiredCapabilities.setCapability(ChromeOptions.CAPABILITY, options);
  
   return new ChromeDriver(desiredCapabilities);
}

Executing Tests in Parallel

You can approach parallel test execution in Lambda in many different ways. Your approach depends on the structure and design of your test suite. For our solution, we implemented a custom test runner that uses reflection and JUnit libraries to create a list of test cases we want run. When we have the list, we create a TestRequest object to pass into the Lambda function that we have deployed. In this TestRequest, we place the class name, test method, and the test run identifier. When the Lambda function receives this TestRequest, our LambdaTestHandler generates and runs the JUnit test. After the test is complete, the test result is sent to the test runner. The test runner compiles a result after all of the tests are complete. By executing the same Lambda function multiple times with different test requests, we can effectively run the entire test suite in parallel.
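The runner in the repository linked below is Java, but the fan-out itself is compact in any language. Here is a rough Python/boto3 sketch of the same pattern – the function name matches the SAM template above, while the test list and request fields are invented for illustration:

import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def run_test(test_request):
    # One synchronous Lambda invocation per test case
    response = lambda_client.invoke(
        FunctionName="LambdaTestHandler",
        Payload=json.dumps(test_request),
    )
    return json.loads(response["Payload"].read())

# Hypothetical test list; the real runner builds it via reflection
tests = [
    {"testClass": "com.example.LoginTest", "testMethod": "validLogin", "runId": "run-42"},
    {"testClass": "com.example.LoginTest", "testMethod": "invalidLogin", "runId": "run-42"},
]

# Fan out: each test runs in its own Lambda execution environment
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    results = list(pool.map(run_test, tests))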

To get screenshots and other test data, we pipe those files during test execution to an S3 bucket under the test run identifier prefix. When the tests are complete, we link the files to each test execution in the report generated from the test run. This lets us easily investigate test executions.
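Shipping an artifact to S3 under the run identifier prefix is a one-liner per file; a hedged sketch (bucket name and key layout invented):

import boto3

s3 = boto3.client("s3")

# Hypothetical layout: <run id>/<test name>/<file name>
s3.upload_file(
    "/tmp/screenshots/failure.png",
    "ui-test-artifacts",
    "run-42/validLogin/failure.png",
)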

Pro Tip: Dynamically Loading Binaries

AWS Lambda has a limit of 250 MB of uncompressed space for packaged Lambda functions. Because we have libraries and other dependencies in our test suite, we hit this limit when we tried to upload a function that contained Chrome and ChromeDriver (~140 MB). This test suite was not originally intended to be used with Lambda. Otherwise, we would have scrutinized some of the included libraries. To get around this limit, we used the Lambda function’s temporary directory, which allows up to 500 MB of space at runtime. Downloading these binaries at runtime moves some of that space requirement into the temporary directory, which leaves more room for libraries and dependencies. You can do this by grabbing Chrome and ChromeDriver from an S3 bucket and marking them as executable using built-in Java libraries. If you take this route, be sure to point to the new location for these executables when you create a ChromeDriver.

private static void downloadS3ObjectToExecutableFile(String key) throws IOException {
   File file = new File("/tmp/" + key);

   GetObjectRequest request = new GetObjectRequest("s3-bucket-name", key);

   FileUtils.copyInputStreamToFile(s3client.getObject(request).getObjectContent(), file);
   file.setExecutable(true);
}

Lambda-Selenium Project Source

We have compiled an open source example that you can grab from the Blackboard GitHub repository. Grab the code and try it out!

https://blackboard.github.io/lambda-selenium/

Conclusion

One year ago, one of our UI test suites took hours to run. Last month, it took 16 minutes. Today, it takes 39 seconds. Thanks to AWS Lambda, we can reduce our build times and perform automated UI testing at scale!

Swiss Copyright Law Proposals: Good News for Pirates, Bad For Pirate Sites

Post Syndicated from Andy original https://torrentfreak.com/swiss-copyright-law-proposals-good-news-for-pirates-bad-for-pirate-sites-171124/

While Switzerland sits geographically in the heart of Europe, the country is not part of the European Union, meaning that its copyright laws are often out of touch with those of the countries encircling it.

For years this has meant heavy criticism from the United States, whose trade representative has put Switzerland on the Watch List, citing weaknesses in the country’s ability to curb online copyright infringement.

“The decision to place Switzerland on the Watch List this year is premised on U.S. concerns regarding specific difficulties in Switzerland’s system of online copyright protection and enforcement,” the USTR wrote in 2016.

Things didn’t improve in 2017. Referencing the so-called Logistep Decision, which found that collecting infringers’ IP addresses is unlawful, the USTR said that Switzerland had effectively deprived copyright holders of the means to enforce their rights online.

All of this criticism hasn’t fallen on deaf ears. For the past several years, Switzerland has been deeply involved in consultations that aim to shape future copyright law. Negotiations have been prolonged, however, with the Federal Council aiming to improve the situation for creators without impairing the position of consumers.

A new draft compromise tabled Wednesday is somewhat of a mixed bag, one that is unlikely to please the United States overall but could prove reasonably acceptable to the public.

First of all, people will still be able to ‘pirate’ as much copyrighted material as they like, as long as that content is consumed privately and does not include videogames or software, which are excluded. Any supposed losses accrued by the entertainment industries will be compensated via a compulsory tax of 13 Swiss francs ($13), levied on media playback devices including phones and tablets.

This freedom only applies to downloading and streaming, meaning that any uploading (distribution) is explicitly ruled out. So, while grabbing some streaming content via a ‘pirate’ Kodi addon is just fine, using BitTorrent to achieve the same is not.

Indeed, rightsholders will be able to capture the IP addresses of suspected infringers in order to file a criminal complaint with the authorities. That being said, there will be no system of warning notices targeting file-sharers.

But while the authorization of unlicensed downloads will only frustrate an already irritated United States, the other half of the deal is likely to be welcomed.

Under the recommendations, Internet services will not only be required to remove infringing content from their platforms, they’ll also be compelled to prevent that same content from reappearing. Failure to comply will result in prosecution. It’s a standard that copyright holders everywhere are keen for governments to adopt.

Additionally, the spotlight will fall on datacenters and webhosts that have a reputation for being popular with pirate sites. It’s envisioned that such providers will be prevented from offering services to known pirate sites, with the government clearly stating that services with piracy at the heart of their business models will be ripe for action.

But where there’s a plus for copyright holders, the Swiss have another minus. Previously it was proposed that in serious cases authorities should be able to order the ISP blocking of “obviously illegal content or sources.” That proposal has now been dropped, meaning no site-blocking will be allowed.

Other changes in the draft envision an extension of the copyright term from 50 to 70 years and improved protection for photographic works. The proposals also feature increased freedoms for researchers and libraries, who will be able to use copyrighted works without obtaining permission from rightsholders.

Overall the proposals are a pretty mixed bag but as Minister of Justice Simonetta Sommaruga said Wednesday, if no one is prepared to compromise, no one will get anything.


Game of Thrones Leaks “Carried Out By Former Iranian Military Hacker”

Post Syndicated from Andy original https://torrentfreak.com/game-of-thrones-leaks-carried-out-by-former-iranian-military-hacker-171122/

Late July it was reported that hackers had stolen proprietary information from media giant HBO.

The haul was said to include confidential details of the then-unreleased fourth episode of the latest Game of Thrones season, plus episodes of Ballers, Barry, Insecure, and Room 104.

“Hi to all mankind,” an email sent to reporters read. “The greatest leak of cyber space era is happening. What’s its name? Oh I forget to tell. Its HBO and Game of Thrones……!!!!!!”

In follow-up correspondence, the hackers claimed to have penetrated HBO’s internal network, gaining access to emails, technical platforms, and other confidential information.

Image released by the hackers

Soon after, HBO chairman and CEO Richard Plepler confirmed a breach at his company, telling employees that there had been a “cyber incident” in which information and programming had been taken.

“Any intrusion of this nature is obviously disruptive, unsettling, and disturbing for all of us. I can assure you that senior leadership and our extraordinary technology team, along with outside experts, are working round the clock to protect our collective interests,” he said.

During mid-August, problems persisted, with unreleased shows hitting the Internet. HBO appeared rattled by the ongoing incident, refusing to comment to the media on every new development. Now, however, it appears the tide is turning on HBO’s foe.

In a statement last evening, Joon H. Kim, Acting United States Attorney for the Southern District of New York, and William F. Sweeney Jr., Assistant Director-in-Charge of the New York Field Division of the FBI, announced the unsealing of an indictment charging a 29-year-old man with offenses carried out against HBO.

“Behzad Mesri, an Iranian national who had previously hacked computer systems for the Iranian military, allegedly infiltrated HBO’s systems, stole proprietary data, including scripts and plot summaries for unaired episodes of Game of Thrones, and then sought to extort HBO of $6 million in Bitcoins,” Kim said.

“Mesri now stands charged with federal crimes, and although not arrested today, he will forever have to look over his shoulder until he is made to face justice. American ingenuity and creativity is to be cultivated and celebrated — not hacked, stolen, and held for ransom. For hackers who test our resolve in protecting our intellectual property — even those hiding behind keyboards in countries far away — eventually, winter will come.”

According to the Department of Justice, Mesri honed his computer skills working for the Iranian military, conducting cyber attacks against enemy military systems, nuclear software, and Israeli infrastructure. He was also a member of the Turk Black Hat hacking team which defaced hundreds of websites with the online pseudonym “Skote Vahshat”.

The indictment states that Mesri began his campaign against HBO during May 2017, when he conducted “online reconnaissance” of HBO’s networks and employees. Between May and July, he then compromised a number of HBO employee user accounts and used them to access the company’s data and TV shows, copying them to his own machines.

After allegedly obtaining around 1.5 terabytes of HBO’s data, Mesri then began to extort HBO, warning that unless a ransom of $5.5 million was paid in Bitcoin, the leaking would begin. When the amount wasn’t paid, Mesri told HBO three days later that the demand had risen to $6 million and that, as an additional punishment, data could be wiped from HBO’s servers.

Subsequently, on or around July 30 and continuing through August 2017, Mesri allegedly carried through with his threats, leaking information and TV shows online and promoting them via emails to members of the press.

As a result of the above, Mesri is charged with one count of wire fraud, which carries a maximum sentence of 20 years in prison, one count of computer hacking (five years), three counts of threatening to impair the confidentiality of information (five years each), and one count of interstate transmission of an extortionate communication (two years). No copyright infringement offenses are mentioned in the indictment.

The big question now is whether the US will ever get its hands on Mesri. The answer to that, at least through any official channels, seems to be a resounding no. There is no extradition treaty between the US and Iran, meaning that if Mesri stays put, he’s likely to remain a free man.

Wanted

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Now You Can Use AWS Shield Advanced to Help Protect Your Amazon EC2 Instances and Network Load Balancers

Post Syndicated from Ritwik Manan original https://aws.amazon.com/blogs/security/now-you-can-use-aws-shield-advanced-to-protect-your-amazon-ec2-instances-and-network-load-balancers/

AWS Shield image

Starting today, AWS Shield Advanced can help protect your Amazon EC2 instances and Network Load Balancers against infrastructure-layer Distributed Denial of Service (DDoS) attacks. Enable AWS Shield Advanced on an AWS Elastic IP address and attach the address to an internet-facing EC2 instance or Network Load Balancer. AWS Shield Advanced automatically detects the type of AWS resource behind the Elastic IP address and mitigates DDoS attacks.

AWS Shield Advanced also ensures that all your Amazon VPC network access control lists (ACLs) are automatically enforced at the edge of the AWS network, giving you access to additional bandwidth and scrubbing capacity to mitigate large volumetric DDoS attacks. You also can customize additional mitigations on AWS Shield by engaging the AWS DDoS Response Team, which can preconfigure the mitigations or respond to incidents as they happen. For every incident detected by AWS Shield Advanced, you also get near-real-time visibility via Amazon CloudWatch metrics and details about the incident, such as the geographic origin and source IP address of the attack.

AWS Shield Advanced for Elastic IP addresses extends the coverage of DDoS cost protection, which safeguards against scaling charges that result from a DDoS attack. DDoS cost protection now allows you to request service credits for Elastic Load Balancing, Amazon CloudFront, Amazon Route 53, and EC2 instance hours in the event that usage of these services increases as the result of a DDoS attack.

Get started protecting EC2 instances and Network Load Balancers

To get started:

  1. Sign in to the AWS Management Console and navigate to the AWS WAF and AWS Shield console.
  2. Activate AWS Shield Advanced by choosing Activate AWS Shield Advanced and accepting the terms.
  3. Navigate to Protected Resources through the navigation pane.
  4. Choose the Elastic IP addresses that you want to protect (these can point to EC2 instances or Network Load Balancers).
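
If you prefer to script this setup, the same protection can be created through the AWS Shield API once Shield Advanced has been activated on the account. Here is a minimal boto3 sketch; the region, account ID, and Elastic IP allocation ID are placeholders you would replace with your own.

import boto3

# Placeholder identifiers; substitute your own values.
REGION = "us-east-1"
ACCOUNT_ID = "123456789012"
ALLOCATION_ID = "eipalloc-0123456789abcdef0"

# The Shield API is served from the US East (N. Virginia) endpoint.
shield = boto3.client("shield", region_name="us-east-1")

# Shield Advanced automatically detects whether the resource behind the
# Elastic IP address is an EC2 instance or a Network Load Balancer.
response = shield.create_protection(
    Name="eip-protection",
    ResourceArn="arn:aws:ec2:{}:{}:eip-allocation/{}".format(
        REGION, ACCOUNT_ID, ALLOCATION_ID
    ),
)
print(response["ProtectionId"])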

If AWS Shield Advanced detects a DDoS attack, you can get details about the attack by checking CloudWatch, or the Incidents tab on the AWS WAF and AWS Shield console. To learn more about this new feature and AWS Shield Advanced, see the AWS Shield home page.

If you have comments or questions about this post, submit them in the “Comments” section below, start a new thread in the AWS Shield forum, or contact AWS Support.

– Ritwik

New White House Announcement on the Vulnerability Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/new_white_house_1.html

The White House has released a new version of the Vulnerabilities Equities Process (VEP). This is the inter-agency process by which the US government decides whether to inform the software vendor of a vulnerability it finds, or keep it secret and use it to eavesdrop on or attack other systems. You can read the new policy or the fact sheet, but the best place to start is Cybersecurity Coordinator Rob Joyce’s blog post.

In considering a way forward, there are some key tenets on which we can build a better process.

Improved transparency is critical. The American people should have confidence in the integrity of the process that underpins decision making about discovered vulnerabilities. Since I took my post as Cybersecurity Coordinator, improving the VEP and ensuring its transparency have been key priorities, and we have spent the last few months reviewing our existing policy in order to improve the process and make key details about the VEP available to the public. Through these efforts, we have validated much of the existing process and ensured a rigorous standard that considers many potential equities.

The interests of all stakeholders must be fairly represented. At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound. These core considerations, which have been incorporated into the VEP Charter, help to standardize the process by which decision makers weigh the benefit to national security and the national interest when deciding whether to disclose or restrict knowledge of a vulnerability.

Accountability of the process and those who operate it is important to establish confidence in those served by it. Our public release of the unclassified portions of the Charter will shed light on aspects of the VEP that were previously shielded from public review, including who participates in the VEP’s governing body, known as the Equities Review Board. We make it clear that departments and agencies with protective missions participate in VEP discussions, as well as other departments and agencies that have broader equities, like the Department of State and the Department of Commerce. We also clarify what categories of vulnerabilities are submitted to the process and ensure that any decision not to disclose a vulnerability will be reevaluated regularly. There are still important reasons to keep many of the specific vulnerabilities evaluated in the process classified, but we will release an annual report that provides metrics about the process to further inform the public about the VEP and its outcomes.

Our system of government depends on informed and vigorous dialogue to discover and make available the best ideas that our diverse society can generate. This publication of the VEP Charter will likely spark discussion and debate. This discourse is important. I also predict that articles will make breathless claims of “massive stockpiles” of exploits while describing the issue. That simply isn’t true. The annual reports and transparency of this effort will reinforce that fact.

Mozilla is pleased with the new charter. I am less so; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes that pretty much anything can fall through:

The United States Government’s decision to disclose or restrict vulnerability information could be subject to restrictions by partner agreements and sensitive operations. Vulnerabilities that fall within these categories will be cataloged by the originating Department/Agency internally and reported directly to the Chair of the ERB. The details of these categories are outlined in Annex C, which is classified. Quantities of excepted vulnerabilities from each department and agency will be provided in ERB meetings to all members.

This is me from last June:

There’s a lot we don’t know about the VEP. The Washington Post says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue — or the Cisco vulnerabilities that the Shadow Brokers leaked last August — to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

I stand by that, and am not sure the new policy changes anything.

More commentary.

Here’s more about the Windows vulnerabilities hoarded by the NSA and released by the Shadow Brokers.

EDITED TO ADD (11/18): More news.

EDITED TO ADD (11/22): Adam Shostack points out that the process does not cover design flaws or trade-offs, and that those need to be covered:

…we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure.

Event-Driven Computing with Amazon SNS and AWS Compute, Storage, Database, and Networking Services

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/event-driven-computing-with-amazon-sns-compute-storage-database-and-networking-services/

Contributed by Otavio Ferreira, Manager, Software Development, AWS Messaging

Like other developers around the world, you may be tackling increasingly complex business problems. A key success factor, in that case, is the ability to break down a large project scope into smaller, more manageable components. A service-oriented architecture guides you toward designing systems as a collection of loosely coupled, independently scaled, and highly reusable services. Microservices take this even further. To improve performance and scalability, they promote fine-grained interfaces and lightweight protocols.

However, the communication among isolated microservices can be challenging. Services are often deployed onto independent servers and don’t share any compute or storage resources. Also, you should avoid hard dependencies among microservices, to preserve maintainability and reusability.

If you apply the pub/sub design pattern, you can effortlessly decouple and independently scale out your microservices and serverless architectures. A pub/sub messaging service, such as Amazon SNS, promotes event-driven computing that statically decouples event publishers from subscribers, while dynamically allowing for the exchange of messages between them. An event-driven architecture also introduces the responsiveness needed to deal with complex problems, which are often unpredictable and asynchronous.

What is event-driven computing?

Given the context of microservices, event-driven computing is a model in which subscriber services automatically perform work in response to events triggered by publisher services. This paradigm can be applied to automate workflows while decoupling the services that collectively and independently work to fulfill these workflows. Amazon SNS is an event-driven computing hub, in the AWS Cloud, that has native integration with several AWS publisher and subscriber services.
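
To make the pattern concrete, here is a minimal boto3 sketch of the pub/sub flow, independent of any particular AWS publisher service; the topic name, queue ARN, and message are illustrative placeholders.

import boto3

sns = boto3.client("sns")

# The topic is the hub that decouples publishers from subscribers.
topic = sns.create_topic(Name="orders-events")

# Subscribe an existing SQS queue (placeholder ARN); Lambda functions
# and HTTP endpoints can be subscribed the same way.
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:orders-queue",
)

# The publisher only knows the topic, never the subscribers.
sns.publish(
    TopicArn=topic["TopicArn"],
    Message='{"event": "order_created", "orderId": "1234"}',
)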

Which AWS services publish events to SNS natively?

Several AWS services have been integrated as SNS publishers and, therefore, can natively trigger event-driven computing for a variety of use cases. In this post, I specifically cover AWS compute, storage, database, and networking services, as depicted below.

Compute services

  • Auto Scaling: Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can configure Auto Scaling lifecycle hooks to trigger events as Auto Scaling resizes your EC2 cluster. As an example, you may want to warm up the local cache store on newly launched EC2 instances, and also download log files from other EC2 instances that are about to be terminated. To make this happen, set an SNS topic as your Auto Scaling group’s notification target, then subscribe two Lambda functions to this SNS topic. The first function is responsible for handling scale-out events (to warm up cache upon provisioning), whereas the second is in charge of handling scale-in events (to download logs upon termination). A sketch of such a handler appears after this list.

  • AWS Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and web services developed in a number of programming languages. You can configure event notifications for your Elastic Beanstalk environment so that notable events can be automatically published to an SNS topic, then pushed to topic subscribers. As an example, you may use this event-driven architecture to coordinate your continuous integration pipeline (such as Jenkins CI). That way, whenever an environment is created, Elastic Beanstalk publishes this event to an SNS topic, which triggers a subscribing Lambda function, which then kicks off a CI job against your newly created Elastic Beanstalk environment.

  • Elastic Load Balancing: Automatically distributes incoming application traffic across Amazon EC2 instances, containers, or other resources identified by IP addresses. You can configure CloudWatch alarms on Elastic Load Balancing metrics, to automate the handling of events derived from Classic Load Balancers. As an example, you may leverage this event-driven design to automate latency profiling in an Amazon ECS cluster behind a Classic Load Balancer. In this example, whenever your ECS cluster breaches your load balancer latency threshold, an event is posted by CloudWatch to an SNS topic, which then triggers a subscribing Lambda function. This function runs a task on your ECS cluster to trigger a latency profiling tool, hosted on the cluster itself. This can enhance your latency troubleshooting exercise by making it timely.
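
As a rough illustration of the Auto Scaling scenario above, the following Lambda handler sketch reacts to lifecycle notifications delivered through SNS. The post describes two separate functions; for brevity this sketch handles both transitions in one handler, and the two helpers are hypothetical stand-ins for your own logic.

import json

def warm_up_cache(instance_id):
    # Hypothetical helper: warm the local cache on a new instance.
    print("warming cache on", instance_id)

def download_logs(instance_id):
    # Hypothetical helper: pull logs from an instance about to terminate.
    print("downloading logs from", instance_id)

def lambda_handler(event, context):
    # SNS delivers the Auto Scaling notification as a JSON string.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    transition = message.get("LifecycleTransition", "")

    if transition == "autoscaling:EC2_INSTANCE_LAUNCHING":
        warm_up_cache(message["EC2InstanceId"])
    elif transition == "autoscaling:EC2_INSTANCE_TERMINATING":
        download_logs(message["EC2InstanceId"])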

Storage services

  • Amazon S3: Object storage built to store and retrieve any amount of data. You can enable S3 event notifications, and automatically get them posted to SNS topics, to automate a variety of workflows. For instance, imagine that you have an S3 bucket to store incoming resumes from candidates, and a fleet of EC2 instances to encode these resumes from their original format (such as Word or text) into a portable format (such as PDF). In this example, whenever new files are uploaded to your input bucket, S3 publishes these events to an SNS topic, which in turn pushes these messages into subscribing SQS queues. Then, encoding workers running on EC2 instances poll these messages from the SQS queues; retrieve the original files from the input S3 bucket; encode them into PDF; and finally store them in an output S3 bucket. A sketch of enabling these bucket notifications appears after this list.

  • Amazon EFS: Provides simple and scalable file storage, for use with Amazon EC2 instances, in the AWS Cloud. You can configure CloudWatch alarms on EFS metrics, to automate the management of your EFS systems. For example, consider a highly parallelized genomics analysis application that runs against an EFS system. By default, this file system is instantiated on the “General Purpose” performance mode. Although this performance mode allows for lower latency, it might eventually impose a scaling bottleneck. Therefore, you may leverage an event-driven design to handle it automatically. Basically, as soon as the EFS metric “Percent I/O Limit” breaches 95%, CloudWatch could post this event to an SNS topic, which in turn would push this message into a subscribing Lambda function. This function automatically creates a new file system, this time on the “Max I/O” performance mode, then switches the genomics analysis application to this new file system. As a result, your application starts experiencing higher I/O throughput rates.

  • Amazon Glacier: A secure, durable, and low-cost cloud storage service for data archiving and long-term backup. You can set a notification configuration on an Amazon Glacier vault so that when a job completes, a message is published to an SNS topic. Retrieving an archive from Amazon Glacier is a two-step asynchronous operation, in which you first initiate a job, and then download the output after the job completes. Therefore, SNS helps you eliminate polling your Amazon Glacier vault to check whether your job has been completed, or not. As usual, you may subscribe SQS queues, Lambda functions, and HTTP endpoints to your SNS topic, to be notified when your Amazon Glacier job is done.

  • AWS Snowball: A petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data. You can leverage Snowball notifications to automate workflows related to importing data into and exporting data from AWS. More specifically, whenever your Snowball job status changes, Snowball can publish this event to an SNS topic, which in turn can broadcast the event to all its subscribers. As an example, imagine a Geographic Information System (GIS) that distributes high-resolution satellite images to users via Web browser. In this example, the GIS vendor could capture up to 80 TB of satellite images; create a Snowball job to import these files from an on-premises system to an S3 bucket; and provide an SNS topic ARN to be notified upon job status changes in Snowball. After Snowball changes the job status from “Importing” to “Completed”, Snowball publishes this event to the specified SNS topic, which delivers this message to a subscribing Lambda function, which finally creates a CloudFront web distribution for the target S3 bucket, to serve the images to end users.
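
For the S3 scenario above, enabling bucket notifications amounts to one API call. A minimal boto3 sketch, assuming a bucket and a topic that already exist (both names are placeholders) and that the topic’s access policy allows S3 to publish to it:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="resume-input-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:resume-uploads",
                # Fire on any object creation (PUT, POST, multipart, copy).
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)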

Database services

  • Amazon RDS: Makes it easy to set up, operate, and scale a relational database in the cloud. RDS leverages SNS to broadcast notifications when RDS events occur. As usual, these notifications can be delivered via any protocol supported by SNS, including SQS queues, Lambda functions, and HTTP endpoints. As an example, imagine that you own a social network website that has experienced organic growth, and needs to scale its compute and database resources on demand. In this case, you could provide an SNS topic to listen to RDS DB instance events. When the “Low Storage” event is published to the topic, SNS pushes this event to a subscribing Lambda function, which in turn leverages the RDS API to increase the storage capacity allocated to your DB instance. The provisioning itself takes place within the specified DB maintenance window. A sketch of creating such an event subscription appears after this list.

  • Amazon ElastiCache: A web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache can publish messages using Amazon SNS when significant events happen on your cache cluster. This feature can be used to refresh the list of servers on client machines connected to individual cache node endpoints of a cache cluster. For instance, an ecommerce website fetches product details from a cache cluster, with the goal of offloading a relational database and speeding up page load times. Ideally, you want to make sure that each web server always has an updated list of cache servers to which to connect. To automate this node discovery process, you can get your ElastiCache cluster to publish events to an SNS topic. Thus, when ElastiCache event “AddCacheNodeComplete” is published, your topic then pushes this event to all subscribing HTTP endpoints that serve your ecommerce website, so that these HTTP servers can update their list of cache nodes.

  • Amazon Redshift: A fully managed data warehouse that makes it simple to analyze data using standard SQL and BI (Business Intelligence) tools. Amazon Redshift uses SNS to broadcast relevant events so that data warehouse workflows can be automated. As an example, imagine a news website that sends clickstream data to a Kinesis Firehose stream, which then loads the data into Amazon Redshift, so that popular news and reading preferences might be surfaced on a BI tool. At some point though, this Amazon Redshift cluster might need to be resized, and the cluster enters a read-only mode. Hence, this Amazon Redshift event is published to an SNS topic, which delivers this event to a subscribing Lambda function, which finally deletes the corresponding Kinesis Firehose delivery stream, so that clickstream data uploads can be put on hold. At a later point, after Amazon Redshift publishes the event that the maintenance window has been closed, SNS notifies a subscribing Lambda function accordingly, so that this function can re-create the Kinesis Firehose delivery stream, and resume clickstream data uploads to Amazon Redshift.

  • AWS DMS: Helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. DMS also uses SNS to provide notifications when DMS events occur, which can automate database migration workflows. As an example, you might create data replication tasks to migrate an on-premises MS SQL database, composed of multiple tables, to MySQL. Thus, if replication tasks fail due to incompatible data encoding in the source tables, these events can be published to an SNS topic, which can push these messages into a subscribing SQS queue. Then, encoders running on EC2 can poll these messages from the SQS queue, encode the source tables into a compatible character set, and restart the corresponding replication tasks in DMS. This is an event-driven approach to a self-healing database migration process.
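
For the RDS scenario above, the event subscription can be created with a single boto3 call. A sketch with placeholder names; “low storage” is one of the documented RDS event categories for DB instances:

import boto3

rds = boto3.client("rds")

rds.create_event_subscription(
    SubscriptionName="db-low-storage-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-events",
    SourceType="db-instance",
    EventCategories=["low storage"],
    # Placeholder DB instance identifier.
    SourceIds=["social-network-db"],
    Enabled=True,
)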

Networking services

  • Amazon Route 53: A highly available and scalable cloud-based DNS (Domain Name System). Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. You can set CloudWatch alarms and get automated Amazon SNS notifications when the status of your Route 53 health check changes. As an example, imagine an online payment gateway that reports the health of its platform to merchants worldwide, via a status page. This page is hosted on EC2 and fetches platform health data from DynamoDB. In this case, you could configure a CloudWatch alarm for your Route 53 health check, so that when the alarm threshold is breached, and the payment gateway is no longer considered healthy, then CloudWatch publishes this event to an SNS topic, which pushes this message to a subscribing Lambda function, which finally updates the DynamoDB table that populates the status page. This event-driven approach avoids any kind of manual update to the status page visited by merchants. A sketch of creating such an alarm appears after this list.

  • AWS Direct Connect (AWS DX): Makes it easy to establish a dedicated network connection from your premises to AWS, which can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. You can monitor physical DX connections using CloudWatch alarms, and send SNS messages when alarms change their status. As an example, when a DX connection state shifts to 0 (zero), indicating that the connection is down, this event can be published to an SNS topic, which can fan out this message to impacted servers through HTTP endpoints, so that they might reroute their traffic through a different connection instead. This is an event-driven approach to connectivity resilience.
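
For the Route 53 scenario above, here is a hedged boto3 sketch of the CloudWatch alarm; the health check ID and topic ARN are placeholders. Route 53 publishes health check metrics in the US East (N. Virginia) region.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# HealthCheckStatus is 1 while the endpoint is healthy and 0 when it
# fails, so alarm when the minimum over the period drops below 1.
cloudwatch.put_metric_alarm(
    AlarmName="payment-gateway-unhealthy",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",
    Dimensions=[{"Name": "HealthCheckId",
                 "Value": "11111111-2222-3333-4444-555555555555"}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:health-events"],
)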

More event-driven computing on AWS

In addition to SNS, event-driven computing is also addressed by Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources. With CloudWatch Events, you can route each event type to one or more targets, including AWS Lambda functions, Amazon Kinesis streams, Amazon SQS queues, and Amazon SNS topics.

Many AWS services publish events to CloudWatch. As an example, you can get CloudWatch Events to capture events on your ETL (Extract, Transform, Load) jobs running on AWS Glue and push failed ones to an SQS queue, so that you can retry them later.
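
As a sketch of that Glue example, the rule and target below match failed Glue job runs and route them to a placeholder SQS queue; the queue’s access policy must allow CloudWatch Events to send messages to it.

import json
import boto3

events = boto3.client("events")

# Match AWS Glue job runs that end in the FAILED state.
events.put_rule(
    Name="glue-etl-failures",
    EventPattern=json.dumps({
        "source": ["aws.glue"],
        "detail-type": ["Glue Job State Change"],
        "detail": {"state": ["FAILED"]},
    }),
)

# Route matching events to a (placeholder) SQS queue for later retries.
events.put_targets(
    Rule="glue-etl-failures",
    Targets=[{
        "Id": "retry-queue",
        "Arn": "arn:aws:sqs:us-east-1:123456789012:glue-retry-queue",
    }],
)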

Conclusion

Amazon SNS is a pub/sub messaging service that can be used as an event-driven computing hub by AWS customers worldwide. By capturing events natively triggered by AWS services, such as EC2, S3, and RDS, you can automate and optimize all kinds of workflows: scaling, testing, encoding, profiling, broadcasting, discovery, failover, and much more. The business use cases presented in this post ranged from recruiting websites to scientific research, geographic systems, social networks, retail websites, and news portals.

Start now by visiting Amazon SNS in the AWS Management Console, or by trying the AWS 10-Minute Tutorial, Send Fan-out Event Notifications with Amazon SNS and Amazon SQS.

 

Weekly roundup: Into the deep end

Post Syndicated from Eevee original https://eev.ee/dev/2017/11/13/weekly-roundup-into-the-deep-end/

  • cc: UI thing I was doing is actually, usably done! Hallelujah. Now for more of it.

  • idchoppers: I got in a surprise Rust mood and picked this up again. I didn’t get very far, mostly due to trying to coerce Rust into passing around interconnected pointers when it really didn’t want to, but I did add a tiny stub of a CLI (which should make future additions a bit less messy) and started stubbing out a map type.

  • fox flux: Some more player sprites, naturally. But also, a bunch of stuff! I drew and animated a heart pickup thing, with the intention that hearts are a little better spaced out and getting one is a bit more of an accomplishment. I also figured out how to do palette translation in a shader, ultimately culminating in a cool palette-preserving underwater effect. Oh, and I guess I implemented water. And a bunch of new movement stuff. And, yeah.

    I’m so glad I’m finally doing game mechanics stuff — getting the art right is nice and all, but this is finally something new that I can play. And show off, even!

  • doom: I made a set of Eevee mugshots. I don’t know why. Took about a day, and was pretty cool? I might write a bit about it.

    I also streamed Absolutely Killed, a Doom 1 episode of “gimmick” maps that I enjoyed quite a lot! The stream is on Twitch, at least until they nuke it in a week or two, and I used my custom mugshot the whole time.

I did not work on the final October blog post that I started on almost two weeks ago now; my bad. I’ve had my head pretty solidly stuck on fox flux for like a week, now that I’m finally working on the game parts and not just fiddling with the same set of animations forever. I’ve got a lot of writing I want to do as well, so I’ll try to get to that Real Soon™.

MPAA Lobbies US Congress on Streaming Piracy Boxes

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-lobbies-us-congress-on-streaming-piracy-boxes-171112/

As part of its quest to reduce piracy, the MPAA continues to spend money on its lobbying activities, hoping to sway lawmakers in its direction.

While the lobbying talks take place behind closed doors, quarterly disclosure reports provide some insight into the items under discussion.

The MPAA’s most recent lobbying disclosure form features several new topics that weren’t on the agenda last year.

Among other issues, the Hollywood group lobbied the U.S. Senate and the U.S. House of Representatives on set-top boxes, preloaded streaming piracy devices, and streaming piracy in general.

The details of these discussions remain behind closed doors. The only thing we know for sure is what Hollywood is lobbying on, but it doesn’t take much imagination to take an educated guess on the ‘why’ part.

Just over a year ago, streaming piracy boxes were hardly mentioned in anti-piracy circles, but today they are at the top of the enforcement list. The MPAA is reporting these concerns to lawmakers, to see whether they can be of assistance in curbing this growing threat.

Some of the lobbying topics

It’s clear that pirate streaming players are a prime concern for Hollywood. MPA boss Stan McCoy recently characterized the use of these devices as “Piracy 3.0” and a coalition of industry players sued a US-based seller of streaming boxes earlier this month.

The lobbying efforts themselves are nothing new of course. Every year the MPAA spends around $4 million to influence the decisions of lawmakers, both directly and through external lobbying firms such as Covington & Burling, Capitol Tax Partners, and Sentinel Worldwide.

While piracy streaming boxes are new on the agenda this year, they are not the only topics under discussion. Other items include trade deals such as the TPP, TTIP, and NAFTA, voluntary domain name initiatives, EU digital single market proposals, and cybersecurity.

TorrentFreak reached out to the MPAA for more information on the streaming box lobbying efforts, but we have yet to hear back.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Visualize AWS Cloudtrail Logs using AWS Glue and Amazon Quicksight

Post Syndicated from Luis Caro Perez original https://aws.amazon.com/blogs/big-data/streamline-aws-cloudtrail-log-visualization-using-aws-glue-and-amazon-quicksight/

Being able to easily visualize AWS CloudTrail logs gives you a better understanding of how your AWS infrastructure is being used. It can also help you audit and review AWS API calls and detect security anomalies inside your AWS account. To do this, you must be able to perform analytics based on your CloudTrail logs.

In this post, I walk through using AWS Glue and AWS Lambda to convert AWS CloudTrail logs from JSON to a query-optimized format dataset in Amazon S3. I then use Amazon Athena and Amazon QuickSight to query and visualize the data.

Solution overview

To process CloudTrail logs, you must implement the following architecture:

CloudTrail delivers log files in an Amazon S3 bucket folder. To correctly crawl these logs, you modify the file contents and folder structure using an Amazon S3-triggered Lambda function that stores the transformed files in a single folder in the S3 bucket. When the files are in a single folder, AWS Glue scans the data, converts it into Apache Parquet format, and catalogs it to allow for querying and visualization using Amazon Athena and Amazon QuickSight.

Walkthrough

Let’s look at the steps that are required to build the solution.

Set up CloudTrail logs

First, you need to set up a trail that delivers log files to an S3 bucket. To create a trail in CloudTrail, follow the instructions in Creating a Trail.

When you finish, the trail settings page should look like the following screenshot:

In this example, I set up log files to be delivered to the cloudtraillfcaro bucket.

Consolidate CloudTrail reports into a single folder using Lambda

AWS CloudTrail delivers log files using the following folder structure inside the configured Amazon S3 bucket:

AWSLogs/ACCOUNTID/CloudTrail/REGION/YEAR/MONTH/DAY/filename.json.gz

Additionally, log files have the following structure:

{
    "Records": [{
        "eventVersion": "1.01",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDAJDPLRKLG7UEXAMPLE",
            "arn": "arn:aws:iam::123456789012:user/Alice",
            "accountId": "123456789012",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "Alice",
            "sessionContext": {
                "attributes": {
                    "mfaAuthenticated": "false",
                    "creationDate": "2014-03-18T14:29:23Z"
                }
            }
        },
        "eventTime": "2014-03-18T14:30:07Z",
        "eventSource": "cloudtrail.amazonaws.com",
        "eventName": "StartLogging",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "72.21.198.64",
        "userAgent": "signin.amazonaws.com",
        "requestParameters": {
            "name": "Default"
        },
        "responseElements": null,
        "requestID": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
        "eventID": "3074414d-c626-42aa-984b-68ff152d6ab7"
    },
    ... additional entries ...
    ]
}

If AWS Glue crawlers are used to catalog these files as they are written, the following obstacles arise:

  1. AWS Glue identifies different tables per different folders because they don’t follow a traditional partition format.
  2. Based on the structure of the file content, AWS Glue identifies the tables as having a single column of type array.
  3. CloudTrail logs have JSON attributes that use uppercase letters. According to the Best Practices When Using Athena with AWS Glue, it is recommended that you convert these to lowercase.

To have AWS Glue catalog all log files in a single table with all the columns describing each event, implement the following Lambda function:

from __future__ import print_function
import json
import urllib
import boto3
import gzip

s3 = boto3.resource('s3')
client = boto3.client('s3')

def convert_column_to_lowercase(obj):
    # json.loads calls this hook for every decoded JSON object, so
    # attribute names are lowercased at every nesting level.
    for key in list(obj.keys()):
        new_key = key.lower()
        if new_key != key:
            obj[new_key] = obj[key]
            del obj[key]
    return obj


def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    # This targets the Python 2.7 Lambda runtime in use at the time;
    # on Python 3, use urllib.parse.unquote_plus instead.
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    print(bucket)
    print(key)
    try:
        # Flatten the CloudTrail folder hierarchy into a single 'flatfiles' prefix.
        newKey = 'flatfiles/' + key.replace("/", "")
        client.download_file(bucket, key, '/tmp/file.json.gz')
        with gzip.open('/tmp/out.json.gz', 'w') as output, gzip.open('/tmp/file.json.gz', 'rb') as file:
            i = 0
            for line in file:
                # Write each element of the 'records' array on its own line, so
                # AWS Glue sees one event per line instead of one array column.
                for record in json.loads(line, object_hook=convert_column_to_lowercase)['records']:
                    if i != 0:
                        output.write("\n")
                    output.write(json.dumps(record))
                    i += 1
        client.upload_file('/tmp/out.json.gz', bucket, newKey)
        return "success"
    except Exception as e:
        print(e)
        print('Error processing object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

The function goes over each element of the records array, changes uppercase letters to lowercase in column names, and inserts each element of the array as a single line of a new file. The new file is saved inside a flatfiles folder created by the function without any subfolders in the S3 bucket.

The function should have a role containing a policy with at least the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::cloudtraillfcaro/*",
                "arn:aws:s3:::cloudtraillfcaro"
            ],
            "Effect": "Allow"
        }
    ]
}

In this example, CloudTrail delivers logs to the cloudtraillfcaro bucket. Make sure that you replace this name with your bucket name in the policy. For more information about how to work with inline policies, see Working with Inline Policies.

After the Lambda function is created, you can set up the following trigger using the Triggers tab on the AWS Lambda console.

Choose Add trigger, and choose S3 as a source of the trigger.

After choosing the source, configure the following settings:

In the trigger, any file that is written to the path for the log files—which in this case is AWSLogs/119582755581/CloudTrail/—is processed. Make sure that the Enable trigger check box is selected and that the bucket and prefix parameters match your use case.

After you set up the function and receive log files, the bucket (in this case cloudtraillfcaro) should contain the processed files inside the flatfiles folder.
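
The same trigger can also be configured programmatically instead of through the console. A sketch under the assumption that the function is named cloudtrail-flattener (a placeholder) and that you substitute your own account ID and log prefix:

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:cloudtrail-flattener"

# One-time setup: allow S3 to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::cloudtraillfcaro",
)

# Fire the function only for new objects under the CloudTrail log prefix.
s3.put_bucket_notification_configuration(
    Bucket="cloudtraillfcaro",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": FUNCTION_ARN,
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "AWSLogs/119582755581/CloudTrail/"}
            ]}},
        }]
    },
)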

Catalog source data

Once the files are processed by the Lambda function, set up a crawler named cloudtrail to catalog them.

The crawler must point to the flatfiles folder.

All the crawlers and AWS Glue jobs created for this solution must have a role with the AWSGlueServiceRole managed policy and an inline policy with permissions to modify the S3 buckets used on the Lambda function. For more information, see Working with Managed Policies.

The role should look like the following:

In this example, the inline policy named s3perms contains the permissions to modify the S3 buckets.

After you choose the role, you can schedule the crawler to run on demand.

A new database is created, and the crawler is set to use it. In this case, the cloudtrail database is used for all the tables.

After the crawler runs, a single table should be created in the catalog with the following structure:

The table should contain the following columns:

Create and run the AWS Glue job

To convert all the CloudTrail logs to a columnar store in Parquet, set up an AWS Glue job by following these steps.

Upload the following script into a bucket in Amazon S3:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
# TempDir must be resolved explicitly because relationalize uses it below.
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'TempDir'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the flattened CloudTrail events cataloged by the 'cloudtrail' crawler.
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "cloudtrail", table_name = "flatfiles", transformation_ctx = "datasource0")
# Resolve ambiguous field types into structs, then flatten nested structs
# into top-level columns with the relationalize transformation.
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "make_struct", transformation_ctx = "resolvechoice1")
relationalized1 = resolvechoice1.relationalize("trail", args["TempDir"]).select("trail")
# Write the result to S3 in Parquet format.
datasink = glueContext.write_dynamic_frame.from_options(frame = relationalized1, connection_type = "s3", connection_options = {"path": "s3://cloudtraillfcaro/parquettrails"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

In the example, you load the script as a file named cloudtrailtoparquet.py. Make sure that you modify the script and update the “{"path": "s3://cloudtraillfcaro/parquettrails"}” with the destination in which you want to store your results.

After uploading the script, add a new AWS Glue job. Choose a name and role for the job, and choose the option of running the job from An existing script that you provide.

To avoid processing the same data twice, enable the Job bookmark setting in the Advanced properties section of the job properties.

Choose Next twice, and then choose Finish.

If logs are already in the flatfiles folder, you can run the job on demand to generate the first set of results.

Once the job starts running, wait for it to complete.

When the job is finished, its Run status should be Succeeded. After that, you can verify that the Parquet files are written to the Amazon S3 location.

Catalog results

To be able to process results from Athena, you can use an AWS Glue crawler to catalog the results of the AWS Glue job.

In this example, the crawler is set to use the same database as the source named cloudtrail.

You can run the crawler using the console. When the crawler finishes running and has processed the Parquet results, a new table should be created in the AWS Glue Data Catalog. In this example, it’s named parquettrails.

The table should have the classification set to parquet.

It should have the same columns as the flatfiles table, with the exception of the struct type columns, which should be relationalized into several columns:

In this example, notice how the requestparameters column, which was a struct in the original table (flatfiles), was transformed to several columns—one for each key value inside it. This is done using a transformation native to AWS Glue called relationalize.

Query results with Athena

After crawling the results, you can query them using Athena. For example, to query what events took place in the time frame between 2017-10-23T12:00:00 and 2017-10-23T13:00:00, use the following select statement:

select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc;

Be sure to replace cloudtrail.parquettrails with the names of your database and table that reference the Parquet results. Replace the datetimes with an hour during which your account had activity that has been processed by the AWS Glue job.
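
If you want to run the same query programmatically rather than from the Athena console, here is a minimal boto3 sketch; the results bucket is a placeholder for a location Athena can write query output to.

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString=(
        "select * from cloudtrail.parquettrails "
        "where eventtime > '2017-10-23T12:00:00Z' "
        "and eventtime < '2017-10-23T13:00:00Z' "
        "order by eventtime asc"
    ),
    QueryExecutionContext={"Database": "cloudtrail"},
    # Placeholder output location for query results.
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print(response["QueryExecutionId"])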

Visualize results using Amazon QuickSight

Once you can query the data using Athena, you can visualize it using Amazon QuickSight. Before connecting Amazon QuickSight to Athena, be sure to grant QuickSight access to Athena and the associated S3 buckets in your account. For more information, see Managing Amazon QuickSight Permissions to AWS Resources. You can then create a new data set in Amazon QuickSight based on the Athena table that you created.

After setting up permissions, you can create a new analysis in Amazon QuickSight by choosing New analysis.

Then add a new data set.

Choose Athena as the source.

Give the data source a name (in this case, I named it cloudtrail).

Choose the name of the database and the table referencing the Parquet results.

Then choose Visualize.

After that, you should see the following screen:

Now you can create some visualizations. First, search for the sourceipaddress column, and drag it to the AutoGraph section.

You can see a list of the IP addresses that you have used to interact with AWS. To review whether these IP addresses have been used from IAM users, internal AWS services, or roles, use the type value that is inside the useridentity field of the original log files. Thanks to the relationalize transformation, this value is available as the useridentity.type column. After the column is added into the Group/Color box, the visualization should look like the following:

You can now see and distinguish the most used IPs and whether they are used from roles, AWS services, or IAM users.

After following all these steps, you can use Amazon QuickSight to add different columns from CloudTrail and perform different types of visualizations. You can build operational dashboards that continuously monitor AWS infrastructure usage and access. You can share those dashboards with others in your organization who might need to see this data.

Summary

In this post, you saw how you can use a simple Lambda function and an AWS Glue script to convert text files into Parquet to improve Athena query performance and data compression. The post also demonstrated how to use AWS Lambda to preprocess files in Amazon S3 and transform them into a format that is recognizable by AWS Glue crawlers.

This example used AWS CloudTrail logs, but you can apply the proposed solution to any set of files that, after preprocessing, can be cataloged by AWS Glue.


Additional Reading

Learn how to Harmonize, Query, and Visualize Data from Various Providers using AWS Glue, Amazon Athena, and Amazon QuickSight.


About the Authors

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

 

 

 

Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/say-hello-to-our-newest-aws-community-heroes-fall-2017-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

 

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to Enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM is beyond a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh joined as lead content creator of a program called “Cloud Computing Lectures for Universities” which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

 

Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before being self-employed, Thorsten worked as a developer and CTO of Germany’s first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then he organizes the AWS user group in Stuttgart and is a frequent speaker at Meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten is maintaining or contributing to several projects on Github, like test frameworks for AWS Lambda, Amazon Alexa, or developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

 

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high concurrency web architecture. Before she helped run BootDev, she was working at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events including AWS Tech Meetups and happy hours, gathering AWS talent together to communicate the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).

 

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and began offering full support for AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014 and as of now 12 Meetups have been conducted, focusing on various AWS technologies. The Meetup has quickly grown to include over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is a Red Hat Linux Technologies and AWS Cloud Technologies trainer as well.

 

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

MPAA Warns Australia Not to ‘Mess’ With Fair Use and Geo-Blocking

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-warns-australia-not-mess-with-fair-use-and-geo-blocking-171107/

Last year, the Australian Government’s Productivity Commission published a Draft Report on Intellectual Property Arrangements, recommending various amendments to local copyright law.

The Commission suggested allowing the use of VPNs and similar technologies to enable consumers to bypass restrictive geo-blocking. It also tabled proposals to introduce fair use exceptions and to expand safe harbors for online services.

Two months ago the Government responded to these proposals. It promised to expand the safe harbor protections and announced a consultation on fair use, describing the current fair dealing exceptions as restrictive. The Government also noted that circumvention of geo-blocks may be warranted, in some cases.

While the copyright reform plans have been welcomed with wide support from the public and companies such as Google and Wikipedia, there’s also plenty of opposition. From Hollywood, for example, which fears that the changes will set back Australia’s progress to combat piracy.

A few days ago, the MPAA submitted its 2018 list of foreign trade barriers to the U.S. Government. The document in question highlights key copyright challenges in the most crucial markets, Australia included. According to the movie industry group, the tabled proposals are problematic.

“If the Commission’s recommendations were adopted, they could result in legislative changes that undermine the current balance of protection in Australia. These changes could create significant market uncertainty and effectively weaken Australia’s infrastructure for intellectual property protection,” the MPAA writes.

“Of concern is a proposal to introduce a vague and undefined ‘fair use’ exception unmoored from decades of precedent in the United States. Another proposal would expand Australia’s safe harbor regime in piecemeal fashion,” the group adds.

The fair use opposition is noteworthy since the Australian proposal is largely modeled after US law. The MPAA’s comment suggests, however, that this can’t easily be applied to another country, as it would lack the legal fine-tuning that’s been established in dozens of court cases.

That the MPAA isn’t happy with the expansion of safe harbor protections for online service providers is no surprise. In recent years, copyright holders have often complained that these protections hinder progress on the anti-piracy front, as companies such as Google and Facebook have no incentive to proactively police copyright infringement.

Moving on, the movie industry group highlights that circumvention of geo-blocking for copyrighted content and other protection measures are also controversial topics for Hollywood.

“Still another would allow circumvention of geo-blocking and other technological protection measures. Australia has one of the most vibrant creative economies in the world and its current legal regime has helped the country become the site of major production investments.

“Local policymakers should take care to ensure that Australia’s vibrant market is not inadvertently impaired and that any proposed relaxation of copyright and related rights protection does not violate Australia’s international obligations,” the MPAA adds.

Finally, while it was not included in the commission’s recommendations, the MPAA stresses once again that Australia’s anti-camcording laws are not up to par.

Although several camming pirates have been caught in recent years, the punishments don’t meet Hollywood’s standards. For example, in 2012 a man connected to a notorious release group was convicted for illicitly recording 14 audio captures, for which he received an AUS$2,000 fine.

“Australia should adopt anticamcording legislation. While illegal copying is a violation of the Copyright Act, more meaningful deterrent penalties are required,” the MPAA writes. “Such low penalties fail to reflect the devastating impact that this crime has on the film industry.”

The last suggestion has been in the MPAA’s recommendations for several years already, but the group is persistent.

In closing, the MPAA asks the US Government to keep these and other issues in focus during future trade negotiations and policy discussions with Australia and other countries, while thanking it for the critical assistance Hollywood has received over the years.

MPAA’s full submission, which includes many of the recommendations that were made in previous years, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Improved Search for Backblaze’s Blog

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/using-relevannssi-wordpress-search/

Search has become the most powerful method to find content on the Web, both for finding websites themselves and for discovering information within websites. Our blog readers find content in both ways — using Google, Bing, Yahoo, Ask, DuckDuckGo, and other search engines to follow search results directly to our blog, and using the site search function once on our blog to find content in the blog posts themselves.

There’s a Lot of Great Content on the Backblaze Blog

Backblaze’s CEO Gleb Budman wrote the first post for this blog in March of 2008. Since that post there have been 612 more. There’s a lot of great content on this blog, as evidenced by the more than two million page views we’ve had since the beginning of this year. We typically publish two blog posts per week on a variety of topics, but we focus primarily on cloud storage technology and data backup, company news, and how-to articles on how to use cloud storage and various hardware and software solutions.

Earlier this year we initiated a series of posts on entrepreneurship by our CEO and co-founder, Gleb Budman, which has proven tremendously popular. We also occasionally publish something a little lighter, such as our current Halloween video contest — there’s still time to enter!

The Site Search Box — Your gateway to Backblaze blog content

We Could Do a Better Job of Helping You Find It

I joined Backblaze as Content Director in July of this year. During the application process, I spent quite a bit of time reading through the blog to understand the company, the market, and its customers. That’s a lot of reading. I used the site search many times to uncover topics and posts, and discovered that site search had a number of weaknesses that made it less-than-easy to find what I was looking for.

These site search weaknesses included:

  - Searches were case sensitive, so a visitor could easily miss content capitalized differently than the search terms.
  - Results showed no date or author information, so a visitor couldn’t tell how recent a post was or who wrote it.
  - Search terms were not highlighted in context, so a visitor had to scrutinize the results to find the terms in the post.
  - There was no indication of the number of results or the number of pages of results, so a visitor didn’t know how fruitful the search was.
  - There was no record of the search terms visitors used, so we couldn’t tell what our visitors were searching for!

I wanted to make it easier for blog visitors to find all the great content on the Backblaze blog and help me understand what our visitors are searching for. To do that, we needed to upgrade our site search.

I started with a list of goals for the new site search.

  1. Make it easier to find content on the blog
  2. Provide a summary of what was found
  3. Search the comments as well as the posts
  4. Highlight the search terms in the results to help find them in context
  5. Provide a record of searches to help me understand what interests our readers

I had the goals, now how could I find a solution to achieve them?

Our blog is built on WordPress, which has a built-in site search function that could be described as simply adequate. The most obvious of its limitations is that search results are listed chronologically, not ranked by “most popular,” “most occurring,” or any other metric that might make the results more relevant to your interests.
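
For illustration, here’s a minimal sketch of what that built-in search amounts to, assuming a standard WordPress install (this is not Backblaze’s own code):

```php
<?php
// Minimal sketch of WordPress's built-in search, for illustration only.
// Passing 's' to WP_Query runs the core keyword search; as described
// above, the results come back ordered by date rather than by how well
// each post actually matches the visitor's terms.
$default_search = new WP_Query( array(
    's' => 'hard drive stats', // the visitor's search terms
) );

// Core WordPress can only re-sort by simple fields such as date or
// title; genuine relevance ranking is what a plugin has to provide.
$sorted_by_title = new WP_Query( array(
    's'       => 'hard drive stats',
    'orderby' => 'title',
    'order'   => 'ASC',
) );
```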

The Search for Improved (Site) Search

An obvious choice to improve site search would be to adopt Google Site Search, as many websites and blogs have done. Unfortunately, I quickly discovered that Google is sunsetting Site Search by April of 2018. That left a choice between a number of third-party services and WordPress-specific solutions. My immediate inclination was to see what was available specifically for WordPress.

There are a handful of search plugins for WordPress. One stood out to me for its number of installations (100,000+) and overwhelmingly positive reviews: Relevanssi. Still, I had a number of questions. The first was whether the plugin retained any search data from our site — I wanted to make sure that our visitors’ privacy was maintained, and even harvesting anonymous search data would not be acceptable to Backblaze. I wrote to the developer and was pleased by the responsiveness of Relevanssi’s creator, Mikko Saari. He explained to me that Relevanssi doesn’t have access to any of the search data from the sites using his plugin. Receiving a quick response from a developer is always a good sign. Other signs of a good WordPress plugin are recent updates and an active support forum.

Our Solution: Relevanssi for Site Search

The WordPress plugin Relevanssi met all of our criteria, so we installed the plugin and switched to using it for site search in September.

In addition to solving the problems listed above, our search results are now displayed based on relevance instead of date, which is the default behavior of WordPress search. That capability is very useful on our blog where a lot of the content from years ago is still valuable — often called evergreen content. The new site search also enables visitors to search using the boolean expressions AND and OR. For example, a visitor can search for “seagate AND drive,” and see results that only include both words. Alternatively, a visitor can search for “seagate OR drive” and see results that include either word.
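
To make the AND/OR behavior concrete, here’s an illustrative sketch of how such a boolean filter can work (a simplified stand-in, not Relevanssi’s actual implementation):

```php
<?php
// Simplified stand-in for boolean search matching, not Relevanssi's code.
// Uppercase AND requires every term to appear; uppercase OR (also the
// fallback for a plain query) matches if any term appears.
function matches_query( string $text, string $query ): bool {
    $haystack = strtolower( $text );

    if ( strpos( $query, ' AND ' ) !== false ) {
        foreach ( explode( ' AND ', $query ) as $term ) {
            if ( strpos( $haystack, strtolower( trim( $term ) ) ) === false ) {
                return false; // an AND term is missing
            }
        }
        return true; // every AND term was found
    }

    foreach ( explode( ' OR ', $query ) as $term ) {
        if ( strpos( $haystack, strtolower( trim( $term ) ) ) !== false ) {
            return true; // one OR term is enough
        }
    }
    return false;
}

var_dump( matches_query( 'Seagate hard drive stats', 'seagate AND drive' ) ); // bool(true)
var_dump( matches_query( 'Backup strategies', 'seagate OR drive' ) );         // bool(false)
```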

Search results showing total number of results, hits and their location, and highlighted search terms in context

Visitors can put search terms in quotation marks to search for an entire phrase. For example, a visitor can search for “2016 drive stats” and see results that include only that exact phrase. In addition, the site search results come with a summary, showing where the results were found (title, post, or comments). Search terms are highlighted in yellow in the content, showing exactly where the search result was found.
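
Under the hood, highlighting like this generally comes down to wrapping each matched term in a tag that a stylesheet can color. Here is a sketch of the idea (illustrative only, and not Relevanssi’s actual code, which has its own settings for highlight markup and color):

```php
<?php
// Sketch of search-term highlighting: wrap each matched term in a
// <mark> tag so CSS can give it a yellow background.
function highlight_terms( string $excerpt, array $terms ): string {
    foreach ( $terms as $term ) {
        $excerpt = preg_replace(
            '/(' . preg_quote( $term, '/' ) . ')/i', // case-insensitive match
            '<mark>$1</mark>',
            $excerpt
        );
    }
    return $excerpt;
}

echo highlight_terms( 'Our 2016 Drive Stats report is out.', array( '2016 drive stats' ) );
// Our <mark>2016 Drive Stats</mark> report is out.
```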

Here’s an example of a popular post that shows up in searches. Hard Drive Stats for Q1 2017 was published on May 9, 2017. Since September 4, it has shown up over 150 times in site searches, and in the last 90 days it has been viewed over 53,000 times on our blog.

The Results Tell the Story

Since initiating the new search on our blog on September 4, almost 23,000 site searches have been conducted, so we know you are using it. We’ve also implemented pagination for the blog feed and search results, so you can see how many pages of results there are and navigate through them more easily.

Now that we have this site search data, you’re likely wondering which search terms are most popular on our blog. Here are some of the top searches:

What Do You Search For?

Please tell us how you use site search and whether there are any other capabilities you’d like to see that would make it easier to find content on our blog.
