Tag Archives: Events

Getting Ready for the AWS Quest Finale on Twitch

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/getting-ready-for-the-aws-quest-finale-on-twitch/

Whew! March has been one crazy month for me and it is only half over. After a week with my wife in the Caribbean, we hopped on a non-stop Seattle to Tokyo flight so that I could speak at JAWS Days, Startup Day, and some internal events. We arrived home last Wednesday and I am now sufficiently clear-headed and recovered from jet lag to do something more intellectually demanding than responding to email. The AWS Blogging Team and the great folks at Lone Shark Games have been working on AWS Quest for quite some time and it has been great to see all of the progress made toward solving the puzzles in order to find the orangeprints that I will use to rebuild Ozz.

The community effort has been impressive! There’s a shared spreadsheet with tabs for puzzles and clues, a busy Slack channel, and a leaderboard, all organized and built by a team that spans the globe.

I’ve been checking out the orangeprints as they are uncovered and have been doing a bit of planning and preparation to make sure that I am ready for the live-streamed rebuild on Twitch later this month. Yesterday I labeled a bunch of containers, one per puzzle, and stocked each one with the parts that I will use to rebuild the corresponding component of Ozz. Fortunately, I have at least (my last count may have skipped a few) 119,807 bricks and other parts at hand so this was easy. Here’s what I have set up so far:

The Twitch session will take place on Tuesday, March 27 at Noon PT. In the meantime, you should check out the #awsquest tweets and see what you can do to help me to rebuild Ozz.


SXSW 2018 on BitTorrent: 8.24 GB of ‘Free’ Music?

Post Syndicated from Ernesto original https://torrentfreak.com/sxsw-2018-on-bittorrent-8-24-gb-of-free-music-180317/

The SXSW music festival was one of the first major events to embrace BitTorrent.

In 2005, it used the then relatively new technology to share hundreds of DRM-free tracks from participating artists. It was a practical, fast, and cheap solution that worked well.

The official torrent releases continued for three years but since 2008 this task has been unofficially taken over by the public. SXSW still showcases music on their site, for sampling purposes, but no longer via torrents.

However, once it’s out on the Internet it only takes one person to make a torrent archive. For many years, the operator of SXSW Torrent has taken on this task and 2018 is no exception.

This year’s archive is split across two torrents, which together contain 1,276 tracks totaling 8.24 gigabytes of free music.

All the tracks released for the previous editions are also still available and most of these torrents remain well-seeded. The 2005 – 2018 archives now total more than 15,000 tracks and 85 gigabytes of data.

TorrentFreak spoke to the operator of SXSW Torrent who told us that it’s the tenth year in a row that the site has compiled the archive. His motivation is partly selfish, serving as preparation for his yearly SXSW Music trip, but also because many others are relying on it.

“Many people come back every year, so I can’t leave them hanging,” the SXSW Torrent operator previously told us.

Apparently, these people prefer to download everything in one go, as opposed to browsing through the SXSW site to sample the music.

It’s clear that these efforts are appreciated by the public. With tens of thousands of downloaders each year, the SXSW torrents attract quite a bit of traffic. For some, it almost makes up for not being able to attend the festival in person.

However, the term ‘free’ music has changed over time. While SXSW Torrent was never asked to take any files down, the torrents are not officially authorized. This means that some artists or rightsholders may not like the idea of their music being shared this way.

When SXSW itself released the torrents we happily encouraged people to download the tracks, but now that’s no longer the case.

The good news is that there are plenty of other options out there. Aside from the showcases on the official website, SXSW has published official Spotify and YouTube playlists where people can enjoy the music without any concerns.

This year’s SXSW music festival is currently underway in Austin, Texas and ends on Sunday. The torrents, however, are expected to live on for as long as there are people sharing.


Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk, and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.

Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot supporting your DevOps team to detect and solve incidents on AWS.

In close collaboration with his brother Andreas Wittig, the Wittig brothers are actively creating AWS related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io where they share their knowledge about AWS with the community. The Wittig brothers also published a bunch of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers are co-organizing the AWS user group in Stuttgart.

Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding five AWS Certifications, with extensive IT architecture and management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Hispanic speakers worldwide.

Fernando founded a LinkedIn group, a Slack community, and a YouTube channel, all of them named “AWS en Español”, and started running a monthly webinar via YouTube streaming where different leaders discuss aspects of and challenges around the AWS Cloud.

Over the last 18 months he has been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups were founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.

Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010, and has been essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures and with top management, where he advises about cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

Local Governments in Mexico Might ‘Pirate’ Dragon Ball

Post Syndicated from Andy original https://torrentfreak.com/local-governments-mexico-might-pirate-dragon-ball-180316/

When one thinks of large-scale piracy, sites like The Pirate Bay and perhaps 123Movies spring to mind.

Offering millions of viewers the chance to watch the latest movies and TV shows for free the day they’re released or earlier, they’re very much hated by the entertainment industries.

Tomorrow, however, there’s the very real possibility of a huge copyright infringement controversy hitting large parts of Mexico, all centered around the hugely popular anime series Dragon Ball Super.

This Saturday, episode 130, titled “The Greatest Showdown of All Time! The Ultimate Survival Battle!!”, will hit the streets. It’s the penultimate episode of the series and will see the climax of Goku and Jiren’s battle – apparently.

The key point is that fans everywhere are going nuts in anticipation, so much so that various local governments in Mexico have agreed to hold public screenings for free, including in football stadiums and public squares.

“Fans of the series are crazy to see the new episode of Dragon Ball Super and have already organized events around the country as if it were a boxing match,” local media reports.

For example, Remberto Estrada, the municipal president of Benito Juárez, Quintana Roo, confirmed that the episode will be aired at the Cultural Center of the Arts in Cancun. The mayor of Ciudad Juarez says that a viewing will go ahead at the Plaza de la Mexicanidad with giant screens and cosplay contests on the sidelines.

Many local government Twitter accounts sent out official invitations, like the one shown below.

But despite all the preparations, there is a big problem. According to reports, no group or organization has the rights to show Dragon Ball Super in public in Mexico, a fact confirmed by Toei Animation, the company behind the show.

“To the viewers and fans of Dragon Ball. We have become aware of the plans to exhibit episode # 130 of our Dragon Ball Super series in stadiums, plazas, and public places throughout Latin America,” the company said in an official announcement.

“Toei Animation has not authorized these public shows and does not support or sponsor any of these events nor do we or any of our titles endorse any institution exhibiting the unauthorized episode.

“In an effort to support copyright laws, to protect the work of thousands of persons and many labor sectors, we request that you please enjoy our titles at the official platforms and broadcasters and not support illegal screenings that incite piracy.”

Armando Cabada, mayor of Ciudad Juarez, Chihuahua, was one of the first municipal officials to offer support to the episode 130 movement. He believes that since the events are non-profit, they can go ahead, but others have indicated their screenings will only proceed if they can get the necessary permission.

Crunchyroll, the US video-streaming company that holds some Dragon Ball Super rights, is reportedly trying to communicate with the establishments and organizations planning to host the events to ensure that everything remains legal and above board. At this stage, however, there’s no indication that any agreements have been reached or whether they’re simply getting in touch to deliver a warning.

One region that has already confirmed its event won’t go ahead is Mexico City. The head of the local government there told disappointed fans that since they can’t get permission from Toei, the whole thing has been canceled.

What will happen in the other locations on Saturday night if licenses haven’t been obtained is anyone’s guess, but thousands of disappointed fans in multiple locations raise the potential for the kind of battle the Mexican authorities can well do without, even if Dragon Ball Super thrives on them.


Raspberry Jam Big Birthday Weekend 2018 roundup

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/big-birthday-weekend-2018-roundup/

A couple of weekends ago, we celebrated our sixth birthday by coordinating more than 100 simultaneous Raspberry Jam events around the world. The Big Birthday Weekend was a huge success: our fantastic community organised Jams in 40 countries, covering six continents!

We sent the Jams special birthday kits to help them celebrate in style, and a video message featuring a thank you from Philip and Eben:

Raspberry Jam Big Birthday Weekend 2018

To celebrate the Raspberry Pi’s sixth birthday, we coordinated Raspberry Jams all over the world to take place over the Raspberry Jam Big Birthday Weekend, 3-4 March 2018. A massive thank you to everyone who ran an event and attended.

The Raspberry Jam photo booth

I put together code for a Pi-powered photo booth which overlaid the Big Birthday Weekend logo onto photos and (optionally) tweeted them. We included an arcade button in the Jam kits so they could build one — and it seemed to be quite popular. Some Jams put great effort into housing their photo booth:

Here are some of my favourite photo booth tweets:

RGVSA on Twitter

PiParty photo booth @RGVSA & @Nerdvana_io #Rjam

Denis Stretton on Twitter

The @SouthendRPIJams #PiParty photo booth

rpijamtokyo on Twitter

PiParty photo booth

Preston Raspberry Jam on Twitter

Preston Raspberry Jam Photobooth #RJam #PiParty

If you want to try out the photo booth software yourself, find the code on GitHub.

The great Raspberry Jam bake-off

Traditionally, in the UK, people have a cake on their birthday. And we had a few! We saw (and tasted) a great selection of Pi-themed cakes and other baked goods throughout the weekend:

Raspberry Jams everywhere

We always say that every Jam is different, but there’s a common and recognisable theme amongst them. It was great to see so many different venues around the world filling up with like-minded Pi enthusiasts, Raspberry Jam–branded banners, and Raspberry Pi balloons!


Sergio Martinez on Twitter

Thank you so much to all the attendees of the Ikana Jam in Krakow past Saturday! We shared fun experiences, some of them… also painful 😉 A big thank you to @Raspberry_Pi for these global celebrations! And a big thank you to @hubraum for their hospitality! #PiParty #rjam

NI Raspberry Jam on Twitter

We also had a super successful set of wearables workshops using @adafruit Circuit Playground Express boards and conductive thread at today’s @Raspberry_Pi Jam! Very popular! #PiParty

Suzystar on Twitter

My SenseHAT workshop, going well! @SouthendRPiJams #PiParty

Worksop College Raspberry Jam on Twitter

Learning how to scare the zombies in case of an apocalypse- it worked on our young learners #PiParty @worksopcollege @Raspberry_Pi https://t.co/pntEm57TJl


Rita on Twitter

Being one of the two places in Kenya where the #PiParty took place, it was an amazing time spending the day with this team and getting to learn and have fun. @TaitaTavetaUni and @Raspberry_Pi thank you for your support. @TTUTechlady @mictecttu

@GABONIAVERACITY #PiParty Lagos Raspberry Jam 2018 Special International Celebration – 6th Raspberry-Pi Big Birthday! Lagos Nigeria @Raspberry_Pi @ben_nuttall #RJam #RaspberryJam #raspberrypi #physicalcomputing #robotics #edtech #coding #programming #edTechAfrica #veracityhouse https://t.co/V7yLxaYGNx

North America

Heidi Baynes on Twitter

The Riverside Raspberry Jam @Vocademy is underway! #piparty

Brad Derstine on Twitter

The Philly & Pi #PiParty event with @Bresslergroup and @TechGirlzorg was awesome! The Scratch and Pi workshop was amazing! It was overall a great day of fun and tech!!! Thank you everyone who came out!

Houston Raspi on Twitter

Thanks everyone who came out to the @Raspberry_Pi Big Birthday Jam! Special thanks to @PBFerrell @estefanniegg @pcsforme @pandafulmanda @colnels @bquentin3 couldn’t’ve put on this amazing community event without you guys!

Merge Robotics 2706 on Twitter

We are back at @SciTechMuseum for the second day of @OttawaPiJam! Our robot Mergius loves playing catch with the kids! #pijam #piparty #omgrobots

South America

Javier Garzón on Twitter

This is how we wrapped up the #Raspberry Jam Big Birthday Weekend #Bogota 2018 #PiParty of #RaspberryJamBogota 2018 @Raspberry_Pi. See you on March 7 at #ArduinoDayBogota 2018 and #RaspberryJamBogota 2018


Fablab UP Cebu on Twitter

Happy 6th birthday, @Raspberry_Pi! Greetings all the way from CEBU,PH! #PiParty #IoTCebu Thanks @CebuXGeeks X Ramos for these awesome pics. #Fablab #UPCebu

福野泰介 on Twitter

The Raspberry Pi 6th birthday party is kicking off in Tokyo! At the PCN booth we have various exhibits, plus a kids’ IoT hackathon mini experience connecting https://t.co/L6E7KgyNHF and IchigoJam, near Kamata Station in Tokyo https://t.co/yHEuqXHvqe #piparty #pipartytokyo #rjam #opendataday

Ren Camp on Twitter

Happy birthday @Raspberry_Pi! #piparty #iotcebu @coolnumber9 https://t.co/2ESVjfRJ2d


Glenunga Raspberry Pi Club on Twitter

PiParty photo booth

Personally, I managed to get to three Jams over the weekend: two run by the same people who put on the first two Jams to ever take place, and also one brand-new one! The Preston Raspberry Jam team, who usually run their event on a Monday evening, wanted to do something extra special for the birthday, so they came up with the idea of putting on a Raspberry Jam Sandwich — on the Friday and Monday around the weekend! This meant I was able to visit them on Friday, then attend the Manchester Raspberry Jam on Saturday, and finally drop by the new Jam at Worksop College on my way home on Sunday.

Ben Nuttall on Twitter

I’m at my first Raspberry Jam #PiParty event of the big birthday weekend! @PrestonRJam has been running for nearly 6 years and is a great place to start the celebrations!

Ben Nuttall on Twitter

Back at @McrRaspJam at @DigInnMMU for #PiParty

Ben Nuttall on Twitter

Great to see mine & @Frans_facts Balloon Pi-Tay popper project in action at @worksopjam #rjam #PiParty https://t.co/GswFm0UuPg

Various members of the Foundation team attended Jams around the UK and US, and James from the Code Club International team visited AmsterJam.

hackerfemo on Twitter

Thanks to everyone who came to our Jam and everyone who helped out. @phoenixtogether thanks for amazing cake & hosting. Ademir you’re so cool. It was awesome to meet Craig Morley from @Raspberry_Pi too. #PiParty

Stuart Fox on Twitter

Great #PiParty today at the @cotswoldjam with bloody delicious cake and lots of raspberry goodness. Great to see @ClareSutcliffe @martinohanlon playing on my new pi powered arcade build:-)

Clare Sutcliffe on Twitter

Happy 6th Birthday @Raspberry_Pi from everyone at the #PiParty at #cotswoldjam in Cheltenham!

Code Club on Twitter

It’s @Raspberry_Pi 6th birthday and we’re celebrating by taking part in @amsterjam__! Happy Birthday Raspberry Pi, we’re so happy to be a part of the family! #PiParty

For more Jammy birthday goodness, check out the PiParty hashtag on Twitter!

The Jam makers!

A lot of preparation went into each Jam, and we really appreciate all the hard work the Jam makers put in to making these events happen, on the Big Birthday Weekend and all year round. Thanks also to all the teams that sent us a group photo:

Lots of the Jams that took place were brand-new events, so we hope to see them continue throughout 2018 and beyond, growing the Raspberry Pi community around the world and giving more people, particularly youths, the opportunity to learn digital making skills.

Philip Colligan on Twitter

So many wonderful people in the @Raspberry_Pi community. Thanks to everyone at #PottonPiAndPints for a great afternoon and for everything you do to help young people learn digital making. #PiParty

Special thanks to ModMyPi for shipping the special Raspberry Jam kits all over the world!

Don’t forget to check out our Jam page to find an event near you! This is also where you can find free resources to help you get a new Jam started, and download free starter projects made especially for Jam activities. These projects are available in English, Français, Français Canadien, Nederlands, Deutsch, Italiano, and 日本語. If you’d like to help us translate more content into these and other languages, please get in touch!

PS Some of the UK Jams were postponed due to heavy snowfall, so you may find there’s a belated sixth-birthday Jam coming up where you live!

S Organ on Twitter

@TheMagP1 Ours was rescheduled until later in the Spring due to the snow but here is Babbage enjoying the snow!


Voksi ‘Pirates’ New Serious Sam Game With Permission From Developers

Post Syndicated from Andy original https://torrentfreak.com/voksi-pirates-new-serious-sam-game-with-permission-from-developers-180312/

Bulgarian cracker Voksi is unlike many others in his line of work. He makes himself relatively available online, interacting with fans and revealing surprising things about his past.

Only last month he told TF that he is entirely self-taught and has been cracking games since he was 15 years old, just six years ago.

Voksi is probably best known for his hatred of anti-piracy technology Denuvo and to this day is still one of just four groups/people who have managed to crack v4 of the anti-tamper technology. As such, he and his kind are often painted as enemies of the gaming industry but that doesn’t represent the full picture.

In discussion with TF over the weekend, Voksi told us that he’s a huge fan of the Serious Sam franchise so when he found out about the latest title – Serious Sam’s Bogus Detour (SSBD) – he wanted to play it – badly. That led to a remarkable series of events.

“One month before the game’s official release I got into the closed beta, thanks to a friend of mine, who invited me in. I introduced myself to the developers [Crackshell]. I told them what I do for a living, but also assured them that I didn’t have any malicious intents towards the game. They were very cool about it, even surprisingly cool,” Voksi informs TF.

The game eventually hit the market (without Voksi targeting it, of course) with some interesting additions. As shown in the screenshot taken from the game and embedded below, Voksi was listed as a tester for the game.

An unusual addition to the game credits…

Perhaps even more impressively, official Steam screenshots here show Voksi as a player in the game. It’s not exactly what one might expect for someone in his position, but from there, the excitement began to fade. Despite a 9/10 rating on Steam, the books didn’t balance.

“The game was released officially on 20 of June, 2017. Months passed. We all hoped it’d be a success, but sadly that was not the case,” Voksi explains.

“Even with all the official marketing done by Devolver Digital, no one batted an eye and really gave it a chance. In December 2017, I found out how bad the sales really were, which didn’t even cover the expenses of making the game, let alone profit.”

Voksi was really disappointed that things hadn’t gone to plan so he contacted the developers with an idea – why didn’t he get involved to try and drum up some support from an entirely unconventional angle? How about giving a special edition of the game away for free while calling on ‘pirates’ to chip in with whatever they could afford?

“Last week I contacted the main dev of SSBD over Steam and proposed what I can do to help boost the game. He immediately agreed,” Voksi says.

“The plan was to release a build of the game that was playable from start to finish, playable in co-op with up to 4 players, not to miss anything important gameplay wise, and to add a little message in the bottom corner, which is visible at all times, telling you: ‘We are a small indie studio. If you liked the game, please consider buying it. Thank you and enjoy the game!’”

Message at the bottom of the screen

But Voksi’s marketing plan didn’t stop there. This special build of the game is also tied to a unique giveaway challenge with several prizes. It’s underway on Voksi’s REVOLT forum and is intended to encourage more people to play the game and share the word among family, friends and whoever else can support the developers.

Importantly, Voksi isn’t getting paid to do any of this; he just wants to help the developers and support a game he feels deserves a lot more attention. For those interested in taking it for a spin, the download links are available here in the official thread.

The ‘pirate’ build – Serious.Sam.Bogus.Detour.B126.RIP-Voksi – is slightly less polished than those available officially but it’s hoped that people will offer their support on Steam and GOG if they like the game.


AWS Summit Season is Almost Here – Get Ready to Register!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-summit-season-is-almost-here-get-ready-to-register/

I’m writing this post from my hotel room in Tokyo while doing my best to fight jet lag! I’m here to speak at JAWS Days and Startup Day, and to meet with some local customers.

I do want to remind you that the AWS Global Summit series is just about to start! With events planned for North America, Latin America, Japan and the rest of Asia, Europe, the Middle East, Africa, and Greater China, odds are that there’s one not too far from you. You can register for the San Francisco Summit today and you can ask to be notified as soon as registration for the other 30+ cities opens up.

The Summits are offered at no charge and are an excellent way for you to learn more about AWS. You’ll get to hear from our leaders and tech teams, our partners, and from other customers. You can also participate in hands-on workshops, labs, and team challenges.

Because the events are multi-track, you may want to bring a colleague or two in order to make sure that you don’t miss something of interest to your organization.


PS – I keep meaning to share this cool video that my friend Mike Selinker took at AWS re:Invent. Check it out!

McAfee Security Experts Weigh-in Weirdly With “Fresh Kodi Warning”

Post Syndicated from Andy original https://torrentfreak.com/mcafee-security-experts-weigh-in-weirdly-with-fresh-kodi-warning-180311/

Over the past several years, the last couple in particular, piracy has stormed millions of homes around the world.

From being a widespread but still fairly geeky occupation among torrenters, movie and TV show piracy can now be achieved by anyone with the ability to click a mouse or push a button on a remote control. Much of this mainstream interest can be placed at the feet of the Kodi media player.

An entirely legal platform in its own right, Kodi can be augmented with third-party add-ons that enable users to access an endless supply of streaming media. As such, piracy-configured Kodi installations are operated by an estimated 26 million people, according to the MPAA.

This popularity has led to much interest from tabloid newspapers in the UK which, for reasons best known to them, choose to both promote and demonize Kodi almost every week. While writing about news events is clearly par for the course, when one considers some of the reports, their content, and what inspired them, something doesn’t seem right.

This week The Express, which has published many overly sensational stories about Kodi in recent times, published another. The title – as always – promised something special.

Sounds like big news….

Reading the text, however, reveals nothing new whatsoever. The piece simply rehashes some of the historic claims that have been leveled at Kodi that can easily apply to any Internet-enabled software or system. But beyond that, some of its content is pretty weird.

The piece is centered on comments from two McAfee security experts – Chief Scientist Raj Samani and Chief Consumer Security Evangelist Gary Davis. It’s unclear whether The Express approached them for comment (if they did, there is no actual story for McAfee to comment on) or whether McAfee offered the comments and The Express built a story around them. Either way, here’s a taster.

“Kodi has been pretty open about the fact that it’s a streaming site but my view has always been if I use Netflix I know that I’m not going to get any issues, if I use Amazon I’m not going to get any issues,” Samani told the publication.

Ok, stop right there. Kodi admits that it’s a streaming site? Really? Kodi is a piece of software. It’s a media player. It can do many things but Kodi is not a streaming site and no one at Kodi has ever labeled it otherwise. To think that neither McAfee nor the publication caught that one is a bit embarrassing.

The argument that Samani was trying to make is that services like Netflix and Amazon are generally more reliable than third-party sources and there are few people out there who would argue with that.

“Look, ultimately you’ve got to do the research and you’ve got to decide if it’s right for you but personally I don’t use [Kodi] and I know full well that by not using [Kodi] I’m not going to get any issues. If I pay for the service I know exactly what I’m going to get,” he said.

But unlike his colleague who doesn’t use Kodi, Gary Davis has more experience.

McAfee’s Chief Consumer Security Evangelist admits to having used Kodi in the past but more recently decided not to use it when the security issues apparently got too much for him.

“I did use [Kodi] but turned it off as I started getting worried about some of the risks,” he told The Express.

“You may search for something and you may get what you are looking for but you may get something that you are not looking for and that’s where the problem lies with Kodi.”

This idea, that people search for a movie or TV show yet get something else, is bewildering to most experienced Kodi users. If this was indeed the case, on any large scale, people wouldn’t want to use it anymore. That’s clearly not the case.

Also, incorrect content appearing is not the kind of security threat that the likes of McAfee tend to be worried about. However, Davis suggests things can get worse.

“I’m not saying they’ve done anything wrong but if somebody is able to embed code to turn on a microphone or other things or start sending data to a place it shouldn’t go,” he said.

The sentence appears to have some words missing and struggles to make sense, but the suggestion is that someone’s Kodi installation could be corrupted to the point that someone could hijack the user’s microphone.

We are not aware of anything like that ever happening via Kodi. There are instances where similar things have happened entirely without it, in completely different contexts, but that seems neither here nor there. By the same logic, perhaps everyone should stop using Windows?

The big question is why these ‘scary’ Kodi non-stories keep getting published and why experts are prepared to weigh in on them.

It would be too easy to quickly put it down to some anti-piracy agenda, even though there are plenty of signs that anti-piracy groups have been habitually feeding UK tabloids with information on that front. Indeed, a source at a UK news outlet (that no longer publishes such stories) told TF that they were often prompted to write stories about Kodi and streaming in general, none with a positive spin.

But if it was as simple as that, how does that explain another story run in The Express this week heralding the launch of Kodi’s ‘Leia’ alpha release?

If Kodi is so bad as to warrant an article telling people to avoid it FOREVER on one day, why is it good enough to be promoted on another? It can only come down to the number of clicks – but the clickbait headline should’ve given that away at the start.


UK Govt. Met With Copyright Holders Dozens of Times in Just Three Months

Post Syndicated from Andy original https://torrentfreak.com/uk-govt-met-with-copyright-holders-dozens-of-times-in-just-three-months-180310/

While doing business with clients and suppliers is the usual day-to-day routine for most businesses, companies in the entertainment sector seem keener than most to spend time with those in power.

Whether there’s pressure to be applied in respect of upcoming changes in policy or long-term plans for modifying legislation, at least a few times a year news breaks of rightsholders having private meetings with officials. Most of the time, however, the head-to-heads fly under the radar.

This week, however, the UK government published a response to a Freedom of Information Request which asked for details of meetings between the government and copyright owner organizations, enforcement organizations, and collection societies (think BPI, MPA, FACT, Publishers Association, PRS, etc) including times, dates and topics discussed.

The request asked for details of meetings held between May 2016 and April 2017 but the government declined to provide all of this information since the effort required to extract the information “would exceed the cost limit.”

Given the amount of data published, this isn’t a surprise. Even though the government chose to limit the response to events held between January 16, 2017 and April 17, 2017, the meetings between the government and the above groups number in the dozens.

January 2017 got off to a pretty slow start but week three and beyond saw a flurry of meetings with groups and companies such as ITV, BBC, PRS for Music, Copyright Licensing Agency and several other organizations to discuss the EU’s Digital Single Market proposals.

On January 18, 2017 Time Warner had a meeting to discuss content protection and analytics, followed a day later by the Premier League, who were booked in to discuss “illicit streaming devices” (a topic mirrored in March during a meeting with the Audiovisual Anti-Piracy Alliance).

Just a few days later the Police Intellectual Property Crime Unit held a “Partnership Working Group Meeting involving industry” and two days after that the police, Trading Standards, and the EU Police Agency convened to discuss enforcement activity.

January 26, 2017 saw an IP Outreach Workshop involving members of the IP Crime Group. This was potentially a big meeting. The IPCG consists of several regional police forces, PIPCU, National Crime Agency, Crown Prosecution Service, Department of Culture, Media and Sport, Trading Standards, HMRC, IFPI, BPI, FACT, Sky TV, PRS, FAST and the Publishers Association, to name just a few.

As the first month of the year was drawing to a close, Amazon met with the government to discuss “current procedures for removing copyright, design and trademark infringing material from their platform.” A similar meeting was held with eBay on February 1 and on February 20, Facebook had its turn on the same topic.

All three companies had come in for criticism from copyright holders for not doing enough to stem the tide of infringing content available on their platforms, particularly so-called Kodi boxes that provide access to movies, shows, and live TV.

However, in the months that followed they each responded positively, with eBay, Amazon and Facebook announcing restrictions on devices sold. While all three platforms still have a problem with infringing device sales, the situation appears to have improved since last year.

On the final day of January 2017, the MPAA attended a meeting to discuss the looming Digital Economy Bill and digital TV piracy. A couple of days later they were back again for a “business awareness seminar” with other big shots including the Alliance for IP, the Anti-Counterfeiting Group, Trading Standards and the Premier League.

However, given the dozens that took place, perhaps one of the more interesting meetings, in terms of the mix of those in attendance, was held on February 7.

Titled “Organized Crime Task Force Meeting – Belfast” it was attended by the Police Service of Northern Ireland, the National Crime Agency, Trading Standards, HM Revenue and Customs, the Border Force, and (spot the odd one out) the Federation Against Copyright Theft.

This seems to suggest that FACT (a private company) is effectively embedded at the highest level of law enforcement, something that has made people very uncomfortable in the past.

Later in February, there was a roundtable meeting with the Alliance for IP, MPAA, Publishers’ Association, BPI, Premier League and Federation Against Copyright Theft (again) to discuss Brexit, the Digital Single Market, IP enforcement and industrial strategy. A similar meeting was held in March which was attended by UK Music, BPI, PRS, Featured Artists Coalition, and many more.

The full list of meetings, which number in the dozens for just a three-month period, can be found here (PDF). Whether the volume is representative of other three-month periods isn’t clear, but it seems reasonable to conclude that copyright organizations have the ears of government officials in the UK on an almost continual basis.


Serverless Dynamic Web Pages in AWS: Provisioned with CloudFormation

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/serverless-dynamic-web-pages-in-aws-provisioned-with-cloudformation/

***This blog is authored by Mike Okner of Monsanto, an AWS customer. It originally appeared on the Monsanto company blog. Minor edits were made to the original post.***

Recently, I was looking to create a status page app to monitor a few important internal services. I wanted this app to be as lightweight, reliable, and hassle-free as possible, so using a “serverless” architecture that doesn’t require any patching or other maintenance was quite appealing.

As a rule, I also don’t deploy anything in a production AWS environment outside of some sort of template (usually CloudFormation). I don’t want to have to come back to something I created ad hoc in the console after six months and try to recall exactly how I architected all of the resources. I’ll inevitably forget something and create more problems before solving the original one. So building the status page in a template was a requirement.

The Design
I settled on a design using two Lambda functions, both written in Python 3.6.

The first Lambda function makes requests out to a list of important services and writes their current status to a DynamoDB table. This function is executed once per minute via a CloudWatch Events rule.

The second Lambda function reads each service’s status & uptime information from DynamoDB and renders a Jinja template. This function is behind an API Gateway that has been configured to return text/html instead of its default application/json Content-Type.
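To make the design concrete, here’s a minimal sketch of what the checker function might look like. This is my illustration rather than code from the post: the service list and the status/checked_at attributes are assumptions; only the TABLE_NAME environment variable and the table’s name key come from the templates below.

# checker.py - a minimal sketch of the status-checker Lambda (illustrative only;
# SERVICES and the item attributes other than 'name' are assumptions)
import os
import time
import urllib.request

import boto3

# Hypothetical endpoints to monitor
SERVICES = {
    'internal-api': 'https://api.example.internal/health',
    'dashboard': 'https://dashboard.example.internal/',
}

table = boto3.resource('dynamodb').Table(os.environ['TABLE_NAME'])

def check(url):
    """Return 'UP' if the endpoint answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 'UP' if resp.status < 400 else 'DOWN'
    except Exception:
        return 'DOWN'

def handler(event, context):
    # Write one item per service; 'name' is the table's hash key.
    for name, url in SERVICES.items():
        table.put_item(Item={
            'name': name,
            'status': check(url),
            'checked_at': int(time.time()),
        })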

The CloudFormation Template
AWS provides a Serverless Application Model template transformer to streamline the templating of Lambda + API Gateway designs, but it assumes (like everything else about the API Gateway) that you’re actually serving an API that returns JSON content. So, unfortunately, it won’t work for this use case because we want to return HTML content. Instead, we’ll have to enumerate every resource as usual.

The Skeleton
We’ll be using YAML for the template in this example. I find it easier to read than JSON, but you can easily convert between the two with a converter if you disagree.

AWSTemplateFormatVersion: '2010-09-09'
Description: Serverless status page app
Resources:
  # [...Resources]
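As an aside, converting between the two really is a one-liner; a quick sketch (mine, not from the post) using the third-party PyYAML package:

import json, sys
import yaml  # third-party: pip install pyyaml

# Read plain YAML on stdin, emit JSON on stdout. Note that CloudFormation's
# short-form tags such as !Ref would need custom constructors.
json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=2)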

The Status-Checker Lambda Resource
This one is triggered on a schedule by CloudWatch, and looks like:

# Status Checker Lambda
CheckerLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code: ./lambda.zip
    Environment:
      Variables:
        TABLE_NAME: !Ref DynamoTable
    Handler: checker.handler
    Role:
      Fn::GetAtt:
      - CheckerLambdaRole
      - Arn
    Runtime: python3.6
    Timeout: 45
CheckerLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
    - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
    - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
      - Action:
        - sts:AssumeRole
        Effect: Allow
        Principal:
          Service:
          - lambda.amazonaws.com
CheckerLambdaTimer:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 minute)
    Targets:
    - Id: CheckerLambdaTimerLambdaTarget
      Arn:
        Fn::GetAtt:
        - CheckerLambda
        - Arn
CheckerLambdaTimerPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName: !Ref CheckerLambda
    SourceArn:
      Fn::GetAtt:
      - CheckerLambdaTimer
      - Arn
    Principal: events.amazonaws.com

Let’s break that down a bit.

The CheckerLambda is the actual Lambda function. The Code section is a local path to a ZIP file containing the code and its dependencies. I’m using CloudFormation’s packaging feature to automatically push the deployable to S3.

The CheckerLambdaRole is the IAM role the Lambda will assume which grants it access to DynamoDB in addition to the usual Lambda logging permissions.

The CheckerLambdaTimer is the CloudWatch Events Rule that triggers the checker to run once per minute.

The CheckerLambdaTimerPermission grants CloudWatch the ability to invoke the checker Lambda function on its interval.

The Web Page Gateway
The API Gateway handles incoming requests for the web page, invokes the Lambda, and then returns the Lambda’s results as HTML content. Its template looks like:

# API Gateway for Web Page Lambda
PageGateway:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: Service Checker Gateway
PageResource:
  Type: AWS::ApiGateway::Resource
  Properties:
    RestApiId: !Ref PageGateway
    ParentId:
      Fn::GetAtt:
      - PageGateway
      - RootResourceId
    PathPart: page
PageGatewayMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    AuthorizationType: NONE
    HttpMethod: GET
    Integration:
      Type: AWS
      IntegrationHttpMethod: POST
      Uri:
        Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${WebRenderLambda.Arn}/invocations
      RequestTemplates:
        application/json: |
          {
              "method": "$context.httpMethod",
              "body" : $input.json('$'),
              "headers": {
                  #foreach($param in $input.params().header.keySet())
                  "$param": "$util.escapeJavaScript($input.params().header.get($param))"
                  #if($foreach.hasNext),#end
                  #end
              }
          }
      IntegrationResponses:
      - StatusCode: 200
        ResponseParameters:
          method.response.header.Content-Type: "'text/html'"
        ResponseTemplates:
          text/html: "$input.path('$')"
    ResourceId: !Ref PageResource
    RestApiId: !Ref PageGateway
    MethodResponses:
    - StatusCode: 200
      ResponseParameters:
        method.response.header.Content-Type: true
PageGatewayProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !Ref PageGatewayDeployment
    RestApiId: !Ref PageGateway
    StageName: Prod
PageGatewayDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn: PageGatewayMethod
  Properties:
    RestApiId: !Ref PageGateway
    Description: PageGateway deployment
    StageName: Stage

There’s a lot going on here, but the real meat is in the PageGatewayMethod section. There are a couple of properties that deviate from the defaults, which is why we couldn’t use the SAM transformer.

First, we’re passing request headers through to the Lambda in the RequestTemplates section. I’m doing this so I can validate incoming auth headers. The API Gateway can do some types of auth, but I found it easier to check auth myself in the Lambda function, since the Gateway is designed to handle API calls and not browser requests.

Next, note that in the IntegrationResponses section we’re defining the Content-Type header to be 'text/html' (with single quotes) and defining the ResponseTemplate to be $input.path('$'). This is what makes the request render as an HTML page in your browser instead of just raw text.

Due to the StageName and PathPart values in the other sections, your actual page will be accessible at https://someId.execute-api.region.amazonaws.com/Prod/page. I have the page behind an existing reverse-proxy and give it a saner URL for end-users. The reverse proxy also attaches the auth header I mentioned above. If that header isn’t present, the Lambda will render an error page instead so the proxy can’t be bypassed.
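For illustration, here’s a rough sketch of how the rendering Lambda could check that header and return HTML. This is an assumption-heavy sketch, not the author’s code: the X-Auth-Token header name, the AUTH_TOKEN variable, and the inline template are hypothetical; only the TABLE_NAME variable and the Jinja approach come from the post.

# web.py - a rough sketch of the rendering Lambda (illustrative only; the
# header name, token source, and inline template are assumptions)
import os

import boto3
import jinja2  # packaged into lambda.zip alongside the handler

EXPECTED_TOKEN = os.environ.get('AUTH_TOKEN', '')  # hypothetical config
TEMPLATE = jinja2.Template(
    '<html><body><h1>Service Status</h1><ul>'
    '{% for s in services %}<li>{{ s.name }}: {{ s.status }}</li>{% endfor %}'
    '</ul></body></html>')

table = boto3.resource('dynamodb').Table(os.environ['TABLE_NAME'])

def handler(event, context):
    # The request template forwards all headers under the 'headers' key.
    headers = event.get('headers', {})
    if headers.get('X-Auth-Token') != EXPECTED_TOKEN:
        return '<html><body><h1>Forbidden</h1></body></html>'
    items = table.scan()['Items']
    # The gateway's text/html response template returns this string verbatim.
    return TEMPLATE.render(services=items)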

The Web Page Rendering Lambda
This Lambda is invoked by calls to the API Gateway and looks like:

# Web Page Lambda
WebRenderLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code: ./lambda.zip
    Environment:
      Variables:
        TABLE_NAME: !Ref DynamoTable
    Handler: web.handler
    Role:
      Fn::GetAtt:
      - WebRenderLambdaRole
      - Arn
    Runtime: python3.6
    Timeout: 30
WebRenderLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
    - arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
    - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
      - Action:
        - sts:AssumeRole
        Effect: Allow
        Principal:
          Service:
          - lambda.amazonaws.com
WebRenderLambdaGatewayPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref WebRenderLambda
    Action: lambda:invokeFunction
    Principal: apigateway.amazonaws.com
    SourceArn:
      Fn::Sub:
      - arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/*/*/*
      - __ApiId__: !Ref PageGateway

The WebRenderLambda and WebRenderLambdaRole should look familiar.

The WebRenderLambdaGatewayPermission is similar to the Status Checker’s CloudWatch permission, only this time it allows the API Gateway to invoke this Lambda.

The DynamoDB Table
This one is straightforward.

# DynamoDB table
DynamoTable:
  Type: AWS::DynamoDB::Table
  Properties:
    AttributeDefinitions:
    - AttributeName: name
      AttributeType: S
    ProvisionedThroughput:
      WriteCapacityUnits: 1
      ReadCapacityUnits: 1
    TableName: status-page-checker-results
    KeySchema:
    - KeyType: HASH
      AttributeName: name

The Deployment
We’ve made it this far defining every resource in a template that we can check in to version control, so we might as well script the deployment too, rather than manually managing the CloudFormation stack via the AWS web console.

Since I’m using the packaging feature, I first run:

$ aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket <some-bucket-name> \
    --output-template-file template-packaged.yaml
Uploading to 34cd6e82c5e8205f9b35e71afd9e1548  1922559 / 1922559.0  (100.00%)
Successfully packaged artifacts and wrote output template to file template-packaged.yaml.

Then to deploy the template (whether new or modified), I run:

$ aws cloudformation deploy \
    --region '<aws-region>' \
    --template-file template-packaged.yaml \
    --stack-name '<some-name>' \
    --capabilities CAPABILITY_IAM
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - <some-name>

And that’s it! You’ve just created a dynamic web page that will never require you to SSH anywhere, patch a server, recover from a disaster after Amazon terminates your unhealthy EC2 instance, or deal with any number of other pitfalls that are now the problem of some ops person at AWS. And you can reproduce deployments and make changes with confidence because everything is defined in the template and can be tracked in version control.

Central Logging in Multi-Account Environments

Post Syndicated from matouk original https://aws.amazon.com/blogs/architecture/central-logging-in-multi-account-environments/

Centralized logging is often required in large enterprise environments for a number of reasons, ranging from compliance and security to analytics and application-specific needs.

I’ve seen that in a multi-account environment, whether the accounts belong to the same line of business or multiple business units, collecting logs in a central, dedicated logging account is an established best practice. It helps security teams detect malicious activities both in real-time and during incident response. It provides protection to log data in case it is accidentally or intentionally deleted. It also helps application teams correlate and analyze log data across multiple application tiers.

This blog post provides a solution and building blocks to stream Amazon CloudWatch log data across accounts. In a multi-account environment this repeatable solution could be deployed multiple times to stream all relevant Amazon CloudWatch log data from all accounts to a centralized logging account.

Solution Summary 

The solution uses Amazon Kinesis Data Streams and a log destination to set up an endpoint in the logging account to receive streamed logs, and uses Amazon Kinesis Data Firehose to deliver log data to an Amazon Simple Storage Service (S3) bucket. Application accounts will subscribe to stream all (or part) of their Amazon CloudWatch logs to a defined destination in the logging account via subscription filters.

Below is a diagram illustrating how the various services work together.

In the logging account, a Kinesis data stream is created to receive streamed log data and a log destination is created to facilitate remote streaming, configured to use the Kinesis data stream as its target.

The Amazon Kinesis Data Firehose stream is created to deliver log data from the data stream to S3. The delivery stream uses a generic AWS Lambda function for data validation and transformation.

In each application account, a subscription filter is created between each Amazon CloudWatch log group and the destination created for this log group in the logging account.

The following steps are involved in setting up the central-logging solution:

  1. Create an Amazon S3 bucket for your central logging in the logging account
  2. Create an AWS Lambda function for log data transformation and decoding in the logging account
  3. Create a central logging stack as a logging-account destination, ready to receive streamed logs and deliver them to S3
  4. Create a subscription in application accounts to deliver logs from a specific CloudWatch log group to the logging-account destination (see the boto3 sketch after this list)
  5. Create Amazon Athena tables to query and analyze log data in your logging account
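Before walking through those steps, here’s a minimal boto3 sketch of the wiring that steps 3 and 4 establish (the CloudFormation templates below do the destination side declaratively). This is illustrative only: every name, ARN, and account number is a placeholder, and the calls shown are the standard CloudWatch Logs APIs rather than anything specific to this post.

# Illustrative only: all names, ARNs, and account IDs are placeholders.
import boto3

# -- In the logging account: expose a destination backed by the stream (step 3)
logs = boto3.client('logs', region_name='us-east-1')
logs.put_destination(
    destinationName='central-log-destination',
    targetArn='arn:aws:kinesis:us-east-1:111111111111:stream/central-log-stream',
    roleArn='arn:aws:iam::111111111111:role/service-role/central-log-role',
)
# Allow the application account to attach subscription filters.
logs.put_destination_policy(
    destinationName='central-log-destination',
    accessPolicy='''{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "999999999999"},
            "Action": "logs:PutSubscriptionFilter",
            "Resource": "arn:aws:logs:us-east-1:111111111111:destination:central-log-destination"
        }]
    }''',
)

# -- In the application account: subscribe a log group to it (step 4)
app_logs = boto3.client('logs', region_name='us-east-1')  # app-account credentials
app_logs.put_subscription_filter(
    logGroupName='/aws/lambda/my-app',
    filterName='central-logging',
    filterPattern='',  # an empty pattern streams every log event
    destinationArn='arn:aws:logs:us-east-1:111111111111:destination:central-log-destination',
)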

Creating a log destination in your logging account

In this section, we will set up the logging-account side of the solution, providing detail on the list above. The example I use is for the us-east-1 region; however, any region where the required services are available could be used.

It’s important to note that your logging-account destination and application-account subscription must be in the same region. You can deploy the solution multiple times to create destinations in all required regions if application accounts use multiple regions.

Step 1: Create an S3 bucket

Use the CloudFormation template below to create the S3 bucket in the logging account. This template also configures the bucket to archive log data to Glacier after 60 days.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "CF Template to create S3 bucket for central logging",
  "Parameters": {
    "BucketName": {
      "Type": "String",
      "Description": "Central logging bucket name"
    }
  },
  "Resources": {
    "CentralLoggingBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {"Ref": "BucketName"},
        "LifecycleConfiguration": {
          "Rules": [{
            "Id": "ArchiveToGlacier",
            "Prefix": "",
            "Status": "Enabled",
            "Transitions": [{
              "TransitionInDays": "60",
              "StorageClass": "GLACIER"
            }]
          }]
        }
      }
    }
  },
  "Outputs": {
    "CentralLogBucket": {
      "Description": "Central log bucket",
      "Value": {"Ref": "BucketName"},
      "Export": {"Name": "CentralLogBucketName"}
    }
  }
}

To create your central-logging bucket, do the following (or script it with the boto3 sketch after this list):

  1. Save the template file to your local developer machine as “central-log-bucket.json”
  2. From the CloudFormation console, select “create new stack” and import the file “central-log-bucket.json”
  3. Fill in the parameters and complete the stack creation steps
  4. Verify the bucket has been created successfully and take a note of the bucket name
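If you’d rather script those console steps, a quick boto3 sketch might look like the following (the stack and bucket names here are placeholders of my choosing, not values from the post):

# Illustrative only: stack and bucket names are placeholders.
import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')
with open('central-log-bucket.json') as f:
    template_body = f.read()

cfn.create_stack(
    StackName='central-log-bucket',
    TemplateBody=template_body,
    Parameters=[{'ParameterKey': 'BucketName',
                 'ParameterValue': 'my-central-logging-bucket'}],
)
# Block until the stack (and therefore the bucket) finishes creating.
cfn.get_waiter('stack_create_complete').wait(StackName='central-log-bucket')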

Step 2: Create data processing Lambda function

Use the template below to create a Lambda function in your logging account that will be used by Amazon Kinesis Data Firehose for data transformation during the delivery process to S3. This function is based on the AWS Lambda kinesis-firehose-cloudwatch-logs-processor blueprint.

The function could be created manually from the blueprint or by using the CloudFormation template below. To find the blueprint, navigate to Lambda -> Create -> Function -> Blueprints.

This function will unzip the event message, parse it, and verify that it is a valid CloudWatch log event. Additional processing can be added if needed. As this function is generic, it can be reused by all log-delivery streams.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Create cloudwatch data processing lambda function",
  "Resources": {
    "LambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {
              "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }]
        },
        "Path": "/",
        "Policies": [{
          "PolicyName": "firehoseCloudWatchDataProcessing",
          "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
              "Effect": "Allow",
              "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
              ],
              "Resource": "arn:aws:logs:*:*:*"
            }]
          }
        }]
      }
    },
    "FirehoseDataProcessingFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Handler": "index.handler",
        "Role": {"Fn::GetAtt": ["LambdaRole","Arn"]},
        "Description": "Firehose cloudwatch data processing",
        "Code": {
          "ZipFile" : { "Fn::Join" : ["\n", [
            "'use strict';",
            "const zlib = require('zlib');",
            "function transformLogEvent(logEvent) {",
            "    return Promise.resolve(`${logEvent.message}\n`);",
            "}",
            "exports.handler = (event, context, callback) => {",
            "    Promise.all(event.records.map(r => {",
            "        const buffer = new Buffer(r.data, 'base64');",
            "        const decompressed = zlib.gunzipSync(buffer);",
            "        const data = JSON.parse(decompressed);",
            "        if (data.messageType !== 'DATA_MESSAGE') {",
            "            return Promise.resolve({",
            "                recordId: r.recordId,",
            "                result: 'ProcessingFailed',",
            "            });",
            "        } else {",
            "            const promises = data.logEvents.map(transformLogEvent);",
            "            return Promise.all(promises).then(transformed => {",
            "                const payload = transformed.reduce((a, v) => a + v, '');",
            "                const encoded = new Buffer(payload).toString('base64');",
            "                console.log('---------------payloadv2:'+JSON.stringify(payload, null, 2));",
            "                return {",
            "                    recordId: r.recordId,",
            "                    result: 'Ok',",
            "                    data: encoded,",
            "                };",
            "            });",
            "        }",
            "    })).then(recs => callback(null, { records: recs }));",
            "};"
          ]]}
        },
        "Runtime": "nodejs6.10",
        "Timeout": "60"
      }
    }
  },
  "Outputs": {
    "Function": {
      "Description": "Function ARN",
      "Value": {"Fn::GetAtt": ["FirehoseDataProcessingFunction","Arn"]},
      "Export": {"Name": {"Fn::Sub": "${AWS::StackName}-Function"}}
    }
  }
}

To create the function, follow the steps below:

  1. Save the template file as “central-logging-lambda.json”
  2. Login to the logging account and, from the CloudFormation console, select “create new stack”
  3. Import the file “central-logging-lambda.json” and click Next
  4. Follow the steps to create the stack and verify successful creation
  5. Take note of the Lambda function ARN from the Outputs section
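
If you prefer to script this step rather than use the console, the same template can be deployed with the AWS SDK. Here is a minimal sketch using Python and boto3; the stack name is an arbitrary placeholder, and CAPABILITY_IAM is required because the template creates an IAM role:

  import boto3

  cfn = boto3.client("cloudformation")  # credentials for the logging account assumed

  with open("central-logging-lambda.json") as f:
      template_body = f.read()

  # CAPABILITY_IAM acknowledges that this stack creates IAM resources
  cfn.create_stack(
      StackName="central-logging-lambda",
      TemplateBody=template_body,
      Capabilities=["CAPABILITY_IAM"],
  )

  # Block until the stack is ready, then print its outputs (the function ARN)
  cfn.get_waiter("stack_create_complete").wait(StackName="central-logging-lambda")
  stack = cfn.describe_stacks(StackName="central-logging-lambda")["Stacks"][0]
  print(stack["Outputs"])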

Step 3: Create log destination in logging account

A log destination is used as the target of a subscription from application accounts. A destination can be shared between multiple subscriptions; however, in the architecture suggested in this solution, all logs streamed to the same destination are stored in the same S3 location. If you would like to store log data in a different hierarchy, or in a completely different bucket, you need to create separate destinations.

As noted previously, your destination and subscription have to be in the same region.

Use the template below to create the destination stack in the logging account.

  "Description": "Create log destination and required resources",

      "Description":"Destination logging bucket"
      "Description":"S3 location for the logs streamed to this destination; example marketing/prod/999999999999/flow-logs/"
      "Description":"CloudWatch logs data processing function"
      "Description":"Source application account number"
    "MyStream": {
      "Type": "AWS::Kinesis::Stream",
      "Properties": {
        "Name": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-Stream"] ]},
        "RetentionPeriodHours" : 48,
        "ShardCount": 1,
        "Tags": [
            "Key": "Solution",
            "Value": "CentralLogging"
    "LogRole" : {
      "Type"  : "AWS::IAM::Role",
      "Properties" : {
          "AssumeRolePolicyDocument" : {
              "Statement" : [ {
                  "Effect" : "Allow",
                  "Principal" : {
                      "Service" : [ {"Fn::Join": [ "", [ "logs.", { "Ref": "AWS::Region" }, ".amazonaws.com" ] ]} ]
                  "Action" : [ "sts:AssumeRole" ]
              } ]
          "Path" : "/service-role/"
    "LogRolePolicy" : {
        "Type" : "AWS::IAM::Policy",
        "Properties" : {
            "PolicyName" : {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-LogPolicy"] ]},
            "PolicyDocument" : {
              "Version": "2012-10-17",
              "Statement": [
                  "Effect": "Allow",
                  "Action": ["kinesis:PutRecord"],
                  "Resource": [{ "Fn::GetAtt" : ["MyStream", "Arn"] }]
                  "Effect": "Allow",
                  "Action": ["iam:PassRole"],
                  "Resource": [{ "Fn::GetAtt" : ["LogRole", "Arn"] }]
            "Roles" : [ { "Ref" : "LogRole" } ]
    "LogDestination" : {
      "Type" : "AWS::Logs::Destination",
      "DependsOn" : ["MyStream","LogRole","LogRolePolicy"],
      "Properties" : {
        "DestinationName": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-Destination"] ]},
        "RoleArn": { "Fn::GetAtt" : ["LogRole", "Arn"] },
        "TargetArn": { "Fn::GetAtt" : ["MyStream", "Arn"] },
        "DestinationPolicy": { "Fn::Join" : ["",[
				"{\"Version\" : \"2012-10-17\",\"Statement\" : [{\"Effect\" : \"Allow\",",
                " \"Principal\" : {\"AWS\" : \"", {"Ref":"SourceAccount"} ,"\"},",
                "\"Action\" : \"logs:PutSubscriptionFilter\",",
                " \"Resource\" : \"", 
                {"Fn::Join": [ "", [ "arn:aws:logs:", { "Ref": "AWS::Region" }, ":" ,{ "Ref": "AWS::AccountId" }, ":destination:",{ "Ref" : "AWS::StackName" },"-Destination" ] ]}  ,"\"}]}"

    "S3deliveryStream": {
      "DependsOn": ["S3deliveryRole", "S3deliveryPolicy"],
      "Type": "AWS::KinesisFirehose::DeliveryStream",
      "Properties": {
        "DeliveryStreamName": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-DeliveryStream"] ]},
        "DeliveryStreamType": "KinesisStreamAsSource",
        "KinesisStreamSourceConfiguration": {
            "KinesisStreamARN": { "Fn::GetAtt" : ["MyStream", "Arn"] },
            "RoleARN": {"Fn::GetAtt" : ["S3deliveryRole", "Arn"] }
        "ExtendedS3DestinationConfiguration": {
          "BucketARN": {"Fn::Join" : [ "", ["arn:aws:s3:::",{"Ref":"LogBucketName"}] ]},
          "BufferingHints": {
            "IntervalInSeconds": "60",
            "SizeInMBs": "50"
          "CompressionFormat": "UNCOMPRESSED",
          "Prefix": {"Ref": "LogS3Location"},
          "RoleARN": {"Fn::GetAtt" : ["S3deliveryRole", "Arn"] },
          "ProcessingConfiguration" : {
              "Enabled": "true",
              "Processors": [
                "Parameters": [ 
                    "ParameterName": "LambdaArn",
                    "ParameterValue": {"Ref":"ProcessingLambdaARN"}
                "Type": "Lambda"

    "S3deliveryRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
              "Sid": "",
              "Effect": "Allow",
              "Principal": {
                "Service": "firehose.amazonaws.com"
              "Action": "sts:AssumeRole",
              "Condition": {
                "StringEquals": {
                  "sts:ExternalId": {"Ref":"AWS::AccountId"}
    "S3deliveryPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-FirehosePolicy"] ]},
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
              "Effect": "Allow",
              "Action": [
              "Resource": [
                {"Fn::Join": ["", [ {"Fn::Join" : [ "", ["arn:aws:s3:::",{"Ref":"LogBucketName"}] ]}]]},
                {"Fn::Join": ["", [ {"Fn::Join" : [ "", ["arn:aws:s3:::",{"Ref":"LogBucketName"}] ]}, "*"]]}
              "Effect": "Allow",
              "Action": [
              "Resource": "*"
        "Roles": [{"Ref": "S3deliveryRole"}]

   "Destination" : {
      "Description": "Destination",
      "Value": {"Fn::Join": [ "", [ "arn:aws:logs:", { "Ref": "AWS::Region" }, ":" ,{ "Ref": "AWS::AccountId" }, ":destination:",{ "Ref" : "AWS::StackName" },"-Destination" ] ]},
      "Export" : { "Name" : {"Fn::Sub": "${AWS::StackName}-Destination" }}


To create your log destination and all required resources, follow these steps:

  1. Save your template as “central-logging-destination.json”
  2. Login to your logging account and, from the CloudFormation console, select “create new stack”
  3. Import the file “central-logging-destination.json” and click Next
  4. Fill in the parameters to configure the log destination and click Next
    a.  LogBucketName is the same bucket as in the “create central logging bucket” step
    b.  LogS3Location is the directory hierarchy for saving log data that will be delivered to this destination
    c.  ProcessingLambdaARN is the function ARN from the “create data processing Lambda function” step
    d.  SourceAccount is the application account number where the subscription will be created
  5. Follow the default steps to create the stack and verify successful creation
  6. Take note of the destination ARN as it appears in the Outputs section, as you did above.
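
To confirm the destination programmatically (and retrieve its ARN without the console), you can list destinations in the logging account through the CloudWatch Logs API. A small sketch, assuming the stack above was named “central-logging-destination”:

  import boto3

  logs = boto3.client("logs")  # credentials for the logging account assumed

  # The template names the destination "<stack-name>-Destination"
  resp = logs.describe_destinations(DestinationNamePrefix="central-logging-destination")
  for dest in resp["destinations"]:
      print(dest["destinationName"], dest["arn"])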

Step 4: Create the log subscription in your application account

In this section, we will create the subscription filter in one of the application accounts to stream logs from the CloudWatch log group to the log destination that was created in your logging account.

Create log subscription filter

The subscription filter is created between the CloudWatch log group and a destination endpoint. A subscription can be filtered to send part (or all) of the logs in the log group. For example, you can create a subscription filter to stream only flow logs with status REJECT, as sketched below.
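
For reference, CloudWatch Logs uses space-delimited filter patterns for flow logs; a pattern along these lines (the field names are placeholders, only their positions matter) would match rejected flows only:

  [version, account, eni, source, destination, srcport, destport, protocol, packets, bytes, windowstart, windowend, action="REJECT", flowlogstatus]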

Use the CloudFormation template below to create the subscription filter. The subscription filter and the log destination must be in the same region.

  "Description": "Create log subscription filter for a specific Log Group",

      "Description":"ARN of logs destination"
      "Description":"Name of LogGroup to forward logs from"
      "Description":"Filter pattern to filter events to be sent to log destination; Leave empty to send all logs"
    "SubscriptionFilter" : {
      "Type" : "AWS::Logs::SubscriptionFilter",
      "Properties" : {
        "LogGroupName" : { "Ref" : "LogGroupName" },
        "FilterPattern" : { "Ref" : "FilterPattern" },
        "DestinationArn" : { "Ref" : "DestinationARN" }

To create a subscription filter for one of the CloudWatch log groups in your application account, follow the steps below:

  1. Save the template as “central-logging-subscription.json”
  2. Login to your application account and, from the CloudFormation console, select “create new stack”
  3. Select the file “central-logging-subscription.json” and click next
  4. Fill in the parameters as appropriate to your environment as you did above
    a.  DestinationARN is the value obtained in the “create log destination in logging account” step
    b.  FilterPattern is the filter value for log data to be streamed to your logging account (leave empty to stream all logs in the selected log group)
    c.  LogGroupName is the log group as it appears under CloudWatch Logs
  5. Verify successful creation of the subscription
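
If the console is not an option, the same subscription can also be created directly with the CloudWatch Logs API. A minimal boto3 sketch, with placeholder values for the log group name and destination ARN:

  import boto3

  logs = boto3.client("logs")  # credentials for the application account assumed

  logs.put_subscription_filter(
      logGroupName="/my-app/flow-logs",  # placeholder log group name
      filterName="central-logging-subscription",
      filterPattern="",  # empty pattern streams all events
      destinationArn="arn:aws:logs:us-east-1:111111111111:destination:central-logging-destination-Destination",
  )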

This completes the deployment process on both the logging-account and application-account sides. After a few minutes, log data will start streaming to the central logging destination defined in your logging account.

Step 5: Analyzing log data

Once log data is centralized, it opens the door to running analytics on the consolidated data for business or security reasons. One of the powerful services that AWS offers is Amazon Athena.

Amazon Athena allows you to query data in S3 using standard SQL.

Follow the steps below to create a simple table and run queries on the flow logs data that has been collected from your application accounts:

  1. Login to your logging account and, from the Amazon Athena console, use the DDL below in your query editor to create a new table


CREATE EXTERNAL TABLE prod_vpc_flow_logs (
Version INT,
Account STRING,
InterfaceId STRING,
SourceAddress STRING,
DestinationAddress STRING,
SourcePort INT,
DestinationPort INT,
Protocol INT,
Packets INT,
Bytes INT,
StartTime INT,
EndTime INT,
Action STRING,
LogStatus STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "^([^ ]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([0-9]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)$"
)
LOCATION 's3://central-logging-company-do-not-delete/';

  2. Click “Run query” and verify a successful run. This creates the table “prod_vpc_flow_logs”

  3. You can then run queries against the table data, as below:
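
For instance, this illustrative query (any standard SQL against the table works) summarizes rejected traffic by source address:

  SELECT SourceAddress, COUNT(*) AS rejected_flows
  FROM prod_vpc_flow_logs
  WHERE Action = 'REJECT'
  GROUP BY SourceAddress
  ORDER BY rejected_flows DESC
  LIMIT 20;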


By following the steps I’ve outlined, you will build a central logging solution to stream CloudWatch logs from one application account to a central logging account. This solution is repeatable and could be deployed multiple times for multiple accounts and logging requirements.


About the Author

Mahmoud Matouk is a Senior Cloud Infrastructure Architect. He works with our customers to help accelerate migration and cloud adoption at the enterprise level.


Dotcom: Obama Admitted “Mistakes Were Made” in Megaupload Case

Post Syndicated from Andy original https://torrentfreak.com/dotcom-obama-admitted-mistakes-were-made-in-megaupload-case-180301/

When Megaupload was forcefully shut down in 2012, it initially appeared like ‘just’ another wave of copyright enforcement action by US authorities.

When additional details began to filter through, the reality of what had happened was nothing short of extraordinary.

Not only were large numbers of Megaupload servers and millions of dollars seized, but Kim Dotcom’s home in New Zealand was subjected to a military-style raid comprised of helicopters and dozens of heavily armed special tactics police. The whole thing was monitored live by the FBI.

Few people who watched the events of that now-infamous January day unfold came to the conclusion this was a routine copyright-infringement case. According to Kim Dotcom, whose life had just been turned upside down, something of this scale must’ve filtered down from the very top of the US government. It was hard to disagree.

At the time, Dotcom told TorrentFreak that then-Vice President Joe Biden directed attorney Neil MacBride to target the cloud storage site, and ever since, the Megaupload founder has leveled increasingly serious allegations at officials of the former government of Barack Obama.

For example, Dotcom says that since the US would have difficulty gaining access to him in his former home of Hong Kong, the government of New Zealand was persuaded to welcome him in, knowing they would eventually turn him over to the United States. More recently he’s been turning up the pressure again, such as a tweet on February 20th which cast more light on that process.

“Joe Biden had a White House meeting with an ‘extradition expert’ who worked for Hong Kong police and a handful of Hollywood executives to discuss my case. A week prior to this meeting Neil MacBride hand-delivered his action plan to Biden’s chief of staff, also at the White House,” Dotcom wrote.

But this claim is just the tip of an extremely large iceberg that’s involved illegal spying on Dotcom in New Zealand and a dizzying array of legal battles that are set to go on for years to come. But perhaps of most interest now is that rather than wilting away under the pressure, Dotcom appears to be just warming up.

A few hours ago Dotcom commented on an article published in The Hill which revealed that Barack Obama will visit New Zealand in March, possibly to celebrate the opening of Air New Zealand’s new route to the U.S.

Rather than expressing disappointment, the Megaupload founder seemed pleased that the former president would be touching down next month.

“Great. I’ll have a Court subpoena waiting for him in New Zealand,” Dotcom wrote.

But that was a mere hors d’oeuvre; the main course was yet to come. And come it did.

“A wealthy Asian Megaupload shareholder hired a friend of the Obamas to enquire about our case. This person was recommended by a member of the Chinese politburo ‘if you want to get to Obama directly’. We did,” Dotcom revealed.

Dotcom says he’ll release a transcript detailing what Obama told his friend on March 21 when Obama arrives in town but in the meantime, he offered another little taster.

“Mistakes were made. It hasn’t gone well,” Obama reportedly told the person reporting back to Megaupload. “It’s a problem. I’ll see to it after the election.”

Of course, Obama’s position after the election was much different to what had gone before, but that didn’t stop Dotcom’s associates infiltrating the process aimed at keeping the Democrats in power.

“Our friendly Obama contact smuggled an @EFF lawyer into a re-election fundraiser hosted by former Vice President Joe Biden,” he revealed.

“When Biden was asked about the Megaupload case he bragged that it was his case and that he ‘took care of it’,” which is what Dotcom has been claiming all along.

On March 21, when Obama lands in New Zealand, Dotcom says he’ll be waiting.

“I’m looking forward to @BarackObama providing some insight into the political dimension of the Megaupload case when he arrives in the New Zealand jurisdiction,” he teased.

Better get the popcorn ready….

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Switzerland Hopes New Law Will Keep it Off U.S. ‘Pirate Watchlist’

Post Syndicated from Ernesto original https://torrentfreak.com/switzerland-hopes-new-law-will-keep-it-off-us-pirate-watchlist-180228/

In a few weeks, the Office of the United States Trade Representative (USTR) will publish its yearly Special 301 Report, highlighting countries that fail to live up to U.S. copyright protection standards.

In recent years Switzerland was among countries that were placed on the ‘Watch List.’ In 2017, the US reported that the Swiss had made some progress, but not enough. Its policies towards online piracy were not up to par, according to U.S. standards.

“Switzerland remains on the Watch List this year due to U.S. concerns regarding specific difficulties in Switzerland’s system of online copyright protection and enforcement,” USTR wrote in its Special 301 Report.

One of the key issues the United States identified is the lack of enforcement against hosting companies that do business with pirate sites. Branding these as a “safe haven” for pirates, the US called for suitable countermeasures.

A second problem that was highlighted is the so-called ‘Logistep Decision.‘ In 2010 the Swiss Federal Supreme Court barred anti-piracy outfit Logistep from harvesting the IP addresses of file-sharers. The Court ruled that IP addresses amount to private data, and outlawed the tracking of file-sharers in Switzerland.

According to the USTR, this ruling prevents copyright holders from enforcing their rights, and they called on the Swiss Government to address this concern as well.

Today, nearly a year has passed and it looks like the recommendations were not ignored. In a letter to the USTR, the Swiss Government writes that the two main complaints are dealt with in its new copyright law, which was introduced late last year.

“The draft bill, adopted by the Federal Council at its meeting on November 22, 2017, addresses both of those concerns. It aims at further modernizing Swiss copyright law for the purposes of the digital environment and steps up the fight against Internet piracy,” the Swiss write.

The new copyright law addresses the hosting problem by introducing a “take-down-and-stay-down” policy. Internet services will be required to remove infringing content from their platforms and prevent that same content from reappearing. Failure to comply will result in prosecution.

“The ‘stay down’ will prevent rogue websites from being hosted in Switzerland and will make the fight against Internet piracy more effective and sustainable. That should put an end to criticism directed against Switzerland as a host country for infringing sites,” Switzerland informs the U.S.

Similarly, the Logistep ruling will no longer be an issue either if the country’s new copyright law is implemented.

“[T]he draft bill clarifies that the processing of data for the purposes of prosecuting copyright infringement is permissible. With that, it puts an end to the debate that followed the Logistep decision about the extent to which the recording of IP addresses for prosecution purposes is admissible.”

Many copyright holder groups have also asked for ISP blocking of pirate sites, but Switzerland notes that this idea is off the table for now. There is not enough support in Parliament for an Internet blocking provision which may jeopardize acceptance of the entire draft bill, their letter explains.

While not mentioned in the letter, downloading and streaming copyright-infringing content for personal use also remains unpunished, video games and software excepted. Uploading and other types of distribution of infringing content are not permitted, however.

Still, the Swiss hope that the newly proposed changes to its copyright law will be enough to have it removed from the Special 301 Watch List.

“Switzerland is confident that the revision of the Swiss Copyright Act will more effectively address the challenges posed by the Internet,” the Swiss Government writes, adding that it “looks forward to continuing to work with the U.S. to further clarify any issue relating to online piracy.”

Switzerland’s letter to the United States Trade Representative is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it’s the ownership of a small open-source project or so. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what the last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of the time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize it, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
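
To make the safe-by-default idea concrete, here is a tiny illustrative sketch (Python’s sqlite3 is used purely for brevity): with a parameterized query, the obvious way to write the code is also the injection-safe way.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT)")

  name = "Robert'); DROP TABLE users;--"  # attacker-controlled input

  # Unsafe pattern: splicing input into the statement string, e.g.
  #   conn.execute("SELECT * FROM users WHERE name = '%s'" % name)

  # Safe by default: the driver binds the value; it is never parsed as SQL.
  rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
  print(rows)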

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as a yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.