
All Systems Go! 2017 CfP Open

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-cfp-open.html

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

systemd.conf will not take place this year; All Systems Go! takes its place. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

BPI Breaks Record After Sending 310 Million Google Takedowns

Post Syndicated from Andy original https://torrentfreak.com/bpi-breaks-record-after-sending-310-million-google-takedowns-170619/

A little over a year ago during March 2016, music industry group BPI reached an important milestone. After years of sending takedown notices to Google, the group burst through the 200 million URL barrier.

The fact that it took BPI several years to reach its 200 million milestone made the surpassing of the quarter billion milestone a few months later even more remarkable. In October 2016, the group sent its 250 millionth takedown to Google, a figure that nearly doubled when accounting for notices sent to Microsoft’s Bing.

But despite the volumes, the battle hadn’t been won, let alone the war. The BPI’s takedown machine continued to run at a remarkable rate, churning out millions more notices per week.

As a result, yet another new milestone was reached this month when the BPI smashed through the 300 million URL barrier. Then, days later, a further 10 million were added, with the latter couple of million added during the time it took to put this piece together.

BPI takedown notices, as reported by Google

While demanding that Google places greater emphasis on its de-ranking of ‘pirate’ sites, the BPI has called again and again for a “notice and stay down” regime, to ensure that content taken down by the search engine doesn’t simply reappear under a new URL. It’s a position BPI maintains today.

“The battle would be a whole lot easier if intermediaries played fair,” a BPI spokesperson informs TF.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down.”

The long-standing suggestion is that the volume of takedown notices sent would reduce if a “take down, stay down” regime was implemented. The BPI says it’s difficult to present a precise figure but infringing content has a tendency to reappear, both in search engines and on hosting sites.

“Google rejects repeat notices for the same URL. But illegal content reappears as it is re-indexed by Google. As to the sites that actually host the content, the vast majority of notices sent to them could be avoided if they implemented take-down & stay-down,” BPI says.

The fact that the BPI has added 60 million more takedowns since the quarter billion milestone a few months ago is quite remarkable, particularly since there appears to be little slowdown from month to month. However, the numbers have grown so huge that 310 million now feels a lot like 250 million, with just a few added on top for good measure.

That an extra 60 million takedowns can almost be dismissed as a handful is an indication of just how massive the issue is online. While pirates always welcome an abundance of links to juicy content, it’s no surprise that groups like the BPI are seeking more comprehensive and sustainable solutions.

Previously, it was hoped that the Digital Economy Bill would provide some relief, hopefully via government intervention and the imposition of a search engine Code of Practice. In the event, however, all pressure on search engines was removed from the legislation after a separate voluntary agreement was reached.

All parties agreed that the voluntary code should come into effect two weeks ago on June 1 so it seems likely that some effects should be noticeable in the near future. But the BPI says it’s still early days and there’s more work to be done.

“BPI has been working productively with search engines since the voluntary code was agreed to understand how search engines approach the problem, but also what changes can and have been made and how results can be improved,” the group explains.

“The first stage is to benchmark where we are and to assess the impact of the changes search engines have made so far. This will hopefully be completed soon, then we will have better information of the current picture and from that we hope to work together to continue to improve search for rights owners and consumers.”

With more takedown notices in the pipeline not yet publicly reported by Google, the BPI informs TF that it has now notified the search giant of 315 million links to illegal content.

“That’s an astonishing number. More than 1 in 10 of the entire world’s notices to Google come from BPI. This year alone, one in every three notices sent to Google from BPI is for independent record label repertoire,” BPI concludes.

While groups like the BPI have clearly developed systems to cope with the huge numbers of takedown notices required in today's environment, few rightsholders are happy with the status quo. With that in mind, the fight will continue until search engines are forced into a compromise, and considering the implications, that still looks to be a very distant prospect.


UK Police Claim Success in Keeping Gambling Ads off Pirate Sites

Post Syndicated from Andy original https://torrentfreak.com/uk-police-claim-success-in-keeping-gambling-ads-off-pirate-sites-170614/

Over the past several years, there has been a major effort by entertainment industry groups to cut off revenue streams to ‘pirate’ sites. The theory is that if sites cannot generate funds, their operators will eventually lose interest.

Since advertising is a key money earner for any website, significant resources have been expended trying to keep ads off sites that directly or indirectly profit from infringement. It’s been a multi-pronged affair, with agencies being encouraged to do the right thing and brands warned that their ads appearing on pirate sites does nothing for their image.

One sector that has trailed behind most is the gambling industry. Until fairly recently, ads for some of the UK's largest bookmakers were a regular feature on many large pirate sites, either embedded in pages or, more often than not, appearing via popup or pop-under spreads. Now, however, a significant change is being reported.

According to the City of London Police’s Intellectual Property Crime Unit (PIPCU), over the past 12 months there has been an 87% drop in adverts for licensed gambling operators being displayed on infringing websites.

The research was carried out by whiteBULLET, a brand safety and advertising solutions company which helps advertisers to assess whether placing an advert on a particular URL will cause it to appear on a pirate site.

PIPCU says that licensed gambling operators have an obligation to “keep crime out of gambling” due to their commitments under the Gambling Act 2005. However, the Gambling Commission, the UK’s gambling regulatory body, has recently been taking additional steps to tackle the problem.

In September 2015, the Commission consulted on amendments (pdf) to licensing conditions that would compel licensees to ensure that advertisements “placed by themselves and others” do not appear on websites providing unauthorized access to copyrighted content.

After the consultation was published in May 2016 (pdf), all respondents agreed in principle that gambling operators should not advertise on pirate sites. A month later, the Commission said it would ban the placement of gambling ads on such platforms.

When the new rules came into play last October, 40 gambling companies (including Bet365, Coral and Sky Bet, who had previously been called out for displaying ads on pirate sites) were making use of PIPCU’s ‘Infringing Website List‘, a database of sites that police claim are actively involved in piracy.

Speaking yesterday, acting Detective Superintendent Peter Ratcliffe, Head of the Police Intellectual Property Crime Unit (PIPCU), welcomed the ensuing reduction in ad placement on ‘pirate’ domains.

“The success of a strong relationship built between PIPCU and The Gambling Commission can be seen by these figures. This is a fantastic example of a joint working initiative between police and an industry regulator,” Ratcliffe said.

“We commend the 40 gambling companies who are already using the Infringing Website List and encourage others to sign up. We will continue to encourage all UK advertisers to become a member of the Infringing Website List to ensure they’re not inadvertently funding criminal websites.”


NSA Document Outlining Russian Attempts to Hack Voter Rolls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_document_ou.html

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational – and there's no evidence that they had any actual effect – they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia’s military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company’s network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA's analysis ends. We don't know whether those 122 targeted attacks were successful, or what their effects were if so. We don't know whether other election software companies besides VR Systems were targeted, or what the GRU's overall plan was — if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect — anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC – one of the states that VR Systems supports – but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don’t know what happened next, if anything. VR Systems isn’t commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn’t much of a smoking gun, it’s yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI’s affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner’s NSA account or her personal account, but in either case, it’s incredibly sloppy tradecraft.

With President Trump’s election, the issue of Russian interference in last year’s campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It’s interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It’s easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there’s a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don’t know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won’t be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don’t have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and – by extension – our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what's historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

Raspberry Pi Looper-Synth-Drum…thing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-looper/

To replace his iPad for live performance, Colorado-based musician Toby Hendricks built a looper, complete with an impressive internal sound library, all running on a Raspberry Pi.

Raspberry Pi Looper/synth/drum thing

Check out the guts here: https://youtu.be/mCOHFyI3Eoo My first venture into Raspberry Pi stuff. Running a custom Pure Data patch I've been working on for a couple of years on a Raspberry Pi 3. This project took a couple of months and I'm still tweaking stuff here and there, but it's pretty much complete; it even survived its first live show!

Toby’s build is a pretty mean piece of kit, as this video attests. Not only does it have a multitude of uses, but the final build is beautiful. Do make sure to watch to the end of the video for a wonderful demonstration of the kit.

Inside the Raspberry Pi looper

Alongside the Raspberry Pi and Behringer U-Control sound card, Toby used Pure Data, a multimedia visual programming language, and a Teensy 3.6 processor to complete the build. Together, these allow for playback of a plethora of sounds, which can either be internally stored, or externally introduced via audio connectors along the back.

This guy is finally taking shape. DIY looper/fx box/sample player/synth. #teensy #arduino #raspberrypi #puredata


Delay, reverb, distortion, and more are controlled by sliders along one side, while pre-installed effects are selected and played via some rather beautiful SparkFun buttons on the other. Loop buttons, volume controls, and a repurposed Nintendo DS screen complete the interface.

Raspberry Pi Looper Guts

Thought I’d do a quick overview of the guts of my pi project. Seems like many folks have been interested in seeing what the internals look like.

Code for the looper can be found on Toby’s GitHub here. Make sure to continue to follow him via YouTube and Instagram for updates on the build, including these fancy new buttons.

Casting my own urethane knobs and drum pads from 3D printed molds! #3dprinted #urethanecasting #diy


I got the music in me

If you want to get musical with a Raspberry Pi, but the thought of recreating Toby’s build is a little daunting, never fear! Our free GPIO Music Box resource will help get you started. And projects such as Mike Horne’s fabulous Raspberry Pi music box should help inspire you to take your build further.

Raspberry Pi Looper post image of Mike Horne's music box

Mike’s music box boasts wonderful flashy buttons and turny knobs for ultimate musical satisfaction!

If you use a Raspberry Pi in any sort of musical adventure, be sure to share your project in the comments below!

European Astro Pi: Mission complete

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/european-astro-pi-mission-complete/

In October last year, with the European Space Agency and CNES, we launched the first ever European Astro Pi challenge. We asked students from all across Europe to write code for the flight of French ESA astronaut Thomas Pesquet to the International Space Station (ISS) as part of the Proxima mission.

The winners were announced back in March, and since then their code has been uploaded to the ISS and run in space!

Thomas Pesquet aboard the ISS with the Astro Pi units

French ESA astronaut Thomas Pesquet with the Astro Pi units. Image credit ESA.

Code from 64 student teams ran between 28 April and 10 May, supervised by Thomas, in the European Columbus module.

Astro Pi on Twitter

We can confirm student programs are finished, results are downloaded from @Space_Station and teams will receive their data by next week 🛰️📡

On 10 May the results, data, and log files were downloaded to the ground, and the following week they were emailed back to the student teams for analysis.

Ecole St-André d’E on Twitter

We've just received the data recorded by our #python code from the #iss @CNES @astro_pi @Thom_astro. Now to analyse it all!

We’ve looked at the results, and we can see that many of the teams have been successful in their missions: congratulations to all of you! We look forward to reading your write-ups and blogs.

In pictures

In a surprise turn of events, we learnt that Thomas set up a camera to take regular pictures of the Astro Pi units for one afternoon. This was entirely voluntary on his part and was not scheduled as part of the mission. Thank you so much, Thomas!

Some lucky teams have some very nice souvenirs from the ISS. Here are a couple of them:

Astro Pi units on the ISS photographed by Thomas Pesquet

Juvara team – Italy (left) and South London Raspberry Jam – UK (right). Image credit ESA.

Astro Pi units on the ISS photographed by Thomas Pesquet

Astro Team – Italy (left) and AstroShot – Greece (right). Image credit ESA.

Until next time…

This brings the 2016/17 European Astro Pi challenge to a close. We would like to thank all the students and teachers who participated; the ESA Education, Integration and Implementation, Ground Systems, and Flight Control teams; BioTesc (ESA’s user operations control centre for Astro Pi); and especially Thomas Pesquet himself.

Thomas and Russian Soyuz commander Oleg Novitskiy return to Earth today, concluding their six-month stay on the ISS. After a three-hour journey in their Soyuz spacecraft, they will land in the Kazakh steppe at approximately 15:09 this afternoon. You can watch coverage of the departure, re-entry, and landing on NASA TV.

Astro Pi has been a hugely enjoyable project to work on, and we hope to be back in the new school year (2017-18) with brand-new challenges for teachers and students.

Bicycle-powered Menabrea beer dispenser

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/menabrea-beer-bike/

Cycle. Beat the on-screen pace. Receive free Menabrea beer. All on a system controlled by a Raspberry Pi.

Honestly, what’s not to like?

Menabrea UK

If you’re wondering what it takes to win an ice cold pint at one of our Race to Biella events, this clip will give you more of an idea. It’s no mean feat!! Do you think you have the pedal power? Join us tonight at The Avonbridge Hotel for sunshine, cycling and, of course, a refreshing pint or two.

Glasgow-based creative content agency Bright Signals were contacted by Wire with a brief for a pretty tasty project: create something for Menabrea that ties in with the Giro d’Italia cycle race passing close to the brewery in Biella, Northern Italy.

Cycle race, was it? Menabrea brewery, you say?

The team at Bright Signals came up with the superb idea of a bicycle-powered Menabrea beer dispenser.

It must be noted that when I said the words ‘bicycle-powered beer dispenser’ aloud in the Raspberry Pi office, many heads turned and Director of Software Engineering Gordon Hollingworth dropped everything he was doing in order to learn more.

The final build took a fortnight to pull together, with Bright Signals working on the Raspberry Pi-controlled machine and Wire in charge of its graphic design.

Menabrea Beer Bike Raspberry Pi

Cheer for beer!
Image c/o Grant Gibson and Menabrea

Reuse, reduce, return to the bar

“This was probably one of the most enjoyable builds I’ve worked on,” says Bright Signals’ Deputy Managing Director, Grant Gibson. “We had a really clear idea of what we were doing from the start, and we managed to reuse loads of parts from the donor bicycle as we simplified the bike and built the pouring system.” The team integrated the bottle cage of the donor bike into the main dispensing mechanism, and the bike’s brake levers now cradle a pint glass at the perfect angle for pouring.

A Raspberry Pi powers the 24″ screen atop the beer dispenser, as well as the buttons, pouring motors, and lights.

Menabrea Beer Bike Raspberry Pi

Perfect size for the Raspberry Pi lobby!
Image c/o Grant Gibson

Giro di Scozia

Fancy trying Menabrea’s bicycle-powered beer dispenser for yourself? The final stop of its 4-week tour will be the Beer Cafe in Glasgow this Friday 2nd June. If you make it to the event, be sure to share your photos and video with us in the comments below, or via our social media channels such as Twitter, Facebook, and Instagram. And if you end up building your own beer-dispensing cycle, definitely write up a tutorial for the project! We know at least one person who is keenly interested…

Menabrea on Twitter

Another successful racer wins a pint of Menabrea in the #racetobiella. The bike’s at The Fox and Hound, Houston today…


New Features for IAM Policy Summaries – Services and Actions Not Granted by a Policy

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/new-features-for-iam-policy-summaries-services-and-actions-not-granted-by-a-policy/

Last month, we introduced policy summaries to make it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. On Thursday, May 25, I announced three new features that have been added to policy summaries and reviewed one of those features: resource summaries. Tomorrow, I will discuss how policy summaries can help you find potential typos in your IAM policies.

Today, I describe how you can view the services and actions that are implicitly denied, which is the same as if the services or actions are not granted by an IAM policy. This feature allows you to see which actions are not included at each access level for a service that has limited access, which can help you pinpoint the actions that are necessary to grant Full: List and Read permissions to a specific service, for example. In this blog post, I cover two examples that show how you can use this feature to see which services and actions are not granted by a policy.

Show remaining services and actions

From the policy summary in the IAM console, you can now see the services and actions that are not granted by a policy by choosing the link next to the Allow heading (see the following screenshot). This enables you to view the remaining services or actions in a service with partial access, without having to go to the documentation.

Let’s look at the AWS managed policy for the Developer Power User. This policy grants access to 99 out of 100 services, as shown in the following screenshot. You might want to view the remaining service to determine if you should grant access to it, or you might want to confirm that this policy does not grant access to IAM. To see which service is missing from the policy, I choose the Show remaining 1 link.

Screenshot showing the "Show remaining 1" link

I then scroll down and look for the service that has None as the access level. I see that IAM is not included for this policy.

Screenshot showing that the policy does not grant access to IAM

To go back to the original view, I choose Hide Remaining 1.

Screenshot showing the "Hide remaining 1" link

Let’s look at how this feature can help you pinpoint which actions you need to grant for a specific access level. For policies that grant limited access to a service, this link shows in the service details summary the actions that are not granted by the policy. Let’s say I created a policy that grants full Amazon S3 list and read access. After creating the policy, I realize I did not grant all the list actions because I see Limited: List in the policy summary, as shown in the following screenshot.

Screenshot showing Limited: List in the policy summary

Rather than going to the documentation to find out which actions I am missing, I review the policy summary to determine what I forgot to include. When I choose S3, I see that only 3 out of 4 actions are granted. When I choose Show remaining 27, I see the list action I might have forgotten to include in the list-access level.

Screenshot showing the "Show remaining 27" link

The following screenshot shows I forgot to include s3:ListObjects in the policy. I choose Edit policy and add this action to the IAM policy to ensure I have Full: List and Read access to S3.

Screenshot showing the action left out of the policy
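
If you prefer to script the fix rather than use the Edit policy page, a minimal boto3 sketch follows. The policy ARN and the statement shown are hypothetical placeholders, not the exact policy from this walkthrough; they only illustrate publishing a corrected default version of a customer managed policy.

# Hypothetical sketch: publish a new default version of a managed policy
# so the summary shows Full: List and Read for Amazon S3.
import json
import boto3

iam = boto3.client("iam")

# Placeholder ARN; substitute your own customer managed policy.
policy_arn = "arn:aws:iam::111122223333:policy/s3-list-read-example"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],  # wildcards cover the action that was left out
            "Resource": "*"
        }
    ]
}

# Managed policies keep up to five versions; SetAsDefault makes this new
# version the one that the policy summary describes.
iam.create_policy_version(
    PolicyArn=policy_arn,
    PolicyDocument=json.dumps(policy_document),
    SetAsDefault=True,
)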

For some policies, you will not see the links shown in this post. This is because the policy grants full access to the services and there are no remaining services to be granted.

Summary

Policy summaries make it easy to view and understand permissions and resources defined in a policy without having to view the associated JSON. You can now view services and actions not included in a policy to see what was omitted by the policy without having to refer to the related documentation. To see policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page. Tomorrow, I will explain how policy summaries can help you find and troubleshoot typos in IAM policies.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

Build a Serverless Architecture to Analyze Amazon CloudFront Access Logs Using AWS Lambda, Amazon Athena, and Amazon Kinesis Analytics

Post Syndicated from Rajeev Srinivasan original https://aws.amazon.com/blogs/big-data/build-a-serverless-architecture-to-analyze-amazon-cloudfront-access-logs-using-aws-lambda-amazon-athena-and-amazon-kinesis-analytics/

Nowadays, it’s common for a web server to be fronted by a global content delivery service, like Amazon CloudFront. This type of front end accelerates delivery of websites, APIs, media content, and other web assets to provide a better experience to users across the globe.

The insights gained by analyzing Amazon CloudFront access logs help improve website availability through bot detection and mitigation, optimize web content based on the devices and browsers used to view your webpages, reduce perceived latency by caching popular objects closer to their viewers, and so on. This results in a significant improvement in the overall perceived experience for the user.

This blog post provides a way to build a serverless architecture to generate some of these insights. To do so, we analyze Amazon CloudFront access logs both at rest and in transit through the stream. This serverless architecture uses Amazon Athena to analyze large volumes of CloudFront access logs (on the scale of terabytes per day), and Amazon Kinesis Analytics for streaming analysis.

The analytic queries in this blog post focus on three common use cases:

  1. Detection of common bots using the user agent string
  2. Calculation of current bandwidth usage per Amazon CloudFront distribution per edge location
  3. Determination of the current top 50 viewers

However, you can easily extend the architecture described to power dashboards for monitoring, reporting, and trigger alarms based on deeper insights gained by processing and analyzing the logs. Some examples are dashboards for cache performance, usage and viewer patterns, and so on.

The following diagram shows this architecture.

Prerequisites

Before you set up this architecture, install the AWS Command Line Interface (AWS CLI) tool on your local machine, if you don’t have it already.

Setup summary

The following steps are involved in setting up the serverless architecture on the AWS platform:

  1. Create an Amazon S3 bucket for your Amazon CloudFront access logs to be delivered to and stored in.
  2. Create a second Amazon S3 bucket to receive processed logs and store the partitioned data for interactive analysis.
  3. Create an Amazon Kinesis Firehose delivery stream to batch, compress, and deliver the preprocessed logs for analysis.
  4. Create an AWS Lambda function to preprocess the logs for analysis.
  5. Configure Amazon S3 event notification on the CloudFront access logs bucket, which contains the raw logs, to trigger the Lambda preprocessing function.
  6. Create an Amazon DynamoDB table to look up partition details, such as partition specification and partition location.
  7. Create an Amazon Athena table for interactive analysis.
  8. Create a second AWS Lambda function to add new partitions to the Athena table based on the log delivered to the processed logs bucket.
  9. Configure Amazon S3 event notification on the processed logs bucket to trigger the Lambda partitioning function.
  10. Configure Amazon Kinesis Analytics application for analysis of the logs directly from the stream.

ETL and preprocessing

In this section, we parse the CloudFront access logs as they are delivered, which occurs multiple times in an hour. We filter out commented records and use the user agent string to decipher the browser name, the name of the operating system, and whether the request has been made by a bot. For more details on how to decipher this information from the user agent string, see the user-agents 1.1.0 package documentation on the Python Package Index.

We use the Lambda preprocessing function to perform these tasks on individual rows of the access log. On successful completion, the rows are pushed to an Amazon Kinesis Firehose delivery stream to be persistently stored in an Amazon S3 bucket, the processed logs bucket.

To create a Firehose delivery stream with a new or existing S3 bucket as the destination, follow the steps described in Create a Firehose Delivery Stream to Amazon S3 in the S3 documentation. Keep most of the default settings, but select an AWS Identity and Access Management (IAM) role that has write access to your S3 bucket and specify GZIP compression. Name the delivery stream CloudFrontLogsToS3.
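
If you would rather script this step than use the console, a rough boto3 sketch follows; the role and bucket ARNs are placeholders for the resources created above, not values from the post.

# Sketch: create the CloudFrontLogsToS3 delivery stream with GZIP output.
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="CloudFrontLogsToS3",
    ExtendedS3DestinationConfiguration={
        # Placeholder ARNs: the IAM role with write access to your
        # processed logs bucket, and that bucket's ARN.
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::my-processed-logs-bucket",
        "Prefix": "out/",
        "CompressionFormat": "GZIP",  # matches the GZIP setting described above
    },
)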

Another prerequisite for this setup is to create an IAM role that provides the necessary permissions for our AWS Lambda function to get the data from S3, process it, and deliver it to the CloudFrontLogsToS3 delivery stream.

Let’s use the AWS CLI to create the IAM role using the following steps:

  1. Create the IAM policy (lambda-exec-policy) for the Lambda execution role to use.
  2. Create the Lambda execution role (lambda-cflogs-exec-role) and assign the service to use this role.
  3. Attach the policy created in step 1 to the Lambda execution role.

To download the policy document to your local machine, type the following command.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/preprocessiong-lambda/lambda-exec-policy.json  <path_on_your_local_machine>

To download the assume policy document to your local machine, type the following command.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/preprocessiong-lambda/assume-lambda-policy.json  <path_on_your_local_machine>

Following is the lambda-exec-policy.json file, which is the IAM policy used by the Lambda execution role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchAccess",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Sid": "S3Access",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Sid": "FirehoseAccess",
            "Effect": "Allow",
            "Action": [
                "firehose:ListDeliveryStreams",
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Resource": [
                "arn:aws:firehose:*:*:deliverystream/CloudFrontLogsToS3"
            ]
        }
    ]
}

To create the IAM policy used by the Lambda execution role, type the following command.

aws iam create-policy --policy-name lambda-exec-policy --policy-document file://<path>/lambda-exec-policy.json

To create the AWS Lambda execution role and assign the service to use this role, type the following command.

aws iam create-role --role-name lambda-cflogs-exec-role --assume-role-policy-document file://<path>/assume-lambda-policy.json

Following is the assume-lambda-policy.json file, to grant Lambda permission to assume a role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

To attach the policy (lambda-exec-policy) created to the AWS Lambda execution role (lambda-cflogs-exec-role), type the following command.

aws iam attach-role-policy --role-name lambda-cflogs-exec-role --policy-arn arn:aws:iam::<your-account-id>:policy/lambda-exec-policy

Now that we have created the CloudFrontLogsToS3 Firehose delivery stream and the lambda-cflogs-exec-role IAM role for Lambda, the next step is to create a Lambda preprocessing function.

This Lambda preprocessing function parses the CloudFront access logs delivered into the S3 bucket and performs a few transformation and mapping operations on the data. The Lambda function adds descriptive information, such as the browser and the operating system that were used to make this request based on the user agent string found in the logs. The Lambda function also adds information about the web distribution to support scenarios where CloudFront access logs are delivered to a centralized S3 bucket from multiple distributions. With the solution in this blog post, you can get insights across distributions and their edge locations.
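
The deployment package used in the console steps below is the pre-data.zip file referenced there. Purely as an illustration of the transformation just described (not the post's actual code), a minimal Python sketch of the per-row enrichment and Firehose hand-off might look like the following; it targets the Python 2.7 runtime used in the post, and the field positions and helper names are assumptions.

# Minimal sketch, not the post's pre-data.zip: enrich CloudFront log rows
# with browser, OS, and bot information, then push them to Firehose.
import os
import urllib

import boto3
from user_agents import parse as parse_ua  # user-agents package from PyPI

firehose = boto3.client("firehose")
STREAM_NAME = os.environ.get("KINESIS_FIREHOSE_STREAM", "CloudFrontLogsToS3")

def enrich(line, filename, distribution):
    """Return one enriched tab-separated row, or None for comment records."""
    if line.startswith("#"):
        return None  # CloudFront log files begin with commented header lines
    fields = line.rstrip("\n").split("\t")
    ua = parse_ua(urllib.unquote(fields[10]))  # cs(User-Agent) column
    # Appended columns follow the cf_logs Athena table defined later:
    # browserfamily, osfamily, isbot, filename, distribution.
    fields += [ua.browser.family, ua.os.family, str(ua.is_bot), filename, distribution]
    return "\t".join(fields) + "\n"

def deliver(rows):
    """Push enriched rows to the delivery stream in batches of 500."""
    for i in range(0, len(rows), 500):  # PutRecordBatch accepts at most 500 records
        firehose.put_record_batch(
            DeliveryStreamName=STREAM_NAME,
            Records=[{"Data": row} for row in rows[i:i + 500]],
        )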

Use the Lambda Management Console to create a new Lambda function with a Python 2.7 runtime and the s3-get-object-python blueprint. Open the console, and on the Configure triggers page, choose the name of the S3 bucket where the CloudFront access logs are delivered. Choose Put for Event type. For Prefix, type the name of the prefix, if any, for the folder where CloudFront access logs are delivered, for example cloudfront-logs/. To invoke Lambda to retrieve the logs from the S3 bucket as they are delivered, select Enable trigger.

Choose Next and provide a function name to identify this Lambda preprocessing function.

For Code entry type, choose Upload a file from Amazon S3. For S3 link URL, type https.amazonaws.com//preprocessing-lambda/pre-data.zip. In the same section, also create an environment variable with the key KINESIS_FIREHOSE_STREAM and set its value to the name of the Firehose delivery stream, CloudFrontLogsToS3.

Choose lambda-cflogs-exec-role as the IAM role for the Lambda function, and type prep-data.lambda_handler for the value for Handler.

Choose Next, and then choose Create Lambda.
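
The same function can also be created without the console. The following is only a sketch: the function name, bucket, and key are placeholders, and the S3 trigger still has to be configured separately as described above.

# Sketch: create the preprocessing function from a copy of pre-data.zip.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_function(
    FunctionName="cloudfront-logs-preprocessing",  # placeholder name
    Runtime="python2.7",
    Role="arn:aws:iam::111122223333:role/lambda-cflogs-exec-role",  # placeholder account ID
    Handler="prep-data.lambda_handler",
    Code={
        # Placeholder location of the downloaded deployment package.
        "S3Bucket": "my-artifacts-bucket",
        "S3Key": "preprocessing-lambda/pre-data.zip",
    },
    Environment={"Variables": {"KINESIS_FIREHOSE_STREAM": "CloudFrontLogsToS3"}},
    Timeout=300,
    MemorySize=256,
)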

Table creation in Amazon Athena

In this step, we will build the Athena table. Use the Athena console in the same region and create the table using the query editor.

CREATE EXTERNAL TABLE IF NOT EXISTS cf_logs (
  logdate date,
  logtime string,
  location string,
  bytes bigint,
  requestip string,
  method string,
  host string,
  uri string,
  status bigint,
  referrer string,
  useragent string,
  uriquery string,
  cookie string,
  resulttype string,
  requestid string,
  header string,
  csprotocol string,
  csbytes string,
  timetaken bigint,
  forwardedfor string,
  sslprotocol string,
  sslcipher string,
  responseresulttype string,
  protocolversion string,
  browserfamily string,
  osfamily string,
  isbot string,
  filename string,
  distribution string
)
PARTITIONED BY(year string, month string, day string, hour string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LOCATION 's3://<pre-processing-log-bucket>/prefix/';

Creation of the Athena partition

A popular website with millions of requests each day routed using Amazon CloudFront can generate a large volume of logs, on the order of a few terabytes a day. We strongly recommend that you partition your data to effectively restrict the amount of data scanned by each query. Partitioning significantly improves query performance and substantially reduces cost. The Lambda partitioning function adds the partition information to the Athena table for the data delivered to the preprocessed logs bucket.

Before delivering the preprocessed Amazon CloudFront logs file into the preprocessed logs bucket, Amazon Kinesis Firehose adds a UTC time prefix in the format YYYY/MM/DD/HH. This approach supports multilevel partitioning of the data by year, month, date, and hour. You can invoke the Lambda partitioning function every time a new processed Amazon CloudFront log is delivered to the preprocessed logs bucket. To do so, configure the Lambda partitioning function to be triggered by an S3 Put event.

For a website with millions of requests, a large number of preprocessed logs can be delivered multiple times in an hour—for example, at the interval of one each second. To avoid querying the Athena table for partition information every time a preprocessed log file is delivered, you can create an Amazon DynamoDB table for fast lookup.

Based on the year, month, date, and hour in the prefix of the delivered log, the Lambda partitioning function checks whether the partition specification exists in the Amazon DynamoDB table. If it doesn't, it's added to the table using an atomic operation, and then the Athena table is updated.
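
The partitioning function used in this post is the Java deployment package downloaded below. Purely to illustrate the lookup-then-alter logic just described, a Python sketch could look like the following; the database name and staging location are placeholders.

# Illustration only (the post's actual function is the Java .jar below):
# register a partition exactly once via a DynamoDB conditional write,
# then add it to the Athena table.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
athena = boto3.client("athena")

def register_partition(year, month, day, hour, s3_path):
    partition_spec = 'year="%s"; month="%s"; day="%s"; hour="%s"' % (year, month, day, hour)
    try:
        # Atomic: succeeds only the first time this partition spec is seen,
        # so the Athena table is updated exactly once per partition.
        dynamodb.put_item(
            TableName="athenapartitiondetails",
            Item={
                "PartitionSpec": {"S": partition_spec},
                "PartitionPath": {"S": s3_path},
                "PartitionType": {"S": "ALL"},
            },
            ConditionExpression="attribute_not_exists(PartitionSpec)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # partition already registered; nothing to do
        raise
    athena.start_query_execution(
        QueryString=(
            "ALTER TABLE cf_logs ADD IF NOT EXISTS "
            "PARTITION (year='%s', month='%s', day='%s', hour='%s') "
            "LOCATION '%s'" % (year, month, day, hour, s3_path)
        ),
        QueryExecutionContext={"Database": "default"},                        # placeholder
        ResultConfiguration={"OutputLocation": "s3://my-athena-staging-dir/"},  # placeholder
    )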

Type the following command to create the Amazon DynamoDB table.

aws dynamodb create-table --table-name athenapartitiondetails \
--attribute-definitions AttributeName=PartitionSpec,AttributeType=S \
--key-schema AttributeName=PartitionSpec,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=100

Here the following is true:

  • PartitionSpec is the hash key and is a representation of the partition signature—for example, year=”2017”; month=”05”; day=”15”; hour=”10”.
  • Depending on the rate at which the processed log files are delivered to the processed log bucket, you might have to increase the ReadCapacityUnits and WriteCapacityUnits values, if these are throttled.

The other attributes besides PartitionSpec are the following:

  • PartitionPath – The S3 path associated with the partition.
  • PartitionType – The type of partition used (Hour, Month, Date, Year, or ALL). In this case, ALL is used.

The next step is to create the IAM role that provides permissions for the Lambda partitioning function. The role requires permissions to do the following:

  1. Look up and write partition information to DynamoDB.
  2. Alter the Athena table with new partition information.
  3. Perform Amazon CloudWatch logs operations.
  4. Perform Amazon S3 operations.

To download the policy document to your local machine, type the following command.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/partitioning-lambda/lambda-partition-function-execution-policy.json  <path_on_your_local_machine>

To download the assume policy document to your local machine, type the following command.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/partitioning-lambda/assume-lambda-policy.json <path_on_your_local_machine>

Let’s use the AWS CLI to create the IAM role using the following three steps:

  1. Create the IAM policy (lambda-partition-exec-policy) used by the Lambda execution role.
  2. Create the Lambda execution role (lambda-partition-execution-role) and assign the service to use this role.
  3. Attach the policy created in step 1 to the Lambda execution role.

To create the IAM policy used by the Lambda execution role, type the following command.

aws iam create-policy --policy-name lambda-partition-exec-policy --policy-document file://<path>/lambda-partition-function-execution-policy.json

To create the Lambda execution role and assign the service to use this role, type the following command.

aws iam create-role --role-name lambda-partition-execution-role --assume-role-policy-document file://<path>/assume-lambda-policy.json

To attach the policy (lambda-partition-exec-policy) created to the AWS Lambda execution role (lambda-partition-execution-role), type the following command.

aws iam attach-role-policy --role-name lambda-partition-execution-role --policy-arn arn:aws:iam::<your-account-id>:policy/lambda-partition-exec-policy

Following is the lambda-partition-function-execution-policy.json file, which is the IAM policy used by the Lambda execution role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DDBTableAccess",
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:*:*:table/athenapartitiondetails"
        },
        {
            "Sid": "S3Access",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "AthenaAccess",
            "Effect": "Allow",
            "Action": [ "athena:*" ],
            "Resource": [ "*" ]
        },
        {
            "Sid": "CloudWatchLogsAccess",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}

Download the .jar file containing the Java deployment package to your local machine.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/partitioning-lambda/aws-lambda-athena-1.0.0.jar <path_on_your_local_machine>

From the AWS Management Console, create a new Lambda function with Java8 as the runtime. Select the Blank Function blueprint.

On the Configure triggers page, choose the name of the S3 bucket where the preprocessed logs are delivered. Choose Put for the Event Type. For Prefix, type the name of the prefix folder, if any, where preprocessed logs are delivered by Firehose—for example, out/. For Suffix, type the extension of the compression format in which the Firehose stream (CloudFrontLogsToS3) delivers the preprocessed logs—for example, gz. To invoke Lambda to retrieve the logs from the S3 bucket as they are delivered, select Enable Trigger.

Choose Next and provide a function name to identify this Lambda partitioning function.

Choose Java8 for Runtime for the AWS Lambda function. Choose Upload a .ZIP or .JAR file for the Code entry type, and choose Upload to upload the downloaded aws-lambda-athena-1.0.0.jar file.

Next, create the following environment variables for the Lambda function:

  • TABLE_NAME – The name of the Athena table (for example, cf_logs).
  • PARTITION_TYPE – The partition to be created for the Athena table, based on the subfolders of the S3 bucket to which the logs are delivered: Year, Month, Date, or Hour. Set this to ALL to partition by year, month, date, and hour.
  • DDB_TABLE_NAME – The name of the DynamoDB table holding partition information (for example, athenapartitiondetails).
  • ATHENA_REGION – The current AWS Region for the Athena table to construct the JDBC connection string.
  • S3_STAGING_DIR – The Amazon S3 location where your query output is written. The JDBC driver asks Athena to read the results and provide rows of data back to the user (for example, s3://<bucketname>/<folder>/).

To configure the function handler and IAM, for Handler copy and paste the name of the handler: com.amazonaws.services.lambda.CreateAthenaPartitionsBasedOnS3EventWithDDB::handleRequest. Choose the existing IAM role, lambda-partition-execution-role.

Choose Next and then Create Lambda.

Interactive analysis using Amazon Athena

In this section, we analyze the historical data that’s been collected since we added the partitions to the Amazon Athena table for data delivered to the preprocessing logs bucket.

Scenario 1 is robot traffic by edge location.

SELECT COUNT(*) AS ct, requestip, location FROM cf_logs
WHERE isbot='True'
GROUP BY requestip, location
ORDER BY ct DESC;

Scenario 2 is total bytes transferred per distribution for each edge location for your website.

SELECT distribution, location, SUM(bytes) as totalBytes
FROM cf_logs
GROUP BY location, distribution;

Scenario 3 is the top 50 viewers of your website.

SELECT requestip, COUNT(*) AS ct  FROM cf_logs
GROUP BY requestip
ORDER BY ct DESC
LIMIT 50;
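
You can run these queries in the Athena console; as a small sketch, they can also be submitted programmatically, for example with boto3. The database name and staging location below are placeholders.

# Sketch: submit one of the interactive queries above and print the results.
import time

import boto3

athena = boto3.client("athena")

query = """
SELECT COUNT(*) AS ct, requestip, location FROM cf_logs
WHERE isbot='True'
GROUP BY requestip, location
ORDER BY ct DESC
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},                        # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-staging-dir/"},  # placeholder
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])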

Streaming analysis using Amazon Kinesis Analytics

In this section, you deploy a stream processing application using Amazon Kinesis Analytics to analyze the preprocessed Amazon CloudFront log stream. This application analyzes the records directly from the Amazon Kinesis Firehose stream as they are delivered to the preprocessed logs bucket. The stream queries in this section are focused on gaining the following insights:

  • The IP address of the bot, identified by its Amazon CloudFront edge location, that is currently sending requests to your website. The query also includes the total bytes transferred as part of the response.
  • The total bytes served per distribution per edge location for your website.
  • The top 50 viewers of your website.

To download the firehose-access-policy.json file, type the following.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/kinesisanalytics/firehose-access-policy.json  <path_on_your_local_machine>

To download the assume-kinesisanalytics-policy.json file, type the following.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis/kinesisanalytics/assume-kinesisanalytics-policy.json <path_on_your_local_machine>

Before we create the Amazon Kinesis Analytics application, we need to create an IAM role that gives the analytics application permission to read from the Amazon Kinesis Firehose delivery stream.

Let’s use the AWS CLI to create the IAM role using the following three steps:

  1. Create the IAM policy (firehose-access-policy) for the Kinesis Analytics execution role to use.
  2. Create the Kinesis Analytics execution role (ka-execution-role) and assign the service to use this role.
  3. Attach the policy created in step 1 to the Kinesis Analytics execution role.

Following is the firehose-access-policy.json file, which is the IAM policy used by Kinesis Analytics to read the Firehose delivery stream.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AmazonFirehoseAccess",
            "Effect": "Allow",
            "Action": [
                "firehose:DescribeDeliveryStream",
                "firehose:Get*"
            ],
            "Resource": [
                "arn:aws:firehose:*:*:deliverystream/CloudFrontLogsToS3"
            ]
        }
    ]
}

Following is the assume-kinesisanalytics-policy.json file, to grant Amazon Kinesis Analytics permissions to assume a role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "kinesisanalytics.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

To create the IAM policy used by the Analytics execution role, type the following command.

aws iam create-policy --policy-name firehose-access-policy --policy-document file://<path>/firehose-access-policy.json

To create the Analytics execution role and assign the service to use this role, type the following command.

aws iam create-role --role-name ka-execution-role --assume-role-policy-document file://<path>/assume-kinesisanalytics-policy.json

To attach the policy (firehose-access-policy) created to the Analytics execution role (ka-execution-role), type the following command.

aws iam attach-role-policy --role-name ka-execution-role --policy-arn arn:aws:iam::<your-account-id>:policy/firehose-access-policy

To deploy the Analytics application, first download the configuration file and then modify ResourceARN and RoleARN for the Amazon Kinesis Firehose input configuration.

"KinesisFirehoseInput": { 
    "ResourceARN": "arn:aws:firehose:<region>:<account-id>:deliverystream/CloudFrontLogsToS3", 
    "RoleARN": "arn:aws:iam:<account-id>:role/ka-execution-role"
}

To download the Analytics application configuration file, type the following command.

aws s3 cp s3://aws-bigdata-blog/artifacts/Serverless-CF-Analysis//kinesisanalytics/kinesis-analytics-app-configuration.json <path_on_your_local_machine>

To deploy the application, type the following command.

aws kinesisanalytics create-application --application-name "cf-log-analysis" --cli-input-json file://<path>/kinesis-analytics-app-configuration.json

To start the application, type the following command.

aws kinesisanalytics start-application --application-name "cf-log-analysis" --input-configuration Id="1.1",InputStartingPositionConfiguration={InputStartingPosition="NOW"}
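
Starting the application is asynchronous, so it is worth confirming that it reaches the RUNNING state before expecting output. A small hedged sketch with boto3:

# Sketch: poll the cf-log-analysis application until it is RUNNING.
import time

import boto3

kinesisanalytics = boto3.client("kinesisanalytics")

for _ in range(30):  # check for up to about five minutes
    status = kinesisanalytics.describe_application(
        ApplicationName="cf-log-analysis"
    )["ApplicationDetail"]["ApplicationStatus"]
    print("Application status: " + status)
    if status == "RUNNING":
        break
    time.sleep(10)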

SQL queries using Amazon Kinesis Analytics

Scenario 1 is a query for detecting bots that are sending requests to your website.

-- Create output stream, which can be used to send to a destination
CREATE OR REPLACE STREAM "BOT_DETECTION" (requesttime TIME, destribution VARCHAR(16), requestip VARCHAR(64), edgelocation VARCHAR(64), totalBytes BIGINT);
-- Create pump to insert into output 
CREATE OR REPLACE PUMP "BOT_DETECTION_PUMP" AS INSERT INTO "BOT_DETECTION"
--
SELECT STREAM 
    STEP("CF_LOG_STREAM_001"."request_time" BY INTERVAL '1' SECOND) as requesttime,
    "distribution_name" as distribution,
    "request_ip" as requestip, 
    "edge_location" as edgelocation, 
    SUM("bytes") as totalBytes
FROM "CF_LOG_STREAM_001"
WHERE "is_bot" = true
GROUP BY "request_ip", "edge_location", "distribution_name",
STEP("CF_LOG_STREAM_001"."request_time" BY INTERVAL '1' SECOND),
STEP("CF_LOG_STREAM_001".ROWTIME BY INTERVAL '1' SECOND);

Scenario 2 is a query for total bytes transferred per distribution for each edge location for your website.

-- Create output stream, which can be used to send to a destination
CREATE OR REPLACE STREAM "BYTES_TRANSFFERED" (requesttime TIME, destribution VARCHAR(16), edgelocation VARCHAR(64), totalBytes BIGINT);
-- Create pump to insert into output 
CREATE OR REPLACE PUMP "BYTES_TRANSFFERED_PUMP" AS INSERT INTO "BYTES_TRANSFFERED"
-- Bytes Transffered per second per web destribution by edge location
SELECT STREAM 
    STEP("CF_LOG_STREAM_001"."request_time" BY INTERVAL '1' SECOND) as requesttime,
    "distribution_name" as distribution,
    "edge_location" as edgelocation, 
    SUM("bytes") as totalBytes
FROM "CF_LOG_STREAM_001"
GROUP BY "distribution_name", "edge_location", "request_date",
STEP("CF_LOG_STREAM_001"."request_time" BY INTERVAL '1' SECOND),
STEP("CF_LOG_STREAM_001".ROWTIME BY INTERVAL '1' SECOND);

Scenario 3 is a query for the top 50 viewers for your website.

-- Create output stream, which can be used to send to a destination
CREATE OR REPLACE STREAM "TOP_TALKERS" (requestip VARCHAR(64), requestcount DOUBLE);
-- Create pump to insert into output 
CREATE OR REPLACE PUMP "TOP_TALKERS_PUMP" AS INSERT INTO "TOP_TALKERS"
-- Top 50 talkers
SELECT STREAM ITEM as requestip, ITEM_COUNT as requestcount FROM TABLE(TOP_K_ITEMS_TUMBLING(
  CURSOR(SELECT STREAM * FROM "CF_LOG_STREAM_001"),
  'request_ip', -- name of column in single quotes
  50, -- number of top items
  60 -- tumbling window size in seconds
  )
);
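
Each of these queries writes to an in-application output stream; to persist the results, the stream still has to be connected to a destination. That step isn't covered here, but as a sketch, routing the BOT_DETECTION stream to another Firehose delivery stream (the destination stream name and the application version ID below are placeholders) would look something like this:

aws kinesisanalytics add-application-output --application-name "cf-log-analysis" --current-application-version-id 1 --output '{"Name":"BOT_DETECTION","KinesisFirehoseOutput":{"ResourceARN":"arn:aws:firehose:<region>:<account-id>:deliverystream/<your-destination-stream>","RoleARN":"arn:aws:iam::<account-id>:role/ka-execution-role"},"DestinationSchema":{"RecordFormatType":"JSON"}}'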

Conclusion

Following the steps in this blog post, you just built an end-to-end serverless architecture to analyze Amazon CloudFront access logs. You analyzed the logs in both interactive and streaming modes, using Amazon Athena and Amazon Kinesis Analytics respectively.

By creating a partition in Athena for the logs delivered to a centralized bucket, this architecture is optimized for performance and cost when analyzing large volumes of logs for popular websites that receive millions of requests. Here, we have focused on just three common use cases for analysis, sharing the analytic queries as part of the post. However, you can extend this architecture to gain deeper insights and generate usage reports to reduce latency and increase availability. This way, you can provide a better experience on your websites fronted with Amazon CloudFront.

In this blog post, we focused on building serverless architecture to analyze Amazon CloudFront access logs. Our plan is to extend the solution to provide rich visualization as part of our next blog post.


About the Authors

Rajeev Srinivasan is a Senior Solution Architect for AWS. He works very closely with our customers to provide big data and NoSQL solutions leveraging the AWS platform, and he enjoys coding. In his spare time he enjoys riding his motorcycle and reading books.

 

Sai Sriparasa is a consultant with AWS Professional Services. He works with our customers to provide strategic and tactical big data solutions with an emphasis on automation, operations & security on AWS. In his spare time, he follows sports and current affairs.

 

 



New Features for IAM Policy Summaries – Resource Summaries

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/new-features-for-iam-policy-summaries-resource-summaries/

In March, we introduced policy summaries, which make it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. Today, we added three new features to policy summaries to improve the experience of understanding and troubleshooting your policies. First, we added resource summaries for you to see the resources defined in your policies. Second, you can now see which services and actions are implicitly denied by a policy. This allows you to see the remaining actions available for a service with limited access. Third, it is now easier for you to identify potential typos in your policies because you can now see which services and actions are unrecognized by IAM. Today, Tuesday, and Wednesday, I will demonstrate these three new features. In today’s post, I review resource summaries.

Resource summaries

Policy summaries now show you the resources defined in a policy. Previously, policy summaries displayed either All for all resources, the Amazon Resource Name (ARN) for one resource, or Multiple for multiple resources specified in the policy. Starting today, you can see the resource type, region, and account ID to summarize the list of resources defined for each action in a policy. Let’s review a policy summary that specifies multiple resources.

The following policy grants access to three Amazon S3 buckets with multiple conditions.

{
 "Version":"2012-10-17",
 "Statement":[
   {
     "Effect":"Allow",
     "Action":["s3:PutObject","s3:PutObjectAcl"],
     "Resource":["arn:aws:s3:::Apple_bucket"],
     "Condition":{"StringEquals":{"s3:x-amz-acl":["public-read"]}}
   },{
     "Effect":"Allow",
     "Action":["s3:PutObject","s3:PutObjectAcl"],
     "Resource":["arn:aws:s3:::Orange_bucket"],
     "Condition":{"StringEquals":{"s3:prefix":["custom", "test"]}}
   },{
     "Effect":"Allow",
     "Action":["s3:PutObject","s3:PutObjectAcl"],
     "Resource":["arn:aws:s3:::Purple_bucket"],
     "Condition":{"DateGreaterThan":{"aws:CurrentTime":"2016-10-31T05:00:00Z"}}
   }
 ]
}
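
If you want to follow along in your own account, one way to create this policy (the policy name here is just an example) is from the AWS CLI, after which you can open it in the IAM console to see its summary:

aws iam create-policy --policy-name s3-multi-bucket-put --policy-document file://<path>/policy.json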

The policy summary (see the following screenshot) shows Limited: Write, Permissions management actions for S3 on Multiple resources, with request conditions. Limited means that some, but not all, of the actions in the Write and Permissions management categories are granted by the policy.

Screenshot of the policy summary

If I choose S3, I see that the actions defined in the policy grant access to multiple resources, as shown in the following screenshot. To see the resource summary, I can choose either PutObject or PutObjectAcl.

Screenshot showing that the actions defined in the policy grant access to multiple resources

I choose PutObjectAcl to see the resources and conditions defined in the policy for this S3 action. If the policy has one condition, I see it in the policy summary. I can view multiple conditions in the JSON.

Screenshot showing the resources and the conditions defined in the policy for this S3 action

As the preceding screenshot shows, the PutObjectAcl action has access to three S3 buckets with respective request conditions.

Summary

Policy summaries make it easy to view and understand the permissions and resources defined in a policy without having to view the associated JSON. To see policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page. On Tuesday, I will review the benefits of viewing the services and actions not granted in a policy.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

Updated AWS SOC Reports Include Three New Regions and Three Additional Services

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/updated-aws-soc-reports-include-three-new-regions-and-three-additional-services/

 

SOC logo

The updated AWS Service Organization Control (SOC) 1 and SOC 2 Security, Availability, and Confidentiality Reports covering the period of October 1, 2016, through March 31, 2017, are now available. Because we are always looking for ways to improve the customer experience, the current AWS SOC 2 Confidentiality Report has been combined with the AWS SOC 2 Security & Availability Report, making for a seamless read. The updated AWS SOC 3 Security & Availability Report also is publicly available by download.

Additionally, the following three AWS services have been added to the scope of our SOC Reports:

The AWS SOC Reports now also include our three newest regions: US East (Ohio), Canada (Central), and EU (London). SOC Reports now cover 15 regions and supporting edge locations across the globe. See AWS Global Infrastructure for additional geographic information related to AWS SOC.

The updated SOC Reports are available now through AWS Artifact in the AWS Management Console. To request a report:

  1. Sign in to your AWS account.
  2. In the list of services under Security, Identity and Compliance, choose Compliance Reports. On the next page, choose the report you would like to review. Note that you might need to request approval from Amazon for some reports. Requests are reviewed and approved by Amazon within 24 hours.

For further information, see frequently asked questions about the AWS SOC program.  

– Chad

EC2 In-Memory Processing Update: Instances with 4 to 16 TB of Memory + Scale-Out SAP HANA to 34 TB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-in-memory-processing-update-instances-with-4-to-16-tb-of-memory-scale-out-sap-hana-to-34-tb/

Several times each month, I speak to AWS customers at our Executive Briefing Center in Seattle. I describe our innovation process and talk about how the roadmap for each AWS offering is driven by customer requests and feedback.

A good example of this is our work to make AWS a great home for SAP’s portfolio of business solutions. Over the years our customers have told us that they run large-scale SAP applications in production on AWS and we’ve worked hard to provide them with EC2 instances that are designed to accommodate their workloads. Because SAP installations are unfailingly mission-critical, SAP certifies their products for use on certain EC2 instance types and sizes. We work directly with SAP in order to achieve certification and to make AWS a robust & reliable host for their products.

Here’s a quick recap of some of our most important announcements in this area:

June 2012 – We expanded the range of SAP-certified solutions that are available on AWS.

October 2012 – We announced that the SAP HANA in-memory database is now available for production use on AWS.

March 2014 – We announced that SAP HANA can now run in production form on cr1.8xlarge instances with up to 244 GB of memory, with the ability to create test clusters that are even larger.

June 2014 – We published a SAP HANA Deployment Guide and a set of AWS CloudFormation templates in conjunction with SAP certification on r3.8xlarge instances.

October 2015 – We announced the x1.32xlarge instances with 2 TB of memory, designed to run SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.

August 2016 – We announced that clusters of X1 instances can now be used to create production SAP HANA clusters with up to 7 nodes, or 14 TB of memory.

October 2016 – We announced the x1.16xlarge instance with 1 TB of memory.

January 2017 – SAP HANA was certified for use on r4.16xlarge instances.

Today, customers from a broad collection of industries run their SAP applications in production form on AWS (the SAP and Amazon Web Services page has a long list of customer success stories).

My colleague Bas Kamphuis recently wrote about Navigating the Digital Journey with SAP and the Cloud (registration required). He discusses the role of SAP in digital transformation and examines the key characteristics of the cloud infrastructure that support it, while pointing out many of the advantages that the cloud offers in comparison to other hosting options. Here’s how he illustrates these advantages in his article:

We continue to work to make AWS an even better place to run SAP applications in production form. Here are some of the things that we are working on:

  • Bigger SAP HANA Clusters – You can now build scale-out SAP HANA clusters with up to 17 nodes (34 TB of memory).
  • 4 TB Instances – The upcoming x1e.32xlarge instances will offer 4 TB of memory.
  • 8 – 16 TB Instances – Instances with up to 16 TB of memory are in the works.

Let’s dive in!

Building Bigger SAP HANA Clusters
I’m happy to announce that we have been working with SAP to certify the x1.32xlarge instances for use in scale-out clusters with up to 17 nodes (34 TB of memory). This is the largest scale-out deployment available from any cloud provider today, and it allows our customers to deploy very large SAP workloads on AWS (visit the x1.32xlarge certification entry in the SAP HANA Hardware Directory to learn more). To learn how to architect and deploy your own scale-out cluster, consult the SAP HANA on AWS Quick Start.

Extending the Memory-Intensive X1 Family
We will continue to invest in this and other instance families in order to address your needs and to give you a solid growth path.

Later this year we plan to make the x1e.32xlarge instances available in several AWS regions, in both On-Demand and Reserved Instance form. These instances will offer 4 TB of DDR4 memory (twice as much as the x1.32xlarge), 128 vCPUs (four 2.3 GHz Intel® Xeon® E7 8880 v3 processors), high memory bandwidth, and large L3 caches. The instances will be VPC-only, and will deliver up to 20 Gbps of network bandwidth using the Elastic Network Adapter while minimizing latency and jitter. They’ll be EBS-optimized by default, with up to 14 Gbps of dedicated EBS throughput.

Here are some screen shots from the shell. First, dmesg shows the boot-time kernel message:

Second, lscpu shows the vCPU & socket count, along with many other interesting facts:

And top shows nearly 900 processes:
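
The screenshots themselves aren't reproduced here, but the three commands mentioned above are standard Linux tools, so you can gather the same information on any instance yourself:

# Boot-time kernel messages, including the detected memory map
dmesg | less
# vCPU count, socket count, and cache sizes
lscpu
# Running processes and overall memory usage
top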

Here’s the view from within HANA Studio:

This new instance, along with the certification for larger clusters, broadens the set of scale-out and scale-up options that you have for running SAP on EC2, as you can see from this diagram:

The Long-Term Memory-Intensive Roadmap
Because we know that planning large-scale SAP installations can take a considerable amount of time, I would also like to share part of our roadmap with you.

Today, customers are able to run larger SAP HANA certified servers in third party colo data centers and connect them to their AWS infrastructure via AWS Direct Connect, but customers have told us that they really want a cloud native solution like they currently get with X1 instances.

In order to meet this need, we are working on instances with even more memory! Throughout 2017 and 2018, we plan to launch EC2 instances with between 8 TB and 16 TB of memory. These upcoming instances, along with the x1e.32xlarge, will allow you to create larger single-node SAP installations and multi-node SAP HANA clusters, and to run other memory-intensive applications and services. It will also provide you with some scale-up headroom that will become helpful when you start to reach the limits of the smaller instances.

I’ll share more information on our plans as soon as possible.

Say Hello at SAPPHIRE
The AWS team will be in booth 539 at SAPPHIRE with a rolling set of sessions from our team, our customers, and our partners in the in-booth theater. We’ll also be participating in many sessions throughout the event. Here’s a sampling (see SAP SAPPHIRE NOW 2017 for a full list):

SAP Solutions on AWS for Big Businesses and Big Workloads – Wednesday, May 17th at Noon. Bas Kamphuis (General Manager, SAP, AWS) & Ed Alford (VP of Business Application Services, BP).

Break Through the Speed Barrier When You Move to SAP HANA on AWS – Wednesday, May 17th at 12:30 PM – Paul Young (VP, SAP) and Saul Dave (Senior Director, Enterprise Systems, Zappos).

AWS Fireside Chat with Zappos (Rapid SAP HANA Migration: Real Results) – Thursday, May 18th at 11:00 AM – Saul Dave (Senior Director, Enterprise Systems, Zappos) and Steve Jones (Senior Manager, SAP Solutions Architecture, AWS).

Jeff;

PS – If you have some SAP experience and would like to bring it to the cloud, take a look at the Principal Product Manager (AWS Quick Starts) and SAP Architect positions.

ISP Bombarded With 82,000+ Demands to Reveal Alleged Pirates

Post Syndicated from Andy original https://torrentfreak.com/isp-bombarded-with-82000-demands-to-reveal-alleged-pirates-170513/

It was once a region where people could share files without fear of reprisal, but over the years Scandinavia has become a hotbed of ‘pirate’ prosecutions.

Sweden, in particular, has seen many sites shut down and their operators sentenced, notably those behind The Pirate Bay but also more recent cases such as those against DreamFilm and Swefilmer.

To this backdrop, members of the public have continued to share files, albeit in decreasing numbers. However, at the same time copyright trolls have hit countries like Sweden, Finland, and Denmark, hoping to scare alleged file-sharers into cash settlements.

This week regional ISP Telia revealed that the activity has already reached epidemic proportions.

Under the EU IPR Enforcement Directive (IPRED), Internet service providers are required to hand over the personal details of suspected pirates to copyright holders, if local courts deem that appropriate. Telia says it is now being bombarded with such demands.

“Telia must adhere to court decisions. At the same time we have a commitment to respect the privacy of our customers and therefore to be transparent,” the company says.

“While in previous years Telia has normally received less than ten such [disclosure] requests per market, per year, lately the number of requests has increased significantly.”

The scale is huge. The company reports that in Sweden during the past year alone, it has been ordered to hand over the identities of subscribers behind more than 45,000 IP addresses.

In Finland during the same period, court orders covered almost 37,000 IP addresses. Four court orders in Denmark currently require the surrendering of data on “hundreds” of customers.

Telia says that a Danish law firm known as Njord Law is behind many of the demands. The company is connected to international copyright trolls operating out of the United States, United Kingdom, and elsewhere.

“A Danish law firm (NJORD Law firm), representing the London-based copyright holder Copyright Management Services Ltd, was recently (2017-01-31) granted a court order forcing Telia Sweden to disclose to the law firm the subscriber identities behind 25,000 IP-addresses,” the company notes.

Copyright Management Services Ltd was incorporated in the UK during October 2014. Its sole director is Patrick Achache, who also operates German-based BitTorrent tracking company MaverickEye. Both are part of the notorious international trolling operation Guardaley.

Copyright Management Services, which is based at the same London address as fellow UK copyright-trolling partner Hatton and Berkeley, filed accounts in June 2016 claiming to be a dormant company. Other than that, it has never filed any financial information.

Copyright Management Services will be legally required to publish more detailed accounts next time around, since the company is now clearly trading, but its role in this operation is far from clear. For its part, Telia hopes the court has done the necessary checking when handing information over to partner firm, Njord Law.

“Telia assumes that the courts perform adequate assessments of the evidence provided by the above law firm, and also that the courts conduct a sufficient assessment of proportionality between copyright and privacy,” the company says.

“Telia does not know what the above law firm intends to do with the large amount of customer data which they are now collecting.”

While that statement from Telia is arguably correct, it doesn’t take a genius to work out where this is going. Every time that these companies can match an IP address to an account holder, they will receive a letter in the mail demanding a cash settlement. Anything that substantially deviates from this outcome would be a very surprising development indeed.

In the meantime, Jon Karlung, the outspoken boss of ISP Bahnhof, has pointed out that if Telia didn’t store customer IP addresses in the first place, it wouldn’t have anything to hand out to copyright trolls.

“Bahnhof does not store this data – and we can’t give out something we do not have. The same logic should apply to Telia,” he said.

Bahnhof says it stores customer data including IP addresses for 24 hours, just long enough to troubleshoot technical issues but nowhere near long enough to be useful to trolls.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] Which email client for Ubuntu 17.10?

Post Syndicated from jake original https://lwn.net/Articles/720831/rss

An email client was once a mandatory offering for any operating system, but that may be changing. A discussion on the ubuntu-desktop mailing list explores the choices for a default email client for Ubuntu 17.10, which is due in October. One of the possibilities being considered is to not have a default email client at all.

Supporting and growing the Raspberry Jam community

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/support-raspberry-jam-community/

For almost five years, Raspberry Jams have created opportunities to welcome new people to the Raspberry Pi community, as well as providing a support network for people of all ages in digital making. All around the world, like-minded people meet up to discuss and share their latest projects, give workshops, and chat about all things Pi. Today, we are making it easier than ever to set up your own Raspberry Jam, thanks to a new Jam Guidebook, branding pack, and starter kit.

Raspberry Jam logo over world map

We think Jams provide lots of great learning opportunities and we’d like to see one in every community. We’re aware of Jams in 43 countries: most recently, we’ve seen new Jams start in Thailand, Trinidad and Tobago, and Honduras! The community team has been working on a plan to support and grow the amazing community of Jam makers around the world. Now it’s time to share the fantastic resources we have produced with you.

The Raspberry Jam Guidebook

One of the things we’ve been working on is a comprehensive Raspberry Jam Guidebook to help people set up their Jam. It’s packed full of advice gathered from the Raspberry Pi community, showing the many different types of Jam and how you can organise your own. It covers everything from promoting and structuring your Jam to managing finances: we’re sure you’ll find it useful. Download it now!

Image of Raspberry Jam Guidebook

Branding pack

One of the things many Jam organisers told us they needed was a set of assets to help with advertising. With that in mind, we’ve created a new branding pack for Jam organisers to use in their promotional materials. There’s a new Raspberry Jam logo, a set of poster templates, a set of graphical assets, and more. Download it now!

Starter kits

Finally, we’ve put together a Raspberry Jam starter kit containing stickers, flyers, printed worksheets, and lots more goodies to help people run their first Jam. Once you’ve submitted your first event to our Jam map, you can apply for your starter kit. Existing Jams won’t miss out either: they can apply for a kit when they submit their next event.

Image of Raspberry Jam starter kit contents

Find a Jam near you!

Take a look at the Jam map and see if there’s an event coming up near you. If you have kids, Jams can be a brilliant way to get them started with coding and making.

Can’t find a local Jam? Start one!

If you can’t find a Jam near you, you can start your own. You don’t have to organise it by yourself. Try to find some other people who would also like a Jam to go to, and get together with them. Work out where you could host your Jam and what form you’d like it to take. It’s OK to start small: just get some people together and see what happens. It’s worth looking at the Jam map to see if any Jams have happened nearby: just check the ‘Past Events’ box.

We have a Raspberry Jam Slack team where you can get help from other Jam organisers. Feel free to get in touch if you would like to join: just email jam@raspberrypi.org and we’ll get back to you. You can also contact us if you need further support in general, or if you have feedback on the resources.

Thanks

Many thanks to everyone who contributed to the guidebook and provided insights in the Jam survey. Thanks, too, to all Jam makers and volunteers around the world who do great work providing opportunities for people everywhere!

The post Supporting and growing the Raspberry Jam community appeared first on Raspberry Pi.

[$] The MuQSS CPU scheduler

Post Syndicated from corbet original https://lwn.net/Articles/720227/rss

The scheduler is a topic of keen interest for the desktop user; the scheduling algorithm partially determines the responsiveness of the Linux desktop as a whole. Con Kolivas maintains a series of scheduler patch sets that he has tuned considerably over the years for his own use, focusing primarily on latency reduction for a better desktop experience. In early October 2016, Kolivas updated the design of his popular desktop scheduler patch set, which he renamed MuQSS. It is an update (and a name change) from his previous scheduler, BFS, and it is designed to address scalability concerns that BFS had with an increasing number of CPUs.

Court Extends Hold on Megaupload’s MPAA and RIAA Lawsuits

Post Syndicated from Ernesto original https://torrentfreak.com/court-extends-hold-on-megauploads-mpaa-and-riaa-lawsuits-170409/

Well over five years have passed since Megaupload was shut down, and it’s still unclear how the criminal proceedings will unfold.

A few weeks ago the New Zealand High Court ruled that Kim Dotcom and his former colleagues can be extradited to the US. Not on copyright grounds, but for conspiracy to defraud.

Following the ruling Dotcom quickly announced that he would take the matter to the Court of Appeal, which will prolong the case for several months at least.

While all parties await the outcome of this appeal, the criminal case in the United States remains pending. The same goes for the civil cases launched by the MPAA and RIAA in 2014.

Since the civil cases may influence the criminal proceedings, Megaupload’s legal team previously managed to put these cases on hold, and this week another extension was granted.

Previously there were concerns that the long delays could result in the destruction of evidence, as some of Megaupload’s hard drives were starting to fail. However, after the parties agreed on a solution to back up and restore the files, this is no longer an issue.

“With the preservation order now in place, Defendant Megaupload hereby moves the Court to enter the attached proposed order, continuing the stay in this case for an additional six months, subject to the terms and conditions stated in the proposed order,” the company wrote in the motion to stay.

On Thursday U.S. District Court Judge Liam O’Grady granted Megaupload’s request to stay both lawsuits until October this year, barring any new developments. The music and movie companies didn’t oppose the motion.

The order of U.S. District Court Judge Liam O’Grady is available here (pdf). A copy of Megaupload’s request can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Raspberry Turk: a chess-playing robot

Post Syndicated from Lorna Lynch original https://www.raspberrypi.org/blog/raspberry-turk-chess-playing-robot/

Computers and chess have been a potent combination ever since the appearance of the first chess-playing computers in the 1970s. You might even be able to play a game of chess on the device you are using to read this blog post! For digital makers, though, adding a Raspberry Pi into the mix can be the first step to building something a little more exciting. Allow us to introduce you to Joey Meyer‘s chess-playing robot, the Raspberry Turk.

The Raspberry Turk chess-playing robot

Image credit: Joey Meyer

Being both an experienced software engineer with an interest in machine learning, and a skilled chess player, it’s not surprising that Joey was interested in tinkering with chess programs. What is really stunning, though, is the scale and complexity of the build he came up with. Fascinated by a famous historical hoax, Joey used his skills in programming and robotics to build an open-source Raspberry Pi-powered recreation of the celebrated Mechanical Turk automaton.

You can see the Raspberry Turk in action on Joey’s YouTube channel:

Chess Playing Robot Powered by Raspberry Pi – Raspberry Turk

The Raspberry Turk is a robot that can play chess. It's entirely open source, based on Raspberry Pi, and inspired by the 18th-century chess-playing machine, the Mechanical Turk. Website: http://www.raspberryturk.com Source Code: https://github.com/joeymeyer/raspberryturk

A historical hoax

Joey explains that he first encountered the Mechanical Turk through a book by Tom Standage. A famous example of mechanical trickery, the original Turk was advertised as a chess-playing automaton, capable of defeating human opponents and solving complex puzzles.

Image of the Mechanical Turk Automaton

A modern reconstruction of the Mechanical Turk 
Image from Wikimedia Commons

Its inner workings a secret, the Turk toured Europe for the best part of a century, confounding everyone who encountered it. Unfortunately, it turned out not to be a fabulous example of early robotic engineering after all. Instead, it was just an elaborate illusion. The awesome chess moves were not being worked out by the clockwork brain of the automaton, but rather by a human chess master who was cunningly concealed inside the casing.

Building a modern Turk

A modern version of the Mechanical Turk was constructed in the 1980s. However, the build cost $120,000. At that price, it would have been impossible for most makers to create their own version. Impossible, that is, until now: Joey uses a Raspberry Pi 3 to drive the Raspberry Turk, while a Raspberry Pi Camera Module handles computer vision.

Image of chess board and Raspberry Turk robot

The Raspberry Turk in the middle of a game 
Image credit: Joey Meyer

Joey’s Raspberry Turk is built into a neat wooden table. All of the electronics are housed in a box on one side. The chessboard is painted directly onto the table’s surface. In order for the robot to play, a Camera Module located in a 3D-printed housing above the table takes an image of the chessboard. The image is then analysed to determine which pieces are in which positions at that point. By tracking changes in the positions of the pieces, the Raspberry Turk can determine which moves have been made, and which piece should move next. To train the system, Joey had to build a large dataset to validate a computer vision model. This involved painstakingly moving pieces by hand and collecting multiple images of each possible position.
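
The post doesn't include the code for this step, but the idea of inferring a move from two observed board states is easy to sketch. The snippet below is illustrative only (not the Raspberry Turk's actual implementation) and assumes the vision model reports which colour, if any, occupies each square; it uses the python-chess library to find the legal move that explains the change.

# Illustrative sketch, not the Raspberry Turk's code: infer the move just played
# by finding the legal move whose resulting occupancy matches the camera's view.
import chess

def occupancy(board):
    # Map every square to None, chess.WHITE, or chess.BLACK.
    return {sq: board.color_at(sq) for sq in chess.SQUARES}

def infer_move(previous_board, observed_occupancy):
    # Try each legal move and keep the one that reproduces the observation.
    # Piece-type ambiguity (e.g. underpromotion) is ignored in this sketch.
    for move in previous_board.legal_moves:
        candidate = previous_board.copy()
        candidate.push(move)
        if occupancy(candidate) == observed_occupancy:
            return move
    return None  # no legal move explains the observation; re-run perception

# Example: recognise 1. e4 from a simulated observation.
before = chess.Board()
after = chess.Board()
after.push_san("e4")
print(infer_move(before, occupancy(after)))  # prints e2e4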

Look, no hands!

A key feature of the Mechanical Turk was that the automaton appeared to move the chess pieces entirely by itself. Of course, its movements were actually being controlled by a person hidden inside the machine. The Raspberry Turk, by contrast, does move the chess pieces itself. To achieve this, Joey used a robotic arm attached to the table. The arm is made primarily out of Actobotics components. Joey explains:

The motion is controlled by the rotation of two servos which are attached to gears at the base of each link of the arm. At the end of the arm is another servo which moves a beam up and down. At the bottom of the beam is an electromagnet that can be dynamically activated to lift the chess pieces.

Joey individually fitted the chess pieces with tiny sections of metal dowel so that the magnet on the arm could pick them up.

Programming the Raspberry Turk

The Raspberry Turk is controlled by a daemon process that runs a perception/action sequence, and the status updates automatically as the pieces are moved. The code is written almost entirely in Python. It is all available on Joey’s GitHub repo for the project, together with his notebooks on the project.

Image of Raspberry Turk chessboard with Python script alongside

Image credit: Joey Meyer

The AI backend that gives the robot its chess-playing ability is currently Stockfish, a strong open-source chess engine. Joey says he would like to build his own engine when he has time. For the moment, though, he’s confident that this AI will prove a worthy opponent.
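
The post doesn't show how the engine is driven, but as an illustrative sketch (again, not the project's own code), python-chess can ask a local Stockfish binary for a move over UCI like this, assuming the stockfish executable is on your PATH:

# Illustrative sketch: request a move from a local Stockfish binary via UCI.
import chess
import chess.engine

board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
result = engine.play(board, chess.engine.Limit(time=1.0))
print("Engine suggests:", board.san(result.move))
engine.quit()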

The project website goes into much more detail than we are able to give here. We’d definitely recommend checking it out. If you have been experimenting with any robotics or computer vision projects like this, please do let us know in the comments!

The post Raspberry Turk: a chess-playing robot appeared first on Raspberry Pi.