Tag Archives: gaming

Expanding the AWS Blog Team – Welcome Ana, Tina, Tara, and the Localization Team

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/expanding-the-aws-blog-team-welcome-ana-tina-tara-and-the-localization-team/

I wrote my first post for this blog back in 2004, and have published over 2,700 more since then, including 52 last month! Given the ever-increasing pace of AWS innovation, and the amount of cool stuff that we have to share with you, we are expanding our blogging team. Please give a warm welcome to Ana, Tina, and Tara:

Ana Visneski (@acvisneski) was the first official blogger for the United States Coast Guard. While there she focused on search & rescue coordination and also led the team that established a social media presence. Ana is a graduate of the University of Washington Communications Leadership Program, and was the first to complete both the Master of Communication in Digital Media (MCDM) and Master of Communication in Communities and Networks (MCCN) degree programs. Ana works with our guest posters, tracks our metrics, and manages the ticketing system that we use to coordinate our activities.

Tina Barr (@tinathebarr) is a Recruiting Coordinator for the AWS Commercial Sales organization. In order to provide a great first impression for her candidates, she began to read the AWS Customer Success Stories and developed a special interest in startups. In addition to her recruiting duties, Tina writes the AWS Hot Startups (September, October, November) posts each month. Tina earned a Bachelor’s degree in Community Health from Western Washington University and has always enjoyed writing.

Tara Walker (@taraw) is an AWS Technical Evangelist. Her background includes time as a developer and software engineer at multiple high-tech and media companies. With a focus on IoT, mobile, gaming, serverless architectures, and cross-platform development, Tara loves to dive deep into the latest and greatest technical topics and build compelling demos. Tara has a Bachelor’s degree from Georgia State University and is currently working on a Master’s degree in Computer Science from Georgia Institute of Technology. Like me, Tara will focus on writing posts for upcoming AWS launches.

I am thrilled to be working with these three talented and creative new members of our blogging team, and am looking forward to seeing what they come up with.

AWS Blog Localization Team
Many of the posts on this blog have been translated into Japanese and Korean for AWS customers who are most comfortable in those languages. A big thank-you is due to those who are doing this work:

  • Japanese – Tatsuji Ishibashi (Product Marketing Manager for AWS Japan) manages and reviews the translation work that is done by a team of Solution Architects in Japan and publishes the content on the AWS Japan Blog.
  • Korean – Channy Yun (Technical Evangelist for AWS Korea) translates posts for the AWS Korea Blog.
  • Mandarin Chinese – Our colleagues in China are translating important announcements for the AWS China Blog.

Jeff;

Holidays with Pi

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/holidays-with-pi/

This column is from The MagPi issue 52. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

When I was a kid, it felt like it took forever for the holidays to arrive. Now that I’m an adult, the opposite is true: it feels like the holidays come hurtling at us faster and faster every year. As a kid, I was most interested in opening presents and eating all of that amazing holiday food. As an adult, I mostly enjoy the opportunity to pause real life for a few days and spend time with my family – though I do still love eating all that amazing holiday food!

Invariably, the conversations with my extended family turn to Raspberry Pi at some point during the holidays. My relatives may have seen something in the news about it, or perhaps they have a friend who is creating their own retro gaming emulator with it, for example. I sometimes show off the Raspberry Pi projects that I’ve been working on and talk about what the Raspberry Pi Foundation is doing in order to put the power of digital making into the hands of people around the globe.

All over the world there will be a lot of folks, both young and old, who may be receiving Raspberry Pis as gifts during the holidays. For them, hopefully it’s the start of a very rewarding journey making awesome stuff and learning about the power of computers.

The side effect of so many people receiving Raspberry Pis as gifts is that around this time of year we get a lot of people asking, “So I have a Raspberry Pi… now what?” Of course, beyond using it as a typical computer, I encourage anyone with a Raspberry Pi to make something with it. There’s no better way to learn about computing than to create something.

There’s no shortage of project inspiration out there. You’ll find projects that you can make in the current edition of The MagPi, as well as all of the back issues online, which are all available as free PDF files. We share the best projects we’ve seen on our blog, and our Resources section contains fantastic how-to projects.

Be inspired

You can also explore sites such as Hackster.io, Instructables, Hackaday.io, and Makezine.com for tons of ideas for what you can make with your Raspberry Pi. Many projects include full step-by-step guides as well. Whatever you’re interested in, whether it’s music, gaming, electronics, natural sciences, or aviation, there’s sure to be something made with Raspberry Pi that’ll spark your interest.

If you’re looking for something to make to celebrate the holiday season, you’re definitely covered. We’ve seen so many great holiday-related Raspberry Pi projects over the years, such as digital advent calendars, Christmas light displays, tree ornaments, digital menorahs, and new year countdown clocks. And, of course, not only does the current issue of The MagPi contain a few holiday-themed Pi projects, you can even make something festive with the cover and a few LEDs.

There’s a lot of stuff out there to make and I encourage you to work together with your family members on a project, even if it doesn’t seem to be their kind of thing. I think people are often surprised at how easy and fun it can be. And if you do make something together, please share some photos with us!

Whatever you create and whatever holidays you celebrate, all of us at Raspberry Pi send you our very best wishes of the season and we look forward to another year ahead of learning, making, sharing, and having fun with computers.

The post Holidays with Pi appeared first on Raspberry Pi.

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I’m going through a random list of some of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not, it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, that good security hygiene would remove it. Instead, it’s an intended feature of the product, to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of security guides banning backdoors, they need to come up with a standard for remote reset.

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP3) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of new, naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or a discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposed “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with public-private partnerships in cybersecurity, I’ve found them to be a time waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.

This is actually not a bad recommendation, as far as government services are involved, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-side view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, those building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because of the lack of cybersecurity as part of normal IT/computing.
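The gap between the broken pattern and the fix is a single line of code. Here’s a minimal sketch using Python’s built-in sqlite3 module (the table, function names, and payload are invented for illustration):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The anti-pattern the textbooks teach: user input pasted directly
    # into the SQL string. Input like "' OR '1'='1" rewrites the query.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input strictly as
    # data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # every row comes back
safe = find_user_safe(conn, payload)      # no rows: nobody has that name
```

The unsafe version turns the WHERE clause into `name = '' OR '1'='1'`, which is true for every row; the safe version simply looks for a user literally named `' OR '1'='1` and finds none.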

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made from them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.
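To get a feel for that scale, here’s a rough sketch that enumerates just the declared requirements of the Python packages installed on one machine, using only the standard library. A real device’s “label” would also have to cover firmware, C libraries, and every other ecosystem on board, which is how you get to 10,000 items:

```python
from importlib.metadata import distributions

def dependency_report():
    # Map each installed Python distribution to the requirements it
    # declares in its own metadata. This is only one ecosystem's view.
    report = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        report[name] = list(dist.requires or [])
    return report

report = dependency_report()
total_declared = sum(len(reqs) for reqs in report.values())
```

Even this shallow, single-language listing says nothing about transitive versions or statically linked code, which is the author’s point.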

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars in government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Various recommendations call for the appointment of various CISOs, Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy-driven and special-interest-driven rather than reflecting technical expertise.

Polly – Text to Speech in 47 Voices and 24 Languages

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/polly-text-to-speech-in-47-voices-and-24-languages/

As I was getting ready to write this post, I thought back to my childhood (largely spent watching TV) and some of the popular computer and robot voices of the 1960’s and 1970’s. In just a few minutes, pleasant memories of HAL-9000, B9 (Lost in Space), the original Star Trek Computer, and Rosie (from The Jetsons) all came to mind. At that time, most people expected mechanically generated speech to sound precise, clipped, and devoid of human emotion.

Fast forward many years and we now see that there are many great applications and use cases for computer-generated speech, commonly known as Text-to-Speech or TTS. Entertainment, gaming, public announcement systems, e-learning, telephony, assistive apps & devices and personal assistants are just a few starting points. Many of these applications are great fits for mobile environments where connectivity is very good but local processing power and storage are so-so at best.

Hello, Polly
In order to address these use cases (and others that you will dream up), we are introducing Polly, a cloud service that converts text to lifelike speech that you can use in your own tools and applications. Polly currently supports a total of 47 male & female voices spread across 24 languages, with additional languages and voices on the roadmap.

Polly was designed to address many of the more challenging aspects of speech generation. For example, consider the difference in pronunciation of the word “live” in the phrases “I live in Seattle” and “Live from New York.” Polly knows that this pair of homographs is spelled the same but pronounced quite differently. Or, what about “St.”? Depending on the language and the context, it could mean (and should be pronounced as) either “street” or “saint.” Again, Polly knows what to do here. Polly can also deal with units, fractions, abbreviations, currencies, dates, times, and other speech components in a sophisticated, language-specific fashion.

In order to do this, we worked with professional, native speakers of each target language. We asked each speaker to pronounce a myriad of representative words and phrases in their chosen language, and then disassembled the audio into sound units known as diphones.

Polly works really well with unadorned text. You simply provide the text and Polly will take care of the rest, delivering an audio file or a stream that represents the text in an accurate, natural, and lifelike way. For more sophisticated applications, you can use SSML (Speech Synthesis Markup Language) to provide Polly with additional information. For example, if your text contains words drawn from more than one language (perhaps English with some French mixed in), you can flag it to be pronounced as such using SSML.
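For instance, SSML’s `<lang>` element marks a span of text as belonging to another language. A small helper for building that kind of mixed-language input might look like this (the function name and example phrase are ours, not part of the Polly API):

```python
def ssml_with_language_spans(segments):
    """Build an SSML document from (text, language) pairs.

    A language of None means "use the voice's default language";
    otherwise the text is wrapped in an SSML <lang> element.
    """
    parts = []
    for text, lang in segments:
        if lang:
            parts.append('<lang xml:lang="%s">%s</lang>' % (lang, text))
        else:
            parts.append(text)
    return "<speak>" + " ".join(parts) + "</speak>"

ssml = ssml_with_language_spans([
    ("The French say", None),
    ("bonne chance", "fr-FR"),
    ("before an exam.", None),
])
```

The resulting string can then be submitted to Polly as SSML instead of plain text, so the French phrase is pronounced as French rather than as mangled English.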

I can’t embed sound clips in this post, so you’ll have to visit the Polly Console and try it out yourself. You simply enter your text and click on Listen to speech:

You can also save the generated audio in an MP3 file and use it within your own applications.

Here is the fully expanded Language and Region menu:

Technical Details
Although you are welcome to use Polly from the Console, you will probably want to do something a bit more dynamic. You can simply call the SynthesizeSpeech API function with your text or your SSML. You can stream the output directly to your user, or you can generate an MP3 or Ogg file and play it back as desired. Polly can generate high quality (up to 22 kHz sampling rate) audio in MP3 or Vorbis formats, along with telephony-quality (8 kHz) audio in PCM format.

You can also use the AWS Command Line Interface (CLI) to generate audio. For example:

$ aws polly synthesize-speech \
  --output-format mp3 --voice-id Joanna \
  --text "Hello my name is Joanna." \
  joanna.mp3

Polly encrypts all data at rest and transfers the audio across SSL connections. The text submissions are disassociated from the submitter, stored in encrypted form for up to 6 months, and used to maintain and improve Polly.

Pricing and Availability
You can use Polly to process 5 million characters per month at no charge. After that, you pay $0.000004 per character, or about $0.004 per minute of generated audio. That works out to about $0.018 for this blog post, or around $2.40 for the full text of Adventures of Huckleberry Finn.
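Those figures are easy to check with back-of-the-envelope arithmetic, assuming the per-character rate and free tier quoted above:

```python
RATE_PER_CHARACTER = 0.000004       # USD, beyond the free tier
FREE_CHARACTERS_PER_MONTH = 5_000_000

def monthly_cost(characters):
    # Cost in USD for one month of synthesis, free tier included.
    billable = max(0, characters - FREE_CHARACTERS_PER_MONTH)
    return billable * RATE_PER_CHARACTER

# ~600,000 characters (the length implied by the $2.40 quote for
# Huckleberry Finn) costs, once the free tier is used up:
novel_cost = 600_000 * RATE_PER_CHARACTER
```

At $0.004 per minute of generated audio, that same rate implies roughly 1,000 characters per minute of speech.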

Polly is available now in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions and you can start using it today. Let me know what you come up with!

Jeff;

 

Microsoft joins The Linux Foundation

Post Syndicated from corbet original http://lwn.net/Articles/706585/rss

The Linux Foundation has announced that Microsoft has joined as a platinum
member. “From cloud computing and networking to gaming, Microsoft has steadily
increased its engagement in open source projects and communities. The
company is currently a leading open source contributor on GitHub and
earlier this year announced several milestones that indicate the scope of
its commitment to open source development.”

Five(ish) awesome RetroPie builds

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/fiveish-awesome-retropie-builds/

If you’ve yet to hear about RetroPie, how’s it going living under that rock?

RetroPie, for the few who are unfamiliar, allows users to play retro video games on their Raspberry Pi or PC. From Alex Kidd to Ecco the Dolphin, Streets of Rage 2 to Cool Spot, nostalgia junkies can get their fill by flashing the RetroPie image to their Pi and plugging in their TV and a couple of USB controllers.

But for many, this simple setup is not enough. Alongside the RetroPie unit, many makers are building incredible cases and modifications to make their creation stand out from the rest.

Here are five of what I believe to be the best RetroPie builds shared on social media:

1. Furniture Builds

If you don’t have the space for an arcade machine, why not incorporate RetroPie into your coffee table or desk?

This ‘Mid-century-ish Retro Games Table’ by Reddit user GuzziGuy fits a screen and custom-made controllers beneath a folding surface, allowing full use of the table when you’re not busy Space Raiding or Mario Karting.

GuzziGuy RetroPie Table

2. Arcade Cabinets

While the arcade cabinet at Pi Towers has seen better days (we have #LukeTheIntern working on it as I type), many of you makers are putting us to shame with your own builds. Whether it be a tabletop version or full 7ft cabinet, more and more RetroPie arcades are popping up, their builders desperate to replicate the sights of our gaming pasts.

One maker, YouTuber Bob Clagett, built his own RetroPie Arcade Cabinet from scratch, documenting the entire process on his channel.

With sensors that start the machine upon your approach, LED backlighting, and cartoon vinyl artwork of his family, it’s easy to see why this is a firm favourite.

Arcade Cabinet build – Part 3 // How-To

Check out how I made this fully custom arcade cabinet, powered by a Raspberry Pi, to play retro games! Get digital plans for this cabinet to build your own!

3. Handheld Gaming

If you’re looking for a more personal gaming experience, or if you simply want to see just how small you can make your build, you can’t go wrong with a handheld gaming console. With the release of the Raspberry Pi Zero, the ability to fit an entire RetroPie setup within the smallest of spaces has become somewhat of a social media maker challenge.

Chase Lambeth used an old Burger King toy and Pi Zero to create one of the smallest RetroPie Gameboys around… and it broke the internet in the process.

Mini Gameboy Chase Lambeth

4. Console Recycling

What better way to play a retro game than via a retro game console? And while I don’t condone pulling apart a working NES or MegaDrive, there’s no harm in cannibalising a deceased unit for the greater good, or using one of many 3D-printable designs to recreate a classic.

Here’s YouTuber DaftMike‘s entry into the RetroPie Hall of Fame: a mini-NES with NFC-enabled cartridges that autoplay when inserted.

Raspberry Pi Mini NES Classic Console

This is a demo of my Raspberry Pi ‘NES Classic’ build. You can see photos, more details and code here: http://www.daftmike.com/2016/07/NESPi.html Update video: https://youtu.be/M0hWhv1lw48 Update #2: https://youtu.be/hhYf5DPzLqg Electronics kits are now available for pre-order, details here: http://www.daftmike.com/p/nespi-electronics-kit.html Build Guide Update: https://youtu.be/8rFBWdRpufo Build Guide Part 1: https://youtu.be/8feZYk9HmYg Build Guide Part 2: https://youtu.be/vOz1-6GqTZc New case design files: http://www.thingiverse.com/thing:1727668 Better Snap Fit Cases!

5. Everything Else

I can’t create a list of RetroPie builds without mentioning the unusual creations that appear on our social media feeds from time to time. And while you may consider putting more than one example in #5 cheating, I say… well, I say pfft.

Example 1 – Sean (from SimpleCove)’s Retro Arcade

It felt wrong to include this within Arcade Cabinets as it’s not really a cabinet. Creating the entire thing from scratch using monitors, wood, and a lot of veneer, the end result could easily have travelled here from the 1940s.

Retro Arcade Cabinet Using A Raspberry Pi & RetroPie

I’ve wanted one of these raspberry pi/retro pi arcade systems for a while but wanted to make a special box to put it in that looked like an antique table top TV/radio. I feel the outcome of this project is exactly that.

Example 2 – the HackerHouse Portable Console… built-in controller… thing

The team at HackerHouse, along with many other makers, decided to incorporate the entire RetroPie build into the controller, allowing you to easily take your gaming system with you without the need for a separate console unit. Following on from the theme of their YouTube channel, they offer a complete tutorial on how to make the controller.

Make a Raspberry Pi Portable Arcade Console (with Retropie)

Find out how to make an easy portable arcade console (cabinet) using a Raspberry Pi. You can bring it anywhere, plug it into any tv, and play all your favorite classic ROMs. This arcade has 4 general buttons and a joystick, but you can also plug in any old usb enabled controller.

Example 3 – Zach’s PiCart

RetroPie inside a NES game cartridge… need I say more?

Pi Cart: a Raspberry Pi Retro Gaming Rig in an NES Cartridge

I put a Raspberry Pi Zero (and 2,400 vintage games) into an NES cartridge and it’s awesome. Powered by RetroPie. I also wrote a step-by-step guide on howchoo and a list of all the materials you’ll need to build your own: https://howchoo.com/g/mti0oge5nzk/pi-cart-a-raspberry-pi-retro-gaming-rig-in-an-nes-cartridge

Here’s a video to help you set up your own RetroPie. What games would you play first? And what other builds have caught your attention online?

The post Five(ish) awesome RetroPie builds appeared first on Raspberry Pi.

AWS Week in Review – September 27, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-27-2016/

Fourteen (14) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday

September 26

Tuesday

September 27

Wednesday

September 28

Thursday

September 29

Friday

September 30

Saturday

October 1

Sunday

October 2

New & Notable Open Source

  • dynamodb-continuous-backup sets up continuous backup automation for DynamoDB.
  • lambda-billing uses Node.js to automate billing to AWS tagged projects, producing PDF invoices.
  • vyos-based-vpc-wan is a complete Packer + CloudFormation + Troposphere powered setup of AMIs to run VyOS IPSec tunnels across multiple AWS VPCs, using BGP-4 for dynamic routing.
  • s3encrypt is a utility that encrypts and decrypts files in S3 with KMS keys.
  • lambda-uploader helps to package and upload Lambda functions to AWS.
  • AWS-Architect helps to deploy microservices to Lambda and API Gateway.
  • awsgi is a WSGI gateway for API Gateway and Lambda proxy integration.
  • rusoto is an AWS SDK for Rust.
  • EBS_Scripts contains some EBS tips and tricks.
  • landsat-on-aws is a web application that uses Amazon S3, Amazon API Gateway, and AWS Lambda to create an infinitely scalable interface to navigate Landsat satellite data.

New SlideShare Presentations

New AWS Marketplace Listings

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

New P2 Instance Type for Amazon EC2 – Up to 16 GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-p2-instance-type-for-amazon-ec2-up-to-16-gpus/

I like to watch long-term technology and business trends and watch as they shape the products and services that I get to use and to write about. As I was preparing to write today’s post, three such trends came to mind:

  • Moore’s Law – Coined in 1965, Moore’s Law postulates that the number of transistors on a chip doubles every year (a pace that Moore later revised to every two years).
  • Mass Market / Mass Production – Because all of the technologies that we produce, use, and enjoy every day consume vast numbers of chips, there’s a huge market for them.
  • Specialization – Due to the previous trend, even niche markets can be large enough to be addressed by purpose-built products.

As the industry pushes forward in accord with these trends, a couple of interesting challenges have surfaced over the past decade or so. Again, here’s a quick list (yes, I do think in bullet points):

  • Speed of Light – Even as transistor density increases, the speed of light imposes scaling limits (as computer pioneer Grace Hopper liked to point out, electricity can travel slightly less than 1 foot in a nanosecond).
  • Semiconductor Physics – Fundamental limits in the switching time (on/off) of a transistor ultimately determine the minimum achievable cycle time for a CPU.
  • Memory Bottlenecks – The well-known von Neumann Bottleneck imposes limits on the value of additional CPU power.
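Grace Hopper’s figure is easy to verify. Here’s a quick back-of-the-envelope check (a minimal sketch, using the standard value for the speed of light in a vacuum):

```python
# Quick sanity check of Grace Hopper's "nanosecond": how far light
# travels in one billionth of a second, expressed in feet.
C_METERS_PER_SECOND = 299_792_458   # speed of light in a vacuum
METERS_PER_FOOT = 0.3048

def light_feet_per_nanosecond() -> float:
    """Distance light covers in 1 ns, in feet."""
    meters = C_METERS_PER_SECOND * 1e-9
    return meters / METERS_PER_FOOT

print(f"{light_feet_per_nanosecond():.3f} feet")  # just under 1 foot
```

Electrical signals in real circuits travel somewhat slower than light in a vacuum, so the practical limit is tighter still.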

The GPU (Graphics Processing Unit) was born of these trends, and addresses many of the challenges! Processors have reached the upper bound on clock rates, but Moore’s Law gives designers more and more transistors to work with. Those transistors can be used to add more cache and more memory to a traditional architecture, but the von Neumann Bottleneck limits the value of doing so. On the other hand, we now have large markets for specialized hardware (gaming comes to mind as one of the early drivers for GPU consumption). Putting all of this together, the GPU scales out (more processors and parallel banks of memory) instead of up (faster processors and bottlenecked memory). Net-net: the GPU is an effective way to use lots of transistors to provide massive amounts of compute power!

With all of this as background, I would like to tell you about the newest EC2 instance type, the P2. These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

New P2 Instance Type
This new instance type incorporates up to 8 NVIDIA Tesla K80 Accelerators, each running a pair of NVIDIA GK210 GPUs. Each GPU provides 12 GB of memory (accessible via 240 GB/second of memory bandwidth), and 2,496 parallel processing cores. They also include ECC memory protection, allowing them to fix single-bit errors and to detect double-bit errors. The combination of ECC memory protection and double precision floating point operations makes these instances a great fit for all of the workloads that I mentioned above.

Here are the instance specs:

Instance Name   GPU Count   vCPU Count   Memory    Parallel Processing Cores   GPU Memory   Network Performance
p2.xlarge       1           4            61 GiB    2,496                       12 GB        High
p2.8xlarge      8           32           488 GiB   19,968                      96 GB        10 Gigabit
p2.16xlarge     16          64           732 GiB   39,936                      192 GB       20 Gigabit
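The per-instance core and GPU-memory totals follow directly from the per-GPU figures (2,496 cores and 12 GB per GK210 GPU); a quick sanity check:

```python
# Derive the per-instance totals from the per-GPU specs quoted above.
CORES_PER_GPU = 2496   # parallel processing cores per GK210 GPU
GPU_MEMORY_GB = 12     # GPU memory per GK210 GPU, in GB

def instance_totals(gpu_count: int) -> dict:
    """Aggregate core and GPU-memory figures for a P2 instance size."""
    return {
        "cores": gpu_count * CORES_PER_GPU,
        "gpu_memory_gb": gpu_count * GPU_MEMORY_GB,
    }

for name, gpus in [("p2.xlarge", 1), ("p2.8xlarge", 8), ("p2.16xlarge", 16)]:
    print(name, instance_totals(gpus))
```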

All of the instances are powered by an AWS-specific version of Intel’s Broadwell processor, running at 2.7 GHz. The p2.16xlarge gives you control over C-states and P-states, and can turbo boost up to 3.0 GHz when running on 1 or 2 cores.

The GPUs support CUDA 7.5 and above, OpenCL 1.2, and the GPU Compute APIs. The GPUs on the p2.8xlarge and the p2.16xlarge are connected via a common PCI fabric. This allows for low-latency, peer-to-peer GPU-to-GPU transfers.

All of the instances make use of our new Elastic Network Adapter (ENA – read Elastic Network Adapter – High Performance Network Interface for Amazon EC2 to learn more) and can, per the table above, support up to 20 Gbps of low-latency networking when used within a Placement Group.

Having a powerful multi-vCPU processor and multiple, well-connected GPUs on a single instance, along with low-latency access to other instances with the same features creates a very impressive hierarchy for scale-out processing:

  • One vCPU
  • Multiple vCPUs
  • One GPU
  • Multiple GPUs in an instance
  • Multiple GPUs in multiple instances within a Placement Group

P2 instances are VPC only, require the use of 64-bit, HVM-style, EBS-backed AMIs, and you can launch them today in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions as On-Demand Instances, Spot Instances, Reserved Instances, or Dedicated Hosts.

Here’s how I installed the NVIDIA drivers and the CUDA toolkit on my P2 instance, after first creating, formatting, attaching, and mounting (to /ebs) an EBS volume that had enough room for the CUDA toolkit and the associated samples (10 GiB is more than enough):

$ cd /ebs
$ sudo yum update -y
$ sudo yum groupinstall -y "Development tools"    # compilers and build tools
$ sudo yum install -y kernel-devel-`uname -r`     # kernel headers for the driver build
$ wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.99/NVIDIA-Linux-x86_64-352.99.run
$ wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda_7.5.18_linux.run
$ chmod +x NVIDIA-Linux-x86_64-352.99.run
$ sudo ./NVIDIA-Linux-x86_64-352.99.run           # install the NVIDIA driver
$ chmod +x cuda_7.5.18_linux.run
$ sudo ./cuda_7.5.18_linux.run                    # don't install the driver again; just CUDA and the samples
$ sudo nvidia-smi -pm 1                           # enable persistence mode
$ sudo nvidia-smi -acp 0                          # allow non-root application clock changes
$ sudo nvidia-smi --auto-boost-permission=0       # relax auto-boost permission
$ sudo nvidia-smi -ac 2505,875                    # set application clocks (memory, graphics)

Note that NVIDIA-Linux-x86_64-352.99.run and cuda_7.5.18_linux.run are interactive programs; you need to accept the license agreements, choose some options, and enter some paths for the CUDA toolkit and the samples.

P2 and OpenCL in Action
With everything set up, I took this Gist and compiled it on a p2.8xlarge instance:

$ gcc test.c -I /usr/local/cuda/include/ -L /usr/local/cuda-7.5/lib64/ -lOpenCL -o test

Here’s what it reported:

$ ./test
1. Device: Tesla K80
 1.1 Hardware version: OpenCL 1.2 CUDA
 1.2 Software version: 352.99
 1.3 OpenCL C version: OpenCL C 1.2
 1.4 Parallel compute units: 13
2. Device: Tesla K80
 2.1 Hardware version: OpenCL 1.2 CUDA
 2.2 Software version: 352.99
 2.3 OpenCL C version: OpenCL C 1.2
 2.4 Parallel compute units: 13
3. Device: Tesla K80
 3.1 Hardware version: OpenCL 1.2 CUDA
 3.2 Software version: 352.99
 3.3 OpenCL C version: OpenCL C 1.2
 3.4 Parallel compute units: 13
4. Device: Tesla K80
 4.1 Hardware version: OpenCL 1.2 CUDA
 4.2 Software version: 352.99
 4.3 OpenCL C version: OpenCL C 1.2
 4.4 Parallel compute units: 13
5. Device: Tesla K80
 5.1 Hardware version: OpenCL 1.2 CUDA
 5.2 Software version: 352.99
 5.3 OpenCL C version: OpenCL C 1.2
 5.4 Parallel compute units: 13
6. Device: Tesla K80
 6.1 Hardware version: OpenCL 1.2 CUDA
 6.2 Software version: 352.99
 6.3 OpenCL C version: OpenCL C 1.2
 6.4 Parallel compute units: 13
7. Device: Tesla K80
 7.1 Hardware version: OpenCL 1.2 CUDA
 7.2 Software version: 352.99
 7.3 OpenCL C version: OpenCL C 1.2
 7.4 Parallel compute units: 13
8. Device: Tesla K80
 8.1 Hardware version: OpenCL 1.2 CUDA
 8.2 Software version: 352.99
 8.3 OpenCL C version: OpenCL C 1.2
 8.4 Parallel compute units: 13

As you can see, I have a ridiculous amount of compute power available at my fingertips!

New Deep Learning AMI
As I said at the beginning, these instances are a great fit for machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

In order to help you to make great use of one or more P2 instances, we are launching a Deep Learning AMI today. Deep learning has the potential to generate predictions (also known as scores or inferences) that are more reliable than those produced by less sophisticated machine learning, at the cost of a more complex and more computationally intensive training process. Fortunately, the newest generations of deep learning tools are able to distribute the training work across multiple GPUs on a single instance as well as across multiple instances each containing multiple GPUs.

The new AMI contains the following frameworks, each installed, configured, and tested against the popular MNIST database:

MXNet – This is a flexible, portable, and efficient library for deep learning. It supports declarative and imperative programming models across a wide variety of programming languages including C++, Python, R, Scala, Julia, Matlab, and JavaScript.

Caffe – This deep learning framework was designed with expression, speed, and modularity in mind. It was developed at the Berkeley Vision and Learning Center (BVLC) with assistance from many community contributors.

Theano – This Python library allows you to define, optimize, and evaluate mathematical expressions that involve multi-dimensional arrays.

TensorFlow – This is an open source library for numerical calculation using data flow graphs (each node in the graph represents a mathematical operation; each edge represents multidimensional data communicated between them).
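The data-flow-graph idea behind TensorFlow can be illustrated with a toy sketch. This is a hand-rolled illustration of the concept (nodes are operations, edges carry data between them), not TensorFlow’s actual API:

```python
# Toy data-flow graph: each Node is an operation, each input edge
# carries the value produced by an upstream node.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable that computes this node's value
        self.inputs = inputs  # upstream nodes (the graph's edges)

    def evaluate(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    """A leaf node that always produces the same value."""
    return Node(lambda: value)

# Build the graph for (a + b) * c, then evaluate it.
a, b, c = constant(2), constant(3), constant(4)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.evaluate())  # → 20
```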

Torch – This is a GPU-oriented scientific computing framework with support for machine learning algorithms, all accessible via LuaJIT.

Consult the README file in ~ec2-user/src to learn more about these frameworks.

AMIs from NVIDIA
You may also find the following AMIs to be of interest:


Jeff;

AWS Week in Review – September 19, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-19-2016/

Eighteen (18) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday

September 19

Tuesday

September 20

Wednesday

September 21

Thursday

September 22

Friday

September 23

Saturday

September 24

Sunday

September 25

New & Notable Open Source

  • ecs-refarch-cloudformation is reference architecture for deploying Microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
  • rclone syncs files and directories to and from S3 and many other cloud storage providers.
  • Syncany is an open source cloud storage and filesharing application.
  • chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
  • amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
  • ecs-pilot is a simple tool for managing AWS ECS.
  • vman is an object version manager for AWS S3 buckets.
  • aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
  • autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
  • shep is a framework for building APIs using AWS API Gateway and Lambda.

New SlideShare Presentations

New Customer Success Stories

  • NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
  • New York Public Library revamped its fractured IT environment—older technology and legacy computing—into a modernized platform on AWS. The New York Public Library has been a provider of free books, information, ideas, and education for more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancing, Amazon RDS, and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
  • MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100 thousand customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
  • University of Maryland, College Park uses the AWS cloud to create a stable, secure, and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in the city of College Park, Maryland, and is the flagship institution of the University System of Maryland. The university used AWS to migrate all of its datacenters to the cloud, and uses Amazon WorkSpaces to give students access to software anytime, anywhere, and on any device.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

AWS Pop-up Loft and Innovation Lab in Munich

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-pop-up-loft-and-innovation-lab-in-munich/

I’m happy to be able to announce that an AWS Pop-up Loft is opening in Munich on October 26th, with a full calendar of events and a brand-new AWS Innovation Lab, all created with the help of our friends at Intel and Nordcloud. Developers, entrepreneurs, and students come to AWS Lofts around the world to learn, code, collaborate, and ask questions. The Loft will provide developers and architects in Munich with access to local technical resources and expertise that will help them to build robust and successful cloud-powered applications.

Near Munich Königsplatz Station
This loft is located at Brienner Str 49, 80333 in Munich, close to Königsplatz Station and convenient to Stiglmaierplatz. Hours are 10 AM to 6 PM Monday through Friday, with special events in the evening.

During the day, you will have access to the Ask an Architect Bar, daily education sessions, Wi-Fi, a co-working space, coffee, and snacks, all at no charge. There will also be resources to help you to create, run, and grow your startup including educational sessions from local AWS partners, accelerators, and incubators.

Ask an Architect
Step up to the Ask an Architect Bar with your code, architecture diagrams, and your AWS questions at the ready! Simply walk in. You will have access to deep technical expertise and will be able to get guidance on AWS architecture, usage of specific AWS services and features, cost optimization, and more.

AWS Education Sessions
During the day, AWS Solution Architects, Product Managers, and Evangelists will be leading 60-minute educational sessions designed to help you to learn more about specific AWS services and use cases. You can attend these sessions to learn about Serverless Architectures, Mobile & Gaming, Databases, Big Data, Compute & Networking, Architecture, Operations, Security, Machine Learning, and more, all at no charge.

Startup Education Sessions
AWS startup community representatives, incubators, accelerators, startup scene influencers, and hot startup customers running on AWS will share best-practices, entrepreneurial know-how, and lessons learned. Pop in to learn the art of pitching, customer validation & profiling, PR for startups & corporations, and more.

Innovation Lab
The new AWS Innovation Lab is adjacent to the Munich Loft. With over 350 square meters of space, the Lab is designed to be a resource for mid-market and enterprise companies that are ready to grow their business. It will feature interactive demos, videos, and other materials designed to explain the benefits of digital transformation and cloud-powered innovation, with a focus on Big Data, mobile applications, and the fourth industrial revolution (Industry 4.0).

Come in and Say Hello
We look forward to using the Loft to meet and to connect with our customers, and expect that it will be a place that they visit on a regular basis. Please feel free to stop in and say hello to my colleagues at the Munich Loft if you happen to find yourself in the city!

Jeff;

Pi Cart: RetroPie in a NES Cartridge

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-cart-retropie-in-a-nes-cartridge/

RetroPie builds take up approximately 40% of my daily project searching. Whether it’s across social media, within the depths of YouTube, littering my inbox, or shared across office messaging, I see RetroPie everywhere.

I see… RetroPie

I can look across my desk right this moment and spot two different USB controllers and two RetroPie-imaged SD cards from where I sit. True story.

The mess of Alex's desk

The ‘organised’ clutter-mess of my desk…

Because of this, my attention tends to be drawn away from the inner workings of a gaming build and more toward the aesthetics. After all, if I’ve managed to set up RetroPie, anyone can do it.

When it comes to RetroPie builds, it tends to be the physical casing that really catches my attention. So many makers go the extra mile to build stunning gaming units that really please the eye.

Taking that into consideration, can you really be surprised that I’m writing about the Pi Cart? I mean, c’mon: it’s awesome-looking!

Pi Cart: a Raspberry Pi Retro Gaming Rig in an NES Cartridge

I put a Raspberry Pi Zero (and 2,400 vintage games) into an NES cartridge and it’s awesome. Powered by RetroPie. I also wrote a step-by-step guide on howchoo and a list of all the materials you’ll need to build your own: https://howchoo.com/g/mti0oge5nzk/pi-cart-a-raspberry-pi-retro-gaming-rig-in-an-nes-cartridge

Pi Cart originator Zach offers up a complete how-to for the project, giving all budding gamers and tinkerers the instructions they need to fit a RetroPie-enabled Raspberry Pi Zero into an old NES cartridge.

Using a Raspberry Pi Zero, a four-port USB mini hub (to allow for the use of more than one USB controller), an old NES cartridge, and all the usual gubbins, it’s fairly easy to create your own Pi Cart at minimal cost. 

RetroPie Pi Cart

There are many online guides and videos which give you all the information you need to install RetroPie on the Raspberry Pi, so if you’ve never tried it before and feel a little bit out of your depth, I can assure you that you’ll be fine.

Then all you need is a glue gun (this is possibly the most expensive component of the build!) and an hour or so to go from Zero to retro-gaming Hero!

The post Pi Cart: RetroPie in a NES Cartridge appeared first on Raspberry Pi.

32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/32-security-and-compliance-sessions-now-live-in-the-reinvent-2016-session-catalog/

re:Invent 2016 logo

AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

Security & Compliance Track sessions

As in past years, the sessions in the Security & Compliance track will take place in The Venetian | Palazzo in Las Vegas. Here’s what you have to look forward to!

SAC201 – Lessons from a Chief Security Officer: Achieving Continuous Compliance in Elastic Environments

Does meeting stringent compliance requirements keep you up at night? Do you worry about having the right audit trails in place as proof?
Cengage Learning’s Chief Security Officer, Robert Hotaling, shares his organization’s journey to AWS, and how they enabled continuous compliance for their dynamic environment with automation. When Cengage shifted from publishing to digital education and online learning, they needed a secure elastic infrastructure for their data-intensive and cyclical business, and workload-layer security tools that would help them meet compliance requirements (e.g., PCI).
In this session, you will learn why building security in from the beginning saves you time (and painful retrofits) later, how to gather and retain audit evidence for instances that are only up for minutes or hours, and how Cengage used Trend Micro Deep Security to meet many compliance requirements and ensured instances were instantly protected as they came online in a hybrid cloud architecture. Session sponsored by Trend Micro, Inc.

 

SAC302 – Automating Security Event Response, from Idea to Code to Execution

With security-relevant services such as AWS Config, VPC Flow Logs, Amazon CloudWatch Events, and AWS Lambda, you now have the ability to programmatically wrangle security events that may occur within your AWS environment, including prevention, detection, response, and remediation. This session covers the process of automating security event response with various AWS building blocks, taking several ideas from drawing board to code, and gaining confidence in your coverage by proactively testing security monitoring and response effectiveness before anyone else does.
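The pattern the session describes can be sketched as a Lambda-style handler that inspects a CloudWatch Events payload and decides on a remediation. The event shape below and the remediate() hook are hypothetical stand-ins, not the session’s actual code:

```python
# Minimal sketch of automated security event response: a handler that
# reacts to a (hypothetical) CloudWatch Events payload for a security
# group change. remediate() is a placeholder for a real API call.
import json

def remediate(group_id: str) -> str:
    """Placeholder for a real remediation call (e.g. via the EC2 API)."""
    return f"revoked open ingress on {group_id}"

def handler(event: dict, context=None) -> dict:
    detail = event.get("detail", {})
    if detail.get("eventName") == "AuthorizeSecurityGroupIngress":
        group_id = detail.get("requestParameters", {}).get("groupId", "unknown")
        return {"action": remediate(group_id)}
    return {"action": "ignored"}

sample = {"detail": {"eventName": "AuthorizeSecurityGroupIngress",
                     "requestParameters": {"groupId": "sg-12345678"}}}
print(json.dumps(handler(sample)))
```

In a real deployment the handler would be wired to a CloudWatch Events rule, and remediate() would call the EC2 API; proactively feeding handlers synthetic events like `sample` is exactly the kind of response testing the session advocates.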

 

SAC303 – Become an AWS IAM Policy Ninja in 60 Minutes or Less

Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least privilege permissions access control? If your answer to these questions is “yes,” this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.
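To make the S3 use case concrete, here is one illustrative shape such a least-privilege policy can take, built as a Python dict for readability. The bucket name is hypothetical, and a real policy would be tailored to your own resources:

```python
# Illustrative least-privilege IAM policy: object read/write plus
# listing, scoped to a single (hypothetical) S3 bucket.
import json

BUCKET = "example-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # object-level actions apply to keys inside the bucket
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {   # bucket-level actions apply to the bucket ARN itself
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Note the split between object-level and bucket-level resources: scoping each action to the narrowest ARN it needs is the heart of least-privilege policy authoring.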

 

SAC304 – Predictive Security: Using Big Data to Fortify Your Defenses

In a rapidly changing IT environment, detecting and responding to new threats is more important than ever. This session shows you how to build a predictive analytics stack on AWS, which harnesses the power of Amazon Machine Learning in conjunction with Amazon Elasticsearch Service, AWS CloudTrail, and VPC Flow Logs to perform tasks such as anomaly detection and log analysis. We also demonstrate how you can use AWS Lambda to act on this information in an automated fashion, such as performing updates to AWS WAF and security groups, leading to an improved security posture and alleviating operational burden on your security teams.
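At the statistical core of such a stack is a simple idea: flag observations that sit far outside a baseline. The session uses Amazon Machine Learning and Amazon Elasticsearch Service; the toy sketch below illustrates only the underlying threshold test, with made-up event counts:

```python
# Toy anomaly detection over per-interval event counts: flag a new
# observation that exceeds the baseline mean by more than `threshold`
# standard deviations. Purely illustrative; the data is made up.
from statistics import mean, stdev

def is_anomalous(baseline, value, threshold=3.0):
    """True if `value` is more than `threshold` sigmas above the mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return value > mu + threshold * sigma

baseline = [102, 98, 101, 97, 103, 99, 100]  # normal request counts
print(is_anomalous(baseline, 101))  # → False (within normal range)
print(is_anomalous(baseline, 515))  # → True (a sudden spike)
```

A flagged interval would then feed the automated response path the session describes, such as a Lambda function tightening security groups or updating AWS WAF rules.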

 

SAC305 – Auditing a Cloud Environment in 2016: What Tools Can Internal and External Auditors Leverage to Maintain Compliance?

With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate to ensure a high level of security when managing AWS resources. In this session, Chad Woolf, director of compliance for AWS, discusses which AWS service features to leverage to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. You can now implement point-in-time audits and continuous monitoring in system architecture. Internal and external auditors can learn about emerging tools for monitoring environments in real time. Follow use case examples and demonstrations of services like Amazon Inspector, Amazon CloudWatch Logs, AWS CloudTrail, and AWS Config. Learn firsthand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.

 

SAC306 – Encryption: It Was the Best of Controls, It Was the Worst of Controls

Encryption is a favorite of security and compliance professionals everywhere. Many compliance frameworks actually mandate encryption. Though encryption is important, it is also treacherous. Cryptographic protocols are subtle, and researchers are constantly finding new and creative flaws in them. Using encryption correctly, especially over time, is also expensive because you have to stay up to date.
AWS wants to encrypt data. And our customers, including Amazon, want to encrypt data. In this talk, we look at some of the challenges with using encryption, how AWS thinks internally about encryption, and how that thinking has informed the services we have built, the features we have vended, and our own usage of AWS.

 

SAC307 – The Psychology of Security Automation

Historically, relationships between developers and security teams have been challenging. Security teams sometimes see developers as careless and ignorant of risk, while developers might see security teams as dogmatic barriers to productivity. Can technologies and approaches such as the cloud, APIs, and automation lead to happier developers and more secure systems? Netflix has had success pursuing this approach, by leaning into the fundamental cloud concept of self-service, the Netflix cultural value of transparency in decision making, and the engineering efficiency principle of facilitating a “paved road.” This session explores how security teams can use thoughtful tools and automation to improve relationships with development teams while creating a more secure and manageable environment. Topics include Netflix’s approach to IAM entity management, Elastic Load Balancing and certificate management, and general security configuration monitoring.

 

SAC308 – Hackproof Your Cloud: Responding to 2016 Threats

In this session, CloudCheckr CTO Aaron Newman highlights effective strategies and tools that AWS users can employ to improve their security posture. Specific emphasis is placed upon leveraging native AWS services, and he includes concrete steps that users can begin employing immediately. Session sponsored by CloudCheckr.

 

SAC309 – You Can’t Protect What You Can’t See: AWS Security Monitoring & Compliance Validation from Adobe

Ensuring security and compliance across a globally distributed, large-scale AWS deployment requires a scalable process and a comprehensive set of technologies. In this session, Adobe will deep-dive into the AWS native monitoring and security services and some Splunk technologies leveraged globally to perform security monitoring across a large number of AWS accounts. You will learn about Adobe’s collection plumbing, including components of S3, Kinesis, CloudWatch, SNS, DynamoDB, and Lambda, as well as the tooling and processes used at Adobe to deliver scalable monitoring without managing an unwieldy number of API keys and input stanzas. Session sponsored by Splunk.

 

SAC310 – Securing Serverless Architectures, and API Filtering at Layer 7

32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx3UX2WK7G84E5J/32-Security-and-Compliance-Sessions-Now-Live-in-the-re-Invent-2016-Session-Catal

AWS re:Invent 2016 begins November 28, and the live session catalog now includes 32 security and compliance sessions: 19 in the Security & Compliance track and 13 in the re:Source Mini Con for Security Services. All 32 titles and abstracts are included below.

Security & Compliance Track sessions

As in past years, the sessions in the Security & Compliance track will take place in The Venetian | Palazzo in Las Vegas. Here’s what you have to look forward to!

SAC201 – Lessons from a Chief Security Officer: Achieving Continuous Compliance in Elastic Environments

Does meeting stringent compliance requirements keep you up at night? Do you worry about having the right audit trails in place as proof? 
 
Cengage Learning’s Chief Security Officer, Robert Hotaling, shares his organization’s journey to AWS, and how they enabled continuous compliance for their dynamic environment with automation. When Cengage shifted from publishing to digital education and online learning, they needed a secure, elastic infrastructure for their data-intensive and cyclical business, and workload-layer security tools that would help them meet compliance requirements (e.g., PCI).
 
In this session, you will learn why building security in from the beginning saves you time (and painful retrofits) later, how to gather and retain audit evidence for instances that are up for only minutes or hours, and how Cengage used Trend Micro Deep Security to meet many compliance requirements and ensure that instances were instantly protected as they came online in a hybrid cloud architecture. Session sponsored by Trend Micro, Inc.
  

SAC302 – Automating Security Event Response, from Idea to Code to Execution

With security-relevant services such as AWS Config, VPC Flow Logs, Amazon CloudWatch Events, and AWS Lambda, you now have the ability to programmatically wrangle security events that may occur within your AWS environment, including prevention, detection, response, and remediation. This session covers the process of automating security event response with various AWS building blocks, taking several ideas from drawing board to code, and gaining confidence in your coverage by proactively testing security monitoring and response effectiveness before anyone else does.
 
 

SAC303 – Become an AWS IAM Policy Ninja in 60 Minutes or Less

Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least-privilege access control? If your answer to these questions is "yes," this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least-privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or the ability to launch an Amazon EC2 instance of a specific type.
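To make the policy-variable and condition ideas above concrete, here is a minimal sketch of a least-privilege policy that scopes each IAM user to their own prefix in a bucket via the `${aws:username}` policy variable. The bucket name is hypothetical, and this is an illustration of the policy language, not material from the session itself.

```python
import json

# Least-privilege S3 policy sketch: each IAM user may list and read/write
# only their own prefix in the (hypothetical) "example-team-bucket",
# expressed with the ${aws:username} policy variable.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnPrefixOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-team-bucket",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Sid": "ReadWriteOwnObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-team-bucket/${aws:username}/*",
        },
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Because the variable is resolved at evaluation time, one attached policy serves every user without per-user statements.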
 

SAC304 – Predictive Security: Using Big Data to Fortify Your Defenses

In a rapidly changing IT environment, detecting and responding to new threats is more important than ever. This session shows you how to build a predictive analytics stack on AWS, which harnesses the power of Amazon Machine Learning in conjunction with Amazon Elasticsearch Service, AWS CloudTrail, and VPC Flow Logs to perform tasks such as anomaly detection and log analysis. We also demonstrate how you can use AWS Lambda to act on this information in an automated fashion, such as performing updates to AWS WAF and security groups, leading to an improved security posture and alleviating operational burden on your security teams.
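The anomaly-detection step described above can be illustrated with a toy standard-deviation check. In a real pipeline the counts would come from CloudTrail or VPC Flow Log queries and the model would be far richer (e.g., Amazon Machine Learning); this sketch only shows the shape of the idea.

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Flag indices whose value sits more than `threshold` population
    standard deviations above the mean. A toy stand-in for the
    anomaly-detection step; real pipelines use learned models."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Hourly API-call counts with one obvious spike at index 5.
hourly_calls = [120, 115, 130, 118, 125, 900, 122, 119]
print(flag_anomalies(hourly_calls))
```

A flagged index could then drive an automated response, such as a Lambda function tightening a security group.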
 

SAC305 – Auditing a Cloud Environment in 2016: What Tools Can Internal and External Auditors Leverage to Maintain Compliance?

With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate to ensure a high level of security when managing AWS resources. In this session, Chad Woolf, director of compliance for AWS, discusses which AWS service features to leverage to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. You can now implement point-in-time audits and continuous monitoring in system architecture. Internal and external auditors can learn about emerging tools for monitoring environments in real time. Follow use case examples and demonstrations of services like Amazon Inspector, Amazon CloudWatch Logs, AWS CloudTrail, and AWS Config. Learn firsthand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.
 

SAC306 – Encryption: It Was the Best of Controls, It Was the Worst of Controls

Encryption is a favorite of security and compliance professionals everywhere. Many compliance frameworks actually mandate encryption. Though encryption is important, it is also treacherous. Cryptographic protocols are subtle, and researchers are constantly finding new and creative flaws in them. Using encryption correctly, especially over time, is also expensive because you have to stay up to date.
 
AWS wants to encrypt data. And our customers, including Amazon, want to encrypt data. In this talk, we look at some of the challenges with using encryption, how AWS thinks internally about encryption, and how that thinking has informed the services we have built, the features we have vended, and our own usage of AWS.
 

SAC307 – The Psychology of Security Automation

Historically, relationships between developers and security teams have been challenging. Security teams sometimes see developers as careless and ignorant of risk, while developers might see security teams as dogmatic barriers to productivity. Can technologies and approaches such as the cloud, APIs, and automation lead to happier developers and more secure systems? Netflix has had success pursuing this approach, by leaning into the fundamental cloud concept of self-service, the Netflix cultural value of transparency in decision making, and the engineering efficiency principle of facilitating a “paved road.”
 
This session explores how security teams can use thoughtful tools and automation to improve relationships with development teams while creating a more secure and manageable environment. Topics include Netflix’s approach to IAM entity management, Elastic Load Balancing and certificate management, and general security configuration monitoring.
 

SAC308 – Hackproof Your Cloud: Responding to 2016 Threats

In this session, CloudCheckr CTO Aaron Newman highlights effective strategies and tools that AWS users can employ to improve their security posture. Specific emphasis is placed upon leveraging native AWS services, and he includes concrete steps that users can begin employing immediately. Session sponsored by CloudCheckr.
 

SAC309 – You Can’t Protect What You Can’t See: AWS Security Monitoring & Compliance Validation from Adobe

Ensuring security and compliance across a globally distributed, large-scale AWS deployment requires a scalable process and a comprehensive set of technologies. In this session, Adobe will deep-dive into the AWS native monitoring and security services and some Splunk technologies leveraged globally to perform security monitoring across a large number of AWS accounts. You will learn about Adobe’s collection plumbing, including components such as S3, Kinesis, CloudWatch, SNS, DynamoDB, and Lambda, as well as the tooling and processes used at Adobe to deliver scalable monitoring without managing an unwieldy number of API keys and input stanzas. Session sponsored by Splunk.
 

SAC310 – Securing Serverless Architectures, and API Filtering at Layer 7

AWS serverless architecture components such as Amazon S3, Amazon SQS, Amazon SNS, CloudWatch Logs, DynamoDB, Amazon Kinesis, and Lambda can be tightly constrained in their operation. However, it may still be possible to use some of them to propagate payloads that could be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores techniques for enhancing the security of these services, from assessing and tightening permissions in IAM to integrating tools and mechanisms for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures.
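One inline payload-analysis technique of the kind mentioned above is to validate a message body before any consuming code touches it. The sketch below checks size, JSON validity, an allow-list of fields, and a strict identifier format; the field names and limits are illustrative, not from the session.

```python
import json
import re

MAX_BODY_BYTES = 4096
ALLOWED_FIELDS = {"order_id", "action", "quantity"}
ORDER_ID_RE = re.compile(r"^[A-Za-z0-9-]{1,64}$")

def validate_message(raw_body):
    """Inline payload check run before a consumer (e.g., a Lambda function)
    processes an SQS/SNS message. Field names are hypothetical."""
    if len(raw_body.encode("utf-8")) > MAX_BODY_BYTES:
        return False, "body too large"
    try:
        payload = json.loads(raw_body)
    except ValueError:
        return False, "not valid JSON"
    if not isinstance(payload, dict) or set(payload) - ALLOWED_FIELDS:
        return False, "unexpected fields"
    if not ORDER_ID_RE.match(str(payload.get("order_id", ""))):
        return False, "bad order_id"
    return True, "ok"

print(validate_message('{"order_id": "A-123", "action": "ship", "quantity": 2}'))
print(validate_message('{"order_id": "../../etc/passwd"}'))
```

Rejecting malformed payloads at the boundary keeps exploit attempts from ever reaching user-generated code downstream.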
 

SAC311 – Evolving an Enterprise-level Compliance Framework with Amazon CloudWatch Events and AWS Lambda

Johnson & Johnson is building a proof of concept to rewrite the compliance framework that they presented at re:Invent 2014. This framework leverages the newest AWS services and eliminates the need for continual describe calls and master rules servers. Instead, Johnson & Johnson plans to use a distributed, event-based architecture that not only reduces costs but also assigns costs to the appropriate projects rather than to central IT.
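The event-driven pattern described above can be sketched as a Lambda handler that receives a CloudWatch Events payload for a single CloudTrail API call and evaluates only the resource that changed, instead of periodically describing everything. The rule below (no security group ingress open to 0.0.0.0/0 on port 22) and the event shape, abbreviated from a real CloudTrail EC2 record, are illustrative stand-ins, not Johnson & Johnson's actual framework.

```python
def handler(event, context=None):
    # React only to the API call this rule cares about.
    detail = event.get("detail", {})
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return {"checked": False}
    params = detail.get("requestParameters", {})
    items = params.get("ipPermissions", {}).get("items", [])
    for perm in items:
        if perm.get("fromPort", -1) <= 22 <= perm.get("toPort", -1):
            for rng in perm.get("ipRanges", {}).get("items", []):
                if rng.get("cidrIp") == "0.0.0.0/0":
                    return {"checked": True, "compliant": False,
                            "groupId": params.get("groupId")}
    return {"checked": True, "compliant": True}

# Abbreviated CloudWatch Events payload for a CloudTrail API call.
sample_event = {
    "detail": {
        "eventName": "AuthorizeSecurityGroupIngress",
        "requestParameters": {
            "groupId": "sg-12345678",
            "ipPermissions": {"items": [{
                "fromPort": 22, "toPort": 22,
                "ipRanges": {"items": [{"cidrIp": "0.0.0.0/0"}]},
            }]},
        },
    }
}
print(handler(sample_event))
```

Because each invocation handles exactly one change event, the per-event Lambda cost can be attributed to the project that owns the resource.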
 

SAC312 – Architecting for End-to-End Security in the Enterprise

This session tells how our most mature, security-minded Fortune 500 customers adopt AWS while improving end-to-end protection of their sensitive data. Learn about the enterprise security architecture decisions made during actual sensitive workload deployments as told by the AWS professional services and the solution architecture team members who lived them. In this very prescriptive, technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, security configuration decisions, and the creation of AWS security operations playbooks to support customer architectures.
 

SAC313 – Enterprise Patterns for Payment Card Industry Data Security Standard (PCI DSS)

AWS Professional Services has completed five deep PCI engagements with enterprise customers over the last year. Common patterns were identified and codified in various artifacts. This session introduces the patterns that help customers address PCI requirements in a standard manner that also meets AWS best practices. Hear customers speak about their side of the journey and the solutions that they used to deploy a PCI compliance workload.
 

SAC314 – GxP Compliance in the Cloud

GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product-related safety decisions.
 
The term GxP encompasses a broad range of compliance-related activities such as Good Laboratory Practices (GLP), Good Clinical Practices (GCP), Good Manufacturing Practices (GMP), and others, each of which has product-specific requirements that life sciences organizations must implement based on the 1) type of products they make and 2) country in which their products are sold. When life sciences organizations use computerized systems to perform certain GxP activities, they must ensure that the computerized GxP system is developed, validated, and operated appropriately for the intended use of the system.
 
For this session, co-presented with Merck, services such as Amazon EC2, Amazon CloudWatch Logs, AWS CloudTrail, AWS CodeCommit, Amazon Simple Storage Service (S3), and AWS CodePipeline will be discussed with an emphasis on implementing GxP-compliant systems in the AWS Cloud.
 

SAC315 – Scaling Security Operations: Using AWS Services to Automate Governance of Security Controls and Remediate Violations

This session enables security operators to use data provided by AWS services such as AWS CloudTrail, AWS Config, Amazon CloudWatch Events, and VPC Flow Logs to reduce vulnerabilities and, when required, execute timely security actions that fix the violation or gather more information about the vulnerability and attacker. We look at security practices for compliance with PCI, CIS Security Controls, and HIPAA. We dive deep into an example from an AWS customer, Siemens AG, which has automated governance and implemented automated remediation using CloudTrail, AWS Config Rules, and AWS Lambda. A prerequisite for this session is knowledge of software development with Java, Python, or Node.js.
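The evaluate-then-remediate loop described above can be sketched in the shape of an AWS Config custom rule: a function receives a configuration item and returns a compliance type, and a companion function would then apply the fix. The check here (S3 access logging enabled) and the abbreviated configuration-item shape are illustrative, not Siemens AG's implementation.

```python
def evaluate_compliance(configuration_item):
    """Config-rule-style check: an S3 bucket is compliant only if access
    logging has a destination bucket configured."""
    if configuration_item.get("resourceType") != "AWS::S3::Bucket":
        return "NOT_APPLICABLE"
    supplementary = configuration_item.get("supplementaryConfiguration", {})
    logging_cfg = supplementary.get("BucketLoggingConfiguration", {})
    if logging_cfg.get("destinationBucketName"):
        return "COMPLIANT"
    return "NON_COMPLIANT"

# Abbreviated configuration item for a bucket with logging left empty.
item = {
    "resourceType": "AWS::S3::Bucket",
    "resourceId": "example-data-bucket",
    "supplementaryConfiguration": {"BucketLoggingConfiguration": {}},
}
print(evaluate_compliance(item))
```

In a deployed rule, a `NON_COMPLIANT` result would be reported back to AWS Config and could trigger a remediation Lambda function to enable logging on the bucket.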
 

SAC316 – Security Automation: Spend Less Time Securing Your Applications

As attackers become more sophisticated, web application developers need to constantly update their security configurations. Static firewall rules are no longer good enough. Developers need a way to deploy automated security that can learn from the application behavior and identify bad traffic patterns to detect bad bots or bad actors on the Internet. This session showcases some of the real-world customer use cases that use machine learning and AWS WAF (a web application firewall) to automatically identify bad actors affecting multiplayer gaming applications. We also present tutorials and code samples that show how customers can analyze traffic patterns and deploy new AWS WAF rules on the fly.
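A simplified version of the traffic-pattern analysis above: count requests per source IP, flag the outliers, and shape the result into the update list a WAF IP set expects. The rate limit, IPSet update shape (modeled on WAF Classic's `UpdateIPSet`), and sample traffic are all illustrative; a real deployment would use a learned model rather than a fixed threshold.

```python
from collections import Counter

def bad_actor_ips(request_log, limit_per_minute=300):
    """Flag source IPs whose request count exceeds a per-minute limit.
    A fixed-threshold stand-in for the learned traffic models above."""
    counts = Counter(ip for ip, _path in request_log)
    return sorted(ip for ip, n in counts.items() if n > limit_per_minute)

def waf_ipset_updates(ips):
    # Shape of a WAF Classic UpdateIPSet request body; the /32-per-address
    # choice is an assumption for this sketch.
    return [{"Action": "INSERT",
             "IPDescriptor": {"Type": "IPV4", "Value": ip + "/32"}}
            for ip in ips]

# One noisy client and one normal one (documentation IP ranges).
log = [("203.0.113.9", "/login")] * 500 + [("198.51.100.4", "/play")] * 40
flagged = bad_actor_ips(log)
print(flagged)
print(waf_ipset_updates(flagged))
```

Deploying the generated updates on the fly is what closes the loop between log analysis and blocking.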
 

SAC317 – IAM Best Practices to Live By

This session covers AWS Identity and Access Management (IAM) best practices that can help improve your security posture. We cover how to manage users and their security credentials. We also explain why you should delete your root access keys—or at the very least, rotate them regularly. Using common use cases, we demonstrate when to choose between using IAM users and IAM roles. Finally, we explore how to set permissions to grant least privilege access control in one or more of your AWS accounts.
 

SAC318 – Life Without SSH: Immutable Infrastructure in Production

This session covers what a real-world production deployment of a fully automated deployment pipeline looks like with instances that are deployed without SSH keys. By leveraging AWS CodeDeploy and Docker, we will show how we achieved semi-immutable and fully immutable infrastructures, and what the challenges and remediations were.
 

SAC401 – 5 Security Automation Improvements You Can Make by Using Amazon CloudWatch Events and AWS Config Rules

This session demonstrates 5 different security and compliance validation actions that you can perform using Amazon CloudWatch Events and AWS Config rules, with a focus on live demos and the actual code for the various controls, actions, and remediation features, and on how to use various AWS services and features to build them. The demos in this session include CIS Amazon Web Services Foundations validation; host-based AWS Config rules validation using AWS Lambda, SSH, and VPC-E; automatic creation and assignment of MFA tokens when new users are created; and automatic instance isolation based on SSH logons or VPC Flow Logs deny logs.
 

re:Source Mini Con for Security Services sessions

The re:Source Mini Con for Security Services offers you an opportunity to dive even deeper into security and compliance topics. Think of it as a one-day, fully immersive mini-conference. The Mini Con will take place in The Mirage in Las Vegas.

SEC301 – Audit Your AWS Account Against Industry Best Practices: The CIS AWS Benchmarks

Audit teams can consistently evaluate the security of an AWS account. Best practices greatly reduce complexity when managing risk and auditing the use of AWS for critical, audited, and regulated systems. You can integrate these security checks into your security and audit ecosystem. Center for Internet Security (CIS) benchmarks are incorporated into products developed by 20 security vendors, are referenced by PCI 3.1 and FedRAMP, and are included in the National Vulnerability Database (NVD) National Checklist Program (NCP). This session shows you how to implement foundational security measures in your AWS account. The prescribed best practices help make implementation of core AWS security measures more straightforward for security teams and AWS account owners.
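One CIS AWS Foundations-style check can be run locally by parsing an IAM credential report (in practice fetched via `iam get-credential-report`). The two-row report below is fabricated but uses the real report's column names; the CIS rule numbers shown are indicative and not from the session.

```python
import csv
import io

# Fabricated credential report with the real report's column names.
REPORT = """\
user,password_enabled,mfa_active,access_key_1_active
<root_account>,not_supported,false,true
alice,true,false,false
"""

def cis_findings(report_csv):
    """Flag root access keys and console users without MFA, two checks
    in the spirit of the CIS AWS Foundations Benchmark."""
    findings = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["user"] == "<root_account>" and row["access_key_1_active"] == "true":
            findings.append("CIS 1.12: root access key exists")
        if row["password_enabled"] == "true" and row["mfa_active"] == "false":
            findings.append("CIS 1.2: console user %s has no MFA" % row["user"])
    return findings

print(cis_findings(REPORT))
```

Running such checks on a schedule, and feeding the findings into your audit ecosystem, is the integration the session describes.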
 

SEC302 – WORKSHOP: Working with AWS Identity and Access Management (IAM) Policies and Configuring Network Security Using VPCs and Security Groups

In this 2.5-hour workshop, we will show you how to manage permissions by drafting AWS IAM policies that adhere to the principle of least privilege: granting only the permissions required to achieve a task. You will learn all the ins and outs of drafting and applying IAM policies appropriately to help secure your AWS resources.
 
In addition, we will show you how to configure network security using VPCs and security groups. 
 

SEC303 – Get the Most from AWS KMS: Architecting Applications for High Security

AWS Key Management Service provides an easy and cost-effective way to secure your data in AWS. In this session, you learn about leveraging the latest features of the service to minimize risk for your data. We also review the recently released Import Key feature that gives you more control over the encryption process by letting you bring your own keys to AWS.
 

SEC304 – Reduce Your Blast Radius by Using Multiple AWS Accounts Per Region and Service

This session shows you how to use multiple AWS accounts per region and service to limit the impact of a critical event such as a security breach. Partitioning your workloads across accounts defines clear boundaries and provides blast-radius isolation.
 

SEC305 – Scaling Security Resources for Your First 10 Million Customers

Cloud computing offers many advantages, such as the ability to scale your web applications or website on demand. But how do you scale your security and compliance infrastructure along with the business? Join this session to understand best practices for scaling your security resources as you grow from zero to millions of users. Specifically, you learn the following:
  • How to scale your security and compliance infrastructure to keep up with a rapidly expanding threat base.
  • The security implications of scaling for numbers of users and numbers of applications, and how to satisfy both needs.
  • How agile development with integrated security testing and validation leads to a secure environment.
  • Best practices and design patterns of a continuous delivery pipeline and the appropriate security-focused testing for each.
  • The necessity of treating your security as code, just as you would do with infrastructure.
The services covered in this session include AWS IAM, Auto Scaling, Amazon Inspector, AWS WAF, and Amazon Cognito.
 

SEC306 – WORKSHOP: How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0

AWS supports identity federation using SAML (Security Assertion Markup Language) 2.0. Using SAML, you can configure your AWS accounts to integrate with your identity provider (IdP). Once configured, your federated users are authenticated and authorized by your organization’s IdP, and then can use single sign-on (SSO) to sign in to the AWS Management Console. This not only obviates the need for your users to remember yet another user name and password, but it also streamlines identity management for your administrators. This is great if your federated users want to access the AWS Management Console, but what if they want to use the AWS CLI or programmatically call AWS APIs?
 
In this 2.5-hour workshop, we will show you how you can implement federated API and CLI access for your users. The examples provided use the AWS Python SDK and some additional client-side integration code. If you have federated users that require this type of access, implementing this solution should earn you more than one high five on your next trip to the water cooler. 
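As a rough sketch of the client-side integration code (all ARNs and the helper name below are hypothetical), the function packages the parameters that an STS AssumeRoleWithSAML call expects; in a real script you would hand them to the AWS Python SDK:

```python
import base64

def build_saml_assume_role_request(role_arn, principal_arn, saml_response_xml):
    """Package the parameters for an STS AssumeRoleWithSAML call.

    The SAML response captured from the IdP's sign-in flow must be
    base64-encoded before it is sent to STS.
    """
    return {
        "RoleArn": role_arn,
        "PrincipalArn": principal_arn,
        "SAMLAssertion": base64.b64encode(saml_response_xml.encode()).decode(),
    }

request = build_saml_assume_role_request(
    "arn:aws:iam::123456789012:role/FederatedAdmin",       # hypothetical role
    "arn:aws:iam::123456789012:saml-provider/ExampleIdP",  # hypothetical IdP
    "<samlp:Response>...</samlp:Response>",                # captured from your IdP
)

# With boto3 and network access, you would then call:
#   creds = boto3.client("sts").assume_role_with_saml(**request)["Credentials"]
# and export AccessKeyId, SecretAccessKey, and SessionToken for the AWS CLI.
print(request["RoleArn"])
```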
 

SEC307 – Microservices, Macro Security Needs: How Nike Uses a Multi-Layer, End-to-End Security Approach to Protect Microservice-Based Solutions at Scale

Microservice architectures provide numerous benefits but also have significant security challenges. This session presents how Nike uses layers of security to protect consumers and business. We show how network topology, network security primitives, identity and access management, traffic routing, secure network traffic, secrets management, and host-level security (antivirus, intrusion prevention system, intrusion detection system, file integrity monitoring) all combine to create a multilayer, end-to-end security solution for our microservice-based premium consumer experiences. Technologies to be covered include Amazon Virtual Private Cloud, access control lists, security groups, IAM roles and profiles, AWS KMS, NAT gateways, ELB load balancers, and Cerberus (our cloud-native secrets management solution).
 

SEC308 – Securing Enterprise Big Data Workloads on AWS

Security of big data workloads in a hybrid IT environment often comes as an afterthought. This session discusses how enterprises can architect secure big data workloads on AWS. We cover the application of authentication, authorization, encryption, and other security principles and mechanisms to workloads that leverage Amazon Elastic MapReduce and Amazon Redshift.
 

SEC309 – Proactive Security Testing in AWS: From Early Implementation to Deployment Security Testing

Attend this session to learn about security testing your applications in AWS. Effective security testing is challenging, but multiple features and services within AWS make security testing easier. This session covers common approaches to testing, including how we think about testing within AWS, how to apply AWS services to your test setup, remediating findings, and automation.
 

SEC310 – Mitigating DDoS Attacks on AWS: Five Vectors and Four Use Cases

Distributed denial of service (DDoS) attack mitigation has traditionally been a challenge for those hosting on fixed infrastructure. In the cloud, users can build applications on elastic infrastructure that is capable of mitigating and absorbing DDoS attacks. What once required overprovisioning, additional infrastructure, or third-party services is now an inherent capability of many cloud-based applications. This session explains common DDoS attack vectors and how AWS customers with different use cases are addressing these challenges. As part of the session, we show you how to build applications that are resilient to DDoS and demonstrate how they work in practice.
 

SEC311 – How to Automate Policy Validation

Managing permissions across a growing number of identities and resources can be time consuming and complex. Testing, validating, and understanding permissions before and after policy changes are deployed is critical to ensuring that your users and systems have the appropriate level of access. This session walks through the tools that are available to test, validate, and understand the permissions in your account. We demonstrate how to use these tools and how to automate them to continually validate the permissions in your accounts. The tools demonstrated in this session help you answer common questions such as:
  • How does a policy change affect the overall permissions for a user, group, or role?
  • Who has access to perform powerful actions?
  • Which services can this role access?
  • Can a user access a specific Amazon S3 bucket?
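To illustrate the kind of question these tools answer, here is a deliberately simplified policy check (not AWS tooling; it ignores Deny statements, conditions, and much of real IAM semantics, which the actual policy simulator handles for you):

```python
from fnmatch import fnmatch

def policy_allows(policy, action, resource):
    """Crude, illustrative check: does any Allow statement match?

    Real IAM evaluation also applies explicit Denies, conditions,
    and resource policies; this sketch only matches Allow wildcards.
    """
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        if any(fnmatch(action, a) for a in actions) and \
           any(fnmatch(resource, r) for r in resources):
            return True
    return False

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:Get*",
                   "Resource": "arn:aws:s3:::example-bucket/*"}],
}
print(policy_allows(policy, "s3:GetObject", "arn:aws:s3:::example-bucket/report.csv"))  # True
print(policy_allows(policy, "s3:PutObject", "arn:aws:s3:::example-bucket/report.csv"))  # False
```

Automating checks like these against every proposed policy change, before deployment, is exactly what the session demonstrates with the real tools.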

SEC312 – State of the Union for re:Source Mini Con for Security Services

AWS CISO Steve Schmidt presents the state of the union for re:Source Mini Con for Security Services. He addresses the state of the security and compliance ecosystem; large enterprise customer additions in key industries; the vertical view: maturing spaces for AWS security assurance (GxP, IoT, CIS foundations); and the international view: data privacy protections and data sovereignty. The state of the union also addresses a number of new identity, directory, and access services, and closes by looking at what’s on the horizon.
 

SEC401 – Automated Formal Reasoning About AWS Systems

Automatic and semiautomatic mechanical theorem provers are now being used within AWS to find proofs in mathematical logic that establish desired properties of key AWS components. In this session, we outline these efforts and discuss how mechanical theorem provers are used to replay found proofs of desired properties when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system. We consider these use cases:
  • Using constraint solving to show that VPCs have desired safety properties, and maintaining this continuously at each change to the VPC.
  • Using automatic mechanical theorem provers to prove that s2n’s HMAC is correct and maintaining this continuously at each change to the s2n source code.
  • Using semiautomatic mechanical theorem provers to prove desired safety properties of Sassy protocol.
 
– Craig

Month in Review: August 2016

Post Syndicated from Andy Werth original https://blogs.aws.amazon.com/bigdata/post/Tx3MEYSAQDQKHB4/Month-in-Review-August-2016

Another month of big data solutions on the Big Data Blog. Take a look at our summaries below and learn, comment, and share. Thanks for reading!

Readmission Prediction Through Patient Risk Stratification Using Amazon Machine Learning
With this post, learn how to apply advanced analytics concepts like pattern analysis and machine learning to do risk stratification for patient cohorts.

Building and Deploying Custom Applications with Apache Bigtop and Amazon EMR
When you launch a cluster, Amazon EMR lets you choose applications that will run on your cluster. But what if you want to deploy your own custom application? This post shows you how to build a custom application for EMR for Apache Bigtop-based releases 4.x and greater.

Writing SQL on Streaming Data with Amazon Kinesis Analytics – Part 1
This post introduces Amazon Kinesis Analytics and the fundamentals of writing ANSI-standard SQL over streaming data, then works through a simple example application that continuously generates metrics over time windows.

Monitor Your Application for Processing DynamoDB Streams
Learn how to monitor the Amazon Kinesis Client Library (KCL) application you use to process DynamoDB Streams to quickly track and resolve issues or failures so you can avoid losing data. Dashboards, metrics, and application logs all play a part. This post may be most relevant to Java applications running on Amazon EC2 instances.

Data Lake Ingestion: Automatically Partition Hive External Tables with AWS
This post introduces a simple data ingestion and preparation framework based on AWS Lambda, Amazon DynamoDB, and Apache Hive on EMR for data from different sources landing in S3. This solution lets Hive pick up new partitions as data is loaded into S3 because Hive by itself cannot detect new partitions as data lands.
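One way such a framework might generate the Hive DDL for a newly landed partition (the table name, bucket, and key layout below are hypothetical, not the post's actual code) is sketched here:

```python
def add_partition_statement(table, s3_key, bucket):
    """Build a Hive DDL statement for a new partition, assuming keys are
    laid out as <prefix>/year=YYYY/month=MM/day=DD/file (hypothetical layout).
    """
    # Pull the key=value path segments out of the S3 object key.
    parts = [p for p in s3_key.split("/") if "=" in p]
    spec = ", ".join(f"{k}='{v}'" for k, v in (p.split("=", 1) for p in parts))
    location = f"s3://{bucket}/{s3_key.rsplit('/', 1)[0]}/"
    return (f"ALTER TABLE {table} ADD IF NOT EXISTS "
            f"PARTITION ({spec}) LOCATION '{location}'")

stmt = add_partition_statement(
    "clicks", "landing/year=2016/month=08/day=31/events.json", "example-data-lake")
print(stmt)
```

A Lambda function triggered by the S3 object-created event could build a statement like this and submit it to Hive, which is the general shape of the framework the post describes.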

FROM THE ARCHIVE

Powering Gaming Applications with Amazon DynamoDB (July 2014)
Learn how to quickly build a reliable and scalable database tier for a mobile game. We’ll walk through a design example and show how to power a sizable game for less than the cost of a daily cup of coffee. We’ll also profile a fast-growing customer who has scaled to millions of players while saving time and money with Amazon DynamoDB.

——————————————-

Want to learn more about Big Data or Streaming Data? Check out our Big Data and Streaming data educational pages.

Leave a comment below to let us know what big data topics you’d like to see next on the AWS Big Data Blog.

I wish I enjoyed Pokémon Go

Post Syndicated from Eevee original https://eev.ee/blog/2016/07/31/i-wish-i-enjoyed-pok%C3%A9mon-go/

I’ve been trying really hard not to be a sourpuss about this, because everyone seems to enjoy it a lot and I don’t want to be the jerk pissing in their cornflakes.

And yet!

Despite all the potential of the game, despite all the fervor all across the world, it doesn’t tickle my fancy.

It seems like the sort of thing I ought to enjoy. Pokémon is kind of my jam, if you hadn’t noticed. When I don’t enjoy a Pokémon thing, something is wrong with at least one of us.

The app is broken

I’m not talking about the recent update that everyone’s mad about and that I haven’t even tried. They removed pawprints, which didn’t work anyway? That sucks, yeah, but I think it’s more significant that the thing is barely usable.

I’ve gone out hunting Pokémon several times with my partner and their husband. We wandered around for about an hour each time, and like clockwork, the game would just stop working for me every fifteen minutes. It would still run, and the screen would still update, but it would completely ignore all taps or swipes. The only fix seems to be killing it and restarting it, which takes like a week, and meanwhile the rest of my party has already caught the Zubat or whatever and is moving on.

For the brief moments when it works, it seems to be constantly confused about exactly where I am and which way I’m facing. Pokéstops (Poké Stops?) have massive icons when they’re nearby, and more than once I’ve had to mess around with the camera angle to be able to tap a nearby Pokémon, because a cluster of several already-visited Pokéstops are in the way. There’s also a strip along the bottom of the screen, surrounding the menu buttons, where tapping just does nothing at all.

I’ve had the AR Pokémon catching screen — the entire conceit of the game — lag so badly on multiple occasions that a Pokéball just stayed frozen in midair, and I couldn’t tell if I’d hit the Pokémon or not. There was also the time the Pokéball hit the Pokémon, landed on the ground, and… slowly rolled into the distance. For at least five minutes. I’m not exaggerating this time.

The game is much more responsive with AR disabled, so the Pokémon appear on a bland and generic background, which… seems to defeat the purpose of the game.

(Catching Pokémon doesn’t seem to have any real skill to it, either? Maybe I’m missing something, but I don’t understand how I’m supposed to gauge distance to an isolated 3D model and somehow connect this to how fast I flick my finger. I don’t really like “squishy” physics games like Angry Birds, and this is notably worse. It might as well be random.)

I had a better time just enjoying my party’s company and looking at actual wildlife, which in this case consists of cicadas and a few semi-wild rabbits that inexplicably live in a nearby park. I feel that something has gone wrong with your augmented reality game when it is worse than reality.

It’s not about Pokémon

Let’s see if my reasoning is sound, here.

In the mainline Pokémon games, you play as a human, but many of your important interactions are with Pokémon. You carry a number of Pokémon with you. When you encounter a Pokémon, you immediately send out your own. All the NPCs talk about how much they love Pokémon. There are overworld Pokémon hanging out. It’s pretty clear what the focus is. It’s right there on the title screen, even: both the word itself and an actual Pokémon.

Contrast this with Pokémon Go.

Most of the time, the only thing of interest on the screen is your avatar, a human. Once you encounter a Pokémon, you don’t send out your own; it’s just you, and it. In fact, once you catch a Pokémon, you hardly ever interact with it again. You can go look at its stats, assuming you can find it in your party of, what, 250?

The best things I’ve seen done with the app are AR screenshots of Pokémon in funny or interesting real-world places. It didn’t even occur to me that you can only do this with wild Pokémon until I played it. You can’t use the AR feature — again, the main conceit of the game — with your own Pokémon. How obvious is this? How can it not be possible? (If it is possible, it’s so well-hidden that several rounds of poking through the app haven’t revealed how to do it, which is still a knock for hiding the most obvious thing to want to do.)

So you are a human, and you wander around hoping you see Pokémon, and then you catch them, and then they are effectively just a sprite in a list until you feed them to your other Pokémon. And feed them you must, because the only way to level up a Pokémon is to feed them the corpses — sorry, “candies” — of their brethren. The Pokémon themselves aren’t involved in this process; they are passive consumers you fatten up.

If you’re familiar with Nuzlocke runs, you might be aware of just how attached players — or even passive audiences — can get to their Pokémon in mainline games. Yet in Pokémon Go, the critters themselves are just something to collect, just something to have, just something to sacrifice. No other form of interaction is offered.

In Pokémon X and Y, you can pet your Pokémon and feed them cakes, then go solve puzzles with them. They will love you in return. In Pokémon Go, you can swipe to make the model rotate.

There is some kind of battle system in here somewhere, but as far as I can tell, you only ever battle against gym leaders, who are jerks who’ve been playing the damn thing since it came out and have Pokémon whose CP have more digits than you even knew were possible. Also the battling is real-time with some kind of weird gestural interface, so it’s kind of a crapshoot whether you even do the thing you want, a far cry from the ostensibly strategic theme of the mainline games.

If I didn’t know any better, I’d think some no-name third-party company just took an existing product and poorly plastered Pokémon onto it.

There are very few Pokémon per given area

The game is limited to generation 1, the Red/Blue/Yellow series. And that’s fine.

I’ve seen about six of them.

Rumor has it that they are arranged very cleverly, with fire Pokémon appearing in deserts and water Pokémon appearing in waterfronts. That sounds really cool, except that I don’t live at the intersection of fifteen different ecosystems. How do you get ice Pokémon? Visit my freezer?

I freely admit, I’m probably not the target audience here; I don’t have a commute at all, and on an average day I have no reason to leave the house at all. I can understand that I might not see a huge variety, sure. But I’ve seen several friends lamenting that they don’t see much variety on their own commutes, or around the points of interest near where they live.

If you spend most of your time downtown in a major city, the game is probably great; if you live out in the sticks, it sounds a bit barren. It might be a little better if you could actually tell how to find Pokémon that are more than a few feet away — there used to be a distance indicator for nearby Pokémon, which I’m told even worked at one point, but it’s never worked since I first tried the game and it’s gone now.

Ah, of course, there’s always Pokévision, a live map of what Pokémon are where… which Niantic just politely asked to cease and desist.

It’s full of obvious “free-to-play” nudges

I put “free-to-play” in quotes because it’s a big ol’ marketing lie and I don’t know why the gaming community even tolerates the phrase. The game is obviously designed to be significantly worse if you don’t give them money, and there are little reminders of this everywhere.

The most obvious example: eggs rain from the sky, and are the only way to get Pokémon that don’t appear naturally nearby. You have to walk a certain number of kilometers to hatch an egg, much like the mainline games, which is cute.

Ah, but you also have to put an egg in an incubator for the steps to count. And you only start with one. And they’re given to you very rarely, and any beyond the one you start with only have limited uses at a time. And you can carry 9 eggs at a time.

Never fear! You can buy an extra (limited-use) incubator for the low low price of $1.48. Or maybe $1.03. It’s hard to tell, since (following the usual pattern of flagrant dishonesty) you first have to turn real money into game-specific trinkets at one of several carefully obscured exchange rates.

The thing is, you could just sell a Pokémon game. Nintendo has done so quite a few times, in fact. But who would pay for Pokémon Go, in the state it’s in?

In conclusion

This game is bad and I wish it weren’t bad. If you enjoy it, that’s awesome, and I’m not trying to rain on your parade, really. I just wish I enjoyed it too.

CoderDojo Coolest Projects 2016

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2016/

This weekend Philip and I went to Dublin to attend CoderDojo Coolest Projects. We got to meet hundreds of brilliant young digital makers and amazing volunteers.

YoungCoolestProjectAwards

CoderDojo Coolest Projects: a free tech event for the world’s youngest innovators, creators and entrepreneurs

As the event kicked off the news broke that Tim Peake had landed safely back on Earth, which meant Philip had to make some last minute changes to his presentation…

Ben Nuttall on Twitter

“Who knows who this is?” “It’s Tim Peake” “Where is he?” “In space” “No – he’s back on Earth!” pic.twitter.com/elfNXcAwsX

As we walked around the venue we grew more and more impressed by the projects on show. We asked each exhibiting group to talk us through their project, and were genuinely impressed by both the projects and their presentation. The first area we perused was the Scratch projects – games, animations, quizzes and more. I’m not the most accomplished Scratch programmer so I was very impressed with what we were shown.

When we moved on to a room of physical computing projects, we met Iseult Mangan, Ireland’s first Raspberry Pi Certified Educator:

Philip Colligan on Twitter

I met Ireland’s first ever @Raspberry_Pi certified educator @IseultMangan pic.twitter.com/9RbLANKZdX

One of Iseult’s students, Aoibheann, showed us a website she’d made all about Raspberry Pi:

Philip Colligan on Twitter

This 9 year old Coder wrote her own @Raspberry_Pi website: http://dontpasstheraspberryjam.weebly.com/ – check it out! pic.twitter.com/TagshFWt2k

I even bumped into Tim Peake a few times…

Ben Nuttall on Twitter

@astro_timpeake sure gets about pic.twitter.com/4oS1tFgvQu

The Coolest of Projects

Here are some of my favourite projects.

First up, a home-made 3D holographic display. The picture does it no justice, but look close (or click to embiggen) and you’ll see the Scratch cat, which was spinning around as part of a longer animation. The girl who made it said she put it together out of an old CD case. Very cool indeed!

Scratch cat hologram

Scratch cat hologram

Plenty of great robots…

IMG_3999
IMG_4000
IMG_3997
IMG_3985
IMG_3995
IMG_3970

We arrived at a beautiful Pi-powered retro gaming console, and spoke to the maker’s Dad. He was excited for his son to be able to show his project to people from the Raspberry Pi Foundation and asked if we could stick around to wait for him to return. Here he is:

IMG_4047

When I mentioned one of my favourite Mega Drive games, he loaded it up for me to play:

IMG_4051

It took me about 15 years to complete this game – I was playing it before he was born!

This was really impressive: these two girls had made a Wii Remote-controlled hovercraft with a Raspberry Pi:

IMG_4039

Ben Nuttall’s post on Vine

Watch Ben Nuttall’s Vine taken on 18 June 2016.

I met DJ Dhruv, who demonstrated his livecoding skills in Sonic Pi, and gave a very professional presentation involving a number of handshakes:

Ben Nuttall on Twitter

DJ Dhruv is livecoding in @sonic_pi and teaching us about the history of the amen break. @samaaron you’d love this pic.twitter.com/tSrn0CTQzP

Pi-vision: a way to help blind people find their way around…

IMG_3986

Probably my favourite of all, this group created a 3D Minecraft Pi booth using mirrors. They showed me their Python code which ran simultaneously on two Pis, while one played music in Sonic Pi, with cross-application communication between Python and Sonic Pi to coordinate timings. A Herculean effort achieving a wonderful effect.

IMG_3990

IMG_3991

How can you get involved?

If you want to join us in giving more young people the opportunity to learn programming skills, learn to make things with computers, and generally hack things that didn’t need hacking, there are plenty of ways you can get involved. You can:

  • Set up a Raspberry Jam in your area, or volunteer to help out at one near you
  • Start a Code Club at a local primary school, or another venue like a library or community centre
  • Set up a CoderDojo, or offer to help at one near you

Also, I should point out we have a job opening for a senior programme manager. We’re looking for someone with experience running large programmes for young people. If that’s you, be sure to check it out!

Job opening: Senior Programme Manager at Raspberry Pi Foundation

As part of the Raspberry Pi Foundation’s mission to put the power of computing and digital making into the hands of people all over the world, we want to make these skills more relevant and accessible.

It’s kind of a thing to end blog posts with a GIF, so here’s mine:

SecuriTay on Twitter

Machine learning pic.twitter.com/c3sIJPd3PS

The post CoderDojo Coolest Projects 2016 appeared first on Raspberry Pi.

Meet The Interns: Alex

Post Syndicated from Yev original https://www.backblaze.com/blog/meet-interns-alex/

Intern_Alex
It’s summer time and that means our latest crop of interns is arriving! Our first intern started a week ago and has been hard at work in our datacenter, helping our operations department keep our storage pods spinning. Let’s take a moment and learn more about Alex, shall we?

What is your Backblaze Title?
Datacenter Intern.

Where are you originally from?

I’m a native of the Napa Valley, some three or four generations of my family have lived there. I was born at St. Helena Hospital, grew up in Calistoga, and now live in Napa (the city); all in the Napa Valley. Fun fact: the default Windows XP background (“Bliss”) was photographed on Highway 121 just southwest of Napa!

What attracted you to Backblaze?

My brother Zack has worked for Backblaze for at least several years now, and I have always been interested in tech. When I heard that Backblaze might be interested in hiring me as a summer intern between semesters, it seemed like too good of an opportunity to pass up. So, here I am!

What do you expect to learn while being at Backblaze?

I’m really interested in learning more about the operations of the datacenter and all the little details that make everything work in harmony (or what passes for that, at least). I’m sure I’m going to learn a lot about UNIX shell commands, about the failure rate of various types of hard drives, and probably whether anyone is brave enough to eat the last ghost pepper in the break room (not me, seriously).

Where else have you worked?

Technically, the only other employer I’ve had has been a temp agency in Napa, but I’ve gotten various kinds of work from them. Bottling, packaging, and palletizing wine at various wineries mostly; however, I’ve done a few other things like grape sorting during harvest, moving around the contents of various pallets in warehouses, and aerosol can assembly and packaging. Some wine bottles I’ve seen hold 6 or more liters – at that point it’s so heavy you need something with wheels to move a single bottle.

Where did you go to school?

After high school, I originally went to the local community college in Napa, Napa Valley College. There, I did my general education classes and got an AS degree in Natural Science & Mathematics. I currently go to school at California State University, Sacramento (Sac State), majoring in Computer Science. I’m happy to have a job, if even temporarily, where I can learn about something at least tangentially related to the field in which I want to work – experience is everything these days!

What’s your dream job?

A job where I can continually learn, and apply that knowledge in a useful manner.

Favorite place you’ve traveled?

To be honest, I haven’t really traveled much. Of the few places I have been, however, I like Crater Lake in Oregon. It’s a great place if you can manage to be there when the lake is clearly visible, though it is pretty cold sometimes at that altitude.

Favorite hobby?

Gaming, for sure, followed closely by reading and exploring the strange depths of the Internet. Currently working my way through Dark Souls III, although I’ve had to basically relearn how to play as I’m on a different computer with a different controller.

Star Trek or Star Wars?

Star Wars, if only because I’m more familiar with the universe. Not intimately familiar, but I haven’t watched much of Star Trek at all. Star Wars also gave rise to Spaceballs, which is a great movie in my opinion. I do have to give some respect to Star Trek, though, for using the show as an analogue for social issues.

Coke or Pepsi?

Water. I will judge you on your water quality, though; I like cold, clean water and I will go out of my way to find it.

Favorite food?

Oh man, I like a lot of food. I think narrowing it down to one might be sacrilege.

Why do you like certain things?

Because? I’m not sure I can answer that question effectively, my tastes are too eclectic to generalize about all of them.

Anything else you’d like to tell us?

I know random trivia about obscure things. No real reason for it, but if you bring up a topic I might just know something interesting about it. Disclaimer: your definition of interesting may not be the same as mine.

Welcome aboard Alex, we’ll keep the fresh, cold, crisp water stocked for you!

The post Meet The Interns: Alex appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

Motorised Skateboard

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/motorised-skateboard/

Hello there. I’m Alex, the newest inhabitant of Pi Towers. I like to build things like modified Nerf guns and Iron Man masks (Team Stark for life! Sorry Liz), and when I’m not doing that, I get to search for all your amazing Pi projects and share them with the world via our social media. So keep it up!

Since arriving at Pi Towers my imagination has been running on overdrive, thinking of all the possible projects I can do with this incredible micro-powerhouse. I like to make stuff… and now I can make stuff that does stuff, thanks to the versatility of the Pi.

With this in mind, it’s no surprise that my return from lunch on my first day with a skateboard under my (rain-sodden) arm was met with this project in an email from Liz.

No Title

No Description

A Raspberry Pi-powered motorised skateboard, controlled via a Wii Remote? What’s not to love? The skateboard, Raspberry Pi, and console gaming enthusiast in me rejoiced as I wrung rainwater from my hoodie.

raspberry pi skateboard

As part of a university assignment to produce a project piece that incorporates a Raspberry Pi, Tim Maier constructed this beast of a machine using various components that are commonly found over the internet or at local tech stores. Essentially, Tim has provided me with the concept for my first Raspberry Pi project and I already have the deck at my disposal. And a Raspberry Pi. Motors and batteries litter the cupboards at Pi Towers like dead moths. And I’m sure there’s somebody around here I can beg a Wiimote from.

Raspberry Pi Bluetooth Electric Skateboard IFB102

This project was part of an assignment for university where the prerequisite was to build something with a Raspberry Pi.

What I really love about this project is that once again we see how it’s possible to build your own tech items, despite how readily available the complete builds are online or in stores. Not only do you save money – and in the case of a motorised skateboard, we’re easily talking hundreds of pounds – but you also get that added opportunity to smugly declare “Oh this? I made it myself” that you simply don’t get when opening the packaging of something pre-made.

Hats (or skate caps) off to Tim and this wonderful skateboard. Tim: if you’re reading this, I’d love to know what your final mark was!

The post Motorised Skateboard appeared first on Raspberry Pi.

A Raspberry Pi + IKEA arcade table to make yourself

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-ikea-arcade-table-make-yourself/

Barely a month slips by at the moment without my ordering some new flat-packed goodies from IKEA. Our family, still gradually settling into the house we moved into just before our eldest was born, goes about its book-savouring, toy-categorising, craft-supply-hoarding life within a sturdy framework of TROFAST, EKBY and BESTÅ. The really great thing is that much of this furniture lends itself to modification, and spannerspencer’s PIK3A Gaming Table, using a Raspberry Pi and the iconic LACK side table, is a wonderful example.

PIK3A gaming table - a glossy red IKEA LACK table with inlaid monitor, joystick and buttons

Shiny retrogaming loveliness

The build instructions over at element14 are generously illustrated with photographs, bringing this project within reach of people who don’t have a ton of experience, but are happy to chuck some time at it. (If I give this one a go, I’ll probably start by getting a couple of tables so that I have a back-up. The mods to the table don’t need any fancy tools – just a drill, a Stanley knife and a hole saw – but these are the steps at greatest risk of mistakes you can’t undo.) The tutorial takes you through everything from cutting the table so as to avoid too many repeat attempts, to mounting and wiring up the controls, to the code you need to run on the Arduino and how to upload it.

Cutting holes in an IKEA LACK table for buttons and other controls

Holes much neater than the ones I will cut

You can buy a new LACK table for £6 in the UK, although the nice red glossy version in the pictures will set you back a whole £2 more. A Raspberry Pi, an Arduino Leonardo, an old LCD monitor, some cheap computer speakers, a joystick, buttons, cables and connectors, and a power supply complete the bill of materials for this build. If you want to make it extra beautiful or simply catproof it, you can add a sheet of acrylic to protect the monitor, as spannerspencer has. He’s also included a panel mount USB port to make it easy to add USB peripherals later.

A cat standing on a PIK3A gaming table protected with a sheet of transparent acrylic

PIK3A, with added catproofing

The PIK3A Gaming Table went down a storm over at element14, and its successor, the PIK3A Mark II two-player gaming table (using a LACK TV bench) is proving pretty popular too. Give them a go!

The post A Raspberry Pi + IKEA arcade table to make yourself appeared first on Raspberry Pi.