Tag Archives: PIN

Hollywood and Netflix Ask Court to Seize Tickbox Streaming Devices

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-and-netflix-ask-court-to-seize-tickbox-streaming-devices-171209/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, sellers who ship devices with unauthorized add-ons give it a bad reputation.

According to the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies, Tickbox TV is one of these bad actors.

Earlier this year, ACE filed a lawsuit against the Georgia-based company, which sells set-top boxes that allow users to stream a variety of popular media. The Tickbox devices use the Kodi media player and come with instructions on how to add various add-ons.

According to ACE, these devices are nothing more than pirate tools, allowing buyers to stream copyright-infringing content. “TickBox promotes and distributes TickBox TV for infringing use, and that is exactly the result of its use,” they told the court this week.

After the complaint was filed in October, Tickbox made some cosmetic changes to its site, removing some of the allegedly inducing language. The streaming devices remain on sale, but perhaps not for long if the media giants get their way.

This week ACE submitted a request for a preliminary injunction to the court, hoping to stop Tickbox’s sales activities.

“TickBox is intentionally inducing infringement, pure and simple. Plaintiffs respectfully request that the Court enter a preliminary injunction that requires TickBox to halt its flagrantly illegal conduct immediately,” they write in their application.

The companies explain that, since Tickbox is causing irreparable harm, all existing devices should be impounded.

“[A]ll TickBox TV devices in the possession of TickBox and all of its officers, directors, agents, servants, and employees, and all persons in active concert or participation or in privity with any of them are to be impounded and shall be retained by Defendant until further order of the Court,” the proposed order reads.

In addition, Tickbox should push out a software update that removes all infringing add-ons from the devices it has already sold.

“TickBox shall, via software update, remove from all distributed TickBox TV devices all Kodi ‘Themes,’ ‘Builds,’ ‘Addons,’ or any other software that facilitates the infringing public performances of Plaintiffs’ Copyrighted Works.”

Among others, the list of allegedly infringing add-ons and themes includes Spinz, Lodi Black, Stream on Fire, Wookie, Aqua, CMM, Spanish Quasar, Paradox, Covenant, Elysium, UK Turk, Gurzil, Maverick, and Poseidon.

The filing shows that ACE is serious about its efforts to stop the sale of this type of streaming device. Tickbox has yet to reply to the original complaint or the injunction request.

While this is the first US lawsuit of its kind, the anti-piracy conglomerate has been rather active in recent weeks. The group has successfully pressured several addon developers to quit and has been involved in enforcement actions around the globe.

A copy of the proposed preliminary injunction is available here (pdf).


CrimeStoppers Campaign Targets Pirate Set-Top Boxes & Their Users

Post Syndicated from Andy original https://torrentfreak.com/crimestoppers-campaign-targets-pirate-set-top-boxes-their-users-171209/

While many people might believe CrimeStoppers to be an official extension of the police in the UK, the truth is a little more subtle.

CrimeStoppers is a charity that operates a service through which members of the public can report crime anonymously, either using a dedicated phone line or via a website. Callers are not required to give their name, meaning that for those concerned about reprisals or becoming involved in a case for other sensitive reasons, it’s the perfect buffer between them and the authorities.

The people at CrimeStoppers deal with all kinds of crime but perhaps a little surprisingly, they’ve just got involved in the set-top box controversy in the UK.

“Advances in technology have allowed us to enjoy on-screen entertainment in more ways than ever before, with ever increasing amounts of exciting and original content,” the CrimeStoppers campaign begins.

“However, some people are avoiding paying for this content by using modified streaming hardware devices, like a set-top box or stick, in conjunction with software such as illegal apps or add-ons, or illegal mobile apps which allow them to watch new movie releases, TV that hasn’t yet aired, and subscription sports channels for free.”

The campaign has been launched in partnership with the Intellectual Property Office and unnamed “industry partners”. Who these companies are isn’t revealed but given the standard messages being portrayed by the likes of ACE, Premier League and Federation Against Copyright Theft lately, it wouldn’t be a surprise if some or all of them were involved.

Those messages are revealed in a series of four video ads, each taking a different approach towards discouraging the public from using devices loaded with pirate software.

The first video clearly targets the consumer, dispelling the myth that watching pirated video isn’t against the law. It is, that’s not in any doubt, but from the constant tone of the video, one could be forgiven for thinking it’s an extremely serious crime rather than something that is likely to be a civil matter, if anything at all.

It also warns people who are configuring and selling pirate devices that they are breaking the law. Again, this is absolutely true, but that activity is clearly several orders of magnitude more serious than simply viewing. The video blurs the boundaries for what appears to be dramatic effect, however.

Selling and watching is illegal

The second video is all about demonizing the people and groups who may offer set-top boxes to the public.

Instead of portraying the hundreds of “cottage industry” suppliers behind many set-top box sales in the UK, the CrimeStoppers video paints a picture of dark organized crime being the main driver. By buying from these people, the charity warns, criminals are being welcomed in.

“It is illegal. You could also be helping to fund organized crime and bringing it into your community,” the video warns.

Are you funding organized crime?

The third video takes another approach, warning that set-top boxes have few if any parental controls. This could lead to children being exposed to inappropriate content, the charity warns.

“What are your children watching? Does it worry you?” the video asks.

Of course, the same can be said about the Internet, period. Web browsers don’t filter what content children have access to unless parents take pro-active steps to configure special services or software for the purpose.

There’s always the option to supervise children, of course, but Netflix is probably a safer option for parents who prefer a hands-off approach. It’s also considerably more expensive, a fact that won’t have escaped users of these devices.

Got kids? Take care….

Finally, video four picks up a theme that’s becoming increasingly common in anti-piracy campaigns – malware and identity theft.

“Why risk having your identity stolen or your bank account or home network hacked. If you access entertainment or sports using dodgy streaming devices or apps, or illegal addons for Kodi, you are increasing the risks,” the ad warns.

Danger….Danger….

Perhaps of most interest is that this entire campaign, which almost certainly has Big Media behind the scenes in advisory and financial capacities, barely mentions the entertainment industries at all.

Indeed, the success of the whole campaign hinges on people worrying about the supposed ill effects of illicit streaming on them personally and then feeling persuaded to inform on suppliers and others involved in the chain.

“Know of someone supplying or promoting these dodgy devices or software? It is illegal. Call us now and help stop crime in your community,” the videos warn.

That CrimeStoppers has taken on this campaign at all is a bit of a head-scratcher, given the bigger crime picture. Struggling with severe budget cuts, police in the UK are already de-prioritizing a number of crimes, leading to something called “screening out”, a process through which victims are given a crime number but no investigation is carried out.

This means that in 2016, 45% of all reported crimes in Greater Manchester weren’t investigated and a staggering 57% of all recorded domestic burglaries weren’t followed up by the police. But it gets worse.

“More than 62pc of criminal damage and arson offenses were not investigated, along with one in three reported shoplifting incidents,” MEN reports.

Given this backdrop, how will police suddenly find the resources to follow up lots of leads from the public and then subsequently prosecute people who sell pirate boxes? Even if they do, will that be at the expense of yet more “screening out” of other public-focused offenses?

No one is saying that selling pirate devices isn’t a crime or at least worthy of being followed up, but is this niche likely to be important to the public when they’re being told that nothing will be done when their homes are emptied by intruders? “NO” says a comment on one of the CrimeStoppers videos on YouTube.

“This crime affects multi-million dollar corporations, I’d rather see tax payers money invested on videos raising awareness of crimes committed against the people rather than the 0.001%,” it concludes.


The Operations Team Just Got Rich-er!

Post Syndicated from Yev original https://www.backblaze.com/blog/operations-team-just-got-rich-er/

We’re growing at a pretty rapid clip, and as we add more customers, we need people to help keep all of our hard drives spinning. Along with Support, the other department that grows linearly with the number of customers who join us is the Operations team, and they’ve just added a new member to their team, Rich! He joins us as a Network Systems Administrator! Let’s take a moment to learn more about Rich, shall we?

What is your Backblaze Title?
Network Systems Administrator

Where are you originally from?
The Upper Peninsula of Michigan. Da UP, eh!

What attracted you to Backblaze?
The fact that it is a small tech company packed with highly intelligent people and a place where I can also be friends with my peers. I am also huge on cloud storage and backing up your past!

What do you expect to learn while being at Backblaze?
I look forward to expanding my Networking skills and System Administration skills while helping build the best Cloud Storage and Backup Company there is!

Where else have you worked?
I first started working in Data Centers at Viawest. I was previously an Infrastructure Engineer at Twitter and a Production Engineer at Groupon.

Where did you go to school?
I started at Finlandia University in Northern Michigan, carried on to Northwest Florida State, and graduated with my A.S. from North Lake College in Dallas, TX. I then completed my B.S. degree online at WGU.

What’s your dream job?
Sr. Network Engineer

Favorite place you’ve traveled?
I have traveled around a bit in my life. I really liked Dublin, Ireland, but my favorite has to be Puerto Vallarta, Mexico, which is actually where I am getting married in 2019!

Favorite hobby?
Water is my life. I like to wakeboard and wakesurf. I also enjoy biking, hunting, fishing, camping, and anything that has to do with the great outdoors!

Of what achievement are you most proud?
I’m proud of moving up in my career as quickly as I have been. I am also very proud of being able to wakesurf behind a boat without a rope! Lol!

Star Trek or Star Wars?
Star Trek! I grew up on it!

Coke or Pepsi?
H2O 😀

Favorite food?
Mexican Food and Pizza!

Why do you like certain things?
Hmm…. because certain things make other certain things particularly certain!

Anything else you’d like to tell us?
Nope 😀

Who can say no to high quality H2O? Welcome to the team, Rich!


Sean Hodgins’ video-playing Christmas ornament

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sean-hodgins-ornament/

Standard Christmas tree ornaments are just so boring, always hanging there doing nothing. Yawn! Lucky for us, Sean Hodgins has created an ornament that plays classic nineties Christmas adverts, because of nostalgia.

YouTube Christmas Ornament! – Raspberry Pi Project

This Christmas ornament will really take you back…

Ingredients

Sean first 3D printed a small CRT-shaped ornament resembling the family television set in The Simpsons. He then got to work on the rest of the components.

Pi Zero and electronic components — Sean Hodgins Raspberry Pi Christmas ornament

All images featured in this blog post are c/o Sean Hodgins. Thanks, Sean!

The ornament uses a Raspberry Pi Zero W, 2.2″ TFT LCD screen, Mono Amp, LiPo battery, and speaker, plus the usual peripherals. Sean purposely assembled it with jumper wires and tape, so that he can reuse the components for another project after the festive season.

Clip of PowerBoost 1000 LiPo charger — Sean Hodgins Raspberry Pi Christmas ornament

By adding header pins to a PowerBoost 1000 LiPo charger, Sean was able to connect a switch to control the Pi’s power usage. This method is handy if you want to seal your Pi in a casing that blocks access to the power leads. From there, jumper wires connect the audio amplifier, LCD screen, and PowerBoost to the Zero W.

Code

Then, with Raspbian installed on an SD card and SSH enabled on the Zero W, Sean got the screen to work. The type of screen he used requires both SPI and FBTFT to be enabled. His next step was to set up the audio functionality with the help of an Adafruit tutorial.
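In practice, enabling such a screen usually comes down to a couple of lines in /boot/config.txt. The sketch below is a generic example rather than Sean’s actual configuration, and the overlay name is a placeholder that depends on the specific panel:

# /boot/config.txt
dtparam=spi=on
# Replace with the FBTFT overlay that matches your panel (see /boot/overlays)
dtoverlay=pitft28-resistive,rotate=90,speed=32000000,fps=20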

Clip demoing Sean Hodgins Raspberry Pi Christmas ornament

For video playback, Sean installed mplayer before writing a program to extract video content from YouTube*. Once extracted, the video files are saved to the Raspberry Pi, allowing for seamless playback on the screen.
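The playback side of this can be kept very simple. Here is a minimal Java sketch (not Sean’s actual program) that assumes the downloaded clips have already been saved under /home/pi/videos and shells out to mplayer for each one:

import java.io.File;

public class OrnamentPlayer {
    public static void main(String[] args) throws Exception {
        // Assumed location of the locally saved video files
        File videoDir = new File("/home/pi/videos");
        File[] clips = videoDir.listFiles((dir, name) -> name.endsWith(".mp4"));
        if (clips == null || clips.length == 0) return;

        // Loop the clips forever, playing each one fullscreen
        while (true) {
            for (File clip : clips) {
                new ProcessBuilder("mplayer", "-fs", clip.getAbsolutePath())
                        .inheritIO()
                        .start()
                        .waitFor(); // block until the clip finishes
            }
        }
    }
}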

Construct

When fully assembled, the entire build fit snugly within the 3D-printed television set. And as a final touch, Sean added the cut-out lens of a rectangular magnifying glass to give the display the look of a curved CRT screen.

Clip of completed Sean Hodgins Raspberry Pi Christmas ornament in a tree

Then finally, the ornament hangs perfectly on the Christmas tree, up and running and spreading nostalgic warmth.

For more information on the build, check out the Instructables tutorial. And to see all of Sean’s builds, subscribe to his YouTube channel.

Make

If you’re looking for similar projects, have a look at this tutorial by Cabe Atwell for building a Pi-powered ornament that receives and displays text messages.

Have you created Raspberry Pi tree ornaments? Maybe you’ve 3D printed some of your own? We’d love to see what you’re doing with a Raspberry Pi this festive season, so make sure to share your projects with us, either in the comments below or via our social media channels.

 

*At this point, I should note that we don’t support the extraction of video content from YouTube for your own use if you do not have the right permissions. However, since Sean’s device can play back any video, we think it would look great on your tree showing your own family videos from previous years. So, y’know, be good, be legal, and be festive.


Security Vulnerabilities in Certificate Pinning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/security_vulner_10.html

New research found that many banks offer certificate pinning as a security feature, but fail to authenticate the hostname. This leaves the systems open to man-in-the-middle attacks.

From the paper:

Abstract: Certificate verification is a crucial stage in the establishment of a TLS connection. A common security flaw in TLS implementations is the lack of certificate hostname verification but, in general, this is easy to detect. In security-sensitive applications, the usage of certificate pinning is on the rise. This paper shows that certificate pinning can (and often does) hide the lack of proper hostname verification, enabling MITM attacks. Dynamic (black-box) detection of this vulnerability would typically require the tester to own a high security certificate from the same issuer (and often same intermediate CA) as the one used by the app. We present Spinner, a new tool for black-box testing for this vulnerability at scale that does not require purchasing any certificates. By redirecting traffic to websites which use the relevant certificates and then analysing the (encrypted) network traffic we are able to determine whether the hostname check is correctly done, even in the presence of certificate pinning. We use Spinner to analyse 400 security-sensitive Android and iPhone apps. We found that 9 apps had this flaw, including two of the largest banks in the world: Bank of America and HSBC. We also found that TunnelBear, one of the most popular VPN apps was also vulnerable. These apps have a joint user base of tens of millions of users.
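The root cause in the affected apps is custom pinning code that skips the hostname check. As a point of contrast, here is a minimal Java sketch using OkHttp’s CertificatePinner, which applies pins in addition to (not instead of) standard certificate and hostname verification; the hostname and pin value are placeholders, not details from the paper:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class PinnedClient {
    public static void main(String[] args) throws Exception {
        // Pin the expected public-key hash for the host. OkHttp still performs
        // normal chain validation and hostname verification first, which is the
        // step the vulnerable apps effectively disabled.
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("api.example-bank.com",  // hypothetical host
                     "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")  // placeholder pin
                .build();

        OkHttpClient client = new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();

        Request request = new Request.Builder().url("https://api.example-bank.com/").build();
        try (Response response = client.newCall(request).execute()) {
            System.out.println("HTTP status: " + response.code());
        }
    }
}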

News article.

timeShift(GrafanaBuzz, 1w) Issue 25

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/12/08/timeshiftgrafanabuzz-1w-issue-25/

Welcome to TimeShift

This week, a few of us from Grafana Labs, along with 4,000 of our closest friends, headed down to chilly Austin, TX for KubeCon + CloudNativeCon North America 2017. We got to see a number of great talks and were thrilled to see Grafana make appearances in some of the presentations. We were also a sponsor of the conference and handed out a ton of swag (we overnighted some of our custom Grafana scarves, which came in handy for Thursday’s snow).

We also announced Grafana Labs has joined the Cloud Native Computing Foundation as a Silver member! We’re excited to share our expertise in time series data visualization and open source software with the CNCF community.


Latest Release

Grafana 4.6.2 is available and includes some bug fixes:

  • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
  • Color picker: Bug after using textbox input field to change/paste color string #9769
  • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
  • Heatmap: Fixed tooltip for “time series buckets” mode #9332
  • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

Download Grafana 4.6.2 Now


From the Blogosphere

Grafana Labs Joins the CNCF: Grafana Labs has officially joined the Cloud Native Computing Foundation (CNCF). We look forward to working with the CNCF community to democratize metrics and help unify traditionally disparate information.

Automating Web Performance Regression Alerts: Peter and his team needed a faster and easier way to find web performance regressions at the Wikimedia Foundation. Grafana 4’s alerting features were exactly what they needed. This post covers their journey on setting up alerts for both RUM and synthetic testing and shares the alerts they’ve set up on their dashboards.

How To Install Grafana on Ubuntu 17.10: As you probably guessed from the title, this article walks you through installing and configuring Grafana in the latest version of Ubuntu (or earlier releases). It also covers installing plugins using the Grafana CLI tool.

Prometheus: Starting the Server with Alertmanager, cAdvisor and Grafana: Learn how to monitor Docker from scratch using cAdvisor, Prometheus and Grafana in this detailed, step-by-step walkthrough.

Monitoring Java EE Servers with Prometheus and Payara: In this screencast, Adam uses firehose, a Java EE 7+ metrics gateway for Prometheus, to convert the JSON output into Prometheus statistics and visualizes the data in Grafana.

Monitoring Spark Streaming with InfluxDB and Grafana: This article focuses on how to monitor Apache Spark Streaming applications with InfluxDB and Grafana at scale.


GrafanaCon EU, March 1-2, 2018

We are currently reaching out to everyone who submitted a talk to GrafanaCon and will soon publish the final schedule at grafanacon.org.

Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

Lots of plugin updates and a new OpenNMS Helm App plugin to announce! To install or update any plugin in an on-prem Grafana instance, use the grafana-cli tool, or install and update with 1 click on Hosted Grafana.
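For example, a command-line install looks like the following; the plugin ID here is an assumption, so confirm it on the plugin’s page before running, and restart your Grafana server afterwards to pick up the new plugin:

grafana-cli plugins install opennms-helm-app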

NEW PLUGIN

OpenNMS Helm App – The new OpenNMS Helm App plugin replaces the old OpenNMS data source. Helm allows users to create flexible dashboards using both fault management (FM) and performance management (PM) data from OpenNMS® Horizon™ and/or OpenNMS® Meridian™. The old data source is now deprecated.


Install Now

UPDATED PLUGIN

PNP Data Source – This data source plugin (that uses PNP4Nagios to access RRD files) received a small, but important update that fixes template query parsing.


Update

UPDATED PLUGIN

Vonage Status Panel – The latest version of the Status Panel comes with a number of small fixes and changes. Below are a few of the enhancements:

  • Threshold settings – removed Show Always option, and replaced it with 2 options:
    • Display Alias – Select when to show the metric alias.
    • Display Value – Select when to show the metric value.
  • Text format configuration (bold / italic) for warning / critical / disabled states.
  • Option to change the corner radius of the panel. Now you can change the panel’s shape to have rounded corners.

Update

UPDATED PLUGIN

Google Calendar Plugin – This plugin received a small update, so be sure to install version 1.0.4.


Update

UPDATED PLUGIN

Carpet Plot Panel – The Carpet Plot Panel received a fix for IE 11, and also added the ability to choose custom colors.


Update


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

Docker Meetup @ Tuenti | Madrid, Spain – Dec 12, 2017: Javier Provecho: Intro to Metrics with Swarm, Prometheus and Grafana

Learn how to gain visibility in real time for your micro services. We’ll cover how to deploy a Prometheus server with persistence and Grafana, how to enable metrics endpoints for various service types (docker daemon, traefik proxy and postgres) and how to scrape, visualize and set up alarms based on those metrics.

RSVP

Grafana Lyon Meetup n°2 | Lyon, France – Dec 14, 2017: This meetup will cover some of the latest innovations in Grafana and a discussion about automation. Also, free beer and chips, so of course you’re going!

RSVP

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and we’ve heard there were some great talks submitted. There is no need to register; all are welcome.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

We were thrilled to see our dashboards bigger than life at KubeCon + CloudNativeCon this week. Thanks for snapping a photo and sharing!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Hard to believe this is the 25th issue of timeShift! I have a blast writing these roundups, but let me know what you think. Submit a comment on this article below, or post something at our community forum. Find an article I haven’t included? Send it my way. Help us make timeShift better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Is blockchain a security topic? (Opensource.com)

Post Syndicated from jake original https://lwn.net/Articles/740929/rss

At Opensource.com, Mike Bursell looks at blockchain security from the angle of trust. Unlike cryptocurrencies, which are typically pseudonymous, other kinds of blockchains will require mapping users to real-life identities; that raises the trust issue.

What’s really interesting is that, if you’re thinking about moving to a permissioned blockchain or distributed ledger with permissioned actors, then you’re going to have to spend some time thinking about trust. You’re unlikely to be using a proof-of-work system for making blocks—there’s little point in a permissioned system—so who decides what comprises a “valid” block that the rest of the system should agree on? Well, you can rotate around some (or all) of the entities, or you can have a random choice, or you can elect a small number of über-trusted entities. Combinations of these schemes may also work.

If these entities all exist within one trust domain, which you control, then fine, but what if they’re distributors, or customers, or partners, or other banks, or manufacturers, or semi-autonomous drones, or vehicles in a commercial fleet? You really need to ensure that the trust relationships that you’re encoding into your implementation/deployment truly reflect the legal and IRL [in real life] trust relationships that you have with the entities that are being represented in your system.

And the problem is that, once you’ve deployed that system, it’s likely to be very difficult to backtrack, adjust, or reset the trust relationships that you’ve designed.

The Raspberry Pi Christmas shopping list 2017

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/christmas-shopping-list-2017/

Looking for the perfect Christmas gift for a beloved maker in your life? Maybe you’d like to give a relative or friend a taste of the world of coding and Raspberry Pi? Whatever you’re looking for, the Raspberry Pi Christmas shopping list will point you in the right direction.

An ice-skating Raspberry Pi - The Raspberry Pi Christmas Shopping List 2017

For those getting started

Thinking about introducing someone special to the wonders of Raspberry Pi during the holidays? Although you can set up your Pi with peripherals from around your home, such as a mobile phone charger, your PC’s keyboard, and the old mouse dwelling in an office drawer, a starter kit is a nice all-in-one package for the budding coder.



Check out the starter kits from Raspberry Pi Approved Resellers such as Pimoroni, The Pi Hut, ModMyPi, Adafruit, CanaKit…the list is pretty long. Our products page will direct you to your closest reseller, or you can head to element14 to pick up the official Raspberry Pi Starter Kit.



You can also buy the Raspberry Pi Press’s brand-new Raspberry Pi Beginners Book, which includes a Raspberry Pi Zero W, a case, a ready-made SD card, and adapter cables.

Once you’ve presented a lucky person with their first Raspberry Pi, it’s time for them to spread their maker wings and learn some new skills.

MagPi Essentials books - The Raspberry Pi Christmas Shopping List 2017

To help them along, you could pick your favourite from among the Official Projects Book volume 3, The MagPi Essentials guides, and the brand-new third edition of Carrie Anne Philbin’s Adventures in Raspberry Pi. (She is super excited about this new edition!)

And you can always add a link to our free resources on the gift tag.

For the maker in your life

If you’re looking for something for a confident digital maker, you can’t go wrong with adding to their arsenal of electric and electronic bits and bobs that are no doubt cluttering drawers and boxes throughout their house.



Components such as servomotors, displays, and sensors are staples of the maker world. And when it comes to jumper wires, buttons, and LEDs, one can never have enough.



You could also consider getting your person a soldering iron, some helping hands, or small tools such as a Dremel or screwdriver set.

And to make their life a little less messy, pop it all inside a Really Useful Box…because they’re really useful.



For kit makers

While some people like to dive into making head-first and to build whatever comes to mind, others enjoy working with kits.



The Naturebytes kit allows you to record the animal visitors of your garden with the help of a camera and a motion sensor. Footage of your local badgers, birds, deer, and more will be saved to an SD card, or tweeted or emailed to you if it’s in range of WiFi.

Cortec Tiny 4WD - The Raspberry Pi Christmas Shopping List 2017

Coretec’s Tiny 4WD is a kit for assembling a Pi Zero–powered remote-controlled robot at home. Not only is the robot adorable, but building it is also a great introduction to motors and wireless control.



Bare Conductive’s Touch Board Pro Kit offers everything you need to create interactive electronics projects using conductive paint.

Pi Hut Arcade Kit - The Raspberry Pi Christmas Shopping List 2017

Finally, why not help your favourite maker create their own gaming arcade using the Arcade Building Kit from The Pi Hut?

For the reader

For those who like to curl up with a good read, or spend too much of their day on public transport, a book or magazine subscription is the perfect treat.

For makers, hackers, and those interested in new technologies, our brand-new HackSpace magazine and the ever-popular community magazine The MagPi are ideal. Both are available via a physical or digital subscription, and new subscribers to The MagPi also receive a free Raspberry Pi Zero W plus case.

Cover of CoderDojo Nano Make your own game

Marc Scott Beginner's Guide to Coding Book

You can also check out other publications from the Raspberry Pi family, including CoderDojo’s new CoderDojo Nano: Make Your Own Game, Eben Upton and Gareth Halfacree’s Raspberry Pi User Guide, and Marc Scott’s A Beginner’s Guide to Coding. And have I mentioned Carrie Anne’s Adventures in Raspberry Pi yet?

Stocking fillers for everyone

Looking for something small to keep your loved ones occupied on Christmas morning? Or do you have to buy a Secret Santa gift for the office tech? Here are some wonderful stocking fillers to fill your boots with this season.

Pi Hut 3D Christmas Tree - The Raspberry Pi Christmas Shopping List 2017

The Pi Hut 3D Xmas Tree: available as both a pre-soldered and a DIY version, this gadget will work with any 40-pin Raspberry Pi and allows you to create your own mini light show.



Google AIY Voice kit: build your own home assistant using a Raspberry Pi, the MagPi Essentials guide, and this brand-new kit. “Google, play Mariah Carey again…”



Pimoroni’s Raspberry Pi Zero W Project Kits offer everything you need, including the Pi, to make your own time-lapse cameras, music players, and more.



The official Raspberry Pi Sense HAT, Camera Module, and cases for the Pi 3 and Pi Zero will complete the collection of any Raspberry Pi owner, while also opening up exciting project opportunities.

STEAM gifts that everyone will love

Awesome Astronauts | Building LEGO’s Women of NASA!

LEGO Ideas brought out this amazing ‘Women of NASA’ set, and I thought it would be fun to build, play and learn from these inspiring women! First up, let’s discover a little more about Sally Ride and Mae Jemison, two AWESOME ASTRONAUTS!

Treat the kids, and big kids, in your life to the newest LEGO Ideas set, the Women of NASA — starring Nancy Grace Roman, Margaret Hamilton, Sally Ride, and Mae Jemison!



Explore the world of wearables with Pimoroni’s sewable, hackable, wearable, adorable Bearables kits.



Add lights and motors to paper creations with the Activating Origami Kit, available from The Pi Hut.




We all loved Hidden Figures, and the STEAM enthusiast you know will do too. The film’s available on DVD, and you can also buy the original book, along with other fascinating non-fiction such as Rebecca Skloot’s The Immortal Life of Henrietta Lacks, Rachel Ignotofsky’s Women in Science, and Sydney Padua’s (mostly true) The Thrilling Adventures of Lovelace and Babbage.

Have we missed anything?

With so many amazing kits, HATs, and books available from members of the Raspberry Pi community, it’s hard to only pick a few. Have you found something splendid for the maker in your life? Maybe you’ve created your own kit that uses the Raspberry Pi? Share your favourites with us in the comments below or via our social media accounts.


About the Amazon Trust Services Migration

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/669-2/

Amazon Web Services is moving the certificates for our services—including Amazon SES—to use our own certificate authority, Amazon Trust Services. We have carefully planned this change to minimize the impact it will have on your workflow. Most customers will not have to take any action during this migration.

About the Certificates

The Amazon Trust Services Certificate Authority (CA) uses the Starfield Services CA, which has been valid since 2005. The Amazon Trust Services certificates are available in most major operating systems released in the past 10 years, and are also trusted by all modern web browsers.

If you send email through the Amazon SES SMTP interface using a mail server that you operate, we recommend that you confirm that the appropriate certificates are installed. You can test whether your server trusts the Amazon Trust Services CAs by visiting the following URLs (for example, by using cURL):

If you see a message stating that the certificate issuer is not recognized, then you should install the appropriate root certificate. You can download individual certificates from https://www.amazontrust.com/repository. The process of adding a trusted certificate to your server varies depending on the operating system you use. For more information, see “Adding New Certificates,” below.
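If you would rather script this check than test by hand, the following minimal Java sketch opens a TLS connection using the default trust store. It assumes the good.sca1a.amazontrust.com test endpoint listed in the Amazon Trust repository (confirm the current test URLs there); an SSLHandshakeException indicates the root certificate is not installed:

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class AmazonTrustCheck {
    public static void main(String[] args) throws Exception {
        // Test endpoint (assumed from https://www.amazontrust.com/repository)
        URL url = new URL("https://good.sca1a.amazontrust.com/");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.connect();  // throws SSLHandshakeException if the CA is not trusted
        System.out.println("Certificate trusted, HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}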

AWS SDKs and CLI

Recent versions of the AWS SDKs and the AWS CLI are not impacted by this change. If you use an AWS SDK or a version of the AWS CLI released prior to February 5, 2015, you should upgrade to the latest version.

Potential Issues

If your system is configured to use a very restricted list of root CAs (for example, if you use certificate pinning), you may be impacted by this migration. In this situation, you must update your pinned certificates to include the Amazon Trust Services CAs.

Adding New Root Certificates

The following sections list the steps you can take to install the Amazon Root CA certificates on your systems if they are not already present.

macOS

To install a new certificate on a macOS server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. At the command prompt, type the following command to install the certificate: sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /path/to/certificatename.crt, replacing /path/to/certificatename.crt with the full path to the certificate file.

Windows Server

To install a new certificate on a Windows server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. At the command prompt, type the following command to install the certificate: certutil -addstore -f "ROOT" c:\path\to\certificatename.crt, replacing c:\path\to\certificatename.crt with the full path to the certificate file.

Ubuntu

To install a new certificate on an Ubuntu (or similar) server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. Copy the certificate file to the directory /usr/local/share/ca-certificates/
  4. At the command prompt, type the following command to update the certificate authority store: sudo update-ca-certificates

Red Hat Enterprise Linux/Fedora/CentOS

To install a new certificate on a Red Hat Enterprise Linux (or similar) server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. Copy the certificate file to the directory /etc/pki/ca-trust/source/anchors/
  4. At the command line, type the following command to enable dynamic certificate authority configuration: sudo update-ca-trust force-enable
  5. At the command line, type the following command to update the certificate authority store: sudo update-ca-trust extract

To learn more about this migration, see How to Prepare for AWS’s Move to Its Own Certificate Authority on the AWS Security Blog.

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one hand clapping: you rarely hear the competing side, and thus the pro-regulation argument may sound more attractive than it is. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it was government that created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities that will serve only their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that governs every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios for what monopoly power supposedly grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices and that it’s only regulating against these conspiracy-theory evils. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5Mbps to 25Mbps. That’s because while most consumers have multiple choices at 5Mbps, fewer consumers have multiple choices at 25Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, the rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. The claim that it’s monopolistic power the FCC cares about here is a lie. As the Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent-seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (so it wasn’t a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC action when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to slower speeds (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t much mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, and Comcast had no competing download service. They were only reacting to the fact that their networks were overloaded, and they did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this because it recognizes it’s in the customer’s interest.

Mobile providers especially have a great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate; they must ask the FCC for permission first. And with the new heavy-handed FCC rules, the environment has become hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left wing. The term itself, “net neutrality”, is just a slogan, varying from person to person and from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

How to Easily Apply Amazon Cloud Directory Schema Changes with In-Place Schema Upgrades

Post Syndicated from Mahendra Chheda original https://aws.amazon.com/blogs/security/how-to-easily-apply-amazon-cloud-directory-schema-changes-with-in-place-schema-upgrades/

Now, Amazon Cloud Directory makes it easier for you to apply schema changes across your directories with in-place schema upgrades. Your directory now remains available while Cloud Directory applies backward-compatible schema changes such as the addition of new fields. Without migrating data between directories or applying code changes to your applications, you can upgrade your schemas. You also can view the history of your schema changes in Cloud Directory by using version identifiers, which help you track and audit schema versions across directories. If you have multiple instances of a directory with the same schema, you can view the version history of schema changes to manage your directory fleet and ensure that all directories are running with the same schema version.

In this blog post, I demonstrate how to perform an in-place schema upgrade and use schema versions in Cloud Directory. I add additional attributes to an existing facet and add a new facet to a schema. I then publish the new schema and apply it to running directories, upgrading the schema in place. I also show how to view the version history of a directory schema, which helps me to ensure my directory fleet is running the same version of the schema and has the correct history of schema changes applied to it.

Note: I share Java code examples in this post. I assume that you are familiar with the AWS SDK and can use Java-based code to build a Cloud Directory code example. You can apply the concepts I cover in this post to other programming languages such as Python and Ruby.

Cloud Directory fundamentals

I will start by covering a few Cloud Directory fundamentals. If you are already familiar with the concepts behind Cloud Directory facets, schemas, and schema lifecycles, you can skip to the next section.

Facets: Groups of attributes. You use facets to define object types. For example, you can define a device schema by adding facets such as computers, phones, and tablets. A computer facet can track attributes such as serial number, make, and model. You can then use the facets to create computer objects, phone objects, and tablet objects in the directory to which the schema applies.

Schemas: Collections of facets. Schemas define which types of objects can be created in a directory (such as users, devices, and organizations) and enforce validation of data for each object class. All data within a directory must conform to the applied schema. As a result, the schema definition is essentially a blueprint to construct a directory with an applied schema.

Schema lifecycle: The four distinct states of a schema: Development, Published, Applied, and Deleted. Schemas in the Published and Applied states have version identifiers and cannot be changed. Schemas in the Applied state are used by directories for validation as applications insert or update data. You can change schemas in the Development state as many times as you need to. In-place schema upgrades allow you to apply schema changes to an existing Applied schema in a production directory without the need to export and import the data populated in the directory.

How to add attributes to a computer inventory application schema and perform an in-place schema upgrade

To demonstrate how to set up schema versioning and perform an in-place schema upgrade, I will use an example of a computer inventory application that uses Cloud Directory to store relationship data. Let’s say that at my company, AnyCompany, we use this computer inventory application to track all computers we give to our employees for work use. I previously created a ComputerSchema and assigned its version identifier as 1. This schema contains one facet called ComputerInfo that includes attributes for SerialNumber, Make, and Model, as shown in the following schema details.

Schema: ComputerSchema
Version: 1

Facet: ComputerInfo
Attribute: SerialNumber, type: Integer
Attribute: Make, type: String
Attribute: Model, type: String
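In the JSON form consumed by the putSchemaFromJson call used below, this facet might look roughly like the following. This is a hedged sketch: the key names reflect my reading of the Cloud Directory JSON schema format, so verify them against the service documentation before relying on them.

{
  "facets": {
    "ComputerInfo": {
      "facetAttributes": {
        "SerialNumber": {
          "attributeDefinition": { "attributeType": "NUMBER" },
          "requiredBehavior": "REQUIRED_ALWAYS"
        },
        "Make": {
          "attributeDefinition": { "attributeType": "STRING" },
          "requiredBehavior": "REQUIRED_ALWAYS"
        },
        "Model": {
          "attributeDefinition": { "attributeType": "STRING" },
          "requiredBehavior": "REQUIRED_ALWAYS"
        }
      },
      "objectType": "LEAF_NODE"
    }
  }
}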

AnyCompany has offices in Seattle, Portland, and San Francisco. I have deployed the computer inventory application for each of these three locations. As shown in the lower left part of the following diagram, ComputerSchema is in the Published state with a version of 1. The Published schema is applied to SeattleDirectory, PortlandDirectory, and SanFranciscoDirectory for AnyCompany’s three locations. Implementing separate directories for different geographic locations when you don’t have any queries that cross location boundaries is a good data partitioning strategy and gives your application better response times with lower latency.

Diagram of ComputerSchema in Published state and applied to three directories

Legend for the diagrams in this post

The following code example creates the schema in the Development state by using a JSON file, publishes the schema, and then creates directories for the Seattle, Portland, and San Francisco locations. For this example, I assume the schema has been defined in the JSON file. The createSchema API creates a schema Amazon Resource Name (ARN) with the name defined in the variable, SCHEMA_NAME. I can use the putSchemaFromJson API to add specific schema definitions from the JSON file.

// The utility method to get valid Cloud Directory schema JSON
String validJson = getJsonFile("ComputerSchema_version_1.json");

String SCHEMA_NAME = "ComputerSchema";

String developmentSchemaArn = client.createSchema(new CreateSchemaRequest()
        .withName(SCHEMA_NAME))
        .getSchemaArn();

// Put the schema document in the Development schema
PutSchemaFromJsonResult result = client.putSchemaFromJson(new PutSchemaFromJsonRequest()
        .withSchemaArn(developmentSchemaArn)
        .withDocument(validJson));

The following code example takes the schema that is currently in the Development state and publishes the schema, changing its state to Published.

String SCHEMA_VERSION = "1";
String publishedSchemaArn = client.publishSchema(
        new PublishSchemaRequest()
        .withDevelopmentSchemaArn(developmentSchemaArn)
        .withVersion(SCHEMA_VERSION))
        .getPublishedSchemaArn();

// Our Published schema ARN is as follows
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1

The following code example creates a directory named SeattleDirectory and applies the published schema. The createDirectory API call creates a directory by using the published schema provided in the API parameters. Note that Cloud Directory stores a version of the schema in the directory in the Applied state. I will use similar code to create directories for PortlandDirectory and SanFranciscoDirectory.

String DIRECTORY_NAME = "SeattleDirectory"; 

CreateDirectoryResult directory = client.createDirectory(
        new CreateDirectoryRequest()
        .withName(DIRECTORY_NAME)
        .withSchemaArn(publishedSchemaArn));

String directoryArn = directory.getDirectoryArn();
String appliedSchemaArn = directory.getAppliedSchemaArn();

// This code section can be reused to create directories for Portland and San Francisco locations with the appropriate directory names

// Our directory ARN is as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX

// Our applied schema ARN is as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1

Revising a schema

Now let's say my company, AnyCompany, wants to record more information about each computer and to track which employees have been assigned a computer for work use. I modify the schema to add two attributes to the ComputerInfo facet: Description and OSVersion (operating system version). I make Description optional because it is not important for me to track this attribute for the computer objects I create. I make OSVersion mandatory because it is critical for me to track it for all computer objects so that I can make changes such as applying security patches or making upgrades. Because I make OSVersion mandatory, I must provide a default value that Cloud Directory applies to objects created before the schema revision, in order to maintain backward compatibility. Note that you can later replace this default in any individual object with a different value.
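
For reference, the revised OSVersion attribute in the JSON schema document might look roughly like the following sketch. I'm approximating the shape of the Cloud Directory JSON schema format here, so treat the exact field names as assumptions and consult the JSON schema format documentation for the authoritative layout.

{
  "facets": {
    "ComputerInfo": {
      "facetAttributes": {
        "OSVersion": {
          "attributeDefinition": {
            "attributeType": "STRING",
            "defaultValue": { "stringValue": "Windows 7" }
          },
          "requiredBehavior": "REQUIRED_ALWAYS"
        }
      }
    }
  }
}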

I also add a new facet to track computer assignment information, shown in the following updated schema as the ComputerAssignment facet. This facet tracks these additional attributes: Name (the name of the person to whom the computer is assigned), EMail (the email address of the assignee), Department, and CostCenter (the assignee's department cost center). Note that Cloud Directory refers to the previously available version identifier as the Major Version. Because I can now add a minor version to a schema, I also denote the changed schema as Minor Version A.

Schema: ComputerSchema
Major Version: 1
Minor Version: A 

Facet: ComputerInfo
Attribute: SerialNumber, type: Integer 
Attribute: Make, type: String
Attribute: Model, type: String
Attribute: Description, type: String, required: NOT_REQUIRED
Attribute: OSVersion, type: String, required: REQUIRED_ALWAYS, default: "Windows 7"

Facet: ComputerAssignment
Attribute: Name, type: String
Attribute: EMail, type: String
Attribute: Department, type: String
Attribute: CostCenter, type: Integer

The following diagram shows the changes that were made when I added another facet to the schema and attributes to the existing facet. The highlighted area of the diagram (bottom left) shows that the schema changes were published.

Diagram showing that schema changes were published

The following code example revises the existing Development schema by adding the new attributes to the ComputerInfo facet and by adding the ComputerAssignment facet. I use a new JSON file for the schema revision, and for the purposes of this example, I am assuming the JSON file has the full schema including planned revisions.

// The utility method to get a valid CloudDirectory schema JSON
String schemaJson = getJsonFile("ComputerSchema_version_1_A.json");

// Put the schema document in the Development schema
PutSchemaFromJsonResult result = client.putSchemaFromJson(
        new PutSchemaFromJsonRequest()
        .withSchemaArn(developmentSchemaArn)
        .withDocument(schemaJson));

Upgrading the Published schema

The following code example performs an in-place schema upgrade of the Published schema with schema revisions (it adds new attributes to the existing facet and another facet to the schema). The upgradePublishedSchema API upgrades the Published schema with backward-compatible changes from the Development schema.

// From an earlier code example, I know the publishedSchemaArn has this value: "arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1"

// Upgrade publishedSchemaArn to minorVersion A. The Development schema must be backward compatible with 
// the existing publishedSchemaArn. 

String minorVersion = "A";

UpgradePublishedSchemaResult upgradePublishedSchemaResult = client.upgradePublishedSchema(new UpgradePublishedSchemaRequest()
        .withDevelopmentSchemaArn(developmentSchemaArn)
        .withPublishedSchemaArn(publishedSchemaArn)
        .withMinorVersion(minorVersion));

String upgradedPublishedSchemaArn = upgradePublishedSchemaResult.getUpgradedSchemaArn();

// The Published schema ARN after the upgrade shows a minor version as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1/A

Upgrading the Applied schema

The following diagram shows the in-place schema upgrade for the SeattleDirectory directory. I perform the schema upgrade so that the new schema is reflected in all three directories. As a reminder, I added new attributes to the ComputerInfo facet and also added the ComputerAssignment facet. After the schema and directory upgrade, I can create objects for the ComputerInfo and ComputerAssignment facets in the SeattleDirectory. Any objects that were created with the old facet definition for ComputerInfo will now use the default values for any additional attributes defined in the new schema.

Diagram of the in-place schema upgrade for the SeattleDirectory directory

I use the following code example to perform an in-place upgrade of the SeattleDirectory to a Major Version of 1 and a Minor Version of A. Note that you should change a Major Version identifier in a schema to make backward-incompatible changes such as changing the data type of an existing attribute or dropping a mandatory attribute from your schema. Backward-incompatible changes require directory data migration from a previous version to the new version. You should change a Minor Version identifier in a schema to make backward-compatible upgrades such as adding additional attributes or adding facets, which in turn may contain one or more attributes. The upgradeAppliedSchema API lets me upgrade an existing directory with a different version of a schema.

// This upgrades ComputerSchema version 1 of the Applied schema in SeattleDirectory to Major Version 1 and Minor Version A
// The schema must be backward compatible or the API will fail with IncompatibleSchemaException

UpgradeAppliedSchemaResult upgradeAppliedSchemaResult = client.upgradeAppliedSchema(new UpgradeAppliedSchemaRequest()
        .withDirectoryArn(directoryArn)
        .withPublishedSchemaArn(upgradedPublishedSchemaArn));

String upgradedAppliedSchemaArn = upgradeAppliedSchemaResult.getUpgradedSchemaArn();

// The Applied schema ARN after the in-place schema upgrade will appear as follows
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1

// This code section can be reused to upgrade directories for the Portland and San Francisco locations with the appropriate directory ARN

Note: For backward compatibility, Cloud Directory does not include the Minor Version identifier in the Applied schema ARN, which enables the application to work across older and newer versions of the directory.

The following diagram shows the changes that are made when I perform an in-place schema upgrade in the two remaining directories, PortlandDirectory and SanFranciscoDirectory. I make these calls sequentially, upgrading PortlandDirectory first and then upgrading SanFranciscoDirectory. I use the same code example that I used earlier to upgrade SeattleDirectory. Now, all my directories are running the most current version of the schema. Also, I made these schema changes without having to migrate data and while maintaining my application’s high availability.

Diagram showing the changes that are made with an in-place schema upgrade in the two remaining directories

Schema revision history

I can now view the schema revision history for any of AnyCompany’s directories by using the listAppliedSchemaArns API. Cloud Directory maintains the five most recent versions of applied schema changes. Similarly, to inspect the current Minor Version that was applied to my schema, I use the getAppliedSchemaVersion API. The listAppliedSchemaArns API returns the schema ARNs based on my schema filter as defined in withSchemaArn.

I use the following code example to query an Applied schema for its version history.

// This returns the five most recent Minor Versions associated with a Major Version
ListAppliedSchemaArnsResult listAppliedSchemaArnsResult = client.listAppliedSchemaArns(new ListAppliedSchemaArnsRequest()
        .withDirectoryArn(directoryArn)
        .withSchemaArn(upgradedAppliedSchemaArn));

// Note: The listAppliedSchemaArns API without the SchemaArn filter returns all the Major Versions in a directory

The listAppliedSchemaArns API returns the two ARNs as shown in the following output.

arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1
arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1/A

The following code example queries an Applied schema for current Minor Version by using the getAppliedSchemaVersion API.

// This returns the current Applied schema's Minor Version ARN 

GetAppliedSchemaVersionResult getAppliedSchemaVersionResult = client.getAppliedSchemaVersion(
        new GetAppliedSchemaVersionRequest()
        .withSchemaArn(upgradedAppliedSchemaArn));

The getAppliedSchemaVersion API returns the current Applied schema ARN with a Minor Version, as shown in the following output.

arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1/A

If you have a lot of directories, schema revision API calls can help you audit your directory fleet and ensure that all directories are running the same version of a schema. Such auditing can help you ensure high integrity of directories across your fleet.
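
A minimal sketch of such an audit, assuming client is the same Cloud Directory client used in the earlier examples, might loop over every directory in the account and print the schema ARNs applied to each one, so any directory running an older version stands out.

// List every directory, then print the Major Version schema ARNs applied to
// each one. Directories that haven't been upgraded will show only the older ARN.
ListDirectoriesResult directories = client.listDirectories(new ListDirectoriesRequest());

for (Directory dir : directories.getDirectories()) {
    ListAppliedSchemaArnsResult appliedSchemas = client.listAppliedSchemaArns(
            new ListAppliedSchemaArnsRequest()
            .withDirectoryArn(dir.getDirectoryArn()));

    System.out.println(dir.getName() + " -> " + appliedSchemas.getSchemaArns());
}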

Summary

You can use in-place schema upgrades to make changes to your directory schema as you evolve your data set to match the needs of your application. An in-place schema upgrade allows you to maintain high availability for your directory and applications while the upgrade takes place. For more information about in-place schema upgrades, see the in-place schema upgrade documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this post, start a new thread in the Directory Service forum or contact AWS Support.

– Mahendra

 

Running Windows Containers on Amazon ECS

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/running-windows-containers-on-amazon-ecs/

This post was developed and written by Jeremy Cowan, Thomas Fuller, Samuel Karp, and Akram Chetibi.

Containers have revolutionized the way that developers build, package, deploy, and run applications. Initially, containers only supported code and tooling for Linux applications. With the release of Docker Engine for Windows Server 2016, Windows developers have started to realize the gains that their Linux counterparts have experienced for the last several years.

This week, we're adding support for running production workloads in Windows containers using Amazon Elastic Container Service (Amazon ECS). Amazon ECS now provides an ECS-Optimized Windows Server Amazon Machine Image (AMI). This AMI is based on the EC2 Windows Server 2016 Datacenter AMI and includes Docker 17.06.2-ee-5 (Enterprise Edition) and ECS Agent 1.16, which now runs as a native Windows service. It also provides improved instance and container launch time performance.

In this post, I discuss the benefits of this new support, and walk you through getting started running Windows containers with Amazon ECS.

When AWS released the Windows Server 2016 Base with Containers AMI, the ECS agent ran as a process, which made it difficult to monitor and manage. As a service, the agent can be health-checked, managed, and restarted just like any other Windows service. The AMI also includes pre-cached images for Windows Server Core 2016 and Windows Server Nano Server 2016. Because the images are cached in the AMI, launching new Windows containers is significantly faster. When Docker images include a layer that's already cached on the instance, Docker re-uses that layer instead of pulling it from the Docker registry.

The ECS agent, along with an accompanying ECS PowerShell module used to install, configure, and run it, comes pre-installed on the AMI, which guarantees that a specific platform version is available on the container instance at launch. Because the software is included, you don't have to download it from the internet, which saves startup time.

The Windows-compatible ECS-optimized AMI also reports CPU and memory utilization and reservation metrics to Amazon CloudWatch. Using the CloudWatch integration with ECS, you can create alarms that trigger dynamic scaling events to automatically add or remove capacity to your EC2 instances and ECS tasks.

Getting started

To help you get started running Windows containers on ECS, I've forked the ECS reference architecture to build an ECS cluster composed of Windows instances instead of Linux instances. You can pull the latest version of the reference architecture for Windows.

The reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. Within the stack, the ecs-windows-cluster.yaml file contains the instructions for bootstrapping the Windows instances and configuring the ECS cluster. To configure the instances outside of AWS CloudFormation (for example, through the CLI or the console), you can add the following commands to your instance’s user data:

Import-Module ECSTools
Initialize-ECSAgent

Or

Import-Module ECSTools
Initialize-ECSAgent -Cluster MyCluster -EnableIAMTaskRole

If you don’t specify a cluster name when you initialize the agent, the instance is joined to the default cluster.

Adding -EnableIAMTaskRole when initializing the agent adds support for IAM roles for tasks. Previously, enabling this setting meant running a complex script and setting an environment variable before you could assign roles to your ECS tasks.
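
For example, if you supply these commands through EC2 user data at launch, you wrap them in the <powershell> tags that Windows instances expect. The cluster name here is illustrative:

<powershell>
Import-Module ECSTools
Initialize-ECSAgent -Cluster MyCluster -EnableIAMTaskRole
</powershell>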

When you enable IAM roles for tasks on Windows, it consumes port 80 on the host. If you have tasks that listen on port 80 on the host, I recommend configuring a service for them that uses load balancing. You can use port 80 on the load balancer, and the traffic can be routed to another host port on your container instances. For more information, see Service Load Balancing.

Create a cluster

To create a new ECS cluster, choose Launch stack, or pull the GitHub project to your local machine and run the following command:

aws cloudformation create-stack --template-body file://<path to master-windows.yaml> --stack-name <name>

Upload your container image

Now that you have a cluster running, step through how to build and push an image into a container repository. You use a repository hosted in Amazon Elastic Container Registry (Amazon ECR) for this, but you could also use Docker Hub. To build and push an image to a repository, install Docker on your Windows* workstation. You also create a repository and assign the necessary permissions to the account that pushes your image to Amazon ECR. For detailed instructions, see Pushing an Image.

* If you are building an image that is based on Windows layers, then you must use a Windows environment to build and push your image to the registry.
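
As a rough sketch, the build-and-push sequence from a Windows workstation usually looks something like the following. The region, account ID, and repository name are placeholders, and the exact login command depends on your AWS CLI version.

# Authenticate Docker to Amazon ECR, then build, tag, and push the image
Invoke-Expression -Command (aws ecr get-login --no-include-email --region us-west-2)
docker build -t windows-sample-app .
docker tag windows-sample-app:latest XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/windows-sample-app:latest
docker push XXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/windows-sample-app:latest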

Write your task definition

Now that your image is built and ready, the next step is to run your Windows containers using a task.

Start by creating a new task definition based on the windows-simple-iis image from Docker Hub.

  1. Open the ECS console.
  2. Choose Task Definitions, Create new task definition.
  3. Scroll to the bottom of the page and choose Configure via JSON.
  4. Copy and paste the following JSON into that field.
  5. Choose Save, Create.
{
   "family": "windows-simple-iis",
   "containerDefinitions": [
   {
     "name": "windows_sample_app",
     "image": "microsoft/iis",
     "cpu": 100,
     "entryPoint":["powershell", "-Command"],
     "command":["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html><head><title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center><h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p></body></html>'; C:\\ServiceMonitor.exe w3svc"],
     "portMappings": [
     {
       "protocol": "tcp",
       "containerPort": 80,
       "hostPort": 8080
     }
     ],
     "memory": 500,
     "essential": true
   }
   ]
}

You can now go back into the Task Definition page and see windows-simple-iis as an available task definition.

There are a few important aspects of the task definition file to note when working with Windows containers. First, the hostPort is configured as 8080, which is necessary because the ECS agent currently uses port 80 to enable IAM roles for tasks required for least-privilege security configurations.

There are also some fairly standard task parameters that are intentionally not included. For example, network mode is not available with Windows at the time of this release, so keep that setting blank to allow Docker to configure WinNAT, the only option available today.

Also, some parameters work differently with Windows than they do with Linux. The CPU limits that you define in the task definition are absolute, whereas on Linux they are weights. For information about other task parameters that are supported or possibly different with Windows, see the documentation.

Run your containers

At this point, you are ready to run containers. There are two options to run containers with ECS:

  1. Task
  2. Service

A task is typically a short-lived process that ECS creates; ECS doesn't actively monitor, restart, or scale it. A service is meant for longer-running containers and can be configured to use a load balancer, minimum/maximum capacity settings, and a number of other knobs and switches to help ensure that your code keeps running. In both cases, you are able to pick a placement strategy and a specific IAM role for your container.

  1. Select the task definition that you created above and choose Action, Run Task.
  2. Leave the settings on the next page to the default values.
  3. Select the ECS cluster created when you ran the CloudFormation template.
  4. Choose Run Task to start the process of scheduling a Docker container on your ECS cluster.

You can now go to the cluster and watch the status of your task. It may take 5–10 minutes for the task to go from PENDING to RUNNING, mostly because it takes time to download all of the layers necessary to run the microsoft/iis image. After the status is RUNNING, you should see the following results:

You may have noticed that the example task definition is named windows-simple-iis:2. This is because I created a second version of the task definition, which is one of the powerful capabilities of using ECS. You can make the task definitions part of your source code and then version them. You can also roll out new versions and practice blue/green deployment, switching to reduce downtime and improve the velocity of your deployments!

After the task has moved to RUNNING, you can see your website hosted in ECS. Find the public IP or DNS for your ECS host. Remember that you are hosting on port 8080. Make sure that the security group allows ingress from your client IP address to that port and that your VPC has an internet gateway associated with it. You should see a page that looks like the following:

This is a nice start to deploying a simple single-instance task, but what if you have a web API that needs to scale out and in based on usage? This is where you could define a service and collect CloudWatch data to add and remove instances of the task. You could also use CloudWatch alarms to add more ECS container instances and keep up with demand. Task scaling is built into the configuration of your service, as the following steps (and the CLI sketch after them) show.

  1. Select the task definition and choose Create Service.
  2. Associate a load balancer.
  3. Set up Auto Scaling.
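
If you prefer to script the first step, a minimal CLI equivalent might look like the following. The cluster and service names are placeholders, and the load balancer and scaling policies still need to be attached separately.

aws ecs create-service --cluster windows --service-name windows-simple-iis-svc --task-definition windows-simple-iis:2 --desired-count 2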

The following screenshot shows an example where you would add an additional task instance when the CPU Utilization CloudWatch metric is over 60% on average over three consecutive measurements. This may not be aggressive enough for your requirements; it’s meant to show you the option to scale tasks the same way you scale ECS instances with an Auto Scaling group. The difference is that these tasks start much faster because all of the base layers are already on the ECS host.

Do not confuse task dynamic scaling with ECS instance dynamic scaling. To add additional hosts, see Tutorial: Scaling Container Instances with CloudWatch Alarms.

Conclusion

This is just scratching the surface of the flexibility that you get from using containers and Amazon ECS. For more information, see the Amazon ECS Developer Guide and ECS Resources.

– Jeremy, Thomas, Samuel, Akram

Newly Updated Whitepaper: FERPA Compliance on AWS

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/newly-updated-whitepaper-ferpa-compliance-on-aws/

One of the main tenets of the Family Educational Rights and Privacy Act (FERPA) is the protection of student education records, including personally identifiable information (PII) and directory information. We recently updated our FERPA Compliance on AWS whitepaper to include AWS service-specific guidance for 24 AWS services. The whitepaper describes how these services can be used to help secure protected data. In conjunction with more detailed service-specific documentation, this updated information helps make it easier for you to plan, deploy, and operate secure environments to meet your compliance requirements in the AWS Cloud.

The updated whitepaper is especially useful for educational institutions and their vendors who need to understand:

  • AWS’s Shared Responsibility Model.
  • How AWS services can be used to help deploy educational and PII workloads securely in the AWS Cloud.
  • Key security disciplines in a security program to help you run a FERPA-compliant program (such as auditing, data destruction, and backup and disaster recovery).

In a related effort to help you secure PII, we also added to the whitepaper a mapping of NIST SP 800-122, which provides guidance for protecting PII, as well as a link to our NIST SP 800-53 Quick Start, a CloudFormation template that automatically configures AWS resources and deploys a multi-tier, Linux-based web application. To learn how this Quick Start works, see the Automate NIST Compliance in AWS GovCloud (US) with AWS Quick Start Tools video. The template helps you streamline and automate secure baselines in AWS—from initial design to operational security readiness—by incorporating the expertise of AWS security and compliance subject matter experts.

For more information about AWS Compliance and FERPA or to request support for your organization, contact your AWS account manager.

– Chris Gile, Senior Manager, AWS Security Assurance

Marvellous retrofitted home assistants

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/retrofitted-home-assistants/

As more and more digital home assistants are appearing on the consumer market, it’s not uncommon to see the towering Amazon Echo or sleek Google Home when visiting friends or family. But we, the maker community, are rarely happy unless our tech stands out from the rest. So without further ado, here’s a roundup of some fantastic retrofitted home assistant projects you can recreate and give pride of place in your kitchen, on your bookshelf, or wherever else you’d like to talk to your virtual, disembodied PA.

Google AIY Robot Conversion

Turned an 80s Tomy Mr Money into a little Google AIY / Raspberry Pi based assistant.

Matt ‘Circuitbeard’ Brailsford’s Tomy Mr Money Google AIY Assistant is just one of many home-brew home assistants makers have built since the release of APIs for Amazon Alexa and Google Home. Here are some more…

Teddy Ruxpin

Oh Teddy, how exciting and mysterious you were when I unwrapped you back in the mid-eighties. With your awkwardly moving lips and twitching eyelids, you were the cream of the crop of robotic toys! How was I to know that during my thirties, you would become augmented with home assistant software and suddenly instil within me a fear unlike any I’d felt before? (Save for my lifelong horror of ET…)

Alexa Ruxpin – Raspberry Pi & Alexa Powered Teddy Bear

There are tons of virtual assistants out on the market: Siri, Ok Google, Alexa, etc. I had this crazy idea…what if I made the virtual assistant real…kinda. I decided to take an old animatronic teddy bear and hack it so that it ran Amazon Alexa.

Several makers around the world have performed surgery on Teddy to install a Raspberry Pi within his stomach and integrate him with Amazon Alexa Voice or Google’s AIY Projects Voice kit. And because these makers are talented, they’ve also managed to hijack Teddy’s wiring to make his lips move in time with his responses to your commands. Freaky…

Speaking of freaky: check out Zack’s Furlexa — an Amazon Alexa Furby that will haunt your nightmares.

Give old tech new life

Devices that were the height of technology when you purchased them may now be languishing in your attic collecting dust. With new and improved versions of gadgets and gizmos being released almost constantly, it is likely that your household harbours a spare whosit or whatsit which you can dismantle and give a new Raspberry Pi heart and purpose.

Take, for example, Martin Mander’s Google Pi intercom. By gutting and thoroughly cleaning a vintage intercom, Martin fashioned a suitable housing the Google AIY Projects Voice kit to create a new home assistant for his house:

1986 Google Pi Intercom

This is a 1986 Radio Shack Intercom that I’ve converted into a Google Home style device using a Raspberry Pi and the Google AIY (Artificial Intelligence Yourself) kit that came free with the MagPi magazine (issue 57). It uses the Google Assistant to answer questions and perform actions, using IFTTT to integrate with smart home accessories and other web services.

Not only does this build look fantastic, it’s also a great conversation starter for any visitors who had a similar device during the eighties.

Also take a look at Martin’s 1970s Amazon Alexa phone for more nostalgic splendour.

Put it in a box

…and then I’ll put that box inside of another box, and then I’ll mail that box to myself, and when it arrives…


A GIF. A harmless, little GIF…and proof of the comms team’s obsession with The Emperor’s New Groove.

You don’t have to be fancy when it comes to housing your home assistant. And often, especially if you’re working with the smaller people in your household, the results of a simple homespun approach are just as delightful.

Here are Hannah and her dad Tom, explaining how they built a home assistant together and fit it inside an old cigar box:

Raspberry Pi 3 Amazon Echo – The Alexa Kids Build!

My 7 year old daughter and I decided to play around with the Raspberry Pi and build ourselves an Amazon Echo (Alexa). The video tells you about what we did and the links below will take you to all the sites we used to get this up and running.

Also see the Google AIY Projects Voice kit — the cardboard box-est of home assistant boxes.

Make your own home assistant

And now it’s your turn! I challenge you all (and also myself) to create a home assistant using the Raspberry Pi. Whether you decide to fit Amazon Alexa inside an old shoebox or Google Home inside your sister’s Barbie, I’d love to see what you create using the free home assistant software available online.

Check out these other home assistants for Raspberry Pi, and keep an eye on our blog to see what I manage to create as part of the challenge.

Ten virtual house points for everyone who shares their build with us online, either in the comments below or by tagging us on your social media account.

The post Marvellous retrofitted home assistants appeared first on Raspberry Pi.

Game night 1: Lisa, Lisa, MOOP

Post Syndicated from Eevee original https://eev.ee/blog/2017/12/05/game-night-1-lisa-lisa-moop/

For the last few weeks, glip (my partner) and I have spent a couple hours most nights playing indie games together. We started out intending to play a short list of games that had been recommended to glip, but this turns out to be a nice way to wind down, so we’ve been keeping it up and clicking on whatever looks interesting in the itch app.

Most of the games are small and made by one or two people, so they tend to be pretty tightly scoped and focus on a few particular kinds of details. I’ve found myself having brain thoughts about all that, so I thought I’d write some of them down.

I also know that some people (cough) tend not to play games they’ve never heard of, even if they want something new to play. If that’s you, feel free to play some of these, now that you’ve heard of them!

Also, I’m still figuring the format out here, so let me know if this is interesting or if you hope I never do it again!

First up:

  • Lisa: The Painful
  • Lisa: The Joyful
  • MOOP

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Lisa: The Painful

long · classic rpg · dec 2014 · lin/mac/win · $10 on itch or steam · website

(cw: basically everything??)

Lisa: The Painful is true to its name. I hesitate to describe it as fun, exactly, but I’m glad we played it.

Everything about the game is dark. It’s a (somewhat loose) sequel to another game called Lisa, whose titular character ultimately commits suicide; her body hanging from a noose is the title screen for this game.

Ah, but don’t worry, it gets worse. This game takes place in a post-apocalyptic wasteland, where every female human — women, children, babies — is dead. You play as Brad (Lisa’s brother), who has discovered the lone exception: a baby girl he names Buddy and raises like a daughter. Now, Buddy has been kidnapped, and you have to go rescue her, presumably from being raped.

Ah, but don’t worry, it gets worse.


I’ve had a hard time putting my thoughts in order here, because so much of what stuck with me is the way the game entangles the plot with the mechanics.

I love that kind of thing, but it’s so hard to do well. I can’t really explain why, but I feel like most attempts to do it fall flat — they have a glimmer of an idea, but they don’t integrate it well enough, or they don’t run nearly as far as they could have. I often get the same feeling as, say, a hyped-up big moral choice that turns out to be picking “yes” or “no” from a menu. The idea is there, but the execution is so flimsy that it leaves no impact on me at all.

An obvious recent success here is Undertale, where the entire story is about violence and whether you choose to engage or avoid it (and whether you can do that). If you choose to eschew violence, not only does the game become more difficult, it arguably becomes a different game entirely. Granted, the contrast is lost if you (like me) tried to play as a pacifist from the very beginning. I do feel that you could go further with the idea than Undertale, but Undertale itself doesn’t feel incomplete.

Christ, I’m not even talking about the right game any more.

Okay, so: this game is a “classic” RPG, by which I mean, it was made with RPG Maker. (It’s kinda funny that RPG Maker was designed to emulate a very popular battle style, and now the only games that use that style are… made with RPG Maker.) The main loop, on the surface, is standard RPG fare: you walk around various places, talk to people, solve puzzles, recruit party members, and get into turn-based fights.

Now, Brad is addicted to a drug called Joy. He will regularly go into withdrawal, which manifests in the game as a status effect that cuts his stats (even his max HP!) dramatically.

It is really, really, incredibly inconvenient. And therein lies the genius here. The game could have simply told me that Brad is an addict, and I don’t think I would’ve cared too much. An addiction to a fantasy drug in a wasteland doesn’t mean anything to me, especially about this tiny sprite man I just met, so I would’ve filed this away as a sterile fact and forgotten about it. By making his addiction affect me, I’m now invested in it. I wish Brad weren’t addicted, even if only because it’s annoying. I found a party member once who turned out to have the same addiction, and I felt dread just from seeing the icon for the status effect. I’ve been looped into the events of this story through the medium I use to interact with it: the game.

It’s a really good use of games as a medium. Even before I’m invested in the characters, I’m invested in what’s happening to them, because it impacts the game!

Incidentally, you can get Joy as an item, which will temporarily cure your withdrawal… but you mostly find it by looting the corpses of grotesque mutant flesh horrors you encounter. I don’t think the game would have the player abruptly mutate out of nowhere, but I wasn’t about to find out, either. We never took any.


Virtually every staple of the RPG genre has been played with in some way to tie it into the theme/setting. I love it, and I think it works so well precisely because it plays with expectations of how RPGs usually work.

Most obviously, the game is a sidescroller, not top-down. You can’t jump freely, but you can hop onto one-tile-high boxes and climb ropes. You can also drop off ledges… but your entire party will take fall damage, which gets rapidly more severe the further you fall.

This wouldn’t be too much of a problem, except that healing is hard to come by for most of the game. Several hub areas have campfires you can sleep next to to restore all your health and MP, but when you wake up, something will have happened to you. Maybe just a weird cutscene, or maybe one of your party members has decided to leave permanently.

Okay, so use healing items instead? Good luck; money is also hard to come by, and honestly so are shops, and many of the healing items are woefully underpowered.

Grind for money? Good luck there, too! While the game has plenty of battles, virtually every enemy is a unique overworld human who only appears once, and then is dead, because you killed him. Only a handful of places have unlimited random encounters, and grinding is not especially pleasant.

The “best” way to get a reliable heal is to savescum — save the game, sleep by the campfire, and reload if you don’t like what you wake up to.

In a similar vein, there’s a part of the game where you’re forced to play Russian Roulette. You choose a party member; he and an opponent will take turns shooting themselves in the head until someone finds a loaded chamber. If your party member loses, he is dead. And you have to keep playing until you win three times, so there’s no upper limit on how many people you might lose. I couldn’t find any way to influence who won, so I just had to savescum for a good half hour until I made it through with minimal losses.

It was maddening, but also a really good idea. Games don’t often incorporate the existence of saves into the gameplay, and when they do, they usually break the fourth wall and get all meta about it. Saves are never acknowledged in-universe here (aside from the existence of save points), but surely these parts of the game were designed knowing that the best way through them is by reloading. It’s rarely done, it can easily feel unfair, and it drove me up the wall — but it was certainly painful, as intended, and I kinda love that.

(Naturally, I’m told there’s a hard mode, where you can only use each save point once.)

The game also drives home the finality of death much better than most. It’s not hard to overlook the death of a redshirt, a character with a bit part who simply doesn’t appear any more. This game permanently kills your party members. Russian Roulette isn’t even the only way you can lose them! Multiple cutscenes force you to choose between losing a life or some other drastic consequence. (Even better, you can try to fight the person forcing this choice on you, and he will decimate you.) As the game progresses, you start to encounter enemies who can simply one-shot murder your party members.

It’s such a great angle. Just like with Brad’s withdrawal, you don’t want to avoid their deaths because it’d be emotional — there are dozens of party members you can recruit (though we only found a fraction of them), and most of them you only know a paragraph about — but because it would inconvenience you personally. Chances are, you have your strongest dudes in your party at any given time, so losing one of them sucks. And with few random encounters, you can’t just grind someone else up to an appropriate level; it feels like there’s a finite amount of XP in the game, and if someone high-level dies, you’ve lost all the XP that went into them.


The battles themselves are fairly straightforward. You can attack normally or use a special move that costs MP. SP? Some kind of points.

Two things in particular stand out. One I mentioned above: the vast majority of the encounters are one-time affairs against distinct named NPCs, who you then never see again, because they are dead, because you killed them.

The other is the somewhat unusual set of status effects. The staples like poison and sleep are here, but don’t show up all that often; more frequent are statuses like weird, drunk, stink, or cool. If you do take Joy (which also cures depression), you become joyed for a short time.

The game plays with these in a few neat ways, besides just Brad’s withdrawal. Some party members have a status like stink or cool permanently. Some battles are against people who don’t want to fight at all — and so they’ll spend most of the battle crying, purely for flavor impact. Seeing that for the first time hit me pretty hard; until then we’d only seen crying as a mechanical side effect of having sand kicked in one’s face.


The game does drag on a bit. I think we poured 10 in-game hours into it, which doesn’t count time spent reloading. It doesn’t help that you walk not super fast.

My biggest problem was with getting my bearings; I’m sure we spent a lot of that time wandering around accomplishing nothing. Most of the world is focused around one of a few hub areas, and once you’ve completed one hub, you can move onto the next one. That’s fine. Trouble is, you can go any of a dozen different directions from each hub, and most of those directions will lead you to very similar-looking hills built out of the same tiny handful of tiles. The connections between places are mostly cave entrances, which also largely look the same. Combine that with needing to backtrack for puzzle or progression reasons, and it’s incredibly difficult to keep track of where you’ve been, what you’ve done, and where you need to go next.

I don’t know that the game is wrong here; the aesthetic and world layout are fantastic at conveying a desolate wasteland. I wouldn’t even be surprised if the navigation were deliberately designed this way. (On the other hand, assuming every annoyance in a despair-ridden game is deliberate might be giving it too much credit.) But damn it’s still frustrating.

I felt a little lost in the battle system, too. Towards the end of the game, Brad in particular had over a dozen skills he could use, but I still couldn’t confidently tell you which were the strongest. New skills sometimes appear in the middle of the list or cost less than previous skills, and the game doesn’t outright tell you how much damage any of them do. I know this is the “classic RPG” style, and I don’t think it was hugely inconvenient, but it feels weird to barely know how my own skills work. I think this puts me off getting into new RPGs, just generally; there’s a whole new set of things I have to learn about, and games in this style often won’t just tell me anything, so there’s this whole separate meta-puzzle to figure out before I can play the actual game effectively.

Also, the sound could use a little bit of… mastering? Some music and sound effects are significantly louder and screechier than others. Painful, you could say.


The world is full of side characters with their own stuff going on, which is also something I love seeing in games; too often, the whole world feels like an obstacle course specifically designed for you.

Also, many of those characters are, well, not great people. Really, most of the game is kinda fucked up. Consider: the weird status effect is most commonly inflicted by the “Grope” skill. It makes you feel weird, you see. Oh, and the currency is porn magazines.

And then there are the gangs, the various spins on sex clubs, the forceful drug kingpins, and the overall violence that permeates everything (you stumble upon an alarming number of corpses). The game neither condones nor condemns any of this; it simply offers some ideas of how people might behave at the end of the world. It’s certainly the grittiest interpretation I’ve seen.

I don’t usually like post-apocalypses, because they try to have these very hopeful stories, but then at the end the world is still a blighted hellscape so what was the point of any of that? I like this game much better for being a blighted hellscape throughout. The story is worth following to see where it goes, not just because you expect everything wrapped up neatly at the end.

…I realize I’ve made this game sound monumentally depressing throughout, but it manages to pack in a lot of funny moments as well, from the subtle to the overt. In retrospect, it’s actually really good at balancing the mood so it doesn’t get too depressing. If nothing else, it’s hilarious to watch this gruff, solemn, battle-scarred, middle-aged man pedal around on a kid’s bike he found.


An obvious theme of the game is despair, but the more I think about it, the more I wonder if ambiguity is a theme as well. It certainly fits the confusing geography.

Even the premise is a little ambiguous. Is/was Olathe a city, a country, a whole planet? Did the apocalypse affect only Olathe, or the whole world? Does it matter in an RPG, where the only world that exists is the one mapped out within the game?

Towards the end of the game, you catch up with Buddy, but she rejects you, apparently resentful that you kept her hidden away for her entire life. Brad presses on anyway, insisting on protecting her.

At that point I wasn’t sure I was still on Brad’s side. But he’s not wrong, either. Is he? Maybe it depends on how old Buddy is — but the game never tells us. Her sprite is a bit smaller than the men’s, but it’s hard to gauge much from small exaggerated sprites, and she might just be shorter. In the beginning of the game, she was doing kid-like drawings, but we don’t know how much time passed after that. Everyone seems to take for granted that she’s capable of bearing children, and she talks like an adult. So is she old enough to be making this decision, or young enough for parent figure Brad to overrule her? What is the appropriate age of agency, anyway, when you’re the last girl/woman left more than a decade after the end of the world?

Can you repopulate a species with only one woman, anyway?


Well, that went on a bit longer than I intended. This game has a lot of small touches that stood out to me, and they all wove together very well.

Should you play it? I have absolutely no idea.

FINAL SCORE: 1 out of 6 chambers

Lisa: The Joyful

fairly short · classic rpg · aug 2015 · lin/mac/win · $5 on itch or steam

Surprise! There’s a third game to round out this trilogy.

Lisa: The Joyful is much shorter, maybe three hours long — enough to be played in a night rather than over the better part of a week.

This one picks up immediately after the end of Painful, with you now playing as Buddy. It takes a drastic turn early on: Buddy decides that, rather than hide from the world, she must conquer it. She sets out to murder all the big bosses and become queen.

The battle system has been inherited from the previous game, but battles are much more straightforward this time around. You can’t recruit any party members; for much of the game, it’s just you and a sword.

There is a catch! Of course.

The catch is that you do not have enough health to survive most boss battles without healing. With no party members, you cannot heal via skills. I don’t think you could buy healing items anywhere, either. You have a few when the game begins, but once you run out, that’s it.

Except… you also have… some Joy. Which restores you to full health and also makes you crit with every hit. And drops off of several enemies.

We didn’t even recognize Joy as a healing item at first, since we never used it in Painful; its description simply says that it makes you feel nothing, and we’d assumed the whole point of it was to stave off withdrawal, which Buddy doesn’t experience. Luckily, the game provided a hint in the form of an NPC who offers to switch on easy mode:

What’s that? Bad guys too tough? Not enough jerky? You don’t want to take Joy!? Say no more, you’ve come to the right place!

So the game is aware that it’s unfairly difficult, and it’s deliberately forcing you to take Joy, and it is in fact entirely constructed around this concept. I guess the title is a pretty good hint, too.

I don’t feel quite as strongly about Joyful as I do about Painful. (Admittedly, I was really tired and starting to doze off towards the end of Joyful.) Once you get that the gimmick is to force you to use Joy, the game basically reduces to a moderate-difficulty boss rush. Other than that, the only thing that stood out to me mechanically was that Buddy learns a skill where she lifts her shirt to inflict flustered as a status effect — kind of a lingering echo of how outrageous the previous game could be.

You do get a healthy serving of plot, which is nice and ties a few things together. I wouldn’t say it exactly wraps up the story, but it doesn’t feel like it’s missing anything either; it’s exactly as murky as you’d expect.

I think it’s worth playing Joyful if you’ve played Painful. It just didn’t have the same impact on me. It probably doesn’t help that I don’t like Buddy as a person. She seems cold, violent, and cruel. Appropriate for the world and a product of her environment, I suppose.

FINAL SCORE: 300 Mags

MOOP

fairly short · inventory game · nov 2017 · win · free on itch

Finally, as something of a palate cleanser, we have MOOP: a delightful and charming little inventory game.

I don’t think “inventory game” is a real genre, but I mean the kind of game where you go around collecting items and using them in the right place. Puzzle-driven, but with “puzzles” that can largely be solved by simply trying everything everywhere. I’d put a lot of point and click adventures in the same category, despite having a radically different interface. Is that fair? Yes, because it’s my blog.

MOOP was almost certainly also made in RPG Maker, but it breaks the mold in a very different way by not being an RPG. There are no battles whatsoever, only interactions on the overworld; you progress solely via dialogue and puzzle-solving. Examining something gives you a short menu of verbs — use, talk, get — reminiscent of interactive fiction, or perhaps the graphical “adventure” games that took inspiration from interactive fiction. (God, “adventure game” is the worst phrase. Every game is an adventure! It doesn’t mean anything!)

Everything about the game is extremely chill. I love the monochrome aesthetic combined with a large screen resolution; it feels like I’m peeking into an alternate universe where the Game Boy got bigger but never gained color. I played halfway through the game before realizing that the protagonist (Moop) doesn’t have a walk animation; they simply slide around. Somehow, it works.

The puzzles are a little clever, yet low-pressure; the world is small enough that you can examine everything again if you get stuck, and there’s no way to lose or be set back. The music is lovely, too. It just feels good to wander around in a world that manages to make sepia look very pretty.

The story manages to pack a lot into a very short time. It’s… gosh, I don’t know. It has a very distinct texture to it that I’m not sure I’ve seen before. The plot weaves through several major events that each have very different moods, and it moves very quickly — but it’s well-written and doesn’t feel rushed or disjoint. It’s lighthearted, but takes itself seriously enough for me to get invested. It’s fucking witchcraft.

I think there was even a non-binary character! Just kinda nonchalantly in there. Awesome.

What a happy, charming game. Play if you would like to be happy and charmed.

FINAL SCORE: 1 waxing moon

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today's data management challenges often cannot be solved with traditional databases alone.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system's goal was to provide information in the relevant context—such as strategic marketing, supply chain, and product planning—to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small dev team had to invest significant time in frequent configuration changes and in verifying data integrity to make sure that everything was operating properly. Maintaining this system proved to be a daunting task, and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift (a sketch of this load step follows the list).
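
The post doesn't show the load code itself, but one common way to implement step 8 is to issue an Amazon Redshift COPY command over JDBC from the Lambda function. The following Java sketch is illustrative only; the table, bucket, and IAM role names are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class RedshiftLoader {
    // Load the CSV staged in S3 into a Redshift table with a COPY command.
    // All identifiers below are hypothetical placeholders.
    public static void copyFromS3(String jdbcUrl, String user, String password) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement stmt = conn.createStatement()) {
            stmt.execute(
                "COPY campaign_data " +
                "FROM 's3://my-etl-bucket/intermediate/MCS2017.csv' " +
                "IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftCopyRole' " +
                "FORMAT AS CSV");
        }
    }
}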

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
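
The original post showed the expression in a screenshot; a CloudWatch Events cron expression matching the schedule described below would look like this:

cron(0 3,14 * * ? *)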

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date, and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
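As a rough sketch, an item of that shape (the attribute names here are illustrative assumptions, not taken from the production table) might look like the following:

{
  "Date": "2017-12-05",
  "Key1": "MCS2017",
  "Key2": "MCF2017",
  "Key3": "MCH2017"
}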

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<Map<String,Object>, Map<String,String>> {
    Map<String,String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the existing records out of the Amazon Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current campaign Keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        // 3. Return the key values plus the key count ("keyT"), which drives ChoiceState
        keys.put("keyT", String.valueOf(keyList.size()));
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel, with the keys $.key1, $.key2, and $.key3 retrieved from DynamoDB. Similarly, if $.keyT equals 2, the second Lambda function is executed twice in parallel. The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data, then loads it into an Amazon Redshift table. The following is the code snippet for the second Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get the basic configuration used by the data services below
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import the query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}
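Loading CSV data from S3 into Amazon Redshift is typically done with the COPY command, which is presumably what importDataFromS3 wraps; a minimal sketch (the table, bucket, and IAM role names here are illustrative assumptions) looks like this:

COPY campaign_results
FROM 's3://example-bucket/intermediate/MCS2017.csv'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftCopyRole'
CSV;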

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5-GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. We processed each key sequentially, with each key adding about five minutes to the overall processing time, so processing three keys took around fifteen minutes in total. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our datasets, resulting in significant cost savings, a reduced risk of data errors due to system downtime, and more time for us to focus on new product development rather than support-related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

The Pi Towers Secret Santa Babbage

Post Syndicated from Mark Calleja original https://www.raspberrypi.org/blog/secret-santa-babbage/

Tired of pulling names out of a hat for office Secret Santa? Upgrade your festive tradition with a Raspberry Pi, thermal printer, and everybody’s favourite microcomputer mascot, Babbage Bear.

Raspberry Pi Babbage Bear Secret Santa

The name’s Santa. Secret Santa.

It’s that time of year again, when the cosiness gets turned up to 11 and everyone starts thinking about jolly fat men, reindeer, toys, and benevolent home invasion. At Raspberry Pi, we’re running a Secret Santa pool: everyone buys a gift for someone else in the office. Obviously, the person you buy for has to be picked in secret and at random, or the whole thing wouldn’t work. With that in mind, I created Secret Santa Babbage to do the somewhat mundane task of choosing gift recipients. This could’ve just been done with some names in a hat, but we’re Raspberry Pi! If we don’t make a Python-based Babbage robot wearing a jaunty hat and programmed to spread Christmas cheer, who will?

Secret Santa Babbage

Ho ho ho!

Mecha-Babbage Xmas shenanigans

The script the robot runs is pretty basic: a list of names entered as comma-separated strings is shuffled at the press of a GPIO button, then a name is popped off the end and stored as a variable. The name is matched to a photo of the person stored on the Raspberry Pi, and a thermal printer pinched from Alex’s super awesome PastyCam (blog post forthcoming, maybe) prints out the picture and name of the person you will need to shower with gifts at the Christmas party. (Well, OK — with one gift. No more than five quid’s worth. Nothing untoward.) There’s also a redo function, just in case you pick yourself: press another button and the last picked name — still stored as a variable — is appended to the list again, which is shuffled once more, and a new name is popped off the end.
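For flavour, here's a minimal sketch of that pick-and-redo logic in Python (the names, GPIO pin numbers, and file paths below are illustrative assumptions; the real script, including the thermal printer handling, lives in the GitHub repo):

import random
import subprocess
from signal import pause

import pygame
from gpiozero import LED, Button

# Illustrative values only: swap in your own office roster and paths
NAMES = ["Alice", "Bob", "Carol", "Dave"]
PHOTO_DIR = "/home/pi/photos"           # assumes one <name>.png per person
LAUGH_WAV = "/home/pi/santa_laugh.wav"

pick_button = Button(17)   # paw button: pick a recipient
redo_button = Button(27)   # paw button: redo if you drew yourself
status_led = LED(22)       # green: script is running
busy_led = LED(23)         # red: a request is being processed

pygame.mixer.init()
laugh = pygame.mixer.Sound(LAUGH_WAV)

pool = NAMES[:]            # working copy of the name list
last_pick = None

def pick_name():
    global last_pick
    busy_led.on()
    laugh.play()                 # Santa laughs while Babbage chooses
    random.shuffle(pool)         # shuffle the list...
    last_pick = pool.pop()       # ...and pop a name off the end
    # Send the recipient's photo to the thermal printer via CUPS
    subprocess.run(["lp", "{}/{}.png".format(PHOTO_DIR, last_pick)])
    busy_led.off()

def redo():
    # Put the last pick back in the pool, reshuffle, and draw again
    if last_pick is not None:
        pool.append(last_pick)
    pick_name()

pick_button.when_pressed = pick_name
redo_button.when_pressed = redo
status_led.on()
pause()                    # keep the script alive, waiting for button presses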

Secret Santa Babbage prototyping

Prototyping!

As the build was a bit of a rush job undertaken at the request of our ‘Director of Vibe’ Emily, there are a few things I’d like to improve about this functionality that I didn’t get around to — more on that later. To add some extra holiday spirit to the project at the last minute, I used Pygame to play a WAV file of Santa’s jolly laugh while Babbage chooses a name for you. The file is included in the GitHub repo along with everything else, because ‘tis the season, etc., etc.

Secret Santa Babbage prototyping

Editor’s note: Considering these desk adornments, Mark’s Secret Santa gift-giver has a lot to go on.

Writing the code for Xmas Mecha-Babbage was fairly straightforward, though it uses some tricky bits for managing the thermal printer. You’ll need to install the drivers to make it go, as well as the CUPS package for managing the print hosting. You can find instructions for these things here, thanks to the wonderful Adafruit crew. Also, for reasons I couldn’t fathom, this will all only work on a Pi 2 and not a Pi 3, as there are some compatibility issues with the thermal printer otherwise. (I also tested the script on a Pi Zero W…no dice.)

Building a Christmassy throne

The hardest (well, fiddliest) parts of making the whole build were constructing the throne and wiring the bear. Using MakerCase, Inkscape, a bit of ingenuity, and a laser cutter, I was able to rig up a Christmassy plywood throne which has a hole through the seat so I could run the wires down from Babbage and to the Pi inside. I finished the throne by rubbing a couple of fingers of beeswax into it; as well as making the wood shine just a little bit and protecting it against getting wet, this had the added bonus of making it smell awesome.

Secret Santa Babbage inside

Next year’s iteration will be mulled wine–scented.

I next soldered two LEDs to some lengths of wire, and then ran the wires through holes at the top of the throne and down the back along a small channel I had carved with a narrow chisel to connect them to the Pi’s GPIO pins. The green LED will remain on as long as Babbage is running his program, and the red one will light up while he is processing your request. Once the red LED goes off again, the next person can have a go. I also laser-cut a final piece of wood to overlay the back of Babbage’s Xmas throne and cover the wiring a bit.

Creating a Xmas cyborg bear

Taking two 6 mm tactile buttons, I clipped the spiky metal legs off one side of each (the buttons were going into a stuffed Christmas toy, after all) and soldered a length of wire to each of the remaining legs. Next, I made a small incision into Babbage with my trusty Swiss army knife (in a place that actually made me cringe a little) and fed the buttons up into his paws. At some point in this process I was standing in the office wrestling with the bear and muttering to myself, which elicited some very strange looks from my colleagues.

Secret Santa Babbage throne

Poor Babbage…

One thing to note here is to make sure the wires remain attached at the solder points while you push them up into Babbage’s paws. The first time I tried it, I snapped one of my connections and had to start again. It helped to remove some stuffing to make a tunnel for the wires, then replace it afterwards. You can also use your fingertip to support the joints as you poke the wire in. Finally, a couple of squirts of hot glue to keep Babbage’s furry cheeks firmly on the seat, and done!

Secret Santa Babbage

Next year: Game of Thrones–inspired candy cane throne

The Secret Santa Babbage masterpiece

The whole build process was the perfect holiday mix of cheerful and macabre, and while getting the thermal printer to work was a little time-consuming, the finished product definitely raised some smiles around the office and added a bit of interesting digital flavour to a staid office tradition. It also helped people who are new to the office, or who work in other branches of the Foundation, to know whom they’ll be buying a gift for.

Secret Santa Babbage

Ready to dispense Christmas cheer!

There are a few ways in which I’ll polish this project before next year, such as having the script write the names to external text files to create a record that will persist in case of a reboot, and maybe having Secret Santa Babbage play you a random Christmas carol when you squeeze his paw instead of just laughing merrily every time. (I also thought about adding electric shocks for those people who are on the naughty list, but HR said no. Bah, humbug!)

Make your own

The code and laser cut plans for the whole build are available here. If you plan to make your own, let us know which stuffed toy you will be turning into a Secret Santa cyborg! And if you’ve been working on any other Christmas-themed Raspberry Pi projects, we’d like to see those too, so tag us on social media to share the festive maker cheer.

The post The Pi Towers Secret Santa Babbage appeared first on Raspberry Pi.