Tag Archives: Uncategorized

Hacking an 8mm Camera

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/hacking-8mm-camera/

What if you used a Raspberry Pi and a Camera Module to breathe new life into an old 8mm film camera? That was the question on Claire Wright’s mind when she and her father set to work on modernizing an old motion picture camera that they found at a garage sale five years earlier. Inspired by YouTubers, technology, and the blend of analog and digital, Claire and her father harvested one of the lenses and the classic pistol grip from the original Keystone 8mm. Adding a Raspberry Pi, Camera Module, portable screen, and battery helped them to create the Pi 8 camera. Claire tells the story best:
Hacking an 8mm with a Raspberry Pi
See[W]right films hacks an old 8mm camera using a Raspberry Pi computer. Is it analog? Digital? Something else? Follow me on Instagram @leftnwright https://www.instagram.com/leftnwright/ Inspired/Influenced by: Laura Kampf https://www.youtube.com/channel/UCRix1GJvSBNDpEFY561eSzw LadyAda https://www.youtube.com/user/adafruit Casey Neistat Waelder https://www.youtube.com/user/DavidWaelder

The resulting footage leaves no doubt that older lenses have a big impact on 8mm style. In particular, check out this footage from the Pi 8 Camera, taken in Bastrop, TX:
“#8mm or something else? 😉 #film #shortfilm #tx #bastroptx #flatlanders #maidenvoyage #nofilter #noreally”

Bastrop, incidentally, is where some of the Raspberry Pi team had some amazing BBQ in 2015:
Liz took this picture of Rachel and most of the meat in Texas in Bastrop last year.
Back to the cameras. I think Claire’s onto something because she’s not the only one exploring the renaissance of retro motion picture capture. Kodak announced that they’re getting back into 8mm film with a new camera, which they unveiled at CES.
If you’re feeling inspired by Claire and want to build your own Raspberry Pi-based camera, then Instructables has you covered with many Raspberry Pi camera projects for you to try.
The post Hacking an 8mm Camera appeared first on Raspberry Pi.

Color-Code Your AWS OpsWorks Stacks for Better Instance and Resource Tracking

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/color-code-your-aws-opsworks-stacks-for-better-instance-and-resource-tracking/

AWS OpsWorks provides options for organizing your Amazon EC2 instances and other AWS resources. There are stacks to group related resources and isolate them from each other; layers to group instances with similar roles; and apps to organize software deployments. Each has a name to help you keep track of them.

Because it can be difficult to see if the instance you’re working on belongs to the right stack (for example, an integration or production stack) just by looking at the host name, OpsWorks provides a simple, user-defined attribute that you can use to color-code your stacks. For example, some customers use red for their production stacks. Others apply different colors to correspond to the regions in which the stacks are operating.

A stack color is simply a visual indicator to assist you while you’re working in the console. When you sign in to an instance directly (for auditing, for example, or to check log files or restart a process), it can be difficult to tell immediately whether you have signed in to an instance on the wrong stack.

When you add a small, custom recipe to the setup lifecycle event, however, you can reuse the stack color for the shell prompt. Most modern terminal emulators support a 256-color mode. Changing the color of the prompt is simple.

The following code can be used to change the color of the shell prompt:

colors/recipes/default.rb

stack = search("aws_opsworks_stack").first
match = stack["color"].match(/rgb\((\d+), (\d+), (\d+)\)/)
r, g, b = match[1..3].map { |i| (5 * i.to_f / 255).round }

template "/etc/profile.d/opsworks-color-prompt.sh" do
  source "opsworks-color-prompt.sh.erb"
  variables(:color => 16 + b + g * 6 + 36 * r)
end

colors/templates/default/opsworks-color-prompt.sh.erb

if [ -n "$PS1" ]; then
  PS1="33[38;5;<%= @color %>m[[email protected] W]\$33[0m "
fi

You can use this with Chef 12, this custom cookbook, the latest Amazon Linux AMI, and Bash. You may have to adapt the cookbook for other operating systems and shells.

The stack color is not the only information you can include in the prompt. You can also add the stack and layer names of your instances to the prompt:
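Here is a minimal sketch of how that could look, extending the recipe above (this is not part of the original cookbook; it assumes the standard Chef 12 OpsWorks data bags are available via search, and it uses a hypothetical second template named opsworks-prompt-info.sh.erb):

# Hypothetical extension: look up the current instance and its layers,
# then expose their names to a second profile.d template.
instance = search("aws_opsworks_instance", "self:true").first
layers   = search("aws_opsworks_layer").select do |layer|
  instance["layer_ids"].include?(layer["layer_id"])
end

template "/etc/profile.d/opsworks-prompt-info.sh" do
  source "opsworks-prompt-info.sh.erb"   # hypothetical template name
  variables(:stack_name  => stack["name"],
            :layer_names => layers.map { |l| l["name"] }.join(","))
end

The corresponding template could simply prepend the names to the existing prompt:

if [ -n "$PS1" ]; then
  PS1="[<%= @stack_name %>/<%= @layer_names %>] $PS1"
fi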

We invite you to try color-coding your stacks. If you have questions or other feedback, let us know in the comments.

The little computer that could

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/the-little-computer-that-could/

Liz: Today we’ve got a guest post from the terrifyingly hirsute Pete Stevens. Pete’s from Mythic Beasts, our web hosts; and he’s the reason this website stands up to the absurd amounts of traffic you throw at it. (Yesterday we saw about a quarter of a million sessions – that goes up WAY above a million on some days.) Have at it, Pete!
After our successful test of using the Raspberry Pi 3 for hosting 5% of the traffic on Pi 3 launch day we celebrated by going to the pub. The conversation went something like this:
Eben: Is it possible to host the whole site on Pi 3s?
Pete: How would you do it?
Philip: Wouldn’t it be awesome to do it?
Liz: Dare you to try it!
The first part of the answer is quite easy: not on one Pi 3, it’s not fast enough. A better question is how many Pi 3s are required to host a typical day’s traffic. Extrapolating from some graphs and making up some numbers with the handy pub beermat service, we estimated between 4 and 6 should handle all the PHP code and file delivery. Unfortunately, the database server still looks out of scope: not enough RAM and not enough I/O.
Of course, only an idiot would replace thousands of pounds of highly specified hardware with a handful of £30 computers and expect it to still work.
A few weeks later we have this:
A mini rack of Raspberry Pi 3s
We’ve designed a custom plastic enclosure for holding Pi 3s securely, added Power over Ethernet (PoE) HATs so we can power them directly from the switch, and used a cheap 100Mbps PoE switch. We’ve put all the storage over the network, with a small storage server in the rack alongside the Pi 3 rack. We’ve used virtual LANs to give each Pi 3 two effective network cards: one just containing it and the storage server, the other with an IPv6 address that talks to the public internet and the load balancers. ifconfig looks like this:
storage : eth0 : 10.46.189.X
public : eth0.131 : 2a00:1098:0:84:1000:1:0:X
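For reference, a static network configuration along these lines on Raspbian/Debian might look roughly like the sketch below (addresses are elided as above; the tagged interface assumes 802.1q VLAN support from the vlan package, and the netmask values are assumptions):

# /etc/network/interfaces (sketch)
auto eth0
iface eth0 inet static
    address 10.46.189.X
    netmask 255.255.255.0

auto eth0.131
iface eth0.131 inet6 static
    address 2a00:1098:0:84:1000:1:0:X
    netmask 64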

As with all Pi servers, there is no public IPv4 connectivity to each server. To reach legacy IPv4-only services such as Twitter and Akismet, they go through our NAT64 DNS proxy service. Inbound traffic lands on the front-end load balancers and is shared between the Raspberry Pi 3s over IPv6.
iftop, our network monitoring software, showing traffic shuttling between the fileserver and the load balancers
If you do:
HEAD -E https://www.raspberrypi.org/

you’ll see a header which gives you the final octet of the address of the Pi that served you:
X-Served-By: Raspberry Pi 1e

The first person to tweet all the hex identifiers to Mythic Beasts wins absolutely nothing other than the respect of the Raspberry Pi community.
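If you fancy having a go, a quick shell loop (assuming curl is installed) will collect the identifiers for you, since each request may land on a different Pi behind the load balancers:

for i in $(seq 1 50); do
  curl -sI https://www.raspberrypi.org/ | grep -i '^X-Served-By'
done | sort -u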
Is this a commercial hosted Pi service?
It’s not yet a commercially viable service. Scaled up we can fit 96 Pi3s in 4U of rack space including the switches, which is an impressive density. However, the Pis aren’t individually replaceable once in service. That means if a customer botches the SD card the Pi is dead until we can arrange downtime of all 96 Pis in the unit. Kernel upgrades involve a change on the SD card which carries a risk of bricking the Pi if the user gets it wrong. Not having access to the SD card other than via booting the Pi from it means that an enterprising user could compromise the kernel on the SD card and root-kit the machine, before cancelling the service and letting us sell it to another user.
But it’s close. Add in netboot with PXE and most of the above concerns go away, as we can remotely provision, remotely re-provision and remotely recover a broken Pi.
The Pi Rack under construction and testing
The Pi rack operational and waiting for your HTTP requests
The post The little computer that could appeared first on Raspberry Pi.

Power up your life with issue #44 of The MagPi

Post Syndicated from Russell Barnes original https://www.raspberrypi.org/blog/the-magpi-44/

Another month – so that means another issue of the official Raspberry Pi magazine! We’ve got a whole host of treats in store for you in our April 2016 edition including your chance to win one of three U:Create Astro Pi kits worth £100/$145.
Click the pic to be whisked into a world of Raspberry Pi ideas and inspiration
The theme for this issue (and wonderfully realised by Raspberry Pi’s resident illustrator-extraordinaire Sam Alder) is ways to improve and automate your life with Raspberry Pi. We’ve put together five fun projects to help you power up your life including an automatic pet feeder, a magic mirror and a temperature-sensing kettle so your tea (Earl Grey) is always served hot.
Other highlights from issue 44:

007 gadgets
Pi-powered gadgets that are licensed to thrill
Bluetooth audio guide
Turn your Raspberry Pi 3 into a music streamer
What is pressure?
Find out by doing science with the Sense HAT
Retro vision with Pi Zero
Use any old TV with your brand new Pi Zero in easy steps
And much, much more!


Free Creative Commons download
As always, you can download your copy of The MagPi completely free. Grab it straight from the front page of The MagPi’s website.
Don’t forget that like sales of the Raspberry Pi itself, all proceeds from the print and digital editions of the magazine go to help the Foundation achieve its charitable goals. Buy the magazine and help democratise computing!
Buy in-store
If you want something more tangible to play with, you’ll be glad to hear you can get the print edition in more stores than ever:
WHSmith
Tesco
Sainsbury’s
Asda
And all good newsagents
Order online
Rather shop online? You can grab every available issue from The Pi Hut and have it delivered practically anywhere in the world.
Subscribe today!
Want to have every issue delivered free to your door the moment it’s available? Subscribe today and save up to 25% on the cover price.
I hope you enjoy the issue – see you next month!
 
The post Power up your life with issue #44 of The MagPi appeared first on Raspberry Pi.

LCARS touchscreen interface for your Raspberry Pi

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/lcars-touchscreen-interface-raspberry-pi/

I was invited to a dinner at Queens’ College in Cambridge a few weeks ago. I got talking to another attendee, and said enthusiastically:
“Do you know, I don’t think I’ve been here since I was an undergraduate. Back then I was here every week.”
“Supervisions?”
“No. This is where the Star Trek society met.”
My mother despaired of me: a 21-year-old woman who had a giant crush on a yellow android, went around in public with a communicator keyring that went “burbleburble” and wore a Bajoran earring. (Everything turned out OK in the end.)
So this project…let’s say it really appealed to me.
REAL nerds know it stands for Library Computer Access/Retrieval System.
This is the first finished, publicly available LCARS interface we’ve seen for the Pi (and it works with a touchscreen as well); Toby Kurien has made this adaptable for any project you’re running on your Raspberry Pi, so you can substitute your own retro-future display for whatever dull desktop you’ve been using up until now. Everything you need is on Toby’s GitHub. Toby’s using one of our official displays here, and the finished product looks (and sounds) great.
Raspberry Pi Star Trek LCARS interface using PyGame
Utilising the Raspberry Pi official touch screen to create a Star Trek style interface for home automation or other projects. The interface is built using Python and the PyGame library Code available at: https://github.com/tobykurien/rpi_lcars

While Toby’s using this interface to monitor and control different parts of his automated house, he’s made it easy for you to swap in your own project. Go and take a look at the code, and report back if you end up using it!
This is not a Rob Z post, but I am going to pretend it is.
 
The post LCARS touchscreen interface for your Raspberry Pi appeared first on Raspberry Pi.

Amazon Echo – the homebrew version

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/amazon-echo-homebrew-version/

Amazon’s Echo isn’t available here in the UK yet. This is very aggravating for those of us who pride ourselves on early adoption. For the uninitiated, Echo’s an all-in-one speaker and voice-command device that works with Amazon’s Alexa voice service. Using an Echo, Alexa can answer verbal questions and integrate with a bunch of the connected objects you might have in your house, like lights, music, thermostats and all that good smart-home stuff. It can also provide you with weather forecasts, interact with your calendar and plumb the cold, cold depths of Wikipedia.
Amazon’s official Echo device
The Raspberry Pi version (our tip – hide the Pi in a box!)
Happily for those of us outside the US wanting to sink our teeth into the bold new world of virtual assistants, Amazon’s made a guide to setting up Alexa on your Raspberry Pi which will work wherever you are. You’ll need a Pi 2 or a Pi 3. The Raspberry Pi version differs in one important way from the Echo: the Echo is always on, and always listening for a vocal cue (usually “Alexa”, although users can change that – useful if your name is Alexa), which raises privacy concerns for some. The Raspberry Pi version is not an always-on listening device; instead, you have to press a button on your system to activate it. More work for your index finger, more privacy for your living-room conversations.
Want to build your own? Here’s a video guide to setting the beast up from Novaspirit Tech. You can also find everything you need on Amazon’s GitHub.
Installing Alexa Voice Service to Raspberry Pi
This is a quick tutorial on install Alexa Voice Service to your Raspberry Pi creating your very own Amazon ECHO!! Thanks for the view! **You can also download the Amazon Alexa App for your phone to configure / interface with your raspberry echo!. it will be listed as a new device!!

Let us know if you end up building your own Echo; it’s much less expensive than the official version, and 100% more available outside the USA as well.
 
 
 
The post Amazon Echo – the homebrew version appeared first on Raspberry Pi.

SX Create 2016

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/sx-create-2016/

The last few weeks have turned out to be a big (and busy) time for us at Raspberry Pi! We celebrated our fourth birthday and the release of Raspberry Pi 3 in Cambridge, and wrapped up the month in Austin, Texas during SXSW Interactive, the event that draws techie geeks and enthusiasts.
Raspberry Pi on Twitter
Day two of #SXCreate begins NOW! @pcsforme has a fresh batch of Pi 3s for sale. And come try out the Sense HAT! pic.twitter.com/ac2sRHuTBf

So what did we do all day at SXSW (other than find some of the best BBQ this country has to offer)? Some of us on the Raspberry Pi team set up a hands-on Sense HAT activity for participants at SX Create, a family-friendly event with some of our favorite maker companies as well as local projects from some of Austin’s up-and-coming entrepreneurs. Our activity introduced basic point-and-click control of the HAT’s on-board LEDs and included a programming challenge to get data from its sensors and display it as text on the LED matrix.
Even the Raspberry Pi team couldn’t resist sitting down to have a go.
Ethan, the founder of PCs for Me, joined us in the booth for the weekend. Ethan creates Raspberry Pi kits that include all the components you need to jump-start your own projects at home; some are based on our own educational resources. He helped get Raspberry Pi 3s into the hands of eager buyers. His stock of Pi 3s didn’t last long, once the word got out to the tech-savvy crowd of SXSW.
Lucie deLaBruere on Twitter
I just got my hands on my first @Raspberry_Pi 3 from young entrepreneurs at #sxsw http://www.pcsforme.com pic.twitter.com/Qc46iiwMp7

PCs for Me on Twitter
And the last Pi 3 (for real this time) goes to Anuhar! pic.twitter.com/IasUPFw3T7

Fun fact: We met Ethan a year ago at SXSW, and we were so thrilled he decided to be a part of the Raspberry Pi team for the weekend.
In an event that’s as big as SXSW, we still managed to be among some of our friends – long-standing forum contributors, our newest Raspberry Pi Certified Educators, and librarians we’ve been in touch with – in addition to kids jumping in on the HAT activity and parents looking at projects they could do at home. We also met some new friends: Pi users who were just coming into the fold, and longtime community members who shared some brilliant projects created with three different models of Raspberry Pi!
Although we were excited to show off Raspberry Pi 3, we were especially looking forward to meeting members of our community. It’s exactly what we love about events like this. If you get the chance, join us at our upcoming events – you can find us at the following shows across the US:

USA Science and Engineering Festival, April 15-17 in Washington, DC
Maker Faire Bay Area, May 20-22 in San Mateo, California
American Library Association Annual Conference & Exhibition, June 23-28 in Orlando, Florida
ISTE 2016, June 26-29 in Denver, Colorado
World Maker Faire New York, Oct 1-2 in New York City, NY

We would love to meet you!
The post SX Create 2016 appeared first on Raspberry Pi.

ElastiCache for Redis Update – Upgrade Engines and Scale Up

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/elasticache-for-redis-update-upgrade-engines-and-scale-up/

Amazon ElastiCache makes it easy for you to deploy, operate, and scale an in-memory database in the cloud. As you may already know, ElastiCache supports the Memcached and Redis engines.
More Power for Redis Today we are launching an ElastiCache update that provides you with additional control over your Redis-based ElastiCache clusters. You can now scale up to a larger node type while ElastiCache preserves (on a best-effort basis) your stored information. While ElastiCache for Redis has always allowed you to upgrade the engine version, you can now do so while preserving the stored information. You can apply both changes immediately or during the cluster’s maintenance window.
Behind the scenes, ElastiCache for Redis uses several different strategies to scale up and to upgrade engines. Scaling is based on Redis replication. Engine upgrades use a foreground snapshot (SAVE) when Multi-AZ is turned off, and replication followed by a DNS switch when it is on.
To scale up to a larger node type, simply select the Cache Cluster in the AWS Management Console and click on Modify. Then select the new Node Type, decide if you want to apply the change immediately, and click on Modify to proceed:

Similarly, to upgrade to a newer version of the Redis engine, select the new version and click on Modify:

I would like to take this opportunity to encourage you to upgrade to the engine that is compatible with version 2.8.24 of Redis. This version contains a number of fixes and enhancements to Redis’ stability and robustness (some contributed by the ElastiCache team; see the What’s New for more information).
You can, as always, accomplish the same operations by way of the ElastiCache API.  Here are some quick examples in PHP (via the AWS SDK for PHP):
// Scale to larger node size
$res = $client->modifyCacheCluster(['CacheNodeType' => 'cache.r3.4xlarge',
                                    'ApplyImmediately' => true]);

// Upgrade engine version
$res = $client->modifyCacheCluster(['EngineVersion' => '2.8.24',
                                    'ApplyImmediately' => true]);

// Do both at once
$res = $client->modifyCacheCluster(['CacheNodeType' => 'cache.r3.4xlarge',
                                    'EngineVersion' => '2.8.24',
                                    'ApplyImmediately' => true]);

In all three of these examples, the ApplyImmediately parameter indicates that the changes will be made right away rather than during the maintenance window.
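If you prefer the AWS CLI to the SDK, the same modification can be made with a single command; here is a sketch (the cluster ID is a placeholder):

aws elasticache modify-cache-cluster \
    --cache-cluster-id my-redis-cluster \
    --cache-node-type cache.r3.4xlarge \
    --engine-version 2.8.24 \
    --apply-immediately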
To learn more, read Scaling Your Redis Cluster.
Available Now
This feature is available now and you can start using it today!
— Jeff;

Raspberry Pi Oracle Weather Stations shipped

Post Syndicated from clive original https://www.raspberrypi.org/blog/weather-stations-shipped/

Big brown boxes
If this blog was an Ealing comedy, it would be a speeded-up montage of an increasingly flustered postman delivering huge numbers of huge boxes to school reception desks across the land. At the end, they’d push their cap up at a jaunty angle and wipe their brow with a large spotted handkerchief. With squeaky sound effects.
Over the past couple of days, huge brown boxes have indeed been dropping onto the counters of school receptions across the UK, and they contain something wonderful— a Raspberry Pi Oracle Weather Station.
DJCS on Twitter
Code club students building a weather station kindly donated by the @Raspberry_Pi foundation thanks @clivebeale pic.twitter.com/yGQP4BQ6SP

This week, we sent out the first batch of Weather Station kits to 150 UK schools. Yesterday – World Meteorological Day, of course! – they started to appear in the wild.
DHFS Computing Dept on Twitter
The next code club project has just arrived! Can’t wait to get stuck in! @Raspberry_Pi @clivebeale pic.twitter.com/axA7wJ1RMF

Pilot “lite”
We’re running the UK delivery as a short pilot scheme. With almost 1000 schools involved worldwide, it will give us a chance to tweak software and resources, and to get a feel for how we can best support schools. In the next few weeks, we’ll send out the remainder of the weather stations. We’ll have a good idea of when this will be next week, when the first kits have been in schools for a while.
Once all the stations are shipped, we’ll be extending and expanding our teaching and learning resources. In particular, we would like resources for big data management and visualisation, and for non-computing subjects such as geography.  And, of course, if you make any of your own we’d love to see them.
BWoodhead Primary on Twitter
Super exciting raspberry pi weather station arrived, very lucky to be one of the 150 uk schools @rasberrypi pic.twitter.com/ZER0RPKqIf

 “Just” a milestone
This is a big milestone for the project, but it’s not the end by any means. In fact, it’s just the beginning as schools start to build their stations, using them to investigate the weather and to learn. We’re hoping to see and encourage lots of collaboration between schools. We started the project back in 2014. Over time, it’s easy to take any project for granted, so it was brilliant to see the excitement of teachers and students when they received their kit.
Stackpole V.C School on Twitter
We were really excited to receive our @Raspberry_Pi weather station today. Indoor trial tomorrow. @clivebeale pic.twitter.com/7fsI7DYCYg

It’s been a fun two years, and if you’ve opened a big brown box this morning and found a weather station inside, we think you’ll agree that it’s been worth the wait.
Building and setting up your weather station
The weather station page has tutorials for building the hardware and setting up the software for your weather station, along with a scheme of work for teachers and other resources.
Getting involved
The community is hugely important to us and whether you’ve just received a weather station or not, we’d love to hear from you.  The best way to get involved is to come to the friendly Weather Station corner of our forums and say hi. This is also the place to get help and to share ideas. If you’re tweeting, then you can reach us @raspberry_pi or on the hashtag #weatherstation – thanks!
BA Science on Twitter
Our weather station has arrived!Thanks to @Raspberry_Pi now need some students to help us build it! @BromptonAcademy pic.twitter.com/8qZPG3JTaQ

Buying the kit
We’re often asked if we’ll be selling the kits. We’re currently looking into this and hope that they will be commercially available at some point. I’d love to see a Raspberry Pi Weather Station attached to every school – it’s a project that genuinely engages students across many subjects. In addition, the data gathered from thousands of weather stations, all sending data back to a central database, would be really useful.
That’s all for now
But now that the kits are shipped there’ll be lots going on, so expect more news soon. And do pop into the forums for a chat.
Thanks
As well as the talented and lovely folk at Pi Towers, we’ve only made it this far with the help of others. At risk of turning into a mawkish awards ceremony speech, a few shout-outs are needed:
Oracle for their generous funding and the database support, especially Nicole at Oracle Giving, Jane at Oracle Academy, and Jeff who built our Apex database.
Rachel, Kevin and Team @cpc_tweet for the kit build (each kit has around 80 parts!) and amazing logistics support.
@HackerJimbo for sterling software development and the disk image.
If I’ve missed you out, it doesn’t mean I don’t love you.
The post Raspberry Pi Oracle Weather Stations shipped appeared first on Raspberry Pi.

Astro Pi cases!

Post Syndicated from Rachel Rayns original https://www.raspberrypi.org/blog/astro-pi-cases/

Last month we published a guide on how to 3D print your own Astro Pi flight case. Since then we’ve seen some amazing examples pop up over on Twitter. My favorites have to be the two below.
@KaceyandKristi posted this amazing rainbow flight case – great way to make the most of the layered design!
Kacey-Kristi-Lance on Twitter
@astro_timpeake Tim, our rainbow @astro_pi is ready for coding challenge. #principia inspiring future generations. pic.twitter.com/pf7Xnd9liv

You’ll never lose Jonny Teague‘s case in the dark!
Jonny Teague on Twitter
The @astro_pi case in all its glory and luminescence pic.twitter.com/iP7YIlmEtF

Dave has found some other fantastic examples:
John Chinner‘s neon orange case was made by Ryanteck.
John Chinner on Twitter
Found an excellent 3D printing shop in Singapore. Spent an hour talking about @astro_pi and they gave me this! pic.twitter.com/lMoHv3ljum

Love the red buttons on this sleek black one:
Mac the Hat on Twitter
@astro_pi 90% complete,few more parts and will be clone of @astro_pi_ir that @astro_timpeake has on @ISS_Research pic.twitter.com/CfSBpB12GR

Patrick Wiatt made this classy silver and blue case:
Patrick Wiatt on Twitter
Completed the @astro_pi 3d printable case today, now we just need the Sense HAT! #newellfonda @Raspberry_Pi pic.twitter.com/Wv9d9Th15n

PLA (a material often used by 3D printers) comes in all kinds of different colours – LEFRANCOIS has printed his case using a metallic gold, making it a perfect partner for our original aluminium one.
LEFRANCOIS on Twitter
@astro_pi hi there , here is mine 😉 pic.twitter.com/mCXZyPZ9rT

Richard Hayler, a Code Club mentor from Surrey, went for classic silver filament for his case. He even used the same buttons as the real units up on the International Space Station.
Richard Hayler ☀ on Twitter
Some more pics of our operational 3D printed @astro_pi flight case. http://richardhayler.blogspot.com/2016/02/3d-printed-astro-pi-flight-case.html … pic.twitter.com/u2x5zUgecK

Our absolute favourite photo is of one of Richard’s Code Club students, Ozzy, posing as mini-Tim to recreate a photo of Tim Peake with an Astro Pi flight unit that’s become famous in the community…
Tim with Astro Pi
Ozzy as mini-Tim
These are fantastic and we’d love to see more of them, but I also have an additional challenge for you – hack our design! Maybe you’d like to add your name to the front, or add an extra handle – surprise us!
I added the words “Raspberry Pi” to my top layer very quickly in Tinkercad, a simple and free in-browser 3D-modelling program.
I also successfully installed FreeCad on my Pi3 today, and I’m going to see what I can do over the UK Bank Holiday weekend.
Get your printer on and warming up! Here’s a link to the Astro Pi flight case STL files: go go go!
The post Astro Pi cases! appeared first on Raspberry Pi.

Ten days to enter our Astro Pi competition

Post Syndicated from Rachel Rayns original https://www.raspberrypi.org/blog/ten-days-enter-astro-pi-competition/

Calling all space coders! A quick announcement:
T minus ten days to the deadline of our latest Astro Pi competition.
You have until 12 noon on Thursday 31st March to submit your Sonic Pi tunes and MP3 player code.
Send your code to space
British ESA astronaut Tim Peake wants students to compose music in Sonic Pi for him to listen to. Tim needs to be able to listen to your tunes on one of the Astro Pi flight units, so we are also looking for a Python program to turn the units into an MP3 media player.  You do need to be 18 or under and live in the UK.
We have some fantastic competition judges: musicians including synthpop giants OMD and film composer Ilan Eshkeri, as well as experts from the aerospace industry and our own crack team of developers.
If you haven’t used Sonic Pi before, here is a brilliant introduction from our Education Team:
Getting Started With Sonic Pi | Raspberry Pi Learning Resources
Sonic Pi is an open-source programming environment, designed for creating new sounds with code in a live coding environment; it was developed by Dr Sam Aaron at the University of Cambridge. He uses the software to perform live with his band.

You can find all the competition information, including how to enter, at astro-pi.org/coding-challenges.
The post Ten days to enter our Astro Pi competition appeared first on Raspberry Pi.

Meet the 314GB PiDrive

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/meet-314gb-pidrive/

I’ve been writing for tech mags for as long as the Raspberry Pi has existed, and one of the most popular Pi tutorials I’ve seen in the last four years or so is the classic Raspberry Pi fileserver. It’s a no-brainer really, due to the Pi’s size and power requirements; the only thing you need to add is a USB hard drive. On Pi Day, Western Digital, popular purveyors of hard drives, released PiDrive, a Raspberry Pi-optimised USB hard drive that you may want to consider for the job.
You might want to get an enclosure for it
You see, while the Raspberry Pi may be low power, hard drives are basically a chunk of metal spinning at several thousand RPM; this, as you might expect, needs a little more juice. While in the grand scheme of things a Pi fileserver is still a relatively low-power solution, it does make you wonder. The PiDrive, on the other hand, is designed around the Raspberry Pi. It draws all the power it needs straight from the separately available USB power cable [this article originally implied that the cable was included – we’re sorry for the inaccuracy] which then also fits into the Raspberry Pi. With this and optimisations to the way data is transferred, the power draw of the entire system ends up being lower than the usual methods.
As it was released on Pi Day, WD have gone all-in with the Pi references. It has 314 GB of storage and currently costs $31.42 (£22), which is 31.4% off its RRP of $45.81 (£32).
It comes in a lovely box that reminds you what it’s good for
At that size it’s probably most useful for your day-to-day Pi use, offering more storage than your standard 8GB SD card. However, there are four USB ports on a Raspberry Pi and you can connect a drive to each of them if you want to go down the fileserver/NAS route – WD reckons PiDrive will work just fine for that kind of purpose.
The PiDrive is on sale now. Give it a look!

The post Meet the 314GB PiDrive appeared first on Raspberry Pi.

AWS Lambda and Amazon API Gateway launch in Frankfurt region

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/aws-lambda-and-amazon-api-gateway-launch-in-frankfurt-region/

Vyom Nagrani, Sr. Product Manager, AWS Lambda
We’re happy to announce that you can now build and deploy serverless applications using AWS Lambda and Amazon API Gateway in the Frankfurt region.
Amazon S3, Amazon Kinesis, Amazon SNS, Amazon DynamoDB Streams, Amazon CloudWatch Events, Amazon CloudWatch Logs, and Amazon API Gateway are available as event sources in the Frankfurt region. You can now trigger a Lambda function to process your data stored in Germany using any of these AWS services.

Scheduling SSH jobs using AWS Lambda

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/

Puneet Agarwal, AWS Solution Architect
  With the addition of the Scheduled Events feature, you can now set up AWS Lambda to invoke your code on a regular, scheduled basis. You can now schedule various AWS API activities in your account (such as creation or deletion of CloudFormation stacks, EBS volume snapshots, etc.) with AWS Lambda. In addition, you can use AWS Lambda to connect to your Linux instances by using SSH and run desired commands and scripts at regular time intervals. This is especially useful for scheduling tasks (e.g., system updates, log cleanups, maintenance tasks) on your EC2 instances, when you don’t want to manage cron or external schedulers for a dynamic fleet of instances.
In the following example, you will run a simple shell script that prints “Hello World” to an output file on instances tagged as “Environment=Dev” in your account. You will trigger this shell script through a Lambda function written in Python 2.7.
At a high level, this is what you will do in this example:

Create a Lambda function to fetch the IP addresses of EC2 instances with the “Environment=Dev” tag. This function will serve as a trigger function, and will invoke a worker function for each IP address. The worker function will connect to the EC2 instances using SSH and run a HelloWorld.sh script.
Configure Scheduled Event as an event source to invoke the trigger function every 15 minutes.
Create a Python deployment package (.zip file), with worker function code and other dependencies.
Upload the worker function package to AWS Lambda.

 
Advantages of Scheduled Lambda Events over Ubiquitous Cron
Cron is indeed simple and well understood, which makes it a very popular tool for running scheduled operations. However, there are many architectural benefits that make scheduled Lambda functions and custom scripts a better choice in certain scenarios:

Decouple job schedule and AMI: If your cron jobs are part of an AMI, each schedule change requires you to create a new AMI version, and update existing instances running with that AMI. This is both cumbersome and time-consuming. Using scheduled Lambda functions, you can keep the job schedule outside of your AMI and change the schedule on the fly.
Flexible targeting of EC2 instances: By abstracting the job schedule from AMI and EC2 instances, you can flexibly target a subset of your EC2 instance fleet based on tags or other conditions. In this example, we are targeting EC2 instances with the “Environment=Dev” tag.
Intelligent scheduling: With scheduled Lambda functions, you can add custom logic to your abstracted job scheduler.

While there are many ways of achieving the above benefits, scheduled Lambda functions are an easy-to-use option in your toolkit.
 
Trigger Function
This is a simple Python function that extracts IP addresses of all instances with the “Environment=Dev” tag and invokes the worker function for each of the instances. Decoupling the trigger function from the worker function enables a simpler programming model for parallel execution of tasks on multiple instances.
Steps:

Sign in to the AWS Management Console and open the AWS Lambda console.
Choose Create a Lambda function.
On the Select blueprint page, type cron in the search box.
Choose lambda-canary.
On the Configure event sources page, Event source type defaults to Scheduled Event. You can create a new schedule by entering a name for the schedule, or you can select one of your existing schedules. For Schedule expression, you can specify a fixed rate (the number of minutes, hours, or days between invocations) or you can specify a cron-like expression (example expressions for the 15-minute schedule are shown after these steps). Note that rate frequencies of less than five minutes are not supported at this time.
Choose Next. The Configure Function page appears. Here, you can enter the name and description of your function. Replace the sample code with the following code.

trigger_function.py

import boto3

def trigger_handler(event, context):
    # Get the IP addresses of EC2 instances
    client = boto3.client('ec2')
    instDict = client.describe_instances(
        Filters=[{'Name': 'tag:Environment', 'Values': ['Dev']}]
    )

    hostList = []
    for r in instDict['Reservations']:
        for inst in r['Instances']:
            hostList.append(inst['PublicIpAddress'])

    # Invoke the worker function for each IP address
    client = boto3.client('lambda')
    for host in hostList:
        print "Invoking worker_function on " + host
        invokeResponse = client.invoke(
            FunctionName='worker_function',
            InvocationType='Event',
            LogType='Tail',
            Payload='{"IP":"' + host + '"}'
        )
        print invokeResponse

    return {
        'message': "Trigger function finished"
    }
After adding the trigger code in the console, create the appropriate execution role and set a timeout. Note that the execution role must have permissions to execute EC2 DescribeInstances and invoke Lambda functions. Example IAM Policies for the trigger Lambda role are as follows:

Basic execution policy: https://gist.github.com/apun/8f8c0c0cbea38d7e0bdc (automatically created by AWS Console).
Trigger Policy: https://gist.github.com/apun/33c2fd954a8e238bbcb0 (EC2:Describe* and InvokeFunction permissions to invoke worker_function). After role creation, you can add this policy to the trigger_lambda_role using the IAM console.

Choose Next, choose Enable later, and then choose Create function.
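For reference, the 15-minute schedule used in this example can be written either as a rate or as a cron-style Scheduled Event expression:

rate(15 minutes)
cron(0/15 * * * ? *)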

 
Worker Function
Next, put together the worker Lambda function that connects to an Amazon EC2 instance using SSH, and then runs the HelloWorld.sh script. To initiate SSH connections from the Lambda client, use the Paramiko library. Paramiko is an open source Python implementation of the SSHv2 protocol, providing both client and server functionality. The worker function will first download a private key file from a secured Amazon S3 bucket to the local /tmp folder, and then use that key file to connect to the EC2 instances by using SSH. You must keep your private key secure and make sure that only the worker function has read access to the file on S3. Assuming that the EC2 instances have S3 access permissions through an EC2 role, the worker function will download the HelloWorld.sh script from S3 and execute it locally on each EC2 instance.
Steps:

Create a worker_function.py file on your local Linux machine or on an EC2 instance using the following code.

worker_function.py

import boto3
import paramiko

def worker_handler(event, context):
    s3_client = boto3.client('s3')
    # Download the private key file from the secure S3 bucket
    s3_client.download_file('s3-key-bucket', 'keys/keyname.pem', '/tmp/keyname.pem')

    k = paramiko.RSAKey.from_private_key_file("/tmp/keyname.pem")
    c = paramiko.SSHClient()
    c.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    host = event['IP']
    print "Connecting to " + host
    c.connect(hostname=host, username="ec2-user", pkey=k)
    print "Connected to " + host

    commands = [
        "aws s3 cp s3://s3-bucket/scripts/HelloWorld.sh /home/ec2-user/HelloWorld.sh",
        "chmod 700 /home/ec2-user/HelloWorld.sh",
        "/home/ec2-user/HelloWorld.sh"
    ]
    for command in commands:
        print "Executing {}".format(command)
        stdin, stdout, stderr = c.exec_command(command)
        print stdout.read()
        print stderr.read()

    return {
        'message': "Script execution completed. See CloudWatch logs for complete output"
    }
Now, creating a deployment package is straightforward. For this example, create a deployment package using Virtualenv.

Install Virtualenv on your local Linux machine or an EC2 instance.
$ pip install virtualenv
Create a virtual environment named “helloworld-env”, which will use a Python 2.7 interpreter.
$ virtualenv -p /usr/bin/python2.7 path/to/my/helloworld-env
Activate helloworld-env.
$ source path/to/my/helloworld-env/bin/activate
Install dependencies. PyCrypto provides the low-level (C-based) encryption algorithms needed to implement the SSH protocol.
$ pip install pycrypto
$ pip install paramiko
Add worker_function.py to the zip file.
$ zip path/to/zip/worker_function.zip worker_function.py
Add the dependencies from helloworld-env to the zip file.
$ cd path/to/my/helloworld-env/lib/python2.7/site-packages
$ zip -r path/to/zip/worker_function.zip .
$ cd path/to/my/helloworld-env/lib64/python2.7/site-packages
$ zip -r path/to/zip/worker_function.zip .
Using the AWS console (skip the blueprint step) or the AWS CLI, create a new Lambda function named worker_function and upload worker_function.zip (a sketch of an equivalent CLI call follows the policy list below). Example IAM policies for the worker Lambda role are as follows:

Basic execution policy: https://gist.github.com/apun/8f8c0c0cbea38d7e0bdc (Automatically created by AWS Console)
Worker policy: https://gist.github.com/apun/0647280645b399917191 (GetObject permission for S3 key file)
Caution: To keep your keys secure, make sure no other IAM users or roles, other than intended users, have access to this S3 bucket.
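As a sketch of the CLI route mentioned above (the account ID and role ARN are placeholders; the handler name matches worker_function.py’s worker_handler):

aws lambda create-function \
    --function-name worker_function \
    --runtime python2.7 \
    --handler worker_function.worker_handler \
    --timeout 300 \
    --role arn:aws:iam::123456789012:role/worker_lambda_role \
    --zip-file fileb://path/to/zip/worker_function.zip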

 
Upload key and script to S3
All you need to do now is upload your key and script file to S3 buckets and then you are ready to run the example.
Steps:

Upload HelloWorld.sh to an appropriate S3 bucket (e.g., s3://s3-bucket/scripts/). HelloWorld.sh is a simple shell script that prints “Hello World from instanceID” to a log file and copies that log file to your S3 folder.

HelloWorld.sh

#Get the instanceId from metadata
instanceid=`wget -q -O - http://instance-data/latest/meta-data/instance-id`
LOGFILE="/home/ec2-user/$instanceid.$(date +"%Y%m%d_%H%M%S").log"

#Run Hello World and redirect output to a log file
echo "Hello World from $instanceid" > $LOGFILE

#Copy log file to S3 logs folder
aws s3 cp $LOGFILE s3://s3-bucket/logs/

Upload keyname.pem file, which is your private key to connect to EC2 instances, to a secure S3 bucket (e.g., s3://s3-key-bucket/keys/keyname.pem). To keep your keys secure, make sure no IAM users or roles, other than intended users and the Lambda worker role, have access to this S3 bucket.

 
Running the example
As a final step, enable your trigger_function event source by choosing trigger_function from the list of Lambda functions, choosing the Event sources tab, and clicking Disabled in the State column.
You can now test your newly created Lambda functions and monitor execution logs. AWS Lambda logs all requests handled by your function and automatically stores logs generated by your code using Amazon CloudWatch Logs. The following screenshots show my CloudWatch Logs after completing the preceding steps.
Trigger function log in CloudWatch Logs:   
Worker function log in Cloudwatch Logs:   
Log files that were generated in my S3 bucket:   
 
Other considerations

With the new Lambda VPC support, you can connect to your EC2 instances running in your private VPC by providing private subnet IDs and EC2 security group IDs as part of your Lambda function configuration.
AWS Lambda now supports a maximum function duration of 5 minutes, so you can use scheduled Lambda functions to run jobs that are expected to finish within 5 minutes. For longer-running jobs, you can use the following syntax to run them in the background so that the Lambda function doesn’t wait for command execution to finish: c.exec_command(cmd + ' > /dev/null 2>&1 &')

Scheduling SSH jobs using AWS Lambda

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/

Puneet Agarwal Puneet Agarwal, AWS Solution Architect
  With the addition of the Scheduled Events feature, you can now set up AWS Lambda to invoke your code on a regular, scheduled basis. You can now schedule various AWS API activities in your account (such as creation or deletion of CloudFormation stacks, EBS volume snapshots, etc.) with AWS Lambda. In addition, you can use AWS Lambda to connect to your Linux instances by using SSH and run desired commands and scripts at regular time intervals. This is especially useful for scheduling tasks (e.g., system updates, log cleanups, maintenance tasks) on your EC2 instances, when you don’t want to manage cron or external schedulers for a dynamic fleet of instances.
In the following example, you will run a simple shell script that prints “Hello World” to an output file on instances tagged as “Environment=Dev” in your account. You will trigger this shell script through a Lambda function written in Python 2.7.
At a high level, this is what you will do in this example:

Create a Lambda function to fetch IP addresses of EC2 instances with “Environment=Dev” tag. This function will serve as a trigger function. This trigger function will invoke a worker function, for each IP address. The worker function will connect to EC2 instances using SSH and run a HelloWorld.sh script.
Configure Scheduled Event as an event source to invoke the trigger function every 15 minutes.
Create a Python deployment package (.zip file), with worker function code and other dependencies.
Upload the worker function package to AWS Lambda.

 
Advantages of Scheduled Lambda Events over Ubiquitous Cron
Cron is indeed simple and well understood, which makes it a very popular tool for running scheduled operations. However, there are many architectural benefits that make scheduled Lambda functions and custom scripts a better choice in certain scenarios:

Decouple job schedule and AMI: If your cron jobs are part of an AMI, each schedule change requires you to create a new AMI version, and update existing instances running with that AMI. This is both cumbersome and time-consuming. Using scheduled Lambda functions, you can keep the job schedule outside of your AMI and change the schedule on the fly.
Flexible targeting of EC2 instances: By abstracting the job schedule from AMI and EC2 instances, you can flexibly target a subset of your EC2 instance fleet based on tags or other conditions. In this example, we are targeting EC2 instances with the “Environment=Dev” tag.
Intelligent scheduling: With scheduled Lambda functions, you can add custom logic to you abstracted job scheduler.

While there are many ways of achieving the above benefits, scheduled Lambda functions are an easy-to-use option in your toolkit.
 
Trigger Function
This is a simple Python function that extracts IP addresses of all instances with the “Environment=Dev” tag and invokes the worker function for each of the instances. Decoupling the trigger function from the worker function enables a simpler programming model for parallel execution of tasks on multiple instances.
Steps:

Sign in to the AWS Management Console and open the AWS Lambda console.
Choose Create a Lambda function.
On the Select blueprint page, type cron in the search box.
Choose lambda-canary.
On the Configure event sources page, Event source type defaults to Scheduled Event.  You can create a new schedule by entering a name for the schedule, or can select one of your existing schedules.  For Schedule expression, you can specify a fixed rate (number of minutes, hours, or days between invocations) or you can specify a cron-like expression. Note that rate frequencies of less than five minutes are not supported at this time.  Lambda SSH Configure Events 
Choose Next. The Configure Function page appears.    Here, you can enter the name and description of your function. Replace the sample code here with the following code. trigger_function.py import boto3

def trigger_handler(event, context):
#Get IP addresses of EC2 instances
client = boto3.client(‘ec2’)
instDict=client.describe_instances(
Filters=[{‘Name’:’tag:Environment’,’Values’:[‘Dev’]}]
)

hostList=[]
for r in instDict[‘Reservations’]:
for inst in r[‘Instances’]:
hostList.append(inst[‘PublicIpAddress’])

#Invoke worker function for each IP address
client = boto3.client(‘lambda’)
for host in hostList:
print "Invoking worker_function on " + host
invokeResponse=client.invoke(
FunctionName=’worker_function’,
InvocationType=’Event’,
LogType=’Tail’,
Payload='{"IP":"’+ host +’"}’
)
print invokeResponse

return{
‘message’ : "Trigger function finished"
}
After adding the trigger code in the console, create the appropriate execution role and set a timeout. Note that the execution role must have permissions to execute EC2 DescribeInstances and invoke Lambda functions. Example IAM Policies for the trigger Lambda role are as follows:

Basic execution policy: https://gist.github.com/apun/8f8c0c0cbea38d7e0bdc (automatically created by AWS Console).
Trigger Policy: https://gist.github.com/apun/33c2fd954a8e238bbcb0 (EC2:Describe* and InvokeFunction permissions to invoke worker_function). After role creation, you can add this policy to the trigger_lambda_role using the IAM console.

Choose Next, choose Enable later, and then choose Create function.

 
Worker Function
Next, put together the worker Lambda function that connects to an Amazon EC2 instance using SSH, and then run the HelloWorld.sh script. To initiate SSH connections from the Lambda client, use the Paramiko library. Paramiko is an open source Python implementation of the SSHv2 protocol, providing both client and server functionality. Worker function will irst download a private key file from a secured Amazon S3 bucket to the local /tmp folder, and then use that key file to connect to the EC2 instances by using SSH. You must keep your private key secure and make sure that only the worker function has read access to the file on S3. Assuming that EC2 instances have S3 access permissions through an EC2 role, worker function will download the HelloWorld.sh script from S3 and execute it locally on each EC2 instance.
Steps:

Create worker_function.py file on your local Linux machine or on an EC2 instance using following code worker_function.py import boto3
import paramiko
def worker_handler(event, context):

s3_client = boto3.client(‘s3’)
#Download private key file from secure S3 bucket
s3_client.download_file(‘s3-key-bucket’,’keys/keyname.pem’, ‘/tmp/keyname.pem’)

k = paramiko.RSAKey.from_private_key_file("/tmp/keyname.pem")
c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())

host=event[‘IP’]
print "Connecting to " + host
c.connect( hostname = host, username = "ec2-user", pkey = k )
print "Connected to " + host

commands = [
"aws s3 cp s3://s3-bucket/scripts/HelloWorld.sh /home/ec2-user/HelloWorld.sh",
"chmod 700 /home/ec2-user/HelloWorld.sh",
"/home/ec2-user/HelloWorld.sh"
]
for command in commands:
print "Executing {}".format(command)
stdin , stdout, stderr = c.exec_command(command)
print stdout.read()
print stderr.read()

return
{
‘message’ : "Script execution completed. See Cloudwatch logs for complete output"
}
  Now, creating a deployment package is straightforward. For this example, create a deployment package using Virtualenv.
Install Virtualenv on your local Linux machine or an EC2 instance. $ pip install virtualenv
Create a virtual environment named “helloworld-env“, which will use a Python2.7 interpreter. $ virtualenv –p /usr/bin/python2.7 path/to/my/helloworld-env
Activate helloworld-env. source path/to/my/helloworld-env/bin/activate
Install dependencies. $pip install pycrypto PyCrypto provides the low-level (C-based) encryption algorithms we need to implement the SSH protocol. $pip install paramiko
Add worker_function.py to the zip file. $zip path/to/zip/worker_function.zip worker_function.py
Add dependencies from helloworld-env to the zip file. $cd path/to/my/helloworld-env/lib/python2.7/site-packages
$zip –r path/to/zip/worker_function.zip
$cd path/to/my/helloworld-env/lib64/python2.7/site-packages
$zip –r path/to/zip/worker_function.zip Using the AWS console (skip the blueprint step) or AWS CLI, create a new Lambda function named worker_function and upload worker_function.zip.    Example IAM policies for the worker Lambda role are as follows:

Basic execution policy: https://gist.github.com/apun/8f8c0c0cbea38d7e0bdc (Automatically created by AWS Console)
Worker policy: https://gist.github.com/apun/0647280645b399917191 (GetObject permission for S3 key file)
Caution: To keep your keys secure, make sure that no IAM users or roles other than the intended users have access to this S3 bucket.
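If you prefer to create the function programmatically rather than through the console, the following is a minimal boto3 sketch, assuming the deployment package built above and a worker Lambda role that already carries the policies listed (the role ARN is a placeholder).

import boto3

lambda_client = boto3.client('lambda')

with open('path/to/zip/worker_function.zip', 'rb') as f:
    zipped_code = f.read()

lambda_client.create_function(
    FunctionName='worker_function',
    Runtime='python2.7',
    # Placeholder ARN -- substitute your account ID and role name
    Role='arn:aws:iam::123456789012:role/worker_lambda_role',
    Handler='worker_function.worker_handler',
    Code={'ZipFile': zipped_code},
    Timeout=300,      # allow up to the 5-minute maximum for the SSH work
    MemorySize=128
)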

 
Upload key and script to S3
All you need to do now is upload your key and script files to the S3 buckets, and then you are ready to run the example.
Steps:

Upload HelloWorld.sh to an appropriate S3 bucket (e.g., s3://s3-bucket/scripts/). HelloWorld.sh is a simple shell script that prints "Hello World from instanceID" to a log file and copies that log file to your S3 folder.

HelloWorld.sh

#!/bin/bash
#Get the instance ID from instance metadata
instanceid=`wget -q -O - http://instance-data/latest/meta-data/instance-id`
LOGFILE="/home/ec2-user/$instanceid.$(date +"%Y%m%d_%H%M%S").log"

#Run Hello World and redirect output to a log file
echo "Hello World from $instanceid" > $LOGFILE

#Copy the log file to the S3 logs folder
aws s3 cp $LOGFILE s3://s3-bucket/logs/

Upload the keyname.pem file, which is your private key for connecting to the EC2 instances, to a secure S3 bucket (e.g., s3://s3-key-bucket/keys/keyname.pem). To keep your key secure, make sure that no IAM users or roles, other than the intended users and the Lambda worker role, have access to this S3 bucket. A minimal boto3 sketch of both uploads follows.
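The following sketch performs both uploads with boto3, assuming the bucket names used in the examples above; requesting server-side encryption for the key object is an extra precaution rather than a requirement of this walkthrough.

import boto3

s3 = boto3.client('s3')

# Script goes to the scripts bucket used in the example commands
s3.upload_file('HelloWorld.sh', 's3-bucket', 'scripts/HelloWorld.sh')

# Private key goes to the secured key bucket; ask S3 to encrypt it at rest
s3.upload_file(
    'keyname.pem',
    's3-key-bucket',
    'keys/keyname.pem',
    ExtraArgs={'ServerSideEncryption': 'AES256'}
)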

 
Running the example
As a final step, enable the trigger_function event source by choosing trigger_function from the list of Lambda functions, choosing the Event sources tab, and then clicking Disabled in the State column to switch it to Enabled.
You can now test your newly created Lambda functions and monitor execution logs. AWS Lambda logs all requests handled by your function and automatically stores logs generated by your code using Amazon CloudWatch Logs. The following screenshots show my CloudWatch Logs after completing the preceding steps.
Trigger function log in CloudWatch Logs:
Worker function log in CloudWatch Logs:
Log files that were generated in my S3 bucket:
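You can also invoke the worker function directly, without waiting for the scheduled trigger. The following is a minimal boto3 sketch; the IP address is a placeholder for one of your running instances.

import json
import boto3

lambda_client = boto3.client('lambda')

# Placeholder IP address -- use the address of an instance the function can reach
response = lambda_client.invoke(
    FunctionName='worker_function',
    InvocationType='RequestResponse',
    Payload=json.dumps({'IP': '203.0.113.10'})
)
print(response['Payload'].read())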
 
Other considerations

With the new Lambda VPC support, you can connect to EC2 instances running in your private VPC by providing private subnet IDs and EC2 security group IDs as part of your Lambda function configuration (see the sketch after this list).
AWS Lambda now supports a maximum function duration of 5 minutes, so you can use scheduled Lambda functions to run jobs that are expected to finish within 5 minutes. For longer-running jobs, you can use the following syntax to run them in the background so that the Lambda function doesn't wait for command execution to finish: c.exec_command(cmd + ' > /dev/null 2>&1 &')
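A minimal sketch of adding VPC access to the worker function with boto3; the subnet and security group IDs are placeholders.

import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_function_configuration(
    FunctionName='worker_function',
    VpcConfig={
        'SubnetIds': ['subnet-12345678'],        # placeholder private subnet
        'SecurityGroupIds': ['sg-12345678']      # placeholder security group
    }
)

Note that a function running inside a VPC also needs permissions in its execution role to create and manage elastic network interfaces, and it loses direct Internet access unless the VPC provides a NAT path.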

Powering your Amazon ECS Clusters with Spot Fleet

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/powering-your-amazon-ecs-clusters-with-spot-fleet/

My colleague Drew Dennis sent a nice guest post that shows how to use Amazon ECS with Spot fleet.

There are advantages to using on-demand EC2 instances. However, for many workloads, such as stateless or task-based scenarios that simply run as long as they need to run and are easily replaced with subsequent identical processes, Spot fleet can provide additional compute resources that are more economical. Furthermore, Spot fleet attempts to replace any terminated instances to maintain the requested target capacity.
Amazon ECS is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS already handles the placement and scheduling of containers on EC2 instances. When combined with Spot fleet, ECS can deliver significant savings over EC2 On-Demand pricing.
Why Spot fleet?
Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Because Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications. Spot fleet enables customers to request a collection of Spot instances across multiple Availability Zones and instance types with a single API call.
The Spot fleet API call can specify a target capacity and an allocation strategy. The two available allocation strategies are lowest price and diversified. Lowest price means the instances are provisioned based solely on the lowest current Spot price available while diversified fulfills the request equally across multiple Spot pools (instances of the same type and OS within an Availability Zone) to help mitigate the risk of a sudden Spot price increase. For more information, see How Spot Fleet Works.
Using Spot fleet
The Spot fleet console is available at https://console.aws.amazon.com/ec2spot/home. It provides a simple approach to creating a Spot fleet request and setting up all necessary attributes of the request, including creating an IAM role and base64-encoding user data. The console also provides the option to download the request JSON, which can be used with the CLI if desired.
If you prefer not to use the Spot fleet console, you need to make sure you have an IAM role created with the necessary privileges for the Spot fleet request to bid on, launch, and terminate instances. Note that the iam:PassRole action is needed in this scenario so that Spot fleet can launch instances with a role to participate in an ECS cluster. You need to make sure that you have an AWS SDK or the AWS CLI installed.
This post assumes you are familiar with the process of creating an ECS cluster, creating an ECS task definition, and launching the task definition as a manual task or service. If not, see the ECS documentation.
Creating a Spot fleet request
Before you make your Spot fleet request, make sure you know the instance types, Availability Zones, and bid prices that you plan to request. Note that individual bid prices for various instance types can be used in a Spot fleet request. When you have decided on these items, you are ready to begin the request. In the screenshot below, a fleet request is being created for four c4.large instances using an Amazon Linux ECS-optimized AMI. You can obtain the most up-to-date list of ECS optimized AMIs by region in the Launching an Amazon ECS Container Instance topic.

Notice the very useful warnings if your bid price is below the minimum price to initially launch the instance. From here, you can also access the Spot pricing history and Spot Bid Advisor to better understand past pricing volatility. After choosing Next, you see options to spread the request across multiple zones, specify values for User data, and define other request attributes as shown below. In this example, the user data sets the ECS cluster to which the ECS container agent connects.
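The console links to the Spot pricing history; if you want the same data programmatically while choosing bid prices, the following is a minimal boto3 sketch. The instance type, product description, time window, and Availability Zone are illustrative.

import datetime
import boto3

ec2 = boto3.client('ec2')

# Pull the last 24 hours of Spot prices for one instance type in one Availability Zone
history = ec2.describe_spot_price_history(
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=1),
    InstanceTypes=['c4.large'],
    ProductDescriptions=['Linux/UNIX (Amazon VPC)'],
    AvailabilityZone='us-east-1a'      # placeholder Availability Zone
)
for price in history['SpotPriceHistory']:
    print("{} {} {}".format(price['AvailabilityZone'], price['InstanceType'], price['SpotPrice']))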

Other examples could create a Spot fleet request that contains multiple instance types with Spot price overrides for each instance type in a single Availability Zone. The allocation strategy could still be diversified, which means the request would pull equally from each instance-type pool. This could easily be combined with the previous example to create a fleet request that spans multiple Availability Zones and instance types, further mitigating the risk of Spot instance termination.
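A minimal boto3 sketch of such a request follows. The AMI ID, subnet IDs, key pair, role and instance profile ARNs, bid prices, and cluster name are all placeholders; the user data simply registers each instance with an ECS cluster.

import base64
import boto3

ec2 = boto3.client('ec2')

# User data that registers each instance with the ECS cluster (cluster name is a placeholder)
user_data = "#!/bin/bash\necho ECS_CLUSTER=my-ecs-cluster >> /etc/ecs/ecs.config\n"

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        'SpotPrice': '0.10',                       # default bid (placeholder)
        'TargetCapacity': 4,
        'AllocationStrategy': 'diversified',       # spread evenly across the pools below
        'IamFleetRole': 'arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role',
        'LaunchSpecifications': [
            {
                'ImageId': 'ami-xxxxxxxx',         # ECS-optimized AMI for your region
                'InstanceType': 'c4.large',
                'SubnetId': 'subnet-11111111',     # subnet in one Availability Zone
                'KeyName': 'keyname',
                'UserData': base64.b64encode(user_data.encode()).decode(),
                'IamInstanceProfile': {'Arn': 'arn:aws:iam::123456789012:instance-profile/ecsInstanceRole'}
            },
            {
                'ImageId': 'ami-xxxxxxxx',
                'InstanceType': 'm4.large',        # a second instance-type pool
                'SubnetId': 'subnet-22222222',     # subnet in another Availability Zone
                'KeyName': 'keyname',
                'UserData': base64.b64encode(user_data.encode()).decode(),
                'IamInstanceProfile': {'Arn': 'arn:aws:iam::123456789012:instance-profile/ecsInstanceRole'},
                'SpotPrice': '0.08'                # per-instance-type price override (placeholder)
            }
        ]
    }
)
print(response['SpotFleetRequestId'])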
Running ECS tasks and services on your Spot fleet
After your instances have joined your ECS cluster, you are ready to start tasks or services on them. This involves first creating a task definition. For more information, see the Docker basics walkthrough. After the task definition is created, you can run the tasks manually, or schedule them as a long-running process or service.
In the case of an ECS service, if one of the Spot fleet instances is terminated due to a Spot price interruption, ECS re-allocates the running containers on another EC2 instance within the cluster to maintain the desired number of running tasks, assuming that sufficient resources are available.
If not, within a few minutes, the instance is replaced with a new instance by the Spot fleet request. The new instance is launched according to the configuration of the initial Spot fleet request and rejoins the cluster to participate and run any outstanding containers needed to meet the desired quantity.
In summary, Spot fleet provides an effective and economical way to add instances to an ECS cluster. Because a Spot fleet request can span multiple instance types and Availability Zones, and will always try to maintain a target number of instances, it is a great fit for running stateless containers and adding inexpensive capacity to your ECS clusters.
Auto Scaling and Spot fleet requests
Auto Scaling has proven to be a great way to add or remove EC2 capacity to many AWS workloads. ECS supports Auto Scaling on cluster instances and provides CloudWatch metrics to help facilitate this scenario. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms. The combination of Auto Scaling and Spot fleet provides a nice way to have a pool of fixed capacity and variable capacity on demand while reducing costs.
Currently, Spot fleet requests cannot be integrated directly with Auto Scaling policies as they can with Spot instance requests. However, the Spot fleet API does include an action called ModifySpotFleetRequest that can change the target capacity of your request. The Dynamic Scaling with EC2 Spot Fleet blog post shows an example of a scenario that leverages CloudWatch metrics to invoke a Lambda function and change the Spot fleet target capacity. Using ModifySpotFleetRequest can be a great way to not only fine-tune your fleet requests, but also minimize over-provisioning and further lower costs.
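For reference, a minimal boto3 sketch of that call; the Spot fleet request ID and the new capacity are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Change the target capacity of an existing Spot fleet request (ID is a placeholder)
ec2.modify_spot_fleet_request(
    SpotFleetRequestId='sfr-12345678-1234-1234-1234-123456789012',
    TargetCapacity=6
)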
Conclusion
Amazon ECS manages clusters of EC2 instances for reliable state management and flexible container scheduling. Docker containers lend themselves to flexible and portable application deployments, and when used with ECS provide a simple and effective way to manage fleets of instances and containers, both large and small.
Combining Spot fleet with ECS can provide lower-cost options to augment existing clusters and even provision new ones. Certainly, this can be done with traditional Spot instance requests. However, because Spot fleet allows requests to span instance families and Availability Zones (with multiple allocation strategies, prices, etc.), it is a great way to enhance your ECS strategy by increasing availability and lowering the overall cost of your cluster’s compute capacity.
