Each year, the European Astro Pi Challenge allows students and young people in ESA Member States (or Slovenia, Canada, or Malta) to write code for their own experiments, which could run on two Raspberry Pi units aboard the International Space Station.
The Astro Pi Challenge is a lot of fun, and it’s about space. So that we in the Raspberry Pi team don’t have to miss out just because we’re adults, many of us mentor our own Astro Pi teams — and you should too!
So, gather your team, stock up on freeze-dried ice cream, and let’s do it again: the European Astro Pi Challenge 2019/2020 launches today!
ESA astronaut Luca Parmitano is this year’s ambassador of the European Astro Pi Challenge. In this video, he welcomes students to the challenge and gives an overview of the project. Learn more about Astro Pi: http://bit.ly/AstroPiESA
The European Astro Pi Challenge 2019/2020 is made up of two missions: Mission Zero and Mission Space Lab.
Astro Pi Mission Zero
Mission Zero has been designed for beginners and younger participants up to 14 years old, and it can be completed in a single session. It’s great for coding clubs or any group of students who don’t have coding experience but still want to do something cool, because having confirmation that code you wrote has run aboard the International Space Station is really, really cool! Teams write a simple Python program to display a message and temperature reading on an Astro Pi computer, for the astronauts to see as they go about their daily tasks on the ISS. No special hardware or prior coding skills are needed, and all teams that follow the challenge rules are guaranteed to have their programs run in space!
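To give a flavour of what that involves, here’s a minimal sketch of the kind of program a Mission Zero team might write, using the Sense HAT Python library (the message text is just a placeholder):

from sense_hat import SenseHat

sense = SenseHat()
# Read the current temperature from the Sense HAT sensor and round it for display
temperature = round(sense.get_temperature(), 1)
# Scroll a greeting and the reading across the LED matrix for the astronauts to see
sense.show_message("Hello from our team! Temp: {} C".format(temperature))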
Mission Zero eligibility
Participants must be no older than 14 years
2 to 4 people per team
Participants must be supervised by a teacher, mentor, or educator, who will be the point of contact with the Astro Pi team
Teams must be made up of at least 50% team members who are citizens of an ESA Member State*, or Slovenia, Canada, or Malta
Astro Pi Mission Space Lab
Mission Space Lab is aimed at more experienced/older participants up to 19 years old, and it takes place in 4 phases over the course of 8 months. The challenge is to design and write a program for a scientific experiment to be run on an Astro Pi computer. The best experiments will be deployed to the ISS, and teams will have the opportunity to analyse and report on their results.
Mission Space Lab eligibility
Participants must be no older than 19 years
2 to 6 people per team
Participants must be supervised by a teacher, mentor, or educator, who will be the point of contact with the Astro Pi team
Teams must be made up of at least 50% team members who are citizens of an ESA Member State*, or Slovenia, Canada, or Malta
For both missions, each member of the team has to be at least one of the following:
Enrolled full-time in a primary or secondary school in an ESA Member State, or Slovenia, Canada, or Malta
Homeschooled (certified by the National Ministry of Education or delegated authority in an ESA Member State or Slovenia, Canada, or Malta)
A member of a club or after-school group (such as Code Club, CoderDojo, or Scouts) located in an ESA Member State*, or Slovenia, Canada, or Malta
Take part
To take part in the European Astro Pi Challenge, head over to the Astro Pi website, where you’ll find more information on how to get started and get your team’s code into SPACE!
Obligatory photo of Raspberry Pis floating in space!
*ESA Member States: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland and the United Kingdom
This is your periodic reminder that there are two Raspberry Pi computers in space! That’s right — our Astro Pi units Ed and Izzy have called the International Space Station home since 2016, and we are proud to work with ESA Education to run the European Astro Pi Challenge, which allows students to conduct scientific investigations in space, by writing computer programs.
An Astro Pi takes photos of the Earth from the window of the International Space Station
The Challenge has two missions: Mission Zero and Mission Space Lab. The more advanced one, Mission Space Lab, invites teams of students and young people under 19 years of age to enter by submitting an idea for a scientific experiment to be run on the Astro Pi units.
ESA and the Raspberry Pi Foundation would like to congratulate all the teams that participated in the European Astro Pi Challenge this year. A record-breaking number of more than 15,000 people, from all 22 ESA Member States as well as Canada, Slovenia, and Malta, took part in this year’s challenge across both Mission Space Lab and Mission Zero!
Eleven teams have won Mission Space Lab 2018–2019
After designing their own scientific investigations and having their programs run aboard the International Space Station, the Mission Space Lab teams spent their time analysing the data they received back from the ISS. To complete the challenge, they had to submit a short scientific report discussing their results and highlighting the conclusions of their experiments. We were very impressed by the quality of the reports, which showed a high level of scientific merit.
We are delighted to announce that, while it was a difficult task, the Astro Pi jury has now selected eleven winning teams, as well as highly commending four additional teams. The eleven winning teams won the chance to join an exclusive video call with ESA astronaut Frank De Winne. He is the head of the European Astronaut Centre in Germany, where astronauts train for their missions. Each team had the once-in-a-lifetime chance to ask Frank about his life as an astronaut.
And the winners are…
Firewatchers from Post CERN HSSIP Group, Portugal, used a machine learning method on their images to identify areas that had recently suffered from wildfires.
Go, 3.141592…, Go! from IES Tomás Navarro Tomás, Spain, took pictures of the Yosemite and Lost River forests and analysed them to study the effects of global drought stress. They did this by using indexes of vegetation and moisture to assess whether forests are healthy and well-preserved.
Les Robotiseurs from Ecole Primaire Publique de Saint-André d’Embrun, France, investigated variations in Earth’s magnetic field between the North and South hemispheres, and between day and night.
TheHappy.Pi from I Liceum Ogólnokształcące im. Bolesława Krzywoustego w Słupsku, Poland, successfully processed their images to measure the relative chlorophyll concentrations of vegetation on Earth.
AstroRussell from Liceo Bertrand Russell, Italy, developed a clever image processing algorithm to classify images into sea, cloud, ice, and land categories.
Les Puissants 2.0 from Lycee International de Londres Winston Churchill, United Kingdom, used the Astro Pi’s accelerometer to study the motion of the ISS itself under conditions of normal flight and course correction/reboost maneuvers.
Torricelli from ITIS “E. Torricelli”, Italy, recorded images and took sensor measurements to calculate the orbital period and flight speed of the ISS, and then the mass of the Earth using Newton’s universal law of gravitation.
ApplePi from I Liceum Ogólnokształcące im. Króla Stanisława Leszczyńskiego w Jaśle, Poland, compared their images from Astro Pi Izzy to historical images from 35 years ago and could show that coastlines have changed slightly due to erosion or human impact.
Spacethon from Saint Joseph La Salle Pruillé Le Chétif, France, tested their image-processing algorithm to identify solid, liquid, and gaseous features of exoplanets.
Stithians Rocket Code Club from Stithians CP School, United Kingdom, performed an experiment comparing the temperature aboard the ISS to the average temperature of the nearest country the space station was flying over.
Vytina Aerospace from Primary School of Vytina, Greece, recorded images of reservoirs and lakes on Earth to compare them with historical images from the last 30 years in order to investigate climate change.
Highly commended teams
We also selected four teams to be highly commended, and they will receive a selection of goodies from ESA Education and the Raspberry Pi Foundation:
Aguere Team from IES Marina Cebrián, Spain, investigated variations in the Earth’s magnetic field due to solar activity and a particular disturbance due to a solar coronal hole.
Astroraga from CoderDojo Trento, Italy, measured the magnetic field to investigate whether astronauts can still use a compass, just like on Earth, to orient themselves on the ISS.
Betlemites from Escoles Betlem, Spain, recorded the temperature on the ISS to find out if the pattern of a convection cell is different in microgravity.
Rovel In The Space from Scuola Secondaria di I Grado A. Rosmini, Rovello Porro (Como), Italy, executed a program that monitored the pressure and would warn astronauts in case space debris or micrometeoroids collided with the ISS.
The next edition is not far off!
ESA and the Raspberry Pi Foundation would like to invite all school teachers, students, and young people to join the next edition of the challenge. Make sure to follow updates on the Astro Pi website and Astro Pi Twitter account to look out for the announcement of next year’s Astro Pi Challenge!
In honour of the 50th anniversary of the Apollo moon landing, this year’s Pi Wars was space-themed. Visitors to the two-day event — held at the University of Cambridge in March — were lucky enough to witness a number of competitors and demonstration space-themed robots in action.
Among the most impressive was the Yuri 3 mini Mars rover, which was designed, lovingly crafted, and operated by Airbus engineer John Chinner. Fascinated by Yuri 3’s accuracy, we got John to give us the inside scoop.
Airbus ambassador
John is on the STEM Ambassador team at Airbus and has previously demonstrated its prototype ExoMars rover, Bridget (you can drool over images of this here: magpi.cc/btQnEw), including at the BBC Stargazing Live event in Leicester. Realising the impressive robot’s practical limitations in terms of taking it out and about to schools, John embarked on building a smaller but highly faithful, easily transportable Mars rover. His robot-building experience began in his teens with a six-legged robot he took along to his technical engineering apprenticeship interview and had walk along the desk. Job deftly bagged, he’s been building robots ever since.
Yuri 3 combines an Actobotics chassis, based on one created by Beatty Robotics, with 3D-printed wheels and six 12V DC brushed gear motors. Six Hitec servo motors operate the steering, while the entire rover has an original Raspberry Pi B+ at its heart.
Yuri 3 usually runs in ‘tank steer’ mode. Cannily, the positioning of four of its six wheels at the corners means Yuri 3’s wheels can each be turned so that it spins on the spot. It can also ‘crab’ to the side due to its individually steerable wheels.
The part that’s more challenging for home users to replicate is the ‘gold thermal blanket’. The blanket ensures that the rover can maintain working temperature in the extreme conditions found on Mars. “I was very fortunate to have a bespoke blanket made by the team who make them for satellites,” says John. “They used it as a training exercise for the apprentices.”
John has made some bookmarks from the leftover thermal material which he gives away to schools to use as prizes.
Rover design
While designing Yuri 3, it probably helped that John was able to sneak peeks of Airbus’s ExoMars prototypes being tested at the firm’s Mars Yard. (He once snuck Yuri 3 onto the yard and gave it a test run, but that’s supposed to be a secret!) Also, says John, “I get to see the actual flight rover in its interplanetary bio clean room”.
His involvement with all things Raspberry Pi came about when he was part of the Astro Pi programme, in which students send code to two Raspberry Pi devices aboard the International Space Station every year. “I did the shock, vibration, and EMC testing on the actual Astro Pi units in Airbus, Portsmouth,” John proudly tells us.
A very British rover
As part of the European Space Agency mission ExoMars, Airbus is building and integrating the rover in Stevenage. “What a fantastic opportunity for exciting outreach,” says John. “After all the fun with Tim Peake’s Principia mission, why not make the next British astronaut a Mars rover? … It is exciting to be able to go and visit Stevenage and see the prototype rovers testing on the Mars Yard.”
John also mentions that he’d love to see Yuri 3 put in an appearance at the Raspberry Pi Store; in the meantime, drooling punters will have to build their own Mars rover from similar kit. Or, we’ll just enjoy John’s footage of Yuri 3 in action and perhaps ask very nicely if he’ll bring Yuri along for a demonstration at an event or school near us.
John wrote about the first year of his experience building Yuri 3 on his blog. And you can follow the adventures of Yuri 3 over on Twitter: @Yuri_3_Rover.
Read the new issue of The MagPi
This article is from today’s brand-new issue of The MagPi, the official Raspberry Pi magazine. Buy it from all good newsagents, subscribe to pay less per issue and support our work, or download the free PDF to give it a try first.
You’re most likely aware of the Astro Pi Challenge. In case you’re not, it’s a wonderfully exciting programme organised by the European Space Agency (ESA) and us at Raspberry Pi. Astro Pi challenges European young people to write scientific experiments in code, and the best experiments run aboard the International Space Station (ISS) on two Astro Pi units: Raspberry Pi 1 B+ computers with Sense HATs, encased in flight-grade aluminium spacesuits.
It’s very cool. So, so cool. As adults, we’re all extremely jealous that we’re unable to take part. We all love space and, to be honest, we all want to be astronauts. Astronauts are the coolest.
So imagine our excitement at Pi Towers when ESA shared this photo on Friday:
This is a Soyuz vehicle on its way to dock with the International Space Station. And while Soyuz vehicles ferry between Earth and the ISS all the time, what’s so special about this occasion is that this very photo was captured using a Raspberry Pi 1 B+ and a Raspberry Pi Camera Module, together known as Izzy, one of the Astro Pi units!
So if anyone ever asks you whether the Raspberry Pi Camera Module is any good, just show them this photo. We don’t think you’ll need to provide any further evidence after that.
Today, ESA Education and the Raspberry Pi Foundation are proud to celebrate the International Day of Women and Girls in Science! In support of this occasion and to encourage young women to enter a career in STEM (science, technology, engineering, mathematics), CSA astronaut Jenni Sidey discusses why she believes computing and digital making skills are so important, and tells us about the role models that inspired her.
Happy International Day of Women and Girls in Science!
The International Day of Women and Girls in Science is part of the United Nations’ plan to achieve their 2030 Agenda for Sustainable Development. According to current UNESCO data, less than 30% of researchers in STEM are female, and only 30% of young women are selecting STEM-related subjects in higher education.
That’s why part of the UN’s 2030 Agenda is to promote full and equal access to and participation in science for women and girls. And to help young women and girls develop their computing and digital making skills, we want to encourage their participation in the European Astro Pi Challenge!
The European Astro Pi Challenge
The European Astro Pi Challenge is an ESA Education programme run in collaboration with the Raspberry Pi Foundation that offers students and young people the amazing opportunity to conduct scientific investigations in space! The challenge is to write computer programs for one of two Astro Pi units — Raspberry Pi computers on board the International Space Station.
Astro Pi’s Mission Zero is open until 20 March 2019, and this mission gives young people up to 14 years of age the chance to write a simple program to display a message to the astronauts on the ISS. No special equipment or prior coding skills are needed, and all participants that follow the mission rules are guaranteed to have their program run in space!
Take part in Mission Zero — in your language!
To help many more people take part in their native language, we’ve translated the Mission Zero resource, guidelines, and web page into 19 different languages! Head to our languages section to find your version of Mission Zero and take part.
If you have any questions regarding the European Astro Pi Challenge, email us at [email protected].
In 2014, the Raspberry Pi Foundation partnered with the UK Space Agency and the European Space Agency to fly two Raspberry Pi computers to the International Space Station. These Pis, known as Astro Pis Ed and Izzy, are each equipped with a Sense HAT and a Camera Module (IR or visible light) and housed within special space-hardened cases.
In our annual Astro Pi Challenge, young people from all 22 ESA member states have the opportunity to design and code experiments for the Astro Pis to become the next generation of space scientists.
Mission Zero vs Mission Space Lab
Back in September, we announced the 2017/2018 European Astro Pi Challenge, in partnership with the European Space Agency. This year, for the first time, the Astro Pi Challenge comprised two missions: Mission Zero and Mission Space Lab.
Mission Zero is a new entry-level challenge that allows young coders to have their message displayed to the astronauts on-board the ISS. It finished up in February, with more than 5400 young people in over 2500 teams taking part!
For Mission Space Lab, young people work like real scientists by designing their own experiment to investigate one of two topics:
Life in space
For this topic, young coders write code to run on Astro Pi Vis (Ed) in the Columbus module to investigate life aboard the ISS.
Life on Earth
For this topic, young people design a code experiment to run on Astro Pi IR (Izzy), aimed towards the Earth through a window, to investigate life down on our planet.
Our participants
We had more than 1400 students across 330 teams take part in this year’s Mission Space Lab. Teams who submitted an eligible idea for an experiment received an Astro Pi kit from ESA to develop their Python code. These kits contain the same hardware that’s aboard the ISS, enabling students to test their experiments in conditions similar to those on the space station. The best experiments were granted flight status earlier this year, and the code of these teams ran on the ISS in April.
And the winners are…
The teams received the results of their experiments and were asked to submit scientific reports based on their findings. Just a few weeks ago, 98 teams sent us brilliant reports, and we had the difficult task of whittling the pool of teams down to find the final ten winners!
As you can see in the video above, the winning teams were lucky enough to take part in a very special video conference with ESA Astronaut Tim Peake.
2017/18 Mission Space Lab winning teams
The Dark Side of Light from Branksome Hall, Canada, investigated whether the light pollution in an area could be used to determine the source of energy for its electricity consumption.
Spaceballs from Attert Lycée Redange, Luxembourg, successfully calculated the speed of the ISS by analysing ground photographs.
Enrico Fermi from Liceo XXV Aprile, Italy, investigated the link between the Astro Pi’s magnetometer and X-ray measurements from the GOES-15 satellite.
Team Aurora from Hyvinkään yhteiskoulun lukio, Finland, showed how the Astro Pi’s magnetometer could be used to map the Earth’s magnetic field and determine the latitude of the ISS.
@stroMega from Institut de Genech, France, used Astro Pi Izzy’s near-infrared Camera Module to measure the health and density of vegetation on Earth.
Ursa Major from a CoderDojo in Belgium created a program to autonomously measure the percentage of vegetation, water, and clouds in photographs from Astro Pi Izzy.
Canarias 1 from IES El Calero, Spain, built on existing data and successfully determined whether the ISS was eclipsed from on-board sensor data.
The Earth Watchers from S.T.E.M Robotics Academy, Greece, used Astro Pi Izzy to compare the health of vegetation in Quebec, Canada, and Guam.
Trentini DOP from CoderDojo Trento, Italy, investigated the stability of the on-board conditions of the ISS and whether or not they were affected by eclipsing.
Team Lampone from CoderDojo Trento, Italy, accurately measured the speed of the ISS by analysing ground photographs taken by Astro Pi Izzy.
Well done to everyone who took part, and massive congratulations to all the winners!
Thanks to Susan Ferrell, Senior Technical Writer, for a great blog post on how to use CodeCommit branch-level permissions.
AWS CodeCommit users have been asking for a way to restrict commits to some repository branches to just a few people. In this blog post, we’re going to show you how to do that by creating and applying a conditional policy, an AWS Identity and Access Management (IAM) policy that contains a context key.
Why would I do this?
When you create a branch in an AWS CodeCommit repository, the branch is available, by default, to all repository users. Here are some scenarios in which refining access might help you:
You maintain a branch in a repository for production-ready code, and you don’t want to allow changes to this branch except from a select group of people.
You want to limit the number of people who can make changes to the default branch in a repository.
You want to ensure that pull requests cannot be merged to a branch except by an approved group of developers.
We’ll show you how to create a policy in IAM that prevents users from pushing commits to and merging pull requests to a branch named master. You’ll attach that policy to one group or role in IAM, and then test how users in that group are affected when that policy is applied. We’ll explain how it works, so you can create custom policies for your repositories.
What you need to get started
You’ll need to sign in to AWS with sufficient permissions to:
Create and apply policies in IAM.
Create groups in IAM.
Add users to those groups.
Apply policies to those groups.
You can use existing IAM groups, but because you’re going to be changing permissions, you might want to first test this out on groups and users you’ve created specifically for this purpose.
You’ll need a repository in AWS CodeCommit with at least two branches: master and test-branch. For information about how to create repositories, see Create a Repository. For information about how to create branches, see Create a Branch. In this blog post, we’ve named the repository MyDemoRepo. You can use an existing repository with branches of another name, if you prefer.
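If you prefer the AWS CLI to the console, the repository can be created, and the second branch added once master has an initial commit, along these lines (the commit ID is a placeholder for the commit you want to branch from):

aws codecommit create-repository --repository-name MyDemoRepo

aws codecommit create-branch --repository-name MyDemoRepo --branch-name test-branch --commit-id <commit ID to branch from>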
Let’s get started!
Create two groups in IAM
We’re going to set up two groups in IAM: Developers and Senior_Developers. To start, both groups will have the same managed policy, AWSCodeCommitPowerUsers, applied. Users in each group will have exactly the same permissions to perform actions in IAM.
Figure 1: Two example groups in IAM, with distinct users but the same managed policy applied to each group
In the navigation pane, choose Groups, and then choose Create New Group.
In the Group Name box, type Developers, and then choose Next Step.
In the list of policies, select the check box for AWSCodeCommitPowerUsers, then choose Next Step.
Choose Create Group.
Now, follow these steps to create the Senior_Developers group and attach the AWSCodeCommitPowerUsers managed policy. You now have two empty groups with the same policy attached.
Create users in IAM
Next, add at least one unique user to each group. You can use existing IAM users, but because you’ll be affecting their access to AWS CodeCommit, you might want to create two users just for testing purposes. Let’s go ahead and create Arnav and Mary.
In the navigation pane, choose Users, and then choose Add user.
For the new user, type Arnav_Desai.
Choose Add another user, and then type Mary_Major.
Select the type of access (programmatic access, access to the AWS Management Console, or both). In this blog post, we’ll be testing everything from the console, but if you want to test AWS CodeCommit using the AWS CLI, make sure you include programmatic access and console access.
For Console password type, choose Custom password. Each user is assigned the password that you type in the box. Write these down so you don’t forget them. You’ll need to sign in to the console using each of these accounts.
Choose Next: Permissions.
On the Set permissions page, choose Add user to group. Add Arnav to the Developers group. Add Mary to the Senior_Developers group.
Choose Next: Review to see all of the choices you made up to this point. When you are ready to proceed, choose Create user.
Sign in as Arnav, and then follow these steps to go to the master branch and add a file. Then sign in as Mary and follow the same steps.
On the Dashboard page, from the list of repositories, choose MyDemoRepo.
In the Code view, choose the branch named master.
Choose Add file, and then choose Create file. Type some text or code in the editor.
Provide information to other users about who added this file to the repository and why.
In Author name, type the name of the user (Arnav or Mary).
In Email address, type an email address so that other repository users can contact you about this change.
In Commit message, type a brief description to help you remember why you added this file or any other details you might find helpful.
Type a name for the file.
Choose Commit file.
Now follow the same steps to add a file in a different branch. (In our example repository, that’s the branch named test-branch.) You should be able to add a file to both branches regardless of whether you’re signed in as Arnav or Mary.
Let’s change that.
Create a conditional policy in IAM
You’re going to create a policy in IAM that will deny API actions if certain conditions are met. We want to prevent users with this policy applied from updating a branch named master, but we don’t want to prevent them from viewing the branch, cloning the repository, or creating pull requests that will merge to that branch. For this reason, we want to pick and choose our APIs carefully. Looking at the Permissions Reference, the logical permissions for this are:
GitPush
PutFile
MergePullRequestByFastForward
Now’s the time to think about what else you might want this policy to do. For example, because we don’t want users with this policy to make changes to this branch, we probably don’t want them to be able to delete it either, right? So let’s add one more permission:
DeleteBranch
The branch in which we want to deny these actions is master. The repository in which the branch resides is MyDemoRepo. We’re going to need more than just the repository name, though. We need the repository ARN. Fortunately, that’s easy to find. Just go to the AWS CodeCommit console, choose the repository, and choose Settings. The repository ARN is displayed on the General tab.
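If you prefer the command line, you can also fetch the ARN with the AWS CLI (assuming your credentials and default region are already configured):

aws codecommit get-repository --repository-name MyDemoRepo --query 'repositoryMetadata.Arn' --output text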
Now we’re ready to create a policy.
Open the IAM console at https://console.aws.amazon.com/iam/. Make sure you’re signed in with the account that has sufficient permissions to create policies, and not as Arnav or Mary.
In the navigation pane, choose Policies, and then choose Create policy.
Choose JSON, and then paste in the following:
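A policy along the following lines does the job; the region and account ID in the ARN are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:PutFile",
        "codecommit:MergePullRequestByFastForward"
      ],
      "Resource": "arn:aws:codecommit:us-east-2:111111111111:MyDemoRepo",
      "Condition": {
        "StringEqualsIfExists": {
          "codecommit:References": [
            "refs/heads/master"
          ]
        },
        "Null": {
          "codecommit:References": "false"
        }
      }
    }
  ]
}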
You’ll notice a few things here. First, change the repository ARN to the ARN for your repository and include the repository name. Second, if you want to restrict access to a branch with a name different from our example, master, change that reference too.
Now let’s talk about this policy and what it does. You might be wondering why we’re using a Git reference (refs/heads) value instead of just the branch name. The answer lies in how Git references things, and how AWS CodeCommit, as a Git-based repository service, implements its APIs. A branch in Git is a simple pointer (reference) to the SHA-1 value of the head commit for that branch.
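For example, in a local clone you can inspect that pointer directly (assuming a branch named master exists):

git show-ref refs/heads/master
# prints the SHA-1 of the head commit followed by refs/heads/master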
You might also be wondering about the second part of the condition, the nullification language. This is necessary because of the way git push and git-receive-pack work. Without going into too many technical details, when you attempt to push a change from a local repo to AWS CodeCommit, an initial reference call is made to AWS CodeCommit without any branch information. AWS CodeCommit evaluates that initial call to ensure that:
a) You’re authorized to make calls.
b) A repository exists with the name specified in the initial call.
If you left that null out of the policy, users with that policy would be unable to complete any pushes from their local repos to the AWS CodeCommit remote repository at all, regardless of which branch they were trying to push their commits to.
Could you write a policy in such a way that the null is not required? Of course. IAM policy language is flexible. There’s an example of how to do this in the AWS CodeCommit User Guide, if you’re curious. But for the purposes of this blog post, let’s continue with this policy as written.
So what have we essentially said in this policy? We’ve asked IAM to deny the relevant CodeCommit permissions if the request is made to the resource MyDemoRepo and it meets the following condition: the reference is to refs/heads/master. Otherwise, the deny does not apply.
I’m sure you’re wondering if this policy has to be constrained to a specific repository resource like MyDemoRepo. After all, it would be awfully convenient if a single policy could apply to all branches in any repository in an AWS account, particularly since the default branch in any repository is initially the master branch. Good news! Simply replace the ARN with an *, and your policy will affect ALL branches named master in every AWS CodeCommit repository in your AWS account. Make sure that this is really what you want, though. We suggest you start by limiting the scope to just one repository, and then changing things when you’ve tested it and are happy with how it works.
When you’re sure you’ve modified the policy for your environment, choose Review policy to validate it. Give this policy a name, such as DenyChangesToMaster, provide a description of its purpose, and then choose Create policy.
Now that you have a policy, it’s time to apply and test it.
Apply the policy to a group
In theory, you could apply the policy you just created directly to any IAM user, but that really doesn’t scale well. You should apply this policy to a group, if you use IAM groups to manage users, or to a role, if your users assume a role when interacting with AWS resources.
In the IAM console, choose Groups, and then choose Developers.
On the Permissions tab, choose Attach Policy.
Choose DenyChangesToMaster, and then choose Attach policy.
Your groups now have a critical difference: users in the Developers group have an additional policy applied that restricts their actions in the master branch. In other words, Mary can continue to add files, push commits, and merge pull requests in the master branch, but Arnav cannot.
Figure 2: Two example groups in IAM, one with an additional policy applied that will prevent users in this group from making changes to the master branch
Test it out. Sign in as Arnav, and do the following:
On the Dashboard page, from the list of repositories, choose MyDemoRepo.
In the Code view, choose the branch named master.
Choose Add file, and then choose Create file, just as you did before. Provide some text, and then add the file name and your user information.
Choose Commit file.
This time you’ll see an error after choosing Commit file. It’s not a pretty message, but at the very end, you’ll see a telling phrase: “explicit deny”. That’s the policy in action. You, as Arnav, are explicitly denied PutFile, which prevents you from adding a file to the master branch. You’ll see similar results if you try other actions denied by that policy, such as deleting the master branch.
Stay signed in as Arnav, but this time add a file to test-branch. You should be able to add a file without seeing any errors. You can create a branch based on the master branch, add a file to it, and create a pull request that will merge to the master branch, all just as before. However, you cannot perform denied actions on that master branch.
Sign out as Arnav and sign in as Mary. You’ll see that as that IAM user, you can add and edit files in the master branch, merge pull requests to it, and even, although we don’t recommend this, delete it.
Conclusion
You can use conditional statements in policies in IAM to refine how users interact with your AWS CodeCommit repositories. This blog post showed how to use such a policy to prevent users from making changes to a branch named master. There are many other options. We hope this blog post will encourage you to experiment with AWS CodeCommit, IAM policies, and permissions. If you have any questions or suggestions, we’d love to hear from you.
The CoreOS blog is carrying an article describing the path forward now that CoreOS is owned by Red Hat. “Since Red Hat’s acquisition of CoreOS was announced, we received questions on the fate of Container Linux. CoreOS’s first project, and initially its namesake, pioneered the lightweight, ‘over-the-air’ automatically updated container native operating system that fast rose in popularity running the world’s containers. With the acquisition, Container Linux will be reborn as Red Hat CoreOS, a new entry into the Red Hat ecosystem. Red Hat CoreOS will be based on Fedora and Red Hat Enterprise Linux sources and is expected to ultimately supersede Atomic Host as Red Hat’s immutable, container-centric operating system.” Some information can also be found in this Red Hat press release.
Many companies across the globe use Amazon DynamoDB to store and query historical user-interaction data. DynamoDB is a fast NoSQL database used by applications that need consistent, single-digit millisecond latency.
Often, customers want to turn their valuable data in DynamoDB into insights by analyzing a copy of their table stored in Amazon S3. Doing this separates their analytical queries from their low-latency critical paths. This data can be the primary source for understanding customers’ past behavior, predicting future behavior, and generating downstream business value. Customers often turn to DynamoDB because of its great scalability and high availability. After a successful launch, many customers want to use the data in DynamoDB to predict future behaviors or provide personalized recommendations.
DynamoDB is a good fit for low-latency reads and writes, but it’s not practical to scan all data in a DynamoDB database to train a model. In this post, I demonstrate how you can use DynamoDB table data copied to Amazon S3 by AWS Data Pipeline to predict customer behavior. I also demonstrate how you can use this data to provide personalized recommendations for customers using Amazon SageMaker. You can also run ad hoc queries using Amazon Athena against the data. DynamoDB recently released on-demand backups to create full table backups with no performance impact. However, on-demand backup isn’t suitable for our purposes in this post, so I chose AWS Data Pipeline instead to create managed backups that are accessible from other services.
To do this, I describe how to read the DynamoDB backup file format in Data Pipeline. I also describe how to convert the objects in S3 to a CSV format that Amazon SageMaker can read. In addition, I show how to schedule regular exports and transformations using Data Pipeline. The sample data used in this post is from the UCI Bank Marketing Data Set.
The solution that I describe provides the following benefits:
Separates analytical queries from production traffic on your DynamoDB table, preserving your DynamoDB read capacity units (RCUs) for important production requests
Automatically updates your model to get real-time predictions
Optimizes for performance (so it doesn’t compete with DynamoDB RCUs after the export) and for cost (using data you already have)
Makes it easier for developers of all skill levels to use Amazon SageMaker
All the code and the data set used in this post are available in this .zip file.
Solution architecture
The following diagram shows the overall architecture of the solution.
The steps that data follows through the architecture are as follows:
Data Pipeline regularly copies the full contents of a DynamoDB table as JSON into an S3 bucket.
Exported JSON files are converted to comma-separated value (CSV) format to use as a data source for Amazon SageMaker.
Amazon SageMaker renews the model artifact and updates the endpoint.
The converted CSV is available for ad hoc queries with Amazon Athena.
Data Pipeline controls this flow and repeats the cycle based on the schedule defined by customer requirements.
Building the auto-updating model
This section discusses details about how to read the DynamoDB exported data in Data Pipeline and build automated workflows for real-time prediction with a regularly updated model.
Find the automation_script.sh file and edit it for your environment. For example, you need to replace 's3://<your bucket>/<datasource path>/' with your own S3 path to the data source for Amazon SageMaker. In the script, any text enclosed in angle brackets (< and >) should be replaced with your own values.
Upload the json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar file to your S3 path so that the ADD jar command in Apache Hive can refer to it.
For this solution, the banking.csv should be imported into a DynamoDB table.
Export a DynamoDB table
To export the DynamoDB table to S3, open the Data Pipeline console and choose the Export DynamoDB table to S3 template. In this template, Data Pipeline creates an Amazon EMR cluster and performs an export in the EMRActivity activity. Set proper intervals for backups according to your business requirements.
One core node (m3.xlarge) provides the default capacity for the EMR cluster and should be suitable for the solution in this post. Leave the option to resize the cluster before running enabled in the TableBackupActivity activity so that Data Pipeline can scale the cluster to match the table size. The process of converting to CSV format and renewing models happens in this EMR cluster.
For a more in-depth look at how to export data from DynamoDB, see Export Data from DynamoDB in the Data Pipeline documentation.
Add the script to an existing pipeline
After you export your DynamoDB table, you add an additional EMR step to EMRActivity by following these steps:
Open the Data Pipeline console and choose the ID for the pipeline that you want to add the script to.
For Actions, choose Edit.
In the editing console, choose the Activities category and add an EMR step using the custom script downloaded in the previous section, as shown below.
Paste the following command into the new step after the data upload step:
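The exact step string isn’t reproduced here; one plausible form, assuming automation_script.sh was uploaded to your own bucket and is run through EMR’s script-runner JAR, looks like this:

s3://<your region>.elasticmapreduce/libs/script-runner/script-runner.jar,s3://<your bucket>/automation_script.sh,#{output.directoryPath},<your region>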
The element #{output.directoryPath} references the S3 path where the data pipeline exports DynamoDB data as JSON. The path should be passed to the script as an argument.
The bash script has two goals: converting data formats and renewing the Amazon SageMaker model. Subsequent sections discuss the contents of the automation script.
Automation script: Convert JSON data to CSV with Hive
We use Apache Hive to transform the data into a new format. The Hive QL script to create an external table and transform the data is included in the custom script that you added to the Data Pipeline definition.
When you run the Hive scripts, do so with the -e option. Also, define the Hive table with the 'org.openx.data.jsonserde.JsonSerDe' row format to parse and read JSON format. The SQL creates a Hive EXTERNAL table, and it reads the DynamoDB backup data on the S3 path passed to it by Data Pipeline.
Note: You should create the table with the “EXTERNAL” keyword to avoid the backup data being accidentally deleted from S3 if you drop the table.
The full automation script for converting follows. Add your own bucket name and data source path in the highlighted areas.
After creating an external table, you need to read its data. You then use the INSERT OVERWRITE DIRECTORY command with a SELECT statement to write CSV data to the S3 path that you designated as the data source for Amazon SageMaker.
Depending on your requirements, you can eliminate or process the columns in the SELECT clause in this step to optimize data analysis. For example, you might remove some columns that have unpredictable correlations with the target value, because keeping the wrong columns might expose your model to “overfitting” during training. In this post, the customer_id column is removed. Overfitting can weaken your predictions. More information about overfitting can be found in the topic Model Fit: Underfitting vs. Overfitting in the Amazon ML documentation.
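For illustration only, and assuming an external table named ddb_backup has already been created over the exported JSON, the write-out step follows this pattern (only a few columns are shown; in practice the SELECT list should match the features your model expects, and SageMaker’s built-in algorithms expect the label column first in CSV input):

INSERT OVERWRITE DIRECTORY 's3://<your bucket name>/<datasource path>/'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT y, age, duration, campaign  -- remaining feature columns omitted in this sketch
FROM ddb_backup;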
Automation script: Renew the Amazon SageMaker model
After the CSV data is replaced and ready to use, create a new model artifact for Amazon SageMaker with the updated dataset on S3. To renew the model artifact, you must create a new training job. Training jobs can be run using the AWS SDK (for example, the Amazon SageMaker boto3 client), the Amazon SageMaker Python SDK (which can be installed with the “pip install sagemaker” command), or the AWS CLI for Amazon SageMaker, which is what this post uses.
In addition, consider how to renew your existing model smoothly and without service impact, because your model is called by applications in real time. To do this, create a new endpoint configuration first, and then update the current endpoint with the newly created endpoint configuration.
#!/bin/bash
## Define variable
REGION=$2
DTTIME=`date +%Y-%m-%d-%H-%M-%S`
ROLE="<your AmazonSageMaker-ExecutionRole>"
# Select containers image based on region.
case "$REGION" in
"us-west-2" )
IMAGE="174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest"
;;
"us-east-1" )
IMAGE="382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest"
;;
"us-east-2" )
IMAGE="404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest"
;;
"eu-west-1" )
IMAGE="438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest"
;;
*)
echo "Invalid Region Name"
exit 1 ;
esac
# Start training job and creating model artifact
TRAINING_JOB_NAME=TRAIN-${DTTIME}
S3OUTPUT="s3://<your bucket name>/model/"
INSTANCETYPE="ml.m4.xlarge"
INSTANCECOUNT=1
VOLUMESIZE=5
aws sagemaker create-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION} --algorithm-specification TrainingImage=${IMAGE},TrainingInputMode=File --role-arn ${ROLE} --input-data-config '[{ "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://<your bucket name>/<datasource path>/", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "text/csv", "CompressionType": "None" , "RecordWrapperType": "None" }]' --output-data-config S3OutputPath=${S3OUTPUT} --resource-config InstanceType=${INSTANCETYPE},InstanceCount=${INSTANCECOUNT},VolumeSizeInGB=${VOLUMESIZE} --stopping-condition MaxRuntimeInSeconds=120 --hyper-parameters feature_dim=20,predictor_type=binary_classifier
# Wait until job completed
aws sagemaker wait training-job-completed-or-stopped --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}
# Get newly created model artifact and create model
MODELARTIFACT=`aws sagemaker describe-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION} --query 'ModelArtifacts.S3ModelArtifacts' --output text `
MODELNAME=MODEL-${DTTIME}
aws sagemaker create-model --region ${REGION} --model-name ${MODELNAME} --primary-container Image=${IMAGE},ModelDataUrl=${MODELARTIFACT} --execution-role-arn ${ROLE}
# create a new endpoint configuration
CONFIGNAME=CONFIG-${DTTIME}
aws sagemaker create-endpoint-config --region ${REGION} --endpoint-config-name ${CONFIGNAME} --production-variants VariantName=Users,ModelName=${MODELNAME},InitialInstanceCount=1,InstanceType=ml.m4.xlarge
# create or update the endpoint
STATUS=`aws sagemaker describe-endpoint --endpoint-name ServiceEndpoint --query 'EndpointStatus' --output text --region ${REGION} `
if [[ "$STATUS" != "InService" ]] ;
then
aws sagemaker create-endpoint --endpoint-name ServiceEndpoint --endpoint-config-name ${CONFIGNAME} --region ${REGION}
else
aws sagemaker update-endpoint --endpoint-name ServiceEndpoint --endpoint-config-name ${CONFIGNAME} --region ${REGION}
fi
Grant permission
Before you execute the script, you must grant proper permission to Data Pipeline. Data Pipeline uses the DataPipelineDefaultResourceRole role by default. I added the following policy to DataPipelineDefaultResourceRole to allow Data Pipeline to create, delete, and update the Amazon SageMaker model and data source in the script.
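The exact policy isn’t reproduced here; a sketch of the kind of statement required follows. iam:PassRole is included because the script passes the SageMaker execution role when creating the training job and model, and in production you would scope Resource down rather than using a wildcard:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:CreateTrainingJob",
        "sagemaker:DescribeTrainingJob",
        "sagemaker:CreateModel",
        "sagemaker:CreateEndpointConfig",
        "sagemaker:CreateEndpoint",
        "sagemaker:DescribeEndpoint",
        "sagemaker:UpdateEndpoint",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}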
After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint. This approach is useful for interactive web, mobile, or desktop applications.
Following is a simple Python code example that queries the Amazon SageMaker endpoint by its name (“ServiceEndpoint”) and uses the result for real-time prediction.
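The original code isn’t reproduced here; a minimal sketch using boto3 could look like the following, where sample_record.csv is a hypothetical file containing one record with the same feature columns, in the same order, as the training data:

import boto3
import json

runtime = boto3.client('sagemaker-runtime')

# Read a single CSV record to send to the endpoint (hypothetical sample file)
with open('sample_record.csv') as f:
    payload = f.readline().strip()

# Invoke the live endpoint by name; whatever model the current endpoint
# configuration points at serves the prediction
response = runtime.invoke_endpoint(
    EndpointName='ServiceEndpoint',
    ContentType='text/csv',
    Body=payload
)

# The linear learner binary classifier returns a score and a predicted label per record
result = json.loads(response['Body'].read().decode('utf-8'))
print(result['predictions'][0])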
Data Pipeline exports DynamoDB table data into S3. The original JSON data should be kept to recover the table in the rare event that this is needed. Data Pipeline then converts the JSON to CSV so that Amazon SageMaker can read the data. Note: You should select only meaningful attributes when you convert to CSV. For example, if you judge that the “campaign” attribute is not correlated, you can eliminate this attribute from the CSV.
Train the Amazon SageMaker model with the new data source.
When a new customer comes to your site, you can judge how likely it is for this customer to subscribe to your new product based on “predictedScores” provided by Amazon SageMaker.
If the new user subscribes to your new product, your application must update the attribute “y” to the value 1 (for yes). This updated data is provided for the next model renewal as a new data source, and it serves to improve the accuracy of your predictions. With each new entry, your application can become smarter and deliver better predictions.
Running ad hoc queries using Amazon Athena
Amazon Athena is a serverless query service that makes it easy to analyze large amounts of data stored in Amazon S3 using standard SQL. Athena is useful for examining data and collecting statistics or informative summaries about data. You can also use the powerful analytic functions of Presto, as described in the topic Aggregate Functions of Presto in the Presto documentation.
With the Data Pipeline scheduled activity, recent CSV data is always located in S3 so that you can run ad hoc queries against the data using Amazon Athena. I show this with example SQL statements following. For an in-depth description of this process, see the post Interactive SQL Queries for Data in Amazon S3 on the AWS News Blog.
Creating an Amazon Athena table and running it
You can simply create an EXTERNAL table for the CSV data on S3 in the Amazon Athena console.
=== Table Creation ===
CREATE EXTERNAL TABLE datasource (
age int,
job string,
marital string ,
education string,
default string,
housing string,
loan string,
contact string,
month string,
day_of_week string,
duration int,
campaign int,
pdays int ,
previous int ,
poutcome string,
emp_var_rate double,
cons_price_idx double,
cons_conf_idx double,
euribor3m double,
nr_employed double,
y int
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n'
LOCATION 's3://<your bucket name>/<datasource path>/';
The following query calculates the correlation coefficient between the target attribute and other attributes using Amazon Athena.
=== Sample Query ===
SELECT corr(age,y) AS correlation_age_and_target,
corr(duration,y) AS correlation_duration_and_target,
corr(campaign,y) AS correlation_campaign_and_target,
corr(contact,y) AS correlation_contact_and_target
FROM ( SELECT age , duration , campaign , y ,
CASE WHEN contact = 'telephone' THEN 1 ELSE 0 END AS contact
FROM datasource
) datasource ;
Conclusion
In this post, I introduce an example of how to analyze data in DynamoDB by using table data in Amazon S3 to optimize DynamoDB table read capacity. You can then use the analyzed data as a new data source to train an Amazon SageMaker model for accurate real-time prediction. In addition, you can run ad hoc queries against the data on S3 using Amazon Athena. I also present how to automate these procedures by using Data Pipeline.
You can adapt this example to your specific use case, and hopefully this post helps you accelerate your development. You can find more examples and use cases for Amazon SageMaker in the video AWS 2017: Introducing Amazon SageMaker on the AWS website.
Yong Seong Lee is a Cloud Support Engineer for AWS Big Data Services. He is interested in every technology related to data/databases and helping customers who have difficulties in using AWS services. His motto is “Enjoy life, be curious and have maximum experience.”
Last year’s haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.
Lighter catches have been blamed on changing sea temperatures, which impede the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.
Wholesale prices of flying squid have climbed as a result. Last year’s average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
This post courtesy of Giedrius Praspaliauskas, AWS Solutions Architect
Even with the best IVR systems, customers get frustrated. What if you knew that 10 callers in your Amazon Connect contact flow were likely to say “Agent!” in frustration in the next 30 seconds? Would you like to get to them before that happens? What if your bot was smart enough to admit, “I’m sorry this isn’t helping. Let me find someone for you.”?
Setting up a Lambda function for sentiment analysis
There are multiple natural language and text processing frameworks or services available to use with Lambda, including but not limited to Amazon Comprehend, TextBlob, Pattern, and NLTK. Pick one based on the nature of your system: the type of interaction, languages supported, and so on. For this post, I picked Amazon Comprehend, which uses natural language processing (NLP) to extract insights and relationships in text.
The walkthrough in this post is just an example. In a full-scale implementation, you would likely implement a more nuanced approach. For example, you could keep the overall sentiment score through the conversation and act only when it reaches a certain threshold. It is worth noting that this Lambda function is not called for missed utterances, so there may be a gap between what is being analyzed and what was actually said.
The Lambda function is straightforward. It analyses the inputTranscript field of the Amazon Lex event. Based on the overall sentiment value, it generates a response message with next-step instructions. When the sentiment is neutral, positive, or mixed, the response leaves it to Amazon Lex to decide what the next steps should be. It adds the overall sentiment value to the response as an additional session attribute, along with the slot values received as input.
When the overall sentiment is negative, the function returns the dialog action, pointing to an escalation intent (specified in the environment variable ESCALATION_INTENT_NAME) or returns the fulfillment closure action with a failure state when the intent is not specified. In addition to actions or intents, the function returns a message, or prompt, to be provided to the customer before taking the next step. Based on the returned action, Amazon Connect can select the appropriate next step in a contact flow.
For this walkthrough, you create a Lambda function using the AWS Management Console:
Open the Lambda console.
Choose Create Function.
Choose Author from scratch (no blueprint).
For Runtime, choose Python 3.6.
For Role, choose Create a custom role. The custom execution role allows the function to detect sentiments, create a log group, stream log events, and store the log events.
Enter the following values:
For Role Description, enter Lambda execution role permissions.
For IAM Role, choose Create an IAM role.
For Role Name, enter LexSentimentAnalysisLambdaRole.
Copy/paste the following code to the editor window
import os, boto3

ESCALATION_INTENT_MESSAGE="Seems that you are having troubles with our service. Would you like to be transferred to the associate?"
FULFILMENT_CLOSURE_MESSAGE="Seems that you are having troubles with our service. Let me transfer you to the associate."

escalation_intent_name = os.getenv('ESCALATION_INTENT_NAME', None)

client = boto3.client('comprehend')

def lambda_handler(event, context):
    sentiment = client.detect_sentiment(Text=event['inputTranscript'], LanguageCode='en')['Sentiment']
    if sentiment == 'NEGATIVE':
        if escalation_intent_name:
            result = {
                "sessionAttributes": {
                    "sentiment": sentiment
                },
                "dialogAction": {
                    "type": "ConfirmIntent",
                    "message": {
                        "contentType": "PlainText",
                        "content": ESCALATION_INTENT_MESSAGE
                    },
                    "intentName": escalation_intent_name
                }
            }
        else:
            result = {
                "sessionAttributes": {
                    "sentiment": sentiment
                },
                "dialogAction": {
                    "type": "Close",
                    "fulfillmentState": "Failed",
                    "message": {
                        "contentType": "PlainText",
                        "content": FULFILMENT_CLOSURE_MESSAGE
                    }
                }
            }
    else:
        result = {
            "sessionAttributes": {
                "sentiment": sentiment
            },
            "dialogAction": {
                "type": "Delegate",
                "slots": event["currentIntent"]["slots"]
            }
        }
    return result
Below the code editor, specify the environment variable ESCALATION_INTENT_NAME with a value of Escalate.
Click on Save in the top right of the console.
Now you can test your function.
Click Test at the top of the console.
Configure a new test event using the following test event JSON:
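The original test event isn’t reproduced here; a minimal event of the shape the function expects (the slot names follow the BookHotel sample intent and are assumptions) looks like this:

{
  "messageVersion": "1.0",
  "invocationSource": "DialogCodeHook",
  "inputTranscript": "I would like to book a hotel",
  "sessionAttributes": {},
  "currentIntent": {
    "name": "BookHotel",
    "slots": {
      "RoomType": null,
      "CheckInDate": null,
      "Nights": null,
      "Location": null
    },
    "confirmationStatus": "None"
  }
}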
This message should return a response from Lambda with a sentiment session attribute of NEUTRAL.
However, if you change the input to “This is garbage!”, Lambda changes the dialog action to the escalation intent specified in the environment variable ESCALATION_INTENT_NAME.
Setting up Amazon Lex
Now that you have your Lambda function running, it is time to create the Amazon Lex bot. Use the BookTrip sample bot and call it BookSomething. The IAM role is automatically created on your behalf. Indicate that this bot is not subject to COPPA, and choose Create. A few minutes later, the bot is ready.
Make the following changes to the default configuration of the bot:
Add an intent with no associated slots. Name it Escalate.
Specify the Lambda function for initialization and validation in the existing two intents (“BookCar” and “BookHotel”), at the same time giving Amazon Lex permission to invoke it.
Leave the other configuration settings as they are and save the intents.
You are ready to build and publish this bot. Set a new alias, BookSomethingWithSentimentAnalysis. When the build finishes, test it.
After your Amazon Connect instance is created, you need to integrate the Amazon Lex bot created in the previous step. For more information, see the Amazon Lex section in the Configuring Your Amazon Connect Instance topic. You may also want to look at the excellent post by Randall Hunt, New – Amazon Connect and Amazon Lex Integration.
Create a new contact flow, “Sentiment analysis walkthrough”:
Log in to the Amazon Connect instance.
Choose Create contact flow, Create transfer to agent flow.
Add a Get customer input block, open its settings (the icon in the top left corner), and specify your Amazon Lex bot and its intents.
Select the Text to speech audio prompt type and enter text for Amazon Connect to play at the beginning of the dialog.
Choose Amazon Lex, enter your Amazon Lex bot name and the alias.
Specify the intents to be used as dialog branches that a customer can choose: BookHotel, BookCar, or Escalate.
Add two Play prompt blocks and connect them to the customer input block.
If the BookHotel or BookCar intent is returned from the bot, play the corresponding prompt (“OK, will book it for you”) and initiate booking (in this walkthrough, just hang up after the prompt).
However, if the Escalate intent is returned (triggered by the sentiment analysis results in the bot), play the prompt (“OK, transferring to an agent”) and initiate the transfer.
Save and publish the contact flow.
As a result, you have a contact flow with a single customer input step and a text-to-speech prompt that uses the Amazon Lex bot. You expect one of three intents to be returned: BookHotel, BookCar, or Escalate.
Edit a phone number to associate it with the contact flow that you just created. It is now ready for testing. Call the phone number and check how your contact flow works.
Cleanup
Don’t forget to delete all the resources created during this walkthrough to avoid incurring any more costs:
Amazon Connect instance
Amazon Lex bot
Lambda function
IAM role LexSentimentAnalysisLambdaRole
Summary
In this walkthrough, you implemented sentiment analysis with a Lambda function. The function can be integrated into Amazon Lex and, as a result, into Amazon Connect. This approach gives you the flexibility to analyze user input and then act. You may find the following potential use cases of this approach to be of interest:
Extend the Lambda function to identify “hot” topics in the user input even if the sentiment is not negative and take action proactively. For example, switch to an escalation intent if a user mentioned “where is my order,” which may signal potential frustration.
Use Amazon Connect Streams to pass the sentiment analysis results to the agent along with the call transfer, enabling service tailored to particular customer needs and sentiments.
Route calls to agents based on both skill set and sentiment.
Prioritize calls based on sentiment using multiple Amazon Connect queues instead of transferring directly to an agent.
Monitor quality and flag for review contact flows that result in high overall negative sentiment.
Implement sentiment and AI/ML based call analysis, such as a real-time recommendation engine. For more details, see Machine Learning on AWS.
If you have questions or suggestions, please comment below.
You can enable continuous backups with a single click in the AWS Management Console, a simple API call, or with the AWS Command Line Interface (CLI). DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days. We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict. You can still keep your on-demand backups for as long as needed for archival purposes but PITR works as additional insurance against accidental loss of data. Let’s see how this works.
Continuous Backup
To enable this feature in the console, we navigate to our table and select the Backups tab. From there, simply click Enable to turn on the feature. I could also turn on continuous backups via the UpdateContinuousBackups API call. After continuous backups are enabled, we should be able to see an Earliest restore date and a Latest restore date.
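For reference, a minimal sketch of doing the same thing from code with boto3 looks like this (the table name matches the example below; treat the snippet as illustrative):

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on point-in-time recovery for the table.
dynamodb.update_continuous_backups(
    TableName="VerySuperImportantTable",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Inspect the earliest and latest restorable times.
backups = dynamodb.describe_continuous_backups(TableName="VerySuperImportantTable")
print(backups["ContinuousBackupsDescription"]["PointInTimeRecoveryDescription"])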
Let’s imagine a scenario where I have a lot of old user profiles that I want to delete.
I really only want to send service updates to our active users based on their last_update date. I decided to write a quick Python script to delete all the users that haven’t used my service in a while.
import boto3

table = boto3.resource("dynamodb").Table("VerySuperImportantTable")

# Scan for the profiles to remove (only the key attribute is needed).
items = table.scan(
    FilterExpression="last_update >= :date",
    ExpressionAttributeValues={":date": "2014-01-01T00:00:00"},
    ProjectionExpression="ImportantId"
)['Items']

print("Deleting {} Items! Dangerous.".format(len(items)))

# Delete the matching items in batches.
with table.batch_writer() as batch:
    for item in items:
        batch.delete_item(Key=item)
Great! This should delete all those pesky non-users of my service who haven't logged in since 2013. So… CTRL+C CTRL+C CTRL+C CTRL+C (interrupt the currently executing command).
Yikes! Do you see where I went wrong? I’ve just deleted my most important users! Oh, no! Where I had a greater-than sign, I meant to put a less-than! Quick, before Jeff Barr can see, I’m going to restore the table. (I probably could have prevented that typo with Boto 3’s handy DynamoDB conditions: Attr("last_update").lt("2014-01-01T00:00:00"))
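For what it's worth, a version of the scan using that condition object might look like this sketch (same table and attribute names as above):

from boto3.dynamodb.conditions import Attr
import boto3

table = boto3.resource("dynamodb").Table("VerySuperImportantTable")

# A condition object makes the comparison explicit: lt() is unambiguously
# "last_update earlier than the cutoff date".
items = table.scan(
    FilterExpression=Attr("last_update").lt("2014-01-01T00:00:00"),
    ProjectionExpression="ImportantId",
)["Items"]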
Restoring
Luckily for me, restoring a table is easy. In the console I’ll navigate to the Backups tab for my table and click Restore to point-in-time.
I’ll specify the time (a few seconds before I started my deleting spree) and a name for the table I’m restoring to.
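The console steps above map to a single API call; a minimal boto3 sketch (the timestamp and target table name are placeholders) looks like this:

from datetime import datetime, timezone
import boto3

dynamodb = boto3.client("dynamodb")

# Restore to a new table from a moment just before the deleting spree began
# (the timestamp and target table name here are illustrative).
dynamodb.restore_table_to_point_in_time(
    SourceTableName="VerySuperImportantTable",
    TargetTableName="VerySuperImportantTable-restored",
    RestoreDateTime=datetime(2018, 3, 26, 14, 5, 0, tzinfo=timezone.utc),
)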
For a relatively small and evenly distributed table like mine, the restore is quite fast.
The time it takes to restore a table varies based on multiple factors, and restore times are not necessarily correlated with the size of the table. If your dataset is evenly distributed across your primary keys, you'll be able to take advantage of parallelization, which will speed up your restores.
Learn More & Try It Yourself
There's plenty more to learn about this new feature in the documentation here.
Pricing for continuous backups varies by region and is based on the current size of the table and all indexes.
A few things to note:
PITR works with encrypted tables.
If you disable PITR and later reenable it, you reset the start time from which you can recover.
Just like on-demand backups, there are no performance or availability impacts to enabling this feature.
Stream settings, Time To Live settings, PITR settings, tags, Amazon CloudWatch alarms, and auto scaling policies are not copied to the restored table.
Jeff, it turns out, knew I restored the table all along because every PITR API call is recorded in AWS CloudTrail.
Let us know how you’re going to use continuous backups and PITR on Twitter and in the comments. – Randall
Security updates have been issued by Arch Linux (bchunk, thunderbird, and xerces-c), Debian (freeplane, icu, libvirt, and net-snmp), Fedora (monitorix, php-simplesamlphp-saml2, php-simplesamlphp-saml2_1, php-simplesamlphp-saml2_3, puppet, and qt5-qtwebengine), openSUSE (curl, libmodplug, libvorbis, mailman, nginx, opera, python-paramiko, and samba, talloc, tevent), Red Hat (python-paramiko, rh-maven35-slf4j, rh-mysql56-mysql, rh-mysql57-mysql, rh-ruby22-ruby, rh-ruby23-ruby, and rh-ruby24-ruby), Slackware (thunderbird), SUSE (clamav, kernel, memcached, and php53), and Ubuntu (samba and tiff).
Security updates have been issued by Debian (adminer, isc-dhcp, kamailio, libvorbisidec, plexus-utils2, and simplesamlphp), Fedora (exim and glibc-arm-linux-gnu), Mageia (sqlite3), openSUSE (Chromium, kernel, and qemu), SUSE (memcached), and Ubuntu (sharutils).
Before our beloved SpaceDave left the Raspberry Pi Foundation to join the ranks of the European Space Agency (ESA) — and no, we’re still not jealous *ahem* — he kindly drafted us one final blog post about the Astro Pi upgrades heading to the International Space Station today! So here it is. Enjoy!
We are very excited to announce that Astro Pi upgrades are on their way to the International Space Station! Back in September, we blogged about a small payload being launched to the International Space Station to upgrade the capabilities of our Astro Pi units.
Sneak peek
For the longest time, the payload was scheduled to launch on SpaceX CRS-14 in February. However, the launch was delayed to April, which impacted the flight operations we had planned for running Mission Space Lab student experiments.
To avoid this, ESA had the payload transferred to Russian Soyuz MS-08 (54S), which is launching today to carry crew members Oleg Artemyev, Andrew Feustel, and Ricky Arnold to the ISS.
You can watch coverage of the launch on NASA TV from 4.30pm GMT this afternoon, with the launch scheduled for 5.44pm GMT. Check the NASA TV schedule for updates.
The upgrades
The pictures below show the flight hardware in its final configuration before loading onto the launch vehicle.
All access
With the wireless dongle, the Astro Pi units can be deployed in ISS locations other than the Columbus module, where they don’t have access to an Ethernet switch.
We are also sending some flexible optical filters. These are made from the same material as the blue square which is shipped with the Raspberry Pi NoIR Camera Module.
#bluefilter
We're also including some 32GB micro SD cards to replace the current 8GB cards, so that future Astro Pi code will need to command fewer windows to download Earth observation imagery to the ground.
More space in space
The items above are enclosed in a large 8″ ziplock bag that has been designated the “AstroPi Kit”.
It’s ziplock bags all the way up
Once the Soyuz docks with the ISS, this payload is one of the first which will be unpacked, so that the Astro Pi units can be upgraded and deployed ready to run your experiments!
More Astro Pi
Stay tuned for our next update in April, when student code is set to be run on the Astro Pi units as part of our Mission Space Lab programme. And to find out more about Astro Pi, head to the programme website.
Daniel Stone begins a series on how the Linux graphics stack has improved in recent times. “This has made mainline Linux much more attractive: the exact same generic codebases of GNOME and Weston that I’m using to write this blog post on an Intel laptop run equally well on AMD workstations, low-power NXP boards destined for in-flight entertainment, and high-end Renesas SoCs which might well be in your car. Now that the drivers are easy to write, and applications are portable, we’ve seen over ten new DRM drivers merged to the upstream kernel since atomic modesetting was merged.”
A customer has been successfully creating and running multiple Amazon Elasticsearch Service (Amazon ES) domains to support their business users’ search needs across products, orders, support documentation, and a growing suite of similar needs. The service has become heavily used across the organization. This led to some domains running at 100% capacity during peak times, while others began to run low on storage space. Because of this increased usage, the technical teams were in danger of missing their service level agreements. They contacted me for help.
This post shows how you can set up automated alarms to warn when domains need attention.
Solution overview
Amazon ES is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time analytics capabilities along with the availability, scalability, and security that production workloads require. The service offers built-in integrations with a number of other components and AWS services, enabling customers to go from raw data to actionable insights quickly and securely.
One of these other integrated services is Amazon CloudWatch. CloudWatch is a monitoring service for AWS Cloud resources and the applications that you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
CloudWatch collects metrics for Amazon ES. You can use these metrics to monitor the state of your Amazon ES domains, and set alarms to notify you about high utilization of system resources. For more information, see Amazon Elasticsearch Service Metrics and Dimensions.
While the metrics are automatically collected, the missing piece is how to set alarms on these metrics at appropriate levels for each of your domains. This post includes sample Python code to evaluate the current state of your Amazon ES environment, and to set up alarms according to AWS recommendations and best practices.
There are two components to the sample solution:
es-check-cwalarms.py: This Python script checks the CloudWatch alarms that have been set, for all Amazon ES domains in a given account and region.
es-create-cwalarms.py: This Python script sets up a set of CloudWatch alarms for a single given domain.
The sample code can also be found in the amazon-es-check-cw-alarms GitHub repo. The scripts are easy to extend or combine, as described in the section “Extensions and Adaptations”.
Assessing the current state
The first script, es-check-cwalarms.py, is used to give an overview of the configurations and alarm settings for all the Amazon ES domains in the given region. The script takes the following parameters:
python es-checkcwalarms.py -h
usage: es-checkcwalarms.py [-h] [-e ESPREFIX] [-n NOTIFY] [-f FREE] [-p PROFILE] [-r REGION]
Checks a set of recommended CloudWatch alarms for Amazon Elasticsearch Service domains (optionally, those beginning with a given prefix).
optional arguments:
-h, --help show this help message and exit
-e ESPREFIX, --esprefix ESPREFIX Only check Amazon Elasticsearch Service domains that begin with this prefix.
-n NOTIFY, --notify NOTIFY List of CloudWatch alarm actions; e.g. ['arn:aws:sns:xxxx']
-f FREE, --free FREE Minimum free storage (MB) on which to alarm
-p PROFILE, --profile PROFILE IAM profile name to use
-r REGION, --region REGION AWS region for the domain. Default: us-east-1
The script first identifies all the domains in the given region (or, optionally, limits them to the subset that begins with a given prefix). It then starts running a set of checks against each one.
The script can be run from the command line or set up as a scheduled Lambda function. For example, for one customer, it was deemed appropriate to regularly run the script to check that alarms were correctly set for all domains. In addition, because configuration changes—cluster size increases to accommodate larger workloads being a common change—might require updates to alarms, this approach allowed the automatic identification of alarms no longer appropriately set as the domain configurations changed.
The output shown below is the output for one domain in my account.
Starting checks for Elasticsearch domain iotfleet , version is 53
Iotfleet Automated snapshot hour (UTC): 0
Iotfleet Instance configuration: 1 instances; type:m3.medium.elasticsearch
Iotfleet Instance storage definition is: 4 GB; free storage calced to: 819.2 MB
iotfleet Desired free storage set to (in MB): 819.2
iotfleet WARNING: Not using VPC Endpoint
iotfleet WARNING: Does not have Zone Awareness enabled
iotfleet WARNING: Instance count is ODD. Best practice is for an even number of data nodes and zone awareness.
iotfleet WARNING: Does not have Dedicated Masters.
iotfleet WARNING: Neither index nor search slow logs are enabled.
iotfleet WARNING: EBS not in use. Using instance storage only.
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.yellow-Alarm ClusterStatus.yellow
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.red-Alarm ClusterStatus.red
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-CPUUtilization-Alarm CPUUtilization
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-JVMMemoryPressure-Alarm JVMMemoryPressure
iotfleet WARNING: Missing alarm!! ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0)
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-AutomatedSnapshotFailure-Alarm AutomatedSnapshotFailure
iotfleet Alarm: Threshold does not match: Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm Should be: 819.2 ; is 3000.0
The output messages fall into the following categories:
System overview, Informational: The Amazon ES version and configuration, including instance type and number, storage, automated snapshot hour, etc.
Free storage: A calculation of the appropriate amount of free storage, based on the recommended 20% of total storage. For example, the 4 GB of instance storage shown above yields 4096 MB × 0.20 = 819.2 MB, matching the output.
Warnings: Best practices that are not being followed for this domain. (For more about this, read on.)
Alarms: An assessment of the CloudWatch alarms currently set for this domain, against a recommended set.
The script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. Using the array allows alarm parameters (such as free space) to be updated within the code based on current domain statistics and configurations.
For a given domain, the script checks if each alarm has been set. If the alarm is set, it checks whether the values match those in the array esAlarms. In the output above, you can see three different situations being reported:
Alarm ok; definition matches. The alarm set for the domain matches the settings in the array.
Alarm: Threshold does not match. An alarm exists, but the threshold value at which the alarm is triggered does not match.
WARNING: Missing alarm!! The recommended alarm is missing.
All in all, the list above shows that this domain does not have a configuration that adheres to best practices, nor does it have all the recommended alarms.
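To make the comparison concrete, here is a hypothetical sketch of an alarm-definition array in the same spirit, mirroring the tuple format shown in the warning output above (metric, statistic, period, evaluation periods, comparison operator, threshold), together with a check against the alarms currently configured. The exact structure used in the published scripts may differ.

import boto3

# Hypothetical recommended-alarm definitions; values are illustrative.
esAlarms = [
    ("ClusterStatus.yellow", "Maximum", 60, 5, "GreaterThanOrEqualToThreshold", 1.0),
    ("ClusterStatus.red", "Maximum", 60, 5, "GreaterThanOrEqualToThreshold", 1.0),
    ("ClusterIndexWritesBlocked", "Maximum", 60, 5, "GreaterThanOrEqualToThreshold", 1.0),
    ("FreeStorageSpace", "Minimum", 60, 5, "LessThanOrEqualToThreshold", 819.2),
]

cloudwatch = boto3.client("cloudwatch")

def check_domain_alarms(domain, account_id):
    # Amazon ES metrics live in the AWS/ES namespace, keyed by the domain
    # name and the account ID (ClientId).
    for metric, stat, period, evals, operator, threshold in esAlarms:
        existing = cloudwatch.describe_alarms_for_metric(
            MetricName=metric,
            Namespace="AWS/ES",
            Dimensions=[
                {"Name": "DomainName", "Value": domain},
                {"Name": "ClientId", "Value": account_id},
            ],
        )["MetricAlarms"]
        if not existing:
            print("{} WARNING: Missing alarm!! {}".format(domain, metric))
        elif all(alarm["Threshold"] == threshold for alarm in existing):
            print("{} Alarm ok; definition matches. {}".format(domain, metric))
        else:
            print("{} Alarm: Threshold does not match for {}".format(domain, metric))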
Setting up alarms
Now that you know that the domains in their current state are missing critical alarms, you can correct the situation.
To demonstrate the script, set up a new domain named “ver”, in us-west-2. Specify 1 node, and a 10-GB EBS disk. Also, create an SNS topic in us-west-2 with a name of “sendnotification”, which sends you an email.
Run the second script, es-create-cwalarms.py, from the command line. This script creates (or updates) the desired CloudWatch alarms for the specified Amazon ES domain, “ver”.
python es-create-cwalarms.py -r us-west-2 -e test -c ver -n "['arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification']"
EBS enabled: True type: gp2 size (GB): 10 No Iops 10240 total storage (MB)
Desired free storage set to (in MB): 2048.0
Creating Test-Elasticsearch-ver-ClusterStatus.yellow-Alarm
Creating Test-Elasticsearch-ver-ClusterStatus.red-Alarm
Creating Test-Elasticsearch-ver-CPUUtilization-Alarm
Creating Test-Elasticsearch-ver-JVMMemoryPressure-Alarm
Creating Test-Elasticsearch-ver-FreeStorageSpace-Alarm
Creating Test-Elasticsearch-ver-ClusterIndexWritesBlocked-Alarm
Creating Test-Elasticsearch-ver-AutomatedSnapshotFailure-Alarm
Successfully finished creating alarms!
As with the first script, this script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. This approach allows you to add or modify alarms based on your use case (more on that below).
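As a rough illustration of what the script does for each entry in that array, the following sketch creates one of the alarms listed above for the “ver” domain with boto3. The ClientId (account ID) dimension is a placeholder, and the published script derives the threshold from the domain's storage configuration rather than hard-coding it.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

# Create (or overwrite) the free-storage alarm for the "ver" domain.
cloudwatch.put_metric_alarm(
    AlarmName="Test-Elasticsearch-ver-FreeStorageSpace-Alarm",
    Namespace="AWS/ES",
    MetricName="FreeStorageSpace",
    Dimensions=[
        {"Name": "DomainName", "Value": "ver"},
        {"Name": "ClientId", "Value": "123456789012"},  # placeholder account ID
    ],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=5,
    ComparisonOperator="LessThanOrEqualToThreshold",
    Threshold=2048.0,  # 20% of the 10-GB EBS volume, in MB
    AlarmActions=["arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification"],
)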
After running the script, navigate to Alarms on the CloudWatch console. You can see the set of alarms set up on your domain.
Because the “ver” domain has only a single node, cluster status is yellow, and that alarm is in an “ALARM” state. It’s already sent a notification that the alarm has been triggered.
In most cases, the alarm triggers due to an increased workload. The likely action is to reconfigure the system to handle the increased workload, rather than reducing the incoming workload. Reconfiguring any backend store—a category of systems that includes Elasticsearch—is best performed when the system is quiescent or lightly loaded. Reconfigurations such as setting zone awareness or modifying the disk type cause Amazon ES to enter a “processing” state, potentially disrupting client access.
Other changes, such as increasing the number of data nodes, may cause Elasticsearch to begin moving shards, potentially impacting search performance on these shards while this is happening. These actions should be considered in the context of your production usage. For the same reason I also do not recommend running a script that resets all domains to match best practices.
Avoid the need to reconfigure during heavy workload by setting alarms at a level that allows a considered approach to making the needed changes. For example, if you identify that each weekly peak is increasing, you can reconfigure during a weekly quiet period.
While Elasticsearch can be reconfigured without being quiesced, it is not a best practice to automatically scale it up and down based on usage patterns. Unlike some other AWS services, I recommend against setting a CloudWatch action that automatically reconfigures the system when alarms are triggered.
There are other situations where the planned reconfiguration approach may not work, such as low or zero free disk space causing the domain to reject writes. If the business is dependent on the domain continuing to accept incoming writes and deleting data is not an option, the team may choose to reconfigure immediately.
Extensions and adaptations
You may wish to modify the best practices encoded in the scripts for your own environment or workloads. It’s always better to avoid situations where alerts are generated but routinely ignored. All alerts should trigger a review and one or more actions, either immediately or at a planned date. The following is a list of common situations where you may wish to set different alarms for different domains:
Dev/test vs. production: You may have a different set of configuration rules and alarms for your dev and test environments than for production. For example, you may require zone awareness and dedicated masters for your production environment, but not for your development domains. Or, you may not have any alarms set in dev. For test environments that mirror your potential peak load, test to ensure that the alarms are appropriately triggered.
Differing workloads or SLAs for different domains: You may have one domain with a requirement for superfast search performance, and another domain with a heavy ingest load that tolerates slower search response. Your reaction to slow response for these two workloads is likely to be different, so perhaps the thresholds for these two domains should be set at a different level. In this case, you might add a “max CPU utilization” alarm at 100% for 1 minute for the fast search domain, while the other domain only triggers an alarm when the average has been higher than 60% for 5 minutes. You might also add a “free space” rule with a higher threshold to reflect the need for more space for the heavy ingest load if there is danger that it could fill the available disk quickly.
“Normal” alarms versus “emergency” alarms: If, for example, free disk space drops to 25% of total capacity, an alarm is triggered that indicates action should be taken as soon as possible, such as cleaning up old indexes or reconfiguring at the next quiet period for this domain. However, if free space drops below a critical level (20% free space), action must be taken immediately in order to prevent Amazon ES from setting the domain to read-only. Similarly, if the “ClusterIndexWritesBlocked” alarm triggers, the domain has already stopped accepting writes, so immediate action is needed. In this case, you may wish to set “laddered” alarms, where one threshold causes an alarm to be triggered to review the current workload for a planned reconfiguration, but a different threshold raises a “DefCon 3” alarm that immediate action is required (see the sketch after this list).
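To make these distinctions concrete, here is a hypothetical sketch of per-domain overrides in the same tuple format used earlier; all domain names, thresholds, and periods are illustrative only.

# Hypothetical per-domain overrides, layered on top of the default alarm
# definitions; domain names and values are illustrative.
domain_alarm_overrides = {
    # Fast-search domain: alarm if CPU hits 100% for even one minute.
    "fast-search-prod": [
        ("CPUUtilization", "Maximum", 60, 1, "GreaterThanOrEqualToThreshold", 100.0),
    ],
    # Heavy-ingest domain: tolerate slower search, watch sustained CPU, and
    # use "laddered" free-space alarms (review at 25% free, act at 20%).
    "heavy-ingest-prod": [
        ("CPUUtilization", "Average", 60, 5, "GreaterThanOrEqualToThreshold", 60.0),
        ("FreeStorageSpace", "Minimum", 60, 5, "LessThanOrEqualToThreshold", 2560.0),
        ("FreeStorageSpace", "Minimum", 60, 5, "LessThanOrEqualToThreshold", 2048.0),
    ],
    # Dev domains: no alarms at all.
    "dev-sandbox": [],
}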
The sample scripts provided here are a starting point, intended for you to adapt to your own environment and needs.
Running the scripts one time can identify how far your current state is from your desired state, and create an initial set of alarms. Regularly re-running these scripts can capture changes in your environment over time and adjust your alarms to match your evolving configurations. One customer has set them up to run nightly, and to automatically create and update alarms to match their preferred settings.
Removing unwanted alarms
Each CloudWatch alarm costs approximately $0.10 per month. You can remove unwanted alarms in the CloudWatch console, under Alarms. If you set up a “ver” domain above, remember to remove it to avoid continuing charges.
Conclusion
Setting CloudWatch alarms appropriately for your Amazon ES domains can help you avoid suboptimal performance and allow you to respond to workload growth or configuration issues well before they become urgent. This post gives you a starting point for doing so. The additional sleep you’ll get knowing you don’t need to be concerned about Elasticsearch domain performance will allow you to focus on building creative solutions for your business and solving problems for your customers.
Dr. Veronika Megler is a senior consultant at Amazon Web Services. She works with our customers to implement innovative big data, AI and ML projects, helping them accelerate their time-to-value when using AWS.
Security updates have been issued by Debian (freexl and simplesamlphp), Fedora (krb5, libvirt, php-phpmyadmin-motranslator, php-phpmyadmin-sql-parser, and phpMyAdmin), Mageia (krb5, leptonica, and libvirt), Slackware (dhcp and ntp), and Ubuntu (isc-dhcp).
Today, AWS made it easier to use the AWS Command Line Interface (CLI) to manage services in your AWS accounts. Now you can sign into the AWS Single Sign-On (AWS SSO) user portal using your existing corporate credentials, choose an AWS account and a specific permission set, and get temporary credentials to manage your AWS services through the AWS CLI.
AWS SSO is a service that enables you to centrally manage single sign-on access to multiple AWS accounts and business applications. AWS temporary security credentials are an easy way to get short-term credentials to manage your AWS services through the AWS CLI or a programmatic client.
Previously, when you issued commands from the CLI to access resources in each of several AWS accounts, you had to remember the password for each account, sign in to each AWS account individually, and fetch the credentials for each account one at a time. Now, AWS SSO eliminates the need to sign in to each AWS account individually to get temporary credentials. Instead, you can sign in to the AWS SSO user portal once using your existing corporate credentials and then fetch temporary credentials for any of your authorized AWS accounts to use with the AWS CLI to access the resources in that account, limited by the permissions granted to you.
In this blog post, I’ll show how to fetch temporary credentials from the AWS SSO user portal to use with the AWS CLI to access resources in your AWS accounts. First, I’ll show you how to obtain short-term credentials for any account for a permission set for which you are authorized. Next, I’ll show you three ways to use these credentials.
For this scenario, let’s say I am an administrator at “AnyCompany” and I want to list instances in two AWS accounts by using the AWS CLI command, aws ec2 describe-instances. “AnyCompany” has enabled access to AWS accounts through AWS SSO.
Prerequisites
You need to install the AWS CLI to use this feature. You also need to configure AWS SSO, connect a corporate directory, and grant access to users or groups to access AWS accounts with permission sets. To learn more, see “Introducing AWS Single Sign-On.”
How to access resources in your AWS accounts by using AWS SSO and the AWS CLI
1. Sign in to the AWS SSO user portal using your corporate credentials. If you don’t know the URL of your AWS SSO user portal, ask your IT administrator. This URL can be found in the AWS SSO console, in the Dashboard menu, under the “User portal URL” section. In the user portal, you will see the AWS accounts to which you have been granted access.
2. Choose “AWS Account” to expand the list of AWS accounts.
3. Choose the AWS account that you want to access using the AWS CLI. This expands the list of permission sets in the account that you can use to access the account. For this example, I choose the “Administrator” permission set, which has the necessary permissions for this walkthrough. I then choose “Command line or programmatic access” associated with the “Administrator” permission set.
4. AWS SSO shows the credentials you requested in the appropriate format for your operating system. If you need credentials for an operating system that is different from the one shown, you can switch between the “MacOS and Linux” and “Windows” tabs. AWS SSO offers three options to use the temporary security credentials (these credentials are valid for up to 60 minutes; see the following screenshot for examples of each option):
a. To run commands from the AWS CLI against the selected AWS account, copy the commands in the “Setup AWS CLI environment variables” section and paste the commands in the terminal window to set the necessary environment variables. These environment variables will be effective in the current terminal window.
b. To run commands from multiple terminal windows against the same AWS account, copy the profile in the “Setup AWS CLI profile” section to set up a new named profile in your AWS credentials file. To learn more, see “Configuration and Credential Files.” You will then be able to use the --profile option with your AWS CLI command to use this credential. This will be effective in all terminal windows that use the same credentials file.
c. To access AWS resources from an AWS service client, use the credentials under the “Copy individual values” section to initialize your client (see the sketch after these steps). For more information, see the “Use the temporary credentials to access AWS resources” section in “Getting Temporary Credentials with AWS STS.”
5. Move your mouse over the option from which you want to copy credentials. I chose the first option.
6. I have copied, pasted, and run the AWS CLI environment variables commands in my terminal window:
7. Optionally, you can verify that the credentials are set up correctly by running the “aws configure list” command. Verify that the access_key and secret_key have values assigned.
8. Now you can run any applicable AWS CLI commands (based on the permission set granted to you by your administrator). In the following example, I list instances in my AWS account.
9. To run the same (or different) AWS CLI command against a different AWS account, repeat this process, starting with Step 3. By keeping the AWS SSO user portal open in a browser window, you can easily switch to another AWS account without needing to sign in again. Every time you want to switch between accounts/permission sets or do additional work in an account after the temporary credentials expire, just copy fresh credentials for that account/permission set from the user portal.
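As an illustration of option (c) above, a minimal sketch of initializing an AWS SDK client in Python with the copied temporary credentials might look like the following; the credential values are placeholders, and the region is an assumption for this example.

import boto3

# Initialize a client directly with the temporary credentials copied from
# the "Copy individual values" section (placeholders shown; the credentials
# expire after up to 60 minutes).
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",  # assumed region for this example
    aws_access_key_id="ASIA...",
    aws_secret_access_key="<secret access key>",
    aws_session_token="<session token>",
)

# The same check as the CLI example: list instances in the selected account.
response = ec2.describe_instances()
print(response["Reservations"])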
Conclusion
In this post, I've shown you how to use your existing corporate username and password to get temporary credentials from AWS SSO and manage services using the AWS CLI. If you have questions, please start a new thread in the AWS SSO Forum.
In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA membership and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).
Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:
Life in space: Making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
Life on Earth: Making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.
ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Center in Cologne, has a message for all Mission Space Lab participants:
Flight status
We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!
But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive their experimental data back. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.
Belgium
Flight status achieved:
Team De Vesten, Campus De Vesten, Antwerpen
Ursa Major, CoderDojo Belgium, West-Vlaanderen
Special operations STEM, Sint-Claracollege, Antwerpen
Canada
Flight status achieved:
Let It Grow, Branksome Hall, Toronto
The Dark Side of Light, Branksome Hall, Toronto
Genie On The ISS, Branksome Hall, Toronto
Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
The Broadviewnauts, Broadview, Ottawa
Czech Republic
Flight status achieved:
BLEK, Střední Odborná Škola Blatná, Strakonice
Denmark
Flight status achieved:
2y Infotek, Nærum Gymnasium, Nærum
Equation Quotation, Allerød Gymnasium, Lillerød
Team Weather Watchers, Allerød Gymnasium, Allerød
Space Gardners, Nærum Gymnasium, Nærum
Finland
Flight status achieved:
Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää
France
Flight status achieved:
INC2, Lycée Raoul Follereau, Bourgogne
Space Project SP4, Lycée Saint-Paul IV, Reunion Island
Dresseurs2Python, clg Albert CAMUS, essonne
Lazos, Lycée Aux Lazaristes, Rhone
The space nerds, Lycée Saint André Colmar, Alsace
Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie