Tag Archives: esa

Gaming Giants Highlight the Latest Piracy Threats

Post Syndicated from Ernesto original https://torrentfreak.com/gaming-giants-highlight-the-latest-piracy-threats-191004/

Along with the RIAA and several other industry groups, the Entertainment Software Association (ESA) submitted its overview of “notorious markets” to the Office of the US Trade Representative (USTR) this week.

These submissions serve as input for the USTR’s yearly overview of piracy ‘markets’ which helps to shape the Government’s global copyright enforcement agenda.

The ESA, which represents video game companies including EA, Nintendo, Sony, Take-Two Interactive and Ubisoft, hopes that the interests of its members will be taken into account. In its report, the group lists various pirate sites that allow the public to download games for free.

Torrent sites are among the most significant threats according to the ESA, with The Pirate Bay being a key player. According to the game companies, TPB is a “major source” of copyright infringement that operates “with the assistance” of an unnamed U.S.-based CDN provider.

The less popular Skytorrents is the only other torrent site that’s included, while the list of ‘rogue’ sites also includes the linking sites oceanofgames.com and darkumbra.net, plus the cyberlockers rapidu.net and 1fichier.com.

Pirate sites are not the only rogue actors. A special mention goes out to the so-called bulletproof hosting service FlokiNET. ESA reports that this company ignores its takedown requests, which allows the sites team-xecuter.com and sx.xecuter.co to operate freely.

“FlokiNET is a hosting provider that does not respond to notices of infringement or warning letters concerning their hosting and support of infringing websites. Despite attempts to send notices to FlokiNET’s abuse contacts pursuant to FlokiNET’s Acceptable Use Policy, the notices go ignored,” ESA writes.

These two FlokiNET-hosted sites enable piracy of Nintendo Switch games, and similar sites were previously blocked in the UK.

Finally, the ESA also highlights so-called “pirate servers” or “Grey Shards” that offer free access to subscription-based game services. Cloud-based games are less vulnerable to traditional forms of piracy but these “rogue” services circumvent the technological protection measures.

“When users are diverted to play on such servers, video game publishers are not able to monetize their online content as described above and thus face reduced opportunities to recoup their investment in new distribution platforms,” the ESA notes.

As an example, the ESA lists Firestorm-servers.com and Warmane.com. The latter allows over 20,000 people per day to play World of Warcraft without paying the monthly subscription fee Blizzard requires.

While the purpose of the submission is to identify “notorious markets” that operate outside of the US, ESA frequently mentions that pirate sites are assisted by a US-based CDN provider. The provider in question is not named, but the game companies are clearly referring to Cloudflare.

In a footnote, ESA mentions that CDNs have legitimate purposes, but that they also allow pirate sites to hide their true hosting location, while speeding up file transfers. Roughly half of the highlighted sites work with the unnamed CDN, they note, stressing that this has to stop.

“[I]t is important that all U.S.-based CDNs join ISPs, search engines, payment processors, and advertising services that have successfully collaborated with rights holders in recent years to develop reasonable, voluntary measures to prevent sites focused on copyright infringement from using their services,” ESA writes.

In a few months, the US Trade Representative will use the submissions of the ESA and other parties to compile its final list of piracy havens. The U.S. Government can then alert the countries where ‘rogue’ sites operate, in the hope that local authorities take action.

A copy of ESA’s submission for the 2019 Special 301 Out-of-Cycle Review of Notorious Markets is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Run your code aboard the International Space Station with Astro Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/run-your-code-aboard-the-international-space-station-with-astro-pi/

Each year, the European Astro Pi Challenge allows students and young people in ESA Member States, as well as Slovenia, Canada, and Malta, to write code for their own experiments, which could run on two Raspberry Pi units aboard the International Space Station.

The Astro Pi Challenge is a lot of fun, and it’s about space. So that we in the Raspberry Pi team don’t have to miss out just because we’re adults, many of us mentor our own Astro Pi teams, and you should too!

So, gather your team, stock up on freeze-dried ice cream, and let’s do it again: the European Astro Pi Challenge 2019/2020 launches today!

Luca Parmitano launches the 2019-20 European Astro Pi Challenge

ESA astronaut Luca Parmitano is this year’s ambassador of the European Astro Pi Challenge. In this video, he welcomes students to the challenge and gives an overview of the project. Learn more about Astro Pi: http://bit.ly/AstroPiESA

The European Astro Pi Challenge 2019/2020 is made up of two missions: Mission Zero and Mission Space Lab.

Astro Pi Mission Zero

Mission Zero has been designed for beginners and younger participants up to 14 years old, and can be completed in a single session. It’s great for coding clubs or any group of students who don’t have coding experience but still want to do something cool — because having confirmation that code you wrote has run aboard the International Space Station is really, really cool! Teams write a simple Python program to display a message and temperature reading on an Astro Pi computer, for the astronauts to see as they go about their daily tasks on the ISS. No special hardware or prior coding skills are needed, and all teams that follow the challenge rules are guaranteed to have their programs run in space!
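To give a flavour of what a Mission Zero-style entry involves, here is a minimal, hardware-free sketch. The function names and the stubbed temperature value are illustrative (not the official starter code); on a real Astro Pi, the `sense_hat` library’s `SenseHat` class provides the temperature sensor and LED matrix.

```python
# A minimal sketch of a Mission Zero-style program. The sense_hat library
# only exists on the Astro Pi hardware, so the temperature reading is
# stubbed here to let the logic run anywhere.

def read_temperature():
    """Stand-in for SenseHat().get_temperature(); returns degrees Celsius."""
    return 26.4  # on the ISS this would be a live sensor reading

def build_message(team_name, temperature_c):
    """Compose the scrolling message the astronauts would see."""
    return f"Hello from {team_name}! Temp: {temperature_c:.1f} C"

message = build_message("Team Example", read_temperature())
print(message)

# On the Astro Pi itself, the message would scroll on the LED matrix:
#   from sense_hat import SenseHat
#   SenseHat().show_message(message)
```

The point of splitting the message-building out of the display step is that teams can test their logic on any computer before their code ever reaches the ISS.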

Astro Pi Mission Zero logo

Mission Zero eligibility

  • Participants must be no older than 14 years
  • 2 to 4 people per team
  • Participants must be supervised by a teacher, mentor, or educator, who will be the point of contact with the Astro Pi team
  • Teams must be made up of at least 50% team members who are citizens of an ESA Member State*, or Slovenia, Canada, or Malta

Astro Pi Mission Space Lab

Mission Space Lab is aimed at more experienced/older participants up to 19 years old, and it takes place in 4 phases over the course of 8 months. The challenge is to design and write a program for a scientific experiment to be run on an Astro Pi computer. The best experiments will be deployed to the ISS, and teams will have the opportunity to analyse and report on their results.

Astro Pi Mission Space Lab logo

Mission Space Lab eligibility

  • Participants must be no older than 19 years
  • 2 to 6 people per team
  • Participants must be supervised by a teacher, mentor, or educator, who will be the point of contact with the Astro Pi team
  • Teams must be made up of at least 50% team members who are citizens of an ESA Member State*, or Slovenia, Canada, or Malta

How to plan your Astro Pi Mission Space Lab experiment


For both missions, each member of the team has to be at least one of the following:

  • Enrolled full-time in a primary or secondary school in an ESA Member State, or Slovenia, Canada, or Malta
  • Homeschooled (certified by the National Ministry of Education or delegated authority in an ESA Member State or Slovenia, Canada, or Malta)
  • A member of a club or after-school group (such as Code Club, CoderDojo, or Scouts) located in an ESA Member State*, or Slovenia, Canada, or Malta

Take part

To take part in the European Astro Pi Challenge, head over to the Astro Pi website, where you’ll find more information on how to get started getting your team’s code into SPACE!

Obligatory photo of Raspberry Pis floating in space!

*ESA Member States: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland and the United Kingdom

The post Run your code aboard the International Space Station with Astro Pi appeared first on Raspberry Pi.

European Astro Pi Challenge: Mission Space Lab winners 2018–2019!

Post Syndicated from Olympia Brown original https://www.raspberrypi.org/blog/european-astro-pi-challenge-mission-space-lab-winners-2018-2019/

This is your periodic reminder that there are two Raspberry Pi computers in space! That’s right — our Astro Pi units Ed and Izzy have called the International Space Station home since 2016, and we are proud to work with ESA Education to run the European Astro Pi Challenge, which allows students to conduct scientific investigations in space by writing computer programs.

Astro Pi IR on ISS

An Astro Pi takes photos of the Earth from a window of the International Space Station

The Challenge has two missions: Mission Zero and Mission Space Lab. The more advanced one, Mission Space Lab, invites teams of students and young people under 19 years of age to enter by submitting an idea for a scientific experiment to be run on the Astro Pi units.

ESA and the Raspberry Pi Foundation would like to congratulate all the teams that participated in the European Astro Pi Challenge this year. A record-breaking number of more than 15,000 people, from all 22 ESA Member States as well as Canada, Slovenia, and Malta, took part in this year’s challenge across both Mission Space Lab and Mission Zero!

Eleven teams have won Mission Space Lab 2018–2019

After designing their own scientific investigations and having their programs run aboard the International Space Station, the Mission Space Lab teams spent their time analysing the data they received back from the ISS. To complete the challenge, they had to submit a short scientific report discussing their results and highlighting the conclusions of their experiments. We were very impressed by the quality of the reports, which showed a high level of scientific merit.

We are delighted to announce that, while it was a difficult task, the Astro Pi jury has now selected eleven winning teams, as well as highly commending four additional teams. The eleven winning teams won the chance to join an exclusive video call with ESA astronaut Frank De Winne. He is the head of the European Astronaut Centre in Germany, where astronauts train for their missions. Each team had the once-in-a-lifetime chance to ask Frank about his life as an astronaut.

And the winners are…

Firewatchers from Post CERN HSSIP Group, Portugal, used a machine learning method on their images to identify areas that had recently suffered from wildfires.

Go, 3.141592…, Go! from IES Tomás Navarro Tomás, Spain, took pictures of the Yosemite and Lost River forests and analysed them to study the effects of global drought stress. They did this by using indexes of vegetation and moisture to assess whether forests are healthy and well-preserved.

Les Robotiseurs from Ecole Primaire Publique de Saint-André d’Embrun, France, investigated variations in Earth’s magnetic field between the North and South hemispheres, and between day and night.

TheHappy.Pi from I Liceum Ogólnokształcące im. Bolesława Krzywoustego w Słupsku, Poland, successfully processed their images to measure the relative chlorophyll concentrations of vegetation on Earth.

AstroRussell from Liceo Bertrand Russell, Italy, developed a clever image processing algorithm to classify images into sea, cloud, ice, and land categories.

Les Puissants 2.0 from Lycee International de Londres Winston Churchill, United Kingdom, used the Astro Pi’s accelerometer to study the motion of the ISS itself under conditions of normal flight and course correction/reboost maneuvers.

Torricelli from ITIS “E.Torricelli”, Italy, recorded images and took sensor measurements to calculate the orbital period and flight speed of the ISS followed by the mass of the Earth using Newton’s universal law of gravitation.

ApplePi from I Liceum Ogólnokształcące im. Króla Stanisława Leszczyńskiego w Jaśle, Poland, compared their images from Astro Pi Izzy to historical images from 35 years ago and could show that coastlines have changed slightly due to erosion or human impact.

Spacethon from Saint Joseph La Salle Pruillé Le Chétif, France, tested their image-processing algorithm to identify solid, liquid, and gaseous features of exoplanets.

Stithians Rocket Code Club from Stithians CP School, United Kingdom, performed an experiment comparing the temperature aboard the ISS to the average temperature of the nearest country the space station was flying over.

Vytina Aerospace from Primary School of Vytina, Greece, recorded images of reservoirs and lakes on Earth to compare them with historical images from the last 30 years in order to investigate climate change.

Highly commended teams

We also selected four teams to be highly commended, and they will receive a selection of goodies from ESA Education and the Raspberry Pi Foundation:

Aguere Team from IES Marina Cebrián, Spain, investigated variations in the Earth’s magnetic field due to solar activity and a particular disturbance due to a solar coronal hole.

Astroraga from CoderDojo Trento, Italy, measured the magnetic field to investigate whether astronauts can still use a compass, just like on Earth, to orient themselves on the ISS.

Betlemites from Escoles Betlem, Spain, recorded the temperature on the ISS to find out if the pattern of a convection cell is different in microgravity.

Rovel In The Space from Scuola secondaria di I grado A. Rosmini, Rovello Porro (Como), Italy, executed a program that monitored the pressure and would warn astronauts in case space debris or micrometeoroids collided with the ISS.

The next edition is not far off!

ESA and the Raspberry Pi Foundation would like to invite all school teachers, students, and young people to join the next edition of the challenge. Make sure to follow updates on the Astro Pi website and Astro Pi Twitter account to look out for the announcement of next year’s Astro Pi Challenge!

The post European Astro Pi Challenge: Mission Space Lab winners 2018–2019! appeared first on Raspberry Pi.

Yuri 3 rover | The MagPi #82

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/yuri-3-rover-the-magpi-82/

In honour of the 50th anniversary of the Apollo moon landing, this year’s Pi Wars was space-themed. Visitors to the two-day event — held at the University of Cambridge in March — were lucky enough to witness a number of competitors and demonstration space-themed robots in action.

Yuri 3 rover

Among the most impressive was the Yuri 3 mini Mars rover, which was designed, lovingly crafted, and operated by Airbus engineer John Chinner. Fascinated by Yuri 3’s accuracy, we got John to give us the inside scoop.

Airbus ambassador

John is on the STEM Ambassador team at Airbus and has previously demonstrated its prototype ExoMars rover, Bridget (you can drool over images of this here: magpi.cc/btQnEw), including at the BBC Stargazing Live event in Leicester. Realising the impressive robot’s practical limitations in terms of taking it out and about to schools, John embarked on building a smaller but highly faithful, easily transportable Mars rover. His robot-building experience began in his teens with a six-legged robot he took along to his technical engineering apprenticeship interview and had walk along the desk. Job deftly bagged, he’s been building robots ever since.

Inside the Yuri 3 Mars rover

Yuri is a combination of an Actobotics chassis based on one created by Beatty Robotics, plus 3D-printed wheels and six 12 V DC brushed gear motors. Six Hitec servo motors operate the steering, while the entire rover has an original Raspberry Pi B+ at its heart.

Yuri 3 usually runs in ‘tank steer’ mode. Cannily, the positioning of four of its six wheels at the corners means Yuri 3’s wheels can each be turned so that it spins on the spot. It can also ‘crab’ to the side due to its individually steerable wheels.
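The geometry behind those two modes is simple to sketch in code. The following is a hypothetical illustration (the chassis dimensions and function names are ours, not John’s actual control software): for a spin on the spot, each corner wheel is turned so it lies tangent to the circle through its own position, while crabbing just sets all wheels to the same angle.

```python
import math

# Illustrative corner-wheel steering for a rover like Yuri 3: four steerable
# wheels sit at (+-half_wheelbase, +-half_track) from the chassis centre.
# These numbers and names are assumptions for the sketch, not Yuri 3's code.

def spin_in_place_angles(half_wheelbase, half_track):
    """Steering angle (degrees) for each corner wheel so the rover rotates
    about its centre: each wheel is tangent to its own turning circle."""
    angle = math.degrees(math.atan2(half_wheelbase, half_track))
    # Order: front-left, front-right, rear-left, rear-right.
    # Signs mirror across the chassis so all wheels circle the same way.
    return [angle, -angle, -angle, angle]

def crab_angles(direction_deg):
    """All four wheels share one angle, so the rover translates sideways."""
    return [direction_deg] * 4

print(spin_in_place_angles(0.20, 0.15))  # square-ish chassis: roughly 53 degrees
print(crab_angles(90))                   # full sideways crab
```

With a longer, narrower chassis the spin angle approaches 90 degrees, which is why the corner placement of the wheels matters so much for turning on the spot.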

Servo motors

The part that’s more challenging for home users is the ‘gold thermal blanket’. The blanket ensures that the rover can maintain working temperature in the extreme conditions found on Mars. “I was very fortunate to have a bespoke blanket made by the team who make them for satellites,” says John. “They used it as a training exercise for the apprentices.”

John has made some bookmarks from the leftover thermal material which he gives away to schools to use as prizes.

Yuri 3 rover thermal blanket samples

Rover design

While designing Yuri 3, it probably helped that John was able to sneak peeks of Airbus’s ExoMars prototypes being tested at the firm’s Mars Yard. (He once snuck Yuri 3 onto the yard and gave it a test run, but that’s supposed to be a secret!) Also, says John, “I get to see the actual flight rover in its interplanetary bio clean room”.

A young girl inspects the Yuri 3 Mars rover

His involvement with all things Raspberry Pi came about when he was part of the Astro Pi programme, in which students send code to two Raspberry Pi devices aboard the International Space Station every year. “I did the shock, vibration, and EMC testing on the actual Astro Pi units in Airbus, Portsmouth,” John proudly tells us.

A very British rover

As part of the European Space Agency mission ExoMars, Airbus is building and integrating the rover in Stevenage. “What a fantastic opportunity for exciting outreach,” says John. “After all the fun with Tim Peake’s Principia mission, why not make the next British astronaut a Mars rover? … It is exciting to be able to go and visit Stevenage and see the prototype rovers testing on the Mars Yard.”

The Yuri 3 Mars rover

John also mentions that he’d love to see Yuri 3 put in an appearance at the Raspberry Pi Store; in the meantime, drooling punters will have to build their own Mars rover from similar kit. Or, we’ll just enjoy John’s footage of Yuri 3 in action and perhaps ask very nicely if he’ll bring Yuri along for a demonstration at an event or school near us.

John wrote about the first year of his experience building Yuri 3 on his blog. And you can follow the adventures of Yuri 3 over on Twitter: @Yuri_3_Rover.

Read the new issue of The MagPi

This article is from today’s brand-new issue of The MagPi, the official Raspberry Pi magazine. Buy it from all good newsagents, subscribe to pay less per issue and support our work, or download the free PDF to give it a try first.

Cover of The MagPi issue 82

The post Yuri 3 rover | The MagPi #82 appeared first on Raspberry Pi.

Raspberry Pi captures a Soyuz in space!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-captures-soyuz-in-space/

So this happened. And we are buzzing!

You’re most likely aware of the Astro Pi Challenge. In case you’re not, it’s a wonderfully exciting programme organised by the European Space Agency (ESA) and us at Raspberry Pi. Astro Pi challenges European young people to write scientific experiments in code, and the best experiments run aboard the International Space Station (ISS) on two Astro Pi units: Raspberry Pi 1 B+ computers with Sense HATs, encased in flight-grade aluminium spacesuits.

It’s very cool. So, so cool. As adults, we’re all extremely jealous that we’re unable to take part. We all love space and, to be honest, we all want to be astronauts. Astronauts are the coolest.

So imagine our excitement at Pi Towers when ESA shared this photo on Friday:

This is a Soyuz vehicle on its way to dock with the International Space Station. And while Soyuz vehicles ferry between Earth and the ISS all the time, what’s so special about this occasion is that this very photo was captured using a Raspberry Pi 1 B+ and a Raspberry Pi Camera Module, together known as Izzy, one of the Astro Pi units!

So if anyone ever asks you whether the Raspberry Pi Camera Module is any good, just show them this photo. We don’t think you’ll need to provide any further evidence after that.

The post Raspberry Pi captures a Soyuz in space! appeared first on Raspberry Pi.

Jenni Sidey inspires young women in science with Astro Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/jenni-sidey-inspires-young-women-science-astro-pi/

Today, ESA Education and the Raspberry Pi Foundation are proud to celebrate the International Day of Women and Girls in Science! In support of this occasion and to encourage young women to enter a career in STEM (science, technology, engineering, mathematics), CSA astronaut Jenni Sidey discusses why she believes computing and digital making skills are so important, and tells us about the role models that inspired her.

Happy International Day of Women and Girls in Science!

The International Day of Women and Girls in Science is part of the United Nations’ plan to achieve their 2030 Agenda for Sustainable Development. According to current UNESCO data, less than 30% of researchers in STEM are female, and only 30% of young women are selecting STEM-related subjects in higher education.

Jenni Sidey

That’s why part of the UN’s 2030 Agenda is to promote full and equal access to and participation in science for women and girls. And to help young women and girls develop their computing and digital making skills, we want to encourage their participation in the European Astro Pi Challenge!

The European Astro Pi Challenge

The European Astro Pi Challenge is an ESA Education programme run in collaboration with the Raspberry Pi Foundation that offers students and young people the amazing opportunity to conduct scientific investigations in space! The challenge is to write computer programs for one of two Astro Pi units — Raspberry Pi computers on board the International Space Station.

Astro Pi Mission Zero logo

Astro Pi’s Mission Zero is open until 20 March 2019, and this mission gives young people up to 14 years of age the chance to write a simple program to display a message to the astronauts on the ISS. No special equipment or prior coding skills are needed, and all participants that follow the mission rules are guaranteed to have their program run in space!

Take part in Mission Zero — in your language!

To help many more people take part in their native language, we’ve translated the Mission Zero resource, guidelines, and web page into 19 different languages! Head to our languages section to find your version of Mission Zero and take part.

If you have any questions regarding the European Astro Pi Challenge, email us at [email protected].

The post Jenni Sidey inspires young women in science with Astro Pi appeared first on Raspberry Pi.

Tim Peake congratulates winning Mission Space Lab teams!

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/mission-space-lab-winners-2018/

This week, the ten winning Astro Pi Mission Space Lab teams got to take part in a video conference with ESA Astronaut Tim Peake!

ESA Astro Pi students meet Tim Peake

Uploaded by Raspberry Pi on 2018-06-26.

A brief history of Astro Pi

In 2014, the Raspberry Pi Foundation partnered with the UK Space Agency and the European Space Agency to fly two Raspberry Pi computers to the International Space Station. These Pis, known as Astro Pis Ed and Izzy, are each equipped with a Sense HAT and a Camera Module (IR or Vis) and housed within special space-hardened cases.

In our annual Astro Pi Challenge, young people from all 22 ESA member states have the opportunity to design and code experiments for the Astro Pis to become the next generation of space scientists.

Mission Zero vs Mission Space Lab

Back in September, we announced the 2017/2018 European Astro Pi Challenge, in partnership with the European Space Agency. This year, for the first time, the Astro Pi Challenge comprised two missions: Mission Zero and Mission Space Lab.

Mission Zero is a new entry-level challenge that allows young coders to have their message displayed to the astronauts on-board the ISS. It finished up in February, with more than 5,400 young people in over 2,500 teams taking part!

Astro Pi Mission Space Lab logo

For Mission Space Lab, young people work like real scientists by designing their own experiment to investigate one of two topics:

Life in space

For this topic, young coders write code to run on Astro Pi Vis (Ed) in the Columbus module to investigate life aboard the ISS.

Life on Earth

For this topic, young people design a code experiment to run on Astro Pi IR (Izzy), aimed towards the Earth through a window, to investigate life down on our planet.
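For either topic, an experiment typically boils down to a timed loop that samples the Astro Pi’s sensors and logs the readings for later analysis. Here is a minimal, hardware-free sketch of that pattern; the sensor values are stubbed and all names are illustrative, not the official kit code.

```python
import csv
import io
from datetime import datetime, timezone

# Hypothetical sketch of a "Life on Earth"-style data logger. On the real
# Astro Pi you would read SenseHat() sensors (and perhaps capture camera
# frames); here the readings are stubbed so the structure runs anywhere.

def read_sensors():
    """Stand-in for Sense HAT readings: humidity %, pressure mbar, temp C."""
    return {"humidity": 44.2, "pressure": 1013.2, "temperature": 26.4}

def log_samples(n_samples, out):
    """Write n_samples timestamped readings as CSV rows to a file-like object."""
    writer = csv.writer(out)
    writer.writerow(["timestamp", "humidity", "pressure", "temperature"])
    for _ in range(n_samples):
        r = read_sensors()
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         r["humidity"], r["pressure"], r["temperature"]])

buf = io.StringIO()
log_samples(3, buf)
print(buf.getvalue().splitlines()[0])  # prints the CSV header row
```

Logging to a simple CSV like this is what lets teams analyse their data after it comes back down from the ISS, which is exactly the phase the winning teams below completed.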

Our participants

We had more than 1400 students across 330 teams take part in this year’s Mission Space Lab. Teams who submitted an eligible idea for an experiment received an Astro Pi kit from ESA to develop their Python code. These kits contain the same hardware that’s aboard the ISS, enabling students to test their experiments in conditions similar to those on the space station. The best experiments were granted flight status earlier this year, and the code of these teams ran on the ISS in April.

And the winners are…

The teams received the results of their experiments and were asked to submit scientific reports based on their findings. Just a few weeks ago, 98 teams sent us brilliant reports, and we had the difficult task of whittling the pool of teams down to find the final ten winners!

As you can see in the video above, the winning teams were lucky enough to take part in a very special video conference with ESA Astronaut Tim Peake.


2017/18 Mission Space Lab winning teams

The Dark Side of Light from Branksome Hall, Canada, investigated whether the light pollution in an area could be used to determine the source of energy for the electricity consumption.

Spaceballs from Attert Lycée Redange, Luxembourg, successfully calculated the speed of the ISS by analysing ground photographs.

Enrico Fermi from Liceo XXV Aprile, Italy, investigated the link between the Astro Pi’s magnetometer and X-ray measurements from the GOES-15 satellite.

Team Aurora from Hyvinkään yhteiskoulun lukio, Finland, showed how the Astro Pi’s magnetometer could be used to map the Earth’s magnetic field and determine the latitude of the ISS.

@stroMega from Institut de Genech, France, used Astro Pi Izzy’s near-infrared Camera Module to measure the health and density of vegetation on Earth.

Ursa Major from a CoderDojo in Belgium created a program to autonomously measure the percentage of vegetation, water, and clouds in photographs from Astro Pi Izzy.

Canarias 1 from IES El Calero, Spain, built on existing data and successfully determined whether the ISS was eclipsed from on-board sensor data.

The Earth Watchers from S.T.E.M Robotics Academy, Greece, used Astro Pi Izzy to compare the health of vegetation in Quebec, Canada, and Guam.

Trentini DOP from CoderDojo Trento, Italy, investigated the stability of the on-board conditions of the ISS and whether or not they were affected by eclipsing.

Team Lampone from CoderDojo Trento, Italy, accurately measured the speed of the ISS by analysing ground photographs taken by Astro Pi Izzy.

Well done to everyone who took part, and massive congratulations to all the winners!

The post Tim Peake congratulates winning Mission Space Lab teams! appeared first on Raspberry Pi.

Despite US Criticism, Ukraine Cybercrime Chief Receives Few Piracy Complaints

Post Syndicated from Andy original https://torrentfreak.com/despite-us-criticism-ukraine-cybercrime-chief-receives-few-piracy-complaints-180522/

On a large number of occasions over the past decade, Ukraine has played host to some of the world’s largest pirate sites.

At various points over the years, The Pirate Bay, KickassTorrents, ExtraTorrent, Demonoid, and a raft of streaming portals could be found housed in the country’s data centers, reportedly taking advantage of laws more favorable than those in the US and EU.

As a result, Ukraine has been regularly criticized for not doing enough to combat piracy, but when placed under pressure it does take action. In 2010, for example, the local government expressed concerns about the hosting of KickassTorrents in the country, and in August the same year the site was kicked out by its host.

“Kickasstorrents.com main web server was shut down by the hosting provider after it was contacted by local authorities. One way or another I’m afraid we must say goodbye to Ukraine and move the servers to other countries,” the site’s founder told TF at the time.

In the years since, Ukraine has launched sporadic action against pirate sites and has taken steps to tighten up copyright law. The Law on State Support of Cinematography came into force during April 2017 and gave copyright owners new tools to combat infringement by forcing (in theory, at least) site operators and web hosts to respond to takedown requests.

But according to the United States and Europe, not enough is being done. After the EU Commission warned that Ukraine risked damaging relations with the EU, last September US companies followed up with another scathing attack.

In a recommendation to the U.S. Government, the IIPA, which counts the MPAA, RIAA, and ESA among its members, asked U.S. authorities to suspend or withdraw Ukraine’s trade benefits until the online piracy situation improves.

“Legislation is needed to institute proper notice and takedown provisions, including a requirement that service providers terminate access to individuals (or entities) that have repeatedly engaged in infringement, and the retention of information for law enforcement, as well as to provide clear third party liability regarding ISPs,” the IIPA wrote.

But amid all the criticism, Ukraine cyber police chief Sergey Demedyuk says that while his department is committed to tackling piracy, it can only do so when complaints are filed with him.

“Yes, we are engaged in piracy very closely. The problem is that piracy is a crime of private accusation. So here we deal with them only in cases where we are contacted,” Demedyuk said in an Interfax interview published yesterday.

Surprisingly, given the number of dissenting voices, it appears that complaints about these matters aren’t exactly prevalent. So are there many at all?

“Unfortunately, no. In the media, many companies claim that their rights are being violated by pirates. But if you count the applications that come to us, they are one,” Demedyuk reveals.

“In general, we are handling Ukrainian media companies, who produce their own product and are worried about its fate. Also on foreign films, the ‘Anti-Piracy Agency’ refers to us, but not as intensively as before.”

Why complaints are going down, Demedyuk does not know, but when his unit is asked to take action it does so, he claims. Indeed, Demedyuk cites two particularly significant historical operations against a pair of large ‘pirate’ sites.

In 2012, Ukraine shut down EX.ua, a massive cyberlocker site, following a six-month investigation initiated by international tech companies including Microsoft, Graphisoft and Adobe. Around 200 servers were seized, together hosting around 6,000 terabytes of data.

Then in November 2016, following a complaint from the MPAA, police raided FS.to, one of Ukraine’s most popular pirate sites. Initial reports indicated that 60 servers were seized and 19 people were arrested.

“To see the effect of combating piracy, this should not be done at the level of cyberpolicy, but at the state level,” Demedyuk advises.

“This requires constant close interaction between law enforcement agencies and rights holders. Only by using all these tools will we be able to effectively counteract copyright infringements.”

Meanwhile, the Office of the United States Trade Representative has maintained Ukraine’s position on the Priority Watchlist of its latest Special 301 Report, and there are no signs it will be leaving anytime soon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

ExtraTorrent Replacement Displays Warning On Predecessor’s Shutdown Anniversary

Post Syndicated from Andy original https://torrentfreak.com/extratorrent-replacement-displays-warning-on-predecessors-shutdown-anniversary-180518/

Exactly one year ago, millions of users in the BitTorrent community went into mourning over the shock departure of one of its major players.

ExtraTorrent was founded back in November 2006, at a time when classic platforms such as TorrentSpy and Mininova were dominating the torrent site landscape. But with dedication and determination, the site amassed millions of daily visitors, outperforming every other torrent site apart from the mighty Pirate Bay.

Then, on May 17, 2017, everything came crashing down.

“ExtraTorrent has shut down permanently,” a note on the site read. “ExtraTorrent with all mirrors goes offline. We permanently erase all data. Stay away from fake ExtraTorrent websites and clones. Thx to all ET supporters and torrent community. ET was a place to be….”

While ExtraTorrent staff couldn’t be more clear in advising people to stay away from clones, few people listened to their warnings. Within hours, new sites appeared claiming to be official replacements for the much-loved torrent site and people flocked to them in their millions.

One of those was ExtraTorrent.ag, a torrent site connected to the operators of EZTV.ag, which appeared as a replacement in the wake of the official EZTV’s demise. Graphically very similar to the original ExtraTorrent, the .ag ‘replacement’ had none of its namesake’s community or unique content. But that didn’t dent its popularity.

ExtraTorrent.ag

At the start of this week, ExtraTorrent.ag was one of the most popular torrent sites on the Internet. With an Alexa rank of around 2,200, it would’ve clinched ninth position in our Top 10 Torrent Sites report earlier this year. However, a year after the site’s domain was registered, something seems to have gone wrong.

Yesterday, on the anniversary of ExtraTorrent’s shutdown and exactly a year after the ExtraTorrent.ag domain was registered, ExtraTorrent.ag disappeared only to be replaced by a generic landing page, as shown below.

ExtraTorrent.ag landing page

This morning, however, there appear to be additional complications. Accessing with Firefox produces the page above but attempting to do so with Chrome produces an ominous security warning.

Chrome warning

Indeed, those protected by MalwareBytes won’t be able to access the page at all, since ExtraTorrent.ag redirects to the domain FindBetterResults.com, which the anti-malware app flags as malicious.

The change was reported to TF by the operator of domain unblocking site Unblocked.lol, which offers torrent site proxies as well as access to live TV and sports.

“I noticed when I started receiving emails saying ExtraTorrent was redirecting to some parked domain. When I jumped on the PC and checked myself it was just redirecting to a blank page,” he informs us.

“First I thought they’d blocked our IP address so I used some different ones. But I soon discovered the domain was in fact parked.”

So what has happened to this previously-functioning domain?

Whois records show that ExtraTorrent.ag was created on May 17, 2017 and appears to have been registered for a year. Yesterday, on May 17, 2018, the domain was updated to list what could potentially be a new owner, with an expiry date of May 17, 2019.

Once domains have expired, they usually enter an ‘Auto-Renew Grace Period’ for up to 45 days. This is followed by a 30-day ‘Redemption Grace Period’. At the end of this second period, domains cannot be renewed and are released for third-parties to register. That doesn’t appear to have been the case here.
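The timeline described above can be sketched in a few lines. The 45- and 30-day figures follow the typical gTLD lifecycle mentioned in the paragraph above, but actual windows vary by registry and registrar, so treat this purely as an illustration:

```python
from datetime import date, timedelta

def domain_lifecycle(expiry, auto_renew_days=45, redemption_days=30):
    """Approximate post-expiry windows for a typical gTLD domain.

    The day counts are illustrative defaults; real grace periods
    vary by registry and registrar.
    """
    auto_renew_end = expiry + timedelta(days=auto_renew_days)
    redemption_end = auto_renew_end + timedelta(days=redemption_days)
    return {
        "expiry": expiry,
        "auto_renew_grace_ends": auto_renew_end,
        # After this date the domain is released for re-registration.
        "redemption_grace_ends": redemption_end,
    }

# The ExtraTorrent.ag registration discussed above expired on 2018-05-17.
windows = domain_lifecycle(date(2018, 5, 17))
```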

So, to find out more about the sudden changes we reached out to the email address listed in the WHOIS report but received no response. Should we hear more we’ll update this report but in the meantime the Internet has lost one of its largest torrent sites and gained a rather pointless landing page with potential security risks.


Pirate IPTV Service Gave Customer Details to Premier League, But What’s the Risk?

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-service-gave-customer-details-to-premier-league-but-whats-the-risk-180515/

In a report last weekend, we documented what appear to be the final days of pirate IPTV provider Ace Hosting.

From information provided by several sources including official liquidation documents, it became clear that a previously successful and profitable Ace had succumbed to pressure from the Premier League, which accused the service of copyright infringement.

The company had considerable funds in the bank – £255,472.00 to be exact – but it also had debts of £717,278.84, including £260,000 owed to HMRC and £100,000 to the Premier League as part of a settlement agreement.

Information received by TF late Sunday suggested that £100K was the tip of the iceberg as far as the Premier League was concerned and in a statement yesterday, the football outfit confirmed that was the case.

“A renowned pirate of Premier League content to consumers has been forced to liquidate after agreeing to pay £600,000 for breaching the League’s copyright,” the Premier League announced.

“Ace IPTV, run by Craig Driscoll and Ian Isaac, was selling subscriptions to illegal Premier League streams directly to consumers which allowed viewing on a range of devices, including notorious Kodi-type boxes, as well as to smaller resellers in the UK and abroad.”

Sources familiar with the case suggest that while Ace Hosting Limited didn’t have the funds to pay the Premier League the full £600K, Ace’s operators agreed to pay (and have already paid, to some extent at least) what were essentially their own funds to cover amounts above the final £100K, which is due to be paid next year.

But that’s not the only thing that’s been handed over to the Premier League.

“Ace voluntarily disclosed the personal details of their customers, which the League will now review in compliance with data protection legislation. Further investigations will be conducted, and action taken where appropriate,” the Premier League added.

So, the big question now is how exposed Ace’s former subscribers are.

The truth is that only the Premier League knows for sure but TF has been able to obtain information from several sources which indicate that former subscribers probably aren’t the Premier League’s key interest and even if they were, information obtained on them would be of limited use.

According to a source with knowledge of how a system like Ace’s works, there is a separation of data which appears to help (at least to some degree) with the subscriber’s privacy.

“The system used to manage accounts and take payment is actually completely separate from the software used to manage streams and the lines themselves. They are never usually even on the same server so are two very different databases,” he told TF.

“So at best the only information that has voluntarily been provided to the [Premier League], is just your email, name and address (assuming you even used real details) and what hosting package or credits you bought.”

While this information is bad enough, the action against Ace is targeted, in that it focuses on the Premier League’s content and how Ace (and therefore its users) infringed on the football outfit’s copyrights. So, proving that subscribers actually watched any Premier League content would be an ideal position but it’s not straightforward, despite the potential for detailed logging.

“The management system contains no history of what you watched, when you watched it, when you signed in and so on. That is all contained in a different database on a different server.

“Because every connection is recorded [on the second server], it can create some two million entries a day and as such most providers either turn off this feature or delete the logs daily, as having so many entries slows down the system used for actual streams,” he explains.

Our source says that this data would likely have been the first to be deleted and is probably “long gone” by now. However, even if the Premier League had obtained it, it’s unlikely they would be able to do much with it due to data protection laws.

“The information was passed to the [Premier League] voluntarily by ACE which means this information has been given from one entity to another without the end users’ consent, not part of the [creditors’ voluntary liquidation] and without a court order to support it. Data Protection right now is taken very seriously in the EU,” he notes.

At this point, it’s probably worth noting that while the word “voluntarily” has been used several times to explain the manner in which Ace handed over its subscribers’ details to the Premier League, the same word can be used to describe the manner in which the £600K settlement amount will be paid.

No one forces someone to pay or hand something over, that’s what the courts are for, and the aim here was to avoid that eventuality.

Other pieces of information culled from various sources suggest that PayPal payment information, limited to amounts only, was also handed over to the Premier League. And, perhaps most importantly (and perhaps predictably) as far as former subscribers are concerned, the football group was more interested in Ace’s upwards supplier chain (the ‘wholesale’ stream suppliers used, for example) than those buying the service.

Finally, while the Premier League is now seeking to send a message to customers that these services are risky to use, it’s difficult to argue with the assertion that it’s unsafe to hand over personal details to an illegal service.

“Ace IPTV’s collapse also highlighted the risk consumers take with their personal data when they sign up to illegal streaming services,” Premier League notes.

TF spoke with three IPTV providers who all confirmed that they don’t care what names and addresses people use to sign up with and that no checks are carried out to make sure they’re correct. However, one concedes that in order to run as a business, this information has to be requested and once a customer types it in, it’s possible that it could be handed over as part of a settlement.

“I’m not going to tell people to put in dummy details, how can I? It’s up to people to use their common sense. If they’re still worried they should give Sky their money because if our backs are against the wall, what do you think is going to happen?” he concludes.


The plan for merging CoreOS into Red Hat

Post Syndicated from corbet original https://lwn.net/Articles/754058/rss

The CoreOS blog is carrying an article describing the path forward now that CoreOS is owned by Red Hat. “Since Red Hat’s acquisition of CoreOS was announced, we received questions on the fate of Container Linux. CoreOS’s first project, and initially its namesake, pioneered the lightweight, ‘over-the-air’ automatically updated container native operating system that fast rose in popularity running the world’s containers. With the acquisition, Container Linux will be reborn as Red Hat CoreOS, a new entry into the Red Hat ecosystem. Red Hat CoreOS will be based on Fedora and Red Hat Enterprise Linux sources and is expected to ultimately supersede Atomic Host as Red Hat’s immutable, container-centric operating system.” Some information can also be found in this Red Hat press release.

Analyze data in Amazon DynamoDB using Amazon SageMaker for real-time prediction

Post Syndicated from YongSeong Lee original https://aws.amazon.com/blogs/big-data/analyze-data-in-amazon-dynamodb-using-amazon-sagemaker-for-real-time-prediction/

Many companies across the globe use Amazon DynamoDB to store and query historical user-interaction data. DynamoDB is a fast NoSQL database used by applications that need consistent, single-digit millisecond latency.

Often, customers want to turn their valuable data in DynamoDB into insights by analyzing a copy of their table stored in Amazon S3. Doing this separates their analytical queries from their low-latency critical paths. This data can be the primary source for understanding customers’ past behavior, predicting future behavior, and generating downstream business value. Customers often turn to DynamoDB because of its great scalability and high availability. After a successful launch, many customers want to use the data in DynamoDB to predict future behaviors or provide personalized recommendations.

DynamoDB is a good fit for low-latency reads and writes, but it’s not practical to scan all data in a DynamoDB database to train a model. In this post, I demonstrate how you can use DynamoDB table data copied to Amazon S3 by AWS Data Pipeline to predict customer behavior. I also demonstrate how you can use this data to provide personalized recommendations for customers using Amazon SageMaker. You can also run ad hoc queries using Amazon Athena against the data. DynamoDB recently released on-demand backups to create full table backups with no performance impact. However, that feature isn’t suitable for our purposes in this post, so I chose AWS Data Pipeline instead to create managed backups that are accessible from other services.

To do this, I describe how to read the DynamoDB backup file format in Data Pipeline. I also describe how to convert the objects in S3 to a CSV format that Amazon SageMaker can read. In addition, I show how to schedule regular exports and transformations using Data Pipeline. The sample data used in this post is from Bank Marketing Data Set of UCI.

The solution that I describe provides the following benefits:

  • Separates analytical queries from production traffic on your DynamoDB table, preserving your DynamoDB read capacity units (RCUs) for important production requests
  • Automatically updates your model to get real-time predictions
  • Optimizes for performance (so it doesn’t compete with DynamoDB RCUs after the export) and for cost (using data you already have)
  • Makes it easier for developers of all skill levels to use Amazon SageMaker

All code and data set in this post are available in this .zip file.

Solution architecture

The following diagram shows the overall architecture of the solution.

The steps that data follows through the architecture are as follows:

  1. Data Pipeline regularly copies the full contents of a DynamoDB table as JSON into an S3 bucket.
  2. Exported JSON files are converted to comma-separated value (CSV) format to use as a data source for Amazon SageMaker.
  3. Amazon SageMaker renews the model artifact and updates the endpoint.
  4. The converted CSV is available for ad hoc queries with Amazon Athena.
  5. Data Pipeline controls this flow and repeats the cycle based on the schedule defined by customer requirements.

Building the auto-updating model

This section discusses details about how to read the DynamoDB exported data in Data Pipeline and build automated workflows for real-time prediction with a regularly updated model.

Download sample scripts and data

Before you begin, take the following steps:

  1. Download sample scripts in this .zip file.
  2. Unzip the src.zip file.
  3. Find the automation_script.sh file and edit it for your environment. For example, you need to replace 's3://<your bucket>/<datasource path>/' with your own S3 path to the data source for Amazon ML. In the script, the text enclosed by angle brackets—< and >—should be replaced with your own path.
  4. Upload the json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar file to your S3 path so that the ADD jar command in Apache Hive can refer to it.

For this solution, the banking.csv file should be imported into a DynamoDB table.

Export a DynamoDB table

To export the DynamoDB table to S3, open the Data Pipeline console and choose the Export DynamoDB table to S3 template. In this template, Data Pipeline creates an Amazon EMR cluster and performs an export in the EMRActivity activity. Set proper intervals for backups according to your business requirements.

One core node (m3.xlarge) provides the default capacity for the EMR cluster and should be suitable for the solution in this post. Leave the option to resize the cluster before running enabled in the TableBackupActivity activity to let Data Pipeline scale the cluster to match the table size. The process of converting to CSV format and renewing models happens in this EMR cluster.

For a more in-depth look at how to export data from DynamoDB, see Export Data from DynamoDB in the Data Pipeline documentation.

Add the script to an existing pipeline

After you export your DynamoDB table, you add an additional EMR step to EMRActivity by following these steps:

  1. Open the Data Pipeline console and choose the ID for the pipeline that you want to add the script to.
  2. For Actions, choose Edit.
  3. In the editing console, choose the Activities category and add an EMR step using the custom script downloaded in the previous section, as shown below.

Paste the following command into the new step, after the data upload step:

s3://#{myDDBRegion}.elasticmapreduce/libs/script-runner/script-runner.jar,s3://<your bucket name>/automation_script.sh,#{output.directoryPath},#{myDDBRegion}

The element #{output.directoryPath} references the S3 path where the data pipeline exports DynamoDB data as JSON. The path should be passed to the script as an argument.

The bash script has two goals: converting data formats and renewing the Amazon SageMaker model. Subsequent sections discuss the contents of the automation script.

Automation script: Convert JSON data to CSV with Hive

We use Apache Hive to transform the data into a new format. The Hive QL script to create an external table and transform the data is included in the custom script that you added to the Data Pipeline definition.

When you run the Hive scripts, do so with the -e option. Also, define the Hive table with the 'org.openx.data.jsonserde.JsonSerDe' row format to parse and read JSON format. The SQL creates a Hive EXTERNAL table, and it reads the DynamoDB backup data on the S3 path passed to it by Data Pipeline.

Note: You should create the table with the “EXTERNAL” keyword to avoid the backup data being accidentally deleted from S3 if you drop the table.

The full automation script for converting follows. Add your own bucket name and data source path in the highlighted areas.

#!/bin/bash
hive -e "
ADD jar s3://<your bucket name>/json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar ; 
DROP TABLE IF EXISTS blog_backup_data ;
CREATE EXTERNAL TABLE blog_backup_data (
 customer_id map<string,string>,
 age map<string,string>, job map<string,string>, 
 marital map<string,string>,education map<string,string>, 
 default map<string,string>, housing map<string,string>,
 loan map<string,string>, contact map<string,string>, 
 month map<string,string>, day_of_week map<string,string>, 
 duration map<string,string>, campaign map<string,string>,
 pdays map<string,string>, previous map<string,string>, 
 poutcome map<string,string>, emp_var_rate map<string,string>, 
 cons_price_idx map<string,string>, cons_conf_idx map<string,string>,
 euribor3m map<string,string>, nr_employed map<string,string>, 
 y map<string,string> ) 
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' 
LOCATION '$1/';

INSERT OVERWRITE DIRECTORY 's3://<your bucket name>/<datasource path>/' 
SELECT concat( customer_id['s'],',', 
 age['n'],',', job['s'],',', 
 marital['s'],',', education['s'],',', default['s'],',', 
 housing['s'],',', loan['s'],',', contact['s'],',', 
 month['s'],',', day_of_week['s'],',', duration['n'],',', 
 campaign['n'],',',pdays['n'],',',previous['n'],',', 
 poutcome['s'],',', emp_var_rate['n'],',', cons_price_idx['n'],',',
 cons_conf_idx['n'],',', euribor3m['n'],',', nr_employed['n'],',', y['n'] ) 
FROM blog_backup_data
WHERE customer_id['s'] > 0 ;
"

After creating the external table, you use the INSERT OVERWRITE DIRECTORY … SELECT command to write CSV data to the S3 path that you designated as the data source for Amazon SageMaker.

Depending on your requirements, you can eliminate or process the columns in the SELECT clause in this step to optimize data analysis. For example, you might remove some columns that have unpredictable correlations with the target value, because keeping the wrong columns might expose your model to “overfitting” during training. In this post, the customer_id column is removed. Overfitting weakens your predictions. More information about overfitting can be found in the topic Model Fit: Underfitting vs. Overfitting in the Amazon ML documentation.
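To see what the SELECT above is doing, consider a single exported item. Data Pipeline writes each DynamoDB item as one JSON object per line, with every attribute wrapped in a type-descriptor map (‘s’ for string, ‘n’ for number), and the Hive query simply pulls those inner values out in column order. A minimal local sketch of the same transformation (column list abbreviated for illustration):

```python
import json

# Column order must mirror the SELECT clause in the Hive script;
# abbreviated to a few columns here for illustration.
COLUMNS = [("age", "n"), ("job", "s"), ("duration", "n"), ("y", "n")]

def export_line_to_csv(line):
    """Convert one line of a DynamoDB JSON export into a CSV row.

    Each attribute is a one-entry map whose key is the DynamoDB
    type descriptor, e.g. {"age": {"n": "34"}}.
    """
    item = json.loads(line)
    return ",".join(item[name][dtype] for name, dtype in COLUMNS)

record = '{"age":{"n":"34"},"job":{"s":"admin."},"duration":{"n":"180"},"y":{"n":"0"}}'
```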

Automation script: Renew the Amazon SageMaker model

After the CSV data is replaced and ready to use, create a new model artifact for Amazon SageMaker with the updated dataset on S3. To renew the model artifact, you must create a new training job. Training jobs can be run using the AWS SDK (for example, Amazon SageMaker boto3), the Amazon SageMaker Python SDK (installable with the “pip install sagemaker” command), or the AWS CLI for Amazon SageMaker, which this post uses.

In addition, consider how to smoothly renew your existing model without service impact, because your model is called by applications in real time. To do this, first create a new endpoint configuration, and then update the current endpoint with the configuration you just created.

#!/bin/bash
## Define variable 
REGION=$2
DTTIME=`date +%Y-%m-%d-%H-%M-%S`
ROLE="<your AmazonSageMaker-ExecutionRole>" 


# Select container image based on region.
case "$REGION" in
"us-west-2" )
    IMAGE="174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest"
    ;;
"us-east-1" )
    IMAGE="382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest" 
    ;;
"us-east-2" )
    IMAGE="404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest" 
    ;;
"eu-west-1" )
    IMAGE="438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest" 
    ;;
 *)
    echo "Invalid Region Name"
    exit 1 ;  
esac

# Start training job and creating model artifact 
TRAINING_JOB_NAME=TRAIN-${DTTIME} 
S3OUTPUT="s3://<your bucket name>/model/" 
INSTANCETYPE="ml.m4.xlarge"
INSTANCECOUNT=1
VOLUMESIZE=5 
aws sagemaker create-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --algorithm-specification TrainingImage=${IMAGE},TrainingInputMode=File --role-arn ${ROLE}  --input-data-config '[{ "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://<your bucket name>/<datasource path>/", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "text/csv", "CompressionType": "None" , "RecordWrapperType": "None"  }]'  --output-data-config S3OutputPath=${S3OUTPUT} --resource-config  InstanceType=${INSTANCETYPE},InstanceCount=${INSTANCECOUNT},VolumeSizeInGB=${VOLUMESIZE} --stopping-condition MaxRuntimeInSeconds=120 --hyper-parameters feature_dim=20,predictor_type=binary_classifier  

# Wait until job completed 
aws sagemaker wait training-job-completed-or-stopped --training-job-name ${TRAINING_JOB_NAME}  --region ${REGION}

# Get newly created model artifact and create model
MODELARTIFACT=`aws sagemaker describe-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --query 'ModelArtifacts.S3ModelArtifacts' --output text `
MODELNAME=MODEL-${DTTIME}
aws sagemaker create-model --region ${REGION} --model-name ${MODELNAME}  --primary-container Image=${IMAGE},ModelDataUrl=${MODELARTIFACT}  --execution-role-arn ${ROLE}

# create a new endpoint configuration 
CONFIGNAME=CONFIG-${DTTIME}
aws sagemaker  create-endpoint-config --region ${REGION} --endpoint-config-name ${CONFIGNAME}  --production-variants  VariantName=Users,ModelName=${MODELNAME},InitialInstanceCount=1,InstanceType=ml.m4.xlarge

# create or update the endpoint
STATUS=`aws sagemaker describe-endpoint --endpoint-name  ServiceEndpoint --query 'EndpointStatus' --output text --region ${REGION} `
if [[ "$STATUS" != "InService" ]] ;
then
    aws sagemaker  create-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}    
else
    aws sagemaker  update-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}
fi
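The create-or-update branch at the end of the script can be expressed as a small pure function, which makes the blue/green intent easier to see and test. This is a sketch; ‘InService’ is the SageMaker endpoint status indicating a live endpoint, and any other status (or a missing endpoint) falls through to creation:

```python
def endpoint_action(status):
    """Choose the CLI subcommand for deploying a renewed model.

    An endpoint already serving traffic is updated in place (clients
    keep calling the same endpoint name, so there is no downtime);
    anything else -- including a nonexistent endpoint, where
    describe-endpoint returns nothing -- is created fresh.
    """
    return "update-endpoint" if status == "InService" else "create-endpoint"
```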

Grant permission

Before you execute the script, you must grant proper permission to Data Pipeline. Data Pipeline uses the DataPipelineDefaultResourceRole role by default. I added the following policy to DataPipelineDefaultResourceRole to allow Data Pipeline to create, delete, and update the Amazon SageMaker model and data source in the script.

{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action": [
 "sagemaker:CreateTrainingJob",
 "sagemaker:DescribeTrainingJob",
 "sagemaker:CreateModel",
 "sagemaker:CreateEndpointConfig",
 "sagemaker:DescribeEndpoint",
 "sagemaker:CreateEndpoint",
 "sagemaker:UpdateEndpoint",
 "iam:PassRole"
 ],
 "Resource": "*"
 }
 ]
}

Use real-time prediction

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint. This approach is useful for interactive web, mobile, or desktop applications.

Following, I provide a simple Python code example that queries the Amazon SageMaker endpoint by name (“ServiceEndpoint”) and uses the response for real-time prediction.

=== Python sample for real-time prediction ===

#!/usr/bin/env python
import boto3
import json 

client = boto3.client('sagemaker-runtime', region_name ='<your region>' )
new_customer_info = '34,10,2,4,1,2,1,1,6,3,190,1,3,4,3,-1.7,94.055,-39.8,0.715,4991.6'
response = client.invoke_endpoint(
    EndpointName='ServiceEndpoint',
    Body=new_customer_info, 
    ContentType='text/csv'
)
result = json.loads(response['Body'].read().decode())
print(result)
--- output(response) ---
{u'predictions': [{u'score': 0.7528127431869507, u'predicted_label': 1.0}]}
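When wiring this into an application, it helps to unpack the response in one place. A small helper, assuming the linear-learner binary-classifier response shape shown in the output above:

```python
import json

def parse_prediction(body):
    """Extract (predicted_label, score) from a linear-learner
    binary-classifier response body."""
    first = json.loads(body)["predictions"][0]
    return int(first["predicted_label"]), float(first["score"])

# Same shape as the sample output above.
body = '{"predictions": [{"score": 0.7528127431869507, "predicted_label": 1.0}]}'
label, score = parse_prediction(body)
```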

Solution summary

The solution takes the following steps:

  1. Data Pipeline exports DynamoDB table data into S3. The original JSON data should be kept to recover the table in the rare event that this is needed. Data Pipeline then converts JSON to CSV so that Amazon SageMaker can read the data. Note: You should select only meaningful attributes when you convert to CSV. For example, if you judge that the “campaign” attribute is not correlated, you can eliminate this attribute from the CSV.
  2. Train the Amazon SageMaker model with the new data source.
  3. When a new customer comes to your site, you can judge how likely it is for this customer to subscribe to your new product based on “predictedScores” provided by Amazon SageMaker.
  4. If the new user subscribes to your new product, your application must update the attribute “y” to the value 1 (for yes). This updated data is provided for the next model renewal as a new data source and serves to improve the accuracy of your prediction. With each new entry, your application becomes smarter and delivers better predictions.
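Step 4 can be sketched as a single DynamoDB UpdateItem call. The table name, key attribute, and value shown here are assumptions for illustration (they must match however the banking data was actually imported); building the parameters as a plain dict keeps the logic testable, and the same dict can then be passed to boto3’s Table.update_item:

```python
def build_label_update(customer_id):
    """Build UpdateItem parameters marking a customer as subscribed.

    Key and attribute names are illustrative; adjust them to your
    table's actual schema.
    """
    return {
        "Key": {"customer_id": customer_id},
        "UpdateExpression": "SET y = :subscribed",
        "ExpressionAttributeValues": {":subscribed": 1},
    }

# With boto3 (not executed here):
#   table = boto3.resource("dynamodb").Table("bank_marketing")
#   table.update_item(**build_label_update("42"))
params = build_label_update("42")
```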

Running ad hoc queries using Amazon Athena

Amazon Athena is a serverless query service that makes it easy to analyze large amounts of data stored in Amazon S3 using standard SQL. Athena is useful for examining data and collecting statistics or informative summaries about data. You can also use the powerful analytic functions of Presto, as described in the topic Aggregate Functions of Presto in the Presto documentation.

With the Data Pipeline scheduled activity, recent CSV data is always located in S3 so that you can run ad hoc queries against the data using Amazon Athena. I show this with example SQL statements following. For an in-depth description of this process, see the post Interactive SQL Queries for Data in Amazon S3 on the AWS News Blog. 

Creating an Amazon Athena table and running it

You can simply create an EXTERNAL table for the CSV data on S3 in the Amazon Athena Management Console.

=== Table Creation ===
CREATE EXTERNAL TABLE datasource (
 age int, 
 job string, 
 marital string , 
 education string, 
 default string, 
 housing string, 
 loan string, 
 contact string, 
 month string, 
 day_of_week string, 
 duration int, 
 campaign int, 
 pdays int , 
 previous int , 
 poutcome string, 
 emp_var_rate double, 
 cons_price_idx double,
 cons_conf_idx double, 
 euribor3m double, 
 nr_employed double, 
 y int 
)
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n' 
LOCATION 's3://<your bucket name>/<datasource path>/';

The following query calculates the correlation coefficient between the target attribute and other attributes using Amazon Athena.

=== Sample Query ===

SELECT corr(age,y) AS correlation_age_and_target, 
 corr(duration,y) AS correlation_duration_and_target, 
 corr(campaign,y) AS correlation_campaign_and_target,
 corr(contact,y) AS correlation_contact_and_target
FROM ( SELECT age , duration , campaign , y , 
 CASE WHEN contact = 'telephone' THEN 1 ELSE 0 END AS contact 
 FROM datasource 
 ) datasource ;

Conclusion

In this post, I introduce an example of how to analyze data in DynamoDB by using table data in Amazon S3 to optimize DynamoDB table read capacity. You can then use the analyzed data as a new data source to train an Amazon SageMaker model for accurate real-time prediction. In addition, you can run ad hoc queries against the data on S3 using Amazon Athena. I also present how to automate these procedures by using Data Pipeline.

You can adapt this example to your specific use case, and hopefully this post helps you accelerate your development. You can find more examples and use cases for Amazon SageMaker in the video AWS 2017: Introducing Amazon SageMaker on the AWS website.

Additional Reading

If you found this post useful, be sure to check out Serving Real-Time Machine Learning Predictions on Amazon EMR and Analyzing Data in S3 using Amazon Athena.

About the Author

Yong Seong Lee is a Cloud Support Engineer for AWS Big Data Services. He is interested in every technology related to data/databases and helping customers who have difficulties in using AWS services. His motto is “Enjoy life, be curious and have maximum experience.”

 

 

Sci-Hub ‘Pirate Bay For Science’ Security Certs Revoked by Comodo

Post Syndicated from Andy original https://torrentfreak.com/sci-hub-pirate-bay-for-science-security-certs-revoked-by-comodo-ca-180503/

Sci-Hub is often referred to as the “Pirate Bay of Science”. Like its namesake, it offers masses of unlicensed content for free, mostly against the wishes of copyright holders.

While The Pirate Bay will index almost anything, Sci-Hub is dedicated to distributing tens of millions of academic papers and articles, something that has made it a target for publishing giants like Elsevier.

Sci-Hub and its Kazakhstan-born founder Alexandra Elbakyan have been under sustained attack for several years but more recently have been fending off an unprecedented barrage of legal action initiated by the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry.

After winning a default judgment for $4.8 million in copyright infringement damages last year, ACS was further granted a broad injunction.

It required various third-party services (including domain registries, hosting companies and search engines) to stop facilitating access to the site. This plunged Sci-Hub into a game of domain whac-a-mole, one that continues to this day.

Determined to head Sci-Hub off at the pass, ACS obtained additional authority to tackle the evasive site and any new domains it may register in the future.

While Sci-Hub has been hopping around domains for a while, this week a new development appeared on the horizon. Visitors to some of the site’s domains were greeted with errors indicating that the domains’ security certificates had been revoked.

Tests conducted by TorrentFreak revealed clear revocations on Sci-Hub.hk and Sci-Hub.nz, both of which returned the error ‘NET::ERR_CERT_REVOKED’.

Certificate revoked

These certificates were first issued and then revoked by Comodo CA, the world’s largest certification authority. TF contacted the company, which confirmed that it had been forced to take action against Sci-Hub.

“In response to a court order against Sci-Hub, Comodo CA has revoked four certificates for the site,” Jonathan Skinner, Director, Global Channel Programs at Comodo CA informed TorrentFreak.

“By policy Comodo CA obeys court orders and the law to the full extent of its ability.”

Comodo refused to confirm any additional details, including whether these revocations were anything to do with the current ACS injunction. However, Susan R. Morrissey, Director of Communications at ACS, told TorrentFreak that the revocations were indeed part of ACS’ legal action against Sci-Hub.

“[T]he action is related to our continuing efforts to protect ACS’ intellectual property,” Morrissey confirmed.

Sci-Hub operates multiple domains (an up-to-date list is usually available on Wikipedia) that can be switched at any time. At the time of writing, the domain sci-hub.ga returns ‘ERR_SSL_VERSION_OR_CIPHER_MISMATCH’ while the .CN and .GS variants both have Comodo certificates that expired last year.

When TF first approached Comodo earlier this week, Sci-Hub’s certificates with the company hadn’t been completely wiped out. For example, the domain https://sci-hub.tw operated perfectly, with an active and non-revoked Comodo certificate.

Still in the game…but not for long

By Wednesday, however, the domain was returning the now-familiar “revoked” message.

These domain issues are the latest technical problems to hit Sci-Hub as a result of the ACS injunction. In February, Cloudflare terminated service to several of the site’s domains.

“Cloudflare will terminate your service for the following domains sci-hub.la, sci-hub.tv, and sci-hub.tw by disabling our authoritative DNS in 24 hours,” Cloudflare told Sci-Hub.

While ACS has certainly caused problems for Sci-Hub, the platform is extremely resilient and remains online.

The domains https://sci-hub.is and https://sci-hub.nu are fully operational with certificates issued by Let’s Encrypt, a free and open certificate authority supported by the likes of Mozilla, EFF, Chrome, Private Internet Access, and other prominent tech companies.

It’s unclear whether these certificates will be targeted in the future but Sci-Hub doesn’t appear to be in the mood to back down.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Friday Squid Blogging: Squid Prices Rise as Catch Decreases

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/friday_squid_bl_621.html

In Japan:

Last year’s haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.

Lighter catches have been blamed on changing sea temperatures, which impedes the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.

Wholesale prices of flying squid have climbed as a result. Last year’s average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Using AWS Lambda and Amazon Comprehend for sentiment analysis

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/using-aws-lambda-and-amazon-comprehend-for-sentiment-analysis/

This post courtesy of Giedrius Praspaliauskas, AWS Solutions Architect

Even with the best IVR systems, customers get frustrated. What if you knew that 10 callers in your Amazon Connect contact flow were likely to say “Agent!” in frustration in the next 30 seconds? Would you like to get to them before that happens? What if your bot was smart enough to admit, “I’m sorry this isn’t helping. Let me find someone for you.”?

In this post, I show you how to use AWS Lambda and Amazon Comprehend for sentiment analysis to make your Amazon Lex bots in Amazon Connect more sympathetic.

Setting up a Lambda function for sentiment analysis

There are multiple natural language and text processing frameworks or services available to use with Lambda, including but not limited to Amazon Comprehend, TextBlob, Pattern, and NLTK. Pick one based on the nature of your system: the type of interaction, languages supported, and so on. For this post, I picked Amazon Comprehend, which uses natural language processing (NLP) to extract insights and relationships in text.

The walkthrough in this post is just an example. In a full-scale implementation, you would likely implement a more nuanced approach. For example, you could keep the overall sentiment score through the conversation and act only when it reaches a certain threshold. It is worth noting that this Lambda function is not called for missed utterances, so there may be a gap between what is being analyzed and what was actually said.
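
To make the "overall score through the conversation" idea concrete, here is a hypothetical sketch of a running-average escalation check. It assumes you feed it the Negative probability from Comprehend's SentimentScore each turn; the session-attribute names (`turn_count`, `avg_negative`) and the 0.7 threshold are my own assumptions, not part of this walkthrough.

```python
# Hypothetical sketch of the "running score" approach: keep a
# per-session average of the negative-sentiment probability and
# escalate only once it crosses a threshold. Attribute names and the
# threshold value are assumptions for illustration.
ESCALATION_THRESHOLD = 0.7

def update_running_negative(session_attrs, negative_score):
    """Fold this turn's Negative score into a running average stored
    in the Lex session attributes (which must be strings)."""
    count = int(session_attrs.get("turn_count", "0")) + 1
    prev = float(session_attrs.get("avg_negative", "0"))
    avg = prev + (negative_score - prev) / count  # incremental mean
    session_attrs["turn_count"] = str(count)
    session_attrs["avg_negative"] = str(avg)
    return avg

def should_escalate(session_attrs, negative_score):
    return update_running_negative(session_attrs, negative_score) >= ESCALATION_THRESHOLD

attrs = {}
calm = should_escalate(attrs, 0.4)    # single mildly negative turn
angry = should_escalate(attrs, 0.95)  # pulls the running average up
```

A single bad turn no longer triggers an escalation; only a sustained negative trend does, which smooths over one-off misreads by the sentiment model.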

The Lambda function is straightforward. It analyzes the input transcript field of the Amazon Lex event. Based on the overall sentiment value, it generates a response message with next-step instructions. When the sentiment is neutral, positive, or mixed, the response leaves it to Amazon Lex to decide what the next steps should be. It also adds the overall sentiment value to the response as an additional session attribute, along with the slot values received as input.

When the overall sentiment is negative, the function returns the dialog action, pointing to an escalation intent (specified in the environment variable ESCALATION_INTENT_NAME) or returns the fulfillment closure action with a failure state when the intent is not specified. In addition to actions or intents, the function returns a message, or prompt, to be provided to the customer before taking the next step. Based on the returned action, Amazon Connect can select the appropriate next step in a contact flow.

For this walkthrough, you create a Lambda function using the AWS Management Console:

  1. Open the Lambda console.
  2. Choose Create Function.
  3. Choose Author from scratch (no blueprint).
  4. For Runtime, choose Python 3.6.
  5. For Role, choose Create a custom role. The custom execution role allows the function to detect sentiments, create a log group, stream log events, and store the log events.
  6. Enter the following values:
    • For Role Description, enter Lambda execution role permissions.
    • For IAM Role, choose Create an IAM role.
    • For Role Name, enter LexSentimentAnalysisLambdaRole.
    • For Policy, use the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Action": [
                "comprehend:DetectDominantLanguage",
                "comprehend:DetectSentiment"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
    7. Choose Create function.
    8. Copy/paste the following code into the editor window:
import os, boto3

ESCALATION_INTENT_MESSAGE="Seems that you are having troubles with our service. Would you like to be transferred to the associate?"
FULFILMENT_CLOSURE_MESSAGE="Seems that you are having troubles with our service. Let me transfer you to the associate."

escalation_intent_name = os.getenv('ESCALATION_INTENT_NAME', None)

client = boto3.client('comprehend')

def lambda_handler(event, context):
    sentiment=client.detect_sentiment(Text=event['inputTranscript'],LanguageCode='en')['Sentiment']
    if sentiment=='NEGATIVE':
        if escalation_intent_name:
            result = {
                "sessionAttributes": {
                    "sentiment": sentiment
                    },
                    "dialogAction": {
                        "type": "ConfirmIntent", 
                        "message": {
                            "contentType": "PlainText", 
                            "content": ESCALATION_INTENT_MESSAGE
                        }, 
                    "intentName": escalation_intent_name
                    }
            }
        else:
            result = {
                "sessionAttributes": {
                    "sentiment": sentiment
                },
                "dialogAction": {
                    "type": "Close",
                    "fulfillmentState": "Failed",
                    "message": {
                            "contentType": "PlainText",
                            "content": FULFILMENT_CLOSURE_MESSAGE
                    }
                }
            }

    else:
        result ={
            "sessionAttributes": {
                "sentiment": sentiment
            },
            "dialogAction": {
                "type": "Delegate",
                "slots" : event["currentIntent"]["slots"]
            }
        }
    return result
  9. Below the code editor, specify the environment variable ESCALATION_INTENT_NAME with a value of Escalate.

  10. Click Save in the top right of the console.

Now you can test your function.

  1. Click Test at the top of the console.
  2. Configure a new test event using the following test event JSON:
{
  "messageVersion": "1.0",
  "invocationSource": "DialogCodeHook",
  "userId": "1234567890",
  "sessionAttributes": {},
  "bot": {
    "name": "BookSomething",
    "alias": "None",
    "version": "$LATEST"
  },
  "outputDialogMode": "Text",
  "currentIntent": {
    "name": "BookSomething",
    "slots": {
      "slot1": "None",
      "slot2": "None"
    },
    "confirmationStatus": "None"
  },
  "inputTranscript": "I want something"
}
  3. Click Create.
  4. Click Test on the console.

This test event should return a response from Lambda with a sentiment session attribute of NEUTRAL.

However, if you change the input to “This is garbage!”, Lambda changes the dialog action to the escalation intent specified in the environment variable ESCALATION_INTENT_NAME.

Setting up Amazon Lex

Now that you have your Lambda function running, it is time to create the Amazon Lex bot. Use the BookTrip sample bot and call it BookSomething. The IAM role is automatically created on your behalf. Indicate that this bot is not subject to the COPPA, and choose Create. A few minutes later, the bot is ready.

Make the following changes to the default configuration of the bot:

  1. Add an intent with no associated slots. Name it Escalate.
  2. Specify the Lambda function for initialization and validation in the existing two intents (“BookCar” and “BookHotel”), at the same time giving Amazon Lex permission to invoke it.
  3. Leave the other configuration settings as they are and save the intents.

You are ready to build and publish this bot. Set a new alias, BookSomethingWithSentimentAnalysis. When the build finishes, test it.

As you see, sentiment analysis works!

Setting up Amazon Connect

Next, provision an Amazon Connect instance.

After the instance is created, you need to integrate the Amazon Lex bot created in the previous step. For more information, see the Amazon Lex section in the Configuring Your Amazon Connect Instance topic.  You may also want to look at the excellent post by Randall Hunt, New – Amazon Connect and Amazon Lex Integration.

Create a new contact flow, “Sentiment analysis walkthrough”:

  1. Log in to the Amazon Connect instance.
  2. Choose Create contact flow, Create transfer to agent flow.
  3. Add a Get customer input block, open the icon in the top left corner, and specify your Amazon Lex bot and its intents.
  4. Select the Text to speech audio prompt type and enter text for Amazon Connect to play at the beginning of the dialog.
  5. Choose Amazon Lex, enter your Amazon Lex bot name and the alias.
  6. Specify the intents to be used as dialog branches that a customer can choose: BookHotel, BookTrip, or Escalate.
  7. Add two Play prompt blocks and connect them to the customer input block.
    • If a hotel or car booking intent is returned from the bot flow, play the corresponding prompt (“OK, will book it for you”) and initiate booking (in this walkthrough, just hang up after the prompt).
    • However, if escalation intent is returned (caused by the sentiment analysis results in the bot), play the prompt (“OK, transferring to an agent”) and initiate the transfer.
  8. Save and publish the contact flow.

As a result, you have a contact flow with a single customer input step and a text-to-speech prompt that uses the Amazon Lex bot. You expect one of the three intents to be returned.

Edit the phone number to associate the contact flow that you just created. It is now ready for testing. Call the phone number and check how your contact flow works.

Cleanup

Don’t forget to delete all the resources created during this walkthrough to avoid incurring any more costs:

  • Amazon Connect instance
  • Amazon Lex bot
  • Lambda function
  • IAM role LexSentimentAnalysisLambdaRole

Summary

In this walkthrough, you implemented sentiment analysis with a Lambda function. The function can be integrated into Amazon Lex and, as a result, into Amazon Connect. This approach gives you the flexibility to analyze user input and then act. You may find the following potential use cases of this approach to be of interest:

  • Extend the Lambda function to identify “hot” topics in the user input even if the sentiment is not negative and take action proactively. For example, switch to an escalation intent if a user mentioned “where is my order,” which may signal potential frustration.
  • Use Amazon Connect Streams to provide agent sentiment analysis results along with call transfer. Enable service tailored towards particular customer needs and sentiments.
  • Route calls to agents based on both skill set and sentiment.
  • Prioritize calls based on sentiment using multiple Amazon Connect queues instead of transferring directly to an agent.
  • Monitor quality and flag for review contact flows that result in high overall negative sentiment.
  • Implement sentiment and AI/ML based call analysis, such as a real-time recommendation engine. For more details, see Machine Learning on AWS.

If you have questions or suggestions, please comment below.

New – Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR)

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/

The Amazon DynamoDB team is back with another useful feature hot on the heels of encryption at rest. At AWS re:Invent 2017 we launched global tables and on-demand backup and restore of your DynamoDB tables and today we’re launching continuous backups with point-in-time recovery (PITR).

You can enable continuous backups with a single click in the AWS Management Console, a simple API call, or with the AWS Command Line Interface (CLI). DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days. We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict. You can still keep your on-demand backups for as long as needed for archival purposes but PITR works as additional insurance against accidental loss of data. Let’s see how this works.

Continuous Backup

To enable this feature in the console we navigate to our table and select the Backups tab. From there simply click Enable to turn on the feature. I could also turn on continuous backups via the UpdateContinuousBackups API call.
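
The UpdateContinuousBackups API call mentioned above can be scripted. As a hedged sketch (the table name is a placeholder, and the actual boto3 call is left commented out because it requires AWS credentials), the request shape looks like this:

```python
# Sketch of enabling PITR programmatically via UpdateContinuousBackups.
# The table name is a placeholder; the commented-out boto3 call needs
# real AWS credentials, so this helper only builds the parameters.
def pitr_request(table_name, enabled=True):
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {
            "PointInTimeRecoveryEnabled": enabled,
        },
    }

params = pitr_request("VerySuperImportantTable")

# import boto3
# boto3.client("dynamodb").update_continuous_backups(**params)
```

Passing `enabled=False` builds the corresponding disable request, which, as noted later in this post, resets the start of the recovery window if you later reenable PITR.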

After continuous backup is enabled, we should be able to see an Earliest restore date and a Latest restore date.

Let’s imagine a scenario where I have a lot of old user profiles that I want to delete.

I really only want to send service updates to our active users based on their last_update date. I decided to write a quick Python script to delete all the users that haven’t used my service in a while.

import boto3
table = boto3.resource("dynamodb").Table("VerySuperImportantTable")
items = table.scan(
    FilterExpression="last_update >= :date",
    ExpressionAttributeValues={":date": "2014-01-01T00:00:00"},
    ProjectionExpression="ImportantId"
)['Items']
print("Deleting {} Items! Dangerous.".format(len(items)))
with table.batch_writer() as batch:
    for item in items:
        batch.delete_item(Key=item)

Great! This should delete all those pesky non-users of my service that haven’t logged in since 2013. So, I run it. Wait, CTRL+C CTRL+C CTRL+C CTRL+C (interrupting the currently executing command).

Yikes! Do you see where I went wrong? I’ve just deleted my most important users! Oh, no! Where I had a greater-than sign, I meant to put a less-than! Quick, before Jeff Barr can see, I’m going to restore the table. (I probably could have prevented that typo with Boto 3’s handy DynamoDB conditions: Attr("last_update").lt("2014-01-01T00:00:00"))
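
The sign mistake is easy to reproduce in miniature. This sketch (with two made-up user records) shows how the `>=` in the FilterExpression selected exactly the active users, the opposite of what was intended:

```python
# Pure-Python sketch of the sign mistake above (records are made up).
# The buggy filter kept users with last_update >= the cutoff -- the
# active users -- when the intent was to keep the stale ones.
CUTOFF = "2014-01-01T00:00:00"

users = [
    {"ImportantId": "stale-user",  "last_update": "2013-06-01T00:00:00"},
    {"ImportantId": "active-user", "last_update": "2018-03-01T00:00:00"},
]

# What the buggy FilterExpression selected (and the script deleted):
deleted = [u["ImportantId"] for u in users if u["last_update"] >= CUTOFF]

# What was intended (ISO-8601 strings compare chronologically):
intended = [u["ImportantId"] for u in users if u["last_update"] < CUTOFF]
```

Because ISO-8601 timestamps sort lexicographically, the string comparison is chronologically correct either way; only the direction of the operator was wrong.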

Restoring

Luckily for me, restoring a table is easy. In the console I’ll navigate to the Backups tab for my table and click Restore to point-in-time.

I’ll specify the time (a few seconds before I started my deleting spree) and a name for the table I’m restoring to.

For a relatively small and evenly distributed table like mine, the restore is quite fast.

The time it takes to restore a table varies based on multiple factors, and restore times are not necessarily correlated with the size of the table. If your dataset is evenly distributed across your primary keys, you’ll be able to take advantage of parallelization, which speeds up your restores.

Learn More & Try It Yourself
There’s plenty more to learn about this new feature in the DynamoDB documentation.

Pricing for continuous backups varies by region and is based on the current size of the table and all indexes.

A few things to note:

  • PITR works with encrypted tables.
  • If you disable PITR and later reenable it, you reset the start time from which you can recover.
  • Just like on-demand backups, there are no performance or availability impacts to enabling this feature.
  • Stream settings, Time To Live settings, PITR settings, tags, Amazon CloudWatch alarms, and auto scaling policies are not copied to the restored table.
  • Jeff, it turns out, knew I restored the table all along because every PITR API call is recorded in AWS CloudTrail.
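
Putting the 35-day window and the reenable-reset rule from the notes above together, the earliest point you can restore to is the later of the PITR enable time and 35 days ago. A small sketch (with made-up timestamps) of that window arithmetic:

```python
# Sketch of the recovery window implied by the notes above: you can
# restore to any second between max(PITR enable time, 35 days ago)
# and now. The timestamps are made up for illustration.
from datetime import datetime, timedelta

WINDOW = timedelta(days=35)

def earliest_restore(enabled_at, now):
    return max(enabled_at, now - WINDOW)

now = datetime(2018, 4, 1)

# Enabled long ago: the window is capped at 35 days back.
assert earliest_restore(datetime(2018, 1, 1), now) == now - WINDOW

# Reenabled recently: disabling and reenabling reset the clock.
assert earliest_restore(datetime(2018, 3, 20), now) == datetime(2018, 3, 20)
```

This is why the reenable caveat matters: toggling PITR off and on discards the portion of the window that predates the reenable.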

Let us know how you’re going to use continuous backups and PITR on Twitter and in the comments.
Randall

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/750150/rss

Security updates have been issued by Arch Linux (bchunk, thunderbird, and xerces-c), Debian (freeplane, icu, libvirt, and net-snmp), Fedora (monitorix, php-simplesamlphp-saml2, php-simplesamlphp-saml2_1, php-simplesamlphp-saml2_3, puppet, and qt5-qtwebengine), openSUSE (curl, libmodplug, libvorbis, mailman, nginx, opera, python-paramiko, and samba, talloc, tevent), Red Hat (python-paramiko, rh-maven35-slf4j, rh-mysql56-mysql, rh-mysql57-mysql, rh-ruby22-ruby, rh-ruby23-ruby, and rh-ruby24-ruby), Slackware (thunderbird), SUSE (clamav, kernel, memcached, and php53), and Ubuntu (samba and tiff).

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/750037/rss

Security updates have been issued by Debian (adminer, isc-dhcp, kamailio, libvorbisidec, plexus-utils2, and simplesamlphp), Fedora (exim and glibc-arm-linux-gnu), Mageia (sqlite3), openSUSE (Chromium, kernel, and qemu), SUSE (memcached), and Ubuntu (sharutils).

Astro Pi upgrades launch today!

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-upgrades-launch/

Before our beloved SpaceDave left the Raspberry Pi Foundation to join the ranks of the European Space Agency (ESA) — and no, we’re still not jealous *ahem* — he kindly drafted us one final blog post about the Astro Pi upgrades heading to the International Space Station today! So here it is. Enjoy!

We are very excited to announce that Astro Pi upgrades are on their way to the International Space Station! Back in September, we blogged about a small payload being launched to the International Space Station to upgrade the capabilities of our Astro Pi units.

Astro Pi Raspberry Pi International Space Station

Sneak peek

For the longest time, the payload was scheduled to launch on SpaceX CRS-14 in February. However, the launch was delayed to April, which impacted the flight operations we had planned for running Mission Space Lab student experiments.

To avoid this, ESA had the payload transferred to Russian Soyuz MS-08 (54S), which is launching today to carry crew members Oleg Artemyev, Andrew Feustel, and Ricky Arnold to the ISS.

Ricky Arnold on Twitter

L-47 hours.

You can watch coverage of the launch on NASA TV from 4.30pm GMT this afternoon, with the launch scheduled for 5.44pm GMT. Check the NASA TV schedule for updates.

The upgrades

The pictures below show the flight hardware in its final configuration before loading onto the launch vehicle.

Wireless dongle in bag — Astro Pi upgrades

All access

With the wireless dongle, the Astro Pi units can be deployed in ISS locations other than the Columbus module, where they don’t have access to an Ethernet switch.

We are also sending some flexible optical filters. These are made from the same material as the blue square which is shipped with the Raspberry Pi NoIR Camera Module.

Optical filters in bag — Astro Pi upgrades

#bluefilter

So that future Astro Pi code will need to command fewer windows to download earth observation imagery to the ground, we’re also including some 32GB micro SD cards to replace the current 8GB cards.

Micro SD cards in bag — Astro Pi upgrades

More space in space

The items above are enclosed in a large 8″ ziplock bag that has been designated the “AstroPi Kit”.

bag of Astro Pi upgrades

It’s ziplock bags all the way down… er, up

Once the Soyuz docks with the ISS, this payload is one of the first which will be unpacked, so that the Astro Pi units can be upgraded and deployed ready to run your experiments!

More Astro Pi

Stay tuned for our next update in April, when student code is set to be run on the Astro Pi units as part of our Mission Space Lab programme. And to find out more about Astro Pi, head to the programme website.

The post Astro Pi upgrades launch today! appeared first on Raspberry Pi.