Tag Archives: winner

C is too low level

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/c-is-too-low-level.html

I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, one where CPU designers instead target a language like LISP or Erlang as their preferred language. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.


The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on a larger number of slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here’s the thing, the Sparc Tx was a failure. To be fair, it’s mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, which are deeply out-of-order, supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores; it was replaced with the ThunderX2, which has 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores is a failure; you need fewer, beefier cores. Yes, they don’t need to be as beefy as Intel’s processors, but they need to be close.

Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads per core and focuses on basic number crunching, so in-order cores seem to be the right approach, but Intel found that even in this case, out-of-order processing still provided a benefit. Practice is different from theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map to assembly instructions). Programmers write libraries using these intrinsics, which the rest of us then use. In other words, even if your criticism is that C is not itself low level enough, it still provides the best access to low-level capabilities.

Given that C can access new functionality in CPUs, CPU designers add new paradigms, from SIMD to transactional memory. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.

The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many simple cores/threads idea as hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits every clock cycle from L1 cache. Main memory bandwidth is orders of magnitude slower.

The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks, needing to send only small messages to each other across threads. The problem C solves is when you need many threads working on a huge, common set of data.

For example, consider the “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use, it’s the fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities (terabytes worth) shared among threads rather than small quantities (kilobytes).

The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply those additional transistors to just one thing, because the useful returns we get diminish.

For example, Intel CPUs have two hardware threads per core. That’s because there are good returns by adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads, or sometimes four threads, but no CPU has 16 threads per core.

You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger with every process tick that adds more transistors to the chip.

The same applies to cores. It’s why the “many simpler cores” strategy fails: more cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. Adding cache already increases the size of the cores, so at some point it’s more effective to add a few out-of-order features to each core rather than more cores. And so on.

The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for those additional transistors. For example, “message passing” is a useful abstraction in languages like Go and Erlang that’s often more useful than sharing memory. It’s implemented with shared memory and atomic instructions, but I can’t help thinking it could be done better with direct hardware support.
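To make that layering concrete, here’s a toy sketch in Python (my own illustration, not anything from the article) of a message-passing channel built from nothing more than shared memory and a lock, which is roughly the trick Go channels and Erlang mailboxes use under the hood:

```python
import threading
from collections import deque

class Channel:
    """A toy message-passing channel built on shared memory plus a
    lock, standing in for the atomic instructions mentioned above."""
    def __init__(self):
        self._items = deque()                # the shared memory
        self._ready = threading.Condition()  # lock + wakeup signal

    def send(self, msg):
        with self._ready:
            self._items.append(msg)
            self._ready.notify()

    def recv(self):
        with self._ready:
            while not self._items:
                self._ready.wait()
            return self._items.popleft()

ch = Channel()
threading.Thread(target=lambda: ch.send("hello")).start()
print(ch.recv())  # prints "hello"
```

Dedicated hardware support would, in effect, replace the lock and queue shuffling above with direct instructions.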

Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.

Summary

Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions available to C. They are driven by decreasing marginal returns — they would love to add new abstractions to the hardware, because it’s a cheap way to make use of additional transistors. Academics are wrong to believe that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions CPU designers can add.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t do away with anonymity, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.
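To give a feel for how such an audit scales with the margin (my own back-of-the-envelope illustration, not from the essay), a commonly cited approximation for ballot-polling risk-limiting audits is that the expected sample size grows roughly as 2·ln(1/α)/margin²:

```python
import math

def approx_sample_size(margin: float, risk_limit: float = 0.1) -> int:
    """Rough expected number of ballots for a ballot-polling
    risk-limiting audit. A back-of-the-envelope approximation only;
    real audits (e.g. BRAVO) use sequential tests and may stop
    earlier or require more ballots."""
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

# Landslides need only a few hundred ballots; squeakers need far more.
for margin in (0.20, 0.10, 0.01):
    print(f"margin {margin:.0%}: ~{approx_sample_size(margin)} ballots")
```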

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The Constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Public Lab and Karen Sandler are 2017 Free Software Awards winners

Post Syndicated from ris original https://lwn.net/Articles/750153/rss

The Free Software Foundation (FSF) announced the winners of the 2017 Free Software Awards during LibrePlanet. “Public Lab is a community and non-profit organization with the goal of democratizing science to address environmental issues. Their community-created tools and techniques utilize free software and low-cost devices to enable people at any level of technical skill to investigate environmental concerns.” The organization received the Award for Projects of Social Benefit. Karen Sandler, the Executive Director of the Software Freedom Conservancy, received the Award for the Advancement of Free Software.

Mission Space Lab flight status announced!

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/mission-space-lab-flight-status-announced/

In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA member and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).


Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:

  • Life in space
    Making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
  • Life on Earth
    Making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.

ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Center in Cologne, has a message for all Mission Space Lab participants:

ESA astronaut Alexander Gerst congratulates Astro Pi 2017-18 winners


Flight status

We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!

But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive their experimental data back. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.

Belgium

Flight status achieved:

  • Team De Vesten, Campus De Vesten, Antwerpen
  • Ursa Major, CoderDojo Belgium, West-Vlaanderen
  • Special operations STEM, Sint-Claracollege, Antwerpen

Canada

Flight status achieved:

  • Let It Grow, Branksome Hall, Toronto
  • The Dark Side of Light, Branksome Hall, Toronto
  • Genie On The ISS, Branksome Hall, Toronto
  • Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
  • The Broadviewnauts, Broadview, Ottawa

Czech Republic

Flight status achieved:

  • BLEK, Střední Odborná Škola Blatná, Strakonice

Denmark

Flight status achieved:

  • 2y Infotek, Nærum Gymnasium, Nærum
  • Equation Quotation, Allerød Gymnasium, Lillerød
  • Team Weather Watchers, Allerød Gymnasium, Allerød
  • Space Gardners, Nærum Gymnasium, Nærum

Finland

Flight status achieved:

  • Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää

France

Flight status achieved:

  • INC2, Lycée Raoul Follereau, Bourgogne
  • Space Project SP4, Lycée Saint-Paul IV, Reunion Island
  • Dresseurs2Python, clg Albert CAMUS, essonne
  • Lazos, Lycée Aux Lazaristes, Rhone
  • The space nerds, Lycée Saint André Colmar, Alsace
  • Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie
  • AstroMega, Institut de Genech, north
  • Al’Crew, Lycée Algoud-Laffemas, Auvergne-Rhône-Alpes
  • AstroPython, clg Albert CAMUS, essonne
  • Aruden Corp, Lycée Pablo Neruda, Normandie
  • HeroSpace, clg Albert CAMUS, essonne
  • GalaXess [R]evolution, Lycée Saint Cricq, Nouvelle-Aquitaine
  • AstroBerry, clg Albert CAMUS, essonne
  • Ambitious Girls, Lycée Adam de Craponne, PACA

Germany

Flight status achieved:

  • Uschis, St. Ursula Gymnasium Freiburg im Breisgau, Breisgau
  • Dosi-Pi, Max-Born-Gymnasium Germering, Bavaria

Greece

Flight status achieved:

  • Deep Space Pi, 1o Epal Grevenon, Grevena
  • Flox Team, 1st Lyceum of Kifissia, Attiki
  • Kalamaria Space Team, Second Lyceum of Kalamaria, Central Macedonia
  • The Earth Watchers, STEM Robotics Academy, Thessaly
  • Celestial_Distance, Gymnasium of Kanithos, Sterea Ellada – Evia
  • Pi Stars, Primary School of Rododaphne, Achaias
  • Flarions, 5th Primary School of Salamina, Attica

Ireland

Flight status achieved:

  • Plant Parade, Templeogue College, Leinster
  • For Peats Sake, Templeogue College, Leinster
  • CoderDojo Clonakilty, Co. Cork

Italy

Flight status achieved:

  • Trentini DOP, CoderDojo Trento, TN
  • Tarantino Space Lab, Liceo G. Tarantino, BA
  • Murgia Sky Lab, Liceo G. Tarantino, BA
  • Enrico Fermi, Liceo XXV Aprile, Veneto
  • Team Lampone, CoderDojoTrento, TN
  • GCC, Gali Code Club, Trentino Alto Adige/Südtirol
  • Another Earth, IISS “Laporta/Falcone-Borsellino”
  • Anti Pollution Team, IIS “L. Einaudi”, Sicily
  • e-HAND, Liceo Statale Scientifico e Classico ‘Ettore Majorana’, Lombardia
  • scossa team, ITTS Volterra, Venezia
  • Space Comet Sisters, Scuola don Bosco, Torino

Luxembourg

Flight status achieved:

  • Spaceballs, Atert Lycée Rédange, Diekirch
  • Aline in space, Lycée Aline Mayrisch Luxembourg (LAML)

Poland

Flight status achieved:

  • AstroLeszczynPi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Astrokompasy, High School nr XVII in Wrocław named after Agnieszka Osiecka, Lower Silesian
  • Cosmic Investigators, Publiczna Szkoła Podstawowa im. Św. Jadwigi Królowej w Rzezawie, Małopolska
  • ApplePi, III Liceum Ogólnokształcące im. prof. T. Kotarbińskiego w Zielonej Górze, Lubusz Voivodeship
  • ELE Society 2, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • ELE Society 1, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • SpaceOn, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Dewnald Ducks, III Liceum Ogólnokształcące w Zielonej Górze, lubuskie
  • Nova Team, III Liceum Ogolnoksztalcace im. prof. T. Kotarbinskiego, lubuskie district
  • The Moons, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Live, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Storm Hunters, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • DeepSky, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Small Explorers, ZPO Konina, Malopolska
  • AstroZSCL, Zespół Szkół w Czerwionce-Leszczynach, śląskie
  • Orchestra, Szkola Podstawowa nr 12 w Jasle, Podkarpackie
  • ApplePi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Green Crew, Szkoła Podstawowa nr 2 w Czeladzi, Silesia

Portugal

Flight status achieved:

  • Magnetics, Escola Secundária João de Deus, Faro
  • ECA_QUEIROS_PI, Secondary School Eça de Queirós, Lisboa
  • ESDMM Pi, Escola Secundária D. Manuel Martins, Setúbal
  • AstroPhysicists, EB 2,3 D. Afonso Henriques, Braga

Romania

Flight status achieved:

  • Caelus, “Tudor Vianu” National High School of Computer Science, District One
  • CodeWarriors, “Tudor Vianu” National High School of Computer Science, District One
  • Dark Phoenix, “Tudor Vianu” National High School of Computer Science, District One
  • ShootingStars, “Tudor Vianu” National High School of Computer Science, District One
  • Astro Pi Carmen Sylva 2, Liceul Teoretic “Carmen Sylva”, Constanta
  • Astro Meridian, Astro Club Meridian 0, Bihor

Slovenia

Flight status achieved:

  • astrOSRence, OS Rence
  • Jakopičevca, Osnovna šola Riharda Jakopiča, Ljubljana

Spain

Flight status achieved:

  • Exea in Orbit, IES Cinco Villas, Zaragoza
  • Valdespartans, IES Valdespartera, Zaragoza
  • Valdespartans2, IES Valdespartera, Zaragoza
  • Astropithecus, Institut de Bruguers, Barcelona
  • SkyPi-line, Colegio Corazón de María, Asturias
  • ClimSOLatic, Colegio Corazón de María, Asturias
  • Científicosdelsaz, IES Profesor Pablo del Saz, Málaga
  • Canarias 2, IES El Calero, Las Palmas
  • Dreamers, M. Peleteiro, A Coruña
  • Canarias 1, IES El Calero, Las Palmas

The Netherlands

Flight status achieved:

  • Team Kaki-FM, Rkbs De Reiger, Noord-Holland

United Kingdom

Flight status achieved:

  • Binco, Teignmouth Community School, Devon
  • 2200 (Saddleworth), Detached Flight Royal Air Force Air Cadets, Lancashire
  • Whatevernext, Albyn School, Highlands
  • GraviTeam, Limehurst Academy, Leicestershire
  • LSA Digital Leaders, Lytham St Annes Technology and Performing Arts College, Lancashire
  • Mead Astronauts, Mead Community Primary School, Wiltshire
  • STEAMCademy, Castlewood Primary School, West Sussex
  • Lux Quest, CoderDojo Banbridge, Co. Down
  • Temparatus, Dyffryn Taf, Carmarthenshire
  • Discovery STEMers, Discovery STEM Education, South Yorkshire
  • Code Inverness, Code Club Inverness, Highland
  • JJB, Ashton Sixth Form College, Tameside
  • Astro Lab, East Kent College, Kent
  • The Life Savers, Scratch and Python, Middlesex
  • JAAPiT, Taylor Household, Nottingham
  • The Heat Guys, The Archer Academy, Greater London
  • Astro Wantenauts, Wantage C of E Primary School, Oxfordshire
  • Derby Radio Museum, Radio Communication Museum of Great Britain, Derbyshire
  • Bytesyze, King’s College School, Cambridgeshire

Other

Flight status achieved:

  • Intellectual Savage Stars, Lycée français de Luanda, Luanda

 

Congratulations to all successful teams! We are looking forward to reading your reports.

The post Mission Space Lab flight status announced! appeared first on Raspberry Pi.

Community Profile: Dr. Lucy Rogers

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-lucy-rogers/

This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

Dr Lucy Rogers calls herself a Transformer. “I transform simple electronics into cool gadgets, I transform science into plain English, I transform problems into opportunities. I am also a catalyst. I am interested in everything around me, and can often see ways of putting two ideas from very different fields together into one package. If I cannot do this myself, I connect the people who can.”


Among many other projects, Dr Lucy Rogers currently focuses much of her attention on reducing the damage from space debris

It’s a pretty wide range of interests and skills for sure. But it only takes a brief look at Lucy’s résumé to realise that she means it. When she says she’s interested in everything around her, this interest reaches from electronics to engineering, wearable tech, space, robotics, and robotic dinosaurs. And she can be seen talking about all of these things across various companies’ social media, such as IBM, websites including the Women’s Engineering Society, and books, including her own.


With her bright LED boots, Lucy was one of the wonderful Pi community members invited to join us and HRH The Duke of York at St James’s Palace just over a year ago

When not attending conferences as guest speaker, tinkering with electronics, or creating engaging IoT tutorials, she can be found retrofitting Raspberry Pis into the aforementioned robotic dinosaurs at Blackgang Chine Land of Imagination, writing, and judging battling bots for the BBC’s Robot Wars.


First broadcast in the UK between 1998 and 2004, Robot Wars was revived in 2016 with a new look and new judges, including Dr Lucy Rogers. Competitors battle their home-brew robots, and Lucy, together with the other two judges, awards victories among the carnage of robotic remains

Lucy graduated from Lancaster University with a degree in Mechanical Engineering. After that, she spent seven years at Rolls-Royce Industrial Power Group as a graduate trainee before becoming a chartered engineer and earning her PhD in bubbles.

Bubbles?

“Foam formation in low‑expansion fire-fighting equipment. I investigated the equipment to determine how the bubbles were formed,” she explains. Obviously. Bubbles!


Lucy graduated from the Singularity University Graduate Studies Program in 2011, focusing on how robotics, nanotech, medicine, and various technologies can tackle the challenges facing the world

She then went on to become a fellow of the Royal Astronomical Society (RAS) in 2005 and, later, a fellow of both the Institution of Mechanical Engineers (IMechE) and British Interplanetary Society. As a member of the Association of British Science Writers, Lucy wrote It’s ONLY Rocket Science: an Introduction in Plain English.


In It’s Only Rocket Science: An Introduction in Plain English Lucy explains that ‘hard to understand’ isn’t the same as ‘impossible to understand’, and takes her readers through the journey of building a rocket, leaving Earth, and travelling the cosmos

As a standout member of the industry, and all-round fun person to be around, Lucy has quickly established herself as a valued member of the Pi community.

In 2014, with the help of Neil Ford and Andy Stanford-Clark, Lucy worked with the UK’s oldest amusement park, Blackgang Chine Land of Imagination, on the Isle of Wight, with the aim of updating its animatronic dinosaurs. The original Blackgang Chine dinosaurs had a limited range of behaviour: able to roar, move their heads, and stomp a foot in a somewhat repetitive action.

When she contacted Raspberry Pi back in November of that same year, the team were working on more creative, varied behaviours, giving each dinosaur a new Raspberry Pi-sized brain. This later evolved into a very successful dino-hacking Raspberry Jam.


Lucy, Neil Ford, and Andy Stanford-Clark used several Raspberry Pis and Node-RED to visualise flows of events when updating the robotic dinosaurs at Blackgang Chine. They went on to create the successful WightPi Raspberry Jam event, where visitors could join in with the unique hacking opportunity.

Given her love for tinkering with tech, and a love for stand-up comedy that can be uncovered via a quick YouTube search, it’s no wonder that Lucy was asked to help judge the first round of the ‘Make us laugh’ Pioneers challenge for Raspberry Pi. Alongside comedian Bec Hill, Code Club UK director Maria Quevedo, and the face of the first challenge, Owen Daughtery, Lucy lent her expertise to help name winners in the various categories of the teens event, and offered her support to future Pioneers.

The post Community Profile: Dr. Lucy Rogers appeared first on Raspberry Pi.

Raspberry Pi-newood Derby

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pinewood-derby/

Andre Miron’s Pinewood Derby Instant Replay System (sorry, not sorry for the pun in the title) uses a Raspberry Pi to monitor the finishing line and play back a slow-motion instant replay, putting an end to “No, I won!” squabbles once and for all.

Raspberry Pi Based Pinewood Derby Instant Replay Demo

This is the same system I demo in this video (https://youtu.be/-QyMxKfBaAE), but on our actual track with real pinewood derby cars. Glad to report that it works great!

Pinewood Derby

For those unfamiliar with the term, the Pinewood Derby is a racing event for Cub Scouts in the USA. Cub Scouts, often with the help of a guardian, build race cars out of wood according to rules regarding weight, size, materials, etc.


The Cubs then race their cars in heats, with the winners advancing to district and council races.

Who won?

Andre’s Instant Replay System registers the race cars as they cross the finishing line, and it plays back slow-motion video of the crossing on a monitor. As he explains on YouTube:

The Pi is recording a constant stream of video, and when the replay is triggered, it records another half-second of video, then takes the last second and a half and saves it in slow motion (recording is done at 90 fps), before replaying.

The build also uses an attached Arduino, connected to GPIO pin 5, to trigger the recording and playback as it registers the passing cars via a voltage splitter. Additionally, the system announces the finishing places on a rather attractive-looking display above the finishing line.
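Andre shares his complete code via the video description below; as a rough sketch of the approach (not his actual code — the pin number comes from his description, but filenames and parameters here are illustrative), the Pi-side loop might look something like this using the picamera library’s circular buffer:

```python
# Illustrative sketch only: assumes the picamera and RPi.GPIO libraries.
import picamera
import RPi.GPIO as GPIO

TRIGGER_PIN = 5  # driven by the Arduino timer through the voltage splitter

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

with picamera.PiCamera(resolution=(640, 480), framerate=90) as camera:
    # Keep the last couple of seconds of H.264 video in a RAM ring buffer.
    stream = picamera.PiCameraCircularIO(camera, seconds=2)
    camera.start_recording(stream, format='h264')
    try:
        while True:
            GPIO.wait_for_edge(TRIGGER_PIN, GPIO.RISING)  # car crossed the line
            camera.wait_recording(0.5)                    # grab another half second
            stream.copy_to('replay.h264', seconds=1.5)    # save the last 1.5 s
            # 90 fps footage played back at 30 fps runs at one-third speed.
    finally:
        camera.stop_recording()
        GPIO.cleanup()
```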


The result? No more debate about whose car crossed the line first in neck-and-neck races.

Build your own

Andre takes us through the physical setup of the build in the video below, and you’ll find the complete code pasted in the description of the video here. Thanks, Andre!

Raspberry Pi based Pinewood Derby Instant Replay System

See the system on our actual track here: https://youtu.be/B3lcQHWGq88 Raspberry Pi based instant replay system, triggered by Arduino Pinewood Derby Timer. The Pi uses GPIO pin 5 attached to a voltage splitter on Arduino output 11 (and ground-ground) to detect when a car crosses the finish line, which triggers the replay.

Digital making in your club

If you’re a member of an after-school association such as the Scouts or Guides, then using the Raspberry Pi and our free project resources, or visiting a Code Club or CoderDojo, is an excellent way to work towards various badges and awards. So talk to your club leader to discover all the ways in which you can incorporate digital making into your club!

The post Raspberry Pi-newood Derby appeared first on Raspberry Pi.

Bitcoin: In Crypto We Trust

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/bitcoin-in-crypto-we-trust.html

Tim Wu, who coined the term “net neutrality”, has written an op-ed in the New York Times called “The Bitcoin Boom: In Code We Trust”. He is wrong about “code”.

The wrong “trust”

Wu builds a big manifesto about how real-world institutions can’t be trusted. Certainly, this reflects the rhetoric from a vocal wing of Bitcoin fanatics, but it’s not the Bitcoin manifesto.

Instead, the word “trust” in the Bitcoin paper is much narrower, referring to how online merchants can’t trust credit-cards (for example). When I bought school supplies for my niece when she studied in Canada, the online site wouldn’t accept my U.S. credit card. They didn’t trust my credit card. However, they trusted my Bitcoin, so I used that payment method instead, and succeeded in the purchase.

Real-world currencies like dollars are tethered to the real world, which means no single transaction can be trusted, because “they” (the credit-card company, the courts, etc.) may decide to reverse the transaction. The manifesto behind Bitcoin is that a transaction cannot be reversed — and thus can always be trusted.

Deliberately confusing the micro-trust in a transaction and macro-trust in banks and governments is a sort of bait-and-switch.

The wrong inspiration

Wu claims:

“It was, after all, a carnival of human errors and misfeasance that inspired the invention of Bitcoin in 2009, namely, the financial crisis.”

Not true. Bitcoin did not appear fully formed out of the void, but was instead based upon a series of innovations that predate the financial crisis by a decade. Moreover, the financial crisis had little to do with “currency”. The value of the dollar and other major currencies was essentially unscathed by the crisis. Certainly, enthusiasts looking backward like to cherry-pick the financial crisis as yet one more reason why the offline world sucks, but it had little to do with Bitcoin.

In crypto we trust

It’s not in code that Bitcoin trusts, but in crypto. Satoshi makes that clear in one of his posts on the subject:

A generation ago, multi-user time-sharing computer systems had a similar problem. Before strong encryption, users had to rely on password protection to secure their files, placing trust in the system administrator to keep their information private. Privacy could always be overridden by the admin based on his judgment call weighing the principle of privacy against other concerns, or at the behest of his superiors. Then strong encryption became available to the masses, and trust was no longer required. Data could be secured in a way that was physically impossible for others to access, no matter for what reason, no matter how good the excuse, no matter what.

You don’t possess Bitcoins. Instead, all the coins are on the public blockchain under your “address”. What you possess is the secret, private key that matches the address. Transferring Bitcoin means using your private key to unlock your coins and transfer them to another. If you print out your private key on paper, and delete it from the computer, it can never be hacked.

Trust is in this crypto operation. Trust is in your private crypto key.
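As a concrete sketch of that relationship (my illustration, assuming the third-party ecdsa Python package; real wallets add Base58Check address encoding and many other details):

```python
import hashlib
import ecdsa  # third-party package implementing elliptic-curve signatures

# The secret you possess: a random private key on Bitcoin's curve, secp256k1.
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)

# The public key is derived from it; going the other way is infeasible.
public_key = b'\x04' + private_key.get_verifying_key().to_string()

# A Bitcoin address is, roughly, a hash of the public key.
# (hashlib's ripemd160 requires OpenSSL support.)
address_hash = hashlib.new('ripemd160',
                           hashlib.sha256(public_key).digest()).digest()
print("address hash:", address_hash.hex())

# Spending coins means producing a signature only the key holder can make.
signature = private_key.sign(b'transaction data')
assert private_key.get_verifying_key().verify(signature, b'transaction data')
```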

We don’t trust the code

The manifesto “in code we trust” has been proven wrong again and again. We don’t trust computer code (software) in the cryptocurrency world.

The most profound example is something known as the “DAO” on top of Ethereum, Bitcoin’s major competitor. Ethereum allows “smart contracts” containing code. The quasi-religious manifesto of the DAO smart-contract is that the “code is the contract”, that all the terms and conditions are specified within the smart-contract code, completely untethered from real-world terms-and-conditions.

Then a hacker found a bug in the DAO smart-contract and stole most of the money.

In principle, this is perfectly legal, because “the code is the contract”, and the hacker just used the code. In practice, the system didn’t live up to this. The Ethereum core developers, acting as central bankers, rewrote the Ethereum code to fix this one contract, returning the money back to its original owners. They did this because those core developers were themselves heavily invested in the DAO and got their money back.

Similar things happen with the original Bitcoin code. A disagreement has arisen about how to expand Bitcoin to handle more transactions. One group wants smaller and “off-chain” transactions. Another group wants a “large blocksize”. This caused a “fork” in Bitcoin with two versions, “Bitcoin” and “Bitcoin Cash”. The fork championed by the core developers (central bankers) is worth around $20,000 right now, while the other fork is worth around $2,000.

So it’s still “in central bankers we trust”, it’s just that now these central bankers are mostly online instead of offline institutions. They have proven to be even more corrupt than real-world central bankers. It’s certainly not the code that is trusted.

The bubble

Wu repeats the well-known reference to Amazon during the dot-com bubble. If you had bought Amazon’s stock for $107 right before the dot-com crash, it would still have been one of the wisest investments you could’ve made. Amazon shares are now worth around $1,200 each.

The implication is that Bitcoin, too, may have such long term value. Even if you buy it today and it crashes tomorrow, it may still be worth ten-times its current value in another decade or two.

This is a poor analogy, for three reasons.

The first reason is that we knew the Internet had fundamentally transformed commerce. We knew there were going to be winners in the long run, it was just a matter of picking who would win (Amazon) and who would lose (Pets.com). We have yet to prove Bitcoin will be similarly transformative.

The second reason is that businesses are real, they generate real income. While the stock price may include some irrational exuberance, it’s ultimately still based on the rational expectations of how much the business will earn. With Bitcoin, it’s almost entirely irrational exuberance — there are no long term returns.

The third flaw in the analogy is that there is an essentially infinite number of cryptocurrencies. We saw this today as Coinbase started trading Bitcoin Cash, a fork of Bitcoin. The two are nearly identical, so there’s little reason one should be so much more valuable than another. It’s only a fickle fad that makes one more valuable than the other, not business fundamentals. The successful future cryptocurrency is unlikely to exist today, but will be invented in the future.

The lesson of the dot-com bubble is not that Bitcoin will have long-term value, but that cryptocurrency companies like Coinbase and BitPay will have long-term value. Or, the lesson is that “old” companies like JPMorgan that are early adopters of the technology will grow faster than their competitors.

Conclusion

The point of Wu’s op-ed is to distinguish between trust in traditional real-world institutions and trust in computer software code. This is an inaccurate reading of the situation.

Bitcoin is not about replacing real-world institutions but about untethering online transactions.

The trust in Bitcoin is in crypto — the power crypto gives individuals instead of third-parties.

The trust is not in the code. Bitcoin is a “cryptocurrency” not a “codecurrency”.

Pioneers winners: only you can save us

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/pioneers-winners-only-you-can-save-us/

She asked for help, and you came to her aid. Pioneers, the winners of the Only you can save us challenge have been picked!

Can you see me? Only YOU can save us!

I need your help. This is a call out for those between 11- and 16-years-old in the UK and Republic of Ireland. Something has gone very, very wrong and only you can save us. I’ve collected together as much information for you as I can. You’ll find it at http://www.raspberrypi.org/pioneers.

The challenge

In August we intercepted an emergency communication from a lonesome survivor. She seemed to be in quite a bit of trouble, and asked all you young people aged 11 to 16 to come up with something to help tackle the oncoming crisis, using whatever technology you had to hand. You had ten weeks to work in teams of two to five with an adult mentor to fulfil your mission.

The judges

We received your world-saving ideas, and our savvy survivor pulled together a ragtag bunch of apocalyptic experts to help us judge which ones would be the winning entries.

Dr Shini Somara

Dr Shini Somara is an advocate for STEM education and a mechanical engineer. She was host of The Health Show and has appeared in documentaries for the BBC, PBS Digital, and Sky. You can check out her work hosting Crash Course Physics on YouTube.

Prof Lewis Dartnell is an astrobiologist and author of the book The Knowledge: How to Rebuild Our World From Scratch.

Emma Stephenson has a background in aeronautical engineering and currently works in the Shell Foundation’s Access to Energy and Sustainable Mobility portfolio.

Currently sifting through the entries with the other judges of #makeyourideas with @raspberrypifoundation @_raspberrypi_


The winners

Our survivor is currently putting your entries to good use repairing, rebuilding, and defending her base. Our judges chose the following projects as outstanding examples of world-saving digital making.

Theme winner: Computatron

Raspberry Pioneers 2017 – Nerfus Dislikus Killer Robot

This is our entry to the pioneers ‘Only you can save us’ competition. Our team name is Computatrum. Hope you enjoy!

Are you facing an unknown enemy whose only weakness is Nerf bullets? Then this is the robot for you! We loved the especially apocalyptic feel of the Computatron’s cleverly hacked and repurposed elements. The team even used an old floppy disc mechanism to help fire their bullets!

Technically brilliant: Robot Apocalypse Committee

Pioneers Apocalypse 2017 – RationalPi

Thousands of lines of code… Many sheets of acrylic… A camera, touchscreen and fingerprint scanner… This is our entry into the Raspberry Pi Pioneers2017 ‘Only YOU can Save Us’ theme. When zombies or other survivors break into your base, you want a secure way of storing your crackers.

The Robot Apocalypse Committee is back, and this time they’ve brought cheese! The crew designed a cheese- and cracker-dispensing machine complete with face and fingerprint recognition to ensure those rations last until the next supply drop.

Best explanation: Pi Chasers

Tala – Raspberry Pi Pioneers Project

Hi! We are PiChasers and we entered the Raspberry Pi Pionners challenge last time when the theme was “Make it Outdoors!” but now we’ve been faced with another theme “Apocolypse”. We spent a while thinking of an original thing that would help in an apocolypse and decided upon a ‘text-only phone’ which uses local radio communication rather than cellular.

This text-based communication device encased in a tupperware container could be a lifesaver in a crisis! And luckily, the Pi Chasers produced an excellent video and amazing GitHub repo, ensuring that any and all survivors will be able to build their own in the safety of their base.

Most inspiring journey: Three Musketeers

Pioneers Entry – The Apocalypse

Pioneers Entry Team Name: The Three Musketeers Team Participants: James, Zach and Tom

We all know that zombies are terrible at geometry, and the Three Musketeers used this fact to their advantage when building their zombie security system. We were impressed to see the team working together to overcome the roadblocks they faced along the way.

We appreciate what you’re trying to do: Zombie Trolls

Zombie In The Middle


Playing piggy in the middle with zombies sure is a unique way of saving humankind from total extinction! We loved this project idea, and although the Zombie Trolls had a little trouble with their motors, we’re sure with a little more tinkering this zombie-fooling contraption could save us all.

Most awesome

Our judges also wanted to give a special commendation to the following teams for their equally awesome apocalypse-averting ideas:

  • PiRates, for their multifaceted zombie-proofing defence system and the high production value of their video
  • Byte them Pis, for their beautiful zombie-detecting doormat
  • Unatecxon, for their impressive bunker security system
  • Team Crompton, for their pressure-activated door system
  • Team Ernest, for their adventures in LEGO

The prizes

All our winning teams have secured exclusive digital maker boxes, jam-packed with tantalising tech to satisfy all their tinkering needs.

Our theme winners have also secured themselves a place at Coolest Projects 2018 in Dublin, Ireland!

Thank you to everyone who got involved in this round of Pioneers. Look out for your awesome submission swag arriving in the mail!

The post Pioneers winners: only you can save us appeared first on Raspberry Pi.

Happy Halloween!

Post Syndicated from Yev original https://www.backblaze.com/blog/happy-halloween/

Thank you so much to everyone who entered our Halloween Video Contest. We had a lot of fun watching the videos and have chosen a winner!

There were many strong entries, but in the end we gave in to our love of Stranger Things. Which fits, because after years of hearing backup stories, some of them have been strange indeed!

Congratulations to our 2017 Halloween Video Contest winner Preston! Here’s his winning video:

Thank you to everyone who participated, we loved the creativity!

The post Happy Halloween! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Spooky Halloween Video Contest

Post Syndicated from Yev original https://www.backblaze.com/blog/spooky-halloween-video-contest/

Would You Like to Play a Game? Let’s make a scary movie, or at least a silly one.

Think you can create a really spooky Halloween video?

We’re giving out $100 Visa gift cards just in time for the holidays. Want a chance to win? You’ll need to make a spooky 30-second Halloween-themed video. We had a lot of fun with this the last time we did it a few years back, so we’re doing it again this year.

Here’s How to Enter

  1. Prepare a short video, 30 seconds or less, recreating your favorite horror movie scene using your computer or hard drive as the victim — or make something original!
  2. Insert the following image at the end of the video (right-click and save as):
    Backblaze cloud backup
  3. Upload your video to YouTube
  4. Post a link to your video on the Backblaze Facebook wall or on Twitter with the hashtag #Backblaze so we can see it and enter it into the contest. Or, link to it in the comments below!
  5. Share your video with friends

Common Questions
Q: How many people can be in the video?
A: However many you need in order to recreate the scene!
Q: Can I make it longer than 30 seconds?
A: Maybe 32 seconds, but that’s it. If you want to make a longer “director’s cut,” we’d love to see it, but the contest video should be close to 30 seconds. Please keep it short and spooky.
Q: Can I record it on an iPhone, Android, iPad, Camera, etc?
A: You can use whatever device you wish to record your video.
Q: Can I submit multiple videos?
A: If you have multiple favorite scenes, make a vignette! But please submit only one video.
Q: How many winners will there be?
A: We will select up to three winners total.

Contest Rules

  • To upload the video to YouTube, you must have a valid YouTube account and comply with all YouTube rules for age, content, copyright, etc.
  • To post a link to your video on the Backblaze Facebook wall, you must use a valid Facebook account and comply with all Facebook rules for age, content, copyrights, etc.
  • We reserve the right to remove and/or not consider as a valid entry, any videos which we deem inappropriate. We reserve the exclusive right to determine what is inappropriate.
  • Backblaze reserves the right to use your video for promotional purposes.
  • The contest will end on October 29, 2017 at 11:59:59 PM Pacific Daylight Time. The winners (up to three) will be selected by Backblaze and will be announced on October 31, 2017.
  • We will be giving away gift cards to the top winners. The prize will be mailed to the winner in a timely manner.
  • Please keep the content of the post PG rated — no cursing or extreme gore/violence.
  • By submitting a video you agree to all of these rules.

Need an example?

The post Spooky Halloween Video Contest appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A security review of three NTP implementations

Post Syndicated from corbet original https://lwn.net/Articles/735211/rss

The Core Infrastructure Initiative commissioned security audits of three network time protocol (NTP) implementations (ntpd, NTPSec, and Chrony) and has released the results. “From a security standpoint (and here at the CII we are security people), Chrony was the clear winner between these three NTP implementations. Chrony does not have all of the bells and whistles that ntpd does, and it doesn’t implement every single option listed in the NTP specification, but for the vast majority of users this will not matter. If all you need is an NTP client or server (with or without reference clock), which is all that most people need, then its security benefits most likely outweigh any missing features.”
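As an illustration of how little most users need, a minimal chrony client configuration can be just a few lines — a sketch based on directives from the chrony documentation, where pool names and paths will vary by distribution:

```
pool pool.ntp.org iburst         # sync against a public NTP pool
driftfile /var/lib/chrony/drift  # remember the clock's measured drift rate
makestep 1.0 3                   # step the clock at boot if it is far off
rtcsync                          # keep the hardware clock in sync too
```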

Pioneers Summer Camp 2017

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pioneers-summer-camp-2017/

In July, winners of the first two Pioneers challenges came together at Google HQ in King’s Cross, London, for the Pioneers Summer Camp. This event was a special day to celebrate their awesomeness, and to give them access to some really cool stuff.

Pioneers: Google Summer Camp 2017

In July this year, winners of the first two Pioneers challenges came to Google HQ in London’s Kings Cross to meet, make and have an awesome time.

The lucky Pioneers

The summer camp was organised specifically for the winners of the two Pioneers challenges Make us laugh and Make it outdoors. Invitations went out to every team that won an award, including the Theme winners, winners in categories such as Best Explanation or Inspiring Journey, and those teams that received a Judges’ Recognition. We also allowed their mentors to attend, because they earned it too.

Code Club Scotland on Twitter

Excited about @Raspberry_Pi Pioneers day at @Google today with @jm_paterson and The Frontier Team #makeyourideas https://t.co/wZqfqqgZuL

With teams of excited Pioneers arriving from all over the UK, the day was bound to be a great success and a fun experience for all.

The Pioneers Summer Camp

The event took place at the rather impressive Google HQ in King’s Cross, London. Given that YouTube Space London is attached to this building, everyone, including the mentors and the Raspberry Pi team, was immediately eager to explore.


In rooms designed around David-Bowie-associated themes, e.g. Major Tom and Aladdin Sane, our intrepid Pioneers spent the morning building robots and using the Google AIY Projects kit to control their builds. Every attendee got to keep their robot and AIY kits, to be able to continue their tech experiments at home. They also each received their own Raspberry Pi, as well as some Google goodies and a one-of-a-kind Raspberry Pi hoody…much to the jealousy of many of our Twitter followers.


Meanwhile, mentors were invited to play with their own AIY kits, and the team from pi-top took accompanying parents aside to introduce them to the world of Scratch. This in itself was wonderful to witness: nervous parents started the day anxiously prodding at their pi-top screens, and they ended it with a new understanding of why code and digital making makes their kids tick.


After the making funtimes, the Pioneers got to learn about career opportunities within the field of digital making from some of the best in the industry. Representatives from Google, YouTube, and the Shell Scholarship Fund offered insights into their day-to-day work and some of their teams’ cool projects.


And to top off the day, our Pioneers winners went on a tour of the YouTube studios, a space to which only YouTube Creators have access. Lucky bunch!

The evening

When the evening rolled around, Pioneers got to work setting up their winning projects. From singing potatoes to sun-powered instruments and builds for plant maintenance, the array of ideas and creations showcased the incredible imagination these young makers have displayed throughout the first two seasons of Pioneers.


As well as a time for showing off winning makes, the evening was also an opportunity for Pioneers, mentors, and parents to mingle, chat, swap Twitter usernames, and get to know others as interested in making and changing the world as they are.


The Pioneers Summer Camp came to a close with a great Q&A session with some eager Pioneers, followed by praise from Raspberry Pi Foundation CEO Philip Colligan, Mike Warriner of Google UK, and Make it outdoors judge Georgina Asmah from the Shell Centenary Scholarship Fund.

Become a Pioneer

We’ll be announcing the next Pioneers challenge on Monday 18 September, and we’re so excited to see what our makers do with the next theme. We’ve put a lot of brain power into coming up with the ultimate challenge, and it’s taking everything we have not to let it slip!

Well, maybe I can just…don’t tell anyone, but here’s a sneak peek at part of the logo. Shhhh…

One thing we can tell you: this season of Pioneers will include makers from the Republic of Ireland, thanks in part to the incredible support from our team at CoderDojo. Woohoo!

We’ll announce the challenge via the Raspberry Pi blog, but make sure to sign up for the Pioneers newsletter to get all the latest information directly to your inbox.

The post Pioneers Summer Camp 2017 appeared first on Raspberry Pi.

Sean’s DIY Bitcoin Lottery with a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/seans-diy-bitcoin-lottery/

After several explorations into the world of 3D printing, and fresh off the back of his $5 fidget spinner crowdfunding campaign, Sean Hodgins brings us his latest project: a DIY Bitcoin Lottery!

DIY Bitcoin Lottery with a Raspberry Pi

Build your own lottery! Thingiverse Files: https://www.thingiverse.com/thing:2494568 Pi How-to: http://www.idlehandsproject.com/raspberry-pi-bitcoin-lottery/ Instructables: https://www.instructables.com/id/DIY-Bitcoin-Lottery-With-Raspberry-Pi/ Send me bitcoins if you want!

What is Bitcoin mining?

According to the internet, Bitcoin mining is:

[A] record-keeping service. Miners keep the blockchain consistent, complete, and unalterable by repeatedly verifying and collecting newly broadcast transactions into a new group of transactions called a block. Each block contains a cryptographic hash of the previous block, using the SHA-256 hashing algorithm, which links it to the previous block, thus giving the blockchain its name.

If that makes no sense to you, welcome to the club. So here’s a handy video which explains it better.

What is Bitcoin Mining?

For more information: https://www.bitcoinmining.com and https://www.weusecoins.com What is Bitcoin Mining? Have you ever wondered how Bitcoin is generated? This short video is an animated introduction to Bitcoin Mining. Credits: Voice – Chris Rice (www.ricevoice.com) Motion Graphics – Fabian Rühle (www.fabianruehle.de) Music/Sound Design – Christian Barth (www.akkord-arbeiter.de) Andrew Mottl (www.andrewmottl.com)

Okay, now I get it.

I swear.

Sean’s Bitcoin Lottery

As a retired Bitcoin miner, Sean understands how the system works and what is required for mining. And since news sources report that Bitcoin is currently valued at around $4000, Sean decided to use a Raspberry Pi to bring to life an idea he’d been thinking about for a little while.

Sean Hodgins Raspberry Pi Bitcoin Lottery

He fitted the Raspberry Pi into a 3D-printed body, together with a small fan, a strip of NeoPixels, and a Block Eruptor ASIC, the dedicated mining hardware. The Pi runs a Python script compatible with CGMiner, mining software that needs far more explanation than I can offer in this short blog post.

The NeoPixels take the first six characters of the current block’s 64-character hash and interpret them as a hex colour code. In this way, the block’s data is converted into colour, which, when you think about it, is kind of beautiful.
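
To make that idea concrete, here’s a minimal sketch in Python (our illustration, not Sean’s actual script; a real build would read the current block hash from the mining software and push the colour to the NeoPixel strip):

```python
# A minimal sketch of the hash-to-colour idea (our illustration, not
# Sean's actual script).

def hash_to_rgb(block_hash):
    """Interpret the first six hex characters of a block hash as an RGB colour."""
    r = int(block_hash[0:2], 16)
    g = int(block_hash[2:4], 16)
    b = int(block_hash[4:6], 16)
    return (r, g, b)

# A hypothetical block hash beginning 'ff9900' maps to orange:
print(hash_to_rgb("ff9900" + "0" * 58))  # -> (255, 153, 0)
```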

The device moves on to trying to solve a new block every 20 minutes. When it does, the NeoPixel LEDs play a flashing ‘Win’ or ‘Lose’ animation to let you know whether you were the one to solve the previous block.

Sean Hodgins Raspberry Pi Bitcoin Lottery

Lottery results

Sean has done the maths to calculate the power consumption of the device. He says that the annual cost of running his Bitcoin Lottery is roughly what you would pay for two lottery scratch cards. Now, the odds of solving a block are much lower than those of buying a winning scratch card. However, since the mining device moves on to a new block every 20 minutes, the odds of being a winner with Bitcoin using Sean’s build are actually better than those of winning the lottery.

Sean Hodgins Raspberry Pi Bitcoin Lottery

MATHS!
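
For a rough sense of those odds, here’s a back-of-the-envelope estimate of our own; the hash rates below are approximate 2017 figures and are our assumptions, not numbers from Sean’s video:

```python
# Back-of-the-envelope odds estimate. Both hash rates are rough 2017
# figures and are our assumptions, not numbers from Sean's video.

miner_rate = 330e6      # ~330 MH/s, the approximate speed of a Block Eruptor
network_rate = 6e18     # approximate total Bitcoin network hash rate in 2017

p_per_block = miner_rate / network_rate  # chance of solving any given block
blocks_per_year = 365 * 24 * 6           # one block roughly every ten minutes

p_per_year = 1 - (1 - p_per_block) ** blocks_per_year
print(f"Chance of solving a block in a year: ~{p_per_year:.1e}")  # ~2.9e-06
```

That works out to roughly 1 in 345,000 per year, a long shot, but still comfortably better than the roughly 1-in-45-million odds of a UK lottery jackpot, which is consistent with Sean’s claim.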

But even if you don’t win, Sean’s project is a fun experiment in Bitcoin mining and creating colour through code. And if you want to make your own, you can download the 3D files here, find the code here, and view the step-by-step guide here on Instructables.

Good luck and happy mining!

The post Sean’s DIY Bitcoin Lottery with a Raspberry Pi appeared first on Raspberry Pi.

Announcing the Winners of the AWS Chatbot Challenge – Conversational, Intelligent Chatbots using Amazon Lex and AWS Lambda

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-the-winners-of-the-aws-chatbot-challenge-conversational-intelligent-chatbots-using-amazon-lex-and-aws-lambda/

A couple of months ago on the blog, I announced the AWS Chatbot Challenge in conjunction with Slack. The AWS Chatbot Challenge was an opportunity to build a unique chatbot that helps solve a problem or adds value for its prospective users. The mission was to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend.

I know that you have all been waiting as anxiously as I have to hear who the winners of the AWS Chatbot Challenge are. Well, wait no longer: the winners of the AWS Chatbot Challenge have been decided.

May I have the Envelope Please? (The Trumpets sound)

The winners of the AWS Chatbot Challenge are:

  • First Place: BuildFax Counts by Joe Emison
  • Second Place: Hubsy by Andrew Riess, Andrew Puch, and John Wetzel
  • Third Place: PFMBot by Benny Leong and his team from MoneyLion
  • Large Organization Winner: ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang


Diving into the Winning Chatbot Projects

Let’s walk through the details of each winning project to see what made these chatbots distinctive, and to learn more about the technologies used to implement each chatbot solution.


BuildFax Counts by Joe Emison

The BuildFax Counts bot was created as a real solution for the BuildFax company, to decrease the time it takes sales and marketing teams to get answers about permits, or about properties whose permits meet certain criteria.

BuildFax, a company co-founded by bot developer Joe Emison, has the only national database of building permits, which updates with data from approximately half of the United States on a monthly basis. To accommodate the many requests that come in from the sales and marketing teams regarding permit information, BuildFax has a technical sales support team that fulfills requests sent to a ticketing system by manually writing SQL queries that run across the shards of the BuildFax databases. Because the internal sales support team receives a large number of requests, and because setting up the queries is a manual process, it can take several days for the sales and marketing teams to receive an answer.

The BuildFax Counts chatbot solves this problem by taking the permit inquiry that would normally be sent in as a ticket from the sales and marketing team, and accepting it as input from Slack. Once the inquiry is submitted in Slack, a query executes and the results are returned immediately.

Joe built this solution by first creating a nightly export of the data in their BuildFax MySQL RDS database to CSV files stored in Amazon S3. From the exported CSV files, an Amazon Athena table was created in order to run quick and efficient queries on the data. He then used Amazon Lex to create a bot to handle the common questions and criteria that the sales and marketing teams may ask when seeking data from the BuildFax database, modeling the language on the BuildFax ticketing system. He added several different sample utterances and slot types, both custom and Lex-provided, in order to correctly parse every question and criteria combination that could be received from an inquiry. Using Lambda, Joe created a JavaScript Lambda function that receives information from the Lex intent and uses it to build a SQL statement that runs against the aforementioned Athena database, using the AWS SDK for JavaScript in Node.js to return the inquiry count and the SQL statement used.
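
To make the shape of that flow concrete, here is a minimal sketch in Python (Joe’s real function was written in JavaScript; every name below, from the slot names to the database and bucket, is invented for illustration):

```python
# Hypothetical sketch of a Lex fulfillment handler that builds a SQL query
# and submits it to Athena. Joe's actual function was JavaScript; the slot,
# database, table, and bucket names here are all invented for illustration.
import boto3

athena = boto3.client("athena")

def lambda_handler(event, context):
    slots = event["currentIntent"]["slots"]   # Lex (v1) passes slots here
    state = slots.get("State")                # e.g. "TX"
    permit_type = slots.get("PermitType")     # e.g. "roofing"

    # A real handler would sanitise slot values before interpolating them.
    sql = (
        "SELECT COUNT(*) AS permit_count FROM permits "
        f"WHERE state = '{state}' AND permit_type = '{permit_type}'"
    )

    athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "buildfax"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    # A real handler would poll get_query_execution, fetch the results,
    # and include the count in the response below.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": f"Query submitted: {sql}"},
        }
    }
```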

The BuildFax Counts bot is used today by the BuildFax sales and marketing teams to get immediate answers to inquiries that previously took up to a week to resolve.

Not only is the BuildFax Counts bot our 1st place winner and a wonderful solution, but its creator, Joe Emison, is a great guy. Joe has opted to donate his prize (the $5,000 cash, the $2,500 in AWS credits, and one re:Invent ticket) to the Black Girls Code organization. I must say, you rock, Joe, for helping these kids get access and exposure to technology.


Hubsy by Andrew Riess, Andrew Puch, and John Wetzel

The Hubsy bot was created to redefine and personalize the way users traditionally manage their HubSpot account. HubSpot is a SaaS system providing marketing, sales, and CRM software. Hubsy allows HubSpot users to create and log engagements with customers, provide sales teams with deal statuses, and retrieve client contact information quickly. Hubsy uses Amazon Lex’s conversational interface to execute commands against the HubSpot API, so that users can gain insights, store and retrieve data, and manage tasks directly from Facebook, Slack, or Alexa.

To implement the Hubsy chatbot, Andrew and his team members used AWS Lambda to create a Node.js Lambda function that parses the user’s request and calls the HubSpot API, which either fulfills the initial request or comes back to the user asking for more information. Terraform was used to automatically set up and update Lambda, CloudWatch logs, and IAM profiles. Amazon Lex was used to build the conversational piece of the bot, covering the utterances that a person on a sales team would likely say when seeking information from HubSpot. To integrate with Alexa, the Amazon Alexa skill builder was used to create an Alexa skill, which was tested on an Echo Dot. CloudWatch Logs are used to capture the Lambda function’s logging information in order to debug different parts of the Lex intents. Finally, to validate the code before the Terraform deployment, ESLint was used to ensure the code was linted and proper development standards were followed.
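
As a rough illustration of that request path, here is a minimal Python sketch (the team’s actual function was Node.js; the slot name, environment variable, and response wording are our inventions, and the contacts endpoint shown is the HubSpot v1 API of that era):

```python
# Hypothetical sketch of the Hubsy flow: a Lex fulfillment handler that
# looks a contact up in HubSpot. The team's real function was Node.js;
# the slot name, env var, and response text are invented for illustration.
import os
import json
import urllib.request

def lambda_handler(event, context):
    slots = event["currentIntent"]["slots"]
    contact_email = slots.get("ContactEmail")

    # HubSpot's v1 contacts API, keyed by email (API key via query string).
    url = (f"https://api.hubapi.com/contacts/v1/contact/email/{contact_email}"
           f"/profile?hapikey={os.environ['HUBSPOT_API_KEY']}")
    with urllib.request.urlopen(url) as resp:
        contact = json.load(resp)

    name = contact["properties"].get("firstname", {}).get("value", "unknown")
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": f"Found contact: {name}"},
        }
    }
```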


PFMBot by Benny Leong and his team from MoneyLion

PFMBot, the Personal Finance Management Bot, is a bot for the MoneyLion finance group, which offers customers online financial products: loans, credit monitoring, and a free credit score service to improve the financial health of its customers. Once a user signs up for an account on the MoneyLion app or website, they have the option to link their bank accounts with the MoneyLion APIs. Once a bank account is linked to the APIs, the user can log in to their MoneyLion account and start having a conversation with PFMBot based on their bank account information.

The PFMBot UI is a web interface built with JavaScript. The chatbot was created using Amazon Lex, with utterances built around the possible inquiries about the user’s MoneyLion bank account. PFMBot uses the Lex built-in AMAZON slots, parsing and converting their values before passing them to AWS Lambda. The AWS Lambda functions interacting with Amazon Lex are Java-based Lambda functions, which call MoneyLion’s Java-based internal APIs running on Spring Boot. These APIs obtain account data and related bank account information from the MoneyLion MySQL database.


ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang

ADP PI (Payroll Innovation) bot is designed to help employees of ADP customers easily review their own payroll details and compare different payroll data by just asking the bot for results. The ADP PI Bot additionally offers issue reporting functionality for employees to report payroll issues and aids HR managers in quickly receiving and organizing any reported payroll issues.

The ADP Payroll Innovation bot is an ecosystem for ADP payroll consisting of two chatbots: the ADP PI Bot for external clients (employees and HR managers), and the ADP PI DevOps Bot for the internal ADP DevOps team.


The architecture of the ADP PI DevOps bot differs from that of the ADP PI bot, as the DevOps bot is deployed internally at ADP. The ADP PI DevOps bot accepts input from both Slack and Alexa. When input comes in via Slack, Slack sends the request to Lex, which processes the utterance. Lex then calls the Lambda backend, which obtains ADP data from the ADP VPC running within Amazon VPC. When input comes in via Alexa, a Lambda function is called that also obtains data from the ADP VPC running on AWS.

The architecture of the ADP PI bot has users entering requests and/or reporting issues via Slack. When requests or issues come in, the Slack APIs communicate with AWS Lambda via Amazon API Gateway. The Lambda function either writes the data into one of the Amazon DynamoDB tables used for recording issues, or sends the request on to Lex. When issues are recorded, DynamoDB integrates with Trello to keep HR managers abreast of escalated issues. Once request data is sent from Lambda to Lex, Lex processes the utterance and calls another Lambda function, which integrates with the ADP API to fetch ADP data from within the ADP VPC, running on Amazon Virtual Private Cloud (VPC).

Python and Node.js were the chosen languages for the development of the bots.
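
As a rough sketch of the Slack-facing routing just described (all bot, table, and field names below are hypothetical, not ADP’s actual code):

```python
# Hypothetical sketch of the routing described above: record an issue in
# DynamoDB, or hand the utterance to Lex. All names are invented.
import boto3

dynamodb = boto3.resource("dynamodb")
issues_table = dynamodb.Table("payroll-issues")  # hypothetical table
lex = boto3.client("lex-runtime")

def handle_slack_message(user_id, text):
    if text.lower().startswith("report issue"):
        # Record the issue; a separate integration could mirror it to Trello.
        issues_table.put_item(Item={"userId": user_id, "issue": text})
        return "Your payroll issue has been logged."
    # Otherwise hand the utterance to Lex for intent processing.
    response = lex.post_text(
        botName="ADPPIBot",   # hypothetical bot name
        botAlias="prod",
        userId=user_id,
        inputText=text,
    )
    return response["message"]
```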

The ADP PI bot ecosystem has the following functional groupings:

Employee Functionality

  • Summarize Payrolls
  • Compare Payrolls
  • Escalate Issues
  • Evolve PI Bot

HR Manager Functionality

  • Bot Management
  • Audit and Feedback

DevOps Functionality

  • Reduce call volume in service centers (ADP PI Bot)
  • Track issues and generate reports (ADP PI Bot)
  • Monitor jobs for various environments (ADP PI DevOps Bot)
  • View job dashboards (ADP PI DevOps Bot)
  • Query job details (ADP PI DevOps Bot)


Summary

Let’s wish all the winners of the AWS Chatbot Challenge hearty congratulations on their excellent projects.

You can review more details on the winning projects, as well as all of the submissions to the AWS Chatbot Challenge, at https://awschatbot2017.devpost.com/submissions. If you are curious about the details of the Chatbot Challenge, including resources, rules, prizes, and judges, you can review the original challenge website here: https://awschatbot2017.devpost.com/.

Hopefully, you are just as inspired as I am to build your own chatbot using Lex and Lambda. For more information, take a look at the Amazon Lex developer guide or the AWS AI blog post on Building Better Bots Using Amazon Lex (Part 1).

Chat with you soon!

Tara

Getting Your Data into the Cloud is Just the Beginning

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cost-data-of-transfer-cloud-storage/

Total Cloud Storage Cost

Organizations should consider not just the cost of getting their data into the cloud, but also long-term costs for storage and retrieval when deciding which cloud storage solution meets their needs.

As cloud storage has become ubiquitous, organizations large and small are joining in. For larger organizations the lure of reducing capital expenses and their associated operational costs is enticing. For smaller organizations, cloud storage often replaces an unmanageable closet full of external hard drives, thumb drives, SD cards, and other devices. With terabytes or even petabytes of data, the common challenge facing organizations, large and small, is how to get their data up to the cloud.

Transferring Data to the Cloud

The obvious solution for getting your data to the cloud is to upload your data from your internal network through the internet to the cloud storage vendor you’ve selected. Cloud storage vendors don’t charge you for uploading your data to their cloud, but you, of course, have to pay your network provider and that’s where things start to get interesting. Here are a few things to consider.

  • The initial upload: Unless you are just starting out, you will have a large amount of data you want to upload to the cloud. This could be data you wish to archive, or data you have archived previously, for example on LTO tapes or external hard drives.
  • Pipe size: This is the amount of upload bandwidth of your network connection. This is measured in Mbps (megabits per second). Remember, your data is stored in MB (megabytes), so an upload connection of 80 Mbps will transfer no more than 10 MB of data per second and most likely a lot less.
  • Cost and caps: In some places, organizations pay a flat monthly rate for a specified level of service (speed) for internet access. In other locations, internet access is metered, or pay as you go. In either case, there can be internet service caps that limit or completely stop data transfer once you reach your contracted threshold.

One or more of these challenges has the potential to make the initial upload of your data expensive and potentially impossible. You could wait until cloud storage companies start buying up internet providers and make data upload cheap (or free with Amazon Prime!), but there is another option.

Data Transfer Devices

Given the potential challenges of using your network for the initial upload of your data to the cloud, a handful of cloud storage companies have introduced data transfer or data ingest services. Backblaze has the B2 Fireball, Amazon has Snowball (and other similar devices), and Google recently introduced their Transfer Appliance.

KLRU-TV Austin PBS uploaded their Austin City Limits musical anthology series to Backblaze using a B2 Fireball.

These services work as follows:

  • The provider sends you a portable (or somewhat portable) storage device.
  • You connect the device to your network and load some amount of data on the device over your internal network connection.
  • You return the device, loaded with your data, to the provider, who uploads your data to your cloud storage account from inside their own data center.

Data Transfer Devices Save Time

Assuming your internet connection is a flat-rate service with no caps or limits, and your organization’s operations can withstand the traffic, you still may want to opt for a data transfer service to move your data to the cloud. Why? Time. For example, if your initial data upload is 100 TB, here’s how long it would take using different network upload connection speeds:

Network Speed    Upload Time
10 Mbps          3 years
100 Mbps         124 days
500 Mbps         25 days
1 Gbps           12 days

This assumes you are using most of your upload connection to upload your data, which is probably not realistic if you want to stay in business. You could potentially rent a better connection or upgrade your connection permanently, both of which add to the cost of running your business.
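
For reference, here’s the arithmetic behind the table above as a small Python sketch; the ~75% effective-utilisation factor is our assumption, chosen because it reproduces the published figures:

```python
# Rough reconstruction of the upload-time arithmetic behind the table above.
# The ~75% effective-utilisation factor is our assumption; the post does not
# state the exact overhead it used, but this factor reproduces its figures.

def upload_days(terabytes, mbps, efficiency=0.75):
    bits = terabytes * 1e12 * 8                # decimal TB -> bits
    seconds = bits / (mbps * 1e6 * efficiency)
    return seconds / 86400

for speed in (10, 100, 500, 1000):
    print(f"{speed:>4} Mbps: {upload_days(100, speed):,.0f} days")
# Prints ~1,235 / 123 / 25 / 12 days, which lines up closely with the
# table's 3 years / 124 days / 25 days / 12 days.
```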

Speaking of cost, there is of course a charge for the data transfer service that can be summarized as follows:

  • Backblaze B2 Fireball — up to 40 TB of data per trip for $550.00, for 30 days of use at your site.
  • Amazon Snowball — up to 50 TB of data per trip for $200.00, for 10 days of use at your site, plus $15/day for each additional day on site.
  • Google Transfer Appliance — up to 100 TB of data per trip for $300.00, for 10 days of use at your site, plus $10/day for each additional day on site.

These prices do not include shipping, which can range from $100 to $900 depending on shipping method, location, etc.

Both Amazon and Google have transfer devices that are larger and cost more. For comparison purposes below we’ll use the three device versions listed above.

The Real Cost of Uploading Your Data

If we stopped our review at the previous paragraph and we were prepared to load up our transfer device in 10 days or less, the clear winner would be Google. But this leaves out two very important components of any cloud storage project: the cost of storing your data and the cost of downloading your data.

Let’s look at two examples:

Example 1 — Archive 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
Service         Transfer Cost       Cloud Storage    Total
Backblaze B2    $1,650 (3 trips)    $6,000           $7,650
Google Cloud    $300 (1 trip)       $24,000          $24,300
Amazon S3       $400 (2 trips)      $25,200          $25,600

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $16,650 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 FireBall versus a Google Transfer Appliance is less than 1 month.
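
The storage column in the table above follows directly from each vendor’s per-gigabyte monthly rate; here is the arithmetic as a quick sketch (rates back-figured from the table itself, so check current vendor pricing before relying on them):

```python
# The one-year storage column, derived from per-GB monthly rates that are
# back-figured from the table itself (check current vendor pricing).
rates_per_gb_month = {"Backblaze B2": 0.005,
                      "Google Cloud": 0.020,
                      "Amazon S3": 0.021}
gb_stored = 100_000  # 100 TB

for vendor, rate in rates_per_gb_month.items():
    print(f"{vendor}: ${rate * gb_stored * 12:,.0f}/year")
# Backblaze B2: $6,000/year, Google Cloud: $24,000/year, Amazon S3: $25,200/year
```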

Example 2 — Store and use 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
  • Add 5 TB a month (on average) to the total stored.
  • Delete 2 TB a month (on average) from the total stored.
  • Download 10 TB a month (on average) from the total stored.
Service         Transfer Cost       Cloud Storage    Total
Backblaze B2    $1,650 (3 trips)    $9,570           $11,220
Google Cloud    $300 (1 trip)       $39,684          $39,984
Amazon S3       $400 (2 trips)      $36,114          $36,514

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $28,764 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 FireBall versus a Google Transfer Appliance is less than 1 month.

Notes:

  • All prices listed are based on list prices from the vendor websites as of the date of this blog post.
  • We are accomplishing the transfer of your data to the device within the 10-day “free” period specified by Amazon and Google.
  • We are comparing cloud storage services that have similar performance. For example, once the data is uploaded, it is readily available for download. The data is also available for access via a Web GUI, CLI, API, and/or various applications integrated with the cloud storage service. Multiple versions of files can be kept as desired. Files can be deleted any time.

To be fair, it takes Backblaze three trips to move 100 TB, while the Google Transfer Appliance needs only one. This adds some cost to prepare, monitor, and ship three B2 Fireballs versus one Transfer Appliance. Even with that added cost, the Backblaze B2 solution is still significantly less expensive over the one-year period and beyond.

Have a Data Transfer Device Owner

Before you run out and order a transfer device, make sure the transfer process is someone’s job once the device arrives at your organization. Filling a transfer device should only take a few days, but if it is forgotten, you’ll find you’ve had the device for 2 or 3 weeks. While that’s not much of a problem with a B2 Fireball, it could start to get expensive otherwise.

Just the Beginning

As with most “new” technologies and services, you can expect other companies to jump in and provide various data ingest services. The cost will get cheaper or even free as cloud storage companies race to capture and lock up the data you have kept locally all these years. When you are evaluating cloud storage solutions, it’s best to look past the data ingest loss-leader price, and spend a few minutes to calculate the long-term cost of storing and using your data.

The post Getting Your Data into the Cloud is Just the Beginning appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hightail — Empowering Creative Collaboration in the Cloud

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/hightail-empowering-creative-collaboration-in-the-cloud/

Hightail – formerly YouSendIt – streamlines how creative work is reviewed, improved, and approved by helping more than 50 million professionals around the world get great content in front of their audiences faster. Since its debut in 2004 as a file sharing company, Hightail has shifted its strategic direction to focus on delivering value-added creative collaboration services, and boasts a strong lineup of name-brand customers.

In today’s guest post, Hightail’s SVP of Technology Shiva Paranandi tells the company’s migration story, moving petabytes of data from on-premises to the cloud. He highlights their cloud vendor evaluation process and reasons for going all-in on AWS.


Hightail started as a way to help people easily share and store large files, but has since evolved into a creative collaboration tool. We became a place where users could not only control and share their digital assets, but also assemble their creative teams, connect with clients, develop creative workflows, and manage projects from start to finish. We now power collaboration services for major brands such as Lionsgate and Jimmy Kimmel Live!. With a growing list of domestic and international clients, we required more internal focus on product development and serving the users. We found that running our own data centers consumed more time, money, and manpower than we were willing to devote.

We needed an approach that would help us iterate more rapidly to meet customer needs and dramatically improve our time to market. We wanted to reduce data center costs and have the flexibility to scale up quickly in any given region around the globe. Setting up a data center in a new location took so long that it was limiting the pace of growth that we could achieve. In addition, we were tired of buying ahead of our needs, which meant we had storage capacity that we did not even use. We required a storage solution that was both tiered and highly scalable to reduce costs by allowing us to keep infrequently used data in inactive storage while also allowing us to resurface it quickly at the customer’s request. Our main drivers were agility and innovation, and the cloud enables these in a significant way. Given that, we decided to adopt a cloud-first policy that would enable us to spend time and money on initiatives that differentiate our business, instead of putting resources into managing our storage and computing infrastructure.

Comparing AWS Against Cloud Competitors

To kick off the migration, we did our due diligence by evaluating a variety of cloud vendors, including AWS, Google, IBM, and Microsoft. AWS stuck out as the clear winner for us. At one point, we considered combining services from multiple cloud providers to meet our needs, but decided the best route was to use AWS exclusively. When we factored in training, synchronization, support, and system availability along with other migration and management elements, it was just not practical to take a multi-cloud approach. With the best cost savings and an unmatched ecosystem of partner solutions, we did not need anyone else and chose to go all-in on AWS.

By migrating to AWS, we were able to secure the lowest cost-per-gigabyte pricing, gain access to a rich ecosystem, quickly develop in-house talent, and maintain SOC II compliance. The ecosystem was particularly important to us and set AWS apart from its competitors with its expansive list of partners. In fact, all the vendors we depend on for services such as previewing images, encoding videos, and serving up presentations were already part of the network, so we were easily able to leverage our existing investments and expertise. If we had gone with a different provider, it would have meant moving away from a platform that was already working well for us, which was not the desired outcome. Also, the amount of talent we were able to build up in-house on AWS technologies was astounding. Training our internal team to work with AWS was a simple process using available tools such as AWS conferences, training materials, and support.

Migrating Petabytes of Data

Going with AWS made things easier. In many instances, it gave us better functionality than what we were using in house. We moved multiple petabytes of data from on-premises storage to AWS with ease. AWS gave us great speeds with Direct Connect, so we were able to push all the data in a little more than three months with no user impact. We employed AWS Key Management Service to keep our data secure, which eased our minds through the move. We performed extensive QA testing before flipping users over to ensure low customer impact, using methods such as checksums between our data center and the data that got pushed to AWS.

Our new platform on AWS has greatly improved our user experience. We have seen huge improvement in reliability, performance, and uptime—all critical in our line of business. We are now able to achieve upload and download speeds up to 17 times faster than our previous data centers, and uptime has increased by orders of magnitude. Also, the time it takes us to deploy services to a new region has been cut by more than 90%. It used to take us at least six months to get a new region online, and now we can get a region up and running in less than three weeks. On AWS, we can even replicate data at the bucket level across regions for disaster recovery purposes.

To cut costs, we were successfully able to divide our storage infrastructure into frequently and infrequently accessed data. Tiered storage in Amazon S3 has been a huge advantage, allowing us to optimize our storage costs so we have more to invest in product development. We can now move data from inactive to active tiers instantly to meet customer needs and eliminated the need to overprovision our storage infrastructure. It is refreshing to see services automatically scale up or down during peak load times, and know that we are only paying for what we need.

Overall, we achieved our key strategic goal of focusing more on development and less on infrastructure. Our migration felt seamless, and the progress we were able to share is a true testament to how easy it has been for us to run our workloads on AWS. We attribute part of our successful migration to the dedicated support provided by the AWS team. They were pretty awesome. We had a couple of their technicians available 24/7 via chat, which proved to be essential during this large-scale migration.

-Shiva Paranandi, SVP of Technology at Hightail

Learning More

Learn more about cost-effective tiered data storage with Amazon S3, or dive deeper into our AWS Partner Ecosystem to see which solutions could best serve the needs of your company.

Pioneers winners: Make it outdoors challenge

Post Syndicated from Olympia Brown original https://www.raspberrypi.org/blog/pioneers-winners-make-it-outdoors/

To everyone’s surprise, the sun has actually managed to show its face this summer in Britain! So we’re not feeling too guilty for having asked the newest crop of Pioneers to Make it outdoors. In fact, the 11- to 16-year-olds that took part in our second digital making challenge not only made things that celebrate the outdoors – some of them actually carted their entire coding setup into the garden. Epic!

The winners

Winners of the second Pioneers challenge are…

We asked you to make it outdoors with tech, challenging all our Pioneers to code and build awesome projects that celebrate the outside world. And we were not disappointed! Congratulations to everyone who took part. Every entry was great and we loved them all.

We set the challenge to Make it outdoors, and our theme winners HH Squared really delivered! Your fabulous, fun-looking project, which used the outdoors to make it a success, best captured the spirit of what our challenge was asking for. HH Squared, we loved Pi Spy so much that we may have to make our own for Pi Towers! Congratulations on winning this award.

Watching all the entry videos, our judges had the tricky task of picking the top of the pops from among the projects. In addition to ‘theme winner’, we had a number of other categories to help make their job a little bit easier:

  • We appreciate what you’re trying to do: We know that when tackling a digital making project, time and tech sometimes aren’t in your favour. But we still want to see what you’ve got up to, and this award category recognises that even though you haven’t fully realised your ambition yet, you’ve made a great start. *And*, when you do finish, we think it’s going to be awesome. Congratulations to the UTC Bullfrogs for winning this award – we can’t wait to see the final project!
  • Inspiring journey: This category recognises that getting from where you’ve started to where you want to go isn’t always smooth sailing. Maybe teams had tech problems, maybe they had logistical problems, but the winners of this award did a great job of sharing the trials and tribulations they encountered along the way! Coding Doughnuts, your project was a little outside the box IN a box. We loved it.
  • Technically brilliant: This award is in recognition of some serious digital making chops. Robot Apocalypse Committee, you owned this award. Get in!
  • Best explanation: Digital making is an endeavour that involves making a thing, and then sharing that thing. The winners of this category did a great job of showing us exactly what they made, and how they made it. They also get bonus points for making a highly watchable, entertaining video. Uniteam, we got it. We totally got it! What a great explanation of such a wonderful project – and it made us laugh too. Well done!

The Judges’ Special Recognition Awards

Because we found it so hard to just pick five winners, the following teams will receive our Judges’ Special Recognition Award:

  • PiChasers with their project Auqa (yes, the spelling is intentional!)
  • Sunscreen Superstars, making sure we’re all protected in the glorious British sunshine
  • Off The Shelf and their ingenious Underwater Canal Scanner
  • Glassbox, who made us all want Nerf guns thanks to their project Tin Can Alley
  • Turtle Tamers, ensuring the well-being of LEGO turtles around the world with their project Umbrella Empire

Winners from both our Make us laugh and Make it outdoors challenges will be joining us at Google HQ for a Pioneers summer camp full of making funtimes! They’ll also receive some amazing prizes to help them continue in their digital making adventures.

Massive thanks go to our judges for helping to pick the winners!

Pioneers Make it Outdoors Raspberry Pi

And for your next Pioneers challenge…

Ha, as if we’re going to tell you just yet – we’re still recovering from this challenge! We’ll be back in September to announce the theme of the next cycle – so make sure to sign up for our newsletter to be reminded closer to the time.

The post Pioneers winners: Make it outdoors challenge appeared first on Raspberry Pi.

Commentary on US Election Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/commentary_on_u.html

Good commentaries from Ed Felten and Matt Blaze.

Both make a point that I have also been saying: hacks can undermine the legitimacy of an election, even if there is no actual voter or vote manipulation.

Felten:

The second lesson is that we should be paying more attention to attacks that aim to undermine the legitimacy of an election rather than changing the election’s result. Election-stealing attacks have gotten most of the attention up to now — and we are still vulnerable to them in some places — but it appears that external threat actors may be more interested in attacking legitimacy.

Attacks on legitimacy could take several forms. An attacker could disrupt the operation of the election, for example, by corrupting voter registration databases so there is uncertainty about whether the correct people were allowed to vote. They could interfere with post-election tallying processes, so that incorrect results were reported — an attack that might have the intended effect even if the results were eventually corrected. Or the attacker might fabricate evidence of an attack, and release the false evidence after the election.

Legitimacy attacks could be easier to carry out than election-stealing attacks, as well. For one thing, a legitimacy attacker will typically want the attack to be discovered, although they might want to avoid having the culprit identified. By contrast, an election-stealing attack must avoid detection in order to succeed. (If detected, it might function as a legitimacy attack.)

Blaze:

A hostile state actor who can compromise a handful of county networks might not even need to alter any actual votes to create considerable uncertainty about an election’s legitimacy. It may be sufficient to simply plant some suspicious software on back end networks, create some suspicious audit files, or add some obviously bogus names to the voter rolls. If the preferred candidate wins, they can quietly do nothing (or, ideally, restore the compromised networks to their original states). If the “wrong” candidate wins, however, they could covertly reveal evidence that county election systems had been compromised, creating public doubt about whether the election had been “rigged”. This could easily impair the ability of the true winner to effectively govern, at least for a while.

In other words, a hostile state actor interested in disruption may actually have an easier task than someone who wants to undetectably steal even a small local office. And a simple phishing and trojan horse email campaign like the one in the NSA report is potentially all that would be needed to carry this out.

Me:

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and, by extension, our democracy.

And, finally, a report from the Brennan Center for Justice on how to secure elections.