Tag Archives: scratch

Conundrum

Post Syndicated from Eevee original https://eev.ee/blog/2018/03/20/conundrum/

Here’s a problem I’m having. Or, rather, a problem I’m solving, but so slowly that I wonder if I’m going about it very inefficiently.

I intended to just make a huge image out of this and tweet it, but it takes so much text to explain that I might as well put it on my internet website.

The setup

I want to do pathfinding through a Doom map. The ultimate goal is to be able to automatically determine the path the player needs to take to reach the exit — what switches to hit in what order, what keys to get, etc.

Doom maps are 2D planes cut into arbitrary shapes. Everything outside a shape is the void, which we don’t care about. Here are some shapes.

The shapes are defined implicitly by their edges. All of the edges touching the red area, for example, say that they’re red on one side.

That’s very nice, because it means I don’t have to do any geometry to detect which areas touch each other. I can tell at a glance that the red and blue areas touch, because the line between them says it’s red on one side and blue on the other.

Unfortunately, this doesn’t seem to be all that useful. The player can’t necessarily move from the red area to the blue area, because there’s a skinny bottleneck. If the yellow area were a raised platform, the player couldn’t fit through the gap. Worse, if there’s a switch somewhere that lowers that platform, then the gap is conditionally passable.

I thought this would be uncommon enough that I could get started only looking at neighbors and do actual geometry later, but that “conditionally passable” pattern shows up all the time in the form of locked “bars” that let you peek between or around them. So I might as well just do the dang geometry.


The player is a 32×32 square and always axis-aligned (i.e., the hitbox doesn’t actually rotate). That’s very convenient, because it means I can “dilate the world” — expand all the walls by 16 units in both directions, while shrinking the player to a single point. That expansion eliminates narrow gaps and leaves a map of everywhere the player’s center is allowed to be. Allegedly this is how Quake did collision detection — but in 3D! How hard can it be in 2D?
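Concretely, dilating a single wall segment by the player’s 32×32 box means taking the convex hull of the segment’s two endpoints offset by the box’s four corners (the Minkowski sum of two convex shapes). Here’s a rough sketch of that step in Python; it’s only an illustration of the idea, not my actual code, and it ignores the much harder part of merging all the resulting shapes together.

    # Dilate one wall segment by the player's half-size box: the result is the
    # convex hull of the two endpoints offset by the box's four corners.
    def dilate_wall(p1, p2, half=16):
        corners = [(-half, -half), (half, -half), (half, half), (-half, half)]
        points = [(x + dx, y + dy) for (x, y) in (p1, p2) for (dx, dy) in corners]
        return convex_hull(points)

    # Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    def convex_hull(points):
        points = sorted(set(points))
        if len(points) <= 2:
            return points

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for seq, out in ((points, lower), (reversed(points), upper)):
            for p in seq:
                while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                    out.pop()
                out.append(p)
        return lower[:-1] + upper[:-1]

    # A horizontal wall from (0, 0) to (100, 0) dilates to a 132x32 rectangle
    # that the player's centre can never enter.
    print(dilate_wall((0, 0), (100, 0)))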

The plan, then, is to do this:

This creates a bit of an unholy mess. (I could avoid some of the overlap by being clever at points where exactly two lines touch, but I have to deal with a ton of overlap anyway so I’m not sure if that buys anything.)

The gray outlines are dilations of inner walls, where both sides touch a shape. The black outlines are dilations of outer walls, touching the void on one side. This map tells me that the player’s center can never go within 16 units of an outer wall, which checks out — their hitbox would get in the way! So I can delete all that stuff completely.

Consider that bottom-left outline, where red and yellow touch horizontally. If the player is in the red area, they can only enter that outlined part if they’re also allowed to be in the yellow area. Once they’re inside it, though, they can move around freely. I’ll color that piece orange, and similarly blend colors for the other outlines. (A small sliver at the top requires access to all three areas, so I colored it gray, because I can’t be bothered to figure out how to do a stripe pattern in Inkscape.)

This is the final map, and it’s easy to traverse because it works like a graph! Each contiguous region is a node, and each border is an edge. Some of the edges are one-way (falling off a ledge) or conditional (walking through a door), but the player can move freely within a region, so I don’t need to care about world geometry any more.
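To make that concrete, here’s a sketch of what traversing such a graph could look like. The data model is made up for illustration (it isn’t how I actually store anything): each region can grant some items (keys, switch flags), each directed edge can require some of them, and a breadth-first search over (region, items collected) states finds a route.

    from collections import deque

    def find_path(regions, edges, start, goal):
        """regions maps a region to the items it grants; edges maps a region to
        a list of (destination, items required) pairs."""
        start_state = (start, frozenset(regions.get(start, ())))
        queue = deque([(start_state, [start])])
        seen = {start_state}
        while queue:
            (region, items), path = queue.popleft()
            if region == goal:
                return path
            for dest, needed in edges.get(region, ()):
                if needed <= items:
                    state = (dest, items | frozenset(regions.get(dest, ())))
                    if state not in seen:
                        seen.add(state)
                        queue.append((state, path + [dest]))
        return None

    # Toy version of the earlier example: the red/blue gap only opens once the
    # switch in the yellow area has lowered the platform.
    regions = {"yellow": {"platform_lowered"}}
    edges = {
        "red": [("blue", {"platform_lowered"}), ("yellow", set())],
        "yellow": [("red", set())],
    }
    print(find_path(regions, edges, "red", "blue"))
    # ['red', 'yellow', 'red', 'blue']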

The problem

I’m having a hell of a time doing this mass-intersection of a big pile of shapes.

I’m writing this in Rust, and I would very very very strongly prefer not to wrap a C library (or, god forbid, a C++ library), because that will considerably complicate actually releasing this dang software. Unfortunately, that also limits my options rather a lot.

I was referred to a paper (A simple algorithm for Boolean operations on polygons, Martínez et al, 2013) that describes doing a Boolean operation (union, intersection, difference, xor) on two shapes, and works even with self-intersections and holes and whatnot.

I spent an inordinate amount of time porting its reference implementation from very bad C++ to moderately bad Rust, and I extended it to work with an arbitrary number of polygons and to spit out all resulting shapes. It has been a very bumpy ride, and I keep hitting walls — the latest is that it panics when intersecting everything results in two distinct but exactly coincident edges, which obviously happens a lot with this approach.

So the question is: is there some better way to do this that I’m overlooking, or should I just keep fiddling with this algorithm and hope I come out the other side with something that works?


Bear in mind, the input shapes are not necessarily convex, and quite frequently aren’t. Also, they can have holes, and quite frequently do. That rules out most common algorithms. It’s probably possible to triangulate everything, but I’m a little wary of cutting the map into even more microscopic shards; feel free to convince me otherwise.

Also, the map format technically allows absolutely any arbitrary combination of lines, so all of these are possible:

It would be nice to handle these gracefully somehow, or at least not crash on them. They’re usually total nonsense as far as the game is concerned, though the middle one does show up in the original stock maps a couple of times.

Another common trick is that lines might be part of the same shape on both sides:

The left example suggests that such a line is redundant and can simply be ignored without changing anything. The right example shows why this is a problem.

A common trick in vanilla Doom is the so-called self-referencing sector. Here, the edges of the inner yellow square all claim to be yellow — on both sides. The outer edges all claim to be blue only on the inside, as normal. The yellow square therefore doesn’t neighbor the blue square at all, because there are no edges that are yellow on one side and blue on the other. The effect in-game is that the yellow area is invisible, but still solid, so it can be used as an invisible bridge or invisible pit for various effects.

This does raise the question of exactly how Doom itself handles all these edge cases. Vanilla maps are preprocessed by a node builder and split into subsectors, which are all convex polygons. So for any given weird trick or broken geometry, the answer to “how does this behave” is: however the node builder deals with it.

Subsectors are built right into vanilla maps, so I could use those. The drawback is that they’re optional for maps targeting ZDoom (and maybe other ports as well?), because ZDoom has its own internal node builder. Also, relying on built nodes in general would make this code less useful for map editing, or generating, or whatever.

ZDoom’s node builder is open source, so I could bake it in? Or port it to Rust? (It’s only, ah, ten times bigger than the shape algorithm I ported.) It’d be interesting to have a fairly-correct reflection of how the game sees broken geometry, which is something no map editor really tries to do. Is it fast enough? Running it on the largest map I know to exist (MAP14 of Sunder) takes 1.4 seconds, which seems like a long time, but also that’s from scratch, and maybe it could be adapted to work incrementally…? Christ.

I’m not sure I have the time to flesh this out beyond a proof of concept anyway, so maybe this is all moot. But that’s all the more reason to avoid spending a lot of time on dead ends.

Raspberry Jam Big Birthday Weekend 2018 roundup

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/big-birthday-weekend-2018-roundup/

A couple of weekends ago, we celebrated our sixth birthday by coordinating more than 100 simultaneous Raspberry Jam events around the world. The Big Birthday Weekend was a huge success: our fantastic community organised Jams in 40 countries, covering six continents!

We sent the Jams special birthday kits to help them celebrate in style, and a video message featuring a thank you from Philip and Eben:

Raspberry Jam Big Birthday Weekend 2018

To celebrate the Raspberry Pi’s sixth birthday, we coordinated Raspberry Jams all over the world to take place over the Raspberry Jam Big Birthday Weekend, 3-4 March 2018. A massive thank you to everyone who ran an event and attended.

The Raspberry Jam photo booth

I put together code for a Pi-powered photo booth which overlaid the Big Birthday Weekend logo onto photos and (optionally) tweeted them. We included an arcade button in the Jam kits so they could build one — and it seemed to be quite popular. Some Jams put great effort into housing their photo booth:



Here are some of my favourite photo booth tweets:

RGVSA on Twitter

PiParty photo booth @RGVSA & @ @Nerdvana_io #Rjam

Denis Stretton on Twitter

The @SouthendRPIJams #PiParty photo booth

rpijamtokyo on Twitter

PiParty photo booth

Preston Raspberry Jam on Twitter

Preston Raspberry Jam Photobooth #RJam #PiParty

If you want to try out the photo booth software yourself, find the code on GitHub.
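The real thing lives in that repo; the sketch below just shows the general shape of it, using picamera, Pillow, gpiozero, and tweepy. The GPIO pin, filenames, logo image, and credentials here are placeholders, not the actual values the booth uses.

    from gpiozero import Button
    from picamera import PiCamera
    from PIL import Image
    import tweepy

    button = Button(14)                         # the arcade button from the kit
    camera = PiCamera()
    logo = Image.open("birthday-logo.png")      # transparent PNG to overlay

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    twitter = tweepy.API(auth)

    while True:
        button.wait_for_press()                 # take a photo on each press
        camera.capture("photo.jpg")
        photo = Image.open("photo.jpg")
        photo.paste(logo, (0, 0), logo)         # overlay the logo using its alpha
        photo.save("photo-with-logo.jpg")
        twitter.update_with_media("photo-with-logo.jpg", status="#PiParty")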

The great Raspberry Jam bake-off

Traditionally, in the UK, people have a cake on their birthday. And we had a few! We saw (and tasted) a great selection of Pi-themed cakes and other baked goods throughout the weekend:






Raspberry Jams everywhere

We always say that every Jam is different, but there’s a common and recognisable theme amongst them. It was great to see so many different venues around the world filling up with like-minded Pi enthusiasts, Raspberry Jam–branded banners, and Raspberry Pi balloons!

Europe

Sergio Martinez on Twitter

Thank you so much to all the attendees of the Ikana Jam in Krakow past Saturday! We shared fun experiences, some of them… also painful 😉 A big thank you to @Raspberry_Pi for these global celebrations! And a big thank you to @hubraum for their hospitality! #PiParty #rjam

NI Raspberry Jam on Twitter

We also had a super successful set of wearables workshops using @adafruit Circuit Playground Express boards and conductive thread at today’s @Raspberry_Pi Jam! Very popular! #PiParty

Suzystar on Twitter

My SenseHAT workshop, going well! @SouthendRPiJams #PiParty

Worksop College Raspberry Jam on Twitter

Learning how to scare the zombies in case of an apocalypse- it worked on our young learners #PiParty @worksopcollege @Raspberry_Pi https://t.co/pntEm57TJl

Africa

Rita on Twitter

Being one of the two places in Kenya where the #PiParty took place, it was an amazing time spending the day with this team and getting to learn and have fun. @TaitaTavetaUni and @Raspberry_Pi thank you for your support. @TTUTechlady @mictecttu ch

GABRIEL ONIFADE on Twitter

@GABONIAVERACITY #PiParty Lagos Raspberry Jam 2018 Special International Celebration – 6th Raspberry-Pi Big Birthday! Lagos Nigeria @Raspberry_Pi @ben_nuttall #RJam #RaspberryJam #raspberrypi #physicalcomputing #robotics #edtech #coding #programming #edTechAfrica #veracityhouse https://t.co/V7yLxaYGNx

North America

Heidi Baynes on Twitter

The Riverside Raspberry Jam @Vocademy is underway! #piparty

Brad Derstine on Twitter

The Philly & Pi #PiParty event with @Bresslergroup and @TechGirlzorg was awesome! The Scratch and Pi workshop was amazing! It was overall a great day of fun and tech!!! Thank you everyone who came out!

Houston Raspi on Twitter

Thanks everyone who came out to the @Raspberry_Pi Big Birthday Jam! Special thanks to @PBFerrell @estefanniegg @pcsforme @pandafulmanda @colnels @bquentin3 couldn’t’ve put on this amazing community event without you guys!

Merge Robotics 2706 on Twitter

We are back at @SciTechMuseum for the second day of @OttawaPiJam! Our robot Mergius loves playing catch with the kids! #pijam #piparty #omgrobots

South America

Javier Garzón on Twitter

This is how we wrapped up the #Raspberry Jam Big Birthday Weekend #Bogota 2018 #PiParty of #RaspberryJamBogota 2018 @Raspberry_Pi. See you on 7 March at #ArduinoDayBogota 2018 and #RaspberryJamBogota 2018

Asia

Fablab UP Cebu on Twitter

Happy 6th birthday, @Raspberry_Pi! Greetings all the way from CEBU,PH! #PiParty #IoTCebu Thanks @CebuXGeeks X Ramos for these awesome pics. #Fablab #UPCebu

福野泰介 on Twitter

The Raspberry Pi 6th birthday party is starting in Tokyo! At the PCN booth we have lots of exhibits, plus a kids’ IoT hackathon mini hands-on session connecting https://t.co/L6E7KgyNHF with IchigoJam, near Kamata Station in Tokyo https://t.co/yHEuqXHvqe #piparty #pipartytokyo #rjam #opendataday

Ren Camp on Twitter

Happy birthday @Raspberry_Pi! #piparty #iotcebu @coolnumber9 https://t.co/2ESVjfRJ2d

Oceania

Glenunga Raspberry Pi Club on Twitter

PiParty photo booth

Personally, I managed to get to three Jams over the weekend: two run by the same people who put on the first two Jams to ever take place, and also one brand-new one! The Preston Raspberry Jam team, who usually run their event on a Monday evening, wanted to do something extra special for the birthday, so they came up with the idea of putting on a Raspberry Jam Sandwich — on the Friday and Monday around the weekend! This meant I was able to visit them on Friday, then attend the Manchester Raspberry Jam on Saturday, and finally drop by the new Jam at Worksop College on my way home on Sunday.

Ben Nuttall on Twitter

I’m at my first Raspberry Jam #PiParty event of the big birthday weekend! @PrestonRJam has been running for nearly 6 years and is a great place to start the celebrations!

Ben Nuttall on Twitter

Back at @McrRaspJam at @DigInnMMU for #PiParty

Ben Nuttall on Twitter

Great to see mine & @Frans_facts Balloon Pi-Tay popper project in action at @worksopjam #rjam #PiParty https://t.co/GswFm0UuPg

Various members of the Foundation team attended Jams around the UK and US, and James from the Code Club International team visited AmsterJam.

hackerfemo on Twitter

Thanks to everyone who came to our Jam and everyone who helped out. @phoenixtogether thanks for amazing cake & hosting. Ademir you’re so cool. It was awesome to meet Craig Morley from @Raspberry_Pi too. #PiParty

Stuart Fox on Twitter

Great #PiParty today at the @cotswoldjam with bloody delicious cake and lots of raspberry goodness. Great to see @ClareSutcliffe @martinohanlon playing on my new pi powered arcade build:-)

Clare Sutcliffe on Twitter

Happy 6th Birthday @Raspberry_Pi from everyone at the #PiParty at #cotswoldjam in Cheltenham!

Code Club on Twitter

It’s @Raspberry_Pi 6th birthday and we’re celebrating by taking part in @amsterjam__! Happy Birthday Raspberry Pi, we’re so happy to be a part of the family! #PiParty

For more Jammy birthday goodness, check out the PiParty hashtag on Twitter!

The Jam makers!

A lot of preparation went into each Jam, and we really appreciate all the hard work the Jam makers put into making these events happen, on the Big Birthday Weekend and all year round. Thanks also to all the teams that sent us a group photo:

Lots of the Jams that took place were brand-new events, so we hope to see them continue throughout 2018 and beyond, growing the Raspberry Pi community around the world and giving more people, particularly youths, the opportunity to learn digital making skills.

Philip Colligan on Twitter

So many wonderful people in the @Raspberry_Pi community. Thanks to everyone at #PottonPiAndPints for a great afternoon and for everything you do to help young people learn digital making. #PiParty

Special thanks to ModMyPi for shipping the special Raspberry Jam kits all over the world!

Don’t forget to check out our Jam page to find an event near you! This is also where you can find free resources to help you get a new Jam started, and download free starter projects made especially for Jam activities. These projects are available in English, Français, Français Canadien, Nederlands, Deutsch, Italiano, and 日本語. If you’d like to help us translate more content into these and other languages, please get in touch!

PS Some of the UK Jams were postponed due to heavy snowfall, so you may find there’s a belated sixth-birthday Jam coming up where you live!

S Organ on Twitter

@TheMagP1 Ours was rescheduled until later in the Spring due to the snow but here is Babbage enjoying the snow!

The post Raspberry Jam Big Birthday Weekend 2018 roundup appeared first on Raspberry Pi.

Coding is for girls

Post Syndicated from magda original https://www.raspberrypi.org/blog/coding-is-for-girls/

Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.

“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.

In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.

Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.


Alien message in console

To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.

I wanted to learn more, but with so many options, I found myself a little overwhelmed. I’d never taught myself any technical skill before, and there was a lot of confusing jargon and new terms to get used to. What was HTML? CSS and JavaScript? What were databases, and how could I connect together all the dots and choose what I wanted to learn? Luckily I had support and was able to keep going.

At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.

Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.

Play with Raspberry Pi and quit your job

I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.

I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.

Magdalena Jadach on Twitter

#solderingiron Where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity

One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.

For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.

I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.


Where to start?

I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.

Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.

  • HTML and CSS are languages for describing, structuring, and styling web pages
  • You can learn JavaScript here and here
  • Raspberry Pi (obviously!) and our online learning projects
  • Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
  • Git is version control software that helps you to work on your own projects and collaborate with other developers
  • Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out

Coding gives you so much new inspiration: you learn new stuff constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or CoderDojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.

The post Coding is for girls appeared first on Raspberry Pi.

Transition from Scratch to Python with FutureLearn

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/futurelearn-scratch-to-python/

With the launch of our first new free online course of 2018 — Scratch to Python: Moving from Block- to Text-based Programming — two weeks away, I thought this would be a great opportunity to introduce you to the ins and outs of the course content so you know what to expect.

FutureLearn: Moving from Scratch to Python

Learn how to apply the thinking and programming skills you’ve learnt in Scratch to text-based programming languages like Python.

Take the plunge into text-based programming

The idea for this course arose from our conversations with educators who had set up a Code Club in their schools. Most people start a club by teaching Scratch, a block-based programming language, because it allows learners to drag and drop blocks of pre-written code into a window to create a program. The blocks automatically snap together, making it easy to build fun and educational projects that don’t require much troubleshooting. You can do almost anything a beginner could wish for with Scratch, even physical computing to control LEDs, buzzers, buttons, motors, and more!

Scratch to Python FutureLearn Raspberry Pi

However, on our face-to-face training programme Picademy, educators told us that they were finding it hard to engage children who had outgrown Scratch and needed a new challenge. It was easy for me to imagine: a young learner, who once felt confident about programming using Scratch, is now confused by the alien, seemingly awkward interface of Python. What used to take them minutes in Scratch now takes them hours to code, and they start to lose interest — not a good result, I’m sure you’ll agree. I wanted to help educators to navigate this period in their learners’ development, and so I’ve written a course that shows you how to take the programming and thinking skills you and your learners have developed in Scratch, and apply them to Python.

Scratch to Python FutureLearn Raspberry Pi

Who is the course for?

Educators from all backgrounds who are working with secondary school-aged learners. It will also be interesting to anyone who has spent time working with Scratch and wants to understand how programming concepts translate between different languages.

“It was great fun, and I thought that the ideas and resources would be great to use with Year 7 classes.”
Sue Grey, Classroom Teacher

What is covered?

After showing you the similarities and differences of Scratch and Python, and how the skills learned using one can be applied to the other, we will look at turning more complex Scratch scripts into Python programs. Through creating a Mad Libs game and developing a username generator, you will see how programs can be simplified in a text-based language. We will give you our top tips for debugging Python code, and you’ll have the chance to share your ideas for introducing more complex programs to your students.
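To give you a flavour of what that looks like (a simplified sketch rather than the actual course material), the heart of a Mad Libs game in Python is just a few input() calls dropped into a story string:

    # Ask for some words, then substitute them into a story.
    adjective = input("Give me an adjective: ")
    animal = input("Give me an animal: ")
    verb = input("Give me a verb: ")

    story = f"The {adjective} {animal} loved to {verb} at every Code Club."
    print(story)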

Scratch to Python FutureLearn Raspberry Pi

After that, we will look at different data types in Python and write a script to calculate how old you are in dog years. Finally, you’ll dive deeper into the possibilities of Python by installing and using external Python libraries to perform some amazing tasks.
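The dog years script is a nice excuse to talk about converting between data types; again as a sketch rather than the exact course code, it comes down to something like this:

    # input() always returns a string, so convert it before doing arithmetic.
    age = int(input("How old are you? "))
    dog_years = age * 7          # the usual rough conversion
    print(f"You are {dog_years} in dog years!")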

By the end of the course, you’ll be able to:

  • Transfer programming and thinking skills from Scratch to Python
  • Use fundamental Python programming skills
  • Identify errors in your Python code based on error messages, and debug your scripts
  • Produce tools to support students’ transition from block-based to text-based programming
  • Understand the power of text-based programming and what you can create with it

Where can I sign up?

The free four-week course starts on 12 March 2018, and you can sign up now on FutureLearn. While you’re there, be sure to check out our other free courses, such as Prepare to Run a Code Club, Teaching Physical Computing with a Raspberry Pi and Python, and our second new course Build a Makerspace for Young People — more information on it will follow in tomorrow’s blog post.

The post Transition from Scratch to Python with FutureLearn appeared first on Raspberry Pi.

Mission Space Lab flight status announced!

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/mission-space-lab-flight-status-announced/

In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA member and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).

Astro Pi Mission Space Lab logo

Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:

  • Life in space
    Making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
  • Life on Earth
    Making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.

ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Center in Cologne, has a message for all Mission Space Lab participants:

ESA astronaut Alexander Gerst congratulates Astro Pi 2017-18 winners


Flight status

We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!

But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive their experimental data back. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.

Belgium

Flight status achieved:

  • Team De Vesten, Campus De Vesten, Antwerpen
  • Ursa Major, CoderDojo Belgium, West-Vlaanderen
  • Special operations STEM, Sint-Claracollege, Antwerpen

Canada

Flight status achieved:

  • Let It Grow, Branksome Hall, Toronto
  • The Dark Side of Light, Branksome Hall, Toronto
  • Genie On The ISS, Branksome Hall, Toronto
  • Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
  • The Broadviewnauts, Broadview, Ottawa

Czech Republic

Flight status achieved:

  • BLEK, Střední Odborná Škola Blatná, Strakonice

Denmark

Flight status achieved:

  • 2y Infotek, Nærum Gymnasium, Nærum
  • Equation Quotation, Allerød Gymnasium, Lillerød
  • Team Weather Watchers, Allerød Gymnasium, Allerød
  • Space Gardners, Nærum Gymnasium, Nærum

Finland

Flight status achieved:

  • Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää

France

Flight status achieved:

  • INC2, Lycée Raoul Follereau, Bourgogne
  • Space Project SP4, Lycée Saint-Paul IV, Reunion Island
  • Dresseurs2Python, clg Albert CAMUS, essonne
  • Lazos, Lycée Aux Lazaristes, Rhone
  • The space nerds, Lycée Saint André Colmar, Alsace
  • Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie
  • AstroMega, Institut de Genech, north
  • Al’Crew, Lycée Algoud-Laffemas, Auvergne-Rhône-Alpes
  • AstroPython, clg Albert CAMUS, essonne
  • Aruden Corp, Lycée Pablo Neruda, Normandie
  • HeroSpace, clg Albert CAMUS, essonne
  • GalaXess [R]evolution, Lycée Saint Cricq, Nouvelle-Aquitaine
  • AstroBerry, clg Albert CAMUS, essonne
  • Ambitious Girls, Lycée Adam de Craponne, PACA

Germany

Flight status achieved:

  • Uschis, St. Ursula Gymnasium Freiburg im Breisgau, Breisgau
  • Dosi-Pi, Max-Born-Gymnasium Germering, Bavaria

Greece

Flight status achieved:

  • Deep Space Pi, 1o Epal Grevenon, Grevena
  • Flox Team, 1st Lyceum of Kifissia, Attiki
  • Kalamaria Space Team, Second Lyceum of Kalamaria, Central Macedonia
  • The Earth Watchers, STEM Robotics Academy, Thessaly
  • Celestial_Distance, Gymnasium of Kanithos, Sterea Ellada – Evia
  • Pi Stars, Primary School of Rododaphne, Achaias
  • Flarions, 5th Primary School of Salamina, Attica

Ireland

Flight status achieved:

  • Plant Parade, Templeogue College, Leinster
  • For Peats Sake, Templeogue College, Leinster
  • CoderDojo Clonakilty, Co. Cork

Italy

Flight status achieved:

  • Trentini DOP, CoderDojo Trento, TN
  • Tarantino Space Lab, Liceo G. Tarantino, BA
  • Murgia Sky Lab, Liceo G. Tarantino, BA
  • Enrico Fermi, Liceo XXV Aprile, Veneto
  • Team Lampone, CoderDojoTrento, TN
  • GCC, Gali Code Club, Trentino Alto Adige/Südtirol
  • Another Earth, IISS “Laporta/Falcone-Borsellino”
  • Anti Pollution Team, IIS “L. Einaudi”, Sicily
  • e-HAND, Liceo Statale Scientifico e Classico ‘Ettore Majorana’, Lombardia
  • scossa team, ITTS Volterra, Venezia
  • Space Comet Sisters, Scuola don Bosco, Torino

Luxembourg

Flight status achieved:

  • Spaceballs, Atert Lycée Rédange, Diekirch
  • Aline in space, Lycée Aline Mayrisch Luxembourg (LAML)

Poland

Flight status achieved:

  • AstroLeszczynPi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Astrokompasy, High School nr XVII in Wrocław named after Agnieszka Osiecka, Lower Silesian
  • Cosmic Investigators, Publiczna Szkoła Podstawowa im. Św. Jadwigi Królowej w Rzezawie, Małopolska
  • ApplePi, III Liceum Ogólnokształcące im. prof. T. Kotarbińskiego w Zielonej Górze, Lubusz Voivodeship
  • ELE Society 2, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • ELE Society 1, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • SpaceOn, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Dewnald Ducks, III Liceum Ogólnokształcące w Zielonej Górze, lubuskie
  • Nova Team, III Liceum Ogolnoksztalcace im. prof. T. Kotarbinskiego, lubuskie district
  • The Moons, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Live, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Storm Hunters, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • DeepSky, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Small Explorers, ZPO Konina, Malopolska
  • AstroZSCL, Zespół Szkół w Czerwionce-Leszczynach, śląskie
  • Orchestra, Szkola Podstawowa nr 12 w Jasle, Podkarpackie
  • ApplePi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Green Crew, Szkoła Podstawowa nr 2 w Czeladzi, Silesia

Portugal

Flight status achieved:

  • Magnetics, Escola Secundária João de Deus, Faro
  • ECA_QUEIROS_PI, Secondary School Eça de Queirós, Lisboa
  • ESDMM Pi, Escola Secundária D. Manuel Martins, Setúbal
  • AstroPhysicists, EB 2,3 D. Afonso Henriques, Braga

Romania

Flight status achieved:

  • Caelus, “Tudor Vianu” National High School of Computer Science, District One
  • CodeWarriors, “Tudor Vianu” National High School of Computer Science, District One
  • Dark Phoenix, “Tudor Vianu” National High School of Computer Science, District One
  • ShootingStars, “Tudor Vianu” National High School of Computer Science, District One
  • Astro Pi Carmen Sylva 2, Liceul Teoretic “Carmen Sylva”, Constanta
  • Astro Meridian, Astro Club Meridian 0, Bihor

Slovenia

Flight status achieved:

  • astrOSRence, OS Rence
  • Jakopičevca, Osnovna šola Riharda Jakopiča, Ljubljana

Spain

Flight status achieved:

  • Exea in Orbit, IES Cinco Villas, Zaragoza
  • Valdespartans, IES Valdespartera, Zaragoza
  • Valdespartans2, IES Valdespartera, Zaragoza
  • Astropithecus, Institut de Bruguers, Barcelona
  • SkyPi-line, Colegio Corazón de María, Asturias
  • ClimSOLatic, Colegio Corazón de María, Asturias
  • Científicosdelsaz, IES Profesor Pablo del Saz, Málaga
  • Canarias 2, IES El Calero, Las Palmas
  • Dreamers, M. Peleteiro, A Coruña
  • Canarias 1, IES El Calero, Las Palmas

The Netherlands

Flight status achieved:

  • Team Kaki-FM, Rkbs De Reiger, Noord-Holland

United Kingdom

Flight status achieved:

  • Binco, Teignmouth Community School, Devon
  • 2200 (Saddleworth), Detached Flight Royal Air Force Air Cadets, Lancashire
  • Whatevernext, Albyn School, Highlands
  • GraviTeam, Limehurst Academy, Leicestershire
  • LSA Digital Leaders, Lytham St Annes Technology and Performing Arts College, Lancashire
  • Mead Astronauts, Mead Community Primary School, Wiltshire
  • STEAMCademy, Castlewood Primary School, West Sussex
  • Lux Quest, CoderDojo Banbridge, Co. Down
  • Temparatus, Dyffryn Taf, Carmarthenshire
  • Discovery STEMers, Discovery STEM Education, South Yorkshire
  • Code Inverness, Code Club Inverness, Highland
  • JJB, Ashton Sixth Form College, Tameside
  • Astro Lab, East Kent College, Kent
  • The Life Savers, Scratch and Python, Middlesex
  • JAAPiT, Taylor Household, Nottingham
  • The Heat Guys, The Archer Academy, Greater London
  • Astro Wantenauts, Wantage C of E Primary School, Oxfordshire
  • Derby Radio Museum, Radio Communication Museum of Great Britain, Derbyshire
  • Bytesyze, King’s College School, Cambridgeshire

Other

Flight status achieved:

  • Intellectual Savage Stars, Lycée français de Luanda, Luanda

 

Congratulations to all successful teams! We are looking forward to reading your reports.

The post Mission Space Lab flight status announced! appeared first on Raspberry Pi.

Community Profile: Estefannie Explains It All

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-estefannie/

This column is from The MagPi issue 59. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

“Hey, world!” Estefannie exclaims, a wide grin across her face as the camera begins to roll for another YouTube tutorial video. With a growing number of followers and wonderful support from her fans, Estefannie is building a solid reputation as an online maker, creating unique, fun content accessible to all.

A woman sitting at a desk with a laptop and papers — Estefannie Explains it All Raspberry Pi

It’s as if she was born into performing and making for an audience, but this fun, enjoyable journey to social media stardom came not from a desire to be in front of the camera, but rather as a unique approach to her own learning. While studying, Estefannie decided the best way to confirm her knowledge of a subject was to create an educational video explaining it. If she could teach a topic successfully, she knew she’d retained the information. And so her YouTube channel, Estefannie Explains It All, came into being.

Note taking — Estefannie Explains it All

Her first videos featured pages of notes with voice-over explanations of data structure and algorithm analysis. Then she moved in front of the camera, and expanded her skills in the process.

But YouTube isn’t her only outlet. With nearly 50,000 followers, Estefannie’s Instagram game is strong, adding to an increasing number of female coders taking to the platform. Across her Instagram grid, you’ll find insights into her daily routine, from programming on location for work to behind-the-scenes troubleshooting as she begins to create another tutorial video. It’s hard work, with content creation for both Instagram and YouTube forever on her mind as she continues to work and progress successfully as a software engineer.

A woman showing off a game on a tablet — Estefannie Explains it All Raspberry Pi

As a thank you to her Instagram fans for helping her reach 10,000 followers, Estefannie created a free game for Android and iOS called Gravitris — imagine Tetris with balance issues!

Estefannie was born and raised in Mexico, with ambitions to become a graphic designer and animator. However, a documentary on coding at Pixar, and the beauty of Merida’s hair in Brave, opened her mind to the opportunities of software engineering in animation. She altered her career path, moved to the United States, and switched to a Computer Science course.

A woman wearing safety goggles hugging a keyboard Estefannie Explains it All Raspberry Pi

With a constant desire to make and to learn, Estefannie combines her software engineering profession with her hobby to create fun, exciting content for YouTube.

While studying, Estefannie started a Computer Science Girls Club at the University of Houston, Texas, and she found herself eager to put more time and effort into the movement to increase the percentage of women in the industry. The club was a success, and still is to this day. While Estefannie has handed over the reins, she’s still very involved in the cause.

Through her YouTube videos, Estefannie continues the theme of inclusion, with every project offering a warm sense of approachability for all, regardless of age, gender, or skill. From exploring Scratch and Makey Makey with her young niece and nephew to creating her own Disney ‘Made with Magic’ backpack for a trip to Disney World, Florida, Estefannie’s videos are essentially a documentary of her own learning process, produced so viewers can learn with her — and learn from her mistakes — to create their own tech wonders.

Using the Raspberry Pi, she’s been able to broaden her skills and, in turn, her projects, creating a home-automated gingerbread house at Christmas, building a GPS-controlled GoPro for her trip to London, and making everyone’s life better with an Internet Button–controlled French press.

Estefannie Explains it All Raspberry Pi Home Automated Gingerbread House

Estefannie’s automated gingerbread house project was a labour of love, with electronics, wires, and candy strewn across both her living room and kitchen for weeks before completion. While she was already a skilled programmer, the world of physical digital making was still fairly new for Estefannie. Having ditched her hot glue gun in favour of a soldering iron in a previous video, she continued to experiment and try out new, interesting techniques that are now second nature to many members of the maker community. With the gingerbread house, Estefannie was able to research and apply techniques such as light controls, servos, and app making, although the latter was already firmly within her skill set. The result? A fun video of ups and downs that resulted in a wonderful, festive treat. She even gave her holiday home its own solar panel!

Estefannie Explains It All (@estefanniegg) on Instagram

A DAY AT RASPBERRY PI TOWERS!! LINK IN BIO ⚡🎥 @raspberrypifoundation

And that’s just the beginning of her adventures with Pi…but we won’t spoil her future plans by telling you what’s coming next. Sorry! However, since this article was written last year, Estefannie has released a few more Pi-based project videos, plus some awesome interviews and live-streams with other members of the maker community such as Simone Giertz. She even made us an awesome video for our Raspberry Pi YouTube channel! So be sure to check out her latest releases.

Estefannie Explains It All (@estefanniegg) on Instagram

Best day yet!! I got to hangout, play Jenga with a huge arm robot, and have afternoon tea with @simonegiertz and robots!! 🤖👯 #shittyrobotnation

While many wonderful maker videos show off a project without much explanation, or expect a certain level of skill from viewers hoping to recreate the project, Estefannie’s videos exist almost within their own category. We can’t wait to see where Estefannie Explains It All goes next!

The post Community Profile: Estefannie Explains It All appeared first on Raspberry Pi.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and more accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million series E last November, bringing its total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or have totally gone away. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. My system then routes it to be stored where I need on Amazon S3 by Amazon Kinesis Data Firehose. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.

Setting up a Kinesis stream can be done with a few clicks, and the same for Kinesis Firehose. Firehose can be configured to automatically consume data from a Kinesis Data Stream, and then write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console, as shown following.
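For illustration, a Kinesis-triggered handler in the spirit of the one described above looks roughly like this. This is a simplified sketch, not my production code; the delivery stream name and the “cleansing” step are placeholders.

    import base64
    import json

    import boto3

    firehose = boto3.client("firehose")

    def handler(event, context):
        # Kinesis delivers records base64-encoded in the Lambda event.
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

            # Cleanse/aggregate here; this sketch just normalises one field.
            payload["event_type"] = payload.get("event_type", "unknown").lower()

            # Hand the cleaned record to Firehose for delivery to S3.
            firehose.put_record(
                DeliveryStreamName="igaming-events",
                Record={"Data": (json.dumps(payload) + "\n").encode("utf-8")},
            )
        return {"records_processed": len(event["Records"])}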

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received, but compressed, encrypted, and optimized for reading.

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.
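Kicking off an ad hoc Athena query over the catalogued S3 data takes only a few API calls. Here is a rough boto3 sketch; the database, table, and results bucket names are made up.

    import time

    import boto3

    athena = boto3.client("athena")

    query = athena.start_query_execution(
        QueryString="SELECT event_type, count(*) FROM events GROUP BY event_type",
        QueryExecutionContext={"Database": "igaming_raw"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    query_id = query["QueryExecutionId"]

    # Poll until the query finishes, then print the result rows.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        results = athena.get_query_results(QueryExecutionId=query_id)
        for row in results["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])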

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service, or a matter of a simple API call, and crucially doesn’t require me to change one line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products. He is now the Head of Big Data and BI at Cerberus Technologies.


SUPER game night 3: GAMES MADE QUICK??? 2.0

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/23/super-game-night-3-games-made-quick-2-0/

Game night continues with a smorgasbord of games from my recent game jam, GAMES MADE QUICK??? 2.0!

The idea was to make a game in only a week while watching AGDQ, as an alternative to doing absolutely nothing for a week while watching AGDQ. (I didn’t submit a game myself; I was chugging along on my Anise game, which isn’t finished yet.)

I can’t very well run a game jam and not play any of the games, so here’s some of them in no particular order! Enjoy!

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Weather Quest, by timlmul

short · rpg · jan 2018 · (lin)/mac/win · free on itch · jam entry

Weather Quest is its author’s first shipped game, written completely from scratch (the only vendored code is a micro OO base). It’s very short, but as someone who has also written LÖVE games completely from scratch, I can attest that producing something this game-like in a week is a fucking miracle. Bravo!

For reference, a week into my first foray, I think I was probably still writing my own Tiled importer like an idiot.

Only Mac and Windows builds are on itch, but it’s a LÖVE game, so Linux folks can just grab a zip from GitHub and throw that at love.

FINAL SCORE: ⛅☔☀

Pancake Numbers Simulator, by AnorakThePrimordial

short · sim · jan 2017 · lin/mac/win · free on itch · jam entry

Given a stack of N pancakes (of all different sizes and in no particular order), the Nth pancake number is the most flips you could possibly need to sort the pancakes in order with the smallest on top. A “flip” is sticking a spatula under one of the pancakes and flipping the whole sub-stack over. There’s, ah, a video embedded on the game page with some visuals.

Anyway, this game lets you simulate sorting a stack via pancake flipping, which is surprisingly satisfying! I enjoy cleaning up little simulated messes, such as… incorrectly-sorted pancakes, I guess?

This probably doesn’t work too well as a simulator for solving the general problem — you’d have to find an optimal solution for every permutation of N pancakes to be sure you were right. But it’s a nice interactive illustration of the problem, and if you know the pancake number for your stack size of choice (which I wish the game told you — for seven pancakes, it’s 8), then trying to restore a stack in that many moves makes for a nice quick puzzle.

FINAL SCORE: \(\frac{18}{11}\)

Framed Animals, by chridd

short · metroidvania · jan 2017 · web/win · free on itch · jam entry

The concept here was to kill the frames, save the animals, which is a delightfully literal riff on a long-running AGDQ/SGDQ donation incentive — people vote with their dollars to decide whether Super Metroid speedrunners go out of their way to free the critters who show you how to walljump and shinespark. Super Metroid didn’t have a showing at this year’s AGDQ, and so we have this game instead.

It’s rough, but clever, and I got really into it pretty quickly — each animal you save gives you a new ability (in true Metroid style), and you get to test that ability out by playing as the animal, with only that ability and no others, to get yourself back to the most recent save point.

I did, tragically, manage to get myself stuck near what I think was about to be the end of the game, so some of the animals will remain framed forever. What an unsatisfying conclusion.

Gravity feels a little high given the size of the screen, and like most tile-less platformers, there’s not really any way to gauge how high or long your jump is before you leap. But I’m only even nitpicking because I think this is a great idea and I hope the author really does keep working on it.

FINAL SCORE: $136,596.69

Battle 4 Glory, by Storyteller Games

short · fighter · jan 2017 · win · free on itch · jam entry

This is a Smash Bros-style brawler, complete with the four players, the 2D play area in a 3D world, and the random stage obstacles showing up. I do like the Smash style, despite not otherwise being a fan of fighting games, so it’s nice to see another game chase that aesthetic.

Alas, that’s about as far as it got — which is pretty far for a week of work! I don’t know what more to say, though. The environments are neat, but unless I’m missing something, the only actions at your disposal are jumping and very weak melee attacks. I did have a good few minutes of fun fruitlessly mashing myself against the bumbling bots, as you can see.

FINAL SCORE: 300%

Icnaluferu Guild, Year Sixteen, by CHz

short · adventure · jan 2017 · web · free on itch · jam entry

Here we have the first of several games made with bitsy, a micro game making tool that basically only supports walking around, talking to people, and picking up items.

I tell you this because I think half of my appreciation for this game is in the ways it wriggled against those limits to emulate a Zelda-like dungeon crawler. Everything in here is totally fake, and you can’t really understand just how fake unless you’ve tried to make something complicated with bitsy.

It’s pretty good. The dialogue is entertaining (the rest of your party develops distinct personalities solely through oneliners, somehow), the riffs on standard dungeon fare are charming, and the Link’s Awakening-esque perspective walls around the edges of each room are fucking glorious.

FINAL SCORE: 2 bits

The Lonely Tapes, by JTHomeslice

short · rpg · jan 2017 · web · free on itch · jam entry

Another bitsy entry, this one sees you play as a Wal— sorry, a JogDawg, which has lost its cassette tapes and needs to go recover them!

(A cassette tape is like a VHS, but for music.)

(A VHS is—)

I have the sneaking suspicion that I missed out on some musical in-jokes, due to being uncultured swine. I still enjoyed the game — it’s always clear when someone is passionate about the thing they’re writing about, and I could tell I was awash in that aura even if some of it went over my head. You know you’ve done good if someone from way outside your sphere shows up and still has a good time.

FINAL SCORE: Nine… Inch Nails? They’re a band, right? God I don’t know write your own damn joke

Pirate Kitty-Quest, by TheKoolestKid

short · adventure · jan 2017 · win · free on itch · jam entry

I completely forgot I’d even given “my birthday” and “my cat” as mostly-joking jam themes until I stumbled upon this incredible gem. I don’t think — let me just check here and — yeah no this person doesn’t even follow me on Twitter. I have no idea who they are?

BUT THEY MADE A GAME ABOUT ANISE AS A PIRATE, LOOKING FOR TREASURE

PIRATE. ANISE

PIRATE ANISE!!!

This game wins the jam, hands down. 🏆

FINAL SCORE: Yarr, eight pieces o’ eight

CHIPS Mario, by NovaSquirrel

short · platformer · jan 2017 · (lin/mac)/win · free on itch · jam entry

You see this? This is fucking witchcraft.

This game is made with MegaZeux. MegaZeux games look like THIS. Text-mode, bound to a grid, with two colors per cell. That’s all you get.

Until now, apparently?? The game is a tech demo of “unbound” sprites, which can be drawn on top of the character grid without being aligned to it. And apparently have looser color restrictions.

The collision is a little glitchy, which isn’t surprising for a MegaZeux platformer; I had some fun interactions with platforms a couple times. But hey, goddamn, it’s free-moving Mario, in MegaZeux, what the hell.

(I’m looking at the most recently added games on DigitalMZX now, and I notice that not only is this game in the first slot, but NovaSquirrel’s MegaZeux entry for Strawberry Jam last February is still in the seventh slot. RIP, MegaZeux. I’m surprised a major feature like this was even added if the community has largely evaporated?)

FINAL SCORE: n/a, disqualified for being probably summoned from the depths of Hell

d!¢< pic, by 573 Games

short · story · jan 2017 · web · free on itch · jam entry

This is a short story about not sending dick pics. It’s very short, so I can’t say much without spoiling it, but: you are generally prompted to either text something reasonable, or send a dick pic. You should not send a dick pic.

It’s a fascinating artifact, not because of the work itself, but because it’s so terse that I genuinely can’t tell what the author was even going for. And this is the kind of subject where the author was, surely, going for something. Right? But was it genuinely intended to be educational, or was it tongue-in-cheek about how some dudes still don’t get it? Or is it side-eying the player who clicks the obviously wrong option just for kicks, which is the same reason people do it for real? Or is it commentary on how “send a dick pic” is a literal option for every response in a real conversation, too, and it’s not that hard to just not do it — unless you are one of the kinds of people who just feels a compulsion to try everything, anything, just because you can? Or is it just a quick Twine and I am way too deep in this? God, just play the thing, it’s shorter than this paragraph.

I’m also left wondering when it is appropriate to send a dick pic. Presumably there is a correct time? Hopefully the author will enter Strawberry Jam 2 to expound upon this.

FINAL SCORE: 3½” 😉

Marble maze, by Shtille

short · arcade · jan 2017 · win · free on itch · jam entry

Ah, hm. So this is a maze navigated by rolling a marble around. You use WASD to move the marble, and you can also turn the camera with the arrow keys.

The trouble is… the marble’s movement is always relative to the world, not the camera. That means if you turn the camera 30° and then try to move the marble, it’ll move at a 30° angle from your point of view.

That makes navigating a maze, er, difficult.

Camera-relative movement is the kind of thing I take so much for granted that I wouldn’t even think to do otherwise, and I think it’s valuable to look at surprising choices that violate fundamental conventions, so I’m trying to take this as a nudge out of my comfort zone. What could you design in an interesting way that used world-relative movement? Probably not the player, but maybe something else in the world, as long as you had strong landmarks? Hmm.

FINAL SCORE: ᘔ

Refactor: flight, by fluffy

short · arcade · jan 2017 · lin/mac/win · free on itch · jam entry

Refactor is a game album, which is rather a lot what it sounds like, and Flight is one of the tracks. Which makes this a single, I suppose.

It’s one of those games where you move down an oddly-shaped tunnel trying not to hit the walls, but with some cute twists. Coins and gems hop up from the bottom of the screen in time with the music, and collecting them gives you points. Hitting a wall costs you some points and kills your momentum, but I don’t think outright losing is possible, which is great for me!

Also, the monk cycles through several animal faces. I don’t know why, and it’s very good. One of those odd but memorable details that sits squarely on the intersection of abstract, mysterious, and a bit weird, and refuses to budge from that spot.

The music is great too? Really chill all around.

FINAL SCORE: 🎵🎵🎵🎵

The Adventures of Klyde

short · adventure · jan 2017 · web · free on itch · jam entry

Another bitsy game, this one starring a pig (humorously symbolized by a giant pig nose with ears) who must collect fruit and solve some puzzles.

This is charmingly nostalgic for me — it reminds me of some standard fare in engines like MegaZeux, where the obvious things to do when presented with tiles and pickups were to make mazes. I don’t mean that in a bad way; the maze is the fundamental environmental obstacle.

A couple places in here felt like invisible teleport mazes I had to brute-force, but I might have been missing a hint somewhere. I did make it through with only a little trouble, but alas — I stepped in a bad warp somewhere and got sent to the upper left corner of the starting screen, which is surrounded by walls. So Klyde’s new life is being trapped eternally in a nowhere space.

FINAL SCORE: 19/20 apples

And more

That was only a third of the games, and I don’t think even half of the ones I’ve played. I’ll have to do a second post covering the rest of them? Maybe a third?

Or maybe this is a ludicrous format for commenting on several dozen games and I should try to narrow it down to the ones that resonated the most for Strawberry Jam 2? Maybe??

Coolest Projects: for young people across the Raspberry Pi community

Post Syndicated from Rosa Langhammer original https://www.raspberrypi.org/blog/coolest-projects-young-people-raspberry-pi-community/

Coolest Projects is a world-leading annual showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. Young people come to the event to exhibit the cool ideas they have been working on throughout the year. And from 2018, Coolest Projects is open to young people across the Raspberry Pi community.

Video: Coolest Projects 2016 Highlights (find out more at http://coolestprojects.org/)

A huge fair for digital making

When Raspberry Pi’s Philip and Ben first visited Coolest Projects, they were blown away by the scope of the event, the number of children and young people who had travelled to Dublin to share their work, and the commitment they demonstrated to work ranging from Scratch projects to home-made hovercraft.

Coolest Projects International 2018 will be held in Dublin, Ireland, on Saturday 26 May. Participants will travel from all over the world to take part in a festival of creativity and tech. We hope you’ll be among them!

Montage of photos from Coolest Projects 2016: a large space with lots of people, mostly children, sharing projects, socialising, and discussing

“It’s a huge fair especially for coding and digital tech – it’s massive and it’s amazing!”

Coolest Projects International and Coolest Projects UK

As well as the flagship international event in Dublin, Ireland, there are regional events in other countries. All these events are now open to makers and creators across the Raspberry Pi community, from Dojos, Code Clubs, and Raspberry Jams.

This year, for the first time, we are bringing Coolest Projects to the UK for a spectacular regional event! Coolest Projects UK will be held at Here East in London on Saturday 28 April. We’re looking forward to discovering over 100 projects that young people have designed and built, and seeing them share their ideas and their passion for technology, make new friends, and learn from one another.

A young boy in a CoderDojo Ninja T-shirt shows another young boy his project, both concentrating intently

Fierce focus at Coolest Projects

Who can take part?

If you’re up to 18 years of age and you’re in primary, secondary, or further education, you can join in. You can work as an individual or as part of a team of up to five. All projects are welcome, whether you’re a beginner or a seasoned expert.

You must be able to attend the event that you’re entering, whether Coolest Projects International or a regional event. Getting together with other makers and their fantastic projects is a really important and exciting part of the event, so you can’t take part with an online-only or video-only entry. There are a few rules to make sure everything runs smoothly and fairly, and you can read them here.

A girl in a CoderDojo Ninja T shirt proudly holds the rocket she has built; it's as long as she is tall

Wiktoria Jarymowicz from Poland presents the rocket she built at Coolest Projects

How do I join in?

Your project should fit into one of six broad categories, covering everything from Scratch to hardware projects. If you’ve made something with tech, or you’ve got a project idea, it will probably fit into one of them! Once you’ve picked your project, you need to register it and apply for your space at the event. You can register for Coolest Projects International 2018 right now, and registration for Coolest Projects UK 2018 will open on Wednesday: join our email list to get an update when it does.

How will you choose who gets a place?

There are places available for 750 projects, and our goal is to have enough room for everyone who wants to come. If more makers want to bring their projects than there are places available, we’ll select entries to show a balance of projects from different regions and different parts of our communities, from groups and individuals, and from girls and boys, as well as a good mixture of projects across different categories.

Poster setting out the process of planning and building a project in six stages, and showing the date of this year's Coolest Projects International: 26 May 2018

I need help to get started, or help to get there

To help get your ideas flowing and guide you through your project, we’ve prepared a set of How to build a project worksheets. And if you’d like to attend Coolest Projects International, but the cost of travel is a problem, you can apply for a travel bursary by 31 January.

Coolest Projects is about rewarding creativity, and we know the Raspberry Pi community has that in spades. It’s about having an idea and making it a reality using the skills you have, whether this is your first project or your fifteenth. We can’t wait to see you at Coolest Projects UK or Coolest Projects International this year!

The post Coolest Projects: for young people across the Raspberry Pi community appeared first on Raspberry Pi.

Recent EC2 Goodies – Launch Templates and Spread Placement

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/recent-ec2-goodies-launch-templates-and-spread-placement/

We launched some important new EC2 instance types and features at AWS re:Invent. I’ve already told you about the M5, H1, T2 Unlimited and Bare Metal instances, and about Spot features such as Hibernation and the New Pricing Model. Randall told you about the Amazon Time Sync Service. Today I would like to tell you about two of the features that we launched: Spread placement groups and Launch Templates. Both features are available in the EC2 Console and from the EC2 APIs, and can be used in all of the AWS Regions in the “aws” partition.

Launch Templates
You can use launch templates to store the instance, network, security, storage, and advanced parameters that you use to launch EC2 instances, and can also include any desired tags. Each template can include any desired subset of the full collection of parameters. You can, for example, define common configuration parameters such as tags or network configurations in a template, and allow the other parameters to be specified as part of the actual launch.

Templates give you the power to set up a consistent launch environment that spans instances launched in On-Demand and Spot form, as well as through EC2 Auto Scaling and as part of a Spot Fleet. You can use them to implement organization-wide standards and to enforce best practices, and you can give your IAM users the ability to launch instances via templates while withholding the ability to do so via the underlying APIs.

Templates are versioned and you can use any desired version when you launch an instance. You can create templates from scratch, base them on the previous version, or copy the parameters from a running instance.

Here’s how you create a launch template in the Console:

Here’s how to include network interfaces, storage volumes, tags, and security groups:

And here’s how to specify advanced and specialized parameters:

You don’t have to specify values for all of these parameters in your templates; enter the values that are common to multiple instances or launches and specify the rest at launch time.

When you click Create launch template, the template is created and can be used to launch On-Demand instances, create Auto Scaling Groups, and create Spot Fleets:

The Launch Instance button now gives you the option to launch from a template:

Simply choose the template and the version, and finalize all of the launch parameters:

You can also manage your templates and template versions from the Console:

To learn more about this feature, read Launching an Instance from a Launch Template.
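
The console is not the only way in; a minimal boto3 sketch of the same flow might look like this (the template name, AMI ID, and tag are placeholders of mine, not values from the post):

import boto3

ec2 = boto3.client('ec2')

# Store the parameters that are common to every launch in a template.
ec2.create_launch_template(
    LaunchTemplateName='web-server',
    LaunchTemplateData={
        'ImageId': 'ami-12345678',      # placeholder AMI
        'InstanceType': 't2.micro',
        'TagSpecifications': [{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'team', 'Value': 'web'}],
        }],
    },
)

# Launch an On-Demand instance from the latest version of the template.
ec2.run_instances(
    MinCount=1,
    MaxCount=1,
    LaunchTemplate={'LaunchTemplateName': 'web-server', 'Version': '$Latest'},
)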

Spread Placement Groups
Spread placement groups indicate that you do not want the instances in the group to share the same underlying hardware. Applications that rely on a small number of critical instances can launch them in a spread placement group to reduce the odds that one hardware failure will impact more than one instance. Here are a couple of things to keep in mind when you use spread placement groups:

  • Availability Zones – A single spread placement group can span multiple Availability Zones. You can have a maximum of seven running instances per Availability Zone per group.
  • Unique Hardware – Launch requests can fail if there is insufficient unique hardware available. The situation changes over time as overall usage changes and as we add additional hardware; you can retry failed requests at a later time.
  • Instance Types – You can launch a wide variety of M4, M5, C3, R3, R4, X1, X1e, D2, H1, I2, I3, HS1, F1, G2, G3, P2, and P3 instance types in spread placement groups.
  • Reserved Instances – Instances launched into a spread placement group can make use of reserved capacity. However, you cannot currently reserve capacity for a placement group and could receive an ICE (Insufficient Capacity Error) even if you have some RIs available.
  • Applicability – You cannot use spread placement groups in conjunction with Dedicated Instances or Dedicated Hosts.

You can create and use spread placement groups from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs. The console has a new feature that will help you to learn how to use the command line:

You can specify an existing placement group or create a new one when you launch an EC2 instance:

To learn more, read about Placement Groups.
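
For completeness, here is a hedged boto3 sketch of creating a spread placement group and launching into it (the group name, AMI ID, and instance type are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Create a placement group that keeps instances on distinct hardware.
ec2.create_placement_group(GroupName='critical-spread', Strategy='spread')

# Launch an instance into the group.
ec2.run_instances(
    ImageId='ami-12345678',             # placeholder AMI
    InstanceType='m5.large',
    MinCount=1,
    MaxCount=1,
    Placement={'GroupName': 'critical-spread'},
)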

Jeff;

[$] Monitoring with Prometheus 2.0

Post Syndicated from corbet original https://lwn.net/Articles/744410/rss

Prometheus is a monitoring tool
built from scratch by SoundCloud in 2012. It works by pulling metrics from
monitored services and storing them in a time series database (TSDB). It
has a powerful query language to inspect that database, create alerts, and
plot basic graphs. Those graphs can then be used to detect anomalies or
trends for (possibly automated) resource provisioning. Prometheus also has
extensive service discovery features and supports high availability
configurations.

That’s what the brochure says, anyway; let’s see how it works in the hands
of an old grumpy system administrator. I’ll be drawing comparisons
with Munin and Nagios frequently because those are the tools I have
used for over a decade in monitoring Unix clusters.
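
As a toy illustration of the pull model described above (my sketch, not something from the article), a service exposes its metrics over HTTP with the Python prometheus_client library and Prometheus scrapes that endpoint on its own schedule:

import random
import time

from prometheus_client import Counter, start_http_server

# A counter that Prometheus will scrape from http://<host>:8000/metrics.
REQUESTS = Counter('demo_requests_total', 'Total demo requests handled')

if __name__ == '__main__':
    start_http_server(8000)
    while True:
        REQUESTS.inc()                  # pretend we handled a request
        time.sleep(random.random())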

Eevee gained 2791 experience points

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/15/eevee-gained-2791-experience-points/

Eevee grew to level 31!

A year strongly defined by mixed success! Also, a lot of video games.

I ran three game jams, resulting in a total of 157 games existing that may not have otherwise, which is totally mindblowing?!

For GAMES MADE QUICK???, glip and I made NEON PHASE, a short little exploratory platformer. Honestly, I should give myself more credit for this and the rest of the LÖVE games I’ve based on the same codebase — I wrote a physics engine (and everything else!) from scratch and it has held up remarkably well for a variety of different uses.

I successfully finished an HD version of Isaac’s Descent using my LÖVE engine, though it doesn’t have anything new over the original and I’ve only released it as a tech demo on Patreon.

For Strawberry Jam (NSFW!) we made fox flux (slightly NSFW!), which felt like a huge milestone: the first game where I made all the art! I mean, not counting Isaac’s Descent, which was for a very limited platform. It’s a pretty arbitrary milestone, yes, but it feels significant. I’ve been working on expanding the game into a longer and slightly less buggy experience, but the art is taking the longest by far. I must’ve spent weeks on player sprites alone.

We then set about working on Bolthaven, a sequel of sorts to NEON PHASE, and got decently far, and then abandoned it. Oops.

We then started a cute little PICO-8 game, and forgot about it. Oops.

I was recruited to help with Chaos Composer, a more ambitious game glip started with someone else in Unity. I had to get used to Unity, and we squabbled a bit, but the game is finally about at the point where it’s “playable” and “maps” can be designed? It’s slightly on hold at the moment while we all finish up some other stuff, though.

We made a birthday game for two of our friends whose birthdays were very close together! Only they got to see it.

For Ludum Dare 38, we made Lunar Depot 38, a little “wave shooter” or whatever you call those? The AI is pretty rough, seeing as this was the first time I’d really made enemies and I had 72 hours to figure out how to do it, but I still think it’s pretty fun to play and I love the circular world.

I made Roguelike Simulator as an experiment with making something small and quick with a simple tool, and I had a lot of fun! I definitely want to do more stuff like this in the future.

And now we’re working on a game about Star Anise, my cat’s self-insert, which is looking to have more polish and depth than anything we’ve done so far! We’ve definitely come a long way in a year.

Somewhere along the line, I put out a call for a “potluck” project, where everyone would give me sprites of a given size without knowing what anyone else had contributed, and I would then make a game using only those sprites. Unfortunately, that stalled a few times: I tried using the Phaser JS library, but we didn’t get along; I tried LÖVE, but didn’t know where to go with the game; and then I decided to use this as an experiment with procedural generation, and didn’t get around to it. I still feel bad that everyone did work for me and I didn’t follow through, but I don’t know whether this will ever become a game.

veekun, alas, consumed months of my life. I finally got Sun and Moon loaded, but it took weeks of work since I was basically reinventing all the tooling we’d ever had from scratch, without even having most of that tooling available as a reference. It was worth it in the end, at least: Ultra Sun and Ultra Moon only took a few days to get loaded. But veekun itself is still missing some obvious Sun/Moon features, and the whole site needs an overhaul, and I just don’t know if I want to dedicate that much time to it when I have so much other stuff going on that’s much more interesting to me right now.

I finally turned my blog into more of a website, giving it a neat front page that lists a bunch of stuff I’ve done. I made a release category at last, though I’m still not quite in the habit of using it.

I wrote some blog posts, of course! I think the most interesting were JavaScript got better while I wasn’t looking and Object models. I was also asked to write a couple pieces for money for a column that then promptly shut down.

On a whim, I made a set of Eevee mugshots for Doom, which I think is a decent indication of my (pixel) art progress over the year?

I started idchoppers, a Doom parsing and manipulation library written in Rust, though it didn’t get very far and I’ve spent most of the time fighting with Rust because it won’t let me implement all my extremely bad ideas. It can do a couple things, at least, like flip maps very quickly and render maps to SVG.

I did toy around with music a little, but not a lot.

I wrote two short twines for Flora. They’re okay. I’m working on another; I think it’ll be better.

I didn’t do a lot of art overall, at least compared to the two previous years; most of my art effort over the year has gone into fox flux, which requires me to learn a whole lot of things. I did dip my toes into 3D modelling, most notably producing my current Twitter banner as well as this cool Star Anise animation. I wouldn’t mind doing more of that; maybe I’ll even try to make a low-poly pixel-textured 3D game sometime.

I restarted my book with a much better concept, though so far I’ve only written about half a chapter. Argh. I see that the vast majority of the work was done within the span of a single week, which is bad since that means I only worked on it for a week, but good since that means I can actually do a pretty good amount of work in only a week. I also did a lot of squabbling with tooling, which is hopefully mostly out of the way now.

My computer broke? That was an exciting week.


A lot of stuff, but the year as a whole still feels hit or miss. All the time I spent on veekun feels like a black void in the middle of the year, which seems like a good sign that I maybe don’t want to pour even more weeks into it in the near future.

Mostly, I want to do: more games, more art, more writing, more music.

I want to try out some tiny game making tools and make some tiny games with them — partly to get exposure to different things, partly to get more little ideas out into the world regularly, and partly to get more practice at letting myself have ideas. I have a couple tools in mind and I guess I’ll aim at a microgame every two months or so? I’d also like to finish the expanded fox flux by the end of the year, of course, though at the moment I can’t even gauge how long it might take.

I seriously lapsed on drawing last year, largely because fox flux pixel art took me so much time. So I want to draw more, and I want to get much faster at pixel art. It would probably help if I had a more concrete goal for drawing, so I might try to draw some short comics and write a little visual novel or something, which would also force me to aim for consistency.

I want to work on my book more, of course, but I also want to try my hand at a bit more fiction. I’ve had a blast writing dialogue for our games! I just shy away from longer-form writing for some reason — which seems ridiculous when a large part of my audience found me through my blog. I do think I’ve had some sort of breakthrough in the last month or two; I suddenly feel a good bit more confident about writing in general and figuring out what I want to say? One recent post I know I wrote in a single afternoon, which virtually never happens because I keep rewriting and rearranging stuff. Again, a visual novel would be a good excuse to practice writing fiction without getting too bogged down in details.

And, ah, music. I shy heavily away from music, since I have no idea what I’m doing, and also I seem to spend a lot of time fighting with tools. (Surprise.) I tried out SunVox for the first time just a few days ago and have been enjoying it quite a bit for making sound effects, so I might try it for music as well. And once again, visual novel background music is a pretty low-pressure thing to compose for. Hell, visual novels are small games, too, so that checks all the boxes. I guess I’ll go make a visual novel.

Here’s to twenty gayteen!

Weekly roundup: AOOOWR

Post Syndicated from Eevee original https://eev.ee/dev/2018/01/09/weekly-roundup-aooowr/

  • anise!!: Work continues! glip is busy with a big Flora update, so I’m left to just do code things in the meantime. I did some refactoring I’d been wanting to do for months (splitting apart the “map” and the “world” and the scene that draws the world), drew some final-ish menu art (it looks so good), switched to a vastly more accurate way to integrate position, added a bunch of transitions that make the game feel way more polished, and drew some pretty slick dialogue boxes. Nice!

    I’ll be continuing to work on this game during GAMES MADE QUICK??? 2.0, my jam for making games while watching AGDQ all week! Maybe join if you’re watching AGDQ all week!

  • art: I tried drawing a picture and this time I liked it. I also drew the header art for the aforementioned game jam, though I didn’t have time to finish it, but I think I pulled off a deliberate-looking scratchy sketchy style that’s appropriate for a game jam? Sure we’ll go with that.

  • blog: I finished a post about picking random numbers and a post about how game physics cheat. Which, ah, catches me up for December! Heck! I think I’ve found a slightly more casual style that feels easier to get down, though?

  • writing: I finally wrangled a sensible outline for a Twine I’ve been dragging my feet on, so now I don’t have any excuses! Oh no!

Playing tic-tac-toe against a Raspberry Pi at Maker Faire

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tic-tac-toe-maker-faire/

At Maker Faire New York, we met up with student Toby Goebeler of Dover High School, Pennsylvania, to learn more about his Tic-Tac-Toe Robot.

Video: Play Tic-Tac-Toe against a Raspberry Pi #MFNYC, uploaded by Raspberry Pi on 2017-12-18.

Tic-tac-toe with Dover Robotics

We came to see Toby and Brian Bahn, physics teacher for Dover High School and leader of the Dover Robotics club, so they could tell us about the inner workings of the Tic-Tac-Toe Robot project, and how the Raspberry Pi fit within it. Check out our video for Toby’s explanation of the build and the software controlling it.

Wooden robotic arm — Toby Goebeler Tic-Tac-Toe arm Raspberry Pi

Toby’s original robotic arm prototype used a weight to direct the pen on and off the paper. He later replaced this with a servo motor.

Toby documented the prototyping process for the robot on the Dover Robotics blog. Head over there to hear more about the highs and lows of building a robotic arm from scratch, and about how Toby learned to integrate a Raspberry Pi for both software and hardware control.

Wooden robotic arm playing tic-tac-toe — Toby Goebeler Tic-Tac-Toe arm Raspberry Pi

The finished build is a tic-tac-toe beast, besting everyone who dares to challenge it to a game.

And in case you’re wondering: no, none of the Raspberry Pi team were able to beat the Tic-Tac-Toe Robot when we played against it.

Your turn

We always love seeing Raspberry Pis being used in schools to teach coding and digital making, whether in the classroom or during after-school activities such as the Dover Robotics club and our own Code Clubs and CoderDojos. If you are part of a coding or robotics club, we’d love to hear your story! So make sure to share your experiences and projects in the comments below, or via our social media accounts.

The post Playing tic-tac-toe against a Raspberry Pi at Maker Faire appeared first on Raspberry Pi.

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the data in an Amazon S3 bucket.
  • Query data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, Amazon Redshift (with Redshift Spectrum), and Amazon QuickSight.

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes in data in an Aurora database table and save them in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

Create Table
CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the connection variables (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with appropriate values.

#!/usr/bin/python
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

for i in range(0,10):
    item_id = random.randint(1,100)
    state = states[random.randint(0,len(states)-1)]
    shipping_type = shipping_types[random.randint(0,len(shipping_types)-1)]
    product_category = product_categories[random.randint(0,len(product_categories)-1)]
    quantity = random.randint(1,4)
    referral = referrals[random.randint(0,len(referrals)-1)]
    price = random.randint(1,100)
    order_date = datetime.date(2016,random.randint(1,12),random.randint(1,28)).isoformat()  # cap the day at 28 so every month produces a valid date

    data_order = (item_id, product_category, price, quantity, order_date, state,
    shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                   "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, \
                   ShippingType, Referral) "
                   "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor = db.cursor()
    cursor.execute(add_order, data_order)

    db.commit()

cursor.close()
db.close() 

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream:

  1. Open the Kinesis Data Firehose console
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
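
If you prefer to script this step instead of clicking through the console, a rough boto3 equivalent might look like the following (the role ARN and bucket ARN are placeholders you would replace with your own):

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='AuroraChangesToS3',
    DeliveryStreamType='DirectPut',
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::123456789012:role/firehose-s3-role',  # placeholder
        'BucketARN': 'arn:aws:s3:::my-cdc-bucket',                     # placeholder
        'Prefix': 'CDC/',               # matches the LOCATION used later for the external table
        'CompressionFormat': 'GZIP',
    },
)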


Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name and select Python 3.6 for Runtime
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3
import json

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'  # the Kinesis Data Firehose delivery stream created earlier


def Kinesis_publish_message(event, context):
    
    firehose_data = (("%s,%s,%s,%s,%s,%s,%s,%s\n") %(event['ItemID'], 
    event['Category'], event['Price'], event['Quantity'],
    event['OrderDate'], event['DestinationState'], event['ShippingType'], 
    event['Referral']))
    
    firehose_data = {'Data': str(firehose_data)}
    print(firehose_data)
    
    firehose.put_record(DeliveryStreamName=stream_name,
    Record=firehose_data)

Note the Amazon Resource Name (ARN) of this Lambda function.

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
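
The linked documentation covers the console steps in detail; as one possible scripted shortcut (assuming you have already created an IAM role that allows lambda:InvokeFunction; the identifiers below are placeholders), the role can be attached to the cluster with boto3:

import boto3

rds = boto3.client('rds')

# Associate an IAM role (which must allow lambda:InvokeFunction) with the Aurora cluster.
rds.add_role_to_db_cluster(
    DBClusterIdentifier='my-aurora-cluster',                           # placeholder
    RoleArn='arn:aws:iam::123456789012:role/aurora-invoke-lambda',     # placeholder
)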

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.
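
One quick way to spot-check that objects are landing in the bucket is to list the prefix that Kinesis Data Firehose writes to (a sketch; the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

# List whatever Kinesis Data Firehose has delivered under the CDC/ prefix so far.
resp = s3.list_objects_v2(Bucket='my-cdc-bucket', Prefix='CDC/')
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['Size'])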

Every time a new record is inserted in the Sales table, the trigger calls the stored procedure, which pushes the new row through Lambda and Kinesis Data Firehose into Amazon S3.

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other formats and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales


Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension table populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price*Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can also be activated in response to an INSERT, UPDATE, or DELETE statement.

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic. This would result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.


Random with care

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/02/random-with-care/

Hi! Here are a few loose thoughts about picking random numbers.

A word about crypto

DON’T ROLL YOUR OWN CRYPTO

This is all aimed at frivolous pursuits like video games. Hell, even video games where money is at stake should be deferring to someone who knows way more than I do. Otherwise you might find out that your deck shuffles in your poker game are woefully inadequate and some smartass is cheating you out of millions. (If your random number generator has fewer than 226 bits of state, it can’t even generate every possible shuffling of a deck of cards!)

Use the right distribution

Most languages have a random number primitive that spits out a number uniformly in the range [0, 1), and you can go pretty far with just that. But beware a few traps!

Random pitches

Say you want to pitch up a sound by a random amount, perhaps up to an octave. Your audio API probably has a way to do this that takes a pitch multiplier, where I say “probably” because that’s how the only audio API I’ve used works.

Easy peasy. If 1 is unchanged and 2 is pitched up by an octave, then all you need is rand() + 1. Right?

No! Pitch is exponential — within the same octave, the “gap” between C and C♯ is about half as big as the gap between B and the following C. If you pick a pitch multiplier uniformly, you’ll have a noticeable bias towards the higher pitches.

One octave corresponds to a doubling of pitch, so if you want to pick a random note, you want 2 ** rand().
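
In Python that is a one-liner; a tiny sketch of mine, not from the post:

import random

def random_pitch_multiplier():
    # Uniform in pitch, up to one octave: the multiplier ranges over [1, 2).
    return 2 ** random.random()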

Random directions

For two dimensions, you can just pick a random angle with rand() * TAU.

If you want a vector rather than an angle, or if you want a random direction in three dimensions, it’s a little trickier. You might be tempted to just pick a random point where each component is rand() * 2 - 1 (ranging from −1 to 1), but that’s not quite right. A direction is a point on the surface (or, equivalently, within the volume) of a sphere, and picking each component independently produces a point within the volume of a cube; the result will be a bias towards the corners of the cube, where there’s much more extra volume beyond the sphere.

No? Well, just trust me. I don’t know how to make a diagram for this.

Anyway, you could use the Pythagorean theorem a few times and make a huge mess of things, or it turns out there’s a really easy way that even works for two or four or any number of dimensions. You pick each coordinate from a Gaussian (normal) distribution, then normalize the resulting vector. In other words, using Python’s random module:

import math
import random

def random_direction():
    x = random.gauss(0, 1)
    y = random.gauss(0, 1)
    z = random.gauss(0, 1)
    r = math.sqrt(x*x + y*y + z*z)
    return x/r, y/r, z/r

Why does this work? I have no idea!

Note that it is possible to get zero (or close to it) for every component, in which case the result is nonsense. You can re-roll all the components if necessary; just check that the magnitude (or its square) is less than some epsilon, which is equivalent to throwing away a tiny sphere at the center and shouldn’t affect the distribution.
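
One way that re-roll might look, as a sketch; the epsilon threshold here is an arbitrary choice:

import math
import random

def random_direction(eps=1e-12):
    while True:
        x, y, z = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
        length_sq = x*x + y*y + z*z
        # Re-roll if the vector is vanishingly short; this only discards a tiny
        # sphere around the origin, so the distribution of directions is unaffected.
        if length_sq > eps:
            r = math.sqrt(length_sq)
            return x/r, y/r, z/r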

Beware Gauss

Since I brought it up: the Gaussian distribution is a pretty nice one for choosing things in some range, where the middle is the common case and should appear more frequently.

That said, I never use it, because it has one annoying drawback: the Gaussian distribution has no minimum or maximum value, so you can’t really scale it down to the range you want. In theory, you might get any value out of it, with no limit on scale.

In practice, it’s astronomically rare to actually get such a value out. I did a hundred million trials just to see what would happen, and the largest value produced was 5.8.

But, still, I’d rather not knowingly put extremely rare corner cases in my code if I can at all avoid it. I could clamp the ends, but that would cause unnatural bunching at the endpoints. I could reroll if I got a value outside some desired range, but I prefer to avoid rerolling when I can, too; after all, it’s still (astronomically) possible to have to reroll for an indefinite amount of time. (Okay, it’s really not, since you’ll eventually hit the period of your PRNG. Still, though.) I don’t bend over backwards here — I did just say to reroll when picking a random direction, after all — but when there’s a nicer alternative I’ll gladly use it.

And lo, there is a nicer alternative! Enter the beta distribution. It always spits out a number in [0, 1], so you can easily swap it in for the standard normal function, but it takes two “shape” parameters α and β that alter its behavior fairly dramatically.

With α = β = 1, the beta distribution is uniform, i.e. no different from rand(). As α increases, the distribution skews towards the right, and as β increases, the distribution skews towards the left. If α = β, the whole thing is symmetric with a hump in the middle. The higher either one gets, the more extreme the hump (meaning that value is far more common than any other). With a little fiddling, you can get a number of interesting curves.

Screenshots don’t really do it justice, so here’s a little Wolfram widget that lets you play with α and β live:

Note that if α = 1, then 0 is a possible value; if β = 1, then 1 is a possible value. You probably want them both greater than 1, which clamps the endpoints to zero.

Also, it’s possible to have either α or β or both be less than 1, but this creates very different behavior: the corresponding endpoints become poles.

Anyway, something like α = β = 3 is probably close enough to normal for most purposes but already clamped for you. And you could easily replicate something like, say, NetHack’s incredibly bizarre rnz function.
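
Python's standard library happens to ship this as random.betavariate, so a clamped drop-in for a normal-ish roll might look like:

import random

def normalish_in_range(lo, hi, alpha=3.0, beta=3.0):
    # betavariate always returns a value in [0, 1], so scaling it to [lo, hi]
    # can never produce an outlier, unlike scaling a Gaussian.
    return lo + (hi - lo) * random.betavariate(alpha, beta)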

Random frequency

Say you want some event to have an 80% chance to happen every second. You (who am I kidding, I) might be tempted to do something like this:

if random() < 0.8 * dt:
    do_thing()

In an ideal world, dt is always the same and is equal to 1 / f, where f is the framerate. Replace that 80% with a variable, say P, and every tic you have a P / f chance to do the… whatever it is.

Each second, f tics pass, so you’ll make this check f times. The chance that any check succeeds is the complement of the chance that every check fails, which is \(1 - \left(1 - \frac{P}{f}\right)^f\).

For P of 80% and a framerate of 60, that’s a total probability of 55.3%. Wait, what?

Consider what happens if the framerate is 2. On the first tic, you roll 0.4 twice — but probabilities are combined by multiplying, and splitting work up by dt only works for additive quantities. You lose some accuracy along the way. If you’re dealing with something that multiplies, you need an exponent somewhere.
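
If you do want a framerate-independent per-tic roll that preserves the overall chance, the exponent goes something like this (a sketch, assuming dt is measured in seconds):

def per_tic_chance(P, dt):
    # The chance of *not* firing over a full second should be (1 - P), so the
    # chance of not firing during a tic of length dt is (1 - P) ** dt.
    return 1 - (1 - P) ** dt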

But in this case, maybe you don’t want that at all. Each separate roll you make might independently succeed, so it’s possible (but very unlikely) that the event will happen 60 times within a single second! Or 200 times, if that’s someone’s framerate.

If you explicitly want something to have a chance to happen on a specific interval, you have to check on that interval. If you don’t have a gizmo handy to run code on an interval, it’s easy to do yourself with a time buffer:

timer += dt
# here, 1 is the "every 1 seconds"
while timer > 1:
    timer -= 1
    if random() < 0.8:
        do_thing()

Using while means rolls still happen even if you somehow skipped over an entire second.

(For the curious, and the nerds who already noticed: the expression \(1 - \left(1 - \frac{P}{f}\right)^f\) converges to a specific value! As the framerate increases, it becomes a better and better approximation for \(1 - e^{-P}\), which for the example above is 0.551. Hey, 60 fps is pretty accurate — it’s just accurately representing something nowhere near what I wanted. Er, you wanted.)

Rolling your own

Of course, you can fuss with the classic [0, 1] uniform value however you want. If I want a bias towards zero, I’ll often just square it, or multiply two of them together. If I want a bias towards one, I’ll take a square root. If I want something like a Gaussian/normal distribution, but with clearly-defined endpoints, I might add together n rolls and divide by n. (The normal distribution is just what you get if you roll infinite dice and divide by infinity!)
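
In Python, those quick-and-dirty tweaks might look like:

import random

def bias_towards_zero():
    # Squaring pushes the roll towards 0.
    return random.random() ** 2

def bias_towards_one():
    # A square root pushes the roll towards 1.
    return random.random() ** 0.5

def normalish(n=4):
    # Averaging n rolls gives a bell-ish shape with hard endpoints at 0 and 1.
    return sum(random.random() for _ in range(n)) / n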

It’d be nice to be able to understand exactly what this will do to the distribution. Unfortunately, that requires some calculus, which this post is too small to contain, and which I didn’t even know much about myself until I went down a deep rabbit hole while writing, and which in many cases is straight up impossible to express directly.

Here’s the non-calculus bit. A source of randomness is often graphed as a PDF — a probability density function. You’ve almost certainly seen a bell curve graphed, and that’s a PDF. They’re pretty nice, since they do exactly what they look like: they show the relative chance that any given value will pop out. On a bog standard bell curve, there’s a peak at zero, and of course zero is the most common result from a normal distribution.

(Okay, actually, since the results are continuous, it’s vanishingly unlikely that you’ll get exactly zero — but you’re much more likely to get a value near zero than near any other number.)

For the uniform distribution, which is what a classic rand() gives you, the PDF is just a straight horizontal line — every result is equally likely.


If there were a calculus bit, it would go here! Instead, we can cheat. Sometimes. Mathematica knows how to work with probability distributions in the abstract, and there’s a free web version you can use. For the example of squaring a uniform variable, try this out:

PDF[TransformedDistribution[u^2, u \[Distributed] UniformDistribution[{0, 1}]], u]

(The \[Distributed] is a funny tilde that doesn’t exist in Unicode, but which Mathematica uses as a first-class operator. Also, press Shift+Enter to evaluate the line.)

This will tell you that the distribution is… \(\frac{1}{2\sqrt{u}}\). Weird! You can plot it:

Plot[%, {u, 0, 1}]

(The % refers to the result of the last thing you did, so if you want to try several of these, you can just do Plot[PDF[…], u] directly.)

The resulting graph shows that numbers around zero are, in fact, vastly — infinitely — more likely than anything else.

What about multiplying two together? I can’t figure out how to get Mathematica to understand this, but a great amount of digging revealed that the answer is -ln x, and from there you can plot them both on Wolfram Alpha. They’re similar, though squaring has a much better chance of giving you high numbers than multiplying two separate rolls — which makes some sense, since if either of two rolls is a low number, the product will be even lower.

What if you know the graph you want, and you want to figure out how to play with a uniform roll to get it? Good news! That’s a whole thing called inverse transform sampling. All you have to do is take an integral. Good luck!


This is all extremely ridiculous. New tactic: Just Simulate The Damn Thing. You already have the code; run it a million times, make a histogram, and tada, there’s your PDF. That’s one of the great things about computers! Brute-force numerical answers are easy to come by, so there’s no excuse for producing something like rnz. (Though, be sure your histogram has sufficiently narrow buckets — I tried plotting one for rnz once and the weird stuff on the left side didn’t show up at all!)
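
A rough sketch of that brute-force approach, assuming the thing being sampled spits out values in [0, 1):

import random
from collections import Counter

def plot_pdf(sample, trials=1_000_000, buckets=50, width=60):
    # Bucket a pile of samples and draw a text histogram of the result.
    counts = Counter(min(int(sample() * buckets), buckets - 1) for _ in range(trials))
    peak = max(counts.values())
    for b in range(buckets):
        print(f"{b / buckets:4.2f} | {'#' * (counts[b] * width // peak)}")

plot_pdf(lambda: random.random() ** 2)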

By the way, I learned something from futzing with Mathematica here! Taking the square root (to bias towards 1) gives a PDF that’s a straight diagonal line, nothing like the hyperbola you get from squaring (to bias towards 0). How do you get a straight line the other way? Surprise: \(1 - \sqrt{1 - u}\).

Okay, okay, here’s the actual math

I don’t claim to have a very firm grasp on this, but I had a hell of a time finding it written out clearly, so I might as well write it down as best I can. This was a great excuse to finally set up MathJax, too.

Say \(u(x)\) is the PDF of the original distribution and \(u\) is a representative number you plucked from that distribution. For the uniform distribution, \(u(x) = 1\). Or, more accurately,

$$
u(x) = \begin{cases}
1 & \text{ if } 0 \le x \lt 1 \\
0 & \text{ otherwise }
\end{cases}
$$

Remember that \(x\) here is a possible outcome you want to know about, and the PDF tells you the relative probability that a roll will be near it. This PDF spits out 1 for every \(x\), meaning every number between 0 and 1 is equally likely to appear.

We want to do something to that PDF, which creates a new distribution, whose PDF we want to know. I’ll use my original example of \(f(u) = u^2\), which creates a new PDF \(v(x)\).

The trick is that we need to work in terms of the cumulative distribution function for \(u\). Where the PDF gives the relative chance that a roll will be (“near”) a specific value, the CDF gives the relative chance that a roll will be less than a specific value.

The conventions for this seem to be a bit fuzzy, and nobody bothers to explain which ones they’re using, which makes this all the more confusing to read about… but let’s write the CDF with a capital letter, so we have \(U(x)\). In this case, \(U(x) = x\), a straight 45° line (at least between 0 and 1). With the definition I gave, this should make sense. At some arbitrary point like 0.4, the value of the PDF is 1 (0.4 is just as likely as anything else), and the value of the CDF is 0.4 (you have a 40% chance of getting a number from 0 to 0.4).

Calculus ahoy: the PDF is the derivative of the CDF, which means it measures the slope of the CDF at any point. For \(U(x) = x\), the slope is always 1, and indeed \(u(x) = 1\). See, calculus is easy.

Okay, so, now we’re getting somewhere. What we want is the CDF of our new distribution, \(V(x)\). The CDF is defined as the probability that a roll \(v\) will be less than \(x\), so we can literally write:

$$V(x) = P(v \le x)$$

(This is why we have to work with CDFs, rather than PDFs — a PDF gives the chance that a roll will be “nearby,” whatever that means. A CDF is much more concrete.)

What is \(v\), exactly? We defined it ourselves; it’s the do something applied to a roll from the original distribution, or \(f(u)\).

$$V(x) = P\!\left(f(u) \le x\right)$$

Now the first tricky part: we have to solve that inequality for \(u\), which means we have to do something, backwards to \(x\).

$$V(x) = P\!\left(u \le f^{-1}(x)\right)$$

Almost there! We now have a probability that \(u\) is less than some value, and that’s the definition of a CDF!

$$V(x) = U\!\left(f^{-1}(x)\right)$$

Hooray! Now to turn these CDFs back into PDFs, all we need to do is differentiate both sides and use the chain rule. If you never took calculus, don’t worry too much about what that means!

$$v(x) = u\!\left(f^{-1}(x)\right)\left|\frac{d}{dx}f^{-1}(x)\right|$$

Wait! Where did that absolute value come from? It takes care of whether \(f(x)\) increases or decreases. It’s the least interesting part here by far, so, whatever.

There’s one more magical part here when using the uniform distribution — \(u(\dots)\) is always equal to 1, so that entire term disappears! (Note that this only works for a uniform distribution with a width of 1; PDFs are scaled so the entire area under them sums to 1, so if you had a rand() that could spit out a number between 0 and 2, the PDF would be \(u(x) = \frac{1}{2}\).)

$$v(x) = \left|\frac{d}{dx}f^{-1}(x)\right|$$

So for the specific case of modifying the output of rand(), all we have to do is invert, then differentiate. The inverse of \(f(u) = u^2\) is \(f^{-1}(x) = \sqrt{x}\) (no need for a ± since we’re only dealing with positive numbers), and differentiating that gives \(v(x) = \frac{1}{2\sqrt{x}}\). Done! This is also why square root comes out nicer; inverting it gives \(x^2\), and differentiating that gives \(2x\), a straight line.

Incidentally, that method for turning a uniform distribution into any distribution — inverse transform sampling — is pretty much the same thing in reverse: integrate, then invert. For example, when I saw that taking the square root gave \(v(x) = 2x\), I naturally wondered how to get a straight line going the other way, \(v(x) = 2 - 2x\). Integrating that gives \(2x - x^2\), and then you can use the quadratic formula (or just ask Wolfram Alpha) to solve \(2x - x^2 = u\) for \(x\) and get \(f(u) = 1 - \sqrt{1 - u}\).

Multiplying two rolls together is a bit more complicated; you have to write out the CDF as an integral and you end up doing a double integral and wow it’s a mess. The only thing I’ve retained is that you do a division somewhere, which then gets integrated, and that’s why it ends up as \(-\ln x\).

And that’s quite enough of that! (Okay but having math in my blog is pretty cool and I will definitely be doing more of this, sorry, not sorry.)

Random vs varied

Sometimes, random isn’t actually what you want. We tend to use the word “random” casually to mean something more like chaotic, i.e., with no discernible pattern. But that’s not really random. In fact, given how good humans can be at finding incidental patterns, they aren’t all that unlikely! Consider that when you roll two dice, they’ll come up either the same or only one apart almost half the time. Coincidence? Well, yes.

If you ask for randomness, you’re saying that any outcome — or series of outcomes — is acceptable, including five heads in a row or five tails in a row. Most of the time, that’s fine. Some of the time, it’s less fine, and what you really want is variety. Here are a couple examples and some fairly easy workarounds.

NPC quips

The nature of games is such that NPCs will eventually run out of things to say, at which point further conversation will give the player a short brush-off quip — a slight nod from the designer to the player that, hey, you hit the end of the script.

Some NPCs have multiple possible quips and will give one at random. The trouble with this is that it’s very possible for an NPC to repeat the same quip several times in a row before abruptly switching to another one. With only a few options to choose from, getting the same option twice or thrice (especially across an entire game, which may have numerous NPCs) isn’t all that unlikely. The notion of an NPC quip isn’t very realistic to start with, but having someone repeat themselves and then abruptly switch to something else is especially jarring.

The easy fix is to show the quips in order! Paradoxically, this is more consistently varied than choosing at random — the original “order” is likely to be meaningless anyway, and it already has the property that the same quip can never appear twice in a row.

If you like, you can shuffle the list of quips every time you reach the end, but take care here — it’s possible that the last quip in the old order will be the same as the first quip in the new order, so you may still get a repeat. (Of course, you can just check for this case and swap the first quip somewhere else if it bothers you.)
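
A sketch of that shuffled-list approach, including the swap to avoid a repeat across reshuffles (the class name is my own):

import random

class ShuffleBag:
    def __init__(self, items):
        self.items = list(items)
        self.queue = []
        self.last = None

    def next(self):
        if not self.queue:
            # random.sample with the full length returns a shuffled copy.
            self.queue = random.sample(self.items, len(self.items))
            # If the next item up would repeat the last one handed out, swap it away.
            if len(self.queue) > 1 and self.queue[-1] == self.last:
                self.queue[0], self.queue[-1] = self.queue[-1], self.queue[0]
        self.last = self.queue.pop()
        return self.last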

That last behavior is, in fact, the canonical way that Tetris chooses pieces — the game simply shuffles a list of all 7 pieces, gives those to you in shuffled order, then shuffles them again to make a new list once it’s exhausted. There’s no avoidance of duplicates, though, so you can still get two S blocks in a row, or even two S and two Z all clumped together, but no more than that. Some Tetris variants take other approaches, such as actively avoiding repeats even several pieces apart or deliberately giving you the worst piece possible.

Random drops

Random drops are often implemented as a flat chance each time. Maybe enemies have a 5% chance to drop health when they die. Legally speaking, over the long term, a player will see health drops for about 5% of enemy kills.

Over the short term, they may be desperate for health and not survive to see the long term. So you may want to put a thumb on the scale sometimes. Games in the Metroid series, for example, have a somewhat infamous bias towards whatever kind of drop they think you need — health if your health is low, missiles if your missiles are low.

I can’t give you an exact approach to use, since it depends on the game and the feeling you’re going for and the variables at your disposal. In extreme cases, you might want to guarantee a health drop from a tough enemy when the player is critically low on health. (Or if you’re feeling particularly evil, you could go the other way and deny the player health when they most need it…)

The problem becomes a little different, and worse, when the event that triggers the drop is relatively rare. The pathological case here would be something like a raid boss in World of Warcraft, which requires hours of effort from a coordinated group of people to defeat, and which has some tiny chance of dropping a good item that will go to only one of those people. This is why I stopped playing World of Warcraft at 60.

Dialing it back a little bit gives us Enter the Gungeon, a roguelike where each room is a set of encounters and each floor only has a dozen or so rooms. Initially, you have a 1% chance of getting a reward after completing a room — but every time you complete a room and don’t get a reward, the chance increases by 9%, up to a cap of 80%. Once you get a reward, the chance resets to 1%.

The natural question is: how frequently, exactly, can a player expect to get a reward? We could do math, or we could Just Simulate The Damn Thing.

from collections import Counter
import random

histogram = Counter()

TRIALS = 1000000
chance = 1
rooms_cleared = 0
rewards_found = 0
while rewards_found < TRIALS:
    rooms_cleared += 1
    if random.random() * 100 < chance:
        # Reward!
        rewards_found += 1
        histogram[rooms_cleared] += 1
        rooms_cleared = 0
        chance = 1
    else:
        chance = min(80, chance + 9)

for gaps, count in sorted(histogram.items()):
    print(f"{gaps:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))
  1 |   0.98%
  2 |   9.91% #########
  3 |  17.00% ################
  4 |  20.23% ####################
  5 |  19.21% ###################
  6 |  15.05% ###############
  7 |   9.69% #########
  8 |   5.07% #####
  9 |   2.09% ##
 10 |   0.63%
 11 |   0.12%
 12 |   0.03%
 13 |   0.00%
 14 |   0.00%
 15 |   0.00%

We’ve got kind of a hilly distribution, skewed to the left, which is up in this histogram. Most of the time, a player should see a reward every three to six rooms, which is maybe twice per floor. It’s vanishingly unlikely to go through a dozen rooms without ever seeing a reward, so a player should see at least one per floor.

Of course, this simulated a single continuous playthrough; when starting the game from scratch, your chance at a reward always starts fresh at 1%, the worst it can be. If you want to know about how many rewards a player will get on the first floor, hey, Just Simulate The Damn Thing.
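
The original code for that second simulation isn’t shown, so here’s a hypothetical sketch following the same pattern as above, assuming exactly 12 rooms per floor:

import random
from collections import Counter

TRIALS = 1_000_000
histogram = Counter()

for _ in range(TRIALS):
    chance = 1
    rewards = 0
    for _ in range(12):
        if random.random() * 100 < chance:
            rewards += 1
            chance = 1
        else:
            chance = min(80, chance + 9)
    histogram[rewards] += 1

for rewards, count in sorted(histogram.items()):
    print(f"{rewards:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))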

  0 |   0.01%
  1 |  13.01% #############
  2 |  56.28% ########################################################
  3 |  27.49% ###########################
  4 |   3.10% ###
  5 |   0.11%
  6 |   0.00%

Cool. Though, that’s assuming exactly 12 rooms; it might be worth changing that to pick at random in a way that matches the level generator.

(Enter the Gungeon does some other things to skew probability, which is very nice in a roguelike where blind luck can make or break you. For example, if you kill a boss without having gotten a new gun anywhere else on the floor, the boss is guaranteed to drop a gun.)

Critical hits

I suppose this is the same problem as random drops, but backwards.

Say you have a battle sim where every attack has a 6% chance to land a devastating critical hit. Presumably the same rules apply to both the player and the AI opponents.

Consider, then, that the AI opponents have exactly the same 6% chance to ruin the player’s day. Consider also that this gives them a 0.4% chance to critical hit twice in a row. 0.4% doesn’t sound like much, but across an entire playthrough, it’s not unlikely that a player might see it happen and find it incredibly annoying.

Perhaps it would be worthwhile to explicitly forbid AI opponents from getting consecutive critical hits.
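
That could be as simple as remembering the last roll; a sketch, with a hypothetical 6% base chance:

import random

def roll_enemy_crit(state, chance=0.06):
    # Never allow two critical hits in a row from the AI side.
    crit = random.random() < chance and not state.get('last_was_crit', False)
    state['last_was_crit'] = crit
    return crit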

In conclusion

An emerging theme here has been to Just Simulate The Damn Thing. So consider Just Simulating The Damn Thing. Even a simple change to a random value can do surprising things to the resulting distribution, so unless you feel like differentiating the inverse function of your code, maybe test out any non-trivial behavior and make sure it’s what you wanted. Probability is hard to reason about.

Thank you for my new Raspberry Pi, Santa! What next?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/thank-you-for-my-new-raspberry-pi-santa-what-next/

Note: the Pi Towers team have peeled away from their desks to spend time with their families over the festive season, and this blog will be quiet for a while as a result. We’ll be back in the New Year with a bushel of amazing projects, awesome resources, and much merriment and fun times. Happy holidays to all!

Now back to the matter at hand. Your brand new Christmas Raspberry Pi.

Your new Raspberry Pi

Did you wake up this morning to find a new Raspberry Pi under the tree? Congratulations, and welcome to the Raspberry Pi community! You’re one of us now, and we’re happy to have you on board.

But what if you’ve never seen a Raspberry Pi before? What are you supposed to do with it? What’s all the fuss about, and why does your new computer look so naked?

Setting up your Raspberry Pi

Are you comfy? Good. Then let us begin.

Download our free operating system

First of all, you need to make sure you have an operating system on your micro SD card: we suggest Raspbian, the Raspberry Pi Foundation’s official supported operating system. If your Pi is part of a starter kit, you might find that it comes with a micro SD card that already has Raspbian preinstalled. If not, you can download Raspbian for free from our website.

An easy way to get Raspbian onto your SD card is to use a free tool called Etcher. Watch The MagPi’s Lucy Hattersley show you what you need to do. You can also use NOOBS to install Raspbian on your SD card, and our Getting Started guide explains how to do that.

Plug it in and turn it on

Your new Raspberry Pi 3 comes with four USB ports and an HDMI port. These allow you to plug in a keyboard, a mouse, and a television or monitor. If you have a Raspberry Pi Zero, you may need adapters to connect your devices to its micro USB and micro HDMI ports. Both the Raspberry Pi 3 and the Raspberry Pi Zero W have onboard wireless LAN, so you can connect to your home network, and you can also plug an Ethernet cable into the Pi 3.

Make sure to plug the power cable in last. There’s no ‘on’ switch, so your Pi will turn on as soon as you connect the power. Raspberry Pi uses a micro USB power supply, so you can use a phone charger if you didn’t receive one as part of a kit.

Learn with our free projects

If you’ve never used a Raspberry Pi before, or you’re new to the world of coding, the best place to start is our projects site. It’s packed with free projects that will guide you through the basics of coding and digital making. You can create projects right on your screen using Scratch and Python, connect a speaker to make music with Sonic Pi, and upgrade your skills to physical making using items from around your house.

Here’s James to show you how to build a whoopee cushion using a Raspberry Pi, paper plates, tin foil and a sponge:

Whoopee cushion PRANK with a Raspberry Pi: HOW-TO


Diving deeper

You’ve plundered our projects, you’ve successfully rigged every chair in the house to make rude noises, and now you want to dive deeper into digital making. Good! While you’re digesting your Christmas dinner, take a moment to skim through the Raspberry Pi blog for inspiration. You’ll find projects from across our worldwide community, with everything from home automation projects and retrofit upgrades, to robots, gaming systems, and cameras.

You’ll also find bucketloads of ideas in The MagPi magazine, the official monthly Raspberry Pi publication, available in both print and digital format. You can download every issue for free. If you subscribe, you’ll get a Raspberry Pi Zero W to add to your new collection. HackSpace magazine is another fantastic place to turn for Raspberry Pi projects, along with other maker projects and tutorials.

And, of course, simply typing “Raspberry Pi projects” into your preferred search engine will find thousands of ideas. Sites like Hackster, Hackaday, Instructables, Pimoroni, and Adafruit all have plenty of fab Raspberry Pi tutorials that they’ve devised themselves and that community members like you have created.

And finally

If you make something marvellous with your new Raspberry Pi – and we know you will – don’t forget to share it with us! Our Twitter, Facebook, Instagram and Google+ accounts are brimming with chatter, projects, and events. And our forums are a great place to visit if you have questions about your Raspberry Pi or if you need some help.

It’s good to get together with like-minded folks, so check out the growing Raspberry Jam movement. Raspberry Jams are community-run events where makers and enthusiasts can meet other makers, show off their projects, and join in with workshops and discussions. Find your nearest Jam here.

Have a great festive holiday and welcome to the community. We’ll see you in 2018!

The post Thank you for my new Raspberry Pi, Santa! What next? appeared first on Raspberry Pi.

[$] Demystifying container runtimes

Post Syndicated from jake original https://lwn.net/Articles/741897/rss

As we briefly mentioned in our overview article about KubeCon + CloudNativeCon, there are multiple container “runtimes”, which are programs that can create and execute containers that are typically fetched from online images. That space is slowly reaching maturity both in terms of standards and implementation: Docker’s containerd 1.0 was released during KubeCon, CRI-O 1.0 was released a few months ago, and rkt is also still in the game. With all of those runtimes, it may be a confusing time for those looking at deploying their own container-based system or Kubernetes cluster from scratch. This article will try to explain what container runtimes are, what they do, how they compare with each other, and how to choose the right one. It also provides a primer on container specifications and standards.