Tag Archives: making things

Working with the Scout Association on digital skills for life

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/working-with-scout-association-digital-skills-for-life/

Today we’re launching a new partnership between the Scouts and the Raspberry Pi Foundation that will help tens of thousands of young people learn crucial digital skills for life. In this blog post, I want to explain what we’ve got planned, why it matters, and how you can get involved.

This is personal

First, let me tell you why this partnership matters to me. As a child growing up in North Wales in the 1980s, Scouting changed my life. My time with 2nd Rhyl provided me with countless opportunities to grow and develop new skills. It taught me about teamwork and community in ways that continue to shape my decisions today.

As my own kids (now seven and ten) have joined Scouting, I’ve seen the same opportunities opening up for them, and like so many parents, I’ve come back to the movement as a volunteer to support their local section. So this is deeply personal for me, and the same is true for many of my colleagues at the Raspberry Pi Foundation who in different ways have been part of the Scouting movement.

That shouldn’t come as a surprise. Scouting and Raspberry Pi share many of the same values. We are both community-led movements that aim to help young people develop the skills they need for life. We are both powered by an amazing army of volunteers who give their time to support that mission. We both care about inclusiveness, and pride ourselves on combining fun with learning by doing.

Raspberry Pi

Raspberry Pi started life in 2008 as a response to the problem that too many young people were growing up without the skills to create with technology. Our goal is that everyone should be able to harness the power of computing and digital technologies, for work, to solve problems that matter to them, and to express themselves creatively.

In 2012 we launched our first product, the world’s first $35 computer. Just six years on, we have sold over 20 million Raspberry Pi computers and helped kickstart a global movement for digital skills.

The Raspberry Pi Foundation now runs the world’s largest network of volunteer-led computing clubs (Code Clubs and CoderDojos), and creates free educational resources that are used by millions of young people all over the world to learn how to create with digital technologies. Much of what we are able to achieve is thanks to partnerships with fantastic organisations that share our goals. For example, through our partnership with the European Space Agency, thousands of young people have written code that has run on two Raspberry Pi computers that Tim Peake took to the International Space Station as part of his Principia mission.

Digital makers

Today we’re launching the new Digital Maker Staged Activity Badge to help tens of thousands of young people learn how to create with technology through Scouting. Over the past few months, we’ve been working with the Scouts all over the UK to develop and test the new badge requirements, along with guidance, project ideas, and resources that really make them work for Scouting. We know that we need to get two things right: relevance and accessibility.

Relevance is all about making sure that the activities and resources we provide are a really good fit for Scouting and Scouting’s mission to equip young people with skills for life. From the digital compass to nature cameras and the reinvented wide game, we’ve had a lot of fun thinking about ways we can bring to life the crucial role that digital technologies can play in the outdoors and adventure.

Compass Coding with Raspberry Pi

We are beyond excited to be launching a new partnership with the Raspberry Pi Foundation, which will help tens of thousands of young people learn digital skills for life.

We also know that there are great opportunities for Scouts to use digital technologies to solve social problems in their communities, reflecting the movement’s commitment to social action. Today we’re launching the first set of project ideas and resources, with many more to follow over the coming weeks and months.

Accessibility is about providing every Scout leader with the confidence, support, and kit to enable them to offer the Digital Maker Staged Activity Badge to their young people. A lot of work and care has gone into designing activities that require very little equipment: for example, activities at Stages 1 and 2 can be completed with a laptop without access to the internet. For the activities that do require kit, we will be working with Scout Stores and districts to make low-cost kit available to buy or loan.

We’re producing accessible instructions, worksheets, and videos to help leaders run sessions with confidence, and we’ll also be planning training for leaders. We will work with our network of Code Clubs and CoderDojos to connect them with local sections to organise joint activities, bringing both kit and expertise along with them.




Get involved

Today’s launch is just the start. We’ll be developing our partnership over the next few years, and we can’t wait for you to join us in getting more young people making things with technology.

Take a look at the brand-new Raspberry Pi resources designed especially for Scouts, to get young people making and creating right away.

The post Working with the Scout Association on digital skills for life appeared first on Raspberry Pi.

Take home Mugsy, the Raspberry Pi coffee robot

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/mugsy/

We love Mugsy, the Raspberry Pi coffee robot that has smashed its crowdfunding goal within days! Our latest YouTube video shows our catch-up with Mugsy and its creator Matthew Oswald at Maker Faire New York last year.

MUGSY THE RASPBERRY PI COFFEE ROBOT #MFNYC

Uploaded by Raspberry Pi on 2018-03-22.

Mugsy

Labelled ‘the world’s first hackable, customisable, dead simple, robotic coffee maker’, Mugsy allows you to take control of every aspect of the coffee-making process: from grind size and water temperature, to brew and bloom time. Feeling lazy instead? Read in your beans’ barcode via an onboard scanner, and it will automatically use the best settings for your brew.

Mugsy Raspberry Pi Coffee Robot

Looking to start your day with your favourite coffee straight out of bed? Send the robot a text, email, or tweet, and it will notify you when your coffee is ready!

Learning through product development

“Initially, I used [Mugsy] as a way to teach myself hardware design,” explained Matthew at his Editor’s Choice–winning Maker Faire stand. “I really wanted to hold something tangible in my hands. By using the Raspberry Pi and just being curious, anytime I wanted to use a new technology, I would try to pull back [and ask] ‘How can I integrate this into Mugsy?’”

Mugsy Raspberry Pi Coffee Robot

By exploring his passions and using Mugsy as his guinea pig, Matthew created a project that not only solves a problem — how to make amazing coffee at home — but also brings him one step closer to ‘making things’ for a living. “I used to dream about this stuff when I was a kid, and I used to say ‘I’m never going to be able to do something like that,’” he admitted. But now, with open-source devices like the Raspberry Pi so readily available, he “can see the end of the road”: making his passion his livelihood.

Back Mugsy

With only a few days left on the Kickstarter campaign, Mugsy has reached its goal and then some. It’s available for backing from $150 if you provide your own Raspberry Pi 3, or from $175 with a Pi included — check it out today!

The post Take home Mugsy, the Raspberry Pi coffee robot appeared first on Raspberry Pi.

Amazon’s Door Lock Is Amazon’s Bid to Control Your Home

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/amazons_door_lo.html

Interesting essay about Amazon’s smart lock:

When you add Amazon Key to your door, something more sneaky also happens: Amazon takes over.

You can leave your keys at home and unlock your door with the Amazon Key app — but it’s really built for Amazon deliveries. To share online access with family and friends, I had to give them a special code to SMS (yes, text) to unlock the door. (Amazon offers other smartlocks that have physical keypads).

The Key-compatible locks are made by Yale and Kwikset, yet don’t work with those brands’ own apps. They also can’t connect with a home-security system or smart-home gadgets that work with Apple and Google software.

And, of course, the lock can’t be accessed by businesses other than Amazon. No Walmart, no UPS, no local dog-walking company.

Keeping tight control over Key might help Amazon guarantee security or a better experience. “Our focus with smart home is on making things simpler for customers – things like providing easy control of connected devices with your voice using Alexa, simplifying tasks like reordering household goods and receiving packages,” the Amazon spokeswoman said.

But Amazon is barely hiding its goal: It wants to be the operating system for your home. Amazon says Key will eventually work with dog walkers, maids and other service workers who bill through its marketplace. An Amazon home security service and grocery delivery from Whole Foods can’t be far off.

This is happening all over. Everyone wants to control your life: Google, Apple, Amazon…everyone. It’s what I’ve been calling the feudal Internet. I fear it’s going to get a lot worse.

Roguelike Simulator

Post Syndicated from Eevee original https://eev.ee/release/2017/12/09/roguelike-simulator/

Screenshot of a monochromatic pixel-art game designed to look mostly like ASCII text

On a recent game night, glip and I stumbled upon bitsy — a tiny game maker for “games where you can walk around and talk to people and be somewhere.” It’s enough of a genre to have become a top tag on itch, so we flicked through a couple games.

What we found were tiny windows into numerous little worlds, ill-defined yet crisply rendered in chunky two-colored pixels. Indeed, all you can do is walk around and talk to people and be somewhere, but the somewheres are strangely captivating. My favorite was the last days of our castle, with a day on the town in a close second (though it cheated and extended the engine a bit), but there are several hundred of these tiny windows available. Just single, short, minimal, interactive glimpses of an idea.

I’ve been wanting to do more of that, so I gave it a shot today. The result is Roguelike Simulator, a game that condenses the NetHack experience into about ninety seconds.


Constraints breed creativity, and bitsy is practically made of constraints — the only place you can even make any decisions at all is within dialogue trees. There are only three ways to alter the world: the player can step on an ending tile to end the game, step on an exit tile to instantly teleport to a tile on another map (or not), or pick up an item. That’s it. You can’t even implement keys; the best you can do is make an annoying maze of identical rooms, then have an NPC tell you the solution.

In retrospect, a roguelike — a genre practically defined by its randomness — may have been a poor choice.

I had a lot of fun faking it, though, and it worked well enough to fool at least one person for a few minutes! Some choice hacks follow. Probably play the game a couple times before reading them?

  • Each floor reveals itself, of course, by teleporting you between maps with different chunks of the floor visible. I originally intended for this to be much more elaborate, but it turns out to be a huge pain to juggle multiple copies of the same floor layout.

  • Endings can’t be changed or randomized; even the text is static. I still managed to implement multiple variants on the “ascend” ending! See if you can guess how. (It’s not that hard.)

  • There are no Boolean operators, but there are arithmetic operators, so in one place I check whether you have both of two items by multiplying together how many of each you have (see the sketch just after this list).

  • Monsters you “defeat” are actually just items you pick up. They’re both drawn in the same color, and you can’t see your inventory, so you can’t tell the difference.
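
As a rough illustration of that multiplication trick, here is a minimal sketch in Python rather than bitsy’s own dialogue scripting. The item names are invented; the point is simply that the product of two counts is nonzero only when both counts are.

```python
# Minimal sketch (Python, not bitsy) of using multiplication as a stand-in
# for Boolean AND. The item names below are made up for illustration.
inventory = {"rusty key": 1, "strange gem": 0}

def has_both(a, b):
    # The product of the two counts is nonzero only if both are nonzero.
    return inventory.get(a, 0) * inventory.get(b, 0) != 0

print(has_both("rusty key", "strange gem"))  # False until both are picked up
```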

Probably the best part was writing the text, which is all completely ridiculous. I really enjoy writing a lot of quips — which I guess is why I like Twitter — and I’m happy to see they’ve made people laugh!


I think this has been a success! It’s definitely made me more confident about making smaller things — and about taking the first idea I have and just running with it. I’m going to keep an eye out for other micro game engines to play with, too.

Eevee mugshot set for Doom

Post Syndicated from Eevee original https://eev.ee/release/2017/11/23/eevee-mugshot-set-for-doom/

Screenshot of Industrial Zone from Doom II, with an Eevee face replacing the usual Doom marine in the status bar

A full replacement of Doomguy’s vast array of 42 expressions.

You can get it yourself if you want to play Doom as me, for some reason? It does nothing but replace a few sprites, so it works with any Doom flavor (including vanilla) on 1, 2, or Final. Just run Doom with -file eeveemug.wad. With GZDoom, you can load it automatically.


I don’t entirely know why I did this. I drew the first one on a whim, then realized there was nothing really stopping me from making a full set, so I spent a day doing that.

The funny thing is that I usually play Doom with ZDoom’s “alternate” HUD. It’s a full-screen overlay rather than a huge bar, and — crucially — it does not show the mugshot. It can’t even be configured to show the mugshot. As far as I’m aware, it can’t even be modded to show the mugshot. So I have to play with the OG status bar if I want to actually use the thing I made.

Preview of the Eevee mugshot sprites arranged in a grid, where the Eevee becomes more beaten up in each subsequent column

I’m pretty happy with the results overall! I think I did a decent job emulating the Doom “surreal grit” style. I did the shading with Aseprite’s shading mode — instead of laying down a solid color, it shifts pixels along a ramp of colors you select every time you draw over them. Doom’s palette has a lot of browns, so I made a ramp out of all of them and kept going over furry areas, nudging pixels into being lighter or darker, until I liked the texture. It was a lot like building up texture in a sketch with scratchy pencil strokes.
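
For anyone unfamiliar with that kind of shading mode, here is a hypothetical sketch of the ramp idea in Python. It is not Aseprite’s actual behaviour, and the palette indices are made up; the point is just that each pass nudges a pixel one step along a chosen ramp instead of painting a flat colour.

```python
# Hypothetical sketch of ramp-based shading: instead of painting a flat
# colour, each stroke shifts a pixel one step along a chosen ramp.
# The indices here are invented stand-ins for a run of brown palette entries.
ramp = [0x40, 0x41, 0x42, 0x43, 0x44, 0x45]  # dark -> light (made-up indices)

def shade(pixel, direction=+1):
    """Shift a pixel one step lighter (+1) or darker (-1) along the ramp."""
    if pixel not in ramp:
        return pixel  # leave colours outside the ramp untouched
    i = ramp.index(pixel)
    return ramp[max(0, min(len(ramp) - 1, i + direction))]

# Repeatedly "drawing" over the same pixel keeps nudging it along the ramp.
p = ramp[2]
for _ in range(3):
    p = shade(p, +1)
print(hex(p))  # 0x45: clamped at the light end of the ramp
```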

I also gleaned some interesting things about smoothness and how the eye interprets contours? I tried to explain this on Twitter and had a hell of a time putting it into words, but the short version is that it’s amazing to see the difference a single misplaced pixel can make, especially as you slide that pixel between dark and light.


Doom's palette of 256 colors, many of which are very long gradients of reds and browns

Speaking of which, Doom’s palette is incredibly weird to work with. Thank goodness Eevees are brown! The game does have to draw arbitrary levels of darkness all with the same palette, which partly explains the number of dark colors and gradients — but I believe a number of the colors are exact duplicates, so close they might as well be duplicates, or completely unused in stock Doom assets. I guess they had no reason to optimize for people trying to add arbitrary art to the game 25 years later, though. (And nowadays, GZDoom includes a truecolor software renderer, so the palette is becoming less and less important.)

I originally wanted the god mode sprite to be a Sylveon, but Sylveon is made of pink and azure and blurple, and I don’t think I could’ve pulled it off with this set of colors. I even struggled with the color of the mane a bit — I usually color it with pretty pale colors, but Doom only has a couple of those, and they’re very saturated. I ended up using a lot more dark yellows than I would normally, and thankfully it worked out pretty well.

The most significant change I made between the original sprite and the final set was the eye color:

A comparison between an original Doom mugshot sprite, the first sprite I drew, and how it ended up

(This is STFST20, a frame from the default three-frame “glancing around” animation that plays when the player has between 40 and 59 health. Doom Wiki has a whole article on the mugshot if you’re interested.)

The blue eyes in my original just do not work at all. The Doom palette doesn’t have a lot of subtle colors, and its blues in particular are incredibly bad. In the end, I made the eyes basically black, though with a couple pixels of very dark blue in them.

After I decided to make the full set, I started by making a neutral and completely healthy front pose, then derived the others from that (with a very complicated system of layers). You can see some of the side effects of that here: the face doesn’t actually turn when glancing around, because hoo boy that would’ve been a lot of work, and so the cheek fluff is visible on both sides.

I also notice that there are two columns of identical pixels in each eye! I fixed that in the glance to the right, but must’ve forgotten about it here. Oh, well; I didn’t even notice until I zoomed in just now.

A general comparison between the Doom mugshots and my Eevee ones, showing each pose in its healthy state plus the neutral pose in every state of deterioration

The original sprites might not be quite aligned correctly in the above image. The available space in the status bar is 35×31, of which a couple pixels go to an inset border, leaving 33×30. I drew all of my sprites at that size, but the originals are all cropped and have varying offsets (part of the Doom sprite format). I extremely can’t be assed to check all of those offsets for over a dozen sprites, so I just told ImageMagick to center them. (I only notice right now that some of the original sprites are even a full 31 pixels tall and draw over the top border that I was so careful to stay out of!)
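
For what it’s worth, a hypothetical Python/Pillow equivalent of that centering step might look like the sketch below. ImageMagick is what was actually used; the filename and the Pillow calls here are just an illustration of pasting each cropped sprite onto the 33×30 canvas.

```python
# Hypothetical Python/Pillow version of "just center them": paste each cropped
# mugshot onto a 33x30 canvas, ignoring the per-sprite Doom offsets entirely.
from PIL import Image

CANVAS = (33, 30)  # status bar space minus the inset border, per the text above

def center_on_canvas(path):
    sprite = Image.open(path).convert("RGBA")
    canvas = Image.new("RGBA", CANVAS, (0, 0, 0, 0))
    x = (CANVAS[0] - sprite.width) // 2
    y = (CANVAS[1] - sprite.height) // 2
    canvas.paste(sprite, (x, y), sprite)  # use the sprite's alpha as the mask
    return canvas

# Example usage (filename is illustrative only):
# center_on_canvas("STFST20.png").save("STFST20_centered.png")
```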

Anyway, this is a representative sample of the Doom mugshot poses.

The top row shows all eight frames at full health. The first three are the “idle” state, drawn when nothing else is going on; the sprite usually faces forwards, but glances around every so often at random. The forward-facing sprite is the one I finalized first.

I tried to take a lot of cues from the original sprite, seeing as I wanted to match the style. I’d never tried drawing a sprite with a large palette and a small resolution before, and the first thing that struck me was Doomguy’s lips — the upper lip, lips themselves, and shadow under the lower lip are all created with only one row of pixels each. I thought that was amazing. Now I even kinda wish I’d exaggerated that effect a bit more, but I was wary of going too dark when there’s a shadow only a couple pixels away. I suppose Doomguy has the advantage of having, ah, a chin.

I did much the same for the eyebrows, which was especially necessary because Doomguy has more of a forehead than my Eevee does. I probably could’ve exaggerated those a bit more, as well! Still, I love how they came out — especially in the simple looking-around frames, where even a two-pixel eyebrow raise is almost comically smug.

The fourth frame is a wild-ass grin (even named STFEVL0), which shows for a short time after picking up a new weapon. Come to think of it, that’s a pretty rare occurrence when playing straight through one of the Doom games; you keep your weapons between levels.

The fifth through seventh are also a set. If the player takes damage, the status bar will briefly show one of these frames to indicate where the damage is coming from. You may notice that where Doomguy bravely faces the source of the pain, I drew myself wincing and recoiling away from it.

The middle frame of that set also appears while the player is firing continuously (regardless of damage), so I couldn’t really make it match the left and right ones. I like the result anyway. It was also great fun figuring out the expressions with the mouth — that’s another place where individual pixels make a huge difference.

Finally, the eighth column is the legendary “ouch” face, which appears when the player takes more than 20 damage at once. It may look completely alien to you, because vanilla Doom has a bug that only shows this face when the player gains 20 or more health while taking damage. This is vanishingly rare (though possible!), so the frame virtually never appears in vanilla Doom. Lots of source ports have fixed this bug, making the ouch face a bit better known, but I usually play without the mugshot visible so it still looks super weird to me. I think my own spin on it is a bit less, ah, body horror?

The second row shows deterioration. It is pretty weird drawing yourself getting beaten up.

A lot of Doomguy’s deterioration is in the form of blood dripping from under his hair, which I didn’t think would translate terribly well to a character without hair. Instead, I went a little cartoony with it, adding bandages here and there. I had a little bit of a hard time with the bloodshot eyes at this resolution, which I realize as I type it is a very poor excuse when I had eyes three times bigger than Doomguy’s. I do love the drooping ears, with the possible exception of the fifth state, which I’m not sure is how that would actually look…? Oh well. I also like the bow becoming gradually unravelled, eventually falling off entirely when you die.

Oh, yes, the sixth frame there (before the gap) is actually for a dead player. Doomguy’s bleeding becomes markedly more extreme here, but again that didn’t really work for me, so I went a little sillier with it. A little. It’s still pretty weird drawing yourself dead.

That leaves only god mode, which is incredible. I love that glow. I love the faux whisker shapes it makes. I love how it fades into the background. I love that 100% pure “oh this is pretty good” smile. It all makes me want to just play Doom in god mode forever.

Now that I’ve looked closely at these sprites again, I spy a good half dozen little inconsistencies and nitpicks, which I’m going to refrain from spelling out. I did do this in only a day, and I think it came out pretty dang well considering.

Maybe I’ll try something else like this in the future. Not quite sure what, though; there aren’t many small and self-contained sets of sprites like this in Doom. Monsters are several times bigger and have a zillion different angles. Maybe some pickups, which only have one frame?


Hmm. Parting thought: I’m not quite sure where I should host this sort of one-off thing. It arguably belongs on Itch, but seems really out of place alongside entire released games. It also arguably belongs on the idgames archive, but I’m hesitant to put it there because it’s such an obscure thing of little interest to a general audience. At the moment it’s just a file I’ve uploaded to wherever on my own space, but I now have three little Doom experiments with no real permanent home.

Taking the first step on the journey

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/taking-first-step-journey/

This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

About five years ago was the first time I unboxed a Raspberry Pi. I hooked it up to our living room television and made space on the TV stand for an old USB keyboard and mouse. Watching the $35 computer boot up for the first time impressed me, and I had a feeling it was a big deal, but I’ll admit that I had no idea how much of a phenomenon Raspberry Pi would become. I had no idea how large the community would grow. I had no idea how much my life would be changed from that moment on. And it all started with a simple first step: booting it up.

Matt Richardson on Twitter

Finally a few minutes to experiment with @Raspberry_Pi! So far, I’m rather impressed!

The key to the success of Raspberry Pi as a computer – and, in turn, a community and a charitable foundation – is that there’s a low barrier to the first step you take with it. The low price is a big reason for that. Whether or not to try Raspberry Pi is not a difficult decision. Since it’s so affordable, you can just give it a go, and see how you get along.

The pressure is off

Linus Torvalds, the creator of the Linux operating system kernel, talked about this in a BBC News interview in 2012. He explained that a lot of people might take the first step with Raspberry Pi, but not everyone will carry on with it. But getting more people to take that first step of turning it on means there are more people who potentially will be impacted by the technology. Torvalds said:

I find things like Raspberry Pi to be an important thing: trying to make it possible for a wider group of people to tinker with computers. And making the computers cheap enough that you really can not only afford the hardware at a big scale, but perhaps more important, also afford failure.

In other words, if things don’t work out with you and your Raspberry Pi, it’s not a big deal, since it’s such an affordable computer.

In this together

Of course, we hope that more and more people who boot up a Raspberry Pi for the first time will decide to continue experimenting, creating, and learning with it. Thanks to improvements to the hardware, the Raspbian operating system, and free software packages, it’s constantly becoming easier to do many amazing things with this little computer. And our continually growing community means you’re not alone on this journey. These improvements and growth over the past few years hopefully encourage more people who boot up Raspberry Pis to keep exploring.

The first step

However, the important thing is that people are given the opportunity to take that first step, especially young people. Young learners are at a critical age, and something like the Raspberry Pi can have an enormously positive impact on the rest of their lives. It’s a major reason why our free resources are aimed at young learners. It’s also why we train educators all over the world for free. And encouraging youngsters to take their first step with Raspberry Pi could not only make a positive difference in their lives, but also in society at large.

With the affordable computational power, excellent software, supportive community, and free resources, you’re given everything you need to make a big impact in the world when you boot up a Raspberry Pi for the first time. That moment could be step one of ten, or one of ten thousand, but it’s up to you to take that first step.

Now you!

Learning and making things with the Pi is incredibly easy, and we’ve created numerous resources and tutorials to help you along. First of all, check out our hardware guide to make sure you’re all set up. Next, you can try out Scratch and Python, our favourite programming languages. Feeling creative? Learn to code music with Sonic Pi, or make visual art with Processing. Ready to control the real world with your Pi? Create a reaction game, or an LED adornment for your clothing. Maybe you’d like to do some science with the help of our Sense HAT, or become a film maker with our camera?

You can do all this with the Raspberry Pi, and so much more. The possibilities are as limitless as your imagination. So where do you want to start?

The post Taking the first step on the journey appeared first on Raspberry Pi.

Pioneers events: what’s your jam?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pioneers-events/

We hope you’re as excited as we are about the launch of the second Pioneers challenge! While you form your teams and start thinking up ways to Make it Outdoors with tech, we’ve been thinking of different ways for you to come together to complete the challenge.

Pioneers: Make it Outdoors: Pioneers events

Team up!

In the last challenge, we saw many teams formed as part of after-school coding clubs or as a collection of best friends at the kitchen table. However, for some this may not be a viable option. Maybe your friends live too far away, or your school doesn’t have a coding club. Maybe you don’t have the time to dedicate to meeting up every week, but you do have a whole Saturday free.

If this is the case, you may want to consider running your Pioneers team as part of an event, such as a makerspace day or Raspberry Jam. Over the course of this second cycle, we’ll be growing the number of Pioneers Events. Keep your eyes peeled for details as they are released!

HackLab on Twitter

And the HackLab #Pioneers team are off! Hundreds of laughable ideas pouring forth! @__MisterC__ @Raspberry_Pi #makeyourideas

Come together

Maker events provide the chance to meet other people who are into making things with technology. You’ll find people at events who are just getting started, as well as more expert types who are happy to give advice. This is true of Pioneers Events as well as Raspberry Jams.

Marie MIllward on Twitter

Planning new #makeyourideas Pioneers projects @LeedsRaspJam Did someone mention a robot…?

Raspberry Jams are the perfect place for Pioneers teams to meet and spend the day planning and experimenting with their build. If you’re taking part in Pioneers as part of an informal squad, you might find it helpful to come to your local Jam for input and support. Many Jams run on a monthly basis, so you’ll easily find enough time to complete the build over the space of two months. Make sure you carry on sharing your ideas via social media and email between meetings.

The kindness of strangers

If you are a regular at Raspberry Jams, or an organiser yourself, why not consider supporting some teenagers to take part in Pioneers and give them their first taste of making something using tech? We encourage our Pioneers to work together to discover and overcome problems as a team, and we urge all event organisers to minimise adult participation when overseeing a Pioneers build at an event. You can offer advice and answer some questions; just don’t take over.

HullRaspJam on Twitter

Any 11 – 15 year old coders in #Hull we will happily support you to #MakeYourIdeas – Get in touch! https://t.co/ZExV4mWLJx

There are many other ways for you to help. Imagine the wonderful ideas you can inspire in teens by taking your own creations to a Raspberry Jam! Have you built a live-streaming bird box? Or modified your bike with a Pi Zero? Maybe you’ve built a Pi-powered go-kart or wired your shoes to light up as you walk?

Pioneers is a programme to inspire teens to try digital making, but we also want to create a community of like-minded teens. If we can connect our Pioneers with the wonderful wider community of makers, through networks such as makerspaces, CoderDojos, and Raspberry Jams, then we will truly start to make something great.

HackLab on Twitter

Are you 12-15yo & like making stuff? Come to @cammakespace 4 the world’s 1st @Raspberry_Pi #Pioneers Event! #FREE: https://t.co/UtVmJ9kPDM

Running your own Jam and Pioneers events

For more information on Pioneers, check out the Pioneers website.

For more information on Raspberry Jams, including event schedules and how to start your own, visit the Raspberry Jam website.

Oh, and keep your eyes on this week’s blogs from tomorrow because … well … just do.


The post Pioneers events: what’s your jam? appeared first on Raspberry Pi.

Pioneers: the second challenge is…

Post Syndicated from Olympia Brown original https://www.raspberrypi.org/blog/pioneers-second-challenge/

Pioneers, your next challenge is here!

Do you like making things? Do you fancy trying something new? Are you aged 11 to 16? The Pioneers programme is ready to challenge you to create something new using technology.

As you’ll know if you took part last time, Pioneers challenges are themed. So here’s the lovely Ana from ZSL London Zoo to reveal the theme of the next challenge:

Your next challenge, if you choose to accept it, is…

#MakeYourIdeas: The second Pioneers challenge is here! Wahoo! Have you registered your team yet? Make sure you do. Head to the Pioneers website for more details: http://www.raspberrypi.org/pioneers

Make it Outdoors

You have until the beginning of July to make something related to the outdoors. As Ana said, the outdoors is pretty big, so here are some ideas:

Resources and discounted kit

If you’re looking at all of these projects and thinking that you don’t know where to start, never fear! Our free resources offer a great starting point for any new project, and can help you to build on your existing skills and widen your scope for creating greatness.

We really want to see your creativity and ingenuity though, so we’d recommend using these projects as starting points rather than just working through the instructions. To help us out, the wonderful Pimoroni are offering 15 percent off kit for our Getting started with wearables and Getting started with picamera resources. You should also check out our new Poo near you resource for an example of a completely code-based project.



For this cycle of Pioneers, thanks to our friends at the Shell Centenary Scholarship Fund, we are making bursaries available to teams to cover the cost of these basic kits (one per team). This is for teens who haven’t taken part in digital making activities before, and for whom the financial commitment would be a barrier to taking part. Details about the bursaries and the discount will be sent to you when you register.

Your Pioneers team

We’ve introduced a few new things for this round of Pioneers, so pay special attention if you took part last time round!

Pioneers challenge: Make it Outdoors

We’re looking for UK-based teams of between two and five people, aged between 11 and 16, to work together to create something related to the outdoors. In our experience, there are three main ways to run a Pioneers team. It’s up to you to decide which approach works best for your team.

  • You could organise a Group that meets once or twice a week. We find this method works well for school-based teams that can meet at the end of a school day for an hour or two every week.
  • You could mentor a Squad that is largely informal, where the members probably already have a good idea of what they’re doing. A Squad tends to be more independent, and meetings may be sporadic, informal or online only. This option isn’t recommended if it’s your first competition like this, or if you’re not a techie yourself.
  • You could join a local Event at a technology hub near you. We’re hoping to run more and more of these events around the country as Pioneers evolves and grows. If you think you’d like to help us run a Pioneers Event, get in touch! We love to hear from people who want to spread their love of making, and we’ll support you as much as we possibly can to get your event rocking along. If you want to run a Pioneers Event, you will need to preregister on the Pioneers website so that we can get you all the support you need well before you open your doors.

#MakeYourIdeas

As always, we’re excited to watch the progress of your projects via social media channels such as Twitter, Instagram, and Snapchat. As you work on your build, make sure to share the ‘making of…’ stages with us using #MakeYourIdeas.

For inspiration from previous entries, here’s the winner announcement video for the last Pioneers challenge:

Winners of the first Pioneers challenge are…

After months of planning and making, the first round of Pioneers is over! We laid down the epic challenge of making us laugh. And boy, did the teams deliver. We can honestly say that my face hurt from all the laughing on judging day. Congratulations to everyone who took part.

Once you’ve picked a project, the first step is to register. What are you waiting for? Head to the Pioneers website to get started!

The post Pioneers: the second challenge is… appeared first on Raspberry Pi.

Steampunk laptop powered by Pi: OMG so fancy!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/steampunk-laptop/

In this digital age, where backup computers and multiple internet-connected devices are a must, maker phrazelle built this beautiful Raspberry Pi-powered steampunk laptop for his girlfriend.

And now we all want one. I mean, just look at it!

Raspberry Pi Steampunk laptop

There’s no denying that, had Liz seen this before me, she’d have copied the link into an email and titled it INSTABLOG before sending it to my inbox.

This build is gorgeous. And as a fan of quirky-looking tech builds and of making things out of wood, it caught my eye in a heartbeat, causing me to exclaim “Why, I – ugh! – I want a Steampunk laptop?!” Shortly afterwards, there followed the realisation that there is an Instructables page for the project, leading me to rejoice that I could make my own. “You’ll never finish it,” chides the incomplete Magic Mirror beneath my desk. I shush it with a kick.

Winging it

“I didn’t really spec this out when I started building. I knew I wanted a box, but didn’t know how I was going to approach it,” explains phrazelle, a maker after my own “meh, I’ll wing it” heart. He continues, “I started with a mechanical keyboard with some typewriter-esque keys and built out a board for it. This went in a few directions, and I wound up with a Frankenstein keyboard tray.”

Originally wanting a hole for each key, phrazelle used a paint relief method to mark the place of each one. However, this didn’t work out too well, so he decided to jigsaw out a general space for the keys in a group. After a few attempts and an application of Gorilla Glue, it was looking good.

Building a Steampunk laptop

With his father’s help, phrazelle’s next step was to build the box for the body of the laptop. Again, it was something of an unplanned mashup, resulting in a box that was built around the keyboard tray. Via a series of mitred joints, routing, and some last minute trim, he was able to fit an LCD screen from a cannibalised laptop into the lid, complete with an LCD driver acquired from eBay.

All of the Steampunk trimmings

“As I was going in the Steampunk direction, gears and gauges seemed to make sense,” says phrazelle. “I found a lot of cool stuff on Etsy and Amazon. The front battery gauge, back switch plate, and LED indicator housings came off Etsy.” He also discovered that actual watch gears, which he had purchased in bulk, were too flimsy for use as decoration, so he replaced them with some brass replicas from Amazon instead. Hand-blown marbles worked as LED diffusers, and the case was complete.

Inside the belly of the (beautiful) beast

Within the laptop body, phrazelle (do let us know your actual name, by the way) included a Talentcell battery pack which he modified to cut the output lines, something that was causing grief when trying to charge the battery. He utilised a plugable USB 2.4 four-port powered hub to power the Raspberry Pi and optional USB devices. He also added a bushel of various other modifications, all of which he explains on his Instructables page.

I ran with the Pixel distro for this build. Then I went through and did some basic security housekeeping like changing the default password, closing every unnecessary port on the firewall, and disabling the Bluetooth. I even put the Bro IDS platform on it to keep an eye out for shifty hackers… *shakes fist*

This thing runs like a champ! For its intended functionality, it does everything it needs to. You can get on the internet, write papers, check email… If you want to get nerdy, you can even brush up on your coding skillz.

Instructables and you

As I said, we love this build. Not only is it a great example of creating an all-in-one Raspberry Pi laptop, but it’s also gorgeous! Make sure to check out phrazelle’s other builds on Instructables, including his Zelda-themed bartop arcade and his ornate magic mirror.

While you’re there, check out the other Raspberry Pi-themed builds on Instructables. There are LOADS of them. And they’re great. And if you wrote any of them – ahem! – like I did, you should be proud of yourself – ahem! – like I am. *clears throat even more pointedly*

Have you built your own Pi laptop? Tell us about it in the comments below. We can’t wait to see it!

The post Steampunk laptop powered by Pi: OMG so fancy! appeared first on Raspberry Pi.

Utopia

Post Syndicated from Eevee original https://eev.ee/blog/2017/03/08/utopia/

It’s been a while, but someone’s back on the Patreon blog topic tier! IndustrialRobot asks:

What does your personal utopia look like? Do you think we (as mankind) can achieve it? Why/why not?

Hm.

I spent the month up to my eyeballs in a jam game, but this question was in the back of my mind a lot. I could use it as a springboard to opine about anything, especially in the current climate: politics, religion, nationalism, war, economics, etc., etc. But all of that has been done to death by people who actually know what they’re talking about.

The question does say “personal”. So in a less abstract sense… what do I want the world to look like?

Mostly, I want everyone to have the freedom to make things.

I’ve been having a surprisingly hard time writing the rest of this without veering directly into the ravines of “basic income is good” and “maybe capitalism is suboptimal”. Those are true, but not really the tone I want here, and anyway they’ve been done to death by better writers than I. I’ve talked this out with Mel a few times, and it sounds much better aloud, so I’m going to try to drop my Blog Voice and just… talk.

*ahem*

Art versus business

So, art. Art is good.

I’m construing “art” very broadly here. More broadly than “media”, too. I’m including shitty robots, weird Twitter almost-bots, weird Twitter non-bots, even a great deal of open source software. Anything that even remotely resembles creative work — driven perhaps by curiosity, perhaps by practicality, but always by a soul bursting with ideas and a palpable need to get them out.

Western culture thrives on art. Most culture thrives on art. I’m not remotely qualified to defend this, but I suspect you could define culture in terms of art. It’s pretty important.

You’d think this would be reflected in how we discuss art, but often… it’s not. Tell me how often you’ve heard some of these gems.

  • “I could do that.”
  • “My eight-year-old kid could do that.”
  • Jokes about the worthlessness of liberal arts degrees.
  • Jokes about people trying to write novels in their spare time, the subtext being that only dreamy losers try to write novels, or something.
  • The caricature of a hippie working on a screenplay at Starbucks.

Oh, and then there was the guy who made a bot to scrape tons of art from artists who were using Patreon as a paywall — and a primary source of income. The justification was that artists shouldn’t expect to make a living off of, er, doing art, and should instead get “real jobs”.

I do wonder. How many of the people repeating these sentiments listen to music, or go to movies, or bought an iPhone because it’s prettier? Are those things not art that took real work to create? Is creating those things not a “real job”?

Perhaps a “real job” has to be one that’s not enjoyable, not a passion? And yet I can’t recall ever hearing anyone say that Taylor Swift should get a “real job”. Or that, say, pro football players should get “real jobs”. What do pro football players even do? They play a game a few times a year, and somehow this drives the flow of unimaginable amounts of money. We dress it up in the more serious-sounding “sport”, but it’s a game in the same general genre as hopscotch. There’s nothing wrong with that, but somehow it gets virtually none of the scorn that art does.

Another possible explanation is America’s partly-Christian, partly-capitalist attitude that you deserve exactly whatever you happen to have at the moment. (Whereas I deserve much more and will be getting it any day now.) Rich people are rich because they earned it, and we don’t question that further. Poor people are poor because they failed to earn it, and we don’t question that further, either. To do so would suggest that the system is somehow unfair, and hard work does not perfectly correlate with any particular measure of success.

I’m sure that factors in, but it’s not quite satisfying: I’ve also seen a good deal of spite aimed at people who are making a fairly decent chunk through Patreon or similar. Something is missing.

I thought, at first, that the key might be the American worship of work. Work is an inherent virtue. Politicians run entire campaigns based on how many jobs they’re going to create. Notably, no one seems too bothered about whether the work is useful, as long as someone decided to pay you for it.

Finally I stumbled upon the key. America doesn’t actually worship work. America worships business. Business means a company is deciding to pay you. Business means legitimacy. Business is what separates a hobby from a career.

And this presents a problem for art.

If you want to provide a service or sell a product, that’ll be hard, but America will at least try to look like it supports you. People are impressed that you’re an entrepreneur, a small business owner. Politicians will brag about policies made in your favor, whether or not they’re stabbing you in the back.

Small businesses have a particular structure they can develop into. You can divide work up. You can have someone in sales, someone in accounting. You can provide specifications and pay a factory to make your product. You can defer all of the non-creative work to someone else, whether that means experts in a particular field or unskilled labor.

But if your work is inherently creative, you can’t do that. The very thing you’re making is your idea in your style, driven by your experience. This is not work that’s readily parallelizable. Even if you sell physical merchandise and register as an LLC and have a dedicated workspace and do various other formal business-y things, the basic structure will still look the same: a single person doing the thing they enjoy. A hobbyist.

Consider the bulleted list from above. Those are all individual painters or artists or authors or screenwriters. The kinds of artists who earn respect without question are generally those managed by a business, those with branding: musical artists signed to labels, actors working for a studio. Even football players are part of a tangle of business.

(This doesn’t mean that business automatically confers respect, of course; tech in particular is full of anecdotes about nerds’ disdain for people whose jobs are design or UI or documentation or whathaveyou. But a businessy look seems to be a significant advantage.)

It seems that although art is a large part of what informs culture, we have a culture that defines “serious” endeavors in such a way that independent art cannot possibly be “serious”.

Art versus money

Which wouldn’t really matter at all, except that we also have a culture that expects you to pay for food and whatnot.

The reasoning isn’t too outlandish. Food is produced from a combination of work and resources. In exchange for getting the food, you should give back some of your own work and resources.

Obviously this is riddled with subtle flaws, but let’s roll with it for now and look at a case study. Like, uh, me!

Mel and I built and released two games together in the six weeks between mid-January and the end of February. Together, those games have made $1,000 in sales. The sales trail off fairly quickly within a few days of release, so we’ll call that the total gross for our effort.

I, dumb, having never actually sold anything before, thought this was phenomenal. Then I had the misfortune of doing some math.

Itch takes at least 10%, so we’re down to $900 net. Divided over six weeks, that’s $150 per week, before taxes — or $3.75 per hour if we’d been working full time.

Ah, but wait! There are two of us. And we hadn’t been working full time — we’d been working nearly every waking hour, which is at least twice “full time” hours. So we really made less than a dollar an hour. Even less than that, if you assume overtime pay.
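
Spelling the arithmetic out (a quick back-of-the-envelope sketch; treating “nearly every waking hour” as roughly 80 hours a week per person is an assumption):

```python
# Back-of-the-envelope check of the figures above. The 80-hour week is an
# assumption drawn from "nearly every waking hour"; the rest comes from the post.
gross = 1000            # total sales for both games, USD
net = gross * 0.90      # after itch's 10% cut
weeks = 6
people = 2
hours_per_week = 80     # roughly twice "full time"

print(net / weeks)                              # 150.0 USD per week, before taxes
print(net / (weeks * 40))                       # 3.75 USD/hour at full-time hours
print(net / (weeks * hours_per_week * people))  # ~0.94 USD/hour for two people
```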

From the perspective of capitalism, what is our incentive to do this? Between us, we easily have over thirty years of experience doing the things we do, and we spent weeks in crunch mode working on something, all to earn a small fraction of minimum wage. Did we not contribute back our own work and resources? Was our work worth so much less than waiting tables?

Waiting tables is a perfectly respectable way to earn a living, mind you. Ah, but wait! I’ve accidentally done something clever here. It is generally expected that you tip your waiter, because waiters are underpaid by the business, because the business assumes they’ll be tipped. Not tipping is actually, almost impressively, one of the rudest things you can do. And yet it’s not expected that you tip an artist whose work you enjoy, even though many such artists aren’t being paid at all.

Now, to be perfectly fair, both games were released for free. Even a dollar an hour is infinitely more than the zero dollars I was expecting — and I’m amazed and thankful we got as much as we did! Thank you so much. I bring it up not as a complaint, but as an armchair analysis of our systems of incentives.

People can take art for granted and whatever, yes, but there are several other factors at play here that hamper the ability for art to make money.

For one, I don’t want to sell my work. I suspect a great deal of independent artists and writers and open source developers (!) feel the same way. I create things because I want to, because I have to, because I feel so compelled to create that having a non-creative full-time job was making me miserable. I create things for the sake of expressing an idea. Attaching a price tag to something reduces the number of people who’ll experience it. In other words, selling my work would make it less valuable in my eyes, in much the same way that adding banner ads to my writing would make it less valuable.

And yet, I’m forced to sell something in some way, or else I’ll have to find someone who wants me to do bland mechanical work on their ideas in exchange for money… at the cost of producing sharply less work of my own. Thank goodness for Patreon, at least.

There’s also the reverse problem, in that people often don’t want to buy creative work. Everyone does sometimes, but only sometimes. It’s kind of a weird situation, and the internet has exacerbated it considerably.

Consider that if I write a book and print it on paper, that costs something. I have to pay for the paper and the ink and the use of someone else’s printer. If I want one more book, I have to pay a little more. I can cut those costs pretty considerably by printing a lot of books at once, but each copy still has a price, a marginal cost. If I then gave those books away, I would be actively losing money. So I can pretty well justify charging for a book.

Along comes the internet. Suddenly, copying costs nothing. Not only does it cost nothing, but it’s the fundamental operation. When you download a file or receive an email or visit a web site, you’re really getting a copy! Even the process which ultimately shows it on your screen involves a number of copies. This is so natural that we don’t even call it copying, don’t even think of it as copying.

True, bandwidth does cost something, but the rate is virtually nothing until you start looking at very big numbers indeed. I pay $60/mo for hosting this blog and a half dozen other sites — even that’s way more than I need, honestly, but downgrading would be a hassle — and I get 6TB of bandwidth. Even the longest of my posts haven’t exceeded 100KB. A post could be read by 64 million people before I’d start having a problem. If that were the population of a country, it’d be the 23rd largest in the world, between Italy and the UK.
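
(That 64 million figure checks out if the numbers are read in binary units, 6 TiB of bandwidth against 100 KiB per post; treating them as binary units is an assumption, not something stated above. A quick sketch:)

```python
# Quick check of the readership estimate above, assuming binary units
# (6 TiB of monthly bandwidth, 100 KiB per post view); that reading is an assumption.
bandwidth = 6 * 1024**4        # bytes per month
post_size = 100 * 1024         # bytes per page view
print(bandwidth // post_size)  # 64424509, i.e. roughly 64 million readers
```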

How, then, do I justify charging for my writing? (Yes, I realize the irony in using my blog as an example in a post I’m being paid $88 to write.)

Well, I do pour effort and expertise and a fraction of my finite lifetime into it. But it doesn’t cost me anything tangible — I already had this hosting for something else! — and it’s easier all around to just put it online.

The same idea applies to a vast bulk of what’s online, and now suddenly we have a bit of a problem. Not only are we used to getting everything for free online, but we never bothered to build any sensible payment infrastructure. You still have to pay for everything by typing in a cryptic sequence of numbers from a little physical plastic card, which will then give you a small loan and charge the seller 30¢ plus 2.9% for the “convenience”.

If a website could say “pay 5¢ to read this” and you clicked a button in your browser and that was that, we might be onto something. But with our current setup, it costs far more than 5¢ to transfer 5¢, even though it’s just a number in a computer somewhere. The only people with the power and resources to fix this don’t want to fix it — they’d rather be the ones charging you the 30¢ plus 2.9%.

That leads to another factor: platforms and publishers, which are more than happy to eat a chunk of your earnings even when you do sell stuff. Google Play, the App Store, Steam, and anecdotally many other big-name comparable platforms all take 30% of your sales. A third! And that’s good! It seems common among book publishers to take 85% to 90%. For ebook sales — i.e., ones that don’t actually cost anything — they may generously lower that to a mere 75% to 85%.

Bless Patreon for only taking 5%. Itch.io is even better: it defaults to 10%, but gives you a slider, which you can set to anything from 0% to 100%.

I’ve mentioned all this before, so here’s a more novel thought: finite disposable income. Your audience only has so much money to spend on media right now. You can try to be more compelling to encourage them to spend more of it, rather than saving it, but ultimately everyone has a limit before they just plain run out of money.

Now, popularity is heavily influenced by social and network effects, so it tends to create a power law distribution: a few things are ridiculously hyperpopular, and then there’s a steep drop to a long tail of more modestly popular things.

If a new hyperpopular thing comes out, everyone is likely to want to buy it… but then that eats away a significant chunk of that finite pool of money that could’ve gone to less popular things.

This isn’t bad, and buying a popular thing doesn’t make you a bad person; it’s just what happens. I don’t think there’s any satisfying alternative that doesn’t involve radically changing the way we think about our economy.

Taylor Swift, who I’m only picking on because her infosec account follows me on Twitter, has sold tens of millions of albums and is worth something like a quarter of a billion dollars. Does she need more? If not, should she make all her albums free from now on?

Maybe she does, and maybe she shouldn’t. The alternative is for someone to somehow prevent her from making more money, which doesn’t sit well. Yet it feels almost heretical to even ask if someone “needs” more money, because we take for granted that she’s earned it — in part by being invested in by a record label and heavily advertised. The virtue is work, right? Don’t a lot of people work just as hard? (“But you have to be talented too!” Then please explain how wildly incompetent CEOs still make millions, and leave burning businesses only to be immediately hired by new ones? Anyway, are we really willing to bet there is no one equally talented but not as popular by sheer happenstance?)

It’s kind of a moot question anyway, since she’s probably under contract with billionaires and it’s not up to her.

Where the hell was I going with this.


Right, so. Money. Everyone needs some. But making it off art can be tricky, unless you’re one of the lucky handful who strike gold.

And I’m still pretty goddamn lucky to be able to even try this! I doubt I would’ve even gotten into game development by now if I were still working for an SF tech company — it just drained so much of my creative energy, and it’s enough of an uphill battle for me to get stuff done in the first place.

How many people do I know who are bursting with ideas, but have to work a tedious job to keep the lights on, and are too tired at the end of the day to get those ideas out? Make no mistake, making stuff takes work — a lot of it. And that’s if you’re already pretty good at the artform. If you want to learn to draw or paint or write or code, you have to do just as much work first, with much more frustration, and not as much to show for it.

Utopia

So there’s my utopia. I want to see a world where people have the breathing room to create the things they dream about and share them with the rest of us.

Can it happen? Maybe. I think the cultural issues are a fairly big blocker; we’d be much better off if we treated independent art with the same reverence as, say, people who play with a ball for twelve hours a year. Or if we treated liberal arts degrees as just as good as computer science degrees. (“But STEM can change the world!” Okay. How many people with computer science degrees would you estimate are changing the world, and how many are making a website 1% faster or keeping a lumbering COBOL beast running or trying to trick 1% more people into clicking on ads?)

I don’t really mean stuff like piracy, either. Piracy is a thing, but it’s… complicated. In my experience it’s not even artists who care the most about piracy; it’s massive publishers, the sort who see artists as a sponge to squeeze money out of. You know, the same people who make everything difficult to actually buy, infest it with DRM so it doesn’t work on half the stuff you own, and don’t even sell it in half the world.

I mean treating art as a free-floating commodity, detached from anyone who created it. I mean neo-Nazis adopting a comic book character as their mascot, against the creator’s wishes. I mean politicians and even media conglomerates using someone else’s music in well-funded videos and ads without even asking. I mean assuming Google Image Search, wonder that it is, is some kind of magical free art machine. I mean the snotty Reddit post I found while looking up Patreon’s fee structure, where some doofus was insisting that Patreon couldn’t possibly pay for a full-time YouTuber’s time, because not having a job meant they had lots of time to spare.

Maybe I should go one step further: everyone should create at least once or twice. Everyone should know what it’s like to have crafted something out of nothing, to be a fucking god within the microcosm of a computer screen or a sewing machine or a pottery table. Everyone should know that spark of inspiration that we don’t seem to know how to teach in math or science classes, even though it’s the entire basis of those as well. Everyone should know that there’s a good goddamn reason I listed open source software as a kind of art at the beginning of this post.

Basic income and more arts funding for public schools. If Uber can get billions of dollars for putting little car icons on top of Google Maps and not actually doing any of their own goddamn service themselves, I think we can afford to pump more cash into webcomics and indie games and, yes, even underwater basket weaving.

Lifelong Learning

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/lifelong-learning/

This column is from The MagPi issue 54. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

When you contemplate the Raspberry Pi Foundation’s educational mission, you might first think of young people learning how to code, how computers work, and how to make things with computers. You might also think of teachers leveraging our free resources and training in order to bring digital making to their students in the classroom. Getting young people excited about computing and digital making is an enormous part of what we’re all about.

Last year we trained over 540 Certified Educators in the UK and USA.

We all know that learning doesn’t only happen in the classroom – it also happens in the home, at libraries, code clubs, museums, Scout troop meetings, and after-school enrichment centres. At the Raspberry Pi Foundation, we acknowledge that and try hard to get young people learning about computer science and digital making in all of these contexts. It’s the reason why many of our Raspberry Pi Certified Educators aren’t necessarily classroom teachers, but also educate in other environments.

Raspberry Pis are used as teaching aids in libraries, after-school clubs, and makerspaces across the globe

Even though inspiring and educating young people in and out of the classroom is a huge part of what we set out to do, our mission doesn’t limit us to only the young. Learning can happen at any age and, of course, we love to see kids and adults using Raspberry Pi computers and our learning resources. Although our priority is educating young people, we know that we have a strong community of adults who make, learn, and experiment with Raspberry Pi.

I consider myself among this community of lifelong learners. Ever since I first tried Raspberry Pi in 2012, I’ve learned so much with this affordable computer by making things with it. I may not have set out to learn more about programming and algorithms, but I learned them as a by-product of trying to create an interesting project that required them. This goes beyond computing, too. For instance, I needed to give myself a quick maths refresher when working on my Dynamic Bike Headlight project. I had to get the speed of my bike in miles per hour, knowing the radius of the wheel and the revolutions per minute from a sensor. I suspect that – like me – a lot of adults out there using Raspberry Pi for their home and work projects are learning a lot along the way.
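As a rough illustration of the maths involved (the numbers here are invented for the example, not taken from the actual project): with the wheel radius r in inches and the sensor reporting revolutions per minute, speed in miles per hour is rpm × 2πr × 60 ÷ 63,360, because each revolution covers 2πr inches and there are 63,360 inches in a mile. A hypothetical 13-inch wheel turning at 150 rpm would give 150 × 81.7 × 60 ÷ 63,360 ≈ 11.6 mph.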

Internet of Tutorials

Even if you’re only following a tutorial to build a retro arcade machine, set up a home server, or create a magic mirror, you’re learning. There are tons of great tutorials out there that don’t just tell you what to type in, but also explain what you’re doing and why you’re doing it at each step along the way. Hopefully, a good tutorial also leaves room for a maker to experiment and learn.

Many people also learn with Raspberry Pi when they use it as a platform for experimental computing. This experimentation can come from personal curiosity or from a professional need.

They may want to set up a sandbox to test out things such as networking, servers, cluster computing, or containers. Raspberry Pi makes a good platform for this because of its affordability and its ubiquity. In other words, Raspberry Pis have become so common in the world that there’s usually someone out there who has at least attempted to figure out how to do what you want with it.

MAAS Theremin Raspberry Pi

A Raspberry Pi is used in an interactive museum exhibit, and kept on display for visitors to better understand the inner workings of what they’re seeing.

To take it back to the young people, it’s critical to show them that we, as adults, aren’t always teachers. Sometimes we’re learning right beside them. Sometimes we’re even learning from them. Show them that learning doesn’t stop after they graduate. We must show young people that none of us stops learning.

The post Lifelong Learning appeared first on Raspberry Pi.

Bringing Digital Making to the Bett Show 2017

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/bett-2017/

The Cambridge office must have been very quiet last week, as staff from across the Raspberry Pi Foundation exhibited at the Bett Show 2017. Avid readers will note that at the UK’s largest educational technology event, held in London across four days, we tend to go all out. This year was no exception, as we had lots to share with you!

Hello World

It was hugely exciting to help launch Hello World, our latest joint publication with Computing At School (CAS), part of BCS, the Chartered Institute for IT, and sponsored by BT. I joined our CEO Philip Colligan, contributing editor Miles Berry, and Raspberry Pi Certified Educator Ian Simpson on stage in the Bett arena to share our thoughts on computing curriculums around the world, and the importance of sharing good teaching.

In our area of the STEAM village, where we had four pods and a workshop space, the team handed copies out in their thousands to eager educators interested in digital making, computing, and computer science. If you weren’t able to get your hands on a copy, don’t worry; you can download a free digital PDF and educators can subscribe to get this year’s three issues delivered, completely free of charge, to their door.

Sharing the Code Club love

Thanks to the support of some enthusiastic young people and our Code Club regional coordinators, we ran our first ever Code Club at Bett on Saturday.

codeclublondon on Twitter

Massive thanks to @TheChallenge_UK @CodeClub volunteers for helping @Raspberry_Pi out at #Bett2017 today 🙂

There was a great turnout of educators and their children, who all took part in a programming activity, learning just what makes Code Club so special. With activities like this, you can see why there are 5,000 clubs in the UK and 4,000 in the rest of the world!

Code Club South East on Twitter

Here’s @ben_nuttall enjoying our @CodeClub keepy uppy game… https://t.co/bmUAvyjndT

Free stuff

Let’s be honest: exhibitions and conferences are all about the free swag. (I walked away with a hoodie, polo shirt, and three highlighter pens.) We think we had the best offering: free magazines and classroom posters!

Code Club UK on Twitter

It’s the final day of #Bett2017! Pop over to STEAM village to see the Code Club team & get your hands on our coveted posters! #PiAtBett

We love interacting with people and we’re passionate about making things, so we helped attendees make their very own LED badge that they could keep. It was so popular that, after a few tweaks, we’ll make it available for you to download and use in class, after-school clubs, and Raspberry Jams!

 

The ‘All Seeing Pi‘ kept an eye on passing attendees we might otherwise have missed, using comedy moustaches to lure them in. We’ve enjoyed checking out its Twitter account to see the results.

Speaking from the heart

The STEAM village was crammed with people enjoying all our activities, but that’s not all; we even found time to support our educator community to give talks about their classroom practice on stage. One of the highlights was seeing three of our Certified Educators, along with their class robots, sharing their journey and experience on a panel chaired by Robot Wars judge and our good friend, Dr Lucy Rogers.

These ARE the droids you’re looking for! Bill Harvey, Neil Rickus, Nic Hughes, Dr Lucy Rogers, and their robots.

Once we started talking about our work, we found it difficult to stop. The team gave talks about Pioneers, our new programme for 12- to 15-year-olds, our digital making curriculum, and Astro Pi.

Bett on Twitter

Well done @Raspberry_Pi for such a good turn out yesterday! Keep up the good work at your stand in STEAM Village.

A royal visit

We were excited to be visited by a very special attendee, our patron the Duke of York, who spent time meeting the team, learning more about our programmes, and discussing teacher training with me.

Team Awesome

Thanks to everyone who visited, supported, and got involved with us. We ran 43 workshops and talks on our stand, handed out 2,000 free copies of Hello World and 400 Code Club posters, caught 100 comedy faces with the All-Seeing Pi, gave 5 presentations on Bett stages, took 5,000 pictures on our balloon cam, and ran 1 Code Club and 1 Raspberry Jam, across 4 days at the Bett show.

Bett lapse

Time Lapse from the Bett Show, London (2017)

 

The post Bringing Digital Making to the Bett Show 2017 appeared first on Raspberry Pi.

NEON PHASE

Post Syndicated from Eevee original https://eev.ee/blog/2017/01/21/neon-phase/

It all started after last year’s AGDQ, when I lamented having spent the entire week just watching speedruns instead of doing anything, and thus having lost my rhythm for days afterwards.

This year, several friends reminded me of this simultaneously, so I begrudgingly went looking for something to focus on during AGDQ. I’d already been working on Isaac’s Descent HD, so why not keep it up? Work on a video game while watching video games.

Working on a game for a week sounded an awful lot like a game jam, so I jokingly tweeted about a game jam whose express purpose was to not completely waste the week staring at a Twitch stream. Then someone suggested I make it an actual jam on itch.io. Then Mel asked to do a game with me.

And so, thanks to an almost comical sequence of events, we made NEON PHASE — a half-hour explorey platformer.

The game

The game is set in the Flora universe, as is everything Mel gets their hands on. (I say this with all the love in the world. ♥ Anyway, my games are also set in the Flora universe, so who am I to talk.)

I started out by literally copy-pasting the source code for Isaac’s Descent HD, the game I’ve been making with LÖVE as an extension of an earlier PICO-8 game I made. It’s not terribly far yet, but it’s almost to the point of replicating the original game, which meant I had a passable platformer engine that could load Tiled maps and had some notion of an “actor”. We both like platformers, anyway, so a platformer it would be.

We probably didn’t make the best use of the week. I think it took us a couple days to figure out how to collaborate, now that we didn’t have the PICO-8’s limitations and tools influencing our direction. Isaac is tile-based, so I’d taken for granted that this game would also be tile-based, whereas Mel, being an illustrator, prefers to draw… illustrations. I, an idiot, decided the best way to handle this would be to start cutting the illustrations into tiles and then piecing them back together. It took several days before I realized that oh, hey, Mel could just draw the entire map as a single image, and I could make the player run around on that.

So I did that. Previously, collision had been associated only with tiles, but it wasn’t too hard to just draw polygons right on the map and use those for collision. (Bless Tiled, by the way. It has some frustrating rough edges due to being a very general-purpose editor, but I can’t imagine how much time it would take me to write my own map editor that can do as much.)

And speaking of collision, while I did have to dig into a few thorny bugs, I’m thrilled with how well the physics came out! The collision detection I’d written for Isaac’s Descent HD was designed to support arbitrary polygons, even though so far I’ve only had square tiles. I knew the whole time I was making my life a lot harder, but I really didn’t want to restrict myself to rectangles right out of the gate. It paid off in NEON PHASE — the world is full of sloping, hilly terrain, and you can run across it fairly naturally!

I’d also thought at first that the game would be a kind of actiony platformer, which is why the very first thing you get greatly resembles a weapon, but you don’t end up actually fighting anything. It turns out enemy behavior takes a bit of careful design and effort, and I ended up busy enough just implementing Mel’s story. Also, dropping fighting meant I didn’t have to worry about death, which meant I didn’t have to worry about saving and loading map state, which was great news because I still haven’t done any of that yet.

It’s kind of interesting how time constraints can influence game design. The game has little buildings you can enter, but because I didn’t have saving/loading implemented, I didn’t want to actually switch maps. Instead, I made the insides of buildings a separate layer in Tiled. And since I had both layers on hand, I just drew the indoor layer right on top of the outdoor layer, which made kind of a cool effect.

A side effect of this approach was that you could see the inside of all buildings (well, within the viewport) while you were inside one, since they all exist in the same space. We ended up adding a puzzle and a couple minor flavor things that took advantage of this.

If I had had saving/loading of maps ready to go, I might have opted instead for a more traditional RPG-like approach, where the inside of each building is on its own map (or appears to be) and floats in a black void.

Another thing I really liked was the glitch effect, which I wrote on a whim early on because I’ve had shaders on the brain lately. We were both a little unsure about it, but in the end Mel wrote it into the plot and I used it more heavily throughout, including as a transition effect between indoors/outdoors.

Mel was responsible for art and music and story, so the plot unfortunately wasn’t finalized until the last day of the jam. It ended up being 30 pages of dialogue. Sprinkled throughout were special effects that sound like standard things you’d find in any RPG dialogue system — menus, branches, screen fades, and the like — but that I just hadn’t written yet.

The dialogue system was downright primitive when we started; I’d only written it as a brief proof of concept for Isaac, and it had only gotten as far as showing lines of wrapped text. It didn’t even know how to deal with text that was too long for the box. Hell, it didn’t even know how to exit the dialogue and return to the game.

So when I got the final script, I went into a sort of mad panic, doing my best to tack on features in ways I wouldn’t regret later and could maybe reuse. I got pretty far, but when it became clear that we couldn’t possibly have a finished product in time, I invoked my powers as jam coordinator and pushed the deadline back by 24 hours. 48 hours. 54⅓ hours. Oh, well.

The final product came out pretty well, modulo a couple release bugs, ahem. I’ve been really impressed with itch.io, too — it has a thousand twiddles, which makes me very happy, plus graphs of how many people have been playing our game and how they found it! Super cool.

Lessons learned

Ah, yes. Here’s that sweet postmortem content you computer people crave.

Don’t leave debug code in

There’s a fairly long optional quest in the game that takes a good few minutes to complete, even if you teleport everywhere instantly. (Ahem.) Finishing the quest kicks off a unique cutscene that involves a decent bit of crappy code I wrote at the last minute. I needed to test it a lot. So, naturally, I added a dummy rule to the beginning of the relevant NPC’s dialogue that just skips right to the end.

I forgot to delete that rule before we released.

Whoops!

The game even has a debug mode, so I could’ve easily made the rule only work then. I didn’t, and it possibly spoiled the whole sidequest for a couple dozen people. My bad.

Try your game at other framerates

The other game-breaking bug we had in the beginning was that some people couldn’t make jumps. For some, it was only when indoors; for others, it was all the time. The common thread was… low framerates.

Why does this matter? Well! When you jump, your upwards velocity is changed to a specific value, calculated to make your jump height slightly more than two tiles. The problem is, gravity is applied after you get jump velocity but before you actually move. It looks like this:

self.velocity = self.velocity + gravity * dt

Reasonable, right? Gravity is acceleration, so you multiply it by the amount of time that’s passed to get the change to velocity.

Ah… but if your framerate is low, then dt will be relatively large, and gravity will eat away a relatively large chunk of your upwards velocity. On the frame you jump, this effectively reduces your initial jump speed. If your framerate is low enough, you’ll never be able to jump as high as intended.

One obvious fix would be to rearrange the order things happen, so gravity doesn’t come between jumping and movement. I was wary of doing this as an emergency fix, though, because it would’ve taken a bit of rearchitecturing and I wasn’t sure about the side effects. So instead, I made a fix that’s worth having anyway: when a frame takes too long, I slice up dt and do multiple rounds of updating. Now even if the game draws slowly, it plays at the right speed.
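To make the idea concrete, here’s a minimal sketch of the dt-slicing approach, with invented names (MAX_STEP and update_world aren’t from the actual NEON PHASE code): each frame’s dt is chopped into sub-steps no longer than a chosen cap, so the physics never sees a huge dt even when drawing is slow.

-- Cap each physics step; run several smaller steps when a frame took too long.
local MAX_STEP = 1 / 60  -- assumed cap; tune to whatever the physics tolerates

function love.update(dt)
    while dt > 0 do
        local step = math.min(dt, MAX_STEP)
        update_world(step)  -- hypothetical stand-in for the usual per-frame update
        dt = dt - step
    end
end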

This was really easy to discover once I knew to look; all I had to do was add a sleep() in the update or draw loops to artificially lower the framerate. I even found a second bug, which was that you move slowly at low framerates — much like with jumping, your walk speed is capped at a maximum, then friction lowers it, then you actually move.
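In case it’s useful, the test itself can be as small as this (the 0.1 is an arbitrary delay I’m picking, and draw_world stands in for the real drawing code): sleeping in the draw loop drags the whole game down to roughly ten frames per second, which makes any dt-dependent bug show up immediately.

function love.draw()
    draw_world()           -- hypothetical stand-in for the normal drawing
    love.timer.sleep(0.1)  -- artificially limit the game to ~10 fps
end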

I also had problems with framerates that were too high, which took me completely by surprise. Your little companion flips out and jitters all over the place or even gets stuck, and jumping just plain doesn’t work most of the time. The problems here were much simpler. I was needlessly rounding Chip’s position to the nearest pixel, so if dt was very small, Chip would only try to move a fraction of a pixel per frame and never get anywhere; I fixed that by simply not rounding.

The issue with jumping needs a little backstory. One of the problems with sloped terrain is that when you walk up a slope and reach the top, your momentum is still carrying you along the path of the slope, i.e. upwards. I had a lot of problems with launching right off the top of even a fairly shallow hill; it looked goofy and amateurish. My terrible solution was: if you started out on the ground, then after moving, try to move a short distance straight down. If you can’t, because something (presumably the ground) is in the way, then you probably just went over a short bump; move as far as you can downwards so you stick to the ground. If you can move downwards, you just went over a ledge, so abort the movement and let gravity take its course next frame.

The problem was that I used a fixed (arbitrary) distance for this ground test. For very short dt, the distance you moved upwards when jumping was less than the distance I then tried dragging you back down to see if you should stay on the ground. The easy fix was to scale the test distance with dt.
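Here’s roughly what that looks like once the probe distance scales with dt — illustrative names only (GROUND_TEST_SPEED, was_on_ground, and slide are stand-ins, not the real engine API):

-- After the horizontal move, probe downwards by a distance proportional
-- to dt, so a tiny dt can no longer out-shrink the first frame of a jump.
local GROUND_TEST_SPEED = 256  -- assumed probe speed, in pixels per second

function stick_to_ground(actor, dt)
    if not actor.was_on_ground then return end
    local probe = GROUND_TEST_SPEED * dt   -- previously a fixed, arbitrary distance
    local moved = actor:slide(0, probe)    -- hypothetical: returns how far we actually moved
    if moved >= probe then
        -- Nothing underneath within the probe: a real ledge, so undo the
        -- probe and let gravity take over next frame.
        actor:slide(0, -moved)
    end
end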

Of course, if you’re jumping, obviously you don’t want to stay on the ground, so I shouldn’t do this test at all. But jumping is an active thing, and staying grounded is currently a passive thing (but shouldn’t be, since it emulates walking rather than sliding), and again I didn’t want to start messing with physics guts after release. I’ll be cleaning a few things up for the next game, I’m sure.

This also turned out to be easy to see once I knew to look — I just turned off vsync, and my framerate shot up to 200+.

Quadratic behavior is bad

The low framerate issue wouldn’t have been quite so bad, except for a teeny tiny problem with indoors. I’d accidentally left a loop in when refactoring, so instead of merely drawing every indoor actor each frame, I was drawing every indoor actor for every indoor actor each frame. I think that worked out to 7225 draws instead of 85. (I don’t skip drawing for offscreen actors yet.) Our computers are pretty beefy, so I never noticed. Our one playtester did comment at the eleventh hour that the framerate dipped very slightly while indoors, but I assumed this was just because indoors requires more drawing than outdoors (since it’s drawn right on top of outdoors) and didn’t investigate.
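Roughly what the leftover code amounted to (illustrative, not the verbatim refactoring mistake):

for _, actor in ipairs(indoor_actors) do      -- the outer loop I forgot to remove
    for _, other in ipairs(indoor_actors) do
        other:draw()                          -- 85 actors means 85 * 85 = 7225 draws
    end
end

-- versus the intended linear pass:
for _, actor in ipairs(indoor_actors) do
    actor:draw()                              -- 85 draws
end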

Of course, if you play on a less powerful machine, the difference will be rather more noticeable. Oops.

Just Do It

My collision detection relies on the separating axis theorem, which only works for convex polygons. (Convex polygons are ones that have no “dents” in them — you could wrap a rubber band around one and it would lie snug along each face.) The map Mel drew has rolling terrain and caverns with ceilings, which naturally lead to a lot of concave polygons. (Concave polygons are not convex. They have caves!)

I must’ve spent a good few hours drawing collision polygons on top of the map, manually eyeballing the terrain and cutting it up into only convex polygons.

Eventually I got so tired of this that I threw up my hands and added support for concave polygons.

It took me, like, two minutes. Not only does LÖVE have a built-in function for cutting a polygon into triangles (which are always convex), it also has a function for detecting whether a polygon is convex. I already had support for objects consisting of multiple shapes, so all I had to do was plug these things into each other.
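Something like this sketch is all it takes — make_convex_shapes is an illustrative name of mine, but love.math.isConvex and love.math.triangulate are real LÖVE functions:

function make_convex_shapes(vertices)
    if love.math.isConvex(vertices) then
        return { vertices }
    end
    -- Triangles are always convex; each one comes back as {x1, y1, x2, y2, x3, y3}.
    return love.math.triangulate(vertices)
end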

Collision probably would’ve taken much less time if I’d just done that in the first place.

Delete that old code, or maybe not

One of the very first players reported that they’d managed to crash the game right off the bat. It didn’t take long to realize it was because they’d pressed Q, which isn’t actually used in NEON PHASE. It is used in Isaac’s Descent HD, to scroll through the inventory… but NEON PHASE doesn’t use that inventory, and I’d left in the code for handling the keypress, so the game simply crashed.

(This is Lua, so when I say “crash”, I mean “showed a stack trace and refused to play any more”. Slightly better, but only so much.)

So, maybe delete that old code.

Or, wait, maybe don’t. When I removed the debugging sequence break just after release, I also deleted the code for the Q key… and, in a rush, also deleted the code for handling the E key, which is used in NEON PHASE. Rather heavily. Like, for everything. Dammit.

Maybe just play the game before issuing emergency releases? Nah.
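For what it’s worth, the guard is tiny. A sketch, with an assumed binding table and dispatcher (neither is the actual NEON PHASE code): only keys the current game binds get handled, and anything else — like that leftover Q — is silently ignored.

local BINDINGS = { e = "interact", escape = "quit" }  -- assumed bindings

function love.keypressed(key)
    local action = BINDINGS[key]
    if action then
        handle_action(action)  -- hypothetical dispatcher
    end
end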

Melding styles is easier than you’d think

When I look at the whole map overall, it’s hilarious to me how much the part I designed sticks out. It’s built out of tiles and consists of one large puzzle, whereas the rest of the game is as untiled as you can get and mostly revolves around talking to people.

And yet I don’t think anyone has noticed. It’s just one part of the game with a thing you do. The rest of the game may not have a bunch of wiring puzzles, but enough loose wires are lying around to make them seem fitting. The tiles Mel gave me are good and varied enough that they don’t look like tiles; they just look like they were deliberately made more square for aesthetic or story reasons.

I drew a few of the tiles and edited a few others. Most of the dialogue was written by Mel, but a couple lines that people really like were my own, completely impromptu, invention. No one seems to have noticed. It’s all one game. We didn’t sit down and have a meeting about the style or how to keep it cohesive; I just did stuff when I felt like it, and I naturally took inspiration from what was already there.

People will pay for things if you ask them to

itch.io does something really interesting.

Anything you download is presented as a purchase. You are absolutely welcome to sell things for free, but rather than being an instant download, itch.io treats this as a case of buying for zero dollars.

Why do that? Well, because you are always free to pay more for something you buy on itch, and the purchase dialog has handy buttons for adding a tip.

It turns out that, when presented with a dialog that offers a way to pay money for a free thing, an awful lot of people… paid money! Over a hundred people chipped in a few bucks for our free game, just because itch offered them a button to do so. The vast majority of them paid one of itch’s preset amounts. I’m totally blown away; I knew abstractly that this was possible, but I didn’t really expect it to happen. I’ve never actually sold anything before, either. This is amazing.

Now, granted, we do offer bonuses (concept art and the OST) if you pay $2 or more, at Mel’s request. But consider that I also put my two PICO-8 games on itch, and those have an interesting difference: they’re played in-browser and load automatically right in the page. Instead of a payment dialog, there’s a “support this game” button below the game. They’re older games that most of my audience has probably played already, but they still got a few hundred views between them. And the number of purchases?

Zero.

I’m not trying to criticize or guilt anyone here! I release stuff for free because I want it to be free. I’m just genuinely amazed by how effective itch’s download workflow seems to be. The buttons for chipping in are a natural part of the process of something you’re already doing, so “I might as well” kicks in. I’ve done this myself — I paid for the free m5x7 font I used in NEON PHASE. But something played in-browser is already there, and it takes a much stronger impulse to go out of your way to initiate the process of supporting the game.

Anyway, this is definitely encouraging me to make more things. I’ll probably put my book on itch when I finish it, too.

Also, my book

Speaking of!

If you remember, I’ve been writing a book about game development. Literally, a book about game development — the concept was that I build some games on various free platforms, then write about what I did and how I did it. Game development as a story, rather than a lecture.

I’ve hit a bit of a problem with it, and that problem is with my “real” games — i.e., the ones I didn’t make for the sake of the book. Writing about Isaac’s Descent requires first explaining how the engine came about, which requires reconstructing how I wrote Under Construction, and now we’re at two games’ worth of stuff even before you consider the whole writing a collision engine thing.

Isaac’s Descent HD is poised to have exactly the same problem: it takes a detour through the development of NEON PHASE, so I should talk about that too in some detail.

Both of these games are huge and complex tales already, far too long for a single “chapter”, and I’d already been worrying that the book would be too long.

So! I’m adjusting the idea slightly. Instead of writing about making a bunch of “artificial” games that I make solely for the sake of writing about the experience… I’m cutting it down to just Isaac’s Descent, HD, and the other games in their lineage. That’s already half a dozen games across two platforms, and I think they offer more than enough opportunity to say everything I want.

The overall idea of “talk about making something” is ultimately the same, but I like this refocusing a lot more. It feels a little more genuine, too.

Guess I’ve got a bit of editing to do!

And, finally

You should try out the other games people made for my jam! I can’t believe a Twitter joke somehow caused more than forty games to come into existence that otherwise would not have. I’ve been busy with NEON PHASE followup stuff (like writing this post) and have only barely scratched the surface so far, but I do intend to play every game that was submitted!


The Raspberry Pi Foundation’s Digital Making Curriculum

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/digital-making-curriculum/

At Raspberry Pi, we’re determined in our ambition to put the power of digital making into the hands of people all over the world: one way we pursue this is by developing high-quality learning resources to support a growing community of educators. We spend a lot of time thinking hard about what you can learn by tinkering and making with a Raspberry Pi, and other devices and platforms, in order to become skilled in computer programming, electronics, and physical computing.

Now, we’ve taken an exciting step in this journey by defining our own digital making curriculum that will help people everywhere learn new skills.

A PDF version of the curriculum is also available to download.

Who is it for?

We have a large and diverse community of people who are interested in digital making. Some might use the curriculum to help guide and inform their own learning, or perhaps their children’s learning. People who run digital making clubs at schools, community centres, and Raspberry Jams may draw on it for extra guidance on activities that will engage their learners. Some teachers may wish to use the curriculum as inspiration for what to teach their students.

Raspberry Pi produces an extensive and varied range of online learning resources and delivers a huge teacher training programme. In creating this curriculum, we have produced our own guide that we can use to help plan our resources and make sure we cover the broad spectrum of learners’ needs.

Progression

Learning anything involves progression. You start with certain skills and knowledge and then, with guidance, practice, and understanding, you gradually progress towards broader and deeper knowledge and competence. Our digital making curriculum is structured around this progression, and in representing it, we wanted to avoid the age-related and stage-related labels that are often associated with a learner’s progress and the preconceptions these labels bring. We came up with our own, using characters to represent different levels of competence, starting with Creator and moving on to Builder and Developer before becoming a Maker.

Progress through our curriculum and become a digital maker

Strands

We want to help people to make things so that they can become the inventors, creators, and makers of tomorrow. Digital making, STEAM, project-based learning, and tinkering are at the core of our teaching philosophy, which can be summed up simply as ‘we learn best by doing’.

We’ve created five strands which we think encapsulate key concepts and skills in digital making: Design, Programming, Physical Computing, Manufacture, and Community and Sharing.

Computational thinking

One of the Raspberry Pi Foundation’s aims is to help people to learn about computer science and how to make things with computers. We believe that learning how to create with digital technology will help people shape an increasingly digital world, and prepare them for the work of the future.

Computational thinking is at the heart of the learning that we advocate. It’s the thought process that underpins computing and digital making: formulating a problem and expressing its solution in such a way that a computer can effectively carry it out. Computational thinking covers a broad range of knowledge and skills including, but not limited to:

  • Logical reasoning
  • Algorithmic thinking
  • Pattern recognition
  • Abstraction
  • Decomposition
  • Debugging
  • Problem solving

By progressing through our curriculum, learners will develop computational thinking skills and put them into practice.

What’s not on our curriculum?

If there’s one thing we learned from our extensive work in formulating this curriculum, it’s that no two educators or experts can agree on the best approach to progression and learning in the field of digital making. Our curriculum is intended to represent the skills and thought processes essential to making things with technology. We’ve tried to keep the headline outcomes as broad as possible, and then provide further examples as a guide to what could be included.

Our digital making curriculum is not intended to be a replacement for computer science-related curricula around the world, such as the ‘Computing Programme of Study’ in England or the ‘Digital Technologies’ curriculum in Australia. We hope that following our learning pathways will support the study of formal curricular and exam specifications in a fun and tangible way. As we continue to expand our catalogue of free learning resources, we expect our curriculum will grow and improve, and your input into that process will be vital.

Get involved

We’re proud to be part of a movement that aims to empower people to shape their world through digital technologies. We value the support of our community of makers, educators, volunteers, and enthusiasts. With this in mind, we’re interested to hear your thoughts on our digital making curriculum. Add your feedback to this form, or talk to us at one of the events that Raspberry Pi will attend in 2017.

The post The Raspberry Pi Foundation’s Digital Making Curriculum appeared first on Raspberry Pi.

Peeqo – The GIF Bot

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/peeqo-the-gif-bot/

Peeqo is a conversational UI that answers only in GIFs. For those who know me, it’s essentially the physical version of 90% of my text messages with friends and colleagues.

I’m sure that future historians will look back on 2016 (if they dare acknowledge its presence) as the year we returned to imagery as our main form of social interaction. Once upon a time, we communicated stories and emotions via drawings on cave walls and hieroglyphs etched into stone. Throw in a few thousand years of language evolution, and we’re right back to where we started, albeit with a few added frames of movement.

So whether you pronounce it GIF with a ‘Guh’ or GIF with a ‘Juh’, you’re sure to have come across one in your everyday life. After all, they make for a much better visual response than the boring old word format we’ve grown accustomed to.

So it’s no surprise that when programmer and developer Abhishek Singh introduced Reddit to Peeqo, he managed to peak-o* our GIF interest right away.

Peeqo was Singh’s thesis project at the New York University Tisch School of the Arts. It was his attempt to merge the three things he loves: making things by hand, animated movies, and the GIF.

Some of you may be aware of Slack, a team messaging system used by businesses, groups, and charities (*ahem*) across the globe. One of Slack’s many features is the ability to pull GIFs from the popular GIF database GIPHY and display them in response to text conversation. Peeqo uses this same premise, searching keywords on the site to pull the correct response to your verbal communication with the bot.

(It’s a great lesson in making sure you use correct keywords when saving images to the web for public use, as some of the responses don’t always fit the mood. An example, which I will leave you to find, would be a specific Team America GIF that Liz has banned me from using in the Comms Team channel.)

Peeqo sits on your desk and uses the Google Speech API to detect the use of the wake word ‘Peeqo’ via one of four microphones; it then uses api.ai to search GIPHY for the correct response to your query. All of this runs with a Raspberry Pi at its heart, while two Arduinos work to control the LED notification ring atop its head and the servo motor that dictates the body’s movement. Peeqo also acts as a great bridge into home automation, controlling lights and other smart devices in your home or office, along with acting as a media player and your new best work-based friend and assistant.

I won’t go into the technical details of the build, but if you’re interested, an almost fully GIF-powered walkthrough of Peeqo is available here.

As is the case with so many of you lovely makers out there, Singh aims to make the entire project open-source; you can sign up for a notification as to when this will happen here.

Until then, here’s Abhishek explaining his project in more detail.

Abhishek Singh – PEEQO – YOUR DELIGHTFUL ROBOT ASSISTANT

Peeqo is a personal desktop robotic assistant who expresses himself through GIFs. Designed for people who spend long hours at their desks, this pint sized robot helps with essential work tasks and provides little moments of delight and entertainment often needed to get through the day. https://itp.nyu.edu/thesis2016/project/abhishek-singh http://peeqo.com

 

*Peak-o? Oh wow. Wow. I’m sorry. I’m so sorry. I’ll get my coat.

 

The post Peeqo – The GIF Bot appeared first on Raspberry Pi.

Making life changes

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/making-life-changes/

This column is from The MagPi issue 51. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

Making things can change your life. It did for me, and I hear the same from others all the time.

After I graduated from university in 2003, I jumped immediately into the workforce. I landed in New York City’s entertainment industry, which is where I’d dreamed of working since I was young. I was excited to be a staffer on a major television show, where I learned what it takes to produce a weekly drama series. I slowly worked my way up the ladder in the industry over a few years.

There’s a lot to admire about how film and television content is produced. A crew of over one hundred people with creative and technical talents come together to create a piece of entertainment, under the watchful eye of the director. It’s an enormous piece of creative collaboration, but it’s also a business. Everyone does their part to make it happen. It’s incredible to see a show get made.

I had found a niche in the television industry that I did well in, but eventually I hit a rut. I had a small role in a big piece of work. I wanted to be more creative, and to have more autonomy and influence over what I was helping to create. It was at that time that I started closely following what makers were doing.

Feeling inspired by the work of others, I started to make things with microcontrollers and electronics. I’d then share information on how to recreate these projects online. Eventually, I was contributing projects to Make: magazine and I was soon able to make money from making things for companies, writing about how to make, and writing about what others were making. Soon enough, I was in a position to leave the television industry and work as a maker full-time.

That eventually led to my current job, doing outreach for Raspberry Pi in the United States. It’s incredibly gratifying work and despite the long road to get here, I couldn’t be happier with what I’m doing. The spare time I invested in making things as a hobby has paid off greatly in a new career that gives me creative freedom and a much more interesting work day.

Matt meets maker Gerald Burkett at World Maker Faire New York 2016.


Make it happen

I meet people all the time who have stories about how making has had an impact on their lives. At World Maker Faire New York recently, I met student Gerald Burkett, who told me his story of becoming a maker. He said, “I’m doing things I wouldn’t have ever dreamed of just four years ago, and it’s changed my life for the better.” And Gerald is having an impact on others as well. Even though he will be graduating soon, he’s encouraging the school’s administration to foster makers in the student body. He says that they “deserve an inviting environment where creativity is encouraged, and access to tools and supplies they couldn’t otherwise obtain in order to prototype and invent.”

Because of more accessible technology like the Raspberry Pi and freely available online resources, it’s easier than ever to make the things that you want to see in the world. Whether you are a student or far along a particular career path, there has never been a better time to explore making as a passion and, potentially, a livelihood.

If you’re reading this and you feel like you’re stuck in a rut with your job, I understand that feeling and encourage you to pursue making with vigour. There’s a good chance that what you make can change your life. It worked for me.

The post Making life changes appeared first on Raspberry Pi.

Accessible games

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/29/accessible-games/

I’ve now made a few small games. One of the trickiest and most interesting parts of designing them has been making them accessible.

I mean that in a very general and literal sense. I want as many people as possible to experience as much of my games as possible. Finding and clearing out unnecessary hurdles can be hard, but every one I leave risks losing a bunch of players who can’t or won’t clear it.

I’ve noticed three major categories of hurdle, all of them full of tradeoffs. Difficulty is what makes a game challenging, but if a player can’t get past a certain point, they can never see the rest of the game. Depth is great, but not everyone has 80 hours to pour into a game, and it’s tough to spend weeks of dev time on stuff most people won’t see. Distribution is a question of who can even get your game in the first place.

Here are some thoughts.

Mario Maker

Mario Maker is most notable for how accessible it is to budding game designers, which is important but also a completely different sense of accessibility.

The really nice thing about Mario Maker is that its levels are also accessible to players. Virtually everyone who’s heard of video games has heard of Mario. You don’t need to know many rules to be able to play. Move to the right, jump over/on things, and get to the flag.

(The “distribution” model is a bit of a shame, though — you need to own a particular console and a $60 game. If I want people to play a single individual level I made, that’s a lot of upfront investment to ask for. Ultimately Nintendo is in this to sell their own game more than to help people show off their own.)

But the emergent depth of Mario Maker’s myriad objects — the very property that makes the platform more than a toy — also makes it less accessible. Everyone knows you move around and jump, but not everyone knows you can pick up an item with B, or that you can put on a hat you’re carrying by pressing up, or that you can spinjump on certain hazards. And these are fairly basic controls — Mario Maker contains plenty of special interactions between more obscure objects, and no manual explaining them all.

I thought it was especially interesting that Nintendo’s own comic series on building Mario Maker levels specifically points out that running jumps don’t come naturally to everyone. It’s hard to imagine too many people playing Mario Maker and not knowing how to jump while running.

And yet.

And yet, imagine being one such person, and encountering a level that requires a running jump early on. You can’t get past it. You might not even understand how to get past it; perhaps you don’t even know Mario can run. Now what? That’s it, you’re stuck. You’ll never see the rest of that level. It’s a hurdle, in a somewhat more literal sense.

Why make the level that way in the first place, then? Does any seasoned Mario player jump over a moderate-width gap and come away feeling proud for having conquered it? Seems unlikely.

I’ve tried playing through 100 Mario Challenge on Expert a number of times (without once managing to complete it), and I’ve noticed three fuzzy categories. Some levels are an arbitrary mess of hazards right from the start, so I don’t expect them to get any easier. Some levels are clearly designed as difficult obstacle courses, so again, I assume they’ll be just as hard all the way through. In both cases, if I give up and skip to the next level, I don’t feel like I’m missing out on anything — I’m not the intended audience.

But there are some Expert-ranked levels that seem pretty reasonable… until this one point where all hell breaks loose. I always wonder how deliberate those parts are, and I vaguely regret skipping them — would the rest of the level have calmed back down and been enjoyable?

That’s the kind of hurdle I think about when I see conspicuous clusters of death markers in my own levels. How many people died there and gave up? I make levels intending for people to play them, to see them through, but how many players have I turned off with some needlessly tricky part?

One of my levels is a Boo house with a few cute tricks in it. Unfortunately, I also put a ring of Boos right at the beginning that’s tricky to jump through, so it’s very easy for a player to die several times right there and never see anything else.

I wanted my Boo house to be interesting rather than difficult, but I let difficulty creep in accidentally, and so I’ve reduced the number of people who can appreciate the interestingness. Every level I’ve made since then, I’ve struggled to keep the difficulty down, and still sometimes failed. It’s easy to make a level that’s very hard; it’s surprisingly hard to make a level that’s fairly easy. All it takes is a single unintended hurdle — a tricky jump, an awkwardly-placed enemy — to start losing players.

This isn’t to say that games should never be difficult, but difficulty needs to be deliberately calibrated, and that’s a hard thing to do. It’s very easy to think only in terms of “can I beat this”, and even that’s not accurate, since you know every nook and cranny of your own level. Can you beat it blind, on the first few tries? Could someone else?

Those questions are especially important in Mario Maker, where the easiest way to encounter an assortment of levels is to play 100 Mario Challenge. You have 100 lives and need to beat 16 randomly-chosen levels. If you run out of lives, you’re done, and you have to start over. If I encounter your level here, I can’t afford to burn more than six or seven lives on it, or I’ll game over and have wasted my time. So if your level looks ridiculously hard (and not even in a fun way), I’ll just skip it and hope I get a better level next time.

I wonder if designers forget to calibrate for this. When you spend a lot of time working on something, it’s easy to imagine it exists in a vacuum, to assume that other people will be as devoted to playing it as you were to making it.

Mario Maker is an extreme case: millions of levels are available, and any player can skip to another one with the push of a button. That might be why I feel like I’ve seen a huge schism in level difficulty: most Expert levels are impossible for me, whereas most Normal levels are fairly doable with one or two rough patches. I haven’t seen much that’s in the middle, that feels like a solid challenge. I suspect that people who are very good at Mario are looking for an extreme challenge, and everyone else just wants to play some Mario, so moderate-difficulty levels just aren’t as common. The former group will be bored by them, and the latter group will skip them.

Or maybe that’s a stretch. It’s hard to generalize about the game’s pool of levels when they number in the millions, and I can’t have played more than a few hundred.

What Mario Maker has really taught me is what a hurdle looks like. The game keeps track of everywhere a player has ever died. I may not be able to watch people play my levels, but looking back at them later and seeing clumps of death markers is very powerful. Those are the places people failed. Did they stop playing after that? Did I intend for those places to be so difficult?

Doom

Doom is an interesting contrast to Mario Maker. A great many Doom maps have been produced over the past two decades, but nowhere near as many levels as Mario Maker has produced in a couple years. On the other hand, many people who still play Doom have been playing Doom this entire time, so a greater chunk of the community is really good at the game and enjoys a serious challenge.

I’ve only released a couple Doom maps of my own: Throughfare (the one I contributed to DUMP 2 earlier this year) and a few one-hour speedmaps I made earlier this week. I like building in Doom, with its interesting balance of restrictions — it’s a fairly accessible way to build an interesting 3D world, and nothing else is quite like it.

I’ve had the privilege of watching a few people play through my maps live, and I have learned some things.

The first is that the community’s love of difficulty is comically misleading. It’s not wrong, but, well, that community isn’t actually my target audience. So far I’ve “published” maps on this blog and Twitter, where my audience hasn’t necessarily even played Doom in twenty years. If at all! Some of my followers are younger than Doom.

Most notably, this creates something of a distribution problem: to play my maps, you need to install a thing (ZDoom) and kinda figure out how to use it and also get a copy of Doom 2 which probably involves spending five bucks. Less of a hurdle than getting Mario Maker, yes, but still some upfront effort.

Also, ZDoom’s default settings are… not optimal. Out of the box, it’s similar to classic Doom: no WASD, no mouselook. I don’t know who this is meant to appeal to. If you’ve never played Doom, the controls are goofy. If you’ve played other shooters, the controls are goofy. If you played Doom when it came out but not since, you probably don’t remember the controls, so they’re still goofy. Oof.

Not having mouselook is more of a problem than you’d think. If you as the designer play with mouselook, it’s really easy to put important things off the top or bottom of the screen and never realize it’ll be a problem. I watched someone play through Throughfare a few days ago and get completely stuck at what seemed to be a dead end — because he needed to drop down a hole in a small platform, and the hole was completely hidden by the status bar.

That’s actually an interesting example for another reason. Here’s the room where he got stuck.

A small room with a raised platform at the end, a metal section in the floor, and a switch on the side wall

When you press the switch, the metal plates on the ground rise up and become stairs, so you can get onto the platform. He did that, saw nowhere obvious to go, and immediately turned around and backtracked quite a ways looking for some other route.

This surprised me! The room makes no sense as a dead end. It’s not an easter egg or interesting feature; it has no obvious reward; it has a button that appears to help you progress. If I were stuck here, I’d investigate the hell out of this room — yet this player gave up almost immediately.

Not to say that the player is wrong and the level is right. This room was supposed to be trivially simple, and I regret that it became a hurdle for someone. It’s just a difference in playstyle I didn’t account for. Besides the mouselook problem, this player tended to move very quickly in general, charging straight ahead in new areas without so much as looking around; I play more slowly, looking around for nooks and crannies. He ended up missing the plasma gun for much the same reason — it was on a ledge slightly below the default view angle, making it hard to see without mouselook.

Speaking of nooks and crannies: watching someone find or miss secrets in a world I built is utterly fascinating. I’ve watched several people play Throughfare now, and the secrets are the part I love watching the most. I’ve seen people charge directly into secrets on accident; I’ve seen people run straight to a very clever secret just because they had the same idea I did; I’ve seen people find a secret switch and then not press it. It’s amazing how different just a handful of players have been.

I think the spread of secrets in Throughfare is pretty good, though I slightly regret using the same trick three times; either you get it right away and try it everywhere, or you don’t get it at all and miss out on a lot of goodies. Of course, the whole point of secrets is that not everyone will find them on the first try (or at all), so it’s probably okay to err on the trickier side.


As for the speedmaps, I’ve only watched one person play them live. The biggest hurdle was a room I made that required jumping.

Jumping wasn’t in the original Doom games. People thus don’t really expect to need to jump in Doom maps. Worse, ZDoom doesn’t even have a key bound to jump out of the box, which I only discovered later.

See, when I made the room (very quickly), I was imagining a ZDoom veteran seeing it and immediately thinking, “oh, this is one of those maps where I need to jump”. I’ve heard people say that about other maps before, so it felt like common knowledge. But it’s only common knowledge if you’re part of the community and have run into a few maps that require jumping.

The situation is made all the more complicated by the way ZDoom handles it. Maps can use a ZDoom-specific settings file to explicitly allow or forbid jumping, but the default is to allow it. The stock maps and most third-party vanilla maps won’t have this ZDoom-specific file, so jumping will be allowed, even though they’re not designed for it. Most mappers only use this file at all if they’re making something specifically for ZDoom, in which case they might as well allow jumping anyway. It’s opt-out, but the maps that don’t want it are the ones least likely to use the opt-out, so in practice everyone has to assume jumping isn’t allowed until they see some strong indication otherwise. It’s a mess. Oh, and ZDoom also supports crouching, which is even more obscure.
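
For what it's worth, the opt-out itself is tiny. In ZDoom's MAPINFO lump a mapper can flag an individual map so the engine forbids jumping and crouching outright. The snippet below is written from memory purely as an illustration, with a made-up map name; check the ZDoom wiki for the exact property names before relying on it.

    map MAP01 "Some Speedmap"
    {
        nojump     // forbid jumping on this map
        nocrouch   // forbid crouching too
    }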

I probably should’ve thought of all that at the time. In my defense, you know, speedmap.

One other minor thing was that, of course, ZDoom uses the traditional Doom HUD out of the box, and plenty of people play that way on purpose. I’m used to ZDoom’s “alternative” HUD, which not only expands your field of view slightly, but also shows a permanent count of how many secrets are in the level and how many you’ve found. I love that, because it tells me how much secret-hunting I’ll need to do from the beginning… but if you don’t use that HUD (and don’t look at the count on the automap), you won’t even know whether there are secrets or not.


For a third-party example: a recent (well, late 2014) cool release was Going Down, a set of small and devilish maps presented as the floors of a building you’re traversing from the roof downwards. I don’t actually play a lot of Doom, but I liked this concept enough to actually play it, and I enjoyed the clever traps and interwoven architecture.

Then I reached MAP12, Dead End. An appropriate name, because I got stuck here. Permanently stuck. The climax of the map is too many monsters in not enough space, and it’s cleverly rigged to remove the only remaining cover right when you need it. I couldn’t beat it.

That was a year ago. I haven’t seen any of the other 20 maps beyond this point. I’m sure they’re very cool, but I can’t get to them. This one is too high a hurdle.

Granted, hopping around levels is trivially easy in Doom games, but I don’t want to cheat my way through — and anyway, if I can’t beat MAP12, what hope do I have of beating MAP27?

I feel ambivalent about this. The author describes the gameplay as “chaotic evil”, so it is meant to be very hard, and I appreciate the design of the traps… but I’m unable to appreciate any more of them.

This isn’t the author’s fault, anyway; it’s baked into the design of Doom. If you can’t beat one level, you don’t get to see any future levels. In vanilla Doom it was particularly bad: if you die, you restart the level with no weapons or armor, probably making it even harder than it was before. You can save any time, and some modern source ports like ZDoom will autosave when you start a level, but the original game never saved automatically.

Isaac’s Descent

Isaac’s Descent is the little PICO-8 puzzle platformer I made for Ludum Dare 36 a couple months ago. It worked out surprisingly well; pretty much everyone who played it (and commented on it to me) got it, finished it, and enjoyed it. The PICO-8 exports to an HTML player, too, so anyone with a keyboard can play it with no further effort required.

I was really happy with the puzzle design, especially considering I hadn’t really made a puzzle game before and was rushing to make some rooms in a very short span of time. Only two were perhaps unfair. One was the penultimate room, which involved a tricky timing puzzle, so I’m not too bothered about that. The other was this room:

A cavern with two stone slab doors, one much taller than the other, and a wooden wheel on the wall

Using the wheel raises all stone doors in the room. Stone doors open at a constant rate, wait for a fixed time, and then close again. The tricky part with this puzzle is that by the time the very tall door has opened, the short door has already closed again. The solution is simply to use the wheel again right after the short door has closed, while the tall door is still opening. The short door will reopen, while the tall door won’t be affected since it’s already busy.
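
The door behaviour is easiest to see as a tiny state machine. Here is a toy model of it in Python (the real game is a PICO-8 cartridge written in Lua, and the timings below are made-up numbers chosen only to illustrate the mechanic described above):

    class StoneDoor:
        def __init__(self, height):
            self.height = height    # ticks needed to open fully
            self.progress = 0       # 0 = fully closed, height = fully open
            self.state = "closed"   # closed / opening / open / closing
            self.hold = 0

        def pull_wheel(self):
            # The wheel only starts a door that is sitting closed; a door that
            # is already opening, held open, or closing ignores the signal.
            if self.state == "closed":
                self.state = "opening"

        def tick(self):
            if self.state == "opening":
                self.progress += 1
                if self.progress == self.height:
                    self.state, self.hold = "open", 30   # stay open for 30 ticks
            elif self.state == "open":
                self.hold -= 1
                if self.hold == 0:
                    self.state = "closing"
            elif self.state == "closing":
                self.progress -= 1
                if self.progress == 0:
                    self.state = "closed"

    short, tall = StoneDoor(height=10), StoneDoor(height=60)
    # After one pull of the wheel, the short door opens and closes again long
    # before the tall one finishes opening; a second pull at that point reopens
    # only the short door, because the tall door is still busy and ignores it.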

This isn’t particularly difficult to figure out, but it did catch a few people, and overall it doesn’t sit particularly well with me. Using the wheel while a door is opening feels like a weird edge case, not something that a game would usually rely on, yet I based an entire puzzle around it. I don’t know. I might be overthinking this. The problem might be that “ignore the message” is a very computery thing to do and doesn’t match with how such a wheel would work in practice; perhaps I’d like the puzzle more if the wheel always interrupted whatever a door was doing and forced it to raise again.

Overall, though, the puzzles worked well.

The biggest snags I saw were control issues with the PICO-8 itself. The PICO-8 is a “fantasy console” — effectively an emulator for a console that never existed. One of the consequences of this is that the controls aren’t defined in terms of keyboard keys, but in terms of the PICO-8’s own “controller”. Unfortunately, that controller is only defined indirectly, and the web player doesn’t indicate in any way how it works.

The controller’s main inputs — the only ones a game can actually read — are a directional pad and two buttons, O and X, which map to z and x on a keyboard. The PICO-8 font has glyphs for the O and X buttons, so I used those to indicate which button does what. Unfortunately, if you aren’t familiar with the PICO-8, those won’t make a lot of sense to you. It’s nice that the X glyph looks like the keyboard key it’s bound to, but the O glyph looks like the wrong keyboard key. This caused a little confusion.

“Well,” I hear you say, “why not just refer to the keys directly?” Ah, but there’s a very good reason the PICO-8 is defined in terms of buttons: those aren’t the only keys you can use! n and m also work, as do c and v. The PocketCHIP also allows… 0 and =, I think, which is good because z and x are directly under the arrow keys on the PocketCHIP keyboard. And of course you can play on a USB controller, or rebind the keys.

I could’ve mentioned that z and x are the defaults, but that’s wrong for the PocketCHIP, and now I’m looking at a screenful of text explaining buttons that most people won’t read anyway.

A similar problem is the pause menu, accessible with p or enter. I’d put an option on the pause menu for resetting the room you’re in, just in case, but didn’t bother to explain how to get to the pause menu. Or that a pause menu exists. Also, the ability to put custom things on the pause menu is new, so a lot of people might not even know about it. I’m sure you can see this coming: a few rooms (including the two-door one) had places you could get stuck, and without any obvious way to restart the room, a few people thought they had to start the whole game over. Whoops.

In my defense, the web player is actively working against me here: it has a “pause” link below the console, but all the link does is freeze the player, not bring up the pause menu.

This is a recurring problem, and perhaps a fundamental question of making games accessible: how much do you need to explain to people who aren’t familiar with the platform or paradigm? Should every single game explain itself? Players who don’t need the explanation can easily get irritated by it, and that’s a bad way to start a game. The PICO-8 in particular has the extra wrinkle that its cartridge space is very limited, and any kind of explanation/tutorial costs space you could be using for gameplay. On the other hand, I’ve played more than one popular PICO-8 game that was completely opaque to me because it didn’t explain its controls at all.

I’m reminded of Counterfeit Monkey, a very good interactive fiction game that goes out of its way to implement a hint system and a gentle tutorial. The tutorial knits perfectly with the story, and the hints are trivially turned off, so neither is a bother. The game also has a hard mode, which eliminates some of the more obvious solutions and gives a nod to seasoned IF players as well. The author is very interested in making interactive fiction more accessible in general, and it definitely shows. I think this game alone convinced me it’s worth the effort — I’m putting many of the same touches in my own IF foray.

Under Construction

Under Construction is the PICO-8 game that Mel and I made early this year. It’s a simple, slightly surreal, slightly obtuse platformer.

Traditional wisdom has it that you don’t want games to be obtuse. That acts as a hurdle, and loses you players. Here, though, it’s part of the experience, so the question becomes how to strike a good balance without losing the impact.

A valid complaint we heard was that the use of color is slightly inconsistent in places. For the most part, foreground objects (those you can stand on) are light and background decorations are gray, but a couple tiles break that pattern. A related problem that came up almost immediately in beta testing was that spikes were difficult to pick out. I addressed that — fairly effectively, I think — by adding a single dark red pixel to the tip of the spikes.

But the most common hurdle by far was act 3, which caught us completely by surprise. Spoilers!

From the very beginning, the world contains a lot of pillars containing eyeballs that look at you. They don’t otherwise do anything, beyond acting as platforms you can stand on.

In act 2, a number of little radios appear throughout the world. Mr. 5 complains that it’s very noisy, so you need to break all the radios by jumping on them.

In act 3, the world seems largely the same… but the eyes in the pillars now turn to ❌’s when you touch them. If this happens before you make it to the end, Mr. 5 complains that he’s in pain, and the act restarts.

The correct solution is to avoid touching any of the eye pillars. But because this comes immediately after act 2, where we taught the player to jump on things to defeat them — reinforcing a very common platforming mechanic — some players thought you were supposed to jump on all of them.

I don’t know how we could’ve seen that coming. The acts were implemented one at a time and not in the order they appear in the game, so we were both pretty used to every individual mechanic before we started playing through the entire game at once. I suppose when a game is developed and tested in pieces (as most games are), the order and connection between those pieces is a weak point and needs some extra consideration.

We didn’t change the game to address this, but the manual contains a strong hint.

Under Construction also contains a couple of easter eggs and different endings. All are fairly minor changes, but they added a lot of character to the game and gave its fans something else to delve into once they’d beaten it.

Crucially, these things worked as well as they did because they weren’t accessible. Easily-accessed easter eggs aren’t really easter eggs any more, after all. I don’t think the game has any explicit indication that the ending can vary, which meant that players would only find out about it from us or other fans.

I don’t yet know the right answer for balancing these kinds of extras, and perhaps there isn’t one. If you spend a lot of time on easter eggs, multiple endings, or even just multiple paths through the game, you’re putting a lot of effort into stuff that many players will never see. On the other hand, they add an incredible amount of depth and charm to a game and reward those players who do stick around to explore.

This is a lot like the balancing act with software interfaces. You want your thing to be accessible in the sense that a newcomer can sit down and get useful work done, but you also want to reward long-time users with shortcuts and more advanced features. You don’t want to hide advanced features too much, but you also don’t want to have an interface with a thousand buttons.

How larger and better-known games deal with this

I don’t have the patience for Zelda I. I never even tried it until I got it for free on my 3DS, as part of a pack of Virtual Console games given to everyone who bought a 3DS early. I gave it a shot, but I got bored really quickly. The overworld was probably the most frustrating part: the connections between places are weird, everything looks pretty much the same, the map is not very helpful, and very little acts as a landmark. I could’ve drawn my own map, but, well, I usually can’t be bothered to do that for games.

I contrast this with Skyward Sword, which I mostly enjoyed. Ironically, one of my complaints is that it doesn’t quite have an overworld. It almost does, but they stopped most of the way, leaving us with three large chunks of world and a completely-open sky area reminiscent of Wind Waker’s ocean.

Clearly, something about huge open spaces with no barriers whatsoever appeals to the Zelda team. I have to wonder if they’re trying to avoid situations like my experience with Zelda I. If a player gets lost in an expansive overworld, either they’ll figure out where to go eventually, or they’ll give up and never see the rest of the game. Losing players that way, especially in a story-driven game, is a huge shame.

And this is kind of a problem with the medium in general. For all the lip service paid to nonlinearity and sandboxes, the vast majority of games require some core progression that’s purely linear. You may be able to wander around a huge overworld, but you still must complete these dungeons and quests in this specific order. If something prevents you from doing one of them, you won’t be able to experience the others. You have to do all of the first x parts of the game before you can see part x + 1.

This is really weird! No other media is like this. If you watch a movie or read a book or listen to a song and some part of it is inaccessible for whatever reason — the plot is poorly explained, a joke goes over your head, the lyrics are mumbled — you can still keep going and experience the rest. The stuff that comes later might even help you make sense of the part you didn’t get.

In games, these little bumps in the road can become walls.

It’s not even necessarily difficulty, or getting lost, or whatever. A lot of mobile puzzle games use the same kind of artificial progression where you can only do puzzles in sequential batches; solving enough of the available puzzles will unlock the next batch. But in the interest of padding out the length, many of these games will have dozens of trivially easy and nearly identical puzzles in the beginning, which you have to solve to get to the later interesting ones. Sometimes I’ve gotten so bored by this that I’ve given up on a game before reaching the interesting puzzles.

In a way, that’s the same problem as getting lost in an overworld. Getting lost isn’t a hard wall, after all — you can always do an exhaustive search and talk to every NPC twice. But that takes time, and it’s not fun, much like the batches of required baby puzzles. People generally don’t like playing games that waste their time.

I love the Picross “e” series on the 3DS, because over time they’ve largely figured out that this is pointless: in the latest game in the series, everything is available from the beginning. Want to do easy puzzles? Do easy puzzles. Want to skip right to the hard stuff? Sure, do that. Don’t like being told when you made a wrong move? Turn it off.

(It’s kinda funny that the same people then made Pokémon Picross, which has some of the most absurd progression I’ve ever seen. Progressing beyond the first half-dozen puzzles requires spending weeks doing a boring minigame every day to grind enough pseudocurrency to unlock more puzzles. Or you can just pay for pseudocurrency, and you’ll have unlocked pretty much the whole game instantly. It might as well just be a demo; the non-paid progression is useless.)

Chip’s Challenge also handled this pretty well. You couldn’t skip around between levels arbitrarily, which was somewhat justified by the (very light) plot. Instead, if you died or restarted enough times, the game would offer to skip you to the next level, and that would be that. You weren’t denied the rest of the game just because you couldn’t figure out an ice maze or complete some horrible nightmare like Blobnet.
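
As a rough illustration of how little machinery that mechanic needs, here is a hypothetical Python sketch; play_level and offer_skip are made-up callbacks, and the threshold of ten failures is arbitrary.

    FAILURES_BEFORE_OFFER = 10

    def run_level(level, play_level, offer_skip):
        # Replay one level until it's beaten, offering a skip after repeated failures.
        failures = 0
        while True:
            if play_level(level):
                return "beaten"
            failures += 1
            if failures >= FAILURES_BEFORE_OFFER and offer_skip(level):
                return "skipped"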

I wish this sort of mechanic were more common. Not so games could be more difficult, but so games wouldn’t have to worry as much about erring on the side of ease. I don’t know how it could work for a story-driven game where much of the story is told via experiencing the game itself, though — skipping parts of Portal would work poorly. On the other hand, Portal took the very clever step of offering “advanced” versions of several levels, which were altered very slightly to break all the obvious easy solutions.

Slapping on difficulty settings is nice for non-puzzle games (and even some puzzle games), but unless your game lets you change the difficulty partway through, someone who hits a wall still has to replay the entire game to change the difficulty. (Props to Doom 4, which looks to have taken difficulty levels very seriously — some have entirely different rules, and you can change whenever you want.)

I have a few wisps of ideas for how to deal with this in Isaac HD, but I can’t really talk about them before the design of the game has solidified a little more. Ultimately, my goal is the same as with everything else I do: to make something that people have a chance to enjoy, even if they don’t otherwise like the genre.

Inktober

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/23/inktober/

Inktober is an ancient and hallowed art tradition, dating all the way back to sometime, when it was started by someone. The idea is simple: draw something in ink every day. Real ink. You know. On paper.

I tried this last year. I quit after four days. Probably because I tried to do it without pencil sketches, and I’m really not very good at drawing things correctly the first time. I’d hoped that forcing myself to do it would spark some improvement, but all it really produced was half a week of frustration and bad artwork.

This year, I was convinced to try again without unnecessarily handicapping myself, so I did that. Three weeks and more than forty ink drawings later, here are some thoughts.

Some background

I’ve been drawing seriously since the beginning of 2015. I spent the first few months working primarily in pencil, until I was gifted a hand-me-down tablet in March; almost everything has been digital since then.

I’ve been fairly lax about learning to use color effectively — I have enough trouble just producing a sketch I like, so I’ve mostly been trying to improve there. Doesn’t feel worth the effort to color a sketch I’m not really happy with, and by the time I’m really happy with it, I’m itching to draw something else. Whoops. Until I get quicker or find some mental workaround, monochrome ink is a good direction to try.

I have an ongoing “daily” pokémon series, so I’ve been continuing that in ink. (Everyone else seems to be using some list of single-word prompts, but I didn’t even know about that until after I’d started, so, whoops.)

I’ve got a few things I want to get better at:

  • Detailing, whatever that means. Part of the problem is that I’m not sure what it means. My art is fairly simple and cartoony, and I know it’s possible to be more detailed without doing realistic shading, but I don’t have a grasp of how to think about that.

  • Better edges, which mostly means line weight. I mentally categorize this as a form of scale, which also includes tips like “don’t let parallel lines get too close together” and “don’t draw one or two very small details”.

  • Better backgrounds and environments. Or, let’s be honest, any backgrounds and environments — I draw an awful lot of single characters floating in an empty white void. My fixed-size canvas presents an obvious and simple challenge: fill the page!

  • More interesting poses, and relatedly, getting a better hang of anatomy. I started drawing the pokémon series partly for this reason: a great many pokémon have really unusual shapes I’ve never tried drawing before. Dealing with weird anatomy and trying to map it to my existing understanding should hopefully flex some visualization muscles.

  • Lighting, probably? I’m aware that things not facing a light source are in shadow, but my understanding doesn’t extend very far beyond that. How does light affect a large outdoor area? How can you represent the complexity of light and shadow with only a single pen? Art, especially cartoony art, has an entire vocabulary of subtle indicators of shadow and volume that I don’t know much about.

Let’s see what exactly I’ve learned.

Analog materials are very different

I’ve drawn plenty of pencil sketches on paper, and I’ve done a few watercolors, but I’ve never done this volume of “serious” art on paper before.

All my inks so far are in a 3.5” × 5” sketchbook. I’ll run out of pages in a few days, at which point I’ll finish up the month in a bigger sketchbook. It’s been a mixed blessing: I have less page to fill, but details are smaller and more fiddly, so mistakes are more obvious. I also don’t have much room for error with the composition.

I started out drawing with a small black Faber–Castell “PITT artist pen”. Around day five, I borrowed C3 and C7 (light and dark cool greys) Copic sketch markers from Mel; later I got a C5 as well. A few days ago I bought a Lamy Safari fountain pen with Noodler’s Heart of Darkness ink.

Both the FC pen and the fountain pen are ultimately still pens, but they have some interesting differences in edge cases. Used very lightly at an extreme angle, the FC pen produces very scratchy-looking lines… sometimes. Sometimes it does nothing instead, and you must precariously tilt the pen until you find the magical angle, hoping you don’t suddenly get a solid line where you didn’t want it. The Lamy has been much more consistent: it’s a little more willing to draw thinner lines than it’s intended for, and it hasn’t created any unpleasant surprises. The Lamy feels much smoother overall, like it flows, which is appropriate since that’s how fountain pens work.

Markers are interesting. The last “serious” art I did on paper was watercolor, which is pretty fun — I can water a color down however much I want, and if I’m lucky and fast, I can push color around on the paper a bit before it dries. Markers, ah, not so much. Copics are supposed to be blendable, but I’ve yet to figure out how to make that happen. It might be that my sketchbook’s paper is too thin, but the ink seems to dry within seconds, too fast for me to switch markers and do much of anything. For the same reason, I have to color an area by… “flood-filling”? I can’t let the edge of the colored area dry, or when I go back to extend that edge, I’ll be putting down a second layer of ink and create an obvious dark band. I’ve learned to keep the edge wet as much as possible.

On the plus side, going over dry ink in the same color will darken it, and I’ve squeezed several different shades of gray out of just the light marker. The brush tip can be angled in several different ways to make different shapes; I’ve managed a grassy background and a fur texture just by holding the marker differently. Marker ink does bleed very slightly, but it tends to stop at pen ink, a feature I’ve wanted in digital art for at least a century. I can also kinda make strokes that fade out by moving the marker quickly and lifting it off the paper as I go; surely there are more clever things to be done here, but I’ve yet to figure them out.

The drawing of bergmite above was done as the light marker started to run dry, which is not a problem I was expecting. The marker still worked, but not very well. The strokes on the cave wall in the background aren’t a deliberate effect; those are the strokes the marker was making, and I tried to use them as best I could. I didn’t have the medium marker yet, and the dark marker is very dark — almost black. I’d already started laying down marker, so I couldn’t very well finish the picture with just the pen, and I had to improvise.

Ink is permanent

Well. Obviously.

I have to be pretty careful about what I draw, which creates a bit of a conflict. If I make smooth, confident strokes, I’m likely to fuck them up, and I can’t undo and try again. If I make a lot of short strokes, I get those tell-tale amateurish scratchy lines. If I trace my sketch very carefully and my hand isn’t perfectly steady, the resulting line will be visibly shaky.

I probably exacerbated the shaky lines with my choice of relatively small paper; there’s no buffer between those tiny wobbles and the smallest level of detail in the drawing itself. I can’t always even see where my tiny sketch is going, because my big fat fingers are in the way.

I’ve also had the problem that my sketch is such a mess that I can’t tell where a line is supposed to be going… until I’ve drawn it and it’s obviously wrong. Again, small paper exacerbates this by compressing sketches.

Since I can’t fix mistakes, I’ve had to be a little creative about papering over them.

  • I did one ink with very stark contrast: shadows were completely filled with ink, highlights were bare paper. No shading, hatching, or other middle ground. I’d been meaning to try the approach anyway, but I finally did it after making three or four glaring mistakes. In the final work, they’re all hidden in shadow, so you can’t really tell anything ever went wrong.

  • I’ve managed to disguise several mistakes of the “curved this line too early” variety just by adding some more parallel strokes and pretending I intended to hatch it all along.

  • One of the things I’ve been trying to figure out is varying line weight, and one way to vary it is to make edges thicker when in shadows. A clever hack has emerged here.

    You see, it’s much easier for me to draw an upwards arc than a downwards arc. (I think this is fairly universal?) I can of course just rotate the paper, but if I’m drawing a cylinder, it’s pretty obvious when the top was drawn with a slight bias in one direction and the bottom was drawn with a slight bias in the other direction.

    My lifehack is to draw the top and bottom with the paper oriented the same way, then gradually thicken the bottom, “carving” it into the right shape as I go. I can make a lot of small adjustments and still end up with a single smooth line that looks more or less deliberate.

  • As a last resort… leave it and hope no one notices. That’s what I did for the floatzel above, who has a big fat extra stroke across their lower stomach. It’s in one of the least interesting parts of the picture, though, so it doesn’t really stand out, even though it’s on one of the lightest surfaces.

Ink takes a while

Ink drawings feel like they’ve consumed my entire month. Sketching and then lining means drawing everything twice. Using physical ink means I have to nail the sketch — but I’m used to digital, where I can sketch sloppily and then fix up lines as I go. I also can’t rearrange the sketch, move it around on the paper if I started in the wrong place, or even erase precisely, so I’ve had to be much more careful and thoughtful even with pencil. That’s a good thing — I don’t put nearly enough conscious thought into what I’m drawing — but it definitely takes longer. In a few thorny cases I’ve even resorted to doing a very loose digital sketch, then drawing the pencil sketch based off of that.

All told, each one takes maybe two hours, and I’ve been doing two at a time… but wait, that’s still only four hours, right? How are they taking most of a day?

I suspect a bunch of factors are costing me more time than expected. If I can’t think of a scene idea, I’ll dawdle on Twitter for a while. Two “serious” attempts in a medium I’m not used to can be a little draining and require a refractory period. Fragments of time between or around two larger tasks are, of course, lost forever. And I guess there’s that whole thing where I spent half the month waking up in the middle of the night for no reason and then being exhausted by late evening.

Occasionally I’ve experimented with some approach that turns out to be incredibly tedious and time-consuming, like the early Gardevoir above. You would not believe how long that damn grass took. Or maybe you would, if you’d ever tried similar. Even the much lazier tree-covered mountain in the background seemed to take a while. And this is on a fairly small canvas!

I’m feeling a bit exhausted with ink work at this point, which is not the best place to be after buying a bunch of ink supplies. I definitely want to do more of it in the future, but maybe not daily. I also miss being able to undo. Sweet, sweet undo.

Precision is difficult, and I am bad at planning

These turn out to be largely the same problem.

I’m not a particularly patient person, so I like to jump from the sketch into the inking as soon as possible. Sometimes this means I overlook some details. Here’s that whole “not consciously thinking enough” thing again. Consider, in the above image,

  • The two buildings at the top right are next to each other, yet the angles of their roofs suggest they’re facing in slightly different directions, which doesn’t make a lot of sense for artificial structures.

  • The path leading from the dock doesn’t quite make sense, and the general scale of the start of the dock versus the shrubs and trees is nonsense. The trees themselves are pretty cool, but it looks like I plopped them down individually without really having a full coherent plan going in. Which is exactly what happened.

    Imagining spaces in enough detail to draw them is tough, and not something I’ve really had to do much before. It’s ultimately the same problem I have with game level design, though, so hopefully a breakthrough in one will help me with the other.

  • Phantump’s left eye has a clear white edge showing the depth of the hole in the trunk, but the right eye’s edge was mostly lost to some errant strokes and subsequent attempts to fix them. Also, even the left margin is nowhere near as thick as the trunk’s bottom edge.

  • The crosshatched top of phantump’s head blends into the noisy grassy background. The fix for this is to leave a thin white edge around the top of the head. I think I intended to do this, then completely forgot about it as I was drawing the grass. I suppose I’m not used to reasoning about negative space; I can’t mark or indicate it in any way, nor erase the ink if I later realize I laid down too much.

  • The pupils don’t quite match, but I’d already carved them down a good bit. Negative space problem again. Highlights on dark areas have been a recurring problem all month, especially with markers.

I have no idea how people make beautifully precise inkwork. At the same time, I’ve long had the suspicion that I worry too much about precision and should be a lot looser. I’m missing something here, and I don’t know what it is.

What even is pokémon anatomy

This is a wigglytuff. Wigglytuffs are tall blobs with ears.

I had such a hard time sketching this. (Probably why I rushed the background.)

It turns out that if you draw a wigglytuff even slightly off, the result is a tall blob with ears rather than a wigglytuff. That makes no sense, especially given that wigglytuffs are balloons. Surely, the shape shouldn’t be such a strong part of the wigglytuff identity, and yet it is.

Maybe half of the pokémon I’ve drawn have had some anatomical surprise, even ones I thought I was familiar with. Aerodactyl and huntail have a really pronounced lower jaw. Palpitoad has no arms at all. Pelipper is 70% mouth. Zangoose seems like a straightforward mammal at first glance, but the legs and body and head are all kind of a single blob. Numerous pokémon have no distinct neck, or no distinct shoulders, or a very round abdomen with legs kind of arbitrarily attached somewhere.

Progress, maybe

I don’t know what precisely I’ve gotten out of this experience. I can’t measure artistic progress from one day to the next. I do feel like I’ve gleaned some things, but they seem to be very abstract things. I’m out of the total beginner weeds and solidly into the intermediate hell of just picking up hundreds of little things no one really talks about. All I can do is cross my fingers and push forwards.

The crowd favorite so far is this mega rayquaza, which is kinda funny to me because I don’t feel like I did anything special here. I just copied a bunch of fiddly details. It looks cool, but it felt more like rote work than a struggle to do a new thing.

My own favorite is this much simpler qwilfish. It’s the culmination of several attempts to draw water that I liked, and it came out the best by far. The highlight is also definitely the best I’ve drawn this month. Interesting how that works out.

The rest are on Tumblr, or in this single Twitter thread.

Introducing PIXEL

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/introducing-pixel/

It was just over two years ago when I walked into Pi Towers for the first time. I only had the vaguest idea of what I was going to be doing, but on the first day Eben and I sat down and played with the Raspbian desktop for half an hour, then he asked me “do you think you can make it better?”

origdesk

Bear in mind that at this point I’d barely ever used Linux or Xwindows, never mind made any changes to them, so when I answered “hmmm – I think so”, it was with rather more confidence than I actually felt. It was obvious that there was a lot that could be done to make the desktop a better experience for the user, and I had spent many years working in user interface design in previous jobs. But I had no idea where to start when it came to changing Raspbian. I clearly had a bit of a learning curve in front of me…

Well, that was two years ago, and I’ve learnt an awful lot since then. It’s actually surprisingly easy to hack about with the LXDE desktop once you get your head around what all the bits do, and since then I’ve been slowly chipping away at the bits that I felt would most benefit from tweaking. Stuff has slowly been becoming more and more like my original concept for the desktop; with the latest changes, I think the desktop has reached the point where it’s a complete product in its own right and should have its own name. So today, we’re announcing the release of the PIXEL desktop, which will ship with the Foundation’s Raspbian image from now on.

newdesk

PIXEL?

One of the things I said (at least partly in jest) to my colleagues in those first few weeks was that I’d quite like to rename the desktop environment once it was a bit more Pi-specific, and I had the name “pixel” in my mind about two weeks in. It was a nice reminder of my days learning to program in BASIC on the Sinclair ZX81; nowadays, everything from your TV to your phone has pixels on it, but back then it was a uniquely “computer-y” word and concept. I also like crosswords and word games, and once it occurred to me that “pixel” could be made up from the initials of words like Pi and Xwindows, the name stuck in my head and never quite went away. So PIXEL it is, which now officially stands for “Pi Improved Xwindows Environment, Lightweight”.

What’s new?

The latest set of changes are almost entirely to do with the appearance of the desktop; there are some functional changes and a few new applications, about which more below, but this is mostly about making things look nicer.

The first thing you’ll notice on rebooting is that the trail of cryptic boot messages has (mostly) gone, replaced by a splash screen. One feature which has frequently been requested is an obvious version number for our Raspbian image, and this can now be seen at the bottom-right of the splash image. We’ll update this whenever we release a new version of the image, so it should hopefully be slightly easier to know exactly what version you’re running in future.

splash

I should mention that the code for the splash screen has been carefully written and tested, and should not slow down the Pi’s boot process; the time to go from powering on to the desktop appearing is identical, whether the splash is shown or not.

Desktop pictures

Once the desktop appears, the first thing you’ll notice is the rather stunning background image. We’re very fortunate in that Greg Annandale, one of the Foundation’s developers, is also a very talented (and very well-travelled) photographer, and he has kindly allowed us to use some of his work as desktop pictures for PIXEL. There are 16 images to choose from; you can find them in /usr/share/pixel-wallpaper/, and you can use the Appearance Settings application to choose which one you prefer. Do have a look through them, as Greg’s work is well worth seeing! If you’re curious, the EXIF data in each image will tell you where it was taken.
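
If you'd like to pull that location data out programmatically, a quick Python sketch using the Pillow library will do it. The filename below is just a placeholder (list /usr/share/pixel-wallpaper/ to see the real names), and you may need to install Pillow first.

    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    # Pick any of the wallpapers; this filename is only an example.
    img = Image.open("/usr/share/pixel-wallpaper/example.jpg")

    exif = img._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id, tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
            # Latitude and longitude come back as degree/minute/second tuples.
            print(gps.get("GPSLatitude"), gps.get("GPSLongitude"))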

desk2

desk3

desk1

Icons

You’ll also notice that the icons on the taskbar, menu, and file manager have had a makeover. Sam Alder and Alex Carter, the guys responsible for all the cartoons and graphics you see on our website, have been sweating blood over these for the last few months, with Eben providing a watchful eye to make sure every pixel was exactly the right colour! We wanted something that looked businesslike enough to be appropriate for those people who use the Pi desktop for serious work, but with just a touch of playfulness, and Sam and Alex did a great job. (Some of the icons you don’t see immediately are even nicer; it’s almost worth installing some education or engineering applications just so those categories appear in the menu…)

menu

Speaking of icons, the default is now not to show icons in individual application menus. These always made menus look a bit crowded without really improving usability, not least because it wasn’t always obvious what an icon was supposed to represent… Without that visual clutter, the menus now look cleaner and are easier to read.

Finally on the subject of icons, in the past if your Pi was working particularly hard, you might have noticed some yellow and red squares appearing in the top-right corner of the screen, which were indications of overtemperature or undervoltage. These have now been replaced with some new symbols that make it a bit more obvious what’s actually happening; there’s a lightning bolt for undervoltage, and a thermometer for overtemperature.

Windows

If you open a window, you’ll see that the window frame design has now changed significantly. The old window design always looked a bit dated compared to what Apple and Microsoft are now shipping, so I was keen to update it. Windows now have a subtle curve on the corners, a cleaner title bar with new close / minimise / maximise icons, and a much thinner frame. One reason the frame was quite thick on the old windows was so that the grab handles for resizing were big enough to find with the mouse. To avoid this problem, the grab handles now extend slightly outside the window; if you hold the mouse pointer just outside the window which has focus, you’ll see the pointer change to show the handle.

[Image: the new window frame design]

Fonts

Steve Jobs said that one thing he was insistent on about the Macintosh was that its typography was good, and it’s true that using the right fonts makes a big difference. We’ve been using the Roboto font in the desktop for the last couple of years; it’s a nice-looking modern font, and it hasn’t changed for this release. However, we have made it look better in PIXEL by including the Infinality font rendering package. This is a library of tweaks and customisations that optimises how fonts are mapped to pixels on the screen; the effect is quite subtle, but it does give a noticeable improvement in some places.

Login

Most people have their Pi set up to automatically log in when the desktop starts, as this is the default setting for a new install. For those who prefer to log in manually each time, the login screen has been redesigned to visually match the rest of the desktop; you now see the login box (known as the “greeter”) over your chosen desktop design, with a seamless transition from greeter to desktop.
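
If you want to switch between automatic login and the greeter, the Boot Options menu in raspi-config is the place to look; as a sketch, the non-interactive interface can do the same thing (the B3/B4 option codes below are the ones used by current versions of raspi-config, so do check your own copy before relying on them):

# Show the greeter and ask for a manual login...
sudo raspi-config nonint do_boot_behaviour B3

# ...or log straight into the desktop automatically
sudo raspi-config nonint do_boot_behaviour B4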

[Image: the redesigned login screen]

Wireless power switching

One request we have had in the past is to be able to shut off WiFi and/or Bluetooth completely, particularly on Pi 3. There are now options in the WiFi and Bluetooth menus to turn off the relevant devices. These work on the Pi 3’s onboard wireless hardware; they should also work on most external WiFi and Bluetooth dongles.

You can also now disconnect from an associated wireless access point by clicking on its entry in the WiFi menu.
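
The menu options are the easiest way to do this, but if you want to script it, the standard rfkill utility does roughly the same job from the command line (assuming it's present on your image):

sudo rfkill block wifi        # power down the WiFi radios
sudo rfkill block bluetooth   # power down the Bluetooth radios
rfkill list                   # check the current state
sudo rfkill unblock wifi      # turn WiFi back on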

New applications

There are a couple of new applications now included in the image.

RealVNC have ported their VNC server and viewer applications to Pi, and they are now integrated with the system. To enable the server, select the option on the Interfaces tab in Raspberry Pi Configuration; you’ll see the VNC menu appear on the taskbar, and you can then log in to your Pi and control it remotely from a VNC viewer.
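
If you prefer to set this up from a terminal, the sketch below should work; the service name comes from the realvnc-vnc-server package and the do_vnc option is the one provided by the non-interactive raspi-config interface, so treat both as assumptions and check them on your own system:

# Enable and start the RealVNC server service
sudo systemctl enable vncserver-x11-serviced
sudo systemctl start vncserver-x11-serviced

# Alternatively, via raspi-config's non-interactive interface
sudo raspi-config nonint do_vnc 0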

The RealVNC viewer is also included – you can find it in the Internet section of the Applications menu – and it allows you to connect to and control other computers running a RealVNC server, including other Pis. Have a look here on RealVNC’s site for more information.

[Image: the RealVNC viewer]

Please note that if you already use xrdp to remotely access your Pi, this conflicts with the RealVNC server, so you shouldn’t install both at once. If you’re updating an existing image, don’t run the sudo apt-get install realvnc-vnc-server line in the instructions below. If you want to use xrdp on a clean image, first uninstall the RealVNC server with sudo apt-get purge realvnc-vnc-server before installing xrdp. (If the above paragraph means nothing to you, then you probably aren’t using xrdp, so you don’t have to worry about any of it!)

Also included is the new SenseHAT emulator, which was described in a blog post a couple of weeks ago; have a look here for all the details.
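
As a quick way to try it, the emulator can be launched from the menu or from a terminal, and it's driven through the sense_emu Python module, which mirrors the real sense_hat API; the command and module names below are those used by the python-sense-emu packages, so adjust them if yours differ:

# Launch the emulator GUI
sense_emu_gui &

# A minimal smoke test using the emulated Sense HAT
python3 -c "from sense_emu import SenseHat; SenseHat().show_message('Hello')"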

[Image: the Sense HAT emulator]

Updates

There are updates for a number of the built-in applications; these are mostly tweaks and bug fixes, but both Scratch and Node-RED have seen improvements.

One more thing…

We’ve been shipping the Epiphany web browser for the last couple of years, but it’s now starting to show its age. So for this release (and with many thanks to Gustav Hansen from the forums for his invaluable help with this), we’re including an initial release of Chromium for the Pi. This uses the Pi’s hardware to accelerate playback of streaming video content.

[Image: Chromium running on the Pi desktop]

We’ve preinstalled a couple of extensions; the uBlock Origin adblocker should hopefully keep intrusive adverts from slowing down your browsing experience, and the h264ify extension forces YouTube to serve videos in a format which can be accelerated by the Pi’s hardware.

Chromium is a much more demanding piece of software than Epiphany, but it runs well on Pi 2 and Pi 3; it can struggle slightly on the Pi 1 and Pi Zero, but it’s still usable. (Epiphany is still installed in case you find it useful; launch it from the command line by typing “epiphany-browser”.)
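
Chromium can also be started from a terminal; chromium-browser is the command name we'd expect the Raspbian package to install, so check your menu entry if it differs:

chromium-browser &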

How do I get it?

The Raspbian + PIXEL image is available from the Downloads page on our website now.

To update an existing Jessie image, type the following at the command line:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install -y rpi-chromium-mods
sudo apt-get install -y python-sense-emu python3-sense-emu
sudo apt-get install -y python-sense-emu-doc realvnc-vnc-viewer

and then reboot.

If you don’t use xrdp and would like to use the RealVNC server to remotely access your Pi, type the following:

sudo apt-get install -y realvnc-vnc-server

As always, your feedback on the new release is very welcome; feel free to let us know what you think in the comments or on the forums.

The post Introducing PIXEL appeared first on Raspberry Pi.