Tag Archives: things

Is Continuing to Patch Windows XP a Mistake?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/is_continuing_t.html

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it’s not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing — stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP’s support life cycle for another few years and waited for attrition to shrink Windows XP’s userbase to irrelevant levels. Or it could have claimed that this case is somehow “special,” releasing a patch while still claiming that Windows XP isn’t supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft’s assertion that Windows XP isn’t supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It’s hard to say how that was possibly worth it.

This is a hard trade-off, and it’s going to get much worse with the Internet of Things. Here’s me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

MPAA & RIAA Demand Tough Copyright Standards in NAFTA Negotiations

Post Syndicated from Andy original https://torrentfreak.com/mpaa-riaa-demand-tough-copyright-standards-in-nafta-negotiations-170621/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago. With a quarter of a century of developments to contend with, the United States wants to modernize the agreement.

“While our economy and U.S. businesses have changed considerably over that period, NAFTA has not,” the government says.

With this in mind, the US requested comments from interested parties seeking direction for negotiation points. With those comments now in, groups like the MPAA and RIAA have been making their positions known. It’s no surprise that intellectual property enforcement is high on the agenda.

“Copyright is the lifeblood of the U.S. motion picture and television industry. As such, MPAA places high priority on securing strong protection and enforcement disciplines in the intellectual property chapters of trade agreements,” the MPAA writes in its submission.

“Strong IPR protection and enforcement are critical trade priorities for the music industry. With IPR, we can create good jobs, make significant contributions to U.S. economic growth and security, invest in artists and their creativity, and drive technological innovation,” the RIAA notes.

While both groups have numerous demands, it’s clear that each seeks an environment where not only infringers but also Internet platforms and services can be held liable.

For the RIAA, there is a big focus on the so-called ‘Value Gap’, a phenomenon found on user-uploaded content sites like YouTube that are able to offer infringing content while avoiding liability due to Section 512 of the DMCA.

“Today, user-uploaded content services, which have developed sophisticated on-demand music platforms, use this as a shield to avoid licensing music on fair terms like other digital services, claiming they are not legally responsible for the music they distribute on their site,” the RIAA writes.

“Services such as Apple Music, TIDAL, Amazon, and Spotify are forced to compete with services that claim they are not liable for the music they distribute.”

But if sites like YouTube are exercising their rights while acting legally under current US law, how can partners Canada and Mexico do any better? For the RIAA, that can be achieved by holding them to the standards the group envisioned when the DMCA was passed, not to how things have panned out since.

Demanding that negotiators “protect the original intent” of safe harbor, the RIAA asks that a “high-level and high-standard service provider liability provision” be pursued. This, the music group says, should only be available to “passive intermediaries without requisite knowledge of the infringement on their platforms, and inapplicable to services actively engaged in communicating to the public.”

In other words, make sure that YouTube and similar sites won’t enjoy the same level of safe harbor protection as they do today.

The RIAA also requires any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

In any event, NAFTA should not “support interpretations that no longer reflect today’s digital economy and threaten the future of legitimate and sustainable digital trade,” the RIAA states.

For the MPAA, Section 512 is also perceived as a problem. While noting that the original intent was to foster a system of shared responsibility between copyright owners and service providers, the MPAA says courts have subsequently let copyright holders down. Like the RIAA, the MPAA also suggests that Canada and Mexico can be held to higher standards.

“We recommend a new approach to this important trade policy provision by moving to high-level language that establishes intermediary liability and appropriate limitations on liability. This would be fully consistent with U.S. law and avoid the same misinterpretations by policymakers and courts overseas,” the MPAA writes.

“In so doing, a modernized NAFTA would be consistent with Trade Promotion Authority’s negotiating objective of ‘ensuring that standards of protection and enforcement keep pace with technological developments’.”

The MPAA also has some specific problems with Mexico, including unauthorized camcording. The Hollywood group says that 85 illicit audio and video recordings of films were linked to Mexican theaters in 2016. However, recording is not currently a criminal offense in Mexico.

Another issue for the MPAA is that criminal sanctions for commercial scale infringement are only available if the infringement is for profit.

“This has hampered enforcement against the above-discussed camcording problem but also against online infringement, such as peer-to-peer piracy, that may be on a scale that is immensely harmful to U.S. rightsholders but nonetheless occur without profit by the infringer,” the MPAA writes.

“The modernized NAFTA like other U.S. bilateral free trade agreements must provide for criminal sanctions against commercial scale infringements without proof of profit motive.”

Also of interest are the MPAA’s complaints against Mexico’s telecoms laws. Unlike in the US and many countries in Europe, Mexico’s ISPs are forbidden to hand out their customers’ personal details to rights holders looking to sue. This, the MPAA says, needs to change.

The submissions from the RIAA and MPAA can be found here and here (pdf).


Is your product “Powered by Raspberry Pi”?

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/powered-by-raspberry-pi/

One of the most exciting things for us about the growth of the Raspberry Pi community has been the number of companies that have grown up around the platform, and who have chosen to embed our products into their own. While many of these design-ins have been “silent”, a number of people have asked us for a standardised way to indicate that a product contains a Raspberry Pi or a Raspberry Pi Compute Module.

Powered by Raspberry Pi Logo

At the end of last year, we introduced a “Powered by Raspberry Pi” logo to meet this need. It is now included in our trademark rules and brand guidelines, which you can find on our website. Below we’re showing an early example of a “Powered by Raspberry Pi”-branded device, the KUNBUS Revolution Pi industrial PC. It has already made it onto the market, and we think it will inspire you to include our logo on the packaging of your own product.

KUNBUS RevPi
Powered by Raspberry Pi logo on RevPi

Using the “Powered by Raspberry Pi” brand

Adding the “Powered by Raspberry Pi” logo to your packaging design is a great way to remind your customers that a portion of the sale price of your product goes to the Raspberry Pi Foundation and supports our educational work.

As with all things Raspberry Pi, our rules for using this brand are fairly straightforward: the only thing you need to do is to fill out this simple application form. Once you have submitted it, we will review your details and get back to you as soon as possible.

When we approve your application, we will require that you use one of the official “Powered by Raspberry Pi” logos and that you ensure it is at least 30 mm wide. We are more than happy to help you if you have any design queries related to this – just contact us at info@raspberrypi.org.


All Systems Go! 2017 CfP Open

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-cfp-open.html

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st and 22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

systemd.conf will not take place this year, in favour of All Systems Go!. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

Kernel prepatch 4.12-rc6

Post Syndicated from corbet original https://lwn.net/Articles/725787/rss

The 4.12-rc6 kernel prepatch is out for testing. “The good news is that rc6 is smaller than rc5 was, and I think we’re back on track and rc5 really was big just due to random timing. We’ll see. Next weekend when I’m back home and do rc7, I’ll see how I feel about things. I’m still hopeful that this would be a normal release cycle where rc7 is the last rc.”

Disney Asks Google to Remove Its Own (Invisible) Takedown Notices

Post Syndicated from Ernesto original https://torrentfreak.com/disney-asks-google-to-remove-its-own-invisible-takedown-notices-170618/

Pretty much every major copyright holder regularly reports infringing links to Google, hoping to decrease the visibility of pirated files.

Over the past several years, the search engine has had to remove more than two billion links and most of these requests have been neatly archived in the Lumen database.

Walt Disney Company is no stranger to these takedown efforts. The company has sent over 20 million takedown requests to the search engine, covering a wide variety of content. All of these notices are listed in Google’s transparency report, and copies are available at Lumen.

While this is nothing new, we recently noticed that Disney doesn’t stop at reporting direct links to traditional “pirate” sites. In fact, it recently targeted one of its own takedown notices in the Lumen database, sent on behalf of its subsidiary Lucasfilm.

In the notice below, the media giant wants Google to remove a link to a copy of its own takedown notice, claiming that it infringes the copyright of the blockbuster “Star Wars: The Force Awakens.”

Disney vs. Disney?

This is not the first time that a company has engaged in this type of meta-censorship, it appears.

However, it’s all the more relevant this week after a German court decided that Google can be ordered to stop linking to its own takedown notices. While that suggests that Disney was right to ask for its own link to be removed, the reality is a bit more complex.

When it was still known as ChillingEffects, the Lumen Database instructed Google not to index any takedown notices. And indeed, searching for copies of takedown notices yields no result. This means that Disney asked Google to remove a search result that doesn’t exist.

Perhaps things are different in a galaxy far, far away, but Disney’s takedown notice is not only self-censorship but also entirely pointless.

Disney might be better off focusing on content that Google has actually indexed, instead of going after imaginary threats. Or, to put it in the words of Gold Five: “Stay on Target,” Disney.


Digital painter rundown

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/

Another patron post! IndustrialRobot asks:

You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)

This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…

Right, now that your expectations have been suitably lowered:

Krita

I do all of my digital art in Krita. It’s pretty alright.

Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.

Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.

In fairness, Krita manages all of this stuff well enough, and (apparently!) it manages to stay out of your way if you’re not using it. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?

I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.

I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.

The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and fill my presets with junk.

And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.

What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.


Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.

It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.

I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.

The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.

If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in any frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.

This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)

I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.

GIMP

Ah, GIMP — open source’s answer to Photoshop.

It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.
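
For comparison, here’s a minimal sketch of that kind of batch job done outside GIMP with Python and Pillow; the folder names and the crop/resize numbers are made-up examples, not anything from the original Scheme script.

    from pathlib import Path
    from PIL import Image

    src = Path("originals")   # made-up input folder
    dst = Path("edited")      # made-up output folder
    dst.mkdir(exist_ok=True)

    for path in src.glob("*.png"):
        with Image.open(path) as img:
            # crop to a fixed box, then shrink, as stand-ins for "some simple edits"
            edited = img.crop((0, 0, 640, 480)).resize((320, 240))
            edited.save(dst / path.name)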

In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.

Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.

And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.

As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)

I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.

I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.

MyPaint

Ah, MyPaint. I gave it a try once. Once.

It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.

Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.

MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.

I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.

I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.

Drawpile

Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)

It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.

I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.

Aseprite

Aseprite is for pixel art so it doesn’t really belong here at all. But it’s very good at that and I like it a lot.

That’s all

I can’t name any other serious contender that exists for Linux.

I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.

And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.

Konecny: Anaconda modularisation

Post Syndicated from jake original https://lwn.net/Articles/725623/rss

On his blog, Jiri Konecny writes about plans for modularizing Anaconda, which is the installer for Fedora and other Linux distributions. Anaconda is written in Python 3, but is all contained in one monolithic program.
“The current Anaconda has one significant problem: all of the code is in one place – the monolith. It is more difficult to trace bugs and to have a stable API. Implementing new features or modifying existing code in Anaconda is also more challenging. Modularisation should help with these things mainly because of isolation between the modules. It will be much easier to create tests for modules or to add new functionality.

Modularisation also opens up new possibilities to developers. They should be able to create a new user interface easily. Since developers can rely on the existing API documentation, it should not be necessary to browse the source code tree very often. Another benefit is that an addon is like another module, communicating with other modules, so it has the same capabilities. Developers can use the public API to write their addons in their favourite programming language which supports DBus.”
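
As a rough illustration of that last point, here is a minimal sketch of a DBus service written with pydbus; the bus name, interface, and method are invented for the example and are not Anaconda’s actual module API.

    from pydbus import SessionBus
    from gi.repository import GLib

    class ExampleAddon(object):
        """
        <node>
          <interface name='org.example.InstallerAddon'>
            <method name='Ping'>
              <arg type='s' name='reply' direction='out'/>
            </method>
          </interface>
        </node>
        """
        def Ping(self):
            # any other module (in any language with DBus support) can call this
            return "pong"

    bus = SessionBus()
    bus.publish("org.example.InstallerAddon", ExampleAddon())
    GLib.MainLoop().run()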

Mira, tiny robot of joyful delight

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/mira-robot-alonso-martinez/

The staff of Pi Towers are currently melting into puddles while making ‘Aaaawwwwwww’ noises as Mira, the adorable little Pi-controlled robot made by Pixar 3D artist Alonso Martinez, steals their hearts.

Mira the robot playing peek-a-boo

If you want to get updates on Mira’s progress, sign up for the mailing list! http://eepurl.com/bteigD Mira is a desk companion that makes your life better one smile at a time. This project explores human robot interactivity and emotional intelligence. Currently Mira uses face tracking to interact with the users and loves playing the game “peek-a-boo”.

Introducing Mira

Honestly, I can’t type words – I am but a puddle! If I could type at all, I would only produce a stream of affectionate fragments. Imagine walking into a room full of kittens. What you would sound like is what I’d type.

No! I can do this. I’m a professional. I write for a living! I can…

SHE BLINKS OHMYAAAARGH!!!

Mira Alonso Martinez Raspberry Pi

Weebl & Bob meets South Park’s Ike Broflovski in an adorable 3D-printed bundle of ‘Aaawwwww’

Introducing Mira (I promise I can do this)

Right. I’ve had a nap and a drink. I’ve composed myself. I am up for this challenge. As long as I don’t look directly at her, I’ll be fine!

Here I go.

As one of the many über-talented 3D artists at Pixar, Alonso Martinez knows a thing or two about bringing adorable-looking characters to life on screen. However, his work left him wondering:

In movies you see really amazing things happening but you actually can’t interact with them – what would it be like if you could interact with characters?

So with the help of his friends Aaron Nathan and Vijay Sundaram, Alonso set out to bring the concept of animation to the physical world by building a “character” that reacts to her environment. His experiments with robotics started with Gertie, a ball-like robot reminiscent of his time spent animating bouncing balls when he was learning his trade. From there, he moved on to Mira.

Mira Alonso Martinez

Many, many of the views of this Tested YouTube video have come from me. So many.

Mira swivels to follow a person’s face, plays games such as peekaboo, shows surprise when you finger-shoot her, and giggles when you give her a kiss.

Mira’s inner workings

To get Mira to turn her head in three dimensions, Alonso took inspiration from the Microsoft Sidewinder Pro joystick he had as a kid. He purchased one on eBay, took it apart to understand how it works, and replicated its mechanism for Mira’s Raspberry Pi-powered innards.

Mira Alonso Martinez

Alonso used the smallest components he could find so that they would fit inside Mira’s tiny body.

Mira’s head swivels on an axis of 3D-printed parts driven by tiny Power HD DSM44 servos, while a camera and OpenCV handle face-tracking, and a single NeoPixel provides a range of colours to indicate her emotions. As for the blinking eyes? Two OLED screens boasting acrylic domes fit within the few millimeters between all the other moving parts.
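
As a rough sketch of how OpenCV face tracking can drive a pan servo like this (this is not Alonso’s code; the cascade choice and the set_pan() helper are assumptions):

    import cv2

    # assumption: use the Haar cascade that ships with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)

    def set_pan(angle):
        # placeholder for whatever actually drives the DSM44 pan servo
        pass

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # map the face centre's horizontal position to a 0-180 degree pan angle
            set_pan((x + w / 2) / frame.shape[1] * 180)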

More on Mira, including her history and how she works, can be found in this wonderful video released by Tested this week.

Pixar Artist’s 3D-Printed Animated Robots!

We’re gushing with grins and delight at the sight of these adorable animated robots created by artist Alonso Martinez. Sean chats with Alonso to learn how he designed and engineered his family of robots, using processes like 3D printing, mold-making, and silicone casting. They’re amazing!

You can also sign up for Alonso’s newsletter here to stay up-to-date about this little robot. Hopefully one of these newsletters will explain how to buy or build your own Mira, as I for one am desperate to see her adorable little face on my desk every day for the rest of my life.


Pirates Cost Australia’s Ten Network “Hundreds of Millions of Dollars”

Post Syndicated from Andy original https://torrentfreak.com/pirates-cost-australias-ten-network-hundreds-of-millions-of-dollars-170616/

In 2016, Australia’s Ten Network posted losses of AUS$157 million. This April, the broadcaster showed signs of continuing distress when it posted a half-year loss of AUS$232 million.

In a statement to the stock exchange, Ten said it was trying to secure new terms for a AUS$200 million debt financing guarantee. According to ABC, the company had lost more than 60% of its value in the preceding 12 months and almost 98% over the previous five years.

More bad news arrived this week when Ten’s board decided to put the company into voluntary administration after failing to secure a guarantee for a AUS$250 million loan that could’ve kept the ship afloat into the new year. As moves get underway to secure the company’s future, fingers of blame are being raised.

According to Village Roadshow co-chief executive Graham Burke, Internet pirates cost Ten “hundreds of millions of dollars” in advertising revenue due to their tendency to obtain movies and TV shows from the web rather than via legitimate means.

Burke told The Australian (paywall) that movies supplied to Ten by 21st Century Fox (including The Revenant and The Peanuts Movie which were both leaked) had received lower broadcast ratings due to people viewing them online in advance.

“Piracy is a much bigger channel and an illicit economy than the three main commercial networks combined,” Burke told the publication.

“Movies from Fox arrive with several million people having seen them through piracy. If it wasn’t for piracy, the ratings would be stronger and the product would not be arriving clapped out.”

But leaked or not, content doesn’t come cheap. As part of efforts to remain afloat, Ten Network recently tried to re-negotiate content supply deals with Fox and CBS. Together they reportedly cost the broadcaster more than AUS$900 million over the previous six years.

Despite this massive price tag and numerous other problems engulfing the troubled company, Burke suggests it is pirates that are to blame for Ten’s demise.

“A large part of Ten’s expenditure is on movies and they are being seen by millions of people illegitimately on websites supported by rogue advertising for drugs, prostitution and even legitimate advertising. The cumulative effect of all the pirated product out there has brought down Ten,” Burke said.

While piracy has certainly been blamed for a lot of things over the years, it is extremely rare for a senior industry figure to link it so closely with the potential demise of a major broadcaster.


BackMap, the haptic navigation system

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/backmap-haptic/

At this year’s TechCrunch Disrupt NY hackathon, one team presented BackMap, a haptic feedback system which helps visually impaired people to navigate cities and venues. It is built around a Raspberry Pi and integrated into a backpack.

Good vibrations with BackMap

The team, including Shashank Sharma, wrote an iOS phone app in Swift, Apple’s open-source programming language. To convert between addresses and geolocations, they used the Esri APIs offered by PubNub. So far, so standard. However, they then configured their BackMap setup so that the user can input their destination via the app, and then follow the route without having to look at a screen or listen to directions. Instead, vibrating motors have been integrated into the straps of a backpack and hooked up to a Raspberry Pi. Whenever the user needs to turn left or right, the Pi makes the respective motor vibrate.
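
A minimal sketch of that turn signalling on the Pi side might look like the following; the GPIO pin numbers and the buzz duration are assumptions, not details from the BackMap build.

    import time
    import RPi.GPIO as GPIO

    LEFT_MOTOR = 17    # assumed BCM pin for the left strap motor
    RIGHT_MOTOR = 27   # assumed BCM pin for the right strap motor

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([LEFT_MOTOR, RIGHT_MOTOR], GPIO.OUT, initial=GPIO.LOW)

    def buzz(pin, seconds=0.5):
        # pulse one vibration motor for a short burst
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(seconds)
        GPIO.output(pin, GPIO.LOW)

    def signal_turn(direction):
        # vibrate the strap on the side of the upcoming turn
        buzz(LEFT_MOTOR if direction == "left" else RIGHT_MOTOR)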

Disrupt NY 2017 Hackathon | Part 1

Disrupt NY 2017 Hackathon presentations filmed live on May 15th, 2017. Preceding the Disrupt Conference is Hackathon weekend on May 13-14, where developers and engineers descend from all over the world to take part in a 24-hour hacking endurance test.

BackMap can also be adapted for indoor navigation by receiving signals from beacons. This could be used to direct users to toilet facilities or exhibition booths at conferences. The team hopes to upgrade the BackMap device to use a wristband format in the future.

Accessible Pi

Here at Pi Towers, we are always glad to see Pi builds for people with disabilities: we’ve seen Sanskriti and Aman’s Braille teacher Mudra, the audio e-reader Valdema by Finnish non-profit Kolibre, and Myrijam and Paul’s award-winning, eye-movement-controlled wheelchair, to name but a few.

Our mission is to bring the power of coding and digital making to everyone, and we are lucky to be part of a diverse community of makers and educators who have often worked proactively to make events and resources accessible to as many people as possible. There is, for example, the autism- and Tourette’s syndrome-friendly South London Raspberry Jam, organised by Femi Owolade-Coombes and his mum Grace. The Raspberry VI website is a portal to all things Pi for visually impaired and blind people. Deaf digital makers may find Jim Roberts’ video tutorials, which are signed in ASL, useful. And anyone can contribute subtitles in any language to our YouTube channel.

If you create or use accessible tutorials, or run a Jam, Code Club, or CoderDojo that is designed to be friendly to people who are neuroatypical or have a disability, let us know how to find your resource or event in the comments!


New – Auto Scaling for Amazon DynamoDB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-auto-scaling-for-amazon-dynamodb/

Amazon DynamoDB has more than one hundred thousand customers, spanning a wide range of industries and use cases. These customers depend on DynamoDB’s consistent performance at any scale and presence in 16 geographic regions around the world. A recent trend we’ve been observing is customers using DynamoDB to power their serverless applications. This is a good match: with DynamoDB, you don’t have to think about things like provisioning servers, performing OS and database software patching, or configuring replication across availability zones to ensure high availability – you can simply create tables and start adding data, and let DynamoDB handle the rest.

DynamoDB provides a provisioned capacity model that lets you set the amount of read and write capacity required by your applications. While this frees you from thinking about servers and enables you to change provisioning for your table with a simple API call or button click in the AWS Management Console, customers have asked us how we can make managing capacity for DynamoDB even easier.

Today we are introducing Auto Scaling for DynamoDB to help automate capacity management for your tables and global secondary indexes. You simply specify the desired target utilization and provide upper and lower bounds for read and write capacity. DynamoDB will then monitor throughput consumption using Amazon CloudWatch alarms and then will adjust provisioned capacity up or down as needed. Auto Scaling will be on by default for all new tables and indexes, and you can also configure it for existing ones.

Even if you’re not around, DynamoDB Auto Scaling will be monitoring your tables and indexes to automatically adjust throughput in response to changes in application traffic. This can make it easier to administer your DynamoDB data, help you maximize availability for your applications, and help you reduce your DynamoDB costs.

Let’s see how it works…

Using Auto Scaling
The DynamoDB Console now proposes a comfortable set of default parameters when you create a new table. You can accept them as-is or you can uncheck Use default settings and enter your own parameters:

Here’s how you enter your own parameters:

Target utilization is expressed in terms of the ratio of consumed capacity to provisioned capacity. The parameters above would leave enough headroom for consumed capacity to double due to a burst in read or write requests (read Capacity Unit Calculations to learn more about the relationship between DynamoDB read and write operations and provisioned capacity). Changes in provisioned capacity take place in the background.

Auto Scaling in Action
In order to see this important new feature in action, I followed the directions in the Getting Started Guide. I launched a fresh EC2 instance, installed (sudo pip install boto3) and configured (aws configure) the AWS SDK for Python. Then I used the code in the Python and DynamoDB section to create and populate a table with some data, and manually configured the table for 5 units each of read and write capacity.
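
For reference, a table like the one in the Getting Started Guide can be created from boto3 roughly as follows; this is a sketch, with the table and key names following the guide’s “Movies” example rather than my exact script.

    import boto3

    dynamodb = boto3.resource('dynamodb')

    table = dynamodb.create_table(
        TableName='Movies',
        KeySchema=[
            {'AttributeName': 'year', 'KeyType': 'HASH'},    # partition key
            {'AttributeName': 'title', 'KeyType': 'RANGE'},  # sort key
        ],
        AttributeDefinitions=[
            {'AttributeName': 'year', 'AttributeType': 'N'},
            {'AttributeName': 'title', 'AttributeType': 'S'},
        ],
        # manually provisioned at 5 read and 5 write capacity units
        ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
    )
    table.wait_until_exists()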

I took a quick break in order to have clean, straight lines for the CloudWatch metrics so that I could show the effect of Auto Scaling. Here’s what the metrics look like before I started to apply a load:

I modified the code in Step 3 to continually issue queries for random years in the range of 1920 to 2007, ran a single copy of the code, and checked the read metrics a minute or two later:
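
The modified load generator was essentially a loop along these lines (a sketch, not the exact script):

    import random
    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource('dynamodb').Table('Movies')

    while True:
        year = random.randint(1920, 2007)
        # every query consumes read capacity, keeping steady pressure on the table
        table.query(KeyConditionExpression=Key('year').eq(year))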

The consumed capacity is higher than the provisioned capacity, resulting in a large number of throttled reads. Time for Auto Scaling!

I returned to the console and clicked on the Capacity tab for my table. Then I clicked on Read capacity, accepted the default values, and clicked on Save:

DynamoDB created a new IAM role (DynamoDBAutoscaleRole) and a pair of CloudWatch alarms to manage the Auto Scaling of read capacity:

DynamoDB Auto Scaling will manage the thresholds for the alarms, moving them up and down as part of the scaling process. The first alarm was triggered and the table state changed to Updating while additional read capacity was provisioned:

The change was visible in the read metrics within minutes:

I started a couple of additional copies of my modified query script and watched as additional capacity was provisioned, as indicated by the red line:

I killed all of the scripts and turned my attention to other things while waiting for the scale-down alarm to trigger. Here’s what I saw when I came back:

The next morning I checked my Scaling activities and saw that the alarm had triggered several more times overnight:

This was also visible in the metrics:

Until now, you would prepare for this situation by setting your read capacity well above your expected usage, and pay for the excess capacity (the space between the blue line and the red line). Or, you might set it too low, forget to monitor it, and run out of capacity when traffic picked up. With Auto Scaling you can get the best of both worlds: an automatic response when an increase in demand suggests that more capacity is needed, and another automated response when the capacity is no longer needed.

Things to Know
DynamoDB Auto Scaling is designed to accommodate request rates that vary in a somewhat predictable, generally periodic fashion. If you need to accommodate unpredictable bursts of read activity, you should use Auto Scaling in combination with DAX (read Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads to learn more). Also, the AWS SDKs will detect throttled read and write requests and retry them after a suitable delay.

I mentioned the DynamoDBAutoscaleRole earlier. This role provides Auto Scaling with the privileges that it needs to have in order for it to be able to scale your tables and indexes up and down. To learn more about this role and the permissions that it uses, read Grant User Permissions for DynamoDB Auto Scaling.

Auto Scaling has complete CLI and API support, including the ability to enable and disable the Auto Scaling policies. If you have some predictable, time-bound spikes in traffic, you can programmatically disable an Auto Scaling policy, provision higher throughput for a set period of time, and then enable Auto Scaling again later.
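
For example, the same configuration that the console sets up can be expressed through the Application Auto Scaling API. Here’s a hedged sketch using boto3; the table name, capacity bounds, and policy name are placeholders, and depending on your account setup you may also need to pass the RoleARN of the DynamoDBAutoscaleRole.

    import boto3

    autoscaling = boto3.client('application-autoscaling')

    # register the table's read capacity as a scalable target with lower/upper bounds
    autoscaling.register_scalable_target(
        ServiceNamespace='dynamodb',
        ResourceId='table/Movies',
        ScalableDimension='dynamodb:table:ReadCapacityUnits',
        MinCapacity=5,
        MaxCapacity=100,
    )

    # attach a target-tracking policy aiming for 70% consumed/provisioned utilization
    autoscaling.put_scaling_policy(
        PolicyName='MoviesReadScaling',
        ServiceNamespace='dynamodb',
        ResourceId='table/Movies',
        ScalableDimension='dynamodb:table:ReadCapacityUnits',
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 70.0,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'DynamoDBReadCapacityUtilization',
            },
        },
    )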

As noted on the Limits in DynamoDB page, you can increase provisioned capacity as often as you would like and as high as you need (subject to per-account limits that we can increase on request). You can decrease capacity up to nine times per day for each table or global secondary index.

You pay for the capacity that you provision, at the regular DynamoDB prices. You can also purchase DynamoDB Reserved Capacity for further savings.

Available Now
This feature is available now in all regions and you can start using it today!

Jeff;

Man Faces Prison For Sharing Pirated Deadpool Movie on Facebook

Post Syndicated from Ernesto original https://torrentfreak.com/man-faces-prison-for-sharing-pirated-deadpool-movie-on-facebook-170614/

With roughly two billion active users per month, Facebook is by far the largest social networking site around.

While most of the content posted to the site is relatively harmless, some people use it to share things they are not supposed to.

This is also what 21-year-old Trevon Maurice Franklin from Fresno, California, did early last year. Just a week after the box-office hit Deadpool premiered in theaters, he shared a pirated copy of the movie on the social network.

Franklin, who used the screen name “Tre-Von M. King,” saw his post go viral as it allegedly reached five million views. This didn’t go unnoticed by Twentieth Century Fox, and soon after the feds were involved as well.

The FBI began to investigate the possibly criminal Facebook post and decided to build a case. This eventually led to an indictment, and the alleged “pirate” was arrested soon after.

Facebook post from early 2016

The U.S. Attorney’s Office for the Central District of California, which released the news a few hours ago, states that Franklin faces up to three years in prison for the alleged copyright infringement.

“Franklin is charged in a one-count indictment returned by a federal grand jury on April 7 with reproducing and distributing a copyrighted work, a felony offense that carries a statutory maximum penalty of three years in federal prison,” the office wrote in a press release.

According to comments on Facebook, posted last year, several people warned “Tre-Von M. King” that it wasn’t wise to post copyright-infringing material on Facebook. However, Franklin said he wasn’t worried that he would get in trouble.

Comment from early 2016

While the case is significant, there are also plenty of questions that remain unanswered.

Was the defendant involved in recording the copyright infringing copy? Was it already widely available elsewhere? Are the reported five million “views” people who watched a large part of the movie, or is this just the number of people who might have seen it in their feeds?

Thus far we have not seen a copy of the indictment in the court records, but a follow-up may be warranted when it becomes available.


More notes on US-CERTs IOCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/more-notes-on-us-certs-iocs.html

Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT.

IOCs are “indicators of compromise”, things you can look for in order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic.

A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was “white” in color. It’s worth documenting, so that if the police ever show up in a suspected cabin in the woods, they can note that there’s a “white” car parked in front.

But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly.

This is what happens with US-CERT’s IOCs. They list some potentially useful things, but they also list a lot of junk that wastes people’s time, with little ability to distinguish between the useful and the useless.

An example: a few months ago, US-CERT published the GRIZZLEYBEAR report. Among other things, it listed IP addresses used by hackers. There was no description of which IP addresses would be useful to watch for, and which would be useless.

Some of these IP addresses were useful, pointing to servers the group has been using a long time as command-and-control servers. Other IP addresses are more dubious, such as Tor exit nodes. You aren’t concerned about any specific Tor exit IP address, because it changes randomly, so has no relationship to the attackers. Instead, if you cared about those Tor IP addresses, what you should be looking for is a dynamically updated list of Tor nodes updated daily.

And finally, they listed IP addresses of Yahoo, because attackers passed data through Yahoo servers. No, it wasn’t because those Yahoo servers had been compromised, it’s just that everyone passes things though them, like email.

A Vermont power-plant blindly dumped all those IP addresses into their sensors. As a consequence, the next morning when an employee checked their Yahoo email, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid.

Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos’s own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but similar to the reason you’d note a white car — because it happened, not because you should be on the lookout for it. In context, there’s nothing wrong with noting the file name.

But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”.

Yes, it’s true that viruses use the same filenames as essential Windows files like “svchost.exe”. That’s, generally, something you should be aware of. But that CRASHOVERRIDE did this is wholly meaningless.

What Dragos Security was actually reporting was that a “svchost.exe” with the file hash of 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus — it’s the hash that’s the important IOC. Pulling the filename out of context is just silly.
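
To make the point concrete, here’s a small sketch of matching on the hash rather than the filename; the file path is invented, and the digest is the one quoted above.

    import hashlib

    IOC_SHA1 = "79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a"

    def sha1_of(path, chunk_size=1 << 20):
        # hash the file in chunks so large binaries don't have to fit in memory
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # the name "svchost.exe" alone means nothing; the digest is the real indicator
    if sha1_of(r"C:\samples\svchost.exe") == IOC_SHA1:
        print("file matches the CRASHOVERRIDE hash IOC")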

Luckily, the DHS also provides some of the raw information provided by Dragos. But even then, there are problems: they provide it in formatted form, as HTML, PDF, or Excel documents. This corrupts the original data so that it’s no longer machine readable. For example, from their webpage, they have the following:

import “pe”
import “hash”

Among the problems are the fact that the quote marks have been altered, probably by Word’s “smart quotes” feature. In other cases, I’ve seen PDF documents get confused by the number 0 and the letter O, as if the raw data had been scanned in from a printed document and OCRed.

If this were a “threat intel” company, we’d call this snake oil. The US-CERT is using Dragos Security’s reports to promote itself, but ultimately providing negative value by mutilating the content.

This, ultimately, causes a lot of harm. The press trusted their content. So does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations; I meet them at hacker cons and am fascinated by their stories. But institutionally, they are dumbed down to the same level as these US-CERT reports, with the smart people marginalized.

There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. Examples of what not to do are giving power to people like Obama’s cyberczar, Michael Daniels, who once claimed his lack of technical knowledge was a bonus, because it allowed him to see the strategic picture instead of getting distracted by details.

Weekly roundup: Potpourri

Post Syndicated from Eevee original https://eev.ee/dev/2017/06/12/weekly-roundup-potpourri/

  • potluck: It’s about time I actually started on this! I spent a couple days squabbling with JavaScript frameworks, but in the end I gave up and decided to just build it on my existing LÖVE code. Trying out a new thing is nice, but maybe not when I have a somewhat complicated game in mind that I want to build as soon as possible.

  • fox flux: I made a font! Also drew a ton of player sprites and made my grass much prettier. I’m going to need to start thinking about environmental art soon, and I’m kinda dreading it because I have no idea what I’m doing there.

  • art: I did some (unfinished) modelling and painted some character art (warning: this pic is fine, but the rest of the account is NSFW).

  • blog: I wrote some thoughts about teaching technical subjects.

For the first time in possibly my entire life, I feel like I’m a little ahead of the game! The month isn’t even half over and I’ve already done some obligatory stuff, finished off one languishing task, and made some decent inroads into several other things. Nice.

Copyright Holders Keep Targeting Dead Torrent Sites

Post Syndicated from Ernesto original https://torrentfreak.com/copyright-holders-keep-targeting-dead-torrent-sites-170611/

Over the past year several major torrent sites have shut down, causing quite an uproar among file-sharers.

Interestingly, however, several copyright holders still appear to think that these sites are alive and kicking. That is, judging from the takedown notices they send to Google.

Publisher Penguin Random House is particularly forgetful. Through its anti-piracy partner Digimarc, the company has reported hundreds of ‘infringing’ KickassTorrents URLs. Not only was KAT shut down last summer, the reported URLs are no longer listed in Google’s search results either.

Penguin is not alone though. Other rightsholders such as Sony Music, Dreamroom Productions, Taylor & Francis Group, The University of Chicago Press and many others have made the same mistakes recently.

Over the past month alone Google has received 1,340 takedown notices for Kat.cr URLs and an additional 775 for the Kat.ph domain name.

The problem is not limited to KAT either. Torrentz.eu, another major torrent site that went offline last summer, is still being targeted as well.

For example, earlier this week Sony Pictures asked Google to remove a Torrentz.eu URL that linked to the series Community, even though it is no longer indexed. In just one month copyright holders sent Google 4,960 takedown requests for “dead” Torrentz URLs.

Recent takedown requests for Torrentz.eu

Apparently, the reporting outfits have failed to adjust their piracy monitoring bots for the changing torrent landscape.

The mistakes are likely due to automated keyword filters that scour sites and forums for links to hosting services. These bots don’t bother to check whether Google actually indexes the content, nor do they remove dead sites from their system.

While targeting dead KAT and Torrentz links is bad enough, things can get worse.

The iconic torrent search engine isoHunt.com shut down following an MPAA lawsuit in 2013, well over three years ago. Nonetheless, rightsholders still send Google takedown notices for the site, more than a dozen a month actually.

Or what about BTJunkie? This torrent indexer closed its doors voluntarily more than half a decade ago. Dead or not, some copyright holders still manage to find infringing links in some of the darkest corners of the Internet.

Apparently, torrent users are far quicker to adapt to the changing landscape than the monitoring outfits of some copyright holders…


Teaching tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/10/teaching-tech/

A sponsored post from Manishearth:

I would kinda like to hear about any thoughts you have on technical teaching or technical writing. Pedagogy is something I care about. But I don’t know how much you do, so feel free to ignore this suggestion 🙂

Good news: I care enough that I’m trying to write a sorta-kinda-teaching book!

Ironically, one of the biggest problems I’ve had with writing the introduction to that book is that I keep accidentally rambling on for pages about problems and difficulties with teaching technical subjects. So maybe this is a good chance to get it out of my system.

Phaser

I recently tried out a new thing. It was Phaser, but this isn’t a dig on them in particular, just a convenient example fresh in my mind. If anything, they’re better than most.

As you can see from Phaser’s website, it appears to have tons of documentation. Two of the six headings are “LEARN” and “EXAMPLES”, which seems very promising. And indeed, Phaser offers:

  • Several getting-started walkthroughs
  • Possibly hundreds of examples
  • A news feed that regularly links to third-party tutorials
  • Thorough API docs

Perfect. Beautiful. Surely, a dream.

Well, almost.

The examples are all microscopic, usually focused around a single tiny feature — many of them could be explained just as well with one line of code. There are a few example games, but they’re short aimless demos. None of them are complete games, and there’s no showcase either. Games sometimes pop up in the news feed, but most of them don’t include source code, so they’re not useful for learning from.

Likewise, the API docs are just API docs, leading to the sorts of problems you might imagine. For example, in a few places there’s a mention of a preUpdate stage that (naturally) happens before update. You might rightfully wonder what kinds of things happen in preUpdate — and more importantly, what should you put there, and why?

Let’s check the API docs for Phaser.Group.preUpdate:

The core preUpdate – as called by World.

Okay, that didn’t help too much, but let’s check what Phaser.World has to say:

The core preUpdate – as called by World.

Ah. Hm. It turns out World is a subclass of Group and inherits this method — and thus its unaltered docstring — from Group.

I did eventually find some brief docs attached to Phaser.Stage (but only by grepping the source code). It mentions what the framework uses preUpdate for, but not why, and not when I might want to use it too.


The trouble here is that there’s no narrative documentation — nothing explaining how the library is put together and how I’m supposed to use it. I get handed some brief primers and a massive reference, but nothing in between. It’s like buying an O’Reilly book and finding out it only has one chapter followed by a 500-page glossary.

API docs are great if you know specifically what you’re looking for, but they don’t explain the best way to approach higher-level problems, and they don’t offer much guidance on how to mesh nicely with the design of a framework or big library. Phaser does a decent chunk of stuff for you, off in the background somewhere, so it gives the strong impression that it expects you to build around it in a particular way… but it never tells you what that way is.

Tutorials

Ah, but this is what tutorials are for, right?

I confess I recoil whenever I hear the word “tutorial”. It conjures an image of a uniquely useless sort of post, which goes something like this:

  1. Look at this cool thing I made! I’ll teach you how to do it too.

  2. Press all of these buttons in this order. Here’s a screenshot, which looks nothing like what you have, because I’ve customized the hell out of everything.

  3. You did it!

The author is often less than forthcoming about why they made any of the decisions they did, where you might want to try something else, or what might go wrong (and how to fix it).

And this is to be expected! Writing out any of that stuff requires far more extensive knowledge than you need just to do the thing in the first place, and you need to do a good bit of introspection to sort out something coherent to say.

In other words, teaching is hard. It’s a skill, and it takes practice, and most people blogging are not experts at it. Including me!


With Phaser, I noticed that several of the third-party tutorials I tried to look at were 404s — sometimes less than a year after they were linked on the site. Pretty major downside to relying on the community for teaching resources.

But I also notice that… um…

Okay, look. I really am not trying to rag on this author. I’m not. They tried to share their knowledge with the world, and that’s a good thing, something worthy of praise. I’m glad they did it! I hope it helps someone.

But for the sake of example, here is the most recent entry in Phaser’s list of community tutorials. I have to link it, because it’s such a perfect example. Consider:

  • The post itself is a bulleted list of explanation followed by a single contiguous block of 250 lines of source code. (Not that there’s anything wrong with bulleted lists, mind you.) That code contains zero comments and zero blank lines.

  • This is only part two in what I think is a series aimed at beginners, yet the title and much of the prose focus on object pooling, a performance hack that’s easy to add later and that’s almost certainly unnecessary for a game this simple. There is no explanation of why this is done; the prose only says you’ll understand why it’s critical once you add a lot more game objects.

  • It turns out I only have two things to say here so I don’t know why I made this a bulleted list.

In short, it’s not really a guided explanation; it’s “look what I did”.

And that’s fine, and it can still be interesting. I’m not sure English is even this person’s first language, so I’m hardly going to criticize them for not writing a novel about platforming.

The trouble is that I doubt a beginner would walk away from this feeling very enlightened. They might be closer to having the game they wanted, so there’s still value in it, but it feels closer to having someone else do it for them. And an awful lot of tutorials I’ve seen — particularly of the “post on some blog” form (which I’m aware is the genre of thing I’m writing right now) — look similar.

This isn’t some huge social problem; it’s just people writing on their blog and contributing to the corpus of written knowledge. It does become a bit stickier when a large project relies on these community tutorials as its main set of teaching aids.


Again, I’m not ragging on Phaser here. I had a slightly frustrating experience with it, coming in knowing what I wanted but unable to find a description of the semantics anywhere, but I do sympathize. Teaching is hard, writing documentation is hard, and programmers would usually rather program than do either of those things. For free projects that run on volunteer work, and in an industry where anything other than programming is a little undervalued, getting good docs written can be tricky.

(Then again, Phaser sells books and plugins, so maybe they could hire a documentation writer. Or maybe the whole point is for you to buy the books?)

Some pretty good docs

Python has pretty good documentation. It introduces the language with a tutorial, then documents everything else in both a library and language reference.

This sounds an awful lot like Phaser’s setup, but there’s some considerable depth in the Python docs. The tutorial is highly narrative and walks through quite a few corners of the language, stopping to mention common pitfalls and possible use cases. I clicked an arbitrary heading and found a pleasant, informative read that somehow avoids being bewilderingly dense.

The API docs also take on a narrative tone — even something as humble as the collections module offers numerous examples, use cases, patterns, recipes, and hints of interesting ways you might extend the existing types.

I’m being a little vague and hand-wavey here, but it’s hard to give specific examples without just quoting two pages of Python documentation. Hopefully you can see right away what I mean if you just take a look at them. They’re good docs, Bront.

I’ve likewise always enjoyed the SQLAlchemy documentation, which follows much the same structure as the main Python documentation. SQLAlchemy is a database abstraction layer plus ORM, so it can do a lot of subtly intertwined stuff, and the complexity of the docs reflects this. Figuring out how to do very advanced things correctly, in particular, can be challenging. But for the most part it does a very thorough job of introducing you to a large library with a particular philosophy and how to best work alongside it.

I softly contrast this with, say, the Perl documentation.

It’s gotten better since I first learned Perl, but Perl’s docs are still a bit of a strange beast. They exist as a flat collection of manpage-like documents with terse names like perlootut. The documentation is certainly thorough, but much of it has a strange… allocation of detail.

For example, perllol — the explanation of how to make a list of lists, which somehow merits its own separate documentation — offers no fewer than nine similar variations of the same code for reading a file into a nested list of the words on each line. Where Python offers examples for a variety of different problems, Perl shows you a lot of subtly different ways to do the same basic thing.
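
For contrast, the Python-side treatment of that exact task would almost certainly show a single idiomatic version and move on. A rough sketch of my own, with a placeholder filename:

    # One list per line, each holding that line's words.
    with open("words.txt") as f:    # "words.txt" is just a placeholder name
        lines = [line.split() for line in f]

    print(lines[0])    # the words on the first line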

A similar problem is that Perl’s docs sometimes offer far too much context; consider the references tutorial, which starts by explaining that references are a powerful “new” feature in Perl 5 (first released in 1994). It then explains why you might want to nest data structures… from a Perl 4 perspective, thus explaining why Perl 5 is so much better.

Some stuff I’ve tried

I don’t claim to be a great teacher. I like to talk about stuff I find interesting, and I try to do it in ways that are accessible to people who aren’t lugging around the mountain of context I already have. This being just some blog, it’s hard to tell how well that works, but I do my best.

I also know that I learn best when I can understand what’s going on, rather than just seeing surface-level cause and effect. Of course, with complex subjects, it’s hard to develop an understanding before you’ve seen the cause and effect a few times, so there’s a balancing act between showing examples and trying to provide an explanation. Too many concrete examples feel like rote memorization; too much abstract theory feels disconnected from anything tangible.

The attempt I’m most pleased with is probably my post on Perlin noise. It covers a fairly specific subject, which made it much easier. It builds up one step at a time from scratch, with visualizations at every point. It offers some interpretations of what’s going on. It clearly explains some possible extensions to the idea, but distinguishes those from the core concept.

It is a little math-heavy, I grant you, but that was hard to avoid with a fundamentally mathematical topic. I had to be economical with the background information, so I let the math be a little dense in places.
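
If you want the actual explanation, go read the post; it has pictures. But purely as a taste of the building block it spends its time on, here’s a heavily condensed 1D version — my own sketch, not code from the post:

    import math
    import random

    def gradient(ix, seed=0):
        # Deterministic pseudo-random slope in [-1, 1] for lattice point ix.
        return random.Random(ix * 2654435761 + seed).uniform(-1.0, 1.0)

    def fade(t):
        # Perlin's smoothing curve; flattens out at t = 0 and t = 1.
        return t * t * t * (t * (t * 6 - 15) + 10)

    def perlin1d(x, seed=0):
        x0 = math.floor(x)
        x1 = x0 + 1
        # Each neighboring lattice point pulls the value up or down,
        # proportional to its gradient and the distance from it.
        v0 = gradient(x0, seed) * (x - x0)
        v1 = gradient(x1, seed) * (x - x1)
        # Blend the two influences with the fade curve.
        t = fade(x - x0)
        return v0 + t * (v1 - v0)

    # Values stay in [-1, 1], hit exactly 0 at integer positions,
    # and vary smoothly in between.
    print([round(perlin1d(i / 4), 3) for i in range(9)])

The actual post builds this up much more gently and with visuals at every step, but the core trick is the same: random gradients at lattice points, smooth interpolation between them.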

But the best part about it by far is that I learned a lot about Perlin noise in the process of writing it. In several places I realized I couldn’t explain what was going on in a satisfying way, so I had to dig deeper into it before I could write about it. Perhaps there’s a good guideline hidden in there: don’t try to teach as much as you know?

I’m also fairly happy with my series on making Doom maps, though they meander into tangents a little more often. It’s hard to talk about something like Doom without meandering, since it’s a convoluted ecosystem that’s grown organically over the course of 24 years and has at least three ways of doing anything.


And finally there’s the book I’m trying to write, which is sort of about game development.

One of my biggest grievances with game development teaching in particular is how often it leaves out important touches. Very few guides will tell you how to make a title screen or menu, how to handle death, how to get a Mario-style variable jump height. They’ll show you how to build a clearly unfinished demo game, then leave you to your own devices.
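
To pick on one of those: variable jump height sounds mysterious, but one common approach is simply to chop the upward velocity when the jump button is released early. Here’s a bare-bones, engine-agnostic sketch of my own, not taken from any particular guide:

    GRAVITY = 0.5        # downward acceleration per frame (made-up units)
    JUMP_SPEED = 10.0    # initial upward speed when a jump starts
    JUMP_CUT = 0.4       # fraction of upward speed kept on early release

    class Jumper:
        def __init__(self):
            self.y = 0.0            # height above the ground
            self.vy = 0.0           # vertical velocity; positive is up
            self.on_ground = True

        def press_jump(self):
            if self.on_ground:
                self.vy = JUMP_SPEED
                self.on_ground = False

        def release_jump(self):
            # The trick: releasing the button while still rising
            # chops the remaining upward velocity, so the jump ends sooner.
            if self.vy > 0:
                self.vy *= JUMP_CUT

        def step(self):
            if not self.on_ground:
                self.vy -= GRAVITY
                self.y += self.vy
                if self.y <= 0:
                    self.y, self.vy, self.on_ground = 0.0, 0.0, True

    # A quick tap reaches a much lower peak than holding the button.
    for held_frames in (3, 30):
        jumper = Jumper()
        jumper.press_jump()
        peak = 0.0
        for frame in range(200):
            if frame == held_frames:
                jumper.release_jump()
            jumper.step()
            peak = max(peak, jumper.y)
        print(f"held {held_frames} frames: peak height {peak:.1f}")

The point isn’t the exact numbers; it’s that none of this is hard, it just never seems to get written down.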

I realized that the only reliable way to show how to build a game is to build a real game, then write about it. So the book is laid out as a narrative of how I wrote my first few games, complete with stumbling blocks and dead ends and tiny bits of polish.

I have no idea how well this will work, or whether recapping my own mistakes will be interesting or distracting for a beginner, but it ought to be an interesting experiment.