Wednesday’s security updates

Post Syndicated from n8willis original

CentOS has updated kernel
(C6: TCP injection).

Debian-LTS has updated libgcrypt11 (flawed random number generation).

Fedora has updated eog (F24:
out-of-bounds write),
kernel (F23: use-after-free), mariadb (F23: multiple vulnerabilities), mingw-lcms2 (F24: heap memory leak), postgresql (F23: multiple vulnerabilities), and python (F23: proxy injection).

openSUSE has updated libidn
(Leap 42.1: multiple vulnerabilities) and kernel (13.2: multiple vulnerabilities).

Oracle has updated kernel
(O6: TCP injection).

Red Hat has updated kernel (RHEL 7.1: multiple vulnerabilities; RHEL6: TCP injection)
and qemu-kvm-rhev (RHOSP8: multiple vulnerabilities).

Scientific Linux has updated kernel (SL6: TCP injection).

Slackware has updated gnupg
(flawed random number generation), kernel (14.2: TCP injection), and libgcrypt (flawed random number generation).

You’re a (chess) wizard, Bethanie

Post Syndicated from Alex Bate original

By recreating the iconic Wizard’s Chess set from Harry Potter and the Philosopher’s Stone (sorry America, it’s Philosopher, not Sorcerer), 18-year-old Jambassador Bethanie Fentiman has become my new hero.

wizard's chess

Ron, you don’t suppose this is going to be like… ‘real’ wizard’s chess, do you?

Playing on an idea she’d had last year, Bethanie decided to recreate the chess board from the book/movie as part of her A-Level coursework (putting everything I ever created at school to utter shame), utilising the knowledge and support of her fellow Jammers from the Kent Raspberry Jam community.

After searching through the internet for inspiration, she stumbled upon an Instructables guide for building an Arduino-powered chess robot, which gave her a basis on which to build her system of stepper motors, drawer runners, gears, magnets, and so on.

Wizard's Chess

Harry Potter and the ‘it’s almost complete’ Wizard’s Chess board

The next issue she faced in her quest for ultimate wizarding glory was figuring out how to actually play chess! Without any chess-playing knowhow, Bethanie either needed to learn quickly or… cheat a bit. So she looked up the legal moves of each piece and coded them into the program, allowing her to move on with the project without monotonously learning the rules of the game.

wizard's chess

Hermione would never approve.

There were a few snags along the way, mainly due to problems with measuring. But once assembled, everything was looking good.

Wizard's Chess

We’ve got our fingers crossed that Bethanie replaces the pieces in time with some battling replicas from the movie.

On a minimal budget, Bethanie procured her chess pieces from a local charity shop, managing to get the board itself laser-cut for free, thanks to her school’s technology department.

Now complete, the board has begun its own ‘Wizard Chess Tour’, visiting various Raspberry Jams across the country. Its first stop was in Harlow, and more recently, Bethanie has taken the board to the August Covent Garden Jam.

Wizard's Chess gif


You can find out more about the Wizard’s Chess board via the Kent Jams Twitter account and website. And if you’d like the board to visit your own Raspberry Jam event… send Bethanie word by owl and see what she says!


The post You’re a (chess) wizard, Bethanie appeared first on Raspberry Pi.

KDevelop 5.0 released

Post Syndicated from n8willis original

Version 5.0.0 of the KDevelop integrated development environment (IDE) has been released, marking the end of a two-year development cycle. The highlight is a move to Clang for C and C++ support: “The most prominent change certainly is the move away from our own, custom C++ analysis engine. Instead, C and C++ code analysis is now performed by clang.” The announcement goes on to describe other benefits of using Clang, such as more accurate diagnostics and suggested fixes for many syntax errors. KDevelop has also been ported to KDE Frameworks 5 and Qt 5, which opens up the possibility of Windows releases down the line.

AWS Week in Review – Coming Back With Your Help!

Post Syndicated from Jeff Barr original

Back in 2012 I realized that something interesting happened in AWS-land just about every day. In contrast to the periodic bursts of activity that were the norm back in the days of shrink-wrapped software, the cloud became a place where steady, continuous development took place.

In order to share all of this activity with my readers and to better illustrate the pace of innovation, I published the first AWS Week in Review in the spring of 2012. The original post took all of about 5 minutes to assemble, post and format. I got some great feedback on it and I continued to produce a steady stream of new posts every week for over 4 years. Over the years I added more and more content generated within AWS and from the ever-growing community of fans, developers, and partners.

Unfortunately, finding, saving, and filtering links, and then generating these posts grew to take a substantial amount of time. I reluctantly stopped writing new posts early this year after spending about 4 hours on the post for the week of April 25th.

After receiving dozens of emails and tweets asking about the posts, I gave some thought to a new model that would be open and more scalable.

Going Open
The AWS Week in Review is now a GitHub project, and I am inviting contributors (AWS fans, users, bloggers, and partners) to contribute.

Every Monday morning I will review and accept pull requests for the previous week, aiming to publish the Week in Review by 10 AM PT. In order to keep the posts focused and highly valuable, I will approve pull requests only if they meet our guidelines for style and content.

At that time I will also create a file for the week to come, so that you can populate it as you discover new and relevant content.

Content & Style Guidelines
Here are the guidelines for making contributions:

  • Relevance – All contributions must be directly related to AWS.
  • Ownership – All contributions remain the property of the contributor.
  • Validity – All links must be to publicly available content (links to free, gated content are fine).
  • Timeliness – All contributions must refer to content that was created on the associated date.
  • Neutrality – This is not the place for editorializing. Just the facts / links.

I generally stay away from generic news about the cloud business, and I post benchmarks only with the approval of my colleagues.

And now a word or two about style:

  • Content from this blog is generally prefixed with “I wrote about POST_TITLE” or “We announced that TOPIC.”
  • Content from other AWS blogs is styled as “The BLOG_NAME wrote about POST_TITLE.”
  • Content from individuals is styled as “PERSON wrote about POST_TITLE.”
  • Content from partners and ISVs is styled as “The BLOG_NAME wrote about POST_TITLE.”

There’s room for some innovation and variation to keep things interesting, but keep it clean and concise. Please feel free to review some of my older posts to get a sense for what works.

Over time we might want to create a more compelling visual design for the posts. Your ideas (and contributions) are welcome.

Over the years I created the following sections:

  • Daily Summaries – content from this blog, other AWS blogs, and everywhere else.
  • New & Notable Open Source.
  • New SlideShare Presentations.
  • New YouTube Videos including APN Success Stories.
  • New AWS Marketplace products.
  • New Customer Success Stories.
  • Upcoming Events.
  • Help Wanted.

Some of this content comes to my attention via RSS feeds. I will post the OPML file that I use in the GitHub repo and you can use it as a starting point. The New & Notable Open Source section is derived from a GitHub search for aws. I scroll through the results and pick the 10 or 15 items that catch my eye. I also watch /r/aws and Hacker News for interesting and relevant links and discussions.

Over time, it is possible that groups or individuals may become the primary contributor for a section. That’s fine, and I would be thrilled to see this happen. I am also open to the addition of new sections, as long as they are highly relevant to AWS.

Earlier this year I tried to automate the process, but I did not like the results. You are welcome to give this a shot on your own. I do want to make sure that we continue to exercise human judgement in order to keep the posts as valuable as possible.

Let’s Do It
I am super excited about this project and I cannot wait to see those pull requests coming in. Please let me know (via a blog comment) if you have any suggestions or concerns.

I should note up front that I am very new to Git-based collaboration and that this is going to be a learning exercise for me. Do not hesitate to let me know if there’s a better way to do things!



Tuesday’s security updates

Post Syndicated from n8willis original

Arch Linux has updated libgcrypt (information disclosure).

Fedora has updated kernel
(F24: use-after-free vulnerability), pagure (F24: cross-site scripting), and postgresql (F24: multiple vulnerabilities).

Red Hat has updated qemu-kvm-rhev (RHEL7 OSP5; RHEL7 OSP7; RHEL6 OSP5; RHEL7 OSP6:
multiple vulnerabilities).

SUSE has updated MozillaFirefox (SLE12: multiple vulnerabilities).

Vote for the top 20 Raspberry Pi projects in The MagPi!

Post Syndicated from Rob Zwetsloot original

Although this Thursday will see the release of issue 49 of The MagPi, we’re already hard at work putting together our 50th issue spectacular. As part of this issue we’re going to be covering 50 of the best Raspberry Pi projects ever and we want you, the community, to vote for the top 20.

Below we have listed the 30 projects that we think represent the best of the best. All we ask is that you vote for your favourite. We will have a few special categories with some other amazing projects in the final article, but if you think we’ve missed out something truly excellent, let us know in the comments. Here’s the list so you can remind yourselves of the projects, with the poll posted at the bottom.

From paper boats to hybrid sports cars

From paper boats to hybrid sports cars

  1. SeeMore – a huge sculpture of 256 Raspberry Pis connected as a cluster
  2. BeetBox – beets (vegetable) you can use to play sick beats (music)
  3. Voyage – 300 paper boats (actually polypropylene) span a river, and you control how they light up
  4. Aquarium – a huge aquarium with Pi-powered weather control simulating the environment of the Cayman Islands
  5. ramanPi – a Raman spectrometer used to identify different types of molecules
  6. Joytone – an electronic musical instrument operated by 72 back-lit joysticks
  7. Internet of LEGO – a city of LEGO, connected to and controlled by the internet
  8. McMaster Formula Hybrid – a Raspberry Pi provides telemetry on this hybrid racing car
  9. PiGRRL – Adafruit show us how to make an upgraded, 3D-printed Game Boy
  10. Magic Mirror – check out how you look while getting some at-a-glance info about your day
Dinosaurs, space, and modern art

Dinosaurs, space, and modern art

  1. 4bot – play a game of Connect 4 with a Raspberry Pi robot
  2. Blackgang Chine dinosaurs – these theme park attractions use the diminutive Pi to make them larger than life
  3. Sound Fighter – challenge your friend to the ultimate Street Fight, controlled by pianos
  4. Astro Pi – Raspberry Pis go to space with code written by school kids
  5. Pi in the Sky – Raspberry Pis go to near space and send back live images
  6. BrewPi – a microbrewery controlled by a micro-computer
  7. LED Mirror – a sci-fi effect comes to life as you’re represented on a wall of lights
  8. Raspberry Pi VCR – a retro VCR-player is turned into a pink media playing machine
  9. #OZWall – Contemporary art in the form of many TVs from throughout the ages
  10. #HiutMusic – you choose the music for a Welsh denim factory through Twitter
Robots and arcade machines make the cut

Robots and arcade machines make the cut

  1. CandyPi – control a jelly bean dispenser from your browser without the need to twist the dial
  2. Digital Zoetrope – still images rotated to create animation, updated for the 21st century
  3. LifeBox – create virtual life inside this box and watch it adapt and survive
  4. Coffee Table Pi – classy coffee table by name, arcade cabinet by nature. Tea and Pac-Man, anyone?
  5. Raspberry Pi Notebook – this handheld Raspberry Pi is many people’s dream machine
  6. Pip-Boy 3000A – turn life into a Bethesda RPG with this custom Pip-Boy
  7. Mason Jar Preserve – Mason jars are used to preserve things, so this one is a beautiful backup server to preserve your data
  8. Pi glass – Google Glass may be gone but you can still make your own amazing Raspberry Pi facsimile
  9. DoodleBorg – a powerful PiBorg robot that can tow a caravan
  10. BigHak – a Big Trak that is truly big: it’s large enough for you to ride in

Now you’ve refreshed your memory of all these amazing projects, it’s time to vote for the one you think is best!


The vote is running over the next two weeks, and the results will be in The MagPi 50. We’ll see you again on Thursday for the release of the excellent MagPi 49: don’t miss it!

The post Vote for the top 20 Raspberry Pi projects in The MagPi! appeared first on Raspberry Pi.

Android 7.0 “Nougat” released

Post Syndicated from corbet original

Google has announced
that the Android 7.0 release has started rolling out to recent-model Nexus
devices. “It introduces a brand new JIT/AOT compiler to improve
software performance, make app installs faster, and take up less
storage. It also adds platform support for Vulkan, a low-overhead,
cross-platform API for high-performance, 3D graphics. Multi-Window support
lets users run two apps at the same time, and Direct Reply so users can
reply directly to notifications without having to open the app. As always,
Android is built with powerful layers of security and encryption to keep
your private data private, so Nougat brings new features like File-based
encryption, seamless updates, and Direct Boot.”

See this page
for a video-heavy description of new features.

IGHASHGPU – GPU Based Hash Cracking – SHA1, MD5 & MD4

Post Syndicated from Darknet original

IGHASHGPU is an efficient and comprehensive command line GPU based hash cracking program that enables you to retrieve SHA1, MD5 and MD4 hashes by utilising ATI and nVidia GPUs. It even works with salted hashes, making it useful for MS-SQL, Oracle 11g, NTLM passwords and others that use salts. IGHASHGPU is meant to function with […]


Testing, for people who hate testing

Post Syndicated from Eevee original

I love having tests.

I hate writing them.

It’s tedious. It’s boring. It’s hard, sometimes harder than writing the code. Worst of all, it doesn’t feel like it accomplishes anything.

So I usually don’t do it. I know, I know. I should do it. I should also get more exercise and eat more vegetables.

The funny thing is, the only time I see anyone really praise the benefits of testing is when someone who’s really into testing extols the virtues of test-driven development. To me, that’s like trying to get me to eat my veggies by telling me how great veganism is. If I don’t want to do it at all, trying to sell me on an entire lifestyle is not going to work. I need something a little more practical, like “make smoothies” or “technically, chips are a vegetable”.

Here’s the best way I’ve found to make test smoothies. I’ll even deliberately avoid any testing jargon, since no one can agree on what any of it means anyway.

Hating testing less

Love your test harness

Your test harness is the framework that finds your tests, runs your tests, and (in theory) helps you write tests.

If you hate your test harness, you will never enjoy writing tests. It’ll always be a slog, and you’ll avoid it whenever you can. Shop around and see if you can find something more palatable.

For example, Python’s standard solution is the stdlib unittest module. It’s a Java-inspired monstrosity that has you write nonsense like this:

import unittest

from mymodule import whatever

class TestWhatever(unittest.TestCase):
    def test_whatever(self):
        self.assertIn(whatever(), {1, 2, 3})

It drives me up the wall that half of this trivial test is weird boilerplate. The class itself is meaningless, and the thing I really want to test is obscured behind one of several dozen assert* methods.

These are minor gripes, but minor gripes make a big difference when they apply to every single test — and when I have to force myself to write tests in the first place. (Maybe they don’t bother you, in which case, keep using unittest!) So I use py.test instead:

from mymodule import whatever

def test_whatever():
    assert whatever() in {1, 2, 3}

If the test fails, you still get useful output, including diffs of strings or sequences:

    def test_whatever():
>       assert whatever() in {1, 2, 3}
E       assert 4 in set([1, 2, 3])
E        +  where 4 = whatever()

You really, really don’t want to know how this works. It does work, and that’s all I care about.

py.test also has some bells and whistles like the ability to show locals when a test fails, hooks for writing your own custom asserts, and bunches of other hooks and plugins. But the most important thing to me is that it minimizes the friction involved in writing tests to as little as possible. I can pretty much copy/paste whatever I did in the REPL and sprinkle asserts around.
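One such whistle is parametrization, which turns a table of cases into individually reported tests with almost no boilerplate. A sketch (`is_even` is a stand-in for whatever you're actually testing):

```python
import pytest

def is_even(n):
    # Stand-in for the real function under test.
    return n % 2 == 0

# One decorated function expands into four separately-reported tests,
# so a failure tells you exactly which case broke.
@pytest.mark.parametrize("n, expected", [
    (0, True),
    (1, False),
    (2, True),
    (-3, False),
])
def test_is_even(n, expected):
    assert is_even(n) == expected
```

Each case shows up in the test report on its own, so adding a regression case is one line, not one function.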

If writing tests is hard, that might be a bug

I’ve seen tests that do some impressive acrobatics just to construct core objects, and likewise heard people grumble that they don’t want to write tests because creating core objects is so hard.

The thing is, tests are just code. If you have a hard time constructing your own objects with some particular state, it might be a sign that your API is hard to use!

“Well, we never added a way to do this because there’s no possible reason anyone would ever want it.” But you want it, right now. You’re consuming your own API, complaining that it can’t do X, and then not adding the ability to do X because no one would ever need X.

One of the most underappreciated parts of writing tests is that they force you to write actual code that uses your interfaces. If doing basic setup is a slog, fix those interfaces.
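As a sketch of what that fix can look like (the `Order` class and its fields are invented for illustration): if tests need ten lines of setup to build a basic object, a convenience constructor with sensible defaults helps your tests and your real callers alike.

```python
class Order:
    def __init__(self, items, customer, shipping_address):
        self.items = items
        self.customer = customer
        self.shipping_address = shipping_address

    @classmethod
    def with_defaults(cls, items=()):
        # A convenience constructor born from test-writing pain.
        # If tests need this, real callers probably will too.
        return cls(items=list(items), customer="anonymous",
                   shipping_address=None)

def test_empty_order():
    order = Order.with_defaults()
    assert order.items == []
    assert order.customer == "anonymous"
```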

Aggressively make your test suite fast and reliable

I’ve worked with test suites that took hours to run, if they ran at all.

The tradeoff is obvious: these test suites were fairly thorough, and speed was the cost of that thoroughness. For critical apps, that might be well worth it. For very large apps, that might be unavoidable.

For codebases that are starting out with no tests at all, it’s a huge source of testing pain. Your test suite should be as fast as possible, or you won’t run it, and then you’ll (rightfully!) convince yourself that there’s no point in writing even more tests that you won’t run.

If your code is just slow, consider this an excellent reason to make it faster. If you have a lot of tests, see if you can consolidate some.

Or if you have a handful of especially slow tests, I have a radical suggestion: maybe just delete them. If they’re not absolutely critical, and they’re keeping you from running your test suite constantly, they may not be worth the cost. Deleting a test drops your coverage by a fraction of a percent; never running your tests drops your coverage to zero.

Flaky tests are even worse. Your tests should always, always, pass completely. If you have a test that fails 10% of the time and you just can’t figure out why, disable or delete it. It’s not telling you anything useful, and in the meantime it’s training you to ignore when your tests fail. If a failing test isn’t an immediate red alert, there’s no point in having tests at all.
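In py.test, disabling a flaky test is one decorator, which keeps the test around to fix later without letting it poison your runs (the test name and reason here are invented):

```python
import pytest

# A skipped test still appears in the report as "skipped", so it stays
# visible without training anyone to ignore red test runs.
@pytest.mark.skip(reason="flaky: fails ~10% of runs; disabled until diagnosed")
def test_background_sync_completes():
    ...
```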

Run it automatically

Have you seen those GitHub projects where pull requests automatically get a thing saying whether the test suite passed or failed? Neat, right? It’s done through Travis, and it’s surprisingly painless to set up. Once it is set up, someone else’s computer will run your tests all the damn time and bug you when they fail. It’s really annoying, and really great.

(There’s also Coveralls, which measures your test coverage. Neat, but if you’re struggling to write tests at all, a looming reminder of your shame may not be the most helpful thing.)

I recently ran into an interesting problem in the form of Pelican, the Python library that generates this blog. It has tests for the fr_FR locale, and the test suite skips them if you don’t have that locale set up… but the README tells you that before you submit a pull request, you should generate the locale so you can run the tests. Naturally, I missed this, didn’t have fr_FR, thought I passed all the tests, and submitted a pull request that instantly failed on Travis.

Skipping tests because optional dependencies are missing is a tricky affair. When you write them, you think “no point claiming the test failed when it doesn’t indicate problems with the actual codebase” — when I run them, I think “oh, these tests were skipped, so they aren’t really important”.

What to test

Test what you manually test

When you’re working on a big feature or bugfix, you develop a little ritual for checking whether it’s done. You crack open the REPL and repeat the same few lines, or you run a script you hacked together, or you launch your app and repeat the same few actions. It gets incredibly tedious.

You may have similar rituals just before a big release: run the thing, poke around a bit, try common stuff, be confident that at least the basics work.

These are the best things to test, because you’re already testing them! You can save yourself a lot of anguish if you convert these rituals into code. As an added benefit, other people can then repeat your rituals without having to reverse-engineer them or invent their own, and your test suite will serve as a rough description of what you find most important.

Sometimes, this is hard. Give it a try anyway, even if (especially if) you don’t have a test suite at all.

Sometimes, this is really hard. Write tests for the parts you can, at least. You can always sit down and work the rest out later.
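For instance, a REPL ritual like “call the function on a few familiar inputs and eyeball the output” translates almost verbatim into a test (`slugify` here is a made-up stand-in for whatever you keep poking at by hand):

```python
import re

def slugify(title):
    # Stand-in for the function you keep checking by hand in the REPL.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The manual ritual, written down once so everyone can repeat it.
def test_slugify_common_titles():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  everywhere  ") == "spaces-everywhere"
```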

Test what’s likely to break

Some things are easy to test. If you have a function that checks whether a number is even, oh boy! You can write like fifty tests for that, no problem. Now you have fifty more tests! Good job!

That’s great, and feel free to write them all, but… how likely is it that anyone will ever change that function? It does one trivial thing, it can be verified correct at a glance, it doesn’t depend on anything else, and it almost certainly can’t be improved upon.

The primary benefit of testing is a defense against change. When code changes, tests help convince you that it still works correctly. Testing code that has no reason to change doesn’t add much more value to your test suite.

This isn’t to say that you shouldn’t test trivial functions — especially since we can be really bad at guessing what’ll change in the future — but when you have limited willpower, they’re not the most efficient places to spend it.

Knowing what to test is the same kind of artform as knowing what to comment, and I think many of the same approaches apply. Test obscure things, surprising special cases. Test things that were tricky to get right, things that feel delicate. Test things that are difficult to verify just by reading the code. If you feel the need to explain yourself, it’s probably worth testing.

Test the smallest things you can possibly test

It’s nice to have some tests that show your code works from a thousand miles up. Unfortunately, these also tend to be the slowest (because they do a lot), most brittle (because any small change might break many such tests at once), least helpful (because a problem might come from anywhere), and least efficient (because two such tests will run through much of the same code).

People who are into testing like to go on about unit tests versus functional tests, or maybe those are integration tests, or are they acceptance tests, or end-to-end tests, or… christ.

Forget the categories. You already know the shape of your own codebase: it’s a hierarchy of lumps that each feel like they relate to a particular concept, even if the code organization doesn’t reflect that. You’re writing a disassembler, and there’s some code in various places that deals with jumps and labels? That’s a lump, even if the code isn’t contiguous on disk. You, the human, know where it is.

So write your tests around those lumps, and make them as small as possible. Maybe you still have to run your entire disassembler to actually run certain tests, but you can still minimize the extra work: disable optional features and make the test as simple as possible. If you ever make changes to jumps or labels, you’ll know exactly which tests to look for; if those tests ever break, you’ll have a good idea of why.

Don’t get me wrong; I know it’s reassuring to have a mountain of tests that run through your entire app from start to finish, just as a user would. But in my experience, those tests break all the time without actually telling you anything you didn’t already know, and having more than a handful of them can bog down the entire test suite. Hesitate before each one you write.

How to test

Test output, avoid side effects

Testing code should be easy. You make some input; you feed it to a function; you check that the output is correct. The precise nature of “input” and “output” can easily spiral out of control, but at least the process is simple enough.

Testing code that has side effects is a huge, huge pain in the ass. (And since tests are just code, that means using code that has side effects is also a pain in the ass.)

“Side effect” here means exactly what it sounds like: you feed input into a function, you get some output, and in the meantime something elsewhere has changed. Or in a similar vein, the behavior of the function depends on something other than what’s passed into the function. The most common case is global record-keeping, like app-wide configuration that sits at the module level somewhere.

It sucks, it’s confusing, avoid avoid avoid.

So… don’t use globals, I guess?

I’ve heard a lot of programmers protest that there’s nothing very difficult to understand about one global, and I’m going to commit heresy here by admitting: that’s probably true! The extra cognitive burden of using and testing code that relies on a single global is not particularly high.

But one global begets another, and another. Or perhaps your “one” global mutates into a massive sprawling object with tendrils in everything. Soon, you realize you’ve written the Doom renderer and you have goofy obscure bugs because it’s so hard to keep track of what’s going on at any given time.

Similar to the much-maligned C goto, globals aren’t an infectious and incurable toxin that will instantly and irreparably putrefy your codebase. They just have a cost, and you’ll only need to pay it sometime down the road, and it’s usually not worth the five minutes of effort saved. If you must introduce a global, always take a moment to feel really bad about what you’re doing.
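As an illustration of the cost (the pricing functions are made up), the testable version usually costs just one extra parameter:

```python
# A module-level global makes every caller, and every test, depend on
# hidden state that has to be set up and torn down:
CONFIG = {"currency": "USD"}

def format_price_global(amount):
    return f"{amount:.2f} {CONFIG['currency']}"

# Passing the dependency in makes the function trivially testable,
# with no setup and no risk of one test leaking state into another:
def format_price(amount, currency="USD"):
    return f"{amount:.2f} {currency}"

def test_format_price():
    assert format_price(3.5) == "3.50 USD"
    assert format_price(3.5, currency="EUR") == "3.50 EUR"
```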

Test negative cases and edge cases

I used to work for a company. As part of the hiring process, prospective hires would be asked to implement a particular board game, complete with tests. Their solution would be dumped in my lap, for some reason, and I would unleash the harsh light of my judgment upon it.

I’m being intentionally vague because I don’t want to help anyone cheat, any more than I already am by telling you how to write tests. So let’s say the game is the most trivial of board games: tic tac toe.

A significant proportion of solutions I graded had test suites like this.

board = """
XXX
OO-
-O-
"""
assert check_winner(board) == "X"

board = """
X-X
OOO
X--
"""
assert check_winner(board) == "O"

And that was it. Two or three tests for particular winning states. (Often not even any tests for whether placing a piece actually worked, but let’s leave that aside.)

I would always mark down for this. The above tests check that your code is right, but they don’t check that your code isn’t wrong. What if there’s no winner, but your code thinks there is?

That’s much harder to reason about, I grant you! A tic tac toe board only has a relatively small handful of possible winning states, but a much larger number of possible non-winning states. But I think what really throws us is that winning is defined in the positive — “there must be three in a row” — whereas non-winning is only defined as, well, not winning. We don’t tend to think of the lack of something as being concrete.

When I took this test, I paid attention to the bugs I ran into while I was writing my code, and I thought about what could go wrong with my algorithm, and I made a few tests based on those educated guesses. So perhaps I’d check that these boards have no winner:

OO-     -O-     -X-
O-X     -O-     --X
-XX     -X-     X--

The left board has three of each symbol, but not in a row. The middle board has three in a row, but not all the same symbol. The right board has three in a row, but only if the board is allowed to wrap.

Are these cases likely to be false positives? I have no idea. All I did was consider for a moment what could go wrong, then make up some boards that would have that kind of error. (One or two of the solutions I graded even had the kinds of false positives that I’d written tests for!)
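Turned into code, those educated guesses might look like the following; `check_winner` here is a minimal stand-in so the snippet runs on its own (the graded submissions supplied their own):

```python
def check_winner(board):
    # Minimal stand-in: board is three whitespace-separated rows of X/O/-.
    rows = board.split()
    lines = (
        rows                                      # three rows
        + ["".join(col) for col in zip(*rows)]    # three columns
        + [rows[0][0] + rows[1][1] + rows[2][2],  # two diagonals
           rows[0][2] + rows[1][1] + rows[2][0]]
    )
    for line in lines:
        if line in ("XXX", "OOO"):
            return line[0]
    return None  # no winner: the case the graded suites forgot

def test_no_winner():
    # Three of each symbol, but not in a row.
    assert check_winner("OO-\nO-X\n-XX") is None
    # Three in a row, but not all the same symbol.
    assert check_winner("-O-\n-O-\n-X-") is None
    # Three in a "row" only if the board is allowed to wrap.
    assert check_winner("-X-\n--X\nX--") is None
```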

The same kind of thinking — what could I have missed? — leads me swiftly to another glaring omission from this test suite: what if there’s a tie? And, indeed, quite a few of the submissions I graded didn’t handle a tie at all. (Ties were less likely in the actual game than they are in tic tac toe, but still possible.) The game would ask you to make a move, you wouldn’t be able to, and the game would linger there forever.

Don’t just write tests to give yourself the smug satisfaction that you did something right. Write tests to catch the ways you might conceivably have done something wrong. Imagine the code you’re testing as an adversary; how might you catch it making a mistake?

If that doesn’t sound convincing, let me put this another way. Consider this hypothetical test suite for a primality test.

def test_is_prime():
    assert is_prime(2)
    assert is_prime(3)
    assert is_prime(5)
    assert is_prime(11)
    assert is_prime(17)
    assert is_prime(97)

Quick: write some code that passes these tests. Call it test-driven development practice.

Here’s what I came up with:

def is_prime(n):
    return True
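
A few negative assertions are enough to kill that degenerate solution. As a sketch (with a throwaway trial-division `is_prime` included so the snippet runs on its own):

```python
def is_prime(n):
    # A real implementation for illustration; trial division is fine here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def test_is_prime_negative():
    # `return True` passes every positive test; these catch it immediately.
    assert not is_prime(1)
    assert not is_prime(4)
    assert not is_prime(9)     # also catches "odd means prime"
    assert not is_prime(91)    # 7 * 13; catches small-divisor shortcuts
```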


A related benefit of negative tests is that they make sure your tests actually work. I’ve seen one or two tests that couldn’t reasonably verify that the output of some program was actually correct, so instead, they ran the program and checked that there were no errors. Later, something went wrong in the test suite, and the program silently didn’t run at all — which, naturally, produced no exceptions. A single test that fed in bad input and checked for an error would’ve caught this problem right away.


Tests are code. If you’re repeating yourself a lot or there’s a lot of friction for some common task, refactor. Write some helpers. See if your test harness can help you out.

Tests are code. Don’t write a bunch of magical, convoluted, brittle garbage to power your tests. If you can’t convince yourself that your tests work, how can your tests convince you that the rest of your code works? You should be more confident in your tests than in the rest of your code, yet you’ll probably spend far less time maintaining them. So err on the side of explicit and boring, even if you have to stick to repeating yourself.

Troublesome cases

External state

Testing against something outside your program sucks. With filesystems, you can make a temporary directory. With time, you can (maybe) fake it. In general, your life will be easier if you consolidate all your external state access into as few places as possible — easy to understand, easy to test, easy to swap out for some alternative implementation.
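For time, "swap out an alternative implementation" can be as simple as taking the clock as a parameter. A hedged sketch (real code might reach for a library like freezegun instead):

```python
from datetime import datetime, timedelta

def is_expired(token_created_at, now=None, ttl=timedelta(hours=1)):
    """True if the token is older than its ttl.

    `now` is injectable purely so tests don't depend on the wall clock.
    """
    if now is None:
        now = datetime.utcnow()
    return now - token_created_at > ttl

def test_expiry():
    created = datetime(2016, 8, 17, 12, 0)
    # One minute later: still valid.
    assert not is_expired(created, now=created + timedelta(minutes=1))
    # Two hours later: expired.
    assert is_expired(created, now=created + timedelta(hours=2))
```

The production code path never notices; only tests pass `now` explicitly.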

With databases, you’re just fucked. Database access pervades most code that needs to touch a database at all.

The common wisdom in the Python web development community is that you should just run your test suite against a little SQLite database. That’s a great idea, except that you’re suddenly restricted to the subset of SQL that works identically in SQLite and in your target database. The next best thing is to run against an actual instance of your target database and call it a day.
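With the standard library alone, "a little SQLite database" can literally be an in-memory one, created fresh per test so tests can't contaminate each other. (A sketch; the schema and functions here are made up for illustration.)

```python
import sqlite3

def fresh_db():
    # ":memory:" gives every test its own pristine database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    return db

def add_user(db, name):
    cur = db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return cur.lastrowid

def test_add_user():
    db = fresh_db()
    user_id = add_user(db, "eve")
    row = db.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    assert row == ("eve",)
```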

And you should probably stop there; nothing I can come up with is any better. Even for very large apps with very complex databases, that seems to be the best you can do. You might end up spending twenty minutes per test run starting up a full replicated setup and memcached and whatever else, but I don’t have any better ideas.

The problem is that database access still goes through SQL, and SQL is an entire other programming language you’re sending out over the wire. You can’t easily swap in an in-process SQL implementation — that’s called SQLite. You can hide all database access in functions with extremely long names and convoluted return values that are only called in one place, then swap out a dummy implementation for testing, but that’s really no fun at all. Plus, it doesn’t check that your SQL is actually correct.

If you’re using an ORM, you have slightly more of a chance, but I’ve never seen an ORM that can natively execute queries against in-memory data structures. (I would love to, and it seems within the realm of possibility, but it would be a huge amount of work and still not cover all the little places you’re using functions and syntax specific to your database.)

I don’t know. I got nothin’.

Procedural generation and other randomness

Imagine you wrote NetHack, which generates some 2D cavern structures. How can you possibly test that the generated caverns are correct, when they’re completely random?

I haven’t gotten too deep into this, but I think there’s some fertile ground here. You don’t know exactly what the output should be, but you certainly have some constraints in mind. For example, a cavern map should be at least 10% cave walls and at least 30% open space, right? Otherwise it’s not a cavern. You can write a test that verifies that, then just run it some number of times.
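Concretely, that kind of property test might look like this. The generator below is a stand-in (pure random fill, no actual cave-carving); the interesting part is asserting properties rather than exact output:

```python
import random

def generate_cavern(width, height, rng):
    # Placeholder generator: each cell is a wall with probability 0.45.
    return [['#' if rng.random() < 0.45 else '.' for _ in range(width)]
            for _ in range(height)]

def test_cavern_proportions():
    # Run against many seeds; each run buys a little more confidence.
    for seed in range(20):
        rng = random.Random(seed)
        cavern = generate_cavern(50, 50, rng)
        cells = [cell for row in cavern for cell in row]
        walls = cells.count('#') / len(cells)
        assert walls >= 0.10, "not enough cave wall to be a cavern"
        assert 1 - walls >= 0.30, "not enough open space to walk around in"
```

Seeding the generator per run also means a failure is reproducible: rerun with the seed that flaked.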

You can’t be absolutely sure there are no edge cases (unless you are extremely clever in how you write the terrain generation in the first place), but each run of the test suite will leave you a little more confident. There’s a real risk of flaking here, so you’ll have to be extra vigilant about diagnosing and fixing any problems.

You can also write some more specific tests if you give your thing-generator as many explicit parameters as possible, rather than having it make all its decisions internally. Maybe your cavern algorithm takes a parameter for how much open space there is, from 0.3 to 0.9. If you dial it down to the minimum, will there still be an open path from the entrance to the exit? You can test for that, too.
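The path check is just a flood fill. A sketch, again with a hypothetical map format (`'#'` wall, `'.'` floor):

```python
from collections import deque

def has_path(cavern, start, goal):
    """Breadth-first search over open '.' cells, 4-directional movement."""
    height, width = len(cavern), len(cavern[0])
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and cavern[ny][nx] == '.' and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False

def test_entrance_reaches_exit():
    cavern = ["#.###",
              "#...#",
              "###.#"]
    assert has_path(cavern, (1, 0), (3, 2))
    assert not has_path(cavern, (1, 0), (0, 0))  # can't walk into a wall
```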

Web output

This is kind of an interesting problem. HTML is more readily inspected than an image; you can parse it, drill down with XPath or CSS selectors or what have you, and check that the right text is in the right places.
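For well-formed markup, the standard library is enough to demonstrate the idea (real pages usually need a forgiving parser like lxml or html5lib; this is just a sketch with a made-up page):

```python
import xml.etree.ElementTree as ET

PAGE = """
<html>
  <body>
    <h1 class="title">Hello, world</h1>
    <ul id="nav"><li>Home</li><li>About</li></ul>
  </body>
</html>
"""

def test_page_structure():
    root = ET.fromstring(PAGE)
    # Drill down and check the right text is in the right places.
    assert root.find(".//h1").text == "Hello, world"
    nav_items = [li.text for li in root.findall(".//ul[@id='nav']/li")]
    assert nav_items == ["Home", "About"]
```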

But! You may also want to know that it looks right, and that’s much more difficult. The obvious thing is to automate a browser, take a screenshot, and compare it to a known good rendering — all of which will come crumbling down the moment someone makes a border one pixel wider. I don’t know if we can do any better, unless we can somehow explain to a computer what “looks right” means.

Something I’d like to see is an automated sanity check for HTML + CSS. Lay out the page without rendering it and check for any obvious screwups, like overlapping text or unwanted overflow. I don’t know how much practical use this would be (or whether it already exists), but it seems like a nice easy way to check that you didn’t do something catastrophic. You wouldn’t even necessarily need it in your test suite — just plug it into a crawler and throw it at your site.

GUIs and games

Oh my god I have no idea. Keep your UI separated from your internals, test the internals, and hope for the best.

But most of all

Just test something. Going from zero tests to one test is an infinite improvement.

Once you have a tiny stub of a test suite, you have something to build on, and the next test will be a little easier to write. You might even find yourself in the middle of adding a feature and suddenly thinking, hey! this is a great opportunity to write a quick test or two.

Monday’s security advisories

Post Syndicated from jake original

Arch Linux has updated linux-lts
(connection hijacking).

CentOS has updated kernel (C7:
connection hijacking).

Debian-LTS has updated cracklib2
(code execution) and suckless-tools (screen
lock bypass).

Fedora has updated firewalld
(F24: authentication bypass), glibc (F24:
denial of service on armhfp), knot (F24; F23:
denial of service), libgcrypt (F24: bad
random number generation), and perl (F23:
privilege escalation).

openSUSE has updated apache2-mod_fcgid (42.1, 13.2: proxy
injection), gd (13.2: multiple
vulnerabilities), iperf (SPHfSLE12;
42.1, 13.2: denial of service), pdns (42.1, 13.2: denial of service), python3 (42.1, 13.2: multiple
vulnerabilities), roundcubemail (42.1; 13.2; 13.1: multiple vulnerabilities, two from
2015), and typo3-cms-4_7 (42.1, 13.2: three
vulnerabilities from 2013 and 2014).

Scientific Linux has updated kernel (SL7: connection hijacking) and python (SL6&7: three vulnerabilities).

The Carputer

Post Syndicated from Alex Bate original

Meet Benjamin, a trainee air traffic controller from the southeast of France.

Benjamin was bored of the simple radio setup in his Peugeot 207. Instead of investing in a new system, he decided to build a carputer using a Raspberry Pi.


Seriously, you lot: we love your imagination!

He started with a Raspberry Pi 3. As the build would require wireless connectivity to allow the screen to connect to the Pi, this model’s built-in functionality did away with the need for an additional dongle. 

Benjamin invested in the X400 Expansion Board, which acts as a sound card. The board’s ability to handle a variety of voltage inputs was crucial when it came to hooking the carputer up to the car engine.

Car engine fuse box

Under the hood

As Benjamin advises, be sure to unplug the fusebox before attempting to wire anything into your car. If you don’t… well, you’ll be frazzled. It won’t be pleasant.

Though many touchscreens are available on the market, Benjamin chose to use his Samsung tablet for the carputer’s display. Using the tablet meant he was able to remove it with ease when he left the vehicle, which is a clever idea if you don’t want to leave your onboard gear vulnerable to light-fingered types while the car is unattended.

To hook the Pi up to the car’s antenna, he settled on using an RTL SDR, overcoming connection issues with an adaptor to allow the car’s Fakra socket to access MCX via SMA (are you with us?). 


Fakra -> SMA -> MCX.

Benjamin set the Raspberry Pi up as a web server, enabling it as a wireless hotspot. This allows the tablet to connect wirelessly, displaying roadmaps and the media centre on his carputer dashboard, and accessing his music library via a USB flashdrive. The added benefit of using the tablet is that it includes GPS functionality: Benjamin plans to incorporate a 3G dongle to improve navigation by including real-time events such as road works and accidents.


The carputer control desk

The carputer build is a neat, clean setup, but it would be interesting to see what else could be added to increase functionality while on the road. As an aviation fanatic, Benjamin might choose to incorporate an ADS-B receiver, as demonstrated in this recent tutorial. Maybe some voice controls using Alexa? Or how about multiple tablets with the ability to access video or RetroPie, to keep his passengers entertained? What would you add?

Carputer with raspberry pi first test

For more details go to


The post The Carputer appeared first on Raspberry Pi.

Research on the Timing of Security Warnings

Post Syndicated from Bruce Schneier original

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.

A new study from BYU, in collaboration with Google Chrome engineers, finds the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these times are less effective because of “dual task interference,” a neural limitation where even simple tasks can’t be simultaneously performed without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”


For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Research paper. News article.

A lesson in social engineering: presidential debates

Post Syndicated from Robert Graham original

In theory, we hackers are supposed to be experts in social engineering. In practice, we get suckered into it like everyone else. I point this out because of the upcoming presidential debates between Hillary and Trump (and hopefully Johnson). There is no debate, there is only social engineering.

Some think Trump will pull out of the debates, because he’s been complaining a lot lately that they are rigged. No. That’s just because Trump is a populist demagogue. A politician can only champion the cause of the “people” if there is something “powerful” to fight against. He has to set things up ahead of time (debates, elections, etc.) so that any failure on his part can be attributed to the powerful corrupting the system. His constant whining about the debates doesn’t mean he’ll pull out any more than whining about the election means he’ll pull out of that.
Moreover, he’s down in the polls (What polls? What’s the question??). He therefore needs the debates to pull himself back up. And it’ll likely work — because social-engineering.
Here’s how the social engineering works, and how Trump will win the debates.
The moderators, the ones running the debate, will do their best to ask Trump the toughest questions they can think of. At this point, I think their first question will be about the Khan family, and Trump’s crappy treatment of their hero son. This is one of Trump’s biggest weaknesses, but especially so among military-obsessed Republicans.
And Trump’s response to this will be awesome. I don’t know what it will be, but I do know that he’s employing some of the world’s top speech writers and debate specialists to work on the answer. He’ll be diligently practicing this question, working on a scripted answer covering the many ways it can be asked, from now until the election. And then, when that question comes up, it’ll look like he’s just responding off-the-cuff, without any special thought, and it’ll impress the heck out of all the viewers that don’t already hate him.
The same will apply to all of Trump’s weak points. You think the debates are an opportunity for the press to lock him down, to make him reveal his weak points once and for all in front of a national audience, but the reverse is true. What the audience will instead see is somebody given tough, nearly impossible questions, who nonetheless has a competent answer to everything. This will impress everyone with how “presidential” Trump has become.
Also, wavering voters will see that Trump gets much tougher questions than Hillary. This will feed into Trump’s claim that the media is biased against him. Of course, the reality is that Trump is a walking disaster area with so many more weaknesses to hit, but there’s some truth to the fact that the media has a strong left-wing bias. Regardless of Trump’s performance, the media will be on trial during the debate, and they’ll lose.
The danger to Trump is that he goes off script, that his advisors haven’t beaten it into his head hard enough that he’s social engineering and not talking. That’s been his greatest flaw so far. But, and this is a big “but”, it’s also been his biggest strength. By owning his gaffes, he’s seen as a more authentic man of the people and not a slick politician. I point this out because we are all still working according to the rules of past elections, and Trump appears to have rewritten the rules for this election.
Anyway, this post is about social engineering, not politics. You should watch the debate, not for content, but for how well each candidate does social engineering. Watch how they field every question, then “bridge” to a prepared statement they’ve been practicing for months. Watch how the moderators try to take them “off message”, and how the candidates put things back “on message”. Watch how Clinton, while being friendly and natural, never ever gets “off message”, and how you don’t even notice that she’s “bridging” to her message. Watch how Trump, though, will get flustered and off message. Watch how Clinton keeps her hand gestures under control, while Trump frequently fails to.
At least, this is what I’ll be watching for, and live-tweeting: I’ll paraphrase what the candidates were really saying, as egregiously as I can :).
