Tag Archives: stroke

Eevee mugshot set for Doom

Post Syndicated from Eevee original https://eev.ee/release/2017/11/23/eevee-mugshot-set-for-doom/

Screenshot of Industrial Zone from Doom II, with an Eevee face replacing the usual Doom marine in the status bar

A full replacement of Doomguy’s vast array of 42 expressions.

You can get it yourself if you want to play Doom as me, for some reason? It does nothing but replace a few sprites, so it works with any Doom flavor (including vanilla) on 1, 2, or Final. Just run Doom with -file eeveemug.wad. With GZDoom, you can load it automatically.


I don’t entirely know why I did this. I drew the first one on a whim, then realized there was nothing really stopping me from making a full set, so I spent a day doing that.

The funny thing is that I usually play Doom with ZDoom’s “alternate” HUD. It’s a full-screen overlay rather than a huge bar, and — crucially — it does not show the mugshot. It can’t even be configured to show the mugshot. As far as I’m aware, it can’t even be modded to show the mugshot. So I have to play with the OG status bar if I want to actually use the thing I made.

Preview of the Eevee mugshot sprites arranged in a grid, where the Eevee becomes more beaten up in each subsequent column

I’m pretty happy with the results overall! I think I did a decent job emulating the Doom “surreal grit” style. I did the shading with Aseprite’s shading mode — instead of laying down a solid color, it shifts pixels along a ramp of colors you select every time you draw over them. Doom’s palette has a lot of browns, so I made a ramp out of all of them and kept going over furry areas, nudging pixels into being lighter or darker, until I liked the texture. It was a lot like making a texture in a sketch with a lot of scratchy pencil strokes.

I also gleaned some interesting things about smoothness and how the eye interprets contours? I tried to explain this on Twitter and had a hell of a time putting it into words, but the short version is that it’s amazing to see the difference a single misplaced pixel can make, especially as you slide that pixel between dark and light.


Doom's palette of 256 colors, many of which are very long gradients of reds and browns

Speaking of which, Doom’s palette is incredibly weird to work with. Thank goodness Eevees are brown! The game does have to draw arbitrary levels of darkness all with the same palette, which partly explains the number of dark colors and gradients — but I believe a number of the colors are exact duplicates, so close they might as well be duplicates, or completely unused in stock Doom assets. I guess they had no reason to optimize for people trying to add arbitrary art to the game 25 years later, though. (And nowadays, GZDoom includes a truecolor software renderer, so the palette is becoming less and less important.)
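If you want to poke at the palette yourself, it lives in a lump called PLAYPAL inside the IWAD: 14 palettes of 256 RGB triples, of which the first is the normal one. Here’s a rough, purely illustrative Python sketch that reads the first palette and counts how many slots are exact duplicates; the doom2.wad path is an assumption, so point it at whatever copy you have lying around.

```python
# Minimal sketch: read the first PLAYPAL palette from a Doom IWAD and count
# exact duplicate colors.  The "doom2.wad" path is an assumption.
import struct
from collections import Counter

def read_playpal(path="doom2.wad"):
    with open(path, "rb") as f:
        _ident, numlumps, infotableofs = struct.unpack("<4sii", f.read(12))
        f.seek(infotableofs)
        for _ in range(numlumps):
            filepos, _size, name = struct.unpack("<ii8s", f.read(16))
            if name.rstrip(b"\0") == b"PLAYPAL":
                f.seek(filepos)
                data = f.read(768)                     # first palette: 256 * 3 bytes
                return [tuple(data[i:i + 3]) for i in range(0, 768, 3)]
    raise ValueError("no PLAYPAL lump found")

palette = read_playpal()
counts = Counter(palette)
print(f"{len(counts)} distinct colors out of 256 "
      f"({256 - len(counts)} slots are exact duplicates)")
```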

I originally wanted the god mode sprite to be a Sylveon, but Sylveon is made of pink and azure and blurple, and I don’t think I could’ve pulled it off with this set of colors. I even struggled with the color of the mane a bit — I usually color it with pretty pale colors, but Doom only has a couple of those, and they’re very saturated. I ended up using a lot more dark yellows than I would normally, and thankfully it worked out pretty well.

The most significant change I made between the original sprite and the final set was the eye color:

A comparison between an original Doom mugshot sprite, the first sprite I drew, and how it ended up

(This is STFST20, a frame from the default three-frame “glancing around” animation that plays when the player has between 40 and 59 health. Doom Wiki has a whole article on the mugshot if you’re interested.)

The blue eyes in my original just do not work at all. The Doom palette doesn’t have a lot of subtle colors, and its blues in particular are incredibly bad. In the end, I made the eyes basically black, though with a couple pixels of very dark blue in them.

After I decided to make the full set, I started by making a neutral and completely healthy front pose, then derived the others from that (with a very complicated system of layers). You can see some of the side effects of that here: the face doesn’t actually turn when glancing around, because hoo boy that would’ve been a lot of work, and so the cheek fluff is visible on both sides.

I also notice that there are two columns of identical pixels in each eye! I fixed that in the glance to the right, but must’ve forgotten about it here. Oh, well; I didn’t even notice until I zoomed in just now.

A general comparison between the Doom mugshots and my Eevee ones, showing each pose in its healthy state plus the neutral pose in every state of deterioration

The original sprites might not be quite aligned correctly in the above image. The available space in the status bar is 35×31, of which a couple pixels go to an inset border, leaving 33×30. I drew all of my sprites at that size, but the originals are all cropped and have varying offsets (part of the Doom sprite format). I extremely can’t be assed to check all of those offsets for over a dozen sprites, so I just told ImageMagick to center them. (I only notice right now that some of the original sprites are even a full 31 pixels tall and draw over the top border that I was so careful to stay out of!)
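If you’d rather script that centering step than wrangle ImageMagick, here’s a rough Pillow sketch of the same idea: paste each cropped sprite, centered, onto a fixed-size transparent canvas. The folder names are made up, and the 33×30 canvas is just the usable area mentioned above; adjust to taste.

```python
# Rough Pillow equivalent of "center each cropped sprite on a fixed canvas".
# Folder names and the 33x30 canvas size are assumptions, not the real files.
from pathlib import Path
from PIL import Image

CANVAS = (33, 30)   # usable mugshot area once the inset border is excluded

Path("centered").mkdir(exist_ok=True)
for path in Path("originals").glob("STF*.png"):
    sprite = Image.open(path).convert("RGBA")
    canvas = Image.new("RGBA", CANVAS, (0, 0, 0, 0))    # transparent background
    offset = ((CANVAS[0] - sprite.width) // 2,          # center horizontally...
              (CANVAS[1] - sprite.height) // 2)         # ...and vertically
    canvas.paste(sprite, offset, sprite)                # sprite's alpha as the mask
    canvas.save(Path("centered") / path.name)
```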

Anyway, this is a representative sample of the Doom mugshot poses.

The top row shows all eight frames at full health. The first three are the “idle” state, drawn when nothing else is going on; the sprite usually faces forwards, but glances around every so often at random. The forward-facing sprite is the one I finalized first.

I tried to take a lot of cues from the original sprite, seeing as I wanted to match the style. I’d never tried drawing a sprite with a large palette and a small resolution before, and the first thing that struck me was Doomguy’s lips — the upper lip, lips themselves, and shadow under the lower lip are all created with only one row of pixels each. I thought that was amazing. Now I even kinda wish I’d exaggerated that effect a bit more, but I was wary of going too dark when there’s a shadow only a couple pixels away. I suppose Doomguy has the advantage of having, ah, a chin.

I did much the same for the eyebrows, which was especially necessary because Doomguy has more of a forehead than my Eevee does. I probably could’ve exaggerated those a bit more, as well! Still, I love how they came out — especially in the simple looking-around frames, where even a two-pixel eyebrow raise is almost comically smug.

The fourth frame is a wild-ass grin (even named STFEVL0), which shows for a short time after picking up a new weapon. Come to think of it, that’s a pretty rare occurrence when playing straight through one of the Doom games; you keep your weapons between levels.

The fifth through seventh are also a set. If the player takes damage, the status bar will briefly show one of these frames to indicate where the damage is coming from. You may notice that where Doomguy bravely faces the source of the pain, I drew myself wincing and recoiling away from it.

The middle frame of that set also appears while the player is firing continuously (regardless of damage), so I couldn’t really make it match the left and right ones. I like the result anyway. It was also great fun figuring out the expressions with the mouth — that’s another place where individual pixels make a huge difference.

Finally, the eighth column is the legendary “ouch” face, which appears when the player takes more than 20 damage at once. It may look completely alien to you, because vanilla Doom has a bug that only shows this face when the player gains 20 or more health while taking damage. This is vanishingly rare (though possible!), so the frame virtually never appears in vanilla Doom. Lots of source ports have fixed this bug, making the ouch face a bit better known, but I usually play without the mugshot visible so it still looks super weird to me. I think my own spin on it is a bit less, ah, body horror?

The second row shows deterioration. It is pretty weird drawing yourself getting beaten up.

A lot of Doomguy’s deterioration is in the form of blood dripping from under his hair, which I didn’t think would translate terribly well to a character without hair. Instead, I went a little cartoony with it, adding bandages here and there. I had a little bit of a hard time with the bloodshot eyes at this resolution, which I realize as I type it is a very poor excuse when I had eyes three times bigger than Doomguy’s. I do love the drooping ears, with the possible exception of the fifth state, which I’m not sure is how that would actually look…? Oh well. I also like the bow becoming gradually unravelled, eventually falling off entirely when you die.

Oh, yes, the sixth frame there (before the gap) is actually for a dead player. Doomguy’s bleeding becomes markedly more extreme here, but again that didn’t really work for me, so I went a little sillier with it. A little. It’s still pretty weird drawing yourself dead.

That leaves only god mode, which is incredible. I love that glow. I love the faux whisker shapes it makes. I love how it fades into the background. I love that 100% pure “oh this is pretty good” smile. It all makes me want to just play Doom in god mode forever.

Now that I’ve looked closely at these sprites again, I spy a good half dozen little inconsistencies and nitpicks, which I’m going to refrain from spelling out. I did do this in only a day, and I think it came out pretty dang well considering.

Maybe I’ll try something else like this in the future. Not quite sure what, though; there aren’t many small and self-contained sets of sprites like this in Doom. Monsters are several times bigger and have a zillion different angles. Maybe some pickups, which only have one frame?


Hmm. Parting thought: I’m not quite sure where I should host this sort of one-off thing. It arguably belongs on Itch, but seems really out of place alongside entire released games. It also arguably belongs on the idgames archive, but I’m hesitant to put it there because it’s such an obscure thing of little interest to a general audience. At the moment it’s just a file I’ve uploaded to wherever on my own space, but I now have three little Doom experiments with no real permanent home.

ShadowBrokers Releases NSA UNITEDRAKE Manual

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/shadowbrokers_r.html

The ShadowBrokers released the manual for UNITEDRAKE, a sophisticated NSA Trojan that targets Windows machines:

Able to compromise Windows PCs running on XP, Windows Server 2003 and 2008, Vista, Windows 7 SP 1 and below, as well as Windows 8 and Windows Server 2012, the attack tool acts as a service to capture information.

UNITEDRAKE, described as a “fully extensible remote collection system designed for Windows targets,” also gives operators the opportunity to take complete control of a device.

The malware’s modules — including FOGGYBOTTOM and GROK — can perform tasks including listening in on and monitoring communications, capturing keystrokes and both webcam and microphone usage, impersonating users, stealing diagnostics information and self-destructing once tasks are completed.

More news.

UNITEDRAKE was mentioned in several Snowden documents and also in the TAO catalog of implants.

And Kaspersky Labs has found evidence of these tools in the wild, associated with the Equation Group — generally assumed to be the NSA:

The capabilities of several tools in the catalog identified by the codenames UNITEDRAKE, STRAITBAZZARE, VALIDATOR and SLICKERVICAR appear to match the tools Kaspersky found. These codenames don’t appear in the components from the Equation Group, but Kaspersky did find “UR” in EquationDrug, suggesting a possible connection to UNITEDRAKE (United Rake). Kaspersky also found other codenames in the components that aren’t in the NSA catalog but share the same naming conventions; they include SKYHOOKCHOW, STEALTHFIGHTER, DRINKPARSLEY, STRAITACID, LUTEUSOBSTOS, STRAITSHOOTER, and DESERTWINTER.

ShadowBrokers has only released the UNITEDRAKE manual, not the tool itself. Presumably they’re trying to sell that.

Netflix develops Morse code search option

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/netflix-morse-code/

What happens when Netflix gives its staff two days to hack the platform and create innovative (and often unnecessary) variations on the streaming service?

This. This is what happens.

Video: “Hack Day Summer 2017 Teleflix”, uploaded by NetflixOpenSource on 2017-08-28.

Netflix Hack Day

Twice a year, the wonderful team at Netflix is given two days to go nuts and create fun, random builds, taking inspiration from Netflix and its content. So far they’ve debuted a downgraded version of the streaming platform played on an original Nintendo Entertainment System (NES), turned hit show Narcos into a video game, and used VR technology in many more builds that, while they’ll never be made public, have no doubt led to some lightbulb moments for the creative teams involved.

DarNES – Netflix Hack Day – Winter 2015

In a world… where devices proliferate… darNES digs back in time to provide Netflix access to the original Nintendo Entertainment System.

Kevin Spacey? More like ‘Kevin Spacebar’, am I right? Aha…ha…haaaa…I’ll get my coat.

Teleflix

The Teleflix build from this summer’s Hack Day is obviously the best one yet, as it uses a Raspberry Pi. By writing code that decodes the dots and dashes from an original 1920s telegraph (provided by AT&T, and lovingly restored by the team using ketchup!) into keystrokes, they’re able to search for their favourite shows via Morse code.

Netflix Morse Code

Morse code, for the unaware, is a method for transmitting letters and numbers via a standardised series of beeps, clicks, or flashes. Stuck in a sticky situation? Three dots followed by three dashes and a further three dots gives you ‘SOS’. Sorted. So long as there’s someone there to see or hear it, who also understands Morse Code.

Morse Code

Morse code is a method of transmitting textual information as a series of on-off tones that can be directly understood by a skilled listener.

So if you’d like to watch, for example, The Unbreakable Kimmy Schmidt, you simply send: - .... . / ..- -. -... .-. . .- -.- .- -... .-.. . / -.- .. -- -- -.-- / ... -.-. .... -- .. -.. - and you’re set. Easy!
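If you fancy generating (or checking) strings like that yourself, a few lines of Python will do it. This is just an illustrative sketch, not the code the Teleflix team used:

```python
# Tiny Morse encoder -- illustrative only, not the Teleflix team's actual code.
MORSE = {
    "a": ".-",    "b": "-...",  "c": "-.-.",  "d": "-..",   "e": ".",
    "f": "..-.",  "g": "--.",   "h": "....",  "i": "..",    "j": ".---",
    "k": "-.-",   "l": ".-..",  "m": "--",    "n": "-.",    "o": "---",
    "p": ".--.",  "q": "--.-",  "r": ".-.",   "s": "...",   "t": "-",
    "u": "..-",   "v": "...-",  "w": ".--",   "x": "-..-",  "y": "-.--",
    "z": "--..",  "0": "-----", "1": ".----", "2": "..---", "3": "...--",
    "4": "....-", "5": ".....", "6": "-....", "7": "--...", "8": "---..",
    "9": "----.",
}

def to_morse(text):
    """Encode text as Morse: spaces between letters, ' / ' between words."""
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in text.lower().split()
    )

print(to_morse("The Unbreakable Kimmy Schmidt"))
```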

To reach Netflix, the team used a Playstation 4. However, if you want to skip a tech step, you could stream Netflix directly to your Raspberry Pi by following this relatively new tutorial. Nobody at Pi Towers has tried it out yet, but if you have we’d be interested to see how you got on in the comments below.

And if you’d like to play around a little more with the Raspberry Pi and Morse code, you can pick up your own Morse code key, or build one using conductive components such as buttons or bananas, and try it out for yourself.
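If you do wire something up, here’s a rough starting point using the gpiozero library: it watches a key (or button, or banana) on GPIO 17 and prints a dot or a dash depending on how long you hold it down. The pin number and the 0.2-second threshold are arbitrary assumptions to tweak for your own setup.

```python
# Sketch: read a Morse key (or button, or banana) on GPIO 17 with gpiozero.
# The pin number and the 0.2 s dot/dash threshold are assumptions -- adjust freely.
from time import monotonic
from signal import pause
from gpiozero import Button

key = Button(17)          # key wired between GPIO 17 and GND
pressed_at = 0.0

def key_down():
    global pressed_at
    pressed_at = monotonic()

def key_up():
    held = monotonic() - pressed_at
    print("-" if held > 0.2 else ".", end="", flush=True)

key.when_pressed = key_down
key.when_released = key_up
pause()                   # keep the script running; the callbacks do the work
```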

Alex’s Netflix-themed Morse code quiz

Just for fun, here are the titles of some of my favourite shows to watch on Netflix, translated into Morse code. Using the key below, why not take a break and challenge your mind to translate them back into English? Reward yourself +10 imaginary House Points for each correct answer.

Netflix Morse Code

  1. -.. --- -.-. - --- .-. / .-- .... ---
  2. .... .- -. -. .. -... .- .-..
  3. - .... . / --- .-
  4. ... . -. ... . ---..
  5. .--- . ... ... .. -.-. .- / .--- --- -. . ...
  6. --. .. .-.. -- --- .-. . / --. .. .-. .-.. ...
  7. --. .-.. --- .--


We Are Not Having a Productive Debate About Women in Tech

Post Syndicated from Bozho original https://techblog.bozho.net/not-productive-debate-women-tech/

Yes, it’s about the “anti-diversity memo”. But I won’t go into the particular details of the memo, the firing, who’s right and wrong, who’s liberal and who’s conservative. Actually, I don’t need to repeat this post, which states almost exactly what I think about the particular issue. Just in case, and before someone decides to label me as a “sexist white male” who knows nothing, I guess I should clearly state that I acknowledge that biases against women are real, that I strongly support equal opportunity, and that I think there must be more women in technology. I also have to state that I think the author of “the memo” was well-meaning, had some well-argued, research-backed points, and should not be ostracized.

But I want to “rant” about the quality of the debate. On one side we have conservatives who are throwing themselves in defense of the fired googler, insisting that liberals are banning conservative points of view, that it is normal to have so few women in tech and that everything is actually okay, or even that women are inferior. On the other side we have triggered liberals who are ready to shout “discrimination” and “harassment” at anything that resembles an attempt to claim anything different from total and absolute equality, in many cases using a classical “strawman” argument (e.g. “he’s saying women should not work in tech, he’s obviously wrong”).

Everyone seems to be too eager to take sides and issue a verdict on who’s right and who’s wrong, to blame the other side for all related and unrelated woes and, while doing that, to exhibit a huge amount of bias. If the debate is about that, we’d better shut it down as soon as possible, as it’s not going to lead anywhere. No matter how much conservatives want “a debate”, and no matter how much liberals want to advance equality. Oh, and by the way – this “conservatives” vs “liberals” framing is a false dichotomy. Most people hold a somewhat sensible stance in between. But let’s get to the actual issue:

Women are underrepresented in STEM (science, technology, engineering, mathematics). That is a fact everyone agrees on, and it is blatantly obvious when you walk into any software company’s office.

Why is that the case? The whole debate revolved around biological and social differences, some of which are probably even true – that women value job flexibility more than being promoted or getting a higher salary, that they are more neurotic (on average), that they are less confident, that they are more empathic, and so on. These differences have been studied and documented, and as much as I have my reservations about psychology studies (so much so that even meta-analyses are shown by meta-meta-analyses to be flawed) and social science in general, there seems to be a consensus there (by the way, it’s a shame that Gizmodo removed all the scientific references when they first published “the memo”). But that is not the issue. As has been pointed out, male and female “inherent” traits are equally applicable when working with technology.

Why are we talking about “technology” and not “mining and construction”, as many will point out? Let’s cut that argument once and for all – mining and construction are blue-collar jobs that have a high chance of being automated in the near future and are in decline. The problem that we’re trying to solve is: how do we make the dominant profession of the future – information technology – one of equal opportunity? Yes, it’s a bold claim, but software is going to be everywhere and the industry will grow. This is why it’s so important to discuss it, not just because we are developers and are somewhat affected by it.

So, there has been extended research on the matter, and the reasons are – surprise – complex and intertwined and there is no simple issue that, once resolved, will unlock the path of women to tech jobs.

What would diversity give us and why should we care? Let’s assume for a moment we don’t care about equal opportunity and we are right-leaning, conservative people. Well, imagine you have a growing business and you need to hire developers. What would you prefer – having fewer or more people to choose from? Having fewer or more diverse skills (technical and social) on the job market? The answer is obvious. The more people there are on the job market, regardless of their gender, race, or whatever, the better it is for businesses.

So I guess we’ve agreed on the two points so far – that women are underrepresented, and that it’s better for everyone if there are more people with technical skills on the job market, which includes more women.

The “final” question is – how?

And this question seems to be nowhere in the discussion. Instead, we are going in circles with irrelevant arguments, trying to show either that we’ve read more scientific papers than others, that we are more liberal than others, or that we are more pro free speech.

Back to “how” – in Bulgaria we have a social meme: “I don’t know what is the right way, but the way you are doing it is NOT the right way”. And much of the underlying sentiment of “the memo” is similar – that Google should stop doing some of the stuff it is doing about diversity, or do it differently (but it doesn’t tell us how, exactly). Hiring biases, internal programs, whatever, seem to bother him. But this is just talking about the surface of the problem. These programs are correcting something that remains hidden in “the memo”.

Google, on their diversity page, say that 20% of their tech employees are women. At the same time, in another diversity section, they claim that “18% of CS graduates are women”. So, I guess, job done – they’ve reached the maximum possible diversity. They’ve hired as many women in tech as there are CS graduates. Anything more than that, even if it doesn’t mean they’ll hire worse developers, will leave the rest of the industry with fewer women. So, sure, 50/50 at Google would sound cool, but the industry average will still be bad.

And that’s the actual, underlying reason that we should have already arrived at, and from there we should’ve started discussing the “how”. Girls do not see STEM as a thing for them. Our biases are projected onto younger girls and culminate in a “this is not for girls” mantra. No matter how diverse our hiring policies are, if we don’t address the issue at a much earlier stage, we aren’t getting anywhere.

In schools and even kindergartens we need to have an inclusive environment where “this is not for girls” is frowned upon. We should not discourage girls from liking math, or make math sound uncool and “hard for girls” (in my biased world I actually know more women mathematicians than men). This comic might seem to be about a different topic (gender-specific toys), but it’s actually not about toys – it’s about what is considered (stereo)typical of a girl to do. And most of these biases are unconscious, and come from all around us (school, TV, outdoor ads, people on the street, relatives, etc.), and it takes effort to confront them.

To do that, we need policy decisions. We need to lobby education departments and ministries to encourage girls more in the STEM direction (and don’t worry, they’ll be good at it). By the way, guess what – Google’s diversity program is not just about hiring more women; it actually includes education policies with stuff like “influencing perception about computer science”, “getting more girls to code”, and scholarships.

Let’s discuss the education policies, the path to getting 40-50% of CS graduates to be female, and before that – more girls in schools with a technical focus, and ultimately – how to get society to stop perceiving technology and science as “not for girls”. Let each girl decide on her own. All the other debates are short-sighted and beside the point. Will biological differences matter then? They probably will – but not enough to justify a high gender imbalance.

I am no expert in education policies and I don’t know what will work and what won’t. There is research on the matter that we should look at, and maybe argue about it. Everything else is wasted keystrokes.


Burner laptops for DEF CON

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/burner-laptops-for-def-con.html

Hacker summer camp (Defcon, Blackhat, BSidesLV) is upon us, so I thought I’d write up some quick notes about bringing a “burner” laptop. A Chromebook is your best choice in terms of security, but I need Windows/Linux tools, so I got a Windows laptop.

I chose the Asus e200ha for $199 from Amazon with free (and fast) shipping. There are similar notebooks with roughly the same hardware and price from other manufacturers (HP, Dell, etc.), so I’m not sure how this compares against those other ones. However, it fits my needs as a “burner” laptop, namely:

  • cheap
  • lasts 10 hours easily on battery
  • weighs 2.2 pounds (1 kilogram)
  • 11.6 inch and thin

Some other specs are:

  • 4 gigs of RAM
  • 32 gigs of eMMC flash memory
  • quad core 1.44 GHz Intel Atom CPU
  • Windows 10
  • free Microsoft Office 365 for one year
  • good, large keyboard
  • good, large touchpad
  • USB 3.0
  • microSD
  • WiFi ac
  • no fans, completely silent

There are compromises, of course.

  • The Atom CPU is slow, though it’s only noticeable when churning through heavy webpages. Adblocking addons or Brave are a necessity. Most things are usably fast, such as using Microsoft Word.
  • Crappy sound and video, though VLC does a fine job playing movies with headphones on the airplane. Using in bright sunlight will be difficult.
  • micro-HDMI output only; keep in mind that if you intend to do presos from it, you’ll need an HDMI adapter.
  • Limited storage: 32 gigs in theory, about half of that usable.
  • It uses a special compressed Windows 10 install that you can’t actually upgrade without a completely fresh install. It doesn’t have the latest Windows 10 Creators Update. I lost a gig thinking I could compress system files.

Copying files across the 802.11ac WiFi to the disk was quite fast, several hundred megabits per second. The eMMC isn’t as fast as an SSD, but it’s a lot faster than typical SD card speeds.

The first thing I did once I got the notebook was to install the free VeraCrypt full disk encryption. The CPU has AES acceleration, so it’s fast. There is a problem with the keyboard driver during boot that makes it really hard to enter long passwords — you have to carefully type one key at a time to prevent extra keystrokes from being entered.

You can’t really install Linux on this computer, but you can use virtual machines. I installed VirtualBox and downloaded the Kali VM. I had some problems attaching USB devices to the VM. First of all, VirtualBox requires a separate downloaded extension to get USB working. Second, it conflicts with USBpcap that I installed for Wireshark.

It comes with one year of free Office 365. Obviously, Microsoft is hoping to hook the user into a longer term commitment, but in practice next year at this time I’d get another burner $200 laptop rather than spend $99 on extending the Office 365 license.

Let’s talk about the CPU. It’s Intel’s “Atom” processor, not their mainstream (Core i3 etc.) processor. Even though it has roughly the same GHz as the processor in an 11-inch MacBook Air and twice the cores, it’s noticeably and painfully slower. This is especially noticeable on ad-heavy web pages, while other things seem to work just fine. It has hardware acceleration for most video formats, though I had trouble getting Netflix to work.

The tradeoff for a slow CPU is phenomenal battery life. It seems to last forever on battery. It’s really pretty cool.

Conclusion

A Chromebook is likely more secure, but for my needs, this $200 laptop is perfect.

Introspection

Post Syndicated from Eevee original https://eev.ee/blog/2017/05/28/introspection/

This month, IndustrialRobot has generously donated in order to ask:

How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?

Whoof. That’s incredibly abstract and open-ended — there’s a lot I could say, but most of it is hard to turn into words.


The first example to come to mind — and the most conspicuous, at least from where I’m sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it’s become all the more pronounced since then.

I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.

Now that I had all the time in the world to work on these things, I… didn’t. It turned out they were almost as much of a slog as my job had been!

The problem, I think, was that there was no point.

This was really weird to realize and come to terms with. I do like solving problems for its own sake; it’s interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.

But even if I create a really interesting tool, what do I have? I don’t have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they’d be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.

Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as “let’s solve X problem forever!” — I go to scratch the itch, I do just enough work that it doesn’t itch any more, and then I lose interest.

I spent a few months quietly flailing over this minor existential crisis. I’d spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.

Meanwhile, I’d vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might’ve been too ambitious, despite feeling small, and that might’ve discouraged me from pursuing other kinds of games earlier.

A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I’m simply disqualified, right? Maybe the same thing applies to games.

Lord knows I had enough trouble when I tried. I’d orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.

I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.

It’s been incredibly rewarding — far moreso than any “pure” tooling problem I’ve ever approached. Moreso than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.

I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should’ve had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn’t tend to bring joy to people, but amateur art can.

I still like doing technical work, but I prefer when it’s a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. “Make a library/tool for X” is a nebulous problem that could go in a great many directions; “make a bot that tweets Perlin noise” has a pretty definitive finish line. It was interesting to write a little physics engine, but I would’ve hated doing it if it weren’t for a game I were making and didn’t have the clear scope of “do what I need for this game”.


It feels like creative work is something I’ve been wanting to do for a long time. If this were a made-for-TV movie, I would’ve discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.

That didn’t happen. Instead I’ve found that even something as mundane as having ideas is a skill, and while it’s one I enjoy, I’ve barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.

How do I theme this area? Well, I don’t know. How do I think of something? I don’t know that either. It’s a strange paradox to have an urge to create things but not quite know what those things are.

It’s such a new and completely different kind of problem. There’s no right answer, or even an answer I can check for “correctness”. I can do anything. With no landmarks to start from, it’s easy to feel completely lost and just draw blanks.

I’ve essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven’t found them yet. I don’t think they’re anything that can be told or taught. But I’m starting to get there, and part of it is just accepting that I can’t treat these like problems with clear best solutions and clear algorithms to find those solutions.

A particularly glaring irony is that I’ve had a really tough problem designing abstract spaces, even though that’s exactly the kind of architecture I praise in Doom. It’s much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.

I suppose it’s similar to a struggle I’ve had with art. I’m drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what’s most important. I’m reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would’ve made the background far too busy, but trees are naturally busy, so how do you represent that?

The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.

It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it’s up to you to make some creative leaps. I don’t think there’s a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.


I think I’m getting a little distracted here, but this is stuff that’s been rattling around lately.

If there’s a more personal meaning to the tree story, it’s that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.

Two and a half years ago, I never would’ve thought I’d ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don’t expect we’re capable of, if only we give them a serious shot.

And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip’s advice, and on some level I feel like I didn’t quite do it myself, even though every stroke was made by my hand. Hell, I don’t even look at references nearly as much as I should. It feels like cheating, somehow? I know that’s ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I’ve been doing that for too long with programming. Trust me, it doesn’t work quite so well in a brand new field.


I’m getting distracted again!

To answer your actual questions: how do I go about learning about myself? I don’t! It happens completely by accident. I’ll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.

Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I’m starting to suspect they’d caught on after eight years of watching me lament that I couldn’t draw.

I don’t know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don’t know that I have quite that experience — my grappling usually comes earlier, when I’m still trying to figure the thing out despite not knowing that there’s a thing to find out. Once I know it, it’s on the table; I can’t un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.

This isn’t quite 2000 words. Sorry. I’ve run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.

Keylogger Found in HP Laptop Audio Drivers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/keylogger_found.html

This is a weird story: researchers have discovered that an audio driver installed in some HP laptops includes a keylogger, which records all keystrokes to a local file. There seems to be nothing malicious about this, but it’s a vivid illustration of how hard it is to secure a modern computer. The operating system, drivers, processes, application software, and everything else is so complicated that it’s pretty much impossible to lock down every aspect of it. So many things are eavesdropping on different aspects of the computer’s operation, collecting personal data as they do so. If an attacker can get to the computer when the drive is unencrypted, he gets access to all sorts of information streams — and there’s often nothing the computer’s owner can do.

Bash Bunny: Big hacks come in tiny packages (InfoWorld)

Post Syndicated from corbet original https://lwn.net/Articles/720912/rss

InfoWorld plays with the Bash Bunny, a USB device for attacking computers:

It can run anything a regular Debian Linux distro can run, such as Python scripts or common Linux commands. To infiltrate other computing devices, Bash Bunny can fake its identity as a trusted media device, networking device, keyboard, or other serial device. For example, it can load itself as a keyboard device and mimic keystrokes. You can download dozens of existing payload scripts, create your own, or ask questions in a fairly active user forum.

Processing: making art with code

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/processing-making-art-code/

This column is from The MagPi issue 56. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

One way we achieve our mission at the Raspberry Pi Foundation is to find an intersection between someone’s passion and computing. For example, if you’re a young person interested in space, our Astro Pi programme is all about getting your code running on the International Space Station. If you like music, you can use Sonic Pi to compose songs with code. This month, I’d like to introduce you to some interesting work happening at the intersection between computing and the visual arts.

Image of Dead Presidents by Mike Brondbjerg art made with Processing

Mike Brondbjerg’s Dead Presidents uses Processing to generate portraits.

Processing is a programming language and development environment that sits perfectly at that intersection. It enables you to use code to generate still graphics, animations, or interactive applications such as games. It’s based on the Java programming language, and it runs on multiple platforms and operating systems. Thanks to the work of the Processing Foundation, and in particular the efforts of contributor Gottfried Haider, Processing runs like a champ on the Raspberry Pi.

Screenshot of Processing environment

When I want to communicate how cool Processing is while speaking to members of the Raspberry Pi community, I usually make this analogy: with Sonic Pi, you can use one line of code to make one note; with Processing, you can use one line of code to draw one stroke. Once you’ve figured that out, you can use computational tools such as loops, conditions, and variables to make some beautiful art.
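To make that concrete, here’s a tiny example sketch, written here in Processing’s Python Mode (the default Java mode uses exactly the same function names): every call to line() is one stroke, and the draw() loop keeps layering more of them onto the canvas.

```python
# A tiny sketch in Processing's Python Mode: each call to line() is one stroke.
# The same setup()/draw()/stroke()/line() calls exist in the default Java mode too.

def setup():
    size(400, 400)       # open a 400x400 pixel canvas
    background(255)      # start from a white background
    strokeWeight(2)

def draw():
    stroke(random(255), random(255), random(255))   # pick a random colour...
    line(random(width), random(height),             # ...and draw one stroke
         random(width), random(height))             # between two random points
```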

And even though Processing is intended for use in the realm of visual arts, its capabilities can go beyond that. You can make applications that interact with the user through keyboard or mouse input. Processing also has libraries for working with network connections, files, and cameras. This means that you don’t just have to create artwork with Processing. You can also use it for almost anything you need to code.

Physical process

Processing is especially cool on the Raspberry Pi because there’s a library for working with the Pi’s GPIO pins. You can therefore have on-screen graphics interacting with buttons, switches, LEDs, relays, and sensors wired up to your Pi. With Processing, you could build a game that uses a custom controller that you’ve built yourself. Or you could create a piece of artwork that interacts with the user by sensing their proximity to it.

Processing screenshot

Best of all, Processing was created with learning to code in mind. It comes with lots of built-in examples, and you can use these to learn about many different programming and drawing concepts. The documentation on Processing’s website is very thorough and – as with Raspberry Pi – there’s a very supportive community around it if you run into any trouble. Additionally, the Processing development environment is powerful but also very simplified. For these reasons, it’s perfect for someone who is just getting started.

To get going with Processing on Raspberry Pi, there’s a one-line install command. You can also go to Processing.org and download pre-built Raspbian images with Processing already installed. To help you on your journey, there’s a resource for getting started with Processing. It includes a walkthrough on how to access the GPIO pins to combine physical computing and visual arts.

When you launch Processing, you will see a blank file where you can start keying in your code. Don’t let that intimidate you! All of the world’s greatest pieces of art started off as a raw slab of marble, a blob of clay, or a blank canvas. It just takes one line of code at a time to generate your own masterpiece.

Become a supporter

After this article appeared in The MagPi, the Processing Foundation put out a call for support:

We want you to be a part of this. Our work is almost entirely supported by individual one-time donations from the community. Right now we are outspending what we earn, and we have bigger plans! We want to continue all the work we’re doing and make it more accessible, more inclusive, and more responsive to the community needs.

To create lasting support for these new directions we’re starting a Membership Program. A membership is an annual donation that supports all this work and signifies your belief in it. You can do this as an individual, a studio, an educational institution, or a corporate partner. We will list your name on our members page along with all the others that help make this mission possible.


How Stack Overflow plans to survive the next DNS attack

Post Syndicated from Mark Henderson original http://blog.serverfault.com/2017/01/09/surviving-the-next-dns-attack/

Let’s talk about DNS. After all, what could go wrong? It’s just cache invalidation and naming things.

tl;dr

This blog post is about how Stack Overflow and the rest of the Stack Exchange network approaches DNS:

  • By benchmarking different DNS providers and explaining how we chose between them
  • By implementing multiple DNS providers
  • By deliberately breaking DNS to measure its impact
  • By validating our assumptions and testing implementations of the DNS standard

The good stuff in this post is in the middle, so feel free to scroll down to “The Dyn Attack” if you want to get straight into the meat and potatoes of this blog post.

The Domain Name System

DNS had its moment in the spotlight in October 2016, with a major Distributed Denial of Service (DDoS) attack launched against Dyn, which affected the ability of Internet users to connect to some of their favourite websites, such as Twitter, CNN, imgur, Spotify, and literally thousands of other sites.

But for most systems administrators or website operators, DNS is mostly kept in a little black box, outsourced to a 3rd party, and mostly forgotten about. And, for the most part, this is the way it should be. But as you start to grow to 1.3+ billion pageviews a month with a website where performance is a feature, every little bit matters.

In this post, I’m going to explain some of the decisions we’ve made around DNS in the past, and where we’re going with it in the future. I will eschew deep technical details and gloss over low-level DNS implementation in favour of the broad strokes.

In the beginning

So first, a bit of history: In the beginning, we ran our own DNS on-premises using artisanally crafted zone files with BIND. It was fast enough when we were doing only a few hundred million hits a month, but eventually hand-crafted zonefiles were too much hassle to maintain reliably. When we moved to Cloudflare as our CDN, their service is intimately coupled with DNS, so we demoted our BIND boxes out of production and handed off DNS to Cloudflare.

The search for a new provider

Fast forward to early 2016, when we moved our CDN to Fastly. Fastly doesn’t provide DNS service, so we were back on our own in that regard and our search for a new DNS provider began. We made a list of every DNS provider we could think of, and ended up with a shortlist of 10:

  • Dyn
  • NS1
  • Amazon Route 53
  • Google Cloud DNS
  • Azure DNS (beta)
  • DNSimple
  • Godaddy
  • EdgeCast (Verizon)
  • Hurricane Electric
  • DNS Made Easy

From this list of 10 providers, we did our initial investigations into their service offerings, and started eliminating services that were not suited to our needs, were outrageously expensive, had insufficient SLAs, or didn’t offer services that we required (such as a fully featured API). Then we started performance testing. We did this by embedding a hidden iFrame for 5% of visitors to stackoverflow.com, which forced a request to a different DNS provider. We did this for each provider until we had some pretty solid performance numbers.
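(If you just want a rough, single-vantage-point version of that kind of comparison, rather than the real-browser test described above, something like this dnspython sketch works; the nameserver hostnames are placeholders, not the actual servers of any provider on the list.)

```python
# Rough single-machine comparison of authoritative DNS response times.
# This is NOT how the real test above worked (that ran in visitors' browsers);
# it only measures from wherever you run it.  Requires dnspython (2.x).
# The nameserver hostnames below are illustrative placeholders.
import time
import dns.resolver

CANDIDATES = {
    "provider-a": "ns1.example-dns-a.com",
    "provider-b": "ns1.example-dns-b.net",
}
TEST_NAME = "stackoverflow.com"

system_resolver = dns.resolver.Resolver()        # uses your normal resolver config

for provider, ns_host in CANDIDATES.items():
    ns_ip = system_resolver.resolve(ns_host, "A")[0].to_text()  # nameserver's address
    probe = dns.resolver.Resolver(configure=False)
    probe.nameservers = [ns_ip]                  # query that provider directly
    start = time.perf_counter()
    probe.resolve(TEST_NAME, "A")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{provider}: {elapsed_ms:.1f} ms")
```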

Using some basic analytics, we were able to measure the real-world performance, as seen by our real-world users, broken down into geographical area. We built some box plots based on these tests which allowed us to visualise the different impact each provider had.

If you don’t know how to interpret a boxplot, here’s a brief primer for you. For the data nerds, these were generated with R’s standard boxplot functions, which means the upper and lower whiskers are min(max(x), Q_3 + 1.5 * IQR) and max(min(x), Q_1 - 1.5 * IQR), where IQR = Q_3 - Q_1.

This is the results of our tests as seen by our users in the United States:

DNS Performance in the United States

You can see that Hurricane Electric had a quarter of requests return in < 16ms and a median of 32ms, with the three “cloud” providers (Azure, Google Cloud DNS and Route 53) being slightly slower (around a 24ms first quartile and a 45ms median), and DNS Made Easy coming in 2nd place (20ms first quartile, 39ms median).

You might wonder why the scale on that chart goes all the way to 700ms when the whiskers go nowhere near that high. This is because we have a worldwide audience, so just looking at data from the United States is not sufficient. If we look at data from New Zealand, we see a very different story:

DNS Performance in New Zealand

Here you can see that Route 53, DNS Made Easy and Azure all have healthy 1st quartiles, but Hurricane Electric and Google have very poor 1st quartiles. Try to remember this, as it becomes important later on.

We also have Stack Overflow in Portuguese, so let’s check the performance from Brazil:

DNS Performance in Brazil

Here we can see Hurricane Electric, Route 53 and Azure being favoured, with Google and DNS Made Easy being slower.

So how do you reach a decision about which DNS provider to choose, when your main goal is performance? It’s difficult, because regardless of which provider you end up with, you are going to be choosing a provider that is sub-optimal for part of your audience.

You know what would be awesome? If we could have two DNS providers, each one servicing the areas that they do best! Thankfully this is something that is possible to implement with DNS. However, time was short, so we had to put our dual-provider design on the back-burner and just go with a single provider for the time being.

Our initial rollout of DNS was using Amazon Route 53 as our provider: they had acceptable performance figures over a large number of regions and had very effective pricing (on that note Route 53, Azure DNS, and Google Cloud DNS are all priced identically for basic DNS services).

The Dyn Attack

Roll forwards to October 2016. Route 53 had proven to be a stable, fast, and cost-effective DNS provider. We still had dual DNS providers on our backlog of projects, but like a lot of good ideas it got put on the back-burner until we had more time.

Then the Internet ground to a halt. The DNS provider Dyn had come under attack, knocking a large number of authoritative DNS servers off the Internet, and causing widespread issues with connecting to major websites. All of a sudden DNS had our attention again. Stack Overflow and Stack Exchange were not affected by the Dyn outage, but this was pure luck.

We knew if a DDoS of this scale happened to our DNS provider, the solution would be to have two completely separate DNS providers. That way, if one provider gets knocked off the Internet, we still have a fully functioning second provider who can pick up the slack. But there were still questions to be answered and assumptions to be validated:

  • What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?
  • What is the performance impact for our users if one of the providers is offline?
  • What is the best number of nameservers to be using?
  • How are we going to keep our DNS providers in sync?

These were pretty serious questions – for some of them we had hypotheses that needed to be checked, and others were answered in the DNS standards, but we know from experience that DNS providers in the wild do not always obey the DNS standards.

What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?

This one should be fairly easy to test. We’ve already done it once, so let’s just do it again. We fired up our tests, as we did in early 2016, but this time we specified two DNS providers:

  • Route 53 & Google Cloud
  • Route 53 & Azure DNS
  • Route 53 & Our internal DNS

We did this simply by listing Name Servers from both providers in our domain registration (and obviously we set up the same records in the zones for both providers).

Running with Route 53 and Google or Azure was fairly common sense – Google and Azure had good coverage of the regions that Route 53 performed poorly in. Their pricing is identical to Route 53, which would make forecasting for the budget easy. As a third option, we decided to see what would happen if we took our formerly demoted, on-premises BIND servers and put them back into production as one of the providers. Let’s look at the data for the three regions from before: United States, New Zealand and Brazil:

United States
DNS Performance for dual providers in the United States

New Zealand
DNS Performance for dual providers in New Zealand

Brazil

DNS Performance for dual providers in Brazil

There is probably one thing you’ll notice immediately from these boxplots, but there’s also another, not-so-obvious change:

  1. Azure is not in there (the obvious one).
  2. Our 3rd quartiles are measurably slower (the not-so-obvious one).

Azure

Azure has a fatal flaw in their DNS offering, as of the writing of this blog post. They do not permit the modification of the NS records in the apex of your zone:

You cannot add to, remove, or modify the records in the automatically created NS record set at the zone apex (name = “@”). The only change that’s permitted is to modify the record set TTL.

These NS records are what your DNS provider says are authoritative DNS servers for a given domain. It’s very important that they are accurate and correct, because they will be cached by clients and DNS resolvers and are more authoritative than the records provided by your registrar.

Without going too much into the actual specifics of how DNS caching and NS records work (it would take me another 2,500 words to describe this in detail), what would happen is this: whichever DNS provider you contact first would be the only DNS provider you could contact for that domain until your DNS cache expires. If Azure is contacted first, then only Azure’s nameservers will be cached and used. This defeats the purpose of having multiple DNS providers: in the event that the provider you’ve landed on goes offline (a roughly 50:50 chance), you will have no other DNS provider to fall back to.

So until Azure adds the ability to modify the NS records in the apex of a zone, they’re off the table for a dual-provider setup.

The 3rd quartile

What the third quartile represents here is the impact of latency on DNS. You’ll notice that in the results for ExDNS (which is the internal name for our on-premises BIND servers) the box plot is much taller than the others. This is because those servers are located in New Jersey and Colorado – far, far away from where most of our visitors come from. So, as expected, a service with only two points of presence in a single country (as opposed to dozens worldwide) performs very poorly for a lot of users.

Performance conclusions

So our choices were narrowed to Route 53 and Google Cloud, thanks to Azure’s inability to modify critical NS records. Thankfully, we have the data to back up the fact that Route 53 combined with Google is a very acceptable combination.

Remember earlier, when I said that the performance of New Zealand was important? This is because Route 53 performed well, but Google Cloud performed poorly in that region. But look at the chart again. Don’t scroll up, I’ll show you another chart here:

Comparison for DNS performance data in New Zealand between single and dual providers

See how Google on its own performed very poorly in NZ (its 1st quartile is 164ms versus 27ms for Route 53)? However, when you combine Google and Route 53 together, the performance basically stays the same as when there was just Route 53.

Why is this? Well, it’s due to a technique called Smoothed Round Trip Time (SRTT). Basically, DNS resolvers (namely certain versions of BIND and PowerDNS) keep track of which DNS servers respond faster, and weight queries towards those DNS servers. This means that queries should be skewed towards the faster provider more often than the slower providers. There’s a nice presentation over here if you want to learn more about this. The short version is that if you have many DNS servers, DNS cache servers will favour the fastest ones. As a result, if one provider is fast in Auckland but slow in London, and another provider is the reverse, DNS cache servers in Auckland will favour the first provider and DNS cache servers in London will favour the other. This is a very little-known feature of modern DNS servers, but our testing shows that enough ISPs support it that we are confident we can rely on it.
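If it helps to picture it, here’s a toy model of that behaviour. This is only an illustration of the general idea (exponentially smoothed RTTs, plus mostly picking the current fastest server while occasionally re-probing the others); it is not the actual selection algorithm used by BIND or PowerDNS.

```python
# Toy model of smoothed round-trip-time (SRTT) server selection.
# Purely illustrative -- not the real BIND or PowerDNS algorithm.
import random

ALPHA = 0.3  # weight given to the newest RTT measurement

class Nameserver:
    def __init__(self, name, typical_ms):
        self.name = name
        self.typical_ms = typical_ms
        self.srtt = 50.0                      # neutral starting estimate

    def query(self):
        rtt = random.gauss(self.typical_ms, 5.0)
        self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt   # exponential smoothing
        return rtt

# From an Auckland resolver's point of view: one provider is close, one is far.
servers = [Nameserver("provider-a", 25), Nameserver("provider-b", 160)]
picks = {s.name: 0 for s in servers}

for _ in range(1000):
    # Usually pick the lowest SRTT, but occasionally re-probe the others.
    if random.random() < 0.05:
        server = random.choice(servers)
    else:
        server = min(servers, key=lambda s: s.srtt)
    server.query()
    picks[server.name] += 1

print(picks)   # the nearby provider ends up answering the vast majority of queries
```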

What is the performance impact for our users if one of the providers is offline?

This is where having some on-premises DNS servers comes in very handy. What we can essentially do here is send a sample of our users to our on-premises servers, get a baseline performance measurement, then break one of the servers and run the performance measurements again. We can also measure in multiple places: we have the measurements reported by our clients (what the end user actually experienced), and we can look at data from within our network to see what actually happened. For the network analysis, we turned to our trusted network analysis tool, ExtraHop. This let us look at the data on the wire and get measurements from a broken DNS server (something you can’t easily do with a pcap on that server, because, you know, it’s broken).
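The client-side half of that measurement is conceptually simple: timestamp before and after a resolution. Our real numbers came from RUM JavaScript in the browser and from ExtraHop, not from a script like this, but here is a rough Python sketch of the idea (example.com is just a placeholder):

```python
# Rough sketch: time a DNS lookup from the client's point of view.
import socket
import time

def time_lookup(hostname: str) -> float:
    # Returns the elapsed resolution time in milliseconds.
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443)
    return (time.perf_counter() - start) * 1000.0

samples = [time_lookup("example.com") for _ in range(5)]
print([f"{ms:.1f} ms" for ms in samples])
# After the first lookup, the OS or local resolver cache usually answers,
# so only the first sample reflects the full resolution path.
```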

Here’s what healthy performance looked like on the wire (as measured by ExtraHop), with two DNS servers, both of them fully operational, over a 24-hour period (this chart is additive for the two series):

DNS performance with two healthy name servers

Blue and brown are the two different, healthy DNS servers. As you can see, there’s a very even 50:50 split in request volume. Because both of the servers are located in the same datacenter, Smoothed Round Trip Time had no effect, and we had a nice even distribution – as we would expect.

Now, what happens when we take one of those DNS servers offline, to simulate a provider outage?

DNS performance with a broken nameserver

In this case, the blue DNS server was offline and the brown DNS server was healthy. What we see here is that the blue, broken DNS server received the same number of requests as it did when it was healthy, but the brown, healthy DNS server saw twice as many requests. This is because the users who hit the broken server eventually retried their requests against the healthy server. So what does this look like in terms of actual client performance?

I’m only going to share one chart with you this time, because they were all essentially the same:

Comparison of healthy vs unhealthy DNS performance

What we see here is that a substantial number of our visitors saw a performance decrease. For some it was minor; for others, quite major. This is because the roughly 50% of visitors who hit the faulty server had to retry their request, and the time it takes to retry varies. You can also see a large increase in the long tail, which represents clients that took over 300 milliseconds to retry their request.

What does this tell us?

What this means is that in the event of a DNS provider going offline, we need to pull that provider out of rotation to restore the best performance, but until we do, our users will still receive service. A non-trivial number of them, however, will see a large performance impact.

What is the best number of nameservers to be using?

Based on the previous performance testing, we can assume that the worst-case number of requests a client may have to make is N/2 + 1, where N is the number of nameservers listed and one of the two providers is down. So if we list eight nameservers, four from each provider, a client may have to make 5 DNS requests before it finally gets a successful answer (four failed requests, plus a final successful one). A statistician better than I am could tell you the exact probabilities of each scenario, but the short answer is:

Four.

We felt that, based on our use case and the performance penalty we were willing to accept, we would list a total of four nameservers: two from each provider. This may not be the right decision for those with a web presence orders of magnitude larger than ours, but for comparison: Facebook provides two nameservers on IPv4 and two on IPv6, Twitter provides eight (four from Dyn and four from Route 53), and Google provides four.
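If you want to see how that plays out, here’s a back-of-the-envelope simulation. It assumes a naive stub resolver that simply tries nameservers in a random order until one answers, which is a simplification of real resolver behaviour:

```python
# Back-of-the-envelope: with N nameservers split evenly across two
# providers, and one provider completely down, how many queries does a
# client make before it gets an answer?
import random
from statistics import mean

def queries_until_answer(n_servers: int) -> int:
    # Half the listed nameservers belong to the dead provider.
    servers = ["up"] * (n_servers // 2) + ["down"] * (n_servers // 2)
    random.shuffle(servers)
    for attempt, state in enumerate(servers, start=1):
        if state == "up":
            return attempt
    return n_servers  # not reached while half the servers are up

for n in (2, 4, 8):
    trials = [queries_until_answer(n) for _ in range(100_000)]
    print(f"N={n}: worst case {n // 2 + 1} queries, average {mean(trials):.2f}")
```

The worst case grows with the length of the list while the average barely moves, which is the intuition behind keeping the list short.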

How are we going to keep our DNS providers in sync?

DNS has built-in ways of keeping multiple servers in sync. There are zone transfers (IXFR, AXFR), which are usually triggered by a NOTIFY packet sent to all the servers listed as NS records in the zone. But these are not used in the wild very often and have limited support from DNS providers. They also come with their own headaches, like maintaining an IP whitelist ACL containing hundreds of potential servers (all the different points of presence from multiple providers), none of which you control. You also lose the ability to audit who changed which record, because a record could be changed on any given server.

So we built a tool to keep our DNS in sync. We actually built it years ago, once our artisanally crafted zone files became too troublesome to edit by hand. The details of the tool are out of scope for this blog post; if you want to learn about it, keep an eye out around March 2017, as we plan to open-source it. In short, it lets us describe our DNS zone data in one place and push it to many different DNS providers.
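The general shape of such a tool is simple, even if the provider-specific plumbing is not. Here’s a heavily simplified, hypothetical sketch; the provider classes and record format are invented for illustration, and this is not our tool or any real provider SDK:

```python
# Hypothetical sketch of "describe the zone once, push it everywhere".
# The Provider classes are stand-ins; a real implementation would call
# each provider's API (or generate zone files) instead of printing.

ZONE = {
    ("www.example.com.", "A"): {"ttl": 300, "values": ["192.0.2.10"]},
    ("example.com.", "MX"):    {"ttl": 3600, "values": ["10 mail.example.com."]},
}

class Provider:
    name = "base"
    def current_records(self) -> dict:
        raise NotImplementedError
    def apply(self, upserts: dict, deletes: set) -> None:
        raise NotImplementedError

class LoggingProvider(Provider):
    """Stand-in that just prints what it would change."""
    def __init__(self, name):
        self.name = name
        self._records = {}
    def current_records(self):
        return dict(self._records)
    def apply(self, upserts, deletes):
        for key in deletes:
            print(f"[{self.name}] delete {key}")
            self._records.pop(key, None)
        for key, rec in upserts.items():
            print(f"[{self.name}] upsert {key} -> {rec}")
            self._records[key] = rec

def sync(zone: dict, providers: list) -> None:
    # Push the canonical zone description to every provider, applying
    # only the differences between desired and current state.
    for provider in providers:
        existing = provider.current_records()
        upserts = {k: v for k, v in zone.items() if existing.get(k) != v}
        deletes = set(existing) - set(zone)
        provider.apply(upserts, deletes)

sync(ZONE, [LoggingProvider("route53"), LoggingProvider("google-cloud-dns")])
```

Keeping the canonical zone description in one place is also what preserves the audit trail: every change goes through the tool rather than being edited on some arbitrary server.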

So what did we learn?

The biggest takeaway from all of this is that even if you have multiple DNS servers, DNS is still a single point of failure if they are all with the same provider and that provider goes offline. Until the Dyn attack, this was a mostly theoretical concern if you were using a large DNS provider, because before that attack no large DNS provider had ever had an extended outage across all of its points of presence.

However, implementing multiple DNS providers is not entirely straightforward. There are performance considerations. You need to ensure that both of your zones are serving the same data. There can be such a thing as too many nameservers.

Lastly, we did all of this whilst following DNS best practices. We didn’t have to do any weird DNS trickery, or write our own DNS server to do non-standard things. When DNS was designed in 1987, I wonder if the authors knew the importance of what they were creating. I don’t know, but their design still stands strong and resilient today.

Attributions

  • Thanks to Camelia Nicollet for her work in R to produce the graphs in this blog post

That anti-Trump Recode article is terrible

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-anti-trump-recode-article-is.html

Trump’s a dangerous populist. However, the left-wing media’s anti-Trump fetishism is doing nothing to stop Trump. It’s no better than “fake news”: it gets passed around a lot on social media, but it is intellectually bankrupt and unlikely to change anybody’s mind. A good example is this op-ed on Re/code [*] about Silicon Valley leaders visiting Trump.

The most important feature of that Re/code article is that it contains no criticism of Trump other than the fact that he’s a Republican. Half the country voted for Trump. Half the country voted Republican. It’s not just Trump that this piece imagines as being unreasonable, but half the country. It’s a fashionable bigotry among some of Silicon Valley’s leftist elite.

But CEOs live in a world where half their customers are Republican, and where half their shareholders are Republican. They cannot lightly take political positions that differ from those of their investors and customers. The Re/code piece claims CEOs said “we are duty-bound as American citizens to attend”. No, what they said was “we are duty-bound as officers of our corporations to attend”.

The word “officer”, as in “Chief Operating Officer”, isn’t an arbitrary title like “Senior Software Engineer” that has no real meaning. Instead, “officer” means “bound by duty”. It carries legal duties that officers can go to jail for failing to follow, and additional duties to shareholders that the board can fire them for neglecting.

Normal employees can have Twitter disclaimers saying “these are my personal opinions only, not those of my employer”. Officers of corporations cannot. They are the employer. They cannot champion political causes of their own that would impact their stock price. Sure, they can do minor things, like vote, or contribute quietly to campaigns, as long as they aren’t too public. They can also do political things that enhance the stock price, such as opposing encryption backdoors. Tim Cook can announce he’s gay, because that enhances the brand image among Apple’s key demographic of millennials. It’s not something he could do if he were the CEO of John Deere Tractors.

Among the things CEOs cannot do is take a stance against Donald Trump. Boeing is a good example. Boeing’s CEO criticized Trump’s stance on free trade, and 30 minutes later Trump tweeted criticism of a $4 billion contract with Boeing, causing an immediate billion-dollar drop in Boeing’s stock price.

This incident shows why the rest of us need to oppose Trump. Such vindictive politics is how democracies have failed. We cannot allow this to happen here. But the hands of CEOs are tied: they are duty-bound to avoid such hits to their stock price.

On the flip side, this is one of the few chances CEOs will have to lobby Trump. If Trump has proven anything, it’s that he has no real positions on things. This would be a great time to change his mind on “encryption backdoors”, for example.

Trump is a dangerous populist who sows distrust in the institutions that give us a stable, prosperous country. Any institution, from the press to the military to the intelligence services to the election system, gets attacked and brought into disrepute, even when it supports him. Trump has a dubious relationship with the truth, such as his repeated insistence that he won by a landslide rather than by a slim margin. He has deep character flaws, such as his vindictive attacks against those who oppose him (Boeing is just one of many examples). Hamilton electors cite deep, patriotic principles for changing their votes, such as Trump’s foreign influences and demagoguery.

What I’m demonstrating here is that thinking persons have good reasons to oppose Trump that can be articulated without mentioning the political issues that divide Democrats and Republicans. That the Re/code article is unable to do so makes it simply “hyper-partisan news”, the sort that strokes people’s prejudices and passions to get passed around a lot on social media, but which is unlikely to inform anybody or change any minds. In other words, it’s no better than “fake news”.


Using Wi-Fi to Detect Hand Motions and Steal Passwords

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/using_wi-fi_to_.html

This is impressive research: “When CSI Meets Public WiFi: Inferring Your Mobile Phone Password via WiFi Signals”:

Abstract: In this study, we present WindTalker, a novel and practical keystroke inference framework that allows an attacker to infer the sensitive keystrokes on a mobile device through WiFi-based side-channel information. WindTalker is motivated from the observation that keystrokes on mobile devices will lead to different hand coverage and the finger motions, which will introduce a unique interference to the multi-path signals and can be reflected by the channel state information (CSI). The adversary can exploit the strong correlation between the CSI fluctuation and the keystrokes to infer the user’s number input. WindTalker presents a novel approach to collect the target’s CSI data by deploying a public WiFi hotspot. Compared with the previous keystroke inference approach, WindTalker neither deploys external devices close to the target device nor compromises the target device. Instead, it utilizes the public WiFi to collect user’s CSI data, which is easy-to-deploy and difficult-to-detect. In addition, it jointly analyzes the traffic and the CSI to launch the keystroke inference only for the sensitive period where password entering occurs. WindTalker can be launched without the requirement of visually seeing the smart phone user’s input process, backside motion, or installing any malware on the tablet. We implemented Windtalker on several mobile phones and performed a detailed case study to evaluate the practicality of the password inference towards Alipay, the largest mobile payment platform in the world. The evaluation results show that the attacker can recover the key with a high successful rate.

That “high successful rate” is 81.7%.

News article.

Physical therapy with a pressure-sensing football

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/physical-therapy-pressure-sensing-football/

Every year, eighth-grade science teacher Michele Chamberlain challenges her students to find a solution to a real-world problem. The solution must be environmentally friendly, and must demonstrate their sense of global awareness.

Amelia Day

Amelia with her project.

One of Michele’s students, 14-year-old Amelia Day, knew she wanted to create something that would help her practice her favourite sport, and approached Chamberlain with an idea for a football-related project.

“I know you said to choose a project you love,” Amelia explained. “I love soccer and I want to do something with engineering. I know I want to compete.”

Originally, the tool was built to help budding football players practise kicking a ball correctly. The ball, tethered to a parasol shaft, uses a Raspberry Pi, LEDs, Bluetooth, and pressure points; together, these help athletes learn to strike the ball with the right degree of force at the appropriate spot.

However, after a conversation with her teacher, it became apparent that Amelia’s ball could be used for so much more. As a result, the project was gradually redirected towards working with stroke therapy patients.

“It uses the aspect of a soccer training tool and that interface makes it fun, but it also uses Bluetooth audio feedback to rebuild the neural pathways inside the brain, and this is what is needed to recover from a stroke,” explains Amelia. 
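Amelia hasn’t published her code, but the general pattern on a Raspberry Pi looks something like the hypothetical sketch below: read a pressure sensor through an ADC and give immediate feedback when a kick lands hard enough. Everything here (the MCP3008 ADC, the GPIO pin, the threshold) is an assumption for illustration, not her actual build:

```python
# Hypothetical sketch of the feedback loop: read a force-sensitive
# resistor through an MCP3008 ADC and light an LED when a kick exceeds
# a threshold. The pin, channel, and threshold are made up; this is not
# Amelia's actual code. Assumes the gpiozero library on a Raspberry Pi.
from gpiozero import LED, MCP3008
from time import sleep

pressure = MCP3008(channel=0)  # force-sensitive resistor via ADC channel 0
hit_led = LED(17)              # feedback LED on GPIO 17
THRESHOLD = 0.6                # normalised 0.0-1.0 reading that counts as a solid kick

while True:
    if pressure.value > THRESHOLD:
        hit_led.on()           # a real build might also trigger Bluetooth audio feedback here
        sleep(0.5)
        hit_led.off()
    sleep(0.01)
```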

“DE3MYSC Submission – [Press-Sure Soccer Ball]”

Uploaded by Amelia Day on 2016-04-20.

The video above is part of Amelia’s submission to the Discovery Education 3M Young Scientist Challenge 2016, a national competition for fifth- to eighth-grade students from across the USA.

Having made it to the final ten, Amelia travelled to 3M HQ in Minnesota this October, where she presented her project to a panel of judges. She placed third runner-up and received a cash prize.

LMS Hawks on Twitter

Our very own Amelia Day placed 3rd runner up @ the 3M National Junior Scientist competition this week. Proud to call her a Hawk!📓✏️🔎⚽️ #LMS

We’re always so proud to see young makers working to change the world and we wish Amelia the best of luck with her future. We expect to see great things from this Lakeridge Middle School Hawk.

The post Physical therapy with a pressure-sensing football appeared first on Raspberry Pi.