Yet another email scam (this time "Fibank")

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2125

I quote the text of the message I received today:

—-

(the Fibank logo)

dear customers,

Your online account at Fibank.bg has been temporarily locked for security reasons.

If you want to reactivate your profile, follow the instructions below.

you can do it here:

Unlock your account

Sincerely,

Customer Service

—-

As you might guess, the "Unlock your account" link does not point to Fibank – it points to the domain mobiocean.com. The domain is registered to a person with hidden contact details, and the site is hosted at secureserver.net – in other words, the whole scheme has been carefully arranged for the purposes of the scam.

Note the good Bulgarian and the original logo in the email. The scammers' operation certainly includes Bulgarians, at the very least as translators.

The site waiting for you at the link is also a decent copy of Fibank's – including the prominent warning against fraud. Naturally, you can enter whatever "username" and "password" you like – it happily accepts them and "unlocks" your account.

Don't fall for it. Once the cybercriminals drain your money, Bulgarian banks will help you get it back long after hell freezes over. I've seen enough such cases. God helps those who help themselves; those who don't help themselves, even God cannot protect.

Internet Security Threats at the Olympics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/internet_securi.html

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017, according to Trend Micro and ThreatConnect, two private cybersecurity firms, eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter's positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps taken against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they’ve already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

Astro Pi Mission Zero: your code is in space

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-mission-zero-day/

Every school year, we run the European Astro Pi challenge to find the next generation of space scientists who will program two space-hardened Raspberry Pi units, called Astro Pis, living aboard the International Space Station.

Italian ESA Astronaut Paolo Nespoli with the Astro Pi units. Image credit ESA.

Astro Pi Mission Zero

The 2017–2018 challenge included the brand-new non-competitive Mission Zero, which guaranteed that participants could have their code run on the ISS for 30 seconds, provided they followed the rules. They would also get a certificate showing the exact time period during which their code ran in space.

Astro Pi Mission Zero logo

We asked participants to write a simple Python program to display a personalised message and the air temperature on the Astro Pi screen. No special hardware was needed, since all the code could be written in a web browser using the Sense HAT emulator developed in partnership with Trinket.
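A Mission Zero entry really is only a few lines of Python. Here is a minimal sketch in that spirit (the greeting text and the fallback stub are my own; the real entries ran against the actual Sense HAT API on the Astro Pi or in the Trinket emulator):

```python
try:
    from sense_hat import SenseHat  # available on the Astro Pi and the emulator
except ImportError:
    class SenseHat:                 # stub so the sketch runs anywhere
        def get_temperature(self):
            return 22.5
        def show_message(self, text, scroll_speed=0.05):
            print(text)

def mission_zero_message(temperature_c):
    """Build the scrolling text from a personalised greeting and the air temperature."""
    return f"Hello from Earth! Temp: {temperature_c:.1f} C"

sense = SenseHat()
sense.show_message(mission_zero_message(sense.get_temperature()), scroll_speed=0.05)
```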

Scott McKenzie on Twitter

Students coding #astropi emulator to scroll a message to astronauts on @Raspberry_Pi in space this summer. Try it here: https://t.co/0KURq11X0L #Rm9Parents #CSforAll #ontariocodes

And now it’s time…

We received over 2500 entries for Mission Zero, and we’re excited to announce that tomorrow all entries with flight status will be run on the ISS…in SPAAACE!

There are 1771 Python programs with flight status, which will run back-to-back on Astro Pi VIS (Ed). The whole process will take about 14 hours. This means that everyone will get a timestamp showing 1 February, so we’re going to call this day Mission Zero Day!
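The arithmetic behind that estimate checks out:

```python
# 1771 entries at 30 seconds each, run back-to-back:
programs = 1771
seconds_each = 30
total_hours = programs * seconds_each / 3600
print(f"{total_hours:.1f} hours")  # roughly 14.8, i.e. "about 14 hours"
```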

Part of each team’s certificate will be a map, like the one below, showing the exact location of the ISS while the team’s code was running.

The grey line is the ISS orbital path; the red marker shows the ISS's location while the team's code was running. Produced using the Google Static Maps API.

The programs will be run in the same sequence in which we received them. For operational reasons, we can’t guarantee that they will run while the ISS flies over any particular location. However, if you have submitted an entry to Mission Zero, there is a chance that your code will run while the ISS is right overhead!

Go out and spot the station

Spotting the ISS is a great activity to do by yourself or with your students. The station looks like a very fast-moving star that crosses the sky in just a few minutes. If you know when and where to look, and it’s not cloudy, you literally can’t miss it.

Source: Andreas Möller, Wikimedia Commons.

The ISS passes over most ground locations about twice a day. For it to be clearly visible, though, you need darkness on the ground while the ISS, thanks to its altitude, is still in sunlight. There are a number of websites which can tell you when these visible passes occur, such as NASA's Spot the Station. Each of the sites requires you to give your location so it can work out when visible passes will occur near you.
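The altitude condition can be made concrete with a little geometry: approximating Earth's shadow as a cylinder of Earth's radius, a satellite at height h stays sunlit until it is roughly acos(R/(R+h)) past the terminator. A rough sketch (the altitude figure and the cylindrical-shadow simplification are my assumptions, not from the post):

```python
import math

EARTH_RADIUS_KM = 6371
ISS_ALTITUDE_KM = 408  # typical ISS altitude

def sunlit_angle_deg(altitude_km, radius_km=EARTH_RADIUS_KM):
    """Angle past the terminator (cylindrical-shadow approximation)
    at which a satellite at the given altitude enters Earth's shadow."""
    return math.degrees(math.acos(radius_km / (radius_km + altitude_km)))

# The ISS stays sunlit until ~20 degrees past the terminator, which is why
# it is visible from the dark ground shortly after sunset or before sunrise.
print(f"{sunlit_angle_deg(ISS_ALTITUDE_KM):.1f} degrees")
```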

Visible ISS pass star chart from Heavens Above, on which familiar constellations such as the Plough (see label Ursa Major) can be seen.

A personal favourite of mine is Heavens Above. It’s slightly more fiddly to use than other sites, but it produces brilliant star charts that show you precisely where to look in the sky. This is how it works:

  1. Go to www.heavens-above.com
  2. To set your location, click on Unspecified in the top right-hand corner
  3. Enter your location (e.g. Cambridge, United Kingdom) into the text box and click Search
  4. The map should change to the correct location — scroll down and click Update
  5. You’ll be taken back to the homepage, but with your location showing at the top right
  6. Click on ISS in the Satellites section
  7. A table of dates will now show, which are the upcoming visible passes for your location
  8. Click on a row to view the star chart for that pass — the line is the path of the ISS, and the arrow shows direction of travel
  9. Be outside in cloudless weather at the start time, look towards the direction where the line begins, and hope the skies stay clear

If you go out and do this, then tweet some pictures to @raspberry_pi, @astro_pi, and @esa. Good luck!

More Astro Pi

Mission Zero certificates will be arriving in participants’ inboxes shortly. We would like to thank everyone who participated in Mission Zero this school year, and we hope that next time you’ll take it one step further and try Mission Space Lab.

Mission Zero and Mission Space Lab are two really exciting programmes that young people of all ages can take part in. If you would like to be notified when the next round of Astro Pi opens for registrations, sign up to our mailing list here.

The post Astro Pi Mission Zero: your code is in space appeared first on Raspberry Pi.

When You Have A Blockchain, Everything Looks Like a Nail

Post Syndicated from Bozho original https://techblog.bozho.net/blockchain-everything-looks-like-nail/

Blockchain, AI, big data, NoSQL, microservices, single page applications, cloud, SOA. What do these have in common? They have been, or are being, hyped. At some point each was "the big thing" du jour. Everyone was investigating the possibility of using them, everyone was talking about them; there were meetups, conferences, articles on Hacker News and Reddit. There are more examples, of course (which is the JavaScript framework this month?), but I'll focus my examples on those above.

Another thing they have in common is that they are useful. All of them have some pretty good applications that are definitely worth the time and investment.

Yet another thing they have in common is that they are far from universally applicable. I've argued that monoliths are often still the better approach and that microservices introduce too much complexity for the average project. Big Data is something very few organizations actually have; AI/machine learning can help with a wide variety of problems, but it is just a tool in a toolbox, not the solution to all problems. Single page applications are great for, yeah, applications, but most websites are still websites, not feature-rich frontends – you don't need an SPA for every type of website. NoSQL has solved niche issues, and issues of scale that few companies have had, but nothing beats a good old relational database for the typical project out there. "The cloud" is not always where you want your software to be; and SOA just means everything (ESBs, direct integrations, even microservices, according to some). And the blockchain – it seems to be having limited success beyond cryptocurrencies.

And finally, another trait many of them share is that the hype has settled down. Only yesterday I read an article about the “death of the microservices madness”. I don’t see nearly as many new NoSQL databases as a few years ago, some of the projects that have been popular have faded. SOA and “the cloud” are already “boring”, and we’ve realized we don’t actually have big data if it fits in an Excel spreadsheet. SPAs and AI are still high in popularity, but we are getting a good understanding as a community why and when they are useful.

But it seems that nuanced reality has never stopped us from hyping a particular technology or approach. And maybe that's okay if it gets a promising, though niche, technology into the spotlight and lets it shine in the particular use cases where it fits.

But countless projects have suffered, and will suffer, from our collective inability to filter through these hypes. I'd bet millions of developer hours have been wasted trying to use the above technologies where they just didn't fit. It's like that scene from Idiocracy where a guy tries to fit a rectangular figure into a circular hole.

And the new one is not “the blockchain”. I won’t repeat my rant, but in summary – it doesn’t solve many of the problems companies are trying to solve with it right now just because it’s cool. Or at least it doesn’t solve them better than existing solutions. Many pilots will be carried out, many hours will be wasted in figuring out why that thing doesn’t work. A few of those projects will be a good fit and will actually bring value.

Do you need to reach multi-party consensus for the data you store? Can all stakeholders support the infrastructure to run their node(s)? Do they have the staff to administer the node(s)? Do you need to execute distributed application code on the data? Won't it be easier to just deploy RESTful APIs and integrate the parties through that? Do you need to store all the data, or just parts of it, to guarantee data integrity?

"If all you have is a hammer, everything looks like a nail", as the famous saying goes. In the software industry we repeatedly find new and cool hammers and then try to hit as many nails as we can. But only a few of them are actual nails. The rest remain ugly, hard to support, "who was the idiot that wrote this" and "I wasn't here when the decisions were made" types of projects.

I don’t have the illusion that we will calm down and skip the next hypes. Especially if adding the hyped word to your company raises your stock price. But if there’s one thing I’d like people to ask themselves when choosing a technology stack, it is “do we really need that to solve our problems?”.

If the answer is really "yes", then great, go ahead and deploy the multi-organization permissioned blockchain, or fork Ethereum, or whatever. If not, you can still do a project at home that you can safely abandon. And if you need some pilot project to figure out whether the new piece of technology would be beneficial – go ahead and try it. But have a baseline – the fact that it somehow worked doesn't mean it's better than old, tested models of doing the same thing.

The post When You Have A Blockchain, Everything Looks Like a Nail appeared first on Bozho's tech blog.

More notes on US-CERT's IOCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/more-notes-on-us-certs-iocs.html

Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT.

IOCs are "indicators of compromise", things you can look for in order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic.

A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was "white" in color. It's worth documenting, so that if the police ever show up at a suspected cabin in the woods, they can note that there's a "white" car parked in front.

But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly.

This is what happens with US-CERT's IOCs. They list some potentially useful things, but they also list a lot of junk that wastes people's time, with little ability to distinguish between the useful and the useless.

An example: a few months ago, US-CERT published the GRIZZLY STEPPE report. Among other things, it listed IP addresses used by hackers, with no indication of which IP addresses would be useful to watch for and which would be useless.

Some of these IP addresses were useful, pointing to servers the group had been using for a long time as command-and-control servers. Other IP addresses were more dubious, such as Tor exit nodes. You aren't concerned about any specific Tor exit IP address, because it changes randomly and so has no relationship to the attackers. Instead, if you cared about those Tor IP addresses, what you should be looking for is a dynamically updated list of Tor exit nodes, refreshed daily.
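The point about Tor exits can be made concrete: rather than baking them into your alert rules as if they identified the attacker, you would triage a published IOC feed against a freshly downloaded exit list and treat those matches separately. A minimal sketch (the addresses are made-up documentation-range placeholders, not real IOCs):

```python
def triage_iocs(ioc_ips, tor_exit_ips):
    """Split a published IOC list into addresses worth alerting on
    and addresses that are just Tor exits (ephemeral, attacker-agnostic)."""
    tor_exits = set(tor_exit_ips)
    useful = sorted(ip for ip in ioc_ips if ip not in tor_exits)
    noise = sorted(ip for ip in ioc_ips if ip in tor_exits)
    return useful, noise

# Hypothetical example:
useful, noise = triage_iocs(
    ["198.51.100.7", "203.0.113.9"],  # from an IOC feed
    ["203.0.113.9"],                  # from a daily-updated Tor exit list
)
```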

And finally, they listed IP addresses of Yahoo, because the attackers passed data through Yahoo servers. No, it wasn't because those Yahoo servers had been compromised; it's just that everyone passes things through them, like email.

A Vermont power plant blindly dumped all those IP addresses into its sensors. As a consequence, the next morning when an employee checked their Yahoo email, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid.

Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos's own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but for the same reason you'd note a white car – because it happened, not because you should be on the lookout for it. In context, there's nothing wrong with noting the file name.

But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”.

Yes, it's true that viruses use the same filenames as essential Windows files like "svchost.exe". That's, generally, something you should be aware of. But the fact that CRASHOVERRIDE did this is, by itself, meaningless.

What Dragos Security was actually reporting was that a “svchost.exe” with the file hash of 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus — it’s the hash that’s the important IOC. Pulling the filename out of context is just silly.
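In other words, a detection should key on the hash, never the name. A minimal sketch of checking a file against a SHA-1 IOC (the helper names are mine, not from either report):

```python
import hashlib

CHUNK = 65536

def sha1_of_file(path):
    """Stream a file through SHA-1; the hex digest is what you compare to the IOC."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_ioc(digest, ioc_sha1):
    # The filename is irrelevant: only the hash identifies the sample.
    return digest == ioc_sha1.lower()
```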

Luckily, the DHS also provides some of the raw information provided by Dragos. But even then, there are problems: they provide it as formatted HTML, PDF, or Excel documents. This corrupts the original data so that it's no longer machine readable. For example, from their webpage, they have the following:

import “pe”
import “hash”

Among the problems is the fact that the quote marks have been altered, probably by Word's "smart quotes" feature. In other cases, I've seen PDF documents get confused by the number 0 and the letter O, as if the raw data had been scanned in from a printed document and OCRed.
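Damage like this is mechanical enough to undo in the obvious cases. A small sketch that maps common word-processor substitutions back to their ASCII forms (it cannot, of course, recover a 0 that was OCRed as an O):

```python
SMART_CHARS = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2013": "-", "\u2014": "-",   # en/em dashes
}

def deformat(text):
    """Map typographic substitutions back to the ASCII the raw data used."""
    return text.translate(str.maketrans(SMART_CHARS))

print(deformat("import \u201cpe\u201d"))  # import "pe"
```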

If this were a "threat intel" company, we'd call this snake oil. The US-CERT is using Dragos Security's reports to promote itself, but ultimately provides negative value by mutilating the content.

This, ultimately, causes a lot of harm. The press trusted their content. So does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations, I meet them at hacker cons, and am fascinated by their stories. But institutionally, they are dumbed down to the same level as these US-CERT reports, with the smart people marginalized.

There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. An example of what not to do is giving power to people like Obama's cyberczar, Michael Daniel, who once claimed his lack of technical knowledge was a bonus, because it allowed him to see the strategic picture instead of getting distracted by details.

Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on "cybersecurity". The first draft, from his first weeks in power, was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I'm just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there's probably all sorts of politics in the background that I'm missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it's hard to see whether they'll have the nerve to implement this in practice. When the next breach happens, we'll see if anybody gets fired.

"antiquated and difficult to defend Information Technology"

The government uses laughably old computers sometimes. Forces in government want to upgrade them. This won't work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.

"Legacy" is a problem that money can't solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now is big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don't know how to build big systems. This will suck down any amount of budget you have with failed multi-million dollar projects.

It's not the antiquated systems that are usually the problem, but the more modern systems. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budget. Instead of doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST Framework

Probably the biggest thing in the EO is that it forces everyone to use the NIST Cybersecurity Framework.

The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.

There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don't do the things well.

Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards, so the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules, and have been fighting NIST on them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see first hand that they are doing intrusion-detection wrong. But the NIST Framework recommends they do it, because many organizations do it, but the NIST Framework doesn’t demand they do it well.
When this EO forces everyone to follow the NIST Framework, then, it's likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That's not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be long-term benefit to aligning everyone to thinking about the problem the same way.

Note that "following" the NIST Framework doesn't mean "doing" everything. Instead, it means documenting how you do each thing, the reason why you aren't doing it, or (most often) your plan to eventually do it.

"preference for shared IT services for email, cloud, and cybersecurity"

Different departments are hostile toward each other, with each doing things its own way. Obviously, the thinking goes, if more departments shared resources, they could cut costs with economies of scale. Also obviously, it'll stop the many home-grown wrong solutions that individual departments come up with.

In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.

But it won't turn out this way. Government does not have "economies of scale" but "incompetence at scale". It means a single GMail-like service that is expensive, unreliable, and, in the end, probably insecure. It means we can look forward to government breaches that instead of affecting one department affect all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring is the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.
"identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities"

What this means is "how can we help secure the power grid?".

What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IOCs ("indicators of compromise") of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IOCs out as signatures in their "Einstein" intrusion-detection system, located at many power grid locations. The next person who logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories by the Washington Post about how the Russians hacked our power grid.

The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes, and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IOCs.)
Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love, mass outages across the entire Internet caused by a million machines.

But frankly, botnets don't even make the top 10 list of problems they should be addressing. Number one is clearly "phishing" — you know, the attack that's been getting into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame as what probably led to that huge OPM hack. Replace the entire Executive Order with "stop phishing", and you'd go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires a rethink of how the government does email and of how desktop systems should be managed. So the government avoids complex problems it can't understand to focus on the simple things it can – botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there's really been no such attack on the American grid. After the Russian attacks against the Ukrainian power grid, the issue is heating up.

Nation-wide attacks aren’t really a threat, yet, in America. We have 10,000 different companies involved with different systems throughout the country. Trying to hack them all at once is unlikely. What’s funny is that it’s the government’s attempts to standardize everything that’s likely to be our downfall, such as sticking Einstein sensors everywhere.

Instead of trying to make the grid unhackable, they should be trying to lessen our reliance upon the grid. They should be encouraging things like Tesla Powerwalls, solar panels on roofs, backup generators, and so on. Indeed, industrial backup power generation should be considered a source of grid backup: factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So "supply chain" cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as the recent worry about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It's impractical for a vendor to secure a product, because it's built from so many components the vendor doesn't understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power because things are going well in China. If there's a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It's hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders' weird sex fetishes and offshore accounts isn't going to work either.

Also, deterrence relies upon "attribution", which is hard. While news stories claim last year's expulsion of Russian diplomats was due to election hacking, that wasn't the stated reason. Instead, the claimed reason was Russia's interference with diplomats in Europe, such as breaking into diplomats' homes and pooping on their dining room tables. We know it's them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation states ignores the reality that much of the hacking against our government comes from non-state actors. It's not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while he was traveling in Spain. We can't get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, "deterrence" is one of those real-world concepts that are hard to shoehorn into a cyber ("cyber-deterrence") equivalent. It encourages lots of bad thinking, such as export controls on "cyber-weapons" to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows; its sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists the organization buy a third-party product that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it as a technical expert who is a government outsider.

My guess is the most lasting consequence will be making everyone follow the NIST Framework; the rest will just be a lot of aspirational stuff that'll be ignored.

Build your own Crystal Maze at Home

Post Syndicated from Laura Sach original https://www.raspberrypi.org/blog/build-crystal-maze/

I recently discovered a TV channel which shows endless re-runs of the game show The Crystal Maze, and it got me thinking: what resources are available to help the younger generation experience the wonder of this iconic show? Well…

Enter the Crystal Maze

If you’re too young to remember The Crystal Maze, or if you come from a country lacking this nugget of TV gold, let me explain. A band of fairly useless contestants ran around a huge warehouse decked out to represent four zones: Industrial, Aztec, Futuristic, and Medieval. They were accompanied by a wisecracking host in a fancy coat, Richard O’Brien.

A GIF of Crystal Maze host Richard O'Brien having fun on set.

Richard O’Brien also wrote The Rocky Horror Picture Show so, y’know, he was interesting to watch if nothing else.

The contestants would enter rooms to play themed challenges – the categories were mental, physical, mystery, and skill – with the aim of winning crystals. If they messed up, they were locked in the room forever (well, until the end of the episode). For every crystal they collected, they’d be given a bit more time in a giant crystal dome at the end of the programme. And what did they do in the dome? They tried to collect pieces of gold paper while being buffeted by a wind machine, of course!

A GIF of a boring prize being announced to the competing team.

Collect enough gold paper and you win a mediocre prize. Fail to collect enough gold paper and you win a mediocre prize. Like I said: TV gold.

Sounds fun, doesn’t it? Here are some free resources that will help you recreate the experience of The Crystal Maze in your living room…without the fear of being locked in.

Marble maze

Image of Crystal Maze Board Game

Photo credit: Board Game Geek

Make the classic Crystal Maze game, but this time with a digital marble! Use your Sense HAT to detect pitch, roll, and yaw as you guide the marble to its destination.
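The heart of the make is turning tilt readings into marble movement. Here is a rough sketch of that logic in plain Python; on a real Sense HAT the pitch and roll would come from `sense.get_orientation()`, and the official resource may structure things differently:

```python
# Toy tilt-to-movement logic for a digital marble on an 8x8 LED grid.
# On a real Sense HAT you would read pitch/roll from the orientation
# sensor; here they are plain function arguments (degrees, 0-360).

def move_marble(x, y, pitch, roll):
    """Nudge the marble one cell in the tilt direction, staying on the 8x8 grid."""
    if 1 < pitch < 179 and x > 0:        # tilted one way: move left
        x -= 1
    elif 359 > pitch > 179 and x < 7:    # tilted the other way: move right
        x += 1
    if 1 < roll < 179 and y < 7:         # roll forward: move down
        y += 1
    elif 359 > roll > 179 and y > 0:     # roll back: move up
        y -= 1
    return x, y
```

Run in a loop that repaints the LED matrix each frame, this is enough for a playable maze.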

Bonus fact: marble mazes featured in the Crystal Maze board game from the 1990s.

Buzz Wire

Crystal Maze Buzz Wire game screengrab

Photo credit: Board Game Geek

Guide the hook along the wire and win the crystal! Slip up and buzz three times, though, and it’s an automatic lock-in. The beauty of this make is that you can play any fail sound you like: burp wire, anyone? Follow the tutorial by community member David Pride, which he created for the Cotswold Jam.

Laser tripwire

Crystal Maze laser trip wire screengrab

Photo credit: Marc Gerrish

Why not recreate the most difficult game of all? Can you traverse a room without setting off the laser alarms, and grab the crystal? Try your skill with our laser tripwire resource!
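The core of a tripwire is simple: the laser keeps a light sensor bright, and a sudden drop below a threshold means something crossed the beam. A minimal sketch of that detection logic, with the sensor stubbed out as a list of sample values (on a real Pi the readings would come from an LDR circuit on the GPIO pins):

```python
# Core logic of a laser tripwire: the beam keeps a light sensor bright;
# a reading below the threshold means the beam was broken.
# Sample values here are invented for illustration.

def beam_broken(samples, threshold=0.3):
    """Return the index of the first sample below threshold, or None."""
    for i, level in enumerate(samples):
        if level < threshold:
            return i
    return None

readings = [0.9, 0.88, 0.91, 0.12, 0.9]   # the dip is the contestant's leg
print(beam_broken(readings))  # 3
```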

Forget the crystal! Get out!

I would love to go to a school fête where kids build their own Crystal Maze-style challenges. I’m sure there are countless other events which you could jazz up with a fun digital making challenge, though the bald dude in a fur coat remains optional. So if you have made your own Crystal Maze challenge, or you try out one of ours, we’d love to hear about it!

Here at the Raspberry Pi Foundation, we take great pride in the wonderful free resources we produce for you to use in classes, at home, and in coding clubs. We publish them under a Creative Commons licence, and they’re an excellent way to develop your digital making skills. And massive thanks to David Pride and the Cotswold Jam for creating and sharing your great resources for free.

The post Build your own Crystal Maze at Home appeared first on Raspberry Pi.

European Astro Pi Challenge winners

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/european-astro-pi-winners/

In October last year, with the European Space Agency and CNES, we launched the first ever European Astro Pi challenge. We asked students from all across Europe to write code for the flight of French ESA astronaut Thomas Pesquet to the International Space Station (ISS) as part of the Proxima mission. Today, we are very excited to announce the winners! First of all, though, we have a very special message from Thomas Pesquet himself, which comes all the way from space…

Thomas Pesquet congratulates Astro Pi participants from space

French ESA astronaut Thomas Pesquet floats in to thank all participants in the European Astro Pi challenge.

Thomas also recorded a video in French: you can click here to see it and to enjoy some more of his excellent microgravity acrobatics.

A bit of background

This year’s competition expands on our previous work with British ESA astronaut Tim Peake, in which, together with the UK Space Agency and ESA, we invited UK students to design software experiments to run on board the ISS.

Astro Pi Vis (AKA Ed) on board the ISS. Image from ESA.

In 2015, we built two space-hardened Raspberry Pi units, or Astro Pis, to act as the platform on which to run the students’ code. Affectionately nicknamed Ed and Izzy, the units were launched into space on an Atlas V rocket, arriving at the ISS a few days before Tim Peake. He had a great time running all of the programs, and the data collected was transmitted back to Earth so that the winners could analyse their results and share them with the public.

The European challenge provides the opportunity to design code to be run in space to school students from every ESA member country. To support the participants, we worked with ESA and CPC to design, manufacture, and distribute several hundred free Astro Pi activity kits to the teams who registered. Further support for teachers was provided in the form of three live webinars, a demonstration video, and numerous free educational resources.

Image of Astro Pi kit box

The Astro Pi activity kit used by participants in the European challenge.

The challenge

Thomas Pesquet assigned two missions to the teams:

  • A primary mission, for which teams needed to write code to detect when the crew are working in the Columbus module near the Astro Pi units.
  • A secondary mission, for which teams needed to come up with their own scientific investigation and write the code to execute it.
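The winning entries are not reproduced here, but the shape of the primary mission is easy to sketch: watch a sensor stream for readings that deviate sharply from a rolling baseline, which suggests someone is moving nearby. This is purely illustrative, one plausible approach rather than any team's actual code; on a real Astro Pi the readings would come from the Sense HAT (humidity or accelerometer), while here they are just numbers:

```python
# A plausible sketch of "detect crew nearby": flag readings that jump
# well above the mean of the last few samples. Illustrative only; the
# sensor source and thresholds are assumptions, not any team's entry.

from collections import deque

def detect_activity(readings, window=5, threshold=2.0):
    """Return indices where a reading exceeds the rolling-window mean
    of the previous `window` readings by more than `threshold`."""
    history = deque(maxlen=window)
    events = []
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline = sum(history) / window
            if value - baseline > threshold:
                events.append(i)
        history.append(value)
    return events
```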

The deadline for code submissions was 28 February 2017, with the judging taking place the following week. We can now reveal which schools will have the privilege of having their code uploaded to the ISS and run in space.

The proud winners!

Everyone produced great work and the judges found it really tough to narrow the entries down. In addition to the winning submissions, there were a number of teams who had put a great deal of work into their projects, and whose entries have been awarded ‘Highly Commended’ status. These teams will also have their code run on the ISS.

We would like to say a big thank you to everyone who participated. Massive congratulations are due to the winners! We will upload your code digitally using the space-to-ground link over the next few weeks. Your code will be executed, and any files created will be downloaded from space and returned to you via email for analysis.

In no particular order, the winners are:

France

  • Winners
    • @stroteam, Institut de Genech, Hauts-de-France
    • Wierzbinski, École à la maison, Occitanie
    • Les Marsilyens, École J. M. Marsily, PACA
    • MauriacSpaceCoders, Lycée François Mauriac, Nouvelle-Aquitaine
    • Ici-bas, École de Saint-André d’Embrun, PACA
    • Les Astrollinaires, Lycée général et technologique Guillaume Apollinaire, PACA
  • Highly Commended
    • ALTAÏR, Lycée Albert Claveille, Nouvelle Aquitaine
    • GalaXess Reloaded, Lycée Saint-Cricq, Nouvelle Aquitaine
    • Les CM de Neffiès, École Louis Authie, Occitanie
    • Équipe Sciences, Collège Léonce Bourliaguet, Nouvelle Aquitaine
    • Maurois ICN, Lycée André Maurois, Normandie
    • Space Project SP4, Lycée Saint-Paul IV, Île de la Réunion
    • 4eme2 Gymnase Jean Sturm, Gymnase Jean Sturm, Grand Est
    • Astro Pascal dans les étoiles, École Pascal, Île-de-France
    • les-4mis, EREA Alexandre Vialatte, Auvergne-Rhône-Alpes
    • Space Cavenne Oddity, École Cavenne, Auvergne-Rhône-Alpes
    • Luanda for Space, Lycée Français de Luanda, Angola
      (Note: this is a French international school and the team members have French nationality/citizenship)
    • François Detrille, Lycée Langevin-Wallon, Île-de-France

Greece

  • Winners
    • Delta, TALOS ed-UTH-robotix, Magnesia
    • Weightless Mass, Intercultural Junior High School of Evosmos, Macedonia
    • 49th Astro Pi Teamwork, 49th Elementary School of Patras, Achaia
    • Astro Travellers, 12th Primary School of Petroupolis, Attiki
    • GKGF-1, Gymnasium of Kanithos, Sterea Ellada
  • Highly Commended
    • AstroShot, Lixouri High School, Kefalonia
    • Salamina Rockets Pi, 1st Senior High School of Salamina, Attiki
    • The four Astro-fans, 6th Gymnasio of Veria, Macedonia
    • Samians, 2nd Gymnasio Samou, North Eastern Aegean

United Kingdom

  • Winners
    • Madeley Ad Astra, Madeley Academy, Shropshire
    • Team Dexterity, Dyffryn Taf School, Carmarthenshire
    • The Kepler Kids, St Nicolas C of E Junior School, Berkshire
    • Catterline Pi Bugs, Catterline Primary, Aberdeenshire
    • smileyPi, Westminster School, London
  • Highly Commended
    • South London Raspberry Jam, South London Raspberry Jam, London

Italy

  • Winners
    • Garibaldini, Istituto Comprensivo Rapisardi-Garibaldi, Sicilia
    • Buzz, IIS Verona-Trento, Sicilia
    • Water warmers, Liceo Scientifico Galileo Galilei, Abruzzo
    • Juvara/Einaudi Siracusa, IIS L. Einaudi, Sicilia
    • AstroTeam, IIS Arimondi-Eula, Piemonte

Poland

  • Winners
    • Birnam, Zespół Szkoły i Gimnazjum im. W. Orkana w Niedźwiedziu, Malopolska
    • TechnoZONE, Zespół Szkół nr 2 im. Eugeniusza Kwiatkowskiego, Podkarpacie
    • DeltaV, Gimnazjum nr 49, Województwo śląskie
    • The Safety Crew, MZS Gimnazjum nr 1, Województwo śląskie
    • Warriors, Zespół Szkół Miejskich nr 3 w Jaśle, Podkarpackie
  • Highly Commended
    • The Young Cuiavian Astronomers, Gimnazjum im. Stefana Kardynała Wyszyńskiego w Piotrkowie Kujawskim, Kujawsko-pomorskie
    • AstroLeszczynPi, I Liceum Ogolnokształcace w Jasle im. Krola Stanislawa Leszczynskiego, Podkarpackie

Portugal

  • Winners
    • Sampaionautas, Escola Secundária de Sampaio, Setúbal
    • Labutes Pi, Escola Secundária D. João II, Setúbal
    • AgroSpace Makers, EB 2/3 D. Afonso Henriques, Cávado
    • Zero Gravity, EB 2/3 D. Afonso Henriques, Cávado
    • Lua, Agrupamento de Escolas José Belchior Viegas, Algarve

Romania

  • Winners
    • AstroVianu, Tudor Vianu National High School of Computer Science, Bucharest
    • MiBus Researchers, Mihai Busuioc High School, Iași
    • Cosmos Dreams, Nicolae Balcescu High School, Cluj
    • Carmen Sylva Astro Pi, Liceul Teoretic Carmen Sylva Eforie, Constanța
    • Stargazers, Tudor Vianu National High School of Computer Science, Bucharest

Spain

  • Winners
    • Papaya, IES Sopela, Vizcaya
    • Salesianos-Ubeda, Salesianos Santo Domingo Savio, Andalusia
    • Valdespartans, IES Valdespartera, Aragón
    • Ins Terrassa, Institut Terrassa, Cataluña

Ireland

  • Winner
    • Moonty1, Mayfield Community School, Cork

Germany

  • Winner
    • BSC Behringersdorf Space Center, Labenwolf-Gymnasium, Bayern

Norway

  • Winner
    • Skedsmo Kodeklubb, Kjeller Skole, Akershus

Hungary

  • Winner
    • UltimaSpace, Mihaly Tancsics Grammar School of Kaposvár, Somogy

Belgium

  • Winner
    • Lambda Voyager, Stedelijke Humaniora Dilsen, Limburg

FAQ

Why aren’t all 22 ESA member states listed?

  • Because some countries did not have teams participating in the challenge.

Why do some countries have fewer than five teams?

  • Either because those countries had fewer than five teams qualifying for space flight, or because they had fewer than five teams participating in the challenge.

How will I get my results back from space?

  • After your code has run on the ISS, we will download any files you created and they will be emailed to your teacher.

The post European Astro Pi Challenge winners appeared first on Raspberry Pi.

Dear Obama, From Infosec

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/dear-obama-from-infosec.html

Dear President Obama:

We are more than willing to believe Russia was responsible for the hacked emails/records that influenced our election. We believe Russian hackers were involved. Even if these hackers weren’t under the direct command of Putin, we know he could put a stop to such hacking if he chose. It’s like harassment of journalists and diplomats. Putin encourages a culture of thuggery that attacks opposition, without his personal direction, but with his tacit approval.

Your lame attempts to convince us of what we already agree with have irretrievably damaged your message.

Instead of communicating with the American people, you worked through your typical system of propaganda, such as stories in the New York Times quoting unnamed “senior government officials”. We don’t want “unnamed” officials — we want named officials (namely you) whom we can pin down and question. When you work through this system of official leaks, we believe you have something to hide, that the evidence won’t stand on its own.

We still don’t believe the CIA’s conclusions because we don’t know, precisely, what those conclusions are. Are they derived purely from companies like FireEye and CrowdStrike based on digital forensics? Or do you have spies in Russian hacker communities that give better information? This is such an important issue that it’s worth degrading sources of information in order to tell us, the American public, the truth.

You had the DHS and US-CERT issue the “GRIZZLY STEPPE” [*] report “attributing those compromises to Russian malicious cyber activity”. It does nothing of the sort. It’s full of garbage. It contains signatures of viruses that are publicly available, used by hackers around the world, not just Russia. It contains a long list of IP addresses from perfectly normal services, like Tor, Google, Dropbox, Yahoo, and so forth.

Yes, hackers use Yahoo for phishing and malvertising. It doesn’t mean every access of Yahoo is an “Indicator of Compromise”.

For example, I checked my web browser [chrome://net-internals/#dns] and found that last year, on November 20th, it accessed two IP addresses that are on the Grizzly Steppe list.

No, this doesn’t mean I’ve been hacked. It means I just had a normal interaction with Yahoo. It means the Grizzly Steppe IoCs are garbage.

If your intent was to show technical information to experts to confirm Russia’s involvement, you’ve done the precise opposite. Grizzly Steppe demonstrates such enormous incompetence that we doubt all the technical details you might have. I mean, it’s possible that you classified the important details and declassified the junk, but even then, that junk isn’t worth publishing. There’s no excuse for those Yahoo addresses to be in there, or for the numerous other problems.
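The mechanics of such a check are trivial, which is part of the problem. A minimal sketch, with invented addresses: any IP belonging to a shared service (Yahoo, Tor, a CDN) will match for every user of that service, not just victims of an attack.

```python
# Minimal sketch of matching observed traffic against a published IoC list.
# The addresses are invented (RFC 5737 documentation ranges); the point is
# that a "hit" on a shared-service IP proves nothing by itself.

ioc_ips = {"203.0.113.7", "198.51.100.23"}   # hypothetical "indicator" list
observed = {"203.0.113.7", "192.0.2.1"}      # IPs my browser talked to

hits = ioc_ips & observed
for ip in sorted(hits):
    print(f"{ip} is on the IoC list - but that alone proves nothing")
```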

Among the consequences is the Washington Post story claiming Russians had hacked into the Vermont power grid. What really happened is that somebody just checked their Yahoo email, thereby accessing one of the same IP addresses I did. How they got from the facts (one person accessed Yahoo email) to the story (Russians hacked the power grid) is your responsibility. This misinformation is your fault.

You announced sanctions for the Russian hacking [*]. At the same time, you announced sanctions for Russian harassment of diplomatic staff. These two events are confused in the press, with most stories reporting you expelled 35 diplomats for hacking, when that appears not to be the case.

Your list of individuals/organizations is confusing. It makes sense to name the GRU, FSB, and their officers. But why name “ZorSecurity” but not sole proprietor “Alisa Esage Shevchenko”? It seems a minor target, and you give no information why it was selected. Conversely, you ignore the APT28/APT29 Dukes/CozyBear groups that feature so prominently in your official leaks. You also throw in a couple extra hackers, for finance hacks rather than election hacks. Again, this causes confusion in the press about exactly who you are sanctioning and why. It seems as slipshod as the DHS/US-CERT report.

Mr President, you’ve got two weeks left in office. Russia’s involvement is a huge issue, especially given President-Elect Trump’s pro-Russia stance. If you’ve got better information than this, I beg you to release it. As it stands now, all you’ve done is support Trump’s narrative, making this look like propaganda — and bad propaganda at that. Give us, the infosec/cybersec community, technical details we can look at, analyze, and confirm.

Regards,
Infosec

Some notes on IoCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/some-notes-on-iocs.html

Obama “sanctioned” Russia today for those DNC/election hacks, kicking out 35 diplomats (**), closing diplomatic compounds (**), seizing assets of named individuals/groups (***). They also published “IoCs” of those attacks, fingerprints/signatures that point back to the attackers, like virus patterns, file hashes, and IP addresses.

These IoCs are of low quality. They are published as a political tool, to prove they have evidence pointing to Russia. They have limited utility to defenders, or those publicly analyzing attacks.

Consider the YARA rule included in US-CERT’s “GRIZZLY STEPPE” announcement:

What is this? What does this mean? What do I do with this information?

It’s a YARA rule. YARA is a tool ostensibly for malware researchers, to quickly classify files. It’s not really an anti-virus product designed to prevent or detect an intrusion/infection, but to analyze an intrusion/infection afterward — such as attributing the attack. Signatures like this will identify a well-known file found on infected/hacked systems.
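In spirit, a YARA rule is just a named set of strings plus a condition over which of them appear in a file. A toy Python equivalent makes the idea concrete; this is not real YARA syntax, and the strings below are invented for illustration, not the actual PAS rule:

```python
# Toy illustration of how a YARA-style rule classifies a file: a set of
# byte strings and a condition ("all of them") over which strings appear.
# Not real YARA, and not the actual PAS TOOL WEB KIT rule.

def match_rule(data, strings, condition=all):
    """Return True if the condition holds over string presence in data."""
    return condition(s in data for s in strings)

pas_like_rule = [b"<?php", b"base64_decode", b"md5($_POST"]

sample = b'<?php eval(base64_decode($x)); if (md5($_POST["p"]) == $h) {}'
print(match_rule(sample, pas_like_rule))  # True for this sample
```

Real YARA adds regexes, hex patterns with wildcards, and richer conditions, but the classification model is the same.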

What this YARA rule detects is, as the name suggests, the “PAS TOOL WEB KIT”, a web shell tool that’s popular among Russia/Ukraine hackers. If you google “PAS TOOL PHP WEB KIT”, the second result points to the tool in question. You can download a copy here [*], or you can view it on GitHub here [*].

Once a hacker gets comfortable with a tool, they tend to keep using it. That implies the YARA rule is useful at tracking the activity of that hacker, to see which other attacks they’ve been involved in, since it will find the same web shell on all the victims.

The problem is that this P.A.S. web shell is popular, used by hundreds if not thousands of hackers, mostly associated with Russia, but also throughout the rest of the world (judging by hacker forum posts). This makes using the YARA signature for attribution problematic: just because you found P.A.S. in two different places doesn’t mean it’s the same hacker.

A web shell, by the way, is one of the most common things hackers use once they’ve broken into a server. It allows further hacking and exfiltration traffic to appear as normal web requests. It typically consists of a script file (PHP, ASP, PERL, etc.) that forwards commands to the local system. There are hundreds of popular web shells in use.

We have little visibility into how the government used these IoCs. IP addresses and YARA rules like this are weak, insufficient for attribution by themselves. On the other hand, if they’ve got web server logs from multiple victims where commands from those IP addresses went to this specific web shell, then the attribution would be strong that all these attacks are by the same actor.

In other words, these rules can be a reflection of the fact the government has excellent information for attribution. Or, it could be a reflection that they’ve got only weak bits and pieces. It’s impossible for us outsiders to tell. IoCs/signatures are fetishized in the cybersecurity community: they love the small rule, but they ignore the complexity and context around the rules, often misunderstanding what’s going on. (I’ve written thousands of the things — I’m constantly annoyed by the ignorance among those not understanding what they mean).

I see on twitter people praising the government for releasing these IoCs. What I’m trying to show here is that I’m not nearly as enthusiastic about their quality.


Note#1: BTW, the YARA rule has to trigger on the PHP statements, not on the embedded BASE64-encoded stuff. That’s because it’s encrypted with a password, so it could be different for every hacker.

Note#2: Yes, the hackers who use this tool can evade detection by minor changes that avoid this YARA rule. But that’s not a concern — the point is to track the hacker using this tool across many victims, to attribute attacks. The point is not to act as an anti-virus/intrusion-detection system that triggers on “signatures”.

Note#3: Publishing the YARA rule burns it. The hackers it detects will presumably move to different tools, like PASv4 instead of PASv3. Presumably, the FBI/NSA/etc. have a variety of YARA rules for various web shells used by known active hackers, to attribute attacks to various groups. They aren’t publishing these because they want to avoid burning those rules.

Note#4: The PDF from the DHS has pretty diagrams about the attacks, but it doesn’t appear this web shell was used in any of them. It’s difficult to see where it fits in the overall picture.


(**) No, not really. Apparently, kicking out the diplomats was punishment for something else, not related to the DNC hacks.

(***) It’s not clear if these “sanctions” have any teeth.

IoT saves lives but infosec wants to change that

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/iot-saves-lives.html

The cybersecurity industry mocks/criticizes IoT. That’s because they are evil and wrong. IoT saves lives. This was demonstrated a couple of weeks ago when a terrorist attempted to drive a truck through a Christmas market in Germany. The truck had an Internet-connected braking system (firmware updates, configuration, telemetry). When it detected the collision, it deployed the brakes, bringing the truck to a stop. Injuries and deaths were a tenth of those in the similar Nice truck attack earlier in the year.

All the trucks shipped by Scania in the last five years have had mobile phone connectivity to the Internet. Scania pulls back telemetry from trucks, for the purposes of improving drivers, but also to help improve the computerized features of the trucks. They put everything under the microscope, such as how to improve air conditioning to make the trucks more environmentally friendly.

Among their features is the “Autonomous Emergency Braking” system. This is the system that saved lives in Germany.

You can read up on these features on their website, or in their annual report [*].

My point is this: the cybersecurity industry is a bunch of police-state fetishists that want to stop innovation, to solve the “security” problem first before allowing innovation to continue. This will only cost lives. Yes, we desperately need to solve the problem. Almost certainly, the Scania system can trivially be hacked by mediocre hackers. But if Scania had waited first to secure its system before rolling it out in trucks, many more people would now be dead in Germany. Don’t listen to cybersecurity professionals who want to stop the IoT revolution — they just don’t care if people die.


Update: Many, such as the first commenter, point out that the emergency brakes operate independently of the Internet connection, thus disproving this post.

That’s silly. That’s the case of all IoT devices. The toaster still toasts without Internet. The surveillance camera still records video without Internet. My car, which also has emergency brakes, still stops. In almost no IoT is the Internet connectivity integral to the day-to-day operation. Instead, Internet connectivity is for things like configuration, telemetry, and downloading firmware updates — as in the case of Scania.

While the brakes don’t make their decision based on the current connectivity, connectivity is nonetheless essential to the equation. Scania monitors its fleet of 170,000 trucks and uses that information to make trucks, including braking systems, better.

My car is no more or less Internet connected than the Scania truck, yet hackers have released exploits at hacking conferences for it, and it’s listed as a classic example of an IoT device. Before you say a Scania truck isn’t an IoT device, you first have to get all those other hackers to stop calling my car an IoT device.

The false-false-balance problem

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/the-false-false-balance-problem.html

Until recently, journalism in America prided itself on objectivity — reporting the truth without taking sides. That’s because big debates are always complex and nuanced, and both sides are equally reasonable. Therefore, when writing an article, reporters attempt to achieve balance by quoting people/experts/proponents on both sides of an issue.

But what about those times when one side is clearly unreasonable? You’d never try to achieve balance by citing those who believe in aliens and Bigfoot, for example. Thus, journalists have come up with the theory of false-balance to justify being partisan and one-sided on certain issues.

Typical examples where journalists cite false-balance are reporting on anti-vaxxers, climate-change denialists, and Creationists. More recently, false-balance has become an issue in the 2016 Trump election.

But this concept of false-balance is wrong. It’s not that anti-vaxxers, denialists, Creationists, and white supremacists are reasonable. Instead, the issue is that the left-wing has reframed the debate. They’ve simplified it into something black-and-white, removing nuance, in a way that shows their opponents as being unreasonable. The media then adopts the reframed debate.
Let’s talk anti-vaxxers. One of the policy debates is whether the government has the power to force vaccinations on people (or on people’s children). Reasonable people say the government doesn’t have this power. Many (if not most) people hold this opinion while agreeing that vaccines are both safe and effective (that they don’t cause autism).

Consider this February 2015 interview with Chris Christie. He’s one of the few politicians who have taken the position that government can override personal choice, such as in the case of an outbreak. Yet, when he said “parents need to have some measure of choice in things as well, so that’s the balance that the government has to decide”, he was broadly reviled as an anti-vaxxer throughout the media. The press reviled other Republican candidates the same way, even while ignoring almost identical statements made at the same time by the Obama administration. They also ignored clearly anti-vax comments from both Hillary and Obama during the 2008 election.

Yes, we can all agree that anti-vaxxers are a bunch of crazy nutjobs. In calling for objectivity, we aren’t saying that you should take them seriously. Instead, we are pointing out the obvious bias in the way the media attacked Republican candidates as being anti-vaxxers, and then hid behind “false-balance”.
Now let’s talk evolution. The issue is this: Darwinism has been set up as some sort of competing religion against belief in God(s). High schools teach children to believe in Darwinism, but not to understand Darwinism. Few kids graduate understanding Darwinism, which is why it’s invariably misrepresented in mass media (X-Men, Planet of the Apes, Waterworld, Godzilla, Jurassic Park, etc.). The only movie I can recall getting evolution correct is Idiocracy.

Also, evolution has holes in it. This isn’t a bad thing in science; every scientific theory has holes. Science isn’t a religion. We don’t care about the holes. That some things remain unexplained by a theory doesn’t bother us. Science has no problem with gaps in knowledge, where we admit “I don’t know”. It’s religion that has the “God of the gaps”, where ignorance isn’t tolerated, and everything unexplained is explained by a deity.

The hole in evolution is how the cell evolved. The fossil record teaches us a lot about multi-cellular organisms over the last 400 million years, but not much about how the cell evolved in the 4 billion years on planet Earth before that. I can point to radioisotope dating and fossil finds to prove dinosaurs existed 250 million to 60 million years ago, thus disproving your crazy theory of a 10,000-year-old Earth. But I can’t point to anything that disagrees with your view that a deity created the original cellular organisms. I don’t agree with that theory, but I can’t disprove it, either.

The point is that Christians have a good point that Darwinism is taught as a competing religion. You see this in the way books deny holes in knowledge, insisting that Darwinism explains even how cells evolved, and that doubting Darwin is blasphemy.

The Creationist solution is wrong; we can’t teach religion in schools. But they have a reasonable concern about religious Darwinism. The solution there is to do a better job teaching it as a science. If kids want to believe that one of the deities created the first cells, then that’s okay, as long as they understand the fossil record and radioisotope dating.
Now let’s talk Climate Change. This is a tough one, because you people have lost your collective minds. The debate is over how much change? how much danger? how much cost?. The debate is not over Is it true?. We all agree it’s true, even most Republicans. By keeping the debate on the black-and-white “Is global warming true?”, the left-wing can avoid the debate “How much warming?”.

Consider this exchange from one of the primary debates:

Moderator: …about climate change…
RUBIO: Because we’re not going to destroy our economy…
Moderator: Governor Christie, … what do you make of skeptics of climate change such as Senator Rubio?
CHRISTIE: I don’t think Senator Rubio is a skeptic of climate change.
RUBIO: I’m not a denier/skeptic of climate change.

The media (in this case CNN) is so convinced that Republicans deny climate change that they can’t hear any other statement. Rubio clearly didn’t deny Climate Change, but the moderator was convinced that he did. Every statement is seen as outright denial, or code words for denial. Thus, convinced of the falseness of false-balance, the media never sees the fact that most Republicans are reasonable.

Similar proof of Republican non-denial is this page full of denialism quotes. If you actually look at the quotes, you’ll see that, taken in context, virtually none of the statements deny climate change. For example, when Senator Dan Sullivan says “no concrete scientific consensus on the extent to which humans contribute to climate change”, he is absolutely right. There is 97% consensus that mankind contributes to climate change, but there is widespread disagreement on how much.

That “97% consensus” is incredibly misleading. Whenever it’s quoted, the speaker immediately moves the bar, claiming that scientists also agree with whatever crazy thing the speaker wants, like hurricanes getting worse (they haven’t — at least, not yet).

There’s no inherent reason why Republicans would disagree with addressing Climate Change. For example, Washington State recently voted on a bill to impose a revenue-neutral carbon tax. The important part is “revenue neutral”: Republicans hate expanding government, but they don’t oppose policies that keep government the same size. Democrats opposed this bill precisely because it didn’t expand the size of government. That proves that Democrats are less concerned with a bipartisan approach to addressing climate change, and instead simply use it as a wedge issue to promote their agenda of increased regulation and increased spending.

If you are serious about addressing Climate Change, then agree that Republicans aren’t deniers, and then look for bipartisan solutions.
Conclusion

The point here is not to try to convince you of any political opinion. The point is to describe how the press has lost objectivity by adopting the left wing’s reframing of the debate. Instead of seeing a balanced debate between two reasonable sides, they see a warped debate between a reasonable (left-wing) side and an unreasonable (right-wing) side. The idea that the opposing side is unreasonable is so incredibly seductive that they can never give it up.
That Christie had to correct the moderator in the debate should teach you that something is rotten in journalism. Christie understood Rubio’s remarks, but the debate moderator could not. Journalists cannot even see the climate debate because they are wedded to the left-wing’s corrupt view of the debate.
The charge of false-balance is wrong. In debates that evenly divide the population, the issues are complex and nuanced, and both sides are reasonable. That’s the law, and it doesn’t matter what the debate is. If you see the debate simplified to the point where one side is obviously unreasonable, then it’s you who has the problem.

Dinner with Rajneeshees

One evening I answered the doorbell to find a burgundy clad couple on the doorstep. They were followers of the Bagwan Shree Rajneesh, whose cult had recently purchased a large ranch in the eastern part of the state. No, they weren’t there to convert us. They had come for dinner. My father had invited them.
My father was a journalist who had been covering the controversies with the cult’s neighbors. Yes, they were a crazy cult which would later break up after committing acts of domestic terrorism. But this couple was a pair of young professionals (lawyers) who, except for their clothing, looked and behaved like normal people. They would go on to live normal lives after the cult.
Growing up, I lived in two worlds. One was the normal world, which encourages you to demonize those who disagree with you. On the political issues that concern you most, you divide the world into the righteous and the villains. It’s not enough to believe the other side wrong; you must also believe them to be evil.
The other world was that of my father, teaching me to see the other side of the argument. I guess I grew up with my own Atticus Finch (from To Kill a Mockingbird), who set an ideal. In much the same way that Atticus told his children that they couldn’t hate even Hitler, I was told I couldn’t hate even the crazy Rajneeshees.

Halloween Pumpkin Light Effect Tutorial

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/halloween-pumpkin-light-effect-tutorial/

While browsing the Halloween section of my local supermarket, I stumbled across this adorable pumpkin light for £2. Given that I had a £2 coin in my pocket and an itch for a Halloween hack project, it came home with me at once.

Halloween Pumpkin BEFORE

Mediocre spooky

You’ll see, if you look hard enough, the small orange LED within the plastic shell. Controlled by a switch beneath, the light was less than impressive once illuminated. 

And so the hack seed was planted.

Now, I’ll admit to not having much coding knowledge. But what I lack in skills, I make up for in charm, and it didn’t take long for me to find help from a couple of Raspberry Pi staffers.

Pimoroni had been nice enough to send us some Blinkt! units a while back and while one is being used for our People in Space Indicator, I’d been wanting to use one myself for some sort of lighting effect. So, with their Getting Started guide in hand (or rather, on screen), I set up the Raspberry Pi 3 that lives on my desk, attached the Blinkt! HAT and got to work on creating this…

Halloween Pumpkin Light Effect

Use a Raspberry Pi and Pimoroni Blinkt! to create a realistic lighting effect for your Halloween Pumpkin. Learn how at www.

If you’d like to create your own pumpkin light effect, you’ll need:

  • A Raspberry Pi (Make sure you use one that fits in your pumpkin!)
  • A Pimoroni Blinkt!
  • A power supply (plus monitor, mouse, and keyboard for setup)
  • A pumpkin

Take your Blinkt! and attach it to your Pi. If you’re using a full-size model (Pi 1, 2, or 3), this will be easy enough, but make sure the Pi fits in your pumpkin! If, like me, you need to go smaller, you’ll have to solder header pins to a Zero before attaching the HAT.

You might want to make sure you’re running the newest version of Raspbian. Why? Well, why not? You don’t have to upgrade to PIXEL, but you totally should, as it’s very pretty. Its creator, Simon Long, was my soldering master for this project. His skills are second to none. To upgrade to PIXEL, follow the steps here.

In the terminal, you’ll need to install the Pimoroni Blinkt! library. Use the following to achieve this:

curl -sS get.pimoroni.com/blinkt | bash

You’ll need to reboot the Raspberry Pi to allow the changes to take effect. You can do this by typing:

sudo reboot

At this point, you’re more than welcome to go your own way with the Blinkt! and design your own light show (this may help). However, and with major thanks to Jonic Linley, we’ve created a pumpkin fire effect for you.

Within the terminal, type:

git clone https://github.com/AlexJrassic/fire_effect.git

This will bring the code to your Raspberry Pi from GitHub. Next, we need to tell the Raspberry Pi to automatically start the fire_effect.py code when you power up. To do this, type:

nano ~/.config/lxsession/LXDE-pi/autostart

At the end of the file, add this line:

@python /home/pi/fire_effect/fire_effect.py

Save and then reboot:

sudo reboot

Now you’re good to go. 
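If you’re curious what a script like this is actually doing, the flicker boils down to writing a random flame-coloured value to each of the Blinkt!’s eight LEDs, many times a second. Here’s a rough sketch of the idea; the `flame_color` helper and its constants are my own illustration, not the code from the fire_effect repository:

```python
import random
import time

NUM_PIXELS = 8  # the Blinkt! has eight RGB LEDs

def flame_color(intensity):
    """Map a 0.0-1.0 intensity to a red/orange flame colour."""
    intensity = max(0.0, min(1.0, intensity))
    red = int(255 * intensity)
    # Green fades faster than red, so dimmer flames look deeper orange.
    green = int(100 * intensity * intensity)
    return (red, green, 0)

def flicker_frame():
    """One frame of flicker: a random-ish intensity for each pixel."""
    return [flame_color(random.uniform(0.3, 1.0)) for _ in range(NUM_PIXELS)]

if __name__ == "__main__":
    try:
        import blinkt  # only present on a Pi with the Pimoroni library installed
    except ImportError:
        blinkt = None

    for _ in range(20):  # short demo run; the real script loops forever
        for i, (r, g, b) in enumerate(flicker_frame()):
            if blinkt:
                blinkt.set_pixel(i, r, g, b)
        if blinkt:
            blinkt.show()
        time.sleep(0.05)
```

The randomised per-pixel intensity is what sells the effect: a steady colour just looks like an LED, while uneven, constantly changing brightness reads as fire.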

Halloween Pumpkin AFTER

To spread the light effect more, I created a diffuser to cover the Blinkt! LEDs. In the video above, you’ll see I used a tissue. I wouldn’t suggest this for prolonged use, as the unit gets a little warm; I won’t be responsible for any subsequent tissue fires. I would suggest using a semi-opaque bowl (the kind a Christmas pudding comes in) or a piece of plastic from a drinks bottle, and going to town on it with some fine sandpaper.

We also drilled a small hole in the back for the micro-USB lead to reach the Zero. I used a battery pack for power, but you could use a lead directly into the mains. With a larger pumpkin, you could put a battery pack inside with the Pi.

If you use this code, please share a photo with us below, or across social media. We’d love to see what you get up to.

And if you want to buy the Blinkt!, the team at Pimoroni have kindly agreed to extend the cut-off for postage on Friday from midday to 3pm, allowing you the chance to get the unit through your door on Monday (so long as you live in the UK). You can also purchase the Blinkt! from Adafruit if you live across the pond.

The post Halloween Pumpkin Light Effect Tutorial appeared first on Raspberry Pi.

The Yahoo-email-search story is garbage

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/the-yahoo-email-search-story-is-garbage.html

Joseph Menn (Reuters) is reporting that Yahoo! searched emails for the NSA. The details of the story are so mangled that it’s impossible to say what’s actually going on.

The first paragraph says this:

Yahoo Inc last year secretly built a custom software program to search all of its customers’ incoming emails

The second paragraph says this:

The company complied with a classified U.S. government demand, scanning hundreds of millions of Yahoo Mail accounts

Well? Which is it? Did they “search incoming emails” or did they “scan mail accounts”? Whether we are dealing with emails in transit or emails stored on the servers is a BFD (Big Fucking Detail) that you can’t gloss over and confuse in a story like this. Whether searches are done indiscriminately across all emails, or only for specific accounts, is another BFD.

The third paragraph seems to resolve this, but it doesn’t:

Some surveillance experts said this represents the first case to surface of a U.S. Internet company agreeing to an intelligence agency’s request by searching all arriving messages, as opposed to examining stored messages or scanning a small number of accounts in real time.

Who are these “some surveillance experts”? Why is the story keeping their identities secret? Are they whistleblowers afraid for their jobs? If so, then that should be mentioned. In reality, they are unlikely to be real surveillance experts, but just random people who know slightly more about the subject than Joseph Menn, and their identities are being kept secret in order to prevent us from challenging them — which is a violation of journalistic ethics.

And are they analyzing the raw information the author sent them? Or are they opining on the garbled version of events that we see in the first two paragraphs?

The confusion continues:

It is not known what information intelligence officials were looking for, only that they wanted Yahoo to search for a set of characters. That could mean a phrase in an email or an attachment, said the sources, who did not want to be identified.

What the fuck is a “set of characters”??? Is this an exact quote from somewhere? Or something the author of the story made up? The clarification of what this “could mean” doesn’t clear this up, because if that’s what it “actually means”, then why not say so to begin with?

It’s not just technical terms, but also legal ones:

The request to search Yahoo Mail accounts came in the form of a classified edict sent to the company’s legal team, according to the three people familiar with the matter.

What the fuck is a “classified edict”? An NSL? A FISA court order? What? This is also a BFD.

We outsiders already know about the NSA/FBI’s ability to ask for strong selectors (email addresses). What we don’t know about is their ability to search all emails, regardless of account, for arbitrary keywords/phrases. If that’s what’s going on, then this would be a huge story. But the story doesn’t make it clear that this is actually what’s going on — it just strongly implies it.

There are many other ways to interpret this story. For example, the government may simply be demanding that when Yahoo satisfies demands for emails (based on email addresses), that it does so from the raw incoming stream, before it hits spam/malware filters. Or, they may be demanding that Yahoo satisfies their demands with more secrecy, so that the entire company doesn’t learn of the email addresses that a FISA order demands. Or, the government may be demanding that the normal collection happen in real time, in the seconds that emails arrive, instead of minutes later.

Or maybe this isn’t an NSA/FISA story at all. Maybe the DHS has a cybersecurity information sharing program that distributes IoCs (indicators of compromise) to companies under NDA. Because it’s a separate program under NDA, Yahoo would need to set up an email malware-scanning system separate from their existing malware system in order to use those IoCs. (@declanm‘s stream has further variations on this scenario).
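To make that last scenario concrete: “scanning email for IoCs” can be as simple as matching each incoming message against a list of byte patterns, which is one plausible reading of “a set of characters”. A toy sketch follows; the indicator values below are invented for illustration, since real feeds distribute hashes, domains, and signatures in formats like OpenIOC or STIX:

```python
# Toy IoC matcher: check each incoming message against known byte patterns.
# The indicators here are invented for illustration only.
IOCS = [
    b"evil-dropper.example",          # hypothetical malicious domain
    b"X-Mailer: TotallyLegitMailer",  # hypothetical malware header
]

def matches_ioc(raw_message: bytes) -> list:
    """Return every indicator found somewhere in the raw message."""
    return [ioc for ioc in IOCS if ioc in raw_message]

phish = b"Subject: invoice\r\n\r\nClick http://evil-dropper.example/pay\r\n"
clean = b"Subject: lunch?\r\n\r\nNoon works for me.\r\n"
```

Run against the raw mail stream before spam/malware filtering, a matcher like this is, from the outside, indistinguishable from “searching all incoming emails”, which is exactly why the story’s ambiguity matters.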

My point is this: the story is full of mangled details that really tell us nothing. I can come up with multiple, unrelated scenarios that are consistent with the content in the story. The story certainly doesn’t say that Yahoo did anything wrong, or that the government is doing anything wrong (at least, wronger than we already know).

I’m convinced the government is up to no good, strong arming companies like Yahoo into compliance. The thing that’s stopping us from discovering malfeasance is poor reporting like this.

OpenIOC – Sharing Threat Intelligence

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/D2fKidZeuds/

OpenIOC is an open framework for sharing threat intelligence, sophisticated threats require sophisticated indicators. In the current threat environment, rapid communication of pertinent threat information is the key to quickly detecting, responding and containing targeted attacks. OpenIOC is designed to fill a void that currently exists for…

Read the full post at darknet.org.uk

Freaking out over the DBIR

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/05/freaking-out-over-dbir.html

Many in the community are upset over the recent “Verizon DBIR” because it claims widespread exploitation of the “FREAK” vulnerability. They know this is impossible, because of the vulnerability details. But really, the problem lies in misconceptions about how “intrusion detection” (IDS) works. As a sort of expert in intrusion detection (by which, I mean the expert), I thought I’d describe what really went wrong.
First let’s talk FREAK. It’s a man-in-the-middle attack. In other words, you can’t attack a web server remotely by sending bad data at it. Instead, you have to break into a network somewhere and install a man-in-the-middle computer. This fact alone means it cannot be the most widely exploited attack.
Second, let’s talk FREAK. It works by downgrading RSA to 512-bit keys, which can be cracked by supercomputers. This fact alone means it cannot be the most widely exploited attack — even the NSA does not have sufficient compute power to crack as many keys as the Verizon DBIR claims were cracked.
Now let’s talk about how Verizon calculates when a vulnerability is responsible for an attack. They use this methodology:
  1. look at a compromised system (identified by AV scanning, IoCs, etc.)
  2. look at which unpatched vulnerabilities the system has (vuln scans)
  3. see if the system was attacked via those vulnerabilities (IDS)
In other words, if you are vulnerable to FREAK, and the IDS tells you people attacked you with FREAK, and indeed you were compromised, then it seems only logical that they compromised you through FREAK.
This sounds like a really good methodology — but only to stupids. (Sorry for being harsh, I’ve been pointing out this methodology sucks for 15 years, and am getting frustrated people still believe in it.)
Here’s the problem with all data breach investigations. Systems get hacked, and we don’t know why. Yet there is enormous pressure to figure out why. Therefore, we seize on any plausible explanation. We then go through the gauntlet of logical fallacies, such as “confirmation bias”, to support our conclusion. We torture the data until it produces the right results.
In the majority of breach reports I’ve seen, the identified source of the compromise is bogus. That’s why I never believed North Korea was behind the Sony attack — I’ve read too many data breach reports fingering the wrong cause. Political pressure to come up with a cause, any cause, is immense.
This specific logic, “vulnerable to X and attacked with X == breached with X” has been around with us for a long time. 15 years ago, IDS vendors integrated with vulnerability scanners to produce exactly these sorts of events. It’s nonsense that never produced actionable data.
In other words, in the Verizon report, things went in this direction. FIRST, they investigated a system and found IoCs (indicators that the system had been compromised). SECOND, they did the correlation between vuln/IDS. They didn’t do it the other way around, because such a system produces too much false data. But false data is false data: if the vuln/IDS correlation isn’t trustworthy when you start from it, there is no reason to believe it becomes robust just because you looked for IoCs first.
One of the reasons the data isn’t robust is that IDS events do not mean what you think they mean. Most people in our industry treat them as “magic”: if an IDS triggers on a “FREAK” attack, then that’s what happened.
But that’s not what happened. First of all, there is the issue of false-positives, whereby the system claims a “FREAK” attack happened, when nothing related to the issue happened. Looking at various IDSs, this should be rare for FREAK, but happens for other kinds of attacks.
Then there is the issue of another level of false-positives. It’s plausible, for example, that older browsers, email clients, and other systems may accidentally be downgrading to “export” ciphers simply because these are the only ciphers old and new computers have in common. Thus, you’ll see a lot of “FREAK” events, where this downgrade did indeed occur, but not for malicious reasons.
In other words, this is not truly a false-positive, because the bad thing really did occur, but it is a semi-false-positive, because it was not malicious.
Then there is the problem of misunderstood events. For FREAK, both client and server must be vulnerable — and clients reveal their vulnerability in every SSL request. Therefore, some IDSs trigger on that, telling you about vulnerable clients. The EmergingThreats rules have one called “ET POLICY FREAK Weak Export Suite From Client (CVE-2015-0204)”. The key word here is “POLICY” — it’s not an attack signature but a policy signature.
But a lot of people are confused and think it’s an attack. For example, this website lists it as an attack.
If somebody has these POLICY events enabled, then it will appear that their servers are under constant attack with the FREAK vulnerability, as random people around the Internet with old browsers/clients connect to their servers, regardless of whether the server itself is vulnerable.
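Conceptually, a POLICY rule like that is just inspecting the cipher-suite list a client offers and flagging any RSA_EXPORT values. The sketch below uses the real IANA suite codes, but for simplicity takes an already-extracted list of suite codes rather than parsing a raw ClientHello:

```python
# A "policy" signature, roughly: flag clients that offer export-grade RSA
# cipher suites. The suite codes are the real IANA TLS registry values;
# the input is a simplified, already-parsed list of suite codes.
RSA_EXPORT_SUITES = {
    0x0003,  # TLS_RSA_EXPORT_WITH_RC4_40_MD5
    0x0006,  # TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
    0x0008,  # TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
}

def offered_export_suites(client_suites):
    """Return the export-grade suites present in the client's offer, sorted."""
    return sorted(RSA_EXPORT_SUITES & set(client_suites))

old_client = [0x0003, 0x002F, 0x0035]     # offers an export suite: rule fires
modern_client = [0x1301, 0x1302, 0xC02F]  # modern suites: no event
```

Note that nothing here looks at the server at all: an obsolete client connecting to a fully patched server still generates the “FREAK” event.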
Another source of semi-false-positives are vulnerability scanners, which simply scan for the vulnerability without fully exploiting/attacking the target. Again, this is a semi-false-positive, where it is correctly identified as FREAK, but incorrectly identified as an attack rather than a scan. As other critics of the Verizon report have pointed out, people have been doing Internet-wide scans for this bug. If you have a server exposed to the Internet, then it’s been scanned for “FREAK”. If you have internal servers, but run vulnerability scanners, they have been scanned for “FREAK”. But none of these are malicious “attacks” that can be correlated according to the Verizon DBIR methodology.
Lastly, there are “real” attacks. There are no real FREAK attacks, except maybe twice in Syria when the NSA needed to compromise some SSL communications. And the NSA never does something if they can get caught. Therefore, no IDS event identifying “FREAK” has ever been a true attack.
So here’s the thing. Knowing all this, we can reduce the factors in the Verizon DBIR methodology. The factor “has the system been attacked with FREAK?” can be reduced to “does the system support SSL?“, because all SSL supporting systems have been attacked with FREAK, according to IDS. Furthermore, since people just apply all or none of the Microsoft patches, we don’t ask “is the system vulnerable to FREAK?” so much as “has it been patched recently?“.
Thus, the Verizon DBIR methodology becomes:
1. has the system been compromised?
2. has the system been patched recently?
3. does the system support SSL?
If all three answers are “yes”, then it claims the system was compromised with FREAK. As you can plainly see, this is idiotic methodology.
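That collapse can even be checked mechanically. Under the two observations above (every SSL-speaking host triggers “FREAK” IDS events, and patching is all-or-none, so the unpatched SSL hosts are the vulnerable ones), the three-factor methodology is logically identical to the trivial checklist. This is a toy illustration, not Verizon’s actual code:

```python
def dbir_blames_freak(compromised, vulnerable_to_freak, ids_saw_freak):
    """Verizon's stated three-factor correlation."""
    return compromised and vulnerable_to_freak and ids_saw_freak

def trivial_checklist(compromised, recently_patched, supports_ssl):
    """What the methodology collapses to under the observations above."""
    return compromised and not recently_patched and supports_ssl

# Exhaustively verify the equivalence over every combination of inputs,
# given the two observations encoded below.
for compromised in (True, False):
    for patched in (True, False):
        for ssl in (True, False):
            vulnerable = ssl and not patched  # all-or-none patching
            ids_event = ssl                   # every SSL host "sees FREAK"
            assert dbir_blames_freak(compromised, vulnerable, ids_event) \
                == trivial_checklist(compromised, patched, ssl)
```

The exhaustive check passes for all eight input combinations, which is the point: the “FREAK attack” factor contributes no information beyond “speaks SSL”.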
In the case of FREAK, we already knew the right answer, and worked backward to find the flaw. But in truth, all the other vulnerabilities have the same flaw, for related reasons. The root of the problem is that people just don’t understand IDS information. They, like Verizon, treat the IDS as some sort of magic black box or oracle, and never question the data.
Conclusion

An IDS is a wonderfully useful tool if you pay attention to how it works and why it triggers on the things it does. It’s not, however, an “intrusion detection” tool whereby every event it produces should be acted upon as if it were an intrusion. It’s not a magical system — you really need to pay attention to the details.
Verizon didn’t pay attention to the details. They simply dumped the output of an IDS inappropriately into some sort of analysis. Since the input data was garbage, no amount of manipulation and analysis would ever produce a valid result.


False-positives: Notice I list a range of “false-positives”, from things that trigger but have nothing to do with FREAK, to a range of things that are FREAK but aren’t attacks, and which cannot be treated as “intrusions”. Such subtleties are why we can’t have nice things in infosec. Everyone studies “false-positives” for their CISSP exam, but few truly understand them.

That’s why when vendors claim “no false positives” they are blowing smoke. The issue is much more subtle than that.

Scratch performance – feel the speed!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/scratch-performance-raspberry-pi/

The Scratch programming language, developed at MIT, has become the cornerstone of computing education at the primary level. Running the Scratch environment well was an early goal for Raspberry Pi. Since early 2013 we’ve been working with Tim Rowledge, Smalltalk hacker extraordinaire. Tim has been beavering away, improving the Scratch codebase and porting it to newer versions of the Squeak virtual machine. Ben Avison chipped in with ARM-optimised versions of Squeak’s graphics operations, and of course we did our bit by releasing two new generations of the Raspberry Pi hardware.

We thought you’d enjoy these two videos. The first shows Andrew Oliver’s Scratch implementation of Pacman running on an Intel Core i5 laptop with “standard” Scratch 1.4. (Yes, that Andrew Oliver. Thanks Andrew!) The second shows the same code running on a Raspberry Pi 3 with Tim’s optimised Scratch. The Raspberry Pi version is roughly twice as fast.

Pacman running on a Macbook i5 under MIT Scratch

A demonstration of how much slower standard Scratch can be than the optimised NuScratch that’s available for Raspberry Pi

PacMan running on Pi 3 under NuScratch

“PacMan running on Pi 3 under NuScratch” by raspberrypi on Vimeo.

This is a great example of the sort of attention-to-detail work that we like to focus on, and that can make the difference between a mediocre user experience and the desktop-equivalent experience that we aspire to for Raspberry Pi 3. We think it’s as important to work as hard on incrementally improving the software as it is to do the same with the hardware it runs on. We’ve done similar work with Kodi and Epiphany, and you can expect a lot more of this from us over the next couple of years.

The post Scratch performance – feel the speed! appeared first on Raspberry Pi.

Introducing sd-event

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/introducing-sd-event.html

The Event Loop API of libsystemd

When we began working on systemd we built it around a hand-written
ad-hoc event loop, wrapping Linux epoll. The more our project grew, the
more we realized the limitations of using raw epoll:

  • As we used timerfd for our timer events, each event source cost one
    file descriptor, and we had many of them! File descriptors are a
    scarce resource on UNIX, as RLIMIT_NOFILE is typically set to 1024
    or similar, limiting the number of available file descriptors per
    process to 1021, which isn’t particularly a lot.

  • Ordering of event dispatching became a nightmare. In many cases, we
    wanted to make sure that a certain kind of event would always be
    dispatched before another kind of event, if both happen at the same
    time. For example, when the last process of a service dies, we might
    be notified about that via a SIGCHLD signal, via an sd_notify()
    “STATUS=” message, and via a control group notification. We wanted
    to get these events in the right order, to know when it’s safe to
    process and subsequently release the runtime data systemd keeps
    about the service or process: it shouldn’t be done if there are
    still events about it pending.

  • For each program we added to the systemd project we noticed we were
    adding similar code, over and over again, to work with epoll’s
    complex interfaces. For example, finding the right file descriptor
    and callback function to dispatch an epoll event to, without running
    into invalidated pointer issues is outright difficult and requires
    non-trivial code.

  • Integrating child process watching into our event loops was much
    more complex than one could hope, and even more so if child process
    events should be ordered against each other and unrelated kinds of
    events.

Eventually, we started working on sd-bus. At the same time we decided
to seize the opportunity, put together a proper event loop API in C,
and then not only port sd-bus on top of it, but also the rest of
systemd. The result of this is sd-event. After almost two years of
development we declared sd-event stable in systemd version 221, and
published it as an official API of libsystemd.

Why?

sd-event.h, of course, is not the first event loop API around, and it
doesn’t implement any really novel concepts. When we started working on
it we tried to do our homework, and checked the various existing event
loop APIs, maybe looking for candidates to adopt instead of doing our
own, and to learn about the strengths and weaknesses of the various
existing implementations. Ultimately, we found no implementation that
could deliver what we needed, or where it would be easy to add the
missing bits: as usual in the systemd project, we wanted something that
allows us access to all the Linux-specific bits, instead of limiting
itself to the least common denominator of UNIX. We weren’t looking for
an abstraction API, but simply one that makes epoll usable in system
code.

With this blog story I’d like to take the opportunity to introduce you
to sd-event, and explain why it might be a good candidate to adopt as
event loop implementation in your project, too.

So, here are some features it provides:

  • I/O event sources, based on epoll’s file descriptor watching,
    including edge triggered events (EPOLLET). See
    sd_event_add_io(3).

  • Timer event sources, based on timerfd_create(), supporting the
    CLOCK_MONOTONIC, CLOCK_REALTIME, and CLOCK_BOOTTIME clocks, as well
    as the CLOCK_REALTIME_ALARM and CLOCK_BOOTTIME_ALARM clocks that
    can resume the system from suspend. When creating timer events a
    required accuracy parameter may be specified which allows coalescing
    of timer events to minimize power consumption. For each clock only a
    single timer file descriptor is kept, and all timer events are
    multiplexed with a priority queue. See
    sd_event_add_time(3).

  • UNIX process signal events, based on
    signalfd(2),
    including full support for real-time signals, and queued
    parameters. See sd_event_add_signal(3).

  • Child process state change events, based on
    waitid(2). See
    sd_event_add_child(3).

  • Static event sources, of three types: defer, post and exit, for
    invoking calls in each event loop, after other event sources or at
    event loop termination. See
    sd_event_add_defer(3).

  • Event sources may be assigned a 64-bit priority value that controls
    the order in which event sources are dispatched if multiple are
    pending simultaneously. See
    sd_event_source_set_priority(3).

  • The event loop may automatically send watchdog notification messages
    to the service manager. See sd_event_set_watchdog(3).

  • The event loop may be integrated into foreign event loops, such as
    the GLib one. The event loop API is hence composable, the same way
    the underlying epoll logic is. See
    sd_event_get_fd(3)
    for an example.

  • The API is fully OOM safe.

  • A complete set of documentation in UNIX man page format is
    available, with
    sd-event(3)
    as the entry page.

  • It’s pretty widely available, and requires no extra
    dependencies. Since systemd is built on it, most major distributions
    ship the library in their default install set.

  • After two years of development, and after being used in all of
    systemd’s components, it has received a fair share of testing already,
    even though we only recently decided to declare it stable and turned
    it into a public API.

Note that sd-event has some potential drawbacks too:

  • If portability is essential to you, sd-event is not your best
    option. sd-event is a wrapper around Linux-specific APIs, and that’s
    visible in the API. For example: our event callbacks receive
    structures defined by Linux-specific APIs such as signalfd.

  • It’s a low-level C API, and it doesn’t isolate you from the OS
    underpinnings. While I like to think that it is relatively nice and
    easy to use from C, it doesn’t compromise on exposing the low-level
    functionality. It just fills the gaps in what’s missing between
    epoll, timerfd, signalfd and related concepts, and it does not hide
    that away.

Either way, I believe that sd-event is a great choice when looking for
an event loop API, in particular if you work on system-level software
and embedded, where functionality like timer coalescing or
watchdog support matter.

Getting Started

Here’s a short example how to use sd-event in a simple daemon. In this
example, we’ll not just use sd-event.h, but also sd-daemon.h to
implement a system service.

#include <alloca.h>
#include <endian.h>
#include <errno.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

#include <systemd/sd-daemon.h>
#include <systemd/sd-event.h>

static int io_handler(sd_event_source *es, int fd, uint32_t revents, void *userdata) {
        void *buffer;
        ssize_t n;
        int sz;

        /* UDP enforces a somewhat reasonable maximum datagram size of 64K, we can just allocate the buffer on the stack */
        if (ioctl(fd, FIONREAD, &sz) < 0)
                return -errno;
        buffer = alloca(sz);

        n = recv(fd, buffer, sz, 0);
        if (n < 0) {
                if (errno == EAGAIN)
                        return 0;

                return -errno;
        }

        if (n == 5 && memcmp(buffer, "EXIT\n", 5) == 0) {
                /* Request a clean exit */
                sd_event_exit(sd_event_source_get_event(es), 0);
                return 0;
        }

        fwrite(buffer, 1, n, stdout);
        fflush(stdout);
        return 0;
}

int main(int argc, char *argv[]) {
        union {
                struct sockaddr_in in;
                struct sockaddr sa;
        } sa;
        sd_event_source *event_source = NULL;
        sd_event *event = NULL;
        int fd = -1, r;
        sigset_t ss;

        r = sd_event_default(&event);
        if (r < 0)
                goto finish;

        if (sigemptyset(&ss) < 0 ||
            sigaddset(&ss, SIGTERM) < 0 ||
            sigaddset(&ss, SIGINT) < 0) {
                r = -errno;
                goto finish;
        }

        /* Block SIGTERM first, so that the event loop can handle it */
        if (sigprocmask(SIG_BLOCK, &ss, NULL) < 0) {
                r = -errno;
                goto finish;
        }

        /* Let's make use of the default handler and "floating" reference features of sd_event_add_signal() */
        r = sd_event_add_signal(event, NULL, SIGTERM, NULL, NULL);
        if (r < 0)
                goto finish;
        r = sd_event_add_signal(event, NULL, SIGINT, NULL, NULL);
        if (r < 0)
                goto finish;

        /* Enable automatic service watchdog support */
        r = sd_event_set_watchdog(event, true);
        if (r < 0)
                goto finish;

        fd = socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0);
        if (fd < 0) {
                r = -errno;
                goto finish;
        }

        sa.in = (struct sockaddr_in) {
                .sin_family = AF_INET,
                .sin_port = htobe16(7777),
        };
        if (bind(fd, &sa.sa, sizeof(sa)) < 0) {
                r = -errno;
                goto finish;
        }

        r = sd_event_add_io(event, &event_source, fd, EPOLLIN, io_handler, NULL);
        if (r < 0)
                goto finish;

        (void) sd_notifyf(false,
                          "READY=1\n"
                          "STATUS=Daemon startup completed, processing events.");

        r = sd_event_loop(event);

finish:
        event_source = sd_event_source_unref(event_source);
        event = sd_event_unref(event);

        if (fd >= 0)
                (void) close(fd);

        if (r < 0)
                fprintf(stderr, "Failure: %s\n", strerror(-r));

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

The example above shows how to write a minimal UDP/IP server that
listens on port 7777. Whenever a datagram is received it writes its
contents to STDOUT, unless it is precisely the string "EXIT\n", in
which case the service exits. The service reacts to SIGTERM and
SIGINT by exiting cleanly. It also notifies the service manager
about its completed startup, if it runs under a service
manager. Finally, it sends watchdog keep-alive messages to the service
manager if it asked for that, and if it runs under a service manager.

When run as a systemd service, this service’s STDOUT will of course be
connected to the logging framework, which means the service can act as
a minimal UDP-based remote logging service.
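To run it that way, a service unit is needed. A minimal sketch might
look like the following (the installation path and the watchdog
interval are assumptions; Type=notify and WatchdogSec correspond to
the sd_notifyf() and sd_event_set_watchdog() calls in the example):

[Unit]
Description=Minimal sd-event based UDP logging service

[Service]
Type=notify
ExecStart=/usr/local/bin/event-example
WatchdogSec=30s

With WatchdogSec= set, the service manager asks for keep-alive
messages, and sd_event_set_watchdog() will send them automatically
from the event loop.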

To compile and link this example, save it as event-example.c, then run:

$ gcc event-example.c -o event-example `pkg-config --cflags --libs libsystemd`

For a first test, simply run the resulting binary from the command
line, and test it against the following netcat command line:

$ nc -u localhost 7777

For the sake of brevity, error checking is minimal; in a real-world
application it should, of course, be more comprehensive. However, this
hopefully gets the idea across of how to write a daemon that reacts to
external events with sd-event.

For further details on the functions used in the example above, please
consult the manual pages:
sd-event(3),
sd_event_exit(3),
sd_event_source_get_event(3),
sd_event_default(3),
sd_event_add_signal(3),
sd_event_set_watchdog(3),
sd_event_add_io(3),
sd_notifyf(3),
sd_event_loop(3),
sd_event_source_unref(3),
sd_event_unref(3).

Conclusion

So, is this the event loop to end all other event loops? Certainly
not. I actually believe in “event loop plurality”. There are many
reasons for that, but most importantly: sd-event is supposed to be an
event loop suitable for writing a wide range of applications, but it’s
definitely not going to solve all event loop problems. For example,
while the priority logic is important for many use cases, it comes with
drawbacks for others: if not used carefully, high-priority event
sources can easily starve low-priority event sources. Also, in order
to implement the priority logic, sd-event needs to linearly iterate
through the event structures returned by
epoll_wait(2)
to sort the events by their priority, resulting in worst-case
O(n*log(n)) complexity on each event loop wakeup (for n = number of
file descriptors). Then, to implement priorities fully, sd-event only
dispatches a single event before going back to the kernel and asking
for new events. sd-event will hence not provide the theoretically best
possible scalability to huge numbers of file descriptors. Of course,
this could be optimized, by improving epoll and making it support how
today’s event loops actually work (after all, this is the problem set
that all event loops implementing priorities, including GLib’s, have
to deal with), but even then: the design of sd-event is focused on
running one event loop per thread, and it dispatches events strictly
ordered. For many other important use cases a very different design is
preferable: one where events are distributed to a set of worker threads
and are dispatched out of order.

Hence, don’t mistake sd-event for what it isn’t. It’s not supposed to
unify everybody on a single event loop. It’s just supposed to be a
very good implementation of an event loop suitable for a large part of
the typical use cases.

Note that our APIs, including
sd-bus, integrate nicely into
sd-event event loops, but do not require it, and may be integrated
into other event loops too, as long as they support watching for time
and I/O events.

And that’s all for now. If you are considering using sd-event for your
project and need help or have questions, please direct them to the
systemd mailing list.