Tag Archives: passwords

Vulnerability in the Kaspersky Password Manager

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/vulnerability-in-the-kaspersky-password-manager.html

A vulnerability (just patched) in the random number generator used in the Kaspersky Password Manager resulted in easily guessable passwords:

The password generator included in Kaspersky Password Manager had several problems. The most critical one is that it used a PRNG not suited for cryptographic purposes. Its single source of entropy was the current time. All the passwords it created could be bruteforced in seconds. This article explains how to securely generate passwords, why Kaspersky Password Manager failed, and how to exploit this flaw. It also provides a proof of concept to test if your version is vulnerable.

The product has been updated and its newest versions aren’t affected by this issue.
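
The core problem is easy to demonstrate: if a password generator's only entropy source is the current time, an attacker who knows roughly when a password was created can simply enumerate every plausible seed. Here is a minimal Go sketch of that search; the character set, password length, and generator are illustrative stand-ins, not Kaspersky's actual code:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// generate mimics a password generator whose only entropy is a timestamp.
func generate(seed int64, length int) string {
	r := rand.New(rand.NewSource(seed))
	out := make([]byte, length)
	for i := range out {
		out[i] = charset[r.Intn(len(charset))]
	}
	return string(out)
}

func main() {
	// The victim generated a password at some unknown second yesterday.
	created := time.Now().Add(-20 * time.Hour).Unix()
	target := generate(created, 16)

	// The attacker only needs to try every second in that one-day window:
	// 86,400 candidate seeds, regardless of how long or complex the password is.
	start := time.Now().Add(-24 * time.Hour).Unix()
	for seed := start; seed <= time.Now().Unix(); seed++ {
		if generate(seed, 16) == target {
			fmt.Printf("recovered %q from seed %d\n", target, seed)
			return
		}
	}
}
```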

Stupid programming mistake, or intentional backdoor? We don’t know.

More generally: generating random numbers is hard. I recommend my own algorithm: Fortuna. I also recommend my own password manager: Password Safe.

EDITED TO ADD: Commentary from Matthew Green.

Backdoor in Zyxel Firewalls and Gateways

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/backdoor-in-zyxel-firewalls-and-gateways.html

This is bad:

More than 100,000 Zyxel firewalls, VPN gateways, and access point controllers contain a hardcoded admin-level backdoor account that can grant attackers root access to devices via either the SSH interface or the web administration panel.

[…]

Installing patches removes the backdoor account, which, according to Eye Control researchers, uses the “zyfwp” username and the “PrOw!aN_fXp” password.

“The plaintext password was visible in one of the binaries on the system,” the Dutch researchers said in a report published before the Christmas 2020 holiday.

Helping build the next generation of privacy-preserving protocols

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/next-generation-privacy-protocols/

Over the last ten years, Cloudflare has become an important part of Internet infrastructure, powering websites, APIs, and web services to help make them more secure and efficient. The Internet is growing in terms of its capacity and the number of people using it and evolving in terms of its design and functionality. As a player in the Internet ecosystem, Cloudflare has a responsibility to help the Internet grow in a way that respects and provides value for its users. Today, we’re making several announcements around improving Internet protocols with respect to something important to our customers and Internet users worldwide: privacy.

These initiatives are:

  • Encrypted Client Hello (ECH), which encrypts the remaining unencrypted metadata in the TLS handshake, including the hostname
  • Oblivious DNS-over-HTTPS (ODoH), which hides the client’s IP address from the DNS resolver
  • OPAQUE, which lets servers verify passwords without ever seeing them

Each of these projects impacts an aspect of the Internet that influences our online lives and digital footprints. Whether we know it or not, there is a lot of private information about us and our lives floating around online. This is something we can help fix.

For over a year, we have been working through standards bodies like the IETF and partnering with the biggest names in Internet technology (including Mozilla, Google, Equinix, and more) to design, deploy, and test these new privacy-preserving protocols at Internet scale. Each of these three protocols touches on a critical aspect of our online lives, and we expect them to help make real improvements to privacy online as they gain adoption.

A continuing tradition at Cloudflare

One of Cloudflare’s core missions is to support and develop technology that helps build a better Internet. As an industry, we’ve made exceptional progress in making the Internet more secure and robust. Cloudflare is proud to have played a part in this progress through multiple initiatives over the years.

Here are a few highlights:

  • Universal SSL™. We’ve been one of the driving forces for encrypting the web. We launched Universal SSL in 2014 to give website encryption to our customers for free and have actively been working along with certificate authorities like Let’s Encrypt, web browsers, and website operators to help remove mixed content. Before Universal SSL launched to give all Cloudflare customers HTTPS for free, only 30% of connections to websites were encrypted. Through the industry’s efforts, that number is now 80% — and a much more significant proportion of overall Internet traffic. Along with doing our part to encrypt the web, we have supported the Certificate Transparency project via Nimbus and Merkle Town, which has improved accountability for the certificate ecosystem HTTPS relies on for trust.
  • TLS 1.3 and QUIC. We’ve also been a proponent of upgrading existing security protocols. Take Transport Layer Security (TLS), the underlying protocol that secures HTTPS. Cloudflare engineers helped contribute to the design of TLS 1.3, the latest version of the standard, and in 2016 we launched support for an early version of the protocol. This early deployment helped lead to improvements to the final version of the protocol. TLS 1.3 is now the most widely used encryption protocol on the web and a vital component of the emerging QUIC standard, of which we were also early adopters.
  • Securing Routing, Naming, and Time. We’ve made major efforts to help secure other critical components of the Internet. Our efforts to help secure Internet routing through our RPKI toolkit, measurement studies, and “Is BGP Safe Yet” tool have significantly improved the Internet’s resilience against disruptive route leaks. Our time service (time.cloudflare.com) has helped keep people’s clocks in sync with more secure protocols like NTS and Roughtime. We’ve also made DNS more secure by supporting DNS-over-HTTPS and DNS-over-TLS in 1.1.1.1 at launch, along with one-click DNSSEC in our authoritative DNS service and registrar.

Continuing to improve the security of the systems of trust online is critical to the Internet’s growth. However, there is a more fundamental principle at play: respect. The infrastructure underlying the Internet should be designed to respect its users.

Building an Internet that respects users

When you sign in to a specific website or service with a privacy policy, you know what that site is expected to do with your data. It’s explicit. Users have no such visibility when it comes to the operators of the Internet itself. You may have an agreement with your Internet Service Provider (ISP) and the site you’re visiting, but it’s doubtful that you even know which networks your data is traversing. Most people don’t have a concept of the Internet beyond what they see on their screen, so it’s hard to imagine that people would accept or even understand what a privacy policy from a transit wholesaler or an inspection middlebox would even mean.

Without encryption, Internet browsing information is implicitly shared with countless third parties online as information passes between networks. Without secure routing, users’ traffic can be hijacked and disrupted. Without privacy-preserving protocols, users’ online life is not as private as they would think or expect. The infrastructure of the Internet wasn’t built in a way that reflects their expectations.

[Figure: Normal network flow]
[Figure: Network flow with malicious route leak]

The good news is that the Internet is continuously evolving. One of the groups that help guide that evolution is the Internet Architecture Board (IAB). The IAB provides architectural oversight to the Internet Engineering Task Force (IETF), the Internet’s main standard-setting body. The IAB recently published RFC 8890, which states that individual end-users should be prioritized when designing Internet protocols. It says that if there’s a conflict between the interests of end-users and the interest of service providers, corporations, or governments, IETF decisions should favor end users. One of the prime interests of end-users is the right to privacy, and the IAB published RFC 6973 to indicate how Internet protocols should take privacy into account.

Today’s technical blog posts are about improvements to the Internet designed to respect user privacy. Privacy is a complex topic that spans multiple disciplines, so it’s essential to clarify what we mean by “improving privacy.” We are specifically talking about changing the protocols that handle privacy-sensitive information exposed “on-the-wire” and modifying them so that this data is exposed to fewer parties. This data continues to exist. It’s just no longer available or visible to third parties without building a mechanism to collect it at a higher layer of the Internet stack, the application layer. These changes go beyond website encryption; they go deep into the design of the systems that are foundational to making the Internet what it is.

The toolbox: cryptography and secure proxies

Two tools for making sure data can be used without being seen are cryptography and secure proxies.

Cryptography allows information to be transformed into a format that a very limited number of people (those with the key) can understand. Some describe cryptography as a tool that transforms data security problems into key management problems. This is a humorous but fair description. Cryptography makes it easier to reason about privacy because only key holders can view data.

Another tool for protecting access to data is isolation/segmentation. By physically limiting which parties have access to information, you effectively build privacy walls. A popular architecture is to rely on policy-aware proxies to pass data from one place to another. Such proxies can be configured to strip sensitive data or block data transfers between parties according to what the privacy policy says.

Both these tools are useful individually, but they can be even more effective if combined. Onion routing (the cryptographic technique underlying Tor) is one example of how proxies and encryption can be used in tandem to enforce strong privacy. Broadly, if party A wants to send data to party B, they can encrypt the data with party B’s key and encrypt the metadata with a proxy’s key and send it to the proxy.
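
To make the combination concrete, here is a hedged Go sketch of that two-hop pattern using NaCl boxes: the inner layer is readable only by the destination, while the outer layer, which carries the routing metadata, is readable only by the proxy. The two-hop layout and the "forward-to" label are illustrative; Tor builds multi-hop circuits and ODoH uses HPKE, but the layering idea is the same:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// Long-term keys for the destination (party B) and for the proxy.
	bPub, bPriv, _ := box.GenerateKey(rand.Reader)
	proxyPub, proxyPriv, _ := box.GenerateKey(rand.Reader)

	// Party A: the inner layer is readable only by B; the outer layer, which
	// also carries the routing metadata, is readable only by the proxy.
	inner, _ := box.SealAnonymous(nil, []byte("the actual payload for B"), bPub, rand.Reader)
	outer, _ := box.SealAnonymous(nil, append([]byte("forward-to:B|"), inner...), proxyPub, rand.Reader)

	// Proxy: peels the outer layer, learns where to forward but not the payload.
	routing, ok := box.OpenAnonymous(nil, outer, proxyPub, proxyPriv)
	if !ok {
		panic("proxy: decryption failed")
	}
	fmt.Printf("proxy sees: %q plus an opaque blob\n", routing[:13])

	// B: opens the forwarded inner layer and recovers the payload.
	payload, ok := box.OpenAnonymous(nil, routing[13:], bPub, bPriv)
	if !ok {
		panic("B: decryption failed")
	}
	fmt.Printf("B sees: %q\n", payload)
}
```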

Platforms and services built on top of the Internet can build in consent systems, like privacy policies presented through user interfaces. The infrastructure of the Internet relies on layers of underlying protocols. Because these layers of the Internet are so far below where the user interacts with them, it’s almost impossible to build a concept of user consent. In order to respect users and protect them from privacy issues, the protocols that glue the Internet together should be designed with privacy enabled by default.

Data vs. metadata

The transition from a mostly unencrypted web to an encrypted web has done a lot for end-user privacy. For example, the “coffeeshop stalker” is no longer an issue for most sites. When accessing the majority of sites online, users are no longer broadcasting every aspect of their web browsing experience (search queries, browser versions, authentication cookies, etc.) over the Internet for any participant on the path to see. Suppose a site is configured correctly to use HTTPS. In that case, users can be confident their data is secure from onlookers and reaches only the intended party because their connections are both encrypted and authenticated.

However, HTTPS only protects the content of web requests. Even if you only browse sites over HTTPS, that doesn’t mean that your browsing patterns are private. This is because HTTPS fails to encrypt a critical aspect of the exchange: the metadata. When you make a phone call, the metadata is the phone number, not the call’s contents. Metadata is the data about the data.

To illustrate the difference and why it matters, here’s a diagram of what happens when you visit a website like an imageboard. Say you’re going to a specific page on that board (https://<imageboard>.com/room101/) that has specific embedded images hosted on <embarassing>.com.

[Figure: Page load for an imageboard, returning an HTML page with an image from an embarassing site]
[Figure: Subresource fetch for the image from an embarassing site]

The space inside the dotted line here represents the part of the Internet that your data needs to transit. It includes your local area network or coffee shop, your ISP, an Internet transit provider, and possibly the network portion of the cloud provider that hosts the server. Users often don’t have a relationship with these entities or a contract that prevents them from doing anything with the user’s data. And even if those entities don’t look at the data, a well-placed observer intercepting Internet traffic could see anything sent unencrypted. It would be better if they didn’t see it at all. In this example, an observer can see that the user visited <imageboard>.com, which is expected. However, even though the page content is encrypted, it’s possible to learn which specific page was visited, since <embarassing>.com is also visible.

It’s a general rule that if data is available to on-path parties on the Internet, some of these on-path parties will use this data. It’s also true that these on-path parties need some metadata in order to facilitate the transport of this data. This balance is explored in RFC 8558, which explains how protocols should be designed thoughtfully with respect to the balance between too much metadata (bad for privacy) and too little metadata (bad for operations).

In an ideal world, Internet protocols would be designed with the principle of least privilege. They would provide the minimum amount of information needed for the on-path parties (the pipes) to do the job of transporting the data to the right place and keep everything else confidential by default. Current protocols, including TLS 1.3 and QUIC, are important steps towards this ideal but fall short with respect to metadata privacy.

Knowing both who you are and what you do online can lead to profiling

Today’s announcements reflect two metadata protection levels: the first involves limiting the amount of metadata available to third-party observers (like ISPs). The second involves restricting the amount of metadata that users share with service providers themselves.

Hostnames are an example of metadata that needs to be protected from third-party observers, which DoH and ECH intend to do. However, it doesn’t make sense to hide the hostname from the site you’re visiting. It also doesn’t make sense to hide it from a directory service like DNS. A DNS server needs to know which hostname you’re resolving to resolve it for you!

A privacy issue arises when a service provider knows about both what sites you’re visiting and who you are. Individual websites do not have this dangerous combination of information (except in the case of third party cookies, which are going away soon in browsers), but DNS providers do. Thankfully, it’s not actually necessary for a DNS resolver to know *both* the hostname of the service you’re going to and which IP you’re coming from. Disentangling the two, which is the goal of ODoH, is good for privacy.

The Internet is part of ‘our’ Infrastructure

Roads should be well-paved, well lit, have accurate signage, and be optimally connected. They aren’t designed to stop a car based on who’s inside it. Nor should they be! Like transportation infrastructure, Internet infrastructure is responsible for getting data where it needs to go, not looking inside packets, and making judgments. But the Internet is made of computers and software, and software tends to be written to make decisions based on the data it has available to it.

Privacy-preserving protocols attempt to eliminate the temptation for infrastructure providers and others to peek inside and make decisions based on personal data. A non-privacy preserving protocol like HTTP keeps data and metadata, like passwords, IP addresses, and hostnames, as explicit parts of the data sent over the wire. The fact that they are explicit means that they are available to any observer to collect and act on. A protocol like HTTPS improves upon this by making some of the data (such as passwords and site content) invisible on the wire using encryption.

The three protocols we are exploring today extend this concept.

  • ECH takes most of the unencrypted metadata in TLS (including the hostname) and encrypts it with a key that was fetched ahead of time.
  • ODoH (a new variant of DoH co-designed by Apple, Cloudflare, and Fastly engineers) uses proxies and onion-like encryption to make the source of a DNS query invisible to the DNS resolver. This protects the user’s IP address when resolving hostnames.
  • OPAQUE uses a new cryptographic technique to keep passwords hidden even from the server. Utilizing a construction called an Oblivious Pseudo-Random Function (as seen in Privacy Pass), the server does not learn the password; it only learns whether or not the user knows the password.

By making sure Internet infrastructure acts more like physical infrastructure, user privacy is more easily protected. The Internet is more private if private data can only be collected where the user has a chance to consent to its collection.

Doing it together

As much as we’re excited about working on new ways to make the Internet more private, innovation at a global scale doesn’t happen in a vacuum. Each of these projects is the output of a collaborative group of individuals working out in the open in organizations like the IETF and the IRTF. Protocols must come about through a consensus process that involves all the parties that make up the interconnected set of systems that power the Internet. From browser builders to cryptographers, from DNS operators to website administrators, this is truly a global team effort.

We also recognize that sweeping technical changes to the Internet will inevitably also impact the technical community. Adopting these new protocols may have legal and policy implications. We are actively working with governments and civil society groups to help educate them about the impact of these potential changes.

We’re looking forward to sharing our work today and hope that more interested parties join in developing these protocols. The projects we are announcing today were designed by experts from academia, industry, and the hobbyist community, and were built by engineers from Cloudflare Research (including the work of interns, which we will highlight) with support from across Cloudflare.

If you’re interested in this type of work, we’re hiring!

OPAQUE: The Best Passwords Never Leave your Device

Post Syndicated from Tatiana Bradley original https://blog.cloudflare.com/opaque-oblivious-passwords/

Passwords are a problem. They are a problem for reasons that are familiar to most readers. For us at Cloudflare, the problem lies much deeper and broader. Most readers will immediately acknowledge that passwords are hard to remember and manage, especially as password requirements grow increasingly complex. Luckily there are great software packages and browser add-ons to help manage passwords. Unfortunately, the greater underlying problem is beyond the reaches of software to solve.

The fundamental password problem is simple to explain, but hard to solve: A password that leaves your possession is guaranteed to sacrifice security, no matter its complexity or how hard it may be to guess. Passwords are insecure by their very existence.

You might say, “but passwords are always stored in encrypted format!” That would be great. More accurately, they are likely stored as a salted hash, as explained below. Even worse, there is no way to verify how passwords are stored, so we can assume that on some servers passwords are stored in cleartext. The truth is that even responsibly stored passwords can be leaked and broken, albeit (and thankfully) with enormous effort. An increasingly pressing problem stems from the nature of passwords themselves: any direct use of a password, today, means that the password must be handled in the clear.

You say, “but my password is transmitted securely over HTTPS!” This is true.

You say, “but I know the server stores my password in hashed form, secure so no one can access it!” Well, this puts a lot of faith in the server. Even so, let’s just say that yes, this may be true, too.

There remains, however, an important caveat — a gap in the end-to-end use of passwords. Consider that once a server receives a password, between being securely transmitted and securely stored, the password has to be read and processed. Yes, as cleartext!

And it gets worse — because so many of us are used to thinking in terms of software, it’s easy to forget about the vulnerability of hardware. Even if the software is somehow trusted, the password must at some point reside in memory and be transmitted over a shared bus to the CPU. These provide attack vectors to onlookers in many forms. Of course, these attack vectors are far less likely than those presented by transmission and permanent storage, but they are no less severe (recent CPU vulnerabilities such as Spectre and Meltdown should serve as a stark reminder).

The only way to fix this problem is to get rid of passwords altogether. There is hope! Research and private sector communities are working hard to do just that. New standards are emerging and growing mature. Unfortunately, passwords are so ubiquitous that it will take a long time to agree on and supplant passwords with new standards and technology.

At Cloudflare, we’ve been asking if there is something that can be done now, imminently. Today’s deep-dive into OPAQUE is one possible answer. OPAQUE is one among many examples of systems that enable a password to be useful without it ever leaving your possession. No one likes passwords, but as long as they’re in use, at least we can ensure they are never given away.

I’ll be the first to admit that password-based authentication is annoying. Passwords are hard to remember, tedious to type, and notoriously insecure. Initiatives to reduce or replace passwords are promising. For example, WebAuthn is a standard for web authentication based primarily on public key cryptography using hardware (or software) tokens. Even so, passwords are frustratingly persistent as an authentication mechanism. Whether their persistence is due to their ease of implementation, familiarity to users, or simple ubiquity on the web and elsewhere, we’d like to make password-based authentication as secure as possible while they persist.

My internship at Cloudflare focused on OPAQUE, a cryptographic protocol that solves one of the most glaring security issues with password-based authentication on the web: though passwords are typically protected in transit by HTTPS, servers handle them in plaintext to check their correctness. Handling plaintext passwords is dangerous, as accidentally logging or caching them could lead to a catastrophic breach. The goal of the project, rather than to advocate for adoption of any particular protocol, is to show that OPAQUE is a viable option among many for authentication. Because the web case is most familiar to me, and likely many readers, I will use the web as my main example.

Web Authentication 101: Password-over-TLS

When you type in a password on the web, what happens? The website must check that the password you typed is the same as the one you originally registered with the site. But how does this check work?

Usually, your username and password are sent to a server. The server then checks if the registered password associated with your username matches the password you provided. Of course, to prevent an attacker eavesdropping on your Internet traffic from stealing your password, your connection to the server should be encrypted via HTTPS (HTTP-over-TLS).

Despite use of HTTPS, there still remains a glaring problem in this flow: the server must store a representation of your password somewhere. Servers are hard to secure, and breaches are all too common. Leaking this representation can cause catastrophic security problems. (For records of the latest breaches, check out https://haveibeenpwned.com/).

To make these leaks less devastating, servers often apply a hash function to user passwords. A hash function maps each password to a unique, random-looking value. It’s easy to apply the hash to a password, but almost impossible to reverse the function and retrieve the password. (That said, anyone can guess a password, apply the hash function, and check if the result is the same.)

With password hashing, plaintext passwords are no longer stored on servers.  An attacker who steals a password database no longer has direct access to passwords. Instead, the attacker must apply the hash to many possible passwords and compare the results with the leaked hashes.

Unfortunately, if a server hashes only the passwords, attackers can download precomputed rainbow tables containing hashes of trillions of possible passwords and almost instantly retrieve the plaintext passwords. (See https://project-rainbowcrack.com/table.htm for a list of some rainbow tables).
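
To see why unsalted hashes fall so quickly, note that the attacker's table only has to be built once and can then be reused against every unsalted database. A hedged Go sketch of the idea (the password list and the choice of SHA-256 are illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	// A (tiny) precomputed table mapping hash -> password. Real tables cover
	// billions of candidates and, without salts, work against every database.
	common := []string{"123456", "password", "qwerty", "hunter2"}
	table := make(map[string]string, len(common))
	for _, pw := range common {
		sum := sha256.Sum256([]byte(pw))
		table[hex.EncodeToString(sum[:])] = pw
	}

	// A leaked, unsalted hash (here, of "hunter2") is recovered with one lookup.
	leak := sha256.Sum256([]byte("hunter2"))
	if pw, ok := table[hex.EncodeToString(leak[:])]; ok {
		fmt.Println("recovered password:", pw)
	}
}
```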

With this in mind, a good defense-in-depth strategy is to use salted hashing, where the server hashes your password appended to a random, per-user value called a salt. The server also saves the salt alongside the username, so the user never sees or needs to submit it. When the user submits a password, the server re-computes this hash function using the salt. An attacker who steals password data, i.e., the password representations and salt values, must then guess common passwords one by one and apply the (salted) hash function to each guessed password. Existing rainbow tables won’t help because they don’t take the salts into account, so the attacker needs to make a new rainbow table for each user!

This (hopefully) slows down the attack enough for the service to inform users of a breach, so they can change their passwords. In addition, the salted hashes should be hardened by applying a hash many times to further slow attacks. (See https://blog.cloudflare.com/keeping-passwords-safe-by-staying-up-to-date/ for a more detailed discussion).
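
As a concrete illustration of the salted-and-hardened approach, here is a minimal Go sketch using a fresh per-user salt with Argon2id; the cost parameters are examples only, not a tuning recommendation:

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"fmt"

	"golang.org/x/crypto/argon2"
)

// hashPassword derives a hardened hash from the password and a fresh per-user salt.
// The salt and hash are what the server stores; the plaintext password is not.
func hashPassword(password string) (salt, hash []byte) {
	salt = make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
	// Example Argon2id parameters: 1 pass, 64 MiB of memory, 4 threads, 32-byte output.
	hash = argon2.IDKey([]byte(password), salt, 1, 64*1024, 4, 32)
	return salt, hash
}

// verifyPassword recomputes the hash with the stored salt and compares in constant time.
func verifyPassword(password string, salt, hash []byte) bool {
	candidate := argon2.IDKey([]byte(password), salt, 1, 64*1024, 4, 32)
	return subtle.ConstantTimeCompare(candidate, hash) == 1
}

func main() {
	salt, hash := hashPassword("correct horse battery staple")
	fmt.Println("right password:", verifyPassword("correct horse battery staple", salt, hash))
	fmt.Println("wrong password:", verifyPassword("123456", salt, hash))
}
```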

These two mitigation strategies — encrypting the password in transit and storing salted, hardened hashes — are the current best practices.

A large security hole remains open. Password-over-TLS (as we will call it) requires users to send plaintext passwords to servers during login, because servers must see these passwords to match against registered passwords on file. Even a well-meaning server could accidentally cache or log your password attempt(s), or become corrupted in the course of checking passwords. (For example, Facebook detected in 2019 that it had accidentally been storing hundreds of millions of plaintext user passwords). Ideally, servers should never see a plaintext password at all.

But that’s quite a conundrum: how can you check a password if you never see the password? Enter OPAQUE: a Password-Authenticated Key Exchange (PAKE) protocol that simultaneously proves knowledge of a password and derives a secret key. Before describing OPAQUE in detail, we’ll first summarize PAKE functionalities in general.

Password Proofs with Password-Authenticated Key Exchange

Password-Authenticated Key Exchange (PAKE) was proposed by Bellovin and Merritt in 1992, with an initial motivation of allowing password authentication without the possibility of dictionary attacks based on data transmitted over an insecure channel.

Essentially, a plain, or symmetric, PAKE is a cryptographic protocol that allows two parties who share only a password to establish a strong shared secret key. The goals of PAKE are:

1) The secret keys will match if the passwords match, and appear random otherwise.

2) Participants do not need to trust third parties (in particular, no Public Key Infrastructure).

3) The resulting secret key is not learned by anyone not participating in the protocol – including those who know the password.

4) The protocol does not reveal either party’s password to the other party (unless the passwords match), or to eavesdroppers.

In sum, the only way to successfully attack the protocol is to guess the password correctly while participating in the protocol. (Luckily, such attacks can be mostly thwarted by rate-limiting, i.e., blocking a user from logging in after a certain number of incorrect password attempts).

Given these requirements, password-over-TLS is clearly not a PAKE, because:

  • It relies on WebPKI, which places trust in third-parties called Certificate Authorities (see https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/ for an in-depth explanation of WebPKI and some of its shortcomings).
  • The user’s password is revealed to the server.
  • Password-over-TLS provides the user no assurance that the server knows their password or a derivative of it — a server could accept any input from the user with no checks whatsoever.

That said, plain PAKE is still worse than Password-over-TLS, simply because it requires the server to store plaintext passwords. We need a PAKE that lets the server store salted hashes if we want to beat the current practice.

An improvement over plain PAKE is what’s called an asymmetric PAKE (aPAKE), because only the client knows the password, and the server knows a hashed password. An aPAKE has the four properties of PAKE, plus one more:

5) An attacker who steals password data stored on the server must perform a dictionary attack to retrieve the password.

The issue with most existing aPAKE protocols, however, is that they do not allow for a salted hash (or if they do, they require that salt to be transmitted to the user, which means the attacker has access to the salt beforehand and can begin computing a rainbow table for the user before stealing any data). We’d like, therefore, to upgrade the security property as follows:

5*) An attacker who steals password data stored on the server must perform a per-user dictionary attack to retrieve the password after the data is compromised.

OPAQUE is the first aPAKE protocol with a formal security proof that has this property: it allows for a completely secret salt.

OPAQUE – Servers safeguard secrets without knowing them!

OPAQUE is what’s referred to as a strong aPAKE, which simply means that it resists these pre-computation attacks by using a secretly salted hash on the server. OPAQUE was proposed and formally analyzed by Stanislaw Jarecki, Hugo Krawczyk and Jiayu Xu in 2018 (full disclosure: Stanislaw Jarecki is my academic advisor). The name OPAQUE is a combination of the names of two cryptographic protocols: OPRF and PAKE. We already know PAKE, but what is an OPRF? OPRF stands for Oblivious Pseudo-Random Function, which is a protocol by which two parties compute a function F(key, x) that is deterministic but outputs random-looking values. One party inputs the value x, and another party inputs the key – the party who inputs x learns the result F(key, x) but not the key, and the party providing the key learns nothing. (You can dive into the math of OPRFs here: https://blog.cloudflare.com/privacy-pass-the-math/).
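
The blinding trick behind an OPRF can be illustrated with exponentiation in a prime-order group: the client hides its input x behind a random exponent r, the server raises the blinded element to its secret key, and the client strips r off again, ending up with H(x)^key while neither party saw the other's secret. The Go sketch below is a toy with deliberately tiny parameters; real deployments use elliptic-curve OPRFs, such as the one in CIRCL, with a proper hash-to-group construction:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"math/big"
)

var one = big.NewInt(1)

// safePrime returns p = 2q+1 with both p and q prime (tiny parameters, demo only).
func safePrime(bits int) (p, q *big.Int) {
	for {
		q, _ = rand.Prime(rand.Reader, bits-1)
		p = new(big.Int).Add(new(big.Int).Lsh(q, 1), one)
		if p.ProbablyPrime(32) {
			return p, q
		}
	}
}

// hashToGroup maps an input into the order-q subgroup as g^(SHA-256(x) mod q).
// This is a toy stand-in for a real hash-to-group construction.
func hashToGroup(x []byte, g, p, q *big.Int) *big.Int {
	h := sha256.Sum256(x)
	e := new(big.Int).Mod(new(big.Int).SetBytes(h[:]), q)
	return new(big.Int).Exp(g, e, p)
}

// randScalar picks a uniform non-zero exponent modulo q.
func randScalar(q *big.Int) *big.Int {
	for {
		r, _ := rand.Int(rand.Reader, q)
		if r.Sign() > 0 {
			return r
		}
	}
}

func main() {
	p, q := safePrime(256)
	g := big.NewInt(4) // 4 = 2^2 is a quadratic residue, so it generates the order-q subgroup

	key := randScalar(q) // the server's secret OPRF key
	pwd := []byte("hunter2")

	// Client: blind the hashed password so the server never sees H(pwd).
	r := randScalar(q)
	blinded := new(big.Int).Exp(hashToGroup(pwd, g, p, q), r, p)

	// Server: evaluate on the blinded element; it learns nothing about pwd.
	evaluated := new(big.Int).Exp(blinded, key, p)

	// Client: unblind with r^-1 mod q, obtaining H(pwd)^key without learning key.
	output := new(big.Int).Exp(evaluated, new(big.Int).ModInverse(r, q), p)

	// Sanity check, possible here only because the demo holds both secrets.
	direct := new(big.Int).Exp(hashToGroup(pwd, g, p, q), key, p)
	fmt.Println("F(key, pwd) consistent:", output.Cmp(direct) == 0)
}
```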

The core of OPAQUE is a method to store user secrets for safekeeping on a server, without giving the server access to those secrets. Instead of storing a traditional salted password hash, the server stores a secret envelope for you that is “locked” by two pieces of information: your password known only by you, and a random secret key (like a salt) known only by the server. To log in, the client initiates a cryptographic exchange that reveals the envelope key to the client, but, importantly, not to the server.

The server then sends the envelope to the user, who now can retrieve the encrypted keys. (The keys included in the envelope are a private-public key pair for the user, and a public key for the server.) These keys, once unlocked, will be the inputs to an Authenticated Key Exchange (AKE) protocol, which allows the user and server to establish a secret key which can be used to encrypt their future communication.

OPAQUE consists of two phases: credential registration, and login via key exchange.

OPAQUE: Registration Phase

Before registration, the user first signs up for a service and picks a username and password. Registration begins with the OPRF flow we just described: Alice (the user) and Bob (the server) do an OPRF exchange. The result is that Alice has a random key rwd, derived from the OPRF output F(key, pwd), where key is a server-owned OPRF key specific to Alice and pwd is Alice’s password.

Within his OPRF message, Bob sends the public key for his OPAQUE identity. Alice then generates a new private/public key pair, which will be her persistent OPAQUE identity for Bob’s service, and encrypts her private key along with Bob’s public key with the rwd (we will call the result an encrypted envelope). She sends this encrypted envelope along with her public key (unencrypted) to Bob, who stores the data she provided, along with Alice’s specific OPRF key, in a database indexed by her username.
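
Setting the OPRF and key-exchange details aside, the envelope itself can be sketched as “derive a key from rwd, then seal the client's credentials under it”. The Go sketch below is an assumption-laden illustration (HKDF plus AES-GCM standing in for the envelope construction in the actual OPAQUE specification); the login phase described in the next section simply reuses openEnvelope with the rwd recomputed from the password:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

// envelopeKey stretches the OPRF output rwd into an AEAD key.
func envelopeKey(rwd []byte) []byte {
	key := make([]byte, 32)
	io.ReadFull(hkdf.New(sha256.New, rwd, nil, []byte("toy opaque envelope")), key)
	return key
}

// sealEnvelope (registration): the client encrypts its private key and the
// server's public key under a key derived from rwd; the server stores the result.
func sealEnvelope(rwd, clientPriv, serverPub []byte) []byte {
	block, _ := aes.NewCipher(envelopeKey(rwd))
	aead, _ := cipher.NewGCM(block)
	nonce := make([]byte, aead.NonceSize())
	rand.Read(nonce)
	return append(nonce, aead.Seal(nil, nonce, append(clientPriv, serverPub...), nil)...)
}

// openEnvelope (login): decryption succeeds only if rwd, and hence the password
// behind it, is correct; otherwise the client aborts.
func openEnvelope(rwd, envelope []byte) ([]byte, error) {
	block, _ := aes.NewCipher(envelopeKey(rwd))
	aead, _ := cipher.NewGCM(block)
	nonce, ct := envelope[:aead.NonceSize()], envelope[aead.NonceSize():]
	return aead.Open(nil, nonce, ct, nil)
}

func main() {
	rwd := []byte("stand-in for the OPRF output F(key, pwd)")
	env := sealEnvelope(rwd, []byte("alice-private-key"), []byte("bob-public-key"))

	if _, err := openEnvelope([]byte("wrong rwd"), env); err != nil {
		fmt.Println("wrong password: envelope stays sealed")
	}
	creds, _ := openEnvelope(rwd, env)
	fmt.Printf("recovered credentials: %q\n", creds)
}
```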

OPAQUE: Login Phase

The login phase is very similar. It starts the same way as registration — with an OPRF flow. However, on the server side, instead of generating a new OPRF key, Bob instead looks up the one he created during Alice’s registration. He does this by looking up Alice’s username (which she provides in the first message), and retrieving his record of her. This record contains her public key, her encrypted envelope, and Bob’s OPRF key for Alice.

He also sends over the encrypted envelope which Alice can decrypt with the output of the OPRF flow. (If decryption fails, she aborts the protocol — this likely indicates that she typed her password incorrectly, or Bob isn’t who he says he is). If decryption succeeds, she now has her own secret key and Bob’s public key. She inputs these into an AKE protocol with Bob, who, in turn, inputs his private key and her public key, which gives them both a fresh shared secret key.

Integrating OPAQUE with an AKE

An important question to ask here is: what AKE is suitable for OPAQUE? The emerging CFRG specification outlines several options, including 3DH and SIGMA-I. However, on the web, we already have an AKE at our disposal: TLS!

Recall that TLS is an AKE because it provides unilateral (and mutual) authentication with shared secret derivation. The core of TLS is a Diffie-Hellman key exchange, which by itself is unauthenticated, meaning that the parties running it have no way to verify who they are running it with. (This is a problem because when you log into your bank, or any other website that stores your private data, you want to be sure that they are who they say they are). Authentication primarily uses certificates, which are issued by trusted entities through a system called Public Key Infrastructure (PKI). Each certificate is associated with a secret key. To prove its identity, the server presents its certificate to the client, and signs the TLS handshake with its secret key.

Modifying this ubiquitous certificate-based authentication on the web is perhaps not the best place to start. Instead, an improvement would be to authenticate the TLS shared secret, using OPAQUE, after the TLS handshake completes. In other words, once a server is authenticated with its typical WebPKI certificate, clients could subsequently authenticate to the server. This authentication could take place “post handshake” in the TLS connection using OPAQUE.

Exported Authenticators are one mechanism for “post-handshake” authentication in TLS. They allow a server or client to provide proof of an identity without setting up a new TLS connection. Recall that in the standard web case, the server establishes their identity with a certificate (proving, for example, that they are “cloudflare.com”). But if the same server also holds alternate identities, they must run TLS again to prove who they are.

The basic Exported Authenticator flow resembles a classical challenge-response protocol, and works as follows. (We’ll consider the server authentication case only, as the client case is symmetric).

At any point after a TLS connection is established, Alice (the client) sends an authenticator request to indicate that she would like Bob (the server) to prove an additional identity. This request includes a context (an unpredictable string — think of this as a challenge), and extensions which include information about what identity the client wants to be provided. For example, the client could include the SNI extension to ask the server for a certificate associated with a certain domain name other than the one initially used in the TLS connection.

On receipt of the client message, if the server has a valid certificate corresponding to the request, it sends back an exported authenticator which proves that it has the secret key for the certificate. (This message has the same format as an Auth message from the client in the TLS 1.3 handshake – it contains a Certificate, a CertificateVerify, and a Finished message). If the server cannot or does not wish to authenticate with the requested certificate, it replies with an empty authenticator which contains only a well-formed Finished message.

The client then checks that the Exported Authenticator it receives is well-formed, verifies that the certificate presented is valid, and, if so, accepts the new identity.

In sum, Exported Authenticators provide authentication in a higher layer (such as the application layer) safely by leveraging the well-vetted cryptography and message formats of TLS. Furthermore, it is tied to the TLS session so that authentication messages can’t be copied and pasted from one TLS connection into another. In other words, Exported Authenticators provide exactly the right hooks needed to add OPAQUE-based authentication into TLS.
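
The “tied to the TLS session” property comes from TLS exporter secrets: both endpoints can derive the same connection-bound key material and mix it into the authenticator, so a proof copied into a different connection will not verify. A hedged Go sketch of deriving such an exporter value with crypto/tls (the label, context, and length here are illustrative, not the values defined by the Exported Authenticators specification):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// exporterValue derives connection-bound key material; both peers of the same
// TLS connection compute identical bytes for the same label, context, and length.
func exporterValue(conn *tls.Conn, context []byte) ([]byte, error) {
	state := conn.ConnectionState()
	// The label and length are illustrative; the EA specification defines its own.
	return state.ExportKeyingMaterial("EXPERIMENTAL exported authenticator demo", context, 32)
}

func main() {
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{MinVersion: tls.VersionTLS13})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ekm, err := exporterValue(conn, []byte("authenticator request context"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("connection-bound exporter value: %x\n", ekm)
}
```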

OPAQUE with Exported Authenticators (OPAQUE-EA)

OPAQUE-EA allows OPAQUE to run at any point after a TLS connection has already been set up. Recall that Bob (the server) will store his OPAQUE identity, in this case a signing key and verification key, and Alice will store her identity — encrypted — on Bob’s server. (The registration flow where Alice stores her encrypted keys is the same as in regular OPAQUE, except she stores a signing key, so we will skip straight to the login flow). Alice and Bob run two request-authenticate EA flows, one for each party, and OPAQUE protocol messages ride along in the extensions section of the EAs. Let’s look in detail at how this works.

First, Alice generates her OPRF message based on her password. She creates an Authenticator Request asking for Bob’s OPAQUE identity, and includes (in the extensions field) her username and her OPRF message, and sends this to Bob over their established TLS connection.

Bob receives the message and looks up Alice’s username in his database. He retrieves her OPAQUE record containing her verification key and encrypted envelope, and his OPRF key. He uses the OPRF key on the OPRF message, and creates an Exported Authenticator proving ownership of his OPAQUE signing key, with an extension containing his OPRF message and the encrypted envelope. Additionally, he sends a new Authenticator Request asking Alice to prove ownership of her OPAQUE signing key.

Alice parses the message and completes the OPRF evaluation using Bob’s message to get output rwd, and uses rwd to decrypt the envelope. This reveals her signing key and Bob’s public key. She uses Bob’s public key to validate his Authenticator Response proof, and, if it checks out, she creates and sends an Exported Authenticator proving that she holds the newly decrypted signing key. Bob checks the validity of her Exported Authenticator, and if it checks out, he accepts her login.

My project: OPAQUE-EA over HTTPS

Everything described above is supported by lots and lots of theory that has yet to find its way into practice. My project was to turn the theory into reality. I started with written descriptions of Exported Authenticators, OPAQUE, and a preliminary draft of OPAQUE-in-TLS. My goal was to get from those to a working prototype.

My demo shows the feasibility of implementing OPAQUE-EA on the web, completely removing passwords from the wire: they are never sent, even in encrypted form. This provides a possible alternative to the current password-over-TLS flow with better security properties, but no visible change for the user.

A few of the implementation details are worth knowing. In computer science, abstraction is a powerful tool. It means that we can often rely on existing tools and APIs to avoid duplication of effort. In my project I relied heavily on mint, an open-source implementation of TLS 1.3 in Go that is great for prototyping. I also used CIRCL’s OPRF API. I built libraries for Exported Authenticators, the core of OPAQUE, and OPAQUE-EA (which ties together the two).

I made the web demo by wrapping the OPAQUE-EA functionality in a simple HTTP server and client that pass messages to each other over HTTPS. Since a browser can’t run Go, I compiled from Go to WebAssembly (WASM) to get the Go functionality in the browser, and wrote a simple script in JavaScript to call the WASM functions needed.
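
For the curious, the Go-to-WASM bridge boils down to registering Go functions on the JavaScript global object. A hedged sketch (the function name and payload are placeholders, not the demo's actual API):

```go
//go:build js && wasm

package main

import "syscall/js"

func main() {
	// Expose a function the page's JavaScript can call as
	// window.opaqueLoginStep(username, password).
	js.Global().Set("opaqueLoginStep", js.FuncOf(func(this js.Value, args []js.Value) interface{} {
		username := args[0].String()
		// A real client would run the next OPAQUE-EA step here using args[1]
		// (the password) and return the resulting protocol message.
		return "next protocol message for " + username
	}))

	// Block forever so the Go runtime (and the registered callback) stays alive.
	select {}
}
```

Building it is a matter of running GOOS=js GOARCH=wasm go build -o main.wasm and loading the module with the wasm_exec.js glue script that ships with Go.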

Since current browsers do not give access to the underlying TLS connection on the client side, I had to implement a workaround to allow the client to access the exporter keys, namely, that the server simply computes the keys and sends them to the client over HTTPS. This workaround reduces the security of the resulting demo — it means that trust is placed in the server to provide the right keys. Even so, the user’s password is still safe, even if a malicious server provided bad keys — they just don’t have assurance that they actually previously registered with that server. However, in the future, browsers could include a mechanism to support exported keys and allow OPAQUE-EA to run with its full security properties.

You can explore my implementation on Github, and even follow the instructions to spin up your own OPAQUE-EA test server and client. I’d like to stress, however, that the implementation is meant as a proof-of-concept only, and must not be used for production systems without significant further review.

OPAQUE-EA Limitations

Despite its great properties, there will definitely be some hurdles in bringing OPAQUE-EA from a proof-of-concept to a fully fledged authentication mechanism.

Browser support for TLS exporter keys. As mentioned briefly before, to run OPAQUE-EA in a browser, you need to access secrets from the TLS connection called exporter keys. There is no way to do this in the current most popular browsers, so support for this functionality will need to be added.

Overhauling password databases. To adopt OPAQUE-EA, servers need not only to update their password-checking logic, but also completely overhaul their password databases. Because OPAQUE relies on special password representations that can only be generated interactively, existing salted hashed passwords cannot be automatically updated to OPAQUE records. Servers will likely need to run a special OPAQUE registration flow on a user-by-user basis. Because OPAQUE relies on buy-in from both the client and the server, servers may need to support the old method for a while before all clients catch up.

Reliance on emerging standards. OPAQUE-EA relies on OPRFs, which are in the process of standardization, and Exported Authenticators, a proposed standard. This means that support for these dependencies is not yet available in most existing cryptographic libraries, so early adopters may need to implement these tools themselves.

Summary

As long as people still use passwords, we’d like to make the process as secure as possible. Current methods rely on the risky practice of handling plaintext passwords on the server side while checking their correctness. PAKEs (and specifically aPAKEs) allow secure password login without ever letting the server see the passwords.

OPAQUE is also being explored within other companies. According to Kevin Lewi, a research scientist from the Novi Research team at Facebook, they are “excited by the strong cryptographic guarantees provided by OPAQUE and are actively exploring OPAQUE as a method for further safeguarding credential-protected fields that are stored server-side.”

OPAQUE is one of the best aPAKEs out there, and can be fully integrated into TLS. You can check out the core OPAQUE implementation here and the demo TLS integration here. A running version of the demo is also available here. A Typescript client implementation of OPAQUE is coming soon. If you’re interested in implementing the protocol, or encounter any bugs with the current implementation, please drop us a line at [email protected]! Consider also subscribing to the IRTF CFRG mailing list to track discussion about the OPAQUE specification and its standardization.

On Risk-Based Authentication

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/on-risk-based-authentication.html

Interesting usability study: “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication“:

Abstract: Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not studied well.

We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users’ perception of RBA and helps to improve RBA implementations for a broader user acceptance.

Paper’s website. I’ve blogged about risk-based authentication before.

DiceKeys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/dicekeys.html

DiceKeys is a physical mechanism for creating and storing a 192-bit key. The idea is that you roll a special set of twenty-five dice, put them into a plastic jig, and then use an app to convert those dice into a key. You can then use that key for a variety of purposes, and regenerate it from the dice if you need to.

This week Stuart Schechter, a computer scientist at the University of California, Berkeley, is launching DiceKeys, a simple kit for physically generating a single super-secure key that can serve as the basis for creating all the most important passwords in your life for years or even decades to come. With little more than a plastic contraption that looks a bit like a Boggle set and an accompanying web app to scan the resulting dice roll, DiceKeys creates a highly random, mathematically unguessable key. You can then use that key to derive master passwords for password managers, as the seed to create a U2F key for two-factor authentication, or even as the secret key for cryptocurrency wallets. Perhaps most importantly, the box of dice is designed to serve as a permanent, offline key to regenerate that master password, crypto key, or U2F token if it gets lost, forgotten, or broken.

[…]

Schechter is also building a separate app that will integrate with DiceKeys to allow users to write a DiceKeys-generated key to their U2F two-factor authentication token. Currently the app works only with the open-source SoloKey U2F token, but Schechter hopes to expand it to be compatible with more commonly used U2F tokens before DiceKeys ship out. The same API that allows that integration with his U2F token app will also allow cryptocurrency wallet developers to integrate their wallets with DiceKeys, so that with a compatible wallet app, DiceKeys can generate the cryptographic key that protects your crypto coins too.

Here’s the DiceKeys website and app. Here’s a short video demo. Here’s a longer SOUPS talk.

Preorder a set here.

Note: I am an adviser on the project.

Another news article. Slashdot thread. Hacker News thread. Reddit thread.

Half a Million IoT Passwords Leaked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/half_a_million.html

It is amazing that this sort of thing can still happen:

…the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.

Telnet? Default passwords? In 2020?

We have a long way to go to secure the IoT.

EDITED TO ADD (7/14): Apologies, but I previously blogged this story in January.

Password Changing After a Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/password_changi.html

This study shows that most people don’t change their passwords after a breach, and if they do they change it to a weaker password.

Abstract: To protect against misuse of passwords compromised in a breach, consumers should promptly change affected passwords and any similar passwords on other accounts. Ideally, affected companies should strongly encourage this behavior and have mechanisms in place to mitigate harm. In order to make recommendations to companies about how to help their users perform these and other security-enhancing actions after breaches, we must first have some understanding of the current effectiveness of companies’ post-breach practices. To study the effectiveness of password-related breach notifications and practices enforced after a breach, we examine­ — based on real-world password data from 249 participants­ — whether and how constructively participants changed their passwords after a breach announcement.

Of the 249 participants, 63 had accounts on breached domains; only 33% of the 63 changed their passwords and only 13% (of 63) did so within three months of the announcement. New passwords were on average 1.3× stronger than old passwords (when comparing log10-transformed strength), though most were weaker or of equal strength. Concerningly, new passwords were overall more similar to participants’ other passwords, and participants rarely changed passwords on other sites even when these were the same or similar to their password on the breached domain. Our results highlight the need for more rigorous password-changing requirements following a breach and more effective breach notifications that deliver comprehensive advice.

News article.

EDITED TO ADD (6/2): Another news article. Slashdot thread.

CIA Dirty Laundry Aired

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/cia_dirty_laund.html

Joshua Schulte, the CIA employee standing trial for leaking the Wikileaks Vault 7 CIA hacking tools, maintains his innocence. And during the trial, a lot of shoddy security and sysadmin practices are coming out:

All this raises a question, though: just how bad is the CIA’s security that it wasn’t able to keep Schulte out, even accounting for the fact that he is a hacking and computer specialist? And the answer is: absolutely terrible.

The password for the Confluence virtual machine that held all the hacking tools that were stolen and leaked? That’ll be 123ABCdef. And the root login for the main DevLAN server? mysweetsummer.

It actually gets worse than that. Those passwords were shared by the entire team and posted on the group’s intranet. IRC chats published during the trial even revealed team members talking about how terrible their infosec practices were, and joked that CIA internal security would go nuts if they knew. Their justification? The intranet was restricted to members of the Operational Support Branch (OSB): the elite programming unit that makes the CIA’s hacking tools.

The jury returned no verdict on the serious charges. He was convicted of contempt and lying to the FBI; a mistrial on everything else.

Pwned Passwords Padding (ft. Lava Lamps and Workers)

Post Syndicated from Junade Ali original https://blog.cloudflare.com/pwned-passwords-padding-ft-lava-lamps-and-workers/

The Pwned Passwords API (part of Troy Hunt’s Have I Been Pwned service) is used tens of millions of times each day, to alert users if their credentials are breached in a variety of online services, browser extensions and applications. Using Cloudflare, the API cached around 99% of requests, making it very efficient to run.

From today, we are offering a new security advancement in the Pwned Passwords API – API clients can receive responses padded with random data. This exists to protect against potential attack vectors that use passive analysis of the size of API responses to identify which anonymised bucket a user is querying. I am hugely grateful to security researcher Matt Weir, whom I met at PasswordsCon in Stockholm; he explored proof-of-concept analysis of unpadded API responses in Pwned Passwords and drove some of the work to consider the addition of padded responses.

Now, by passing a header of “Add-Padding” with a value of “true”, Pwned Passwords API users are able to request padded API responses (to a minimum of 800 entries with additional padding of a further 0-200 entries). The padding consists of randomly generated hash suffixes with the usage count field set to “0”.

Clients using this approach should seek to exclude 0-usage hash suffixes from breach validation. Given most implementations of Pwned Passwords simply do string matching on the suffix of a hash, there is no real performance implication of searching through the padding data. The false positive risk if a hash suffix matches a randomly generated response is very low, 619/2^(35×4) ≈ 4.44 × 10^-40. This means you’d need to do about 10^40 queries (roughly a query for every two atoms in the universe) to have a 44.4% probability of a collision.

In the future, non-padded responses will be deprecated outright (and all responses will be padded) once clients have had a chance to update.

You can see an example padded request by running the following curl request:

curl -H Add-Padding:true https://api.pwnedpasswords.com/range/FFFFF

API Structure

The high level structure of the Pwned Passwords API is discussed in my original blog post “Validating Leaked Passwords with k-Anonymity”. In essence, a client queries the API with the first 5 hexadecimal characters of a SHA-1 hashed password (amounting to 20 bits), and a list of responses is returned containing the remaining 35 hexadecimal characters of the hash (140 bits) of every breached password in the dataset that shares that prefix. Each hash suffix is appended with a colon (“:”) and the number of times that given hash is found in the breached data.
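
Put together, here is a hedged Go sketch of a client following that flow end to end: hash the password with SHA-1, send only the first five hex characters, then scan the response for the remaining 35, skipping padding entries whose count is 0 (the helper name and error handling are illustrative):

```go
package main

import (
	"bufio"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// pwnedCount reports how many times a password appears in the breach corpus.
// Only the first 5 hex characters of the SHA-1 hash ever leave the client.
func pwnedCount(password string) (int, error) {
	h := sha1.Sum([]byte(password))
	hash := strings.ToUpper(hex.EncodeToString(h[:]))
	prefix, suffix := hash[:5], hash[5:]

	req, err := http.NewRequest("GET", "https://api.pwnedpasswords.com/range/"+prefix, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Add-Padding", "true") // ask for a padded response
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		// Each line is "<35 hex chars>:<count>"; padding entries have a count of 0.
		parts := strings.SplitN(strings.TrimSpace(scanner.Text()), ":", 2)
		if len(parts) == 2 && parts[0] == suffix && parts[1] != "0" {
			return strconv.Atoi(parts[1])
		}
	}
	return 0, scanner.Err()
}

func main() {
	n, err := pwnedCount("password1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("seen %d times in breached data\n", n)
}
```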

An example query for FFFFF can be seen below, with the structure represented:

[Figure: example response for the range FFFFF, with the hash-suffix and count structure annotated]

Without padding, the message length varies given the amount of hash suffixes in the bucket that is queried. It is known that it is possible to fingerprint TLS traffic based on the encrypted message length – fortunately this padding can be inserted in the API responses themselves (in the HTTP content). We can see the difference in download size between two unpadded buckets by running:

$ curl -so /dev/null https://api.pwnedpasswords.com/range/E0812 -w '%{size_download} bytes\n'
17022 bytes
$ curl -so /dev/null https://api.pwnedpasswords.com/range/834EF -w '%{size_download} bytes\n'
25118 bytes

The randomised padding entries can be identified by their “:0” suffix (indicating a usage count of zero); in the example below, the top three entries are real entries whilst the last three represent padding data:

FF1A63ACC70BEA924C5DBABEE4B9B18C82D:10
FF8A0382AA9C8D9536EFBA77F261815334D:12
FFEE791CBAC0F6305CAF0CEE06BBE131160:2
2F811DCB8FF6098B838DDED4D478B0E4032:0
A1BABA501C55ACB6BDDC6D150CF585F20BE:0
9F31397459FF46B347A376F58506E420A58:0

Compression and Randomisation

Cloudflare supports both GZip and Brotli for compression. Compression benefits the Pwned Passwords API as responses are hexadecimal data represented in ASCII. That said, compression is somewhat limited given the Avalanche Effect in hashing algorithms (a small change in an input results in a completely different hash output): each range searched covers dramatically different input passwords, and the remaining 35 characters of their SHA-1 hashes are similarly different, with no expected similarity between them.

Accordingly, if one were simply to pad messages with a constant filler (say “000…”), compression could mean that responses padded to the same length could still be differentiated once compressed. Similarly, even without compression, padding messages with identical data could still permit credible attacks.

Instead, padding is generated from randomly generated entries. In order not to break clients, the padding is made to look like legitimate hash suffixes. It is still possible, however, to identify these entries as randomised padding: as the Pwned Passwords API includes a count field (a colon after the hex suffix, followed by a numerical count), randomised entries can be distinguished by their usage count of 0.
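The effect is easy to reproduce locally. The short Python sketch below (illustrative only, using the standard-library zlib module and made-up bucket sizes) compresses a synthetic bucket padded with a constant filler and the same bucket padded with random suffixes; the constant padding compresses away almost entirely, while the random padding does not:

import os
import zlib

def fake_suffix():
    # A random 35-character hex string, shaped like a real hash suffix.
    return os.urandom(18).hex().upper()[:35]

real_entries = "\n".join(fake_suffix() + ":" + str(i + 1) for i in range(500))
constant_padding = "\n".join("0" * 35 + ":0" for _ in range(300))
random_padding = "\n".join(fake_suffix() + ":0" for _ in range(300))

print(len(zlib.compress((real_entries + "\n" + constant_padding).encode())))
print(len(zlib.compress((real_entries + "\n" + random_padding).encode())))
# The first size is barely larger than the unpadded bucket; the second grows
# roughly in proportion to the amount of padding added.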

Lava Lamps and Workers

I’ve written before about the cache optimisation of Pwned Passwords (including the use of Cloudflare Workers). Cloudflare Workers has an additional benefit: Workers run before elements are pulled from cache.

This allows the randomised entries to be generated dynamically on a request-by-request basis instead of being cached. The resulting padding can therefore differ from request to request, and with it the number of entries in a given response and the size of that response.

Cloudflare Workers supports the Web Crypto API, which exposes a cryptographically sound random number generator. This generator is used to decide the variable amount of padding added to each response. Whilst a cryptographically secure random number generator determines the amount of padding, the random hexadecimal padding itself does not need to be indistinguishable from real hashes, so for computational performance we use the non-cryptographically-secure Math.random() to generate the actual content of the padding.
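The production logic lives in a Worker written in JavaScript, but the division of labour is easy to express as a Python sketch (assumed names and bucket sizes, for illustration only): a CSPRNG picks how much padding to add, while a faster non-cryptographic generator fills in its contents:

import random
import secrets

HEX_DIGITS = "0123456789ABCDEF"

def padding_entries(real_count, minimum=800, extra=200):
    # The CSPRNG decides the padded size of the bucket (800-1000 lines in total).
    target = minimum + secrets.randbelow(extra + 1)
    needed = max(target - real_count, 0)
    # The contents only need to look like hash suffixes, so a fast
    # non-cryptographic PRNG is good enough here.
    return [
        "".join(random.choice(HEX_DIGITS) for _ in range(35)) + ":0"
        for _ in range(needed)
    ]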

Famously, one of the sources of entropy used in Cloudflare servers is sourced from lava lamps. By filming a wall of lava lamps in our San Francisco office (with individual photoreceptors picking up random noise beyond the movement of the lava), we are able to generate random seed data used in servers (complemented by other sources of entropy along the way). This lava lamp entropy is used alongside the randomness sources on individual servers, and it seeds the cryptographically secure pseudorandom number generator (CSPRNG) algorithms used when generating random numbers. The Cloudflare Workers runtime uses the V8 engine for JavaScript, with randomness sourced from /dev/urandom on the server itself.

Each response is padded to a minimum of 800 hash suffixes, plus a randomly generated amount of additional padding (up to a further 200 entries).

This can be seen in two ways. Firstly, repeating the same request to the same endpoint (with the underlying response cached) yields a randomised number of lines, between 800 and 1,000:

$ for run in {1..10}; do curl -s -H Add-Padding:true https://api.pwnedpasswords.com/range/FFFFF | wc -l; done
     831
     956
     870
     980
     932
     868
     856
     961
     912
     827

Secondly, we can see a randomised download size in each response:

$ for run in {1..10}; do curl -so /dev/null -H Add-Padding:true https://api.pwnedpasswords.com/range/FFFFF -w '%{size_download} bytes\n'; done
35572 bytes
37358 bytes
38194 bytes
33596 bytes
32304 bytes
37168 bytes
32532 bytes
37928 bytes
35154 bytes
33178 bytes

Future Work and Conclusion

There has been a considerable amount of research that complements the anonymity approach in Pwned Passwords. For example, Google and Stanford have written a paper about their approach implemented in Google Password Checkup, “Protecting accounts from credential stuffing with password breach alerting” [Usenix].

We have done a significant amount of work exploring more advanced protocols for Pwned Passwords; some of this work can be found in a paper we worked on with academics at Cornell University, “Protocols for Checking Compromised Credentials” [ACM or arXiv preprint]. This research offers two new protocols (FSB, frequency smoothing bucketization, and IDB, identifier-based bucketization) to further reduce information leakage in the APIs.

Further work is needed before these protocols reach the production-worthiness we’d like for them to ship – but, as always, we’ll keep you updated here on our blog.

Half a Million IoT Device Passwords Published

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/half_a_million_.html

It’s a list of easy-to-guess passwords for IoT devices on the Internet as recently as last October and November. Useful for anyone putting together a bot network:

A hacker has published this week a massive list of Telnet credentials for more than 515,000 servers, home routers, and IoT (Internet of Things) “smart” devices.

The list, which was published on a popular hacking forum, includes each device’s IP address, along with a username and password for the Telnet service, a remote access protocol that can be used to control devices over the internet.

According to experts to whom ZDNet spoke this week, and a statement from the leaker himself, the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess, password combinations.

Chrome Extension Stealing Cryptocurrency Keys and Passwords

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/chrome_extensio.html

A malicious Chrome extension surreptitiously steals Ethereum keys and passwords:

According to Denley, the extension is dangerous to users in two ways. First, any funds (ETH coins and ERC20-based tokens) managed directly inside the extension are at risk.

Denley says that the extension sends the private keys of all wallets created or managed through its interface to a third-party website located at erc20wallet[.]tk.

Second, the extension also actively injects malicious JavaScript code when users navigate to five well-known and popular cryptocurrency management platforms. This code steals login credentials and private keys, data that is then sent to the same erc20wallet[.]tk third-party website.

Another example of how blockchain requires many single points of trust in order to be secure.

Iranian Attacks on Industrial Control Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/iranian_attacks.html

New details:

At the CyberwarCon conference in Arlington, Virginia, on Thursday, Microsoft security researcher Ned Moran plans to present new findings from the company’s threat intelligence group that show a shift in the activity of the Iranian hacker group APT33, also known by the names Holmium, Refined Kitten, or Elfin. Microsoft has watched the group carry out so-called password-spraying attacks over the past year that try just a few common passwords across user accounts at tens of thousands of organizations. That’s generally considered a crude and indiscriminate form of hacking. But over the last two months, Microsoft says APT33 has significantly narrowed its password spraying to around 2,000 organizations per month, while increasing the number of accounts targeted at each of those organizations almost tenfold on average.

[…]

The hackers’ motivation — and which industrial control systems they’ve actually breached — remains unclear. Moran speculates that the group is seeking to gain a foothold to carry out cyberattacks with physically disruptive effects. “They’re going after these producers and manufacturers of control systems, but I don’t think they’re the end targets,” says Moran. “They’re trying to find the downstream customer, to find out how they work and who uses them. They’re looking to inflict some pain on someone’s critical infrastructure that makes use of these control systems.”

It’s unclear whether the attackers are causing any actual damage, or just gaining access for some future use.

Cracking the Passwords of Early Internet Pioneers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/cracking_the_pa.html

Lots of them weren’t very good:

Unix co-inventor Dennis Ritchie, for instance, used “dmac” (his middle name was MacAlistair); Stephen R. Bourne, creator of the Bourne shell command line interpreter, chose “bourne”; Eric Schmidt, an early developer of Unix software and now the executive chairman of Google parent company Alphabet, relied on “wendy!!!” (the name of his wife); and Stuart Feldman, author of Unix automation tool make and the first Fortran compiler, used “axolotl” (the name of a Mexican salamander).

Weakest of all was the password for Unix contributor Brian W. Kernighan: “/.,/.,” representing a three-character string repeated twice using adjacent keys on a QWERTY keyboard. (None of the passwords included the quotation marks.)

I don’t remember any of my early passwords, but they probably weren’t much better.

Cracking Forgotten Passwords

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/cracking_forgot.html

Expandpass is a string expansion program. It’s “useful for cracking passwords you kinda-remember.” You tell the program what you remember about the password and it tries related passwords.

I learned about it in this article about Phil Dougherty, who helps people recover lost cryptocurrency passwords (mostly Ethereum) for a cut of the recovered value.

Risks of Password Managers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/risks_of_passwo.html

Stuart Schechter writes about the security risks of using a password manager. It’s a good piece, and nicely discusses the trade-offs around password managers: which one to choose, which passwords to store in it, and so on.

My own Password Safe is mentioned. My particular choice about security and risk is to store passwords only on my computer (not on my phone) and not to put anything in the cloud. In my way of thinking, that reduces the risks of a password manager considerably. Yes, there are losses in convenience.

How to securely provide database credentials to Lambda functions by using AWS Secrets Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/

As a solutions architect at AWS, I often assist customers in architecting and deploying business applications using APIs and microservices that rely on serverless services such as AWS Lambda and database services such as Amazon Relational Database Service (Amazon RDS). Customers can take advantage of these fully managed AWS services to unburden their teams from infrastructure operations and other undifferentiated heavy lifting, such as patching, software maintenance, and capacity planning.

In this blog post, I’ll show you how to use AWS Secrets Manager to secure your database credentials and send them to Lambda functions that will use them to connect to and query the backend database service Amazon RDS—without hardcoding the secrets in code or passing them through environment variables. This approach will help you secure last-mile secrets and protect your backend databases. Long-lived credentials need to be managed and regularly rotated to keep access to critical systems secure, so it’s a security best practice to periodically reset your passwords. Manually changing the passwords would be cumbersome, but AWS Secrets Manager helps by managing and rotating the RDS database passwords.

Solution overview

This is sample code: you’ll use an AWS CloudFormation template to deploy the following components to test the API endpoint from your browser:

  • An RDS MySQL database instance on a db.t2.micro instance
  • Two Lambda functions with necessary IAM roles and IAM policies, including access to AWS Secrets Manager:
    • LambdaRDSCFNInit: This Lambda function will execute immediately after the CloudFormation stack creation. It will create an “Employees” table in the database, where it will insert three sample records.
    • LambdaRDSTest: This function will query the Employees table and return the record count in an HTML string format
  • RESTful API with “GET” method on AWS API Gateway

Here’s the high-level setup of the AWS services that will be created from the CloudFormation stack deployment:

Figure 1: Solution architecture

  1. Clients call the RESTful API hosted on AWS API Gateway
  2. The API Gateway executes the Lambda function
  3. The Lambda function retrieves the database secrets using the Secrets Manager API
  4. The Lambda function connects to the RDS database using database secrets from Secrets Manager and returns the query results

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/AWS-SecretsManager-Lambda-RDS-blog.

Deploying the sample solution

Set up the sample deployment by selecting the Launch Stack button below. If you haven’t logged into your AWS account, follow the prompts to log in.

By default, the stack will be deployed in the us-east-1 region. If you want to deploy this stack in any other region, download the code from the above GitHub link, place the Lambda code zip file in a region-specific S3 bucket and make the necessary changes in the CloudFormation template to point to the right S3 bucket. (Please refer to the AWS CloudFormation User Guide for additional details on how to create stacks using the AWS CloudFormation console.)
 
Select this image to open a link that starts building the CloudFormation stack

Next, follow these steps to execute the stack:

  1. Leave the default location for the template and select Next.
     
    Figure 2: Keep the default location for the template

  2. On the Specify Details page, you’ll see the parameters pre-populated. These parameters include the name of the database and the database user name. Select Next on this screen.
     
    Figure 3: Parameters on the “Specify Details” page

  3. On the Options screen, select the Next button.
  4. On the Review screen, select both check boxes, then select the Create Change Set button:
     
    Figure 4: Select the check boxes and “Create Change Set”

  5. After the change set creation is completed, choose the Execute button to launch the stack.
  6. Stack creation will take between 10 and 15 minutes. After the stack is created successfully, select the Outputs tab of the stack, then select the link.
     
    Figure 5: Select the link on the “Outputs” tab

    This action will trigger the code in the Lambda function, which will query the “Employees” table in the MySQL database and return the result count back to the API. You’ll see the following screen as output from the RESTful API endpoint:
     

    Figure 6: Output from the RESTful API endpoint

At this point, you’ve successfully deployed and tested the API endpoint with a backend Lambda function and RDS resources. The Lambda function is able to successfully query the MySQL RDS database and is able to return the results through the API endpoint.

What’s happening in the background?

The CloudFormation stack deployed a MySQL RDS database with a randomly generated password using a secret resource. Now that the secret resource with the randomly generated password has been created, the CloudFormation stack uses a dynamic reference to resolve the value of the password from Secrets Manager in order to create the RDS instance resource. Dynamic references provide a compact, powerful way for you to specify external values that are stored and managed in other AWS services, such as Secrets Manager. The dynamic reference guarantees that CloudFormation will not log or persist the resolved value, keeping the database password safe. The CloudFormation template also creates a Lambda function to automatically rotate the password for the MySQL RDS database every 30 days. Native credential rotation can improve security posture, as it eliminates the need to manually handle database passwords through the lifecycle process.

Below is the CloudFormation code that covers these details:


#This is a Secret resource with a randomly generated password in its SecretString JSON.
MyRDSInstanceRotationSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
        Description: 'This is my rds instance secret'
        GenerateSecretString:
            SecretStringTemplate: !Sub '{"username": "${RDSUserName}"}'
            GenerateStringKey: 'password'
            PasswordLength: 16
            ExcludeCharacters: '"@/\'
        Tags:
            -
                Key: AppName
                Value: MyApp

#This is an RDS instance resource. Its master username and password use dynamic references to resolve values from
#SecretsManager. The dynamic reference guarantees that CloudFormation will not log or persist the resolved value.
#We use a ref to the Secret resource logical id in order to construct the dynamic reference, since the Secret name is
#being generated by CloudFormation.
MyDBInstance2:
    Type: AWS::RDS::DBInstance
    Properties:
        AllocatedStorage: 20
        DBInstanceClass: db.t2.micro
        DBName: !Ref RDSDBName
        Engine: mysql
        MasterUsername: !Ref RDSUserName
        MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref MyRDSInstanceRotationSecret, ':SecretString:password}}' ]]
        MultiAZ: False
        PubliclyAccessible: False
        StorageType: gp2
        DBSubnetGroupName: !Ref myDBSubnetGroup
        VPCSecurityGroups:
            - !Ref RDSSecurityGroup
        BackupRetentionPeriod: 0
        DBInstanceIdentifier: 'rotation-instance'

#This is a SecretTargetAttachment resource which updates the referenced Secret resource with properties about
#the referenced RDS instance.
SecretRDSInstanceAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
        SecretId: !Ref MyRDSInstanceRotationSecret
        TargetId: !Ref MyDBInstance2
        TargetType: AWS::RDS::DBInstance

#This is a RotationSchedule resource. It configures rotation of the password for the referenced secret using a rotation lambda.
#The first rotation happens at resource creation time, with subsequent rotations scheduled according to the rotation rules.
#We explicitly depend on the SecretTargetAttachment resource being created to ensure that the secret contains all the
#information necessary for rotation to succeed.
MySecretRotationSchedule:
    Type: AWS::SecretsManager::RotationSchedule
    DependsOn: SecretRDSInstanceAttachment
    Properties:
        SecretId: !Ref MyRDSInstanceRotationSecret
        RotationLambdaARN: !GetAtt MyRotationLambda.Arn
        RotationRules:
            AutomaticallyAfterDays: 30

#This is a lambda Function resource. We will use this lambda to rotate secrets.
#For details about rotation lambdas, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
#The example below assumes that the lambda code has been uploaded to an S3 bucket, and that it will rotate a MySQL database password.
MyRotationLambda:
    Type: AWS::Serverless::Function
    Properties:
        Runtime: python2.7
        Role: !GetAtt MyLambdaExecutionRole.Arn
        Handler: mysql_secret_rotation.lambda_handler
        Description: 'This is a lambda to rotate MySql user passwd'
        FunctionName: 'cfn-rotation-lambda'
        CodeUri: 's3://devsecopsblog/code.zip'
        Environment:
            Variables:
                SECRETS_MANAGER_ENDPOINT: !Sub 'https://secretsmanager.${AWS::Region}.amazonaws.com'

Verifying the solution

To be certain that everything is set up properly, you can look at the Lambda code that’s querying the database table by following the steps below:

  1. Go to the AWS Lambda service page
  2. From the list of Lambda functions, click on the function with the name scm2-LambdaRDSTest-…
  3. You can see the environment variables at the bottom of the Lambda Configuration details screen. Notice that there should be no database password supplied as part of these environment variables:
     
    Figure 7: Environment variables

    
        import sys
        import base64
        import pymysql
        import boto3
        import botocore
        import json
        import random
        import time
        import os
        from botocore.exceptions import ClientError
        
        # rds settings
        rds_host = os.environ['RDS_HOST']
        name = os.environ['RDS_USERNAME']
        db_name = os.environ['RDS_DB_NAME']
        helperFunctionARN = os.environ['HELPER_FUNCTION_ARN']
        
        secret_name = os.environ['SECRET_NAME']
        my_session = boto3.session.Session()
        region_name = my_session.region_name
        conn = None
        
        # Get the service resource.
        lambdaClient = boto3.client('lambda')
        
        
        def invokeConnCountManager(incrementCounter):
            # return True
            response = lambdaClient.invoke(
                FunctionName=helperFunctionARN,
                InvocationType='RequestResponse',
                Payload='{"incrementCounter":' + str.lower(str(incrementCounter)) + ',"RDBMSName": "Prod_MySQL"}'
            )
            retVal = response['Payload']
            retVal1 = retVal.read()
            return retVal1
        
        
        def openConnection():
            print("In Open connection")
            global conn
            password = "None"
            # Create a Secrets Manager client
            session = boto3.session.Session()
            client = session.client(
                service_name='secretsmanager',
                region_name=region_name
            )
            
            # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
            # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
            # We rethrow the exception by default.
            
            try:
                get_secret_value_response = client.get_secret_value(
                    SecretId=secret_name
                )
                # Avoid logging get_secret_value_response here: it contains the plaintext secret.
            except ClientError as e:
                print(e)
                if e.response['Error']['Code'] == 'DecryptionFailureException':
                    # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                    # An error occurred on the server side.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidParameterException':
                    # You provided an invalid value for a parameter.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidRequestException':
                    # You provided a parameter value that is not valid for the current state of the resource.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                    # We can't find the resource that you asked for.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
            else:
                # Decrypts secret using the associated KMS CMK.
                # Depending on whether the secret is a string or binary, one of these fields will be populated.
                if 'SecretString' in get_secret_value_response:
                    secret = get_secret_value_response['SecretString']
                    j = json.loads(secret)
                    password = j['password']
                else:
                    # The secret is stored as binary; decode it and parse the same JSON structure.
                    # (Avoid printing the decoded value, since it contains the password.)
                    decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                    password = json.loads(decoded_binary_secret)['password']
            
            try:
                if conn is None:
                    conn = pymysql.connect(
                        host=rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
                elif not conn.open:
                    conn = pymysql.connect(
                        host=rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
        
            except Exception as e:
                print (e)
                print("ERROR: Unexpected error: Could not connect to MySql instance.")
                raise e
        
        
        def lambda_handler(event, context):
            if invokeConnCountManager(True) == "false":
                print ("Not enough Connections available.")
                return False
        
            item_count = 0
            try:
                openConnection()
                # Introducing artificial random delay to mimic actual DB query time. Remove this code for actual use.
                time.sleep(random.randint(1, 3))
                with conn.cursor() as cur:
                    cur.execute("select * from Employees")
                    for row in cur:
                        item_count += 1
                        print(row)
                        # print(row)
            except Exception as e:
                # Error while opening connection or processing
                print(e)
            finally:
                print("Closing Connection")
                if(conn is not None and conn.open):
                    conn.close()
                invokeConnCountManager(False)
        
            content =  "Selected %d items from RDS MySQL table" % (item_count)
            response = {
                "statusCode": 200,
                "body": content,
                "headers": {
                    'Content-Type': 'text/html',
                }
            }
            return response        
        

In the AWS Secrets Manager console, you can also look at the new secret that was created by the CloudFormation execution, by following the steps below:

  1. Go to the AWS Secrets Manager service page with appropriate IAM permissions
  2. From the list of secrets, click on the latest secret with the name MyRDSInstanceRotationSecret-…
  3. You will see the secret details and rotation information on the screen, as shown in the following screenshot:
     
    Figure 8: Secret details and rotation information

Conclusion

In this post, I showed you how to manage database secrets using AWS Secrets Manager and how to leverage Secrets Manager’s API to retrieve the secrets into a Lambda execution environment to improve database security and protect sensitive data. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit Secrets Manager documentation.

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Secrets Manager Forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ramesh Adabala

Ramesh is a Solution Architect on the Southeast Enterprise Solution Architecture team at AWS.