Tag Archives: cybersecurity

AWS renews its GNS Portugal certification for classified information with 66 services

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-renews-its-gns-portugal-certification-for-classified-information-with-66-services/

Amazon Web Services (AWS) announces that it has successfully renewed the Portuguese GNS (Gabinete Nacional de Segurança, National Security Cabinet) certification in the AWS Regions and edge locations in the European Union. This accreditation confirms that AWS cloud infrastructure, security controls, and operational processes adhere to the stringent requirements set forth by the Portuguese government for handling classified information at the National Reservado level (equivalent to the NATO Restricted level).

The GNS certification is based on the NIST SP 800-53 Rev. 5 and CSA CCM v4 frameworks. It demonstrates AWS’s commitment to providing the most secure cloud services to public-sector customers, particularly those with the most demanding security and compliance needs. By achieving this certification, AWS has demonstrated its ability to safeguard classified data up to the Reservado (Restricted) level, in accordance with the Portuguese government’s rigorous security standards.

AWS was evaluated by an authorized and independent third-party auditor, Adyta Lda, and by the Portuguese GNS itself. With the GNS certification, AWS customers in Portugal, including public sector organizations and defense contractors, can now use the full extent of AWS cloud services to handle national restricted information. This enables these customers to take advantage of AWS scalability, reliability, and cost-effectiveness, while safeguarding data in alignment with GNS standards.

We’re happy to announce the addition of 40 services to the scope of our GNS certification, for a new total of 66 services in scope. To view the complete list of services included in the scope, see the AWS Services in Scope by Compliance Program – GNS National Restricted Certification page.

The Certificate of Compliance illustrating the compliance status of AWS is available on the GNS Certifications page and through AWS Artifact.

For more information about GNS, see the AWS Compliance page GNS National Restricted Certification.

If you have feedback about this post, submit comments in the Comments section below.
 

Daniel Fuertes

Daniel is a Security Audit Program Manager at AWS, based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain, Portugal, and other EMEA countries. Daniel has ten years of experience in security assurance and compliance, including previous experience as an auditor for the PCI DSS security framework. He also holds the CISSP, PCIP, and ISO 27001 Lead Auditor certifications.

Python Developers Targeted with Malware During Fake Job Interviews

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/python-developers-targeted-with-malware-during-fake-job-interviews.html

Interesting social engineering attack: luring potential job applicants with fake recruiting pitches, trying to convince them to download malware. From a news article:

These particular attacks from North Korean state-funded hacking team Lazarus Group are new, but the overall malware campaign against the Python development community has been running since at least August of 2023, when a number of popular open source Python tools were maliciously duplicated with added malware. Now, though, there are also attacks involving “coding tests” that only exist to get the end user to install hidden malware on their system (cleverly hidden with Base64 encoding) that allows remote execution once present. The capacity for exploitation at that point is pretty much unlimited, due to the flexibility of Python and how it interacts with the underlying OS.
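As a defensive illustration, the exec-of-Base64 obfuscation pattern described above is crude enough to flag with a simple source scan. A minimal sketch (the regex and function name are ours, and a real scanner would need many more patterns than this one):

```python
import re

# Heuristic for the obfuscation pattern described in the article:
# Base64-decoding a payload and feeding it straight to exec/eval.
SUSPICIOUS = re.compile(r"(exec|eval)\s*\(\s*.*b64decode", re.DOTALL)

def looks_obfuscated(source: str) -> bool:
    """Flag source code that execs/evals a Base64-decoded blob."""
    return bool(SUSPICIOUS.search(source))
```

A scanner like this catches only the laziest variants; attackers can trivially split strings or add indirection, which is why behavioral detection matters more than pattern matching.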

On the Cyber Safety Review Board

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/08/a-better-investigatory-board-for-cyber-incidents.html

When an airplane crashes, impartial investigatory bodies leap into action, empowered by law to unearth what happened and why. But there is no such empowered and impartial body to investigate CrowdStrike’s faulty update that recently unfolded, ensnarling banks, airlines, and emergency services to the tune of billions of dollars. We need one. To be sure, there is the White House’s Cyber Safety Review Board. On March 20, the CSRB released a report into last summer’s intrusion by a Chinese hacking group into Microsoft’s cloud environment, where it compromised the U.S. Department of Commerce, State Department, congressional offices, and several associated companies. But the board’s report—well-researched and containing some good and actionable recommendations—shows how it suffers from its lack of subpoena power and its political unwillingness to generalize from specific incidents to the broader industry.

Some background: The CSRB was established in 2021, by executive order, to provide an independent analysis and assessment of significant cyberattacks against the United States. The goal was to pierce the corporate confidentiality that often surrounds such attacks and to provide the entire security community with lessons and recommendations. The more we all know about what happened, the better we can all do next time. It’s the same thinking that led to the formation of the National Transportation Safety Board, but for cyberattacks and not plane crashes.

But the board immediately failed to live up to its mission. It was founded in response to the Russian cyberattack on the U.S. known as SolarWinds. Although it was specifically tasked with investigating that incident, it did not—for reasons that remain unclear.

So far, the board has published three reports, and the first two offered only simplistic recommendations. In the first investigation, on Log4j, the CSRB exhorted companies to patch their systems faster and more often. In the second, on Lapsus$, the CSRB told organizations not to use SMS-based two-factor authentication (it’s vulnerable to SIM-swapping attacks). These two recommendations are basic cybersecurity hygiene, not something we need an investigation to tell us.

The most recent report—on China’s penetration of Microsoft—is much better. This time, the CSRB gave us an extensive analysis of Microsoft’s security failures and placed blame for the attack’s success squarely on their shoulders. Its recommendations were also more specific and extensive, addressing Microsoft’s board and leaders specifically and the industry more generally. The report describes how Microsoft stopped rotating cryptographic keys in early 2021, reducing the security of the systems affected in the hack. The report suggests that if the company had set up an automated or manual key rotation system, or a way to alert teams about the age of their keys, it could have prevented the attack on its systems. The report also looked at how Microsoft’s competitors—think Google, Oracle, and Amazon Web Services—handle this issue, offering insights on how similar companies avoid mistakes.

Yet there are still problems, with the report itself and with the environment in which it was produced.

First, the public report cites a large number of anonymous sources. While the report lays blame for the breach on Microsoft’s lax security culture, it is actually quite deferential to Microsoft; it makes special mention of the company’s cooperation. If the board needed to make trades to get information that would only be provided if people were given anonymity, this should be laid out more explicitly for the sake of transparency. More importantly, the board seems to have conflict-of-interest issues arising from the fact that the investigators are corporate executives and heads of government agencies who have full-time jobs.

Second: Unlike the NTSB, the CSRB lacks subpoena power. This is, at least in part, out of fear that the conflicted tech executives and government employees would use the power in an anticompetitive fashion. As a result, the board must rely on wheedling and cooperation for its fact-finding. While the DHS press release said, “Microsoft fully cooperated with the Board’s review,” the next company may not be nearly as cooperative, and we do not know what was not shared with the CSRB.

One of us, Tarah, recently testified on this topic before the U.S. Senate’s Homeland Security and Governmental Affairs Committee, and the senators asking questions seemed genuinely interested in how to fix the CSRB’s extreme slowness and lack of transparency in the two reports they’d issued so far.

It’s a hard task. The CSRB’s charter comes from Executive Order 14028, which is why—unlike the NTSB—it doesn’t have subpoena power. Congress needs to codify the CSRB in law and give it the subpoena power it so desperately needs.

Additionally, the CSRB’s reports don’t provide useful guidance going forward. For example, the Microsoft report provides no mapping of the company’s security problems to any government standards that could have prevented them. In this case, the problem is that there are no standards overseen by NIST—the organization in charge of cybersecurity standards—for key rotation. It would have been better for the report to say that explicitly. The cybersecurity industry needs NIST standards to give us a compliance floor below which any organization is explicitly failing to provide due care. The report condemns Microsoft for not rotating an internal encryption key for seven years, when its internal standard was four years. However, for the last several years, automated key rotation on the order of once a month or even more frequently has become the expected industry guideline.

A guideline, however, is not a standard or regulation. It’s just a strongly worded suggestion. In this specific case, the report doesn’t offer guidance on how often keys should be rotated. In essence, the CSRB report said that Microsoft should feel very bad about not rotating its keys more often—but did not explain the logic, give an actual baseline for how often keys should be rotated, or provide any statistical or survey data to support why that timeline is appropriate. Automated certificate rotation, such as that provided by the free public service Let’s Encrypt, has revolutionized encrypted-by-default communications, and expectations in the cybersecurity industry have risen to match. Unfortunately, the report only discusses Microsoft’s proprietary keys by brand name, instead of having a larger discussion of why public key infrastructure exists or what best practices should be.
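To make the missing baseline concrete: even a minimal key-age audit has to hard-code the very number the report never supplies. In this sketch the 30-day baseline is our illustrative assumption, not a NIST figure (none exists, which is the essay’s point):

```python
from datetime import datetime, timedelta, timezone

# Illustrative baseline only -- no NIST standard currently pins this down.
ROTATION_BASELINE = timedelta(days=30)

def keys_overdue(key_created_at, now=None):
    """Return the IDs of keys older than the rotation baseline.

    key_created_at maps key ID -> creation timestamp (timezone-aware).
    """
    now = now or datetime.now(timezone.utc)
    return sorted(kid for kid, created in key_created_at.items()
                  if now - created > ROTATION_BASELINE)
```

An audit this simple would have flagged Microsoft’s seven-year-old key on day 31; the hard part is not the check but agreeing on the number.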

More generally, because the CSRB reports so far have failed to generalize their findings with transparent and thorough research that provides real standards and expectations for the cybersecurity industry, we—policymakers, industry leaders, the U.S. public—find ourselves filling in the gaps. Individual experts are having to provide anecdotal and individualized interpretations of what their investigations might imply for companies simply trying to learn what their actual due care responsibilities are.

It’s as if no one is sure whether boiling your drinking water or nailing a horseshoe up over the door is statistically more likely to decrease the incidence of cholera. Sure, a lot of us think that boiling your water is probably best, but no one is saying that with real science. No one is saying how long you have to boil your water, or whether some water sources are more likely to carry illness. And until there are real numbers and general standards, our educated opinions are on an equal footing with horseshoes and hope.

It should not be the job of cybersecurity experts, even us, to generate lessons from CSRB reports based on our own opinions. This is why we continue to ask the CSRB to provide generalizable standards that either are based on or call for NIST standardization. We want prescriptive and descriptive reports of incidents: see, for example, the UK National Audit Office report on the WannaCry ransomware, which remains a gold standard of government cybersecurity incident investigation reports.

We need and deserve more than one-off anecdotes about how one company didn’t do security well and should do it better in the future. Let’s start treating cybersecurity like the equivalent of public safety and get some real lessons learned.

This essay was written with Tarah Wheeler, and was published on Defense One.

Providing Security Updates to Automobile Software

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/07/providing-security-updates-to-automobile-software.html

Auto manufacturers are just starting to realize the problems of supporting the software in older models:

Today’s phones are able to receive updates six to eight years after their purchase date. Samsung and Google provide Android OS updates and security updates for seven years. Apple halts servicing products seven years after they stop selling them.

That might not cut it in the auto world, where the average age of cars on US roads is only going up. A recent report found that cars and trucks just reached a new record average age of 12.6 years, up two months from 2023. That means the car software hitting the road today needs to work—and maybe even improve—beyond 2036. The average length of smartphone ownership, by contrast, is just 2.8 years.
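The “beyond 2036” horizon follows directly from the excerpt’s numbers:

```python
# Average fleet age from the cited report, applied to a car sold today.
purchase_year = 2024
average_fleet_age_years = 12.6  # record average age of US cars and trucks

# On average, software shipping in a new car must keep working until:
support_horizon = purchase_year + average_fleet_age_years  # 2036.6
```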

I wrote about this in 2018, in Click Here to Kill Everything, talking about patching as a security mechanism:

This won’t work with more durable goods. We might buy a new DVR every 5 or 10 years, and a refrigerator every 25 years. We drive a car we buy today for a decade, sell it to someone else who drives it for another decade, and that person sells it to someone who ships it to a Third World country, where it’s resold yet again and driven for yet another decade or two. Go try to boot up a 1978 Commodore PET computer, or try to run that year’s VisiCalc, and see what happens; we simply don’t know how to maintain 40-year-old [consumer] software.

Consider a car company. It might sell a dozen different types of cars with a dozen different software builds each year. Even assuming that the software gets updated only every two years and the company supports the cars for only two decades, the company needs to maintain the capability to update 20 to 30 different software versions. (For a company like Bosch that supplies automotive parts for many different manufacturers, the number would be more like 200.) The expense and warehouse size for the test vehicles and associated equipment would be enormous. Alternatively, imagine if car companies announced that they would no longer support vehicles older than five, or ten, years. There would be serious environmental consequences.

We really don’t have a good solution here. Agile updating is how we maintain security in a world where new vulnerabilities arise all the time and where we lack the economic incentive to secure things properly from the start.

How to Identify and Prevent Social Engineering Attacks

Post Syndicated from Editor original https://nebosystems.eu/how-to-identify-and-prevent-social-engineering-attacks/

At Nebosystems, we’re excited to introduce our latest video, which dives deep into the world of social engineering. This common tactic used by cybercriminals involves manipulating individuals into revealing confidential information. Unlike traditional hacking, social engineering exploits human interaction and psychological manipulation. Watch our video to learn how to recognize and protect yourself from these sophisticated attacks.

What is Social Engineering?

Social engineering is the art of manipulating people into divulging sensitive information through deception. Unlike technical hacking, it exploits human vulnerabilities rather than software flaws.

Common Types of Social Engineering Attacks

In the video, we cover several common types of social engineering attacks:

  • Phishing: Sending fraudulent emails that appear to be from reputable sources to steal sensitive information.
  • Pretexting: Creating a fabricated scenario to obtain information, often pretending to be someone in authority.
  • Baiting: Leaving physical devices, like USB sticks infected with malware, in public places to lure victims.
  • Tailgating: Gaining entry to a restricted area by following closely behind an authorized individual.

How to Recognize Social Engineering Attacks

Here are some tips to help you recognize social engineering attacks:

  • Be Skeptical of Unexpected Requests: Be cautious of unexpected requests for personal or financial information.
  • Verify Identities: Always verify the identity of the person or organization requesting information.
  • Look for Red Flags: Watch for poor grammar, urgent language or offers that seem too good to be true.
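Red flags like these are mechanical enough to automate. A toy scorer is sketched below; the patterns are purely illustrative, and production mail filters combine far more signals (SPF/DKIM results, URL reputation, sender history):

```python
import re

# Illustrative patterns for the three red flags above.
RED_FLAGS = {
    "urgent_language": re.compile(r"\b(urgent|immediately|act now|suspended)\b", re.I),
    "credential_request": re.compile(r"\b(password|ssn|card number)\b", re.I),
    "generic_greeting": re.compile(r"^dear (customer|user|sir/madam)", re.I | re.M),
}

def red_flag_score(email_body: str) -> int:
    """Count how many red-flag categories a message trips."""
    return sum(1 for pattern in RED_FLAGS.values() if pattern.search(email_body))
```

A message tripping two or more categories deserves a second look before you click anything.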

How to Protect Yourself from Social Engineering

Protecting yourself from social engineering involves being aware and taking proactive steps:

  • Educate Yourself and Others: Regularly educate yourself and your colleagues about the latest social engineering tactics.
  • Implement Security Protocols: Follow strict security protocols, such as two-factor authentication and regular password changes.
  • Report Suspicious Activity: Report any suspicious activity or potential attacks to your IT department.
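The two-factor authentication mentioned above usually means time-based one-time passwords (TOTP, RFC 6238). A minimal standard-library sketch of how an authenticator app derives a code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, now=None, digits: int = 6, step: int = 30) -> str:
    """Derive an RFC 6238 TOTP code (SHA-1 variant) from a Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test secret (`GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`), `totp(secret, now=59)` yields `287082`, matching the published test vector. Because the code depends on the current time and a shared secret, a phished password alone is not enough to log in.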

For more comprehensive protection, explore our Information Security Services designed to safeguard your digital assets and ensure your business remains secure. Learn more about our services here: Information Security Services.

Stay informed with the latest updates and insights on cybersecurity by visiting our blog: Nebosystems Blog.

For any inquiries or further information, feel free to contact us.

Apple Is Alerting iPhone Users of Spyware Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/07/apple-is-alerting-iphone-users-of-spyware-attacks.html

Not a lot of details:

Apple has issued a new round of threat notifications to iPhone users across 98 countries, warning them of potential mercenary spyware attacks. It’s the second such alert campaign from the company this year, following a similar notification sent to users in 92 nations in April.

Spotting Phishing Emails: Essential Tips from Nebosystems

Post Syndicated from Editor original https://nebosystems.eu/spotting-phishing-emails-essential-tips/

Welcome to Nebosystems! We are excited to announce the release of our first video on YouTube: “How to Identify Phishing Emails.” In today’s digital age, protecting yourself from cyber threats is more important than ever. Phishing emails are one of the most common and dangerous types of cyber attacks, but with the right knowledge, you can easily spot and avoid them.

In this video, you’ll learn:

  • How to recognize suspicious sender information.
  • How to identify urgent or threatening language.
  • Tips for checking suspicious links and attachments.
  • The importance of watching for spelling and grammar mistakes.
  • Why legitimate companies will never ask for sensitive information via email.
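The link check in the third point often comes down to comparing the domain shown in the link text with the host the href actually points to. A hypothetical helper (our own, not from the video):

```python
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """True when the visible link text names a different host than the href."""
    shown = display_text.lower().strip()
    if "://" in shown:
        shown = urlparse(shown).netloc  # strip scheme/path from a displayed URL
    actual = urlparse(href).netloc.lower()
    return bool(shown) and shown != actual
```

Hovering over a link in your mail client performs exactly this comparison by eye: the status bar shows the real destination, the message body shows the claimed one.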

Stay safe online by following these essential tips. If you find this video helpful, don’t forget to like and subscribe to our YouTube channel for more cybersecurity tips and tricks!

Watch the video now:

For more comprehensive protection, explore our Information Security Services designed to safeguard your digital assets and ensure your business remains secure. Learn more about our services here: Information Security Services.

Stay informed with the latest updates and insights on cybersecurity by visiting our blog: Nebosystems Blog.

For any inquiries or further information, feel free to contact us.

Is Ransomware Protection Working?

Post Syndicated from Bozho original https://techblog.bozho.net/is-ransomware-protection-working/

Recently an organization I’m familiar with suffered a ransomware attack. A LockBit ransomware infected a workstation and spread to a few servers. The attack was contained thanks to a quick incident response, but one thing stood out: the organization had a top-notch EDR installed, one in Gartner’s Leaders quadrant for endpoint protection platforms, and it didn’t stop the attack. I won’t mention names, because the point here is not to attack a particular brand. The point is that the attackers created a different build of the LockBit codebase, so the signature-based detection didn’t work and the files on the endpoints were encrypted. Not only that: the attackers downloaded port scanners and executed password spraying on the network in order to pivot to the servers, without much trouble from the EDR.

Ransomware is the most widespread type of attack, the security industry has been trying to cope with such attacks for a long time, and yet in 2024 it’s sufficient to simply rebuild the ransomware from source in order to not get caught. That’s a problem. To add insult to injury, the malware executable was called lockbit.exe_.

I think we need to think of better ways to tackle ransomware. I’m not an antivirus/EDR in-depth expert, but I have been thinking the following: why doesn’t the operating system have built-in anti-ransomware functionality? The malware can stop endpoint agents, but it can’t stop the OS (well, it can stop OS services or modify OS DLLs, I agree). I was thinking of native NTFS listeners to detect ransomware behavior – very fast enumeration and modification of files. Then I read this article, titled “Microsoft Can Fix Ransomware Tomorrow”, which echoed my thoughts. It’s not as simple as the title implies, and the article goes on to argue that it’s hard, but something has to be done.

This may seriously impact the EDR market – if the top threat – ransomware – is automatically contained by built-in functionality, why purchase an EDR? Obviously, there are many other reasons to do so, but every budget cut will start from the EDR. Ransomware has been a successful driver for more investment in security. To prevent putting too much functionality in the OS, EDRs can tap into the detection capability of the OS to take further actions.

But ransomware works in a very simple way. Detecting mass encryption of files is easy. Rate-limiting it is reasonable. CPUs also have AES-specific instructions (AES-NI) that improve the performance of AES encryption – the fact that these instructions are in heavy use may be a further indicator of an ongoing ransomware attack.

I think we have to tap into the lower levels of our software stack – file system, OS, CPU – in order to tackle ransomware. It’s clear that omnipotent endpoint agents don’t exist, no matter how much “AI magic” you sprinkle on top of them in the data sheet (I remember once asking another EDR vendor at a conference how exactly their AI works; my impression from the answer was: it’s rules).

As I said, I’m no AV/EDR expert, and I expect comments like “But we are already doing that”. I’m sure APIs like this one are utilized by AVs/EDRs, but they may be too slow or too heavy to use. If so, it may mean that these APIs can be optimized for the ransomware-detection use case. I don’t have a ready answer (otherwise I’d be an EDR vendor), but I’d welcome a serious discussion on the topic. We can’t be in a situation where purchasing an expensive security tool doesn’t reliably stop the most prominent threat – “off-the-shelf” ransomware.


Using LLMs to Exploit Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/06/using-llms-to-exploit-vulnerabilities.html

Interesting research: “Teams of LLM Agents can Exploit Zero-Day Vulnerabilities.”

Abstract: LLM agents have become increasingly sophisticated, especially in the realm of cybersecurity. Researchers have shown that LLM agents can exploit real-world vulnerabilities when given a description of the vulnerability and toy capture-the-flag problems. However, these agents still perform poorly on real-world vulnerabilities that are unknown to the agent ahead of time (zero-day vulnerabilities).

In this work, we show that teams of LLM agents can exploit real-world, zero-day vulnerabilities. Prior agents struggle with exploring many different vulnerabilities and long-range planning when used alone. To resolve this, we introduce HPTSA, a system of agents with a planning agent that can launch subagents. The planning agent explores the system and determines which subagents to call, resolving long-term planning issues when trying different vulnerabilities. We construct a benchmark of 15 real-world vulnerabilities and show that our team of agents improve over prior work by up to 4.5×.
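The planner-plus-subagents structure the abstract describes can be sketched as below. The classes and the trivial dispatch rule are illustrative stand-ins, not the paper’s actual HPTSA implementation, whose agents are LLMs driving real tools:

```python
from typing import Callable, Dict, Optional

# A subagent takes a target description and reports whether its exploit worked.
SubAgent = Callable[[str], bool]

def make_subagent(vuln_class: str) -> SubAgent:
    def attempt(target: str) -> bool:
        # Stand-in for an LLM specialist: "succeeds" only on its own class.
        return vuln_class in target
    return attempt

class Planner:
    """Explores the target, then tries specialist subagents in turn."""

    def __init__(self, subagents: Dict[str, SubAgent]):
        self.subagents = subagents

    def run(self, target_description: str) -> Optional[str]:
        for vuln_class, agent in self.subagents.items():
            if agent(target_description):
                return vuln_class  # first specialist to succeed
        return None
```

The design point is the separation of concerns: the planner handles exploration and long-range planning, while each subagent only has to be good at one vulnerability class.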

The LLMs aren’t finding new vulnerabilities. They’re exploiting zero-days—which means they are not trained on them—in new ways. So think about this sort of thing combined with another AI that finds new vulnerabilities in code.

These kinds of developments are important to follow, as they are part of the puzzle of a fully autonomous AI cyberattack agent. I talk about this sort of thing more here.

Security and Human Behavior (SHB) 2024

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/06/security-and-human-behavior-shb-2024.html

This week, I hosted the seventeenth Workshop on Security and Human Behavior at the Harvard Kennedy School. This is the first workshop since our co-founder, Ross Anderson, died unexpectedly.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security. The fifty or so attendees include psychologists, economists, computer security researchers, criminologists, sociologists, political scientists, designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Short talks limit presenters’ ability to get into the boring details of their work, and the interdisciplinary audience discourages jargon.

Since the beginning, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways—and has resulted in some new friendships and unexpected collaborations. This is why some of us have been coming back every year for over a decade.

This year’s schedule is here. This page lists the participants and includes links to some of their work. Kami Vaniea liveblogged both days.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, fifteenth and sixteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the sessions. Ross maintained a good webpage of psychology and security resources—it’s still up for now.

Next year we will be in Cambridge, UK, hosted by Frank Stajano.

AWS completes the 2024 Cyber Essentials Plus certification

Post Syndicated from Tariro Dongo original https://aws.amazon.com/blogs/security/aws-completes-the-2024-cyber-essentials-plus-certification/

Amazon Web Services (AWS) is pleased to announce the successful renewal of the United Kingdom Cyber Essentials Plus certification. The Cyber Essentials Plus certificate is valid for one year until March 22, 2025.

Cyber Essentials Plus is a UK Government–backed, industry-supported certification scheme intended to help organizations demonstrate controls against common cyber security threats. An independent third-party auditor certified by Information Assurance for Small and Medium Enterprises (IASME) completed the audit. The scope of our Cyber Essentials Plus certificate covers the AWS corporate network for the United Kingdom, Ireland, and Germany.

AWS compliance status is available on the AWS Cyber Essentials Plus compliance page, and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit a comment in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

 
Want more AWS Security news? Follow us on X.

Tariro Dongo

Tariro is a Security Assurance Program Manager at AWS, based in London. Tari is responsible for third-party and customer audits, attestations, certifications, and assessments across EMEA. Tari has 12 years of experience in security assurance and technology risk, gained at Big Four accounting firms and in the financial services industry.

The art of possible: Three themes from RSA Conference 2024

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/the-art-of-possible-three-themes-from-rsa-conference-2024/


RSA Conference 2024 drew 650 speakers, 600 exhibitors, and thousands of security practitioners from across the globe to the Moscone Center in San Francisco, California from May 6 through 9.

The keynote lineup was diverse, with 33 presentations featuring speakers ranging from WarGames actor Matthew Broderick, to public and private-sector luminaries such as Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly, U.S. Secretary of State Antony Blinken, security technologist Bruce Schneier, and cryptography experts Tal Rabin, Whitfield Diffie, and Adi Shamir.

Topics aligned with this year’s conference theme, “The art of possible,” and focused on actions we can take to revolutionize technology through innovation, while fortifying our defenses against an evolving threat landscape.

This post highlights three themes that caught our attention: artificial intelligence (AI) security, the Secure by Design approach to building products and services, and Chief Information Security Officer (CISO) collaboration.

AI security

Organizations in all industries have started building generative AI applications using large language models (LLMs) and other foundation models (FMs) to enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. So it’s not surprising that AI dominated conversations. Over 100 sessions touched on the topic, and the desire of attendees to understand AI technology and learn how to balance its risks and opportunities was clear.

“Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system.” — CISA

FMs and the applications built around them are often used with highly sensitive business data such as personal data, compliance data, operational data, and financial information to optimize the model’s output. As we explore the advantages of generative AI, protecting highly sensitive data and investments is a top priority. However, many organizations aren’t paying enough attention to security.

A joint generative AI security report released by Amazon Web Services (AWS) and the IBM Institute for Business Value during the conference found that 82% of business leaders view secure and trustworthy AI as essential for their operations, but only 24% are actively securing generative AI models and embedding security processes in AI development. In fact, nearly 70% say innovation takes precedence over security, despite concerns over threats and vulnerabilities (detailed in Figure 1).

Figure 1: Generative AI adoption concerns, Source: IBM Security

Because data and model weights—the numerical values models learn and adjust as they train—are incredibly valuable, organizations need them to stay protected, secure, and private, whether that means restricting access from an organization’s own administrators, customers, or cloud service provider, or protecting data from vulnerabilities in software running in the organization’s own environment.

There is no silver AI-security bullet, but as the report points out, there are proactive steps you can take to start protecting your organization and leveraging AI technology to improve your security posture:

  1. Establish a governance, risk, and compliance (GRC) foundation. Trust in gen AI starts with new security governance models (Figure 2) that integrate and embed GRC capabilities into your AI initiatives, and include policies, processes, and controls that are aligned with your business objectives.

    Figure 2: Updating governance, risk, and compliance models, Source: IBM Security

    In the RSA Conference session AI: Law, Policy, and Common Sense Suggestions to Stay Out of Trouble, digital commerce and gaming attorney Behnam Dayanim highlighted ethical, policy, and legal considerations—including AI-specific regulations—as well as governance structures such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) that can help maximize a successful implementation and minimize potential risk.

  2. Strengthen your security culture. When we think of securing AI, it’s natural to focus on technical measures that can help protect the business. But organizations are made up of people—not technology. Educating employees at all levels of the organization can help avoid preventable harms such as prompt-based risks and unapproved tool use, and foster a resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

    “You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS

  3. Engage with partners. Developing and securing AI solutions requires resources and skills that many organizations lack. Partners can provide you with comprehensive security support—whether that’s informing and advising you about generative AI, or augmenting your delivery and support capabilities. This can help make your engineers and your security controls more effective.

    While many organizations purchase security products or solutions with embedded generative AI capabilities, nearly two-thirds, as detailed in Figure 3, report that their generative AI security capabilities come through some type of partner.

    Figure 3: More than 90% of security gen AI capabilities are coming from third-party products or partners, Source: IBM Security

    Tens of thousands of customers are using AWS, for example, to experiment and move transformative generative AI applications into production. AWS provides AI-powered tools and services, a Generative AI Innovation Center program, and an extensive network of AWS partners that have demonstrated expertise delivering machine learning (ML) and generative AI solutions. These resources can support your teams with hands-on help developing solutions mapped to your requirements, and a broader collection of knowledge they can use to help you make the nuanced decisions required for effective security.

View the joint report and AWS generative AI security resources for additional guidance.

Secure by Design

Building secure software was a popular and related focus at the conference. Insecure design is ranked as the number four critical web application security concern on the Open Web Application Security Project (OWASP) Top 10.

The concept known as Secure by Design is gaining importance in the effort to mitigate vulnerabilities early, minimize risks, and recognize security as a core business requirement. Secure by Design builds on security models such as Zero Trust, and aims to reduce the burden of cybersecurity and break the cycle of constantly creating and applying updates by developing products that are foundationally secure.

More than 60 technology companies—including AWS—signed CISA’s Secure by Design Pledge during RSA Conference as part of a collaborative push to put security first when designing products and services.

The pledge demonstrates a commitment to making measurable progress towards seven goals within a year:

  • Broaden the use of multi-factor authentication (MFA)
  • Reduce default passwords
  • Enable a significant reduction in the prevalence of one or more vulnerability classes
  • Increase the installation of security patches by customers
  • Publish a vulnerability disclosure policy (VDP)
  • Demonstrate transparency in vulnerability reporting
  • Strengthen the ability of customers to gather evidence of cybersecurity intrusions affecting products

“From day one, we have pioneered secure by design and secure by default practices in the cloud, so AWS is designed to be the most secure place for customers to run their workloads. We are committed to continuing to help organizations around the world elevate their security posture, and we look forward to collaborating with CISA and other stakeholders to further grow and promote security by design and default practices.” — Chris Betz, CISO at AWS

The need for security by design applies to AI like any other software system. To protect users and data, we need to build security into ML and AI with a Secure by Design approach that considers these technologies to be part of a larger software system, and weaves security into the AI pipeline.

Since models tend to have very high privileges and access to data, integrating an AI bill of materials (AI/ML BOM) and Cryptography Bill of Materials (CBOM) into BOM processes can help you catalog security-relevant information, and gain visibility into model components and data sources. Additionally, frameworks and standards such as the AI RMF 1.0, the HITRUST AI Assurance Program, and ISO/IEC 42001 can facilitate the incorporation of trustworthiness considerations into the design, development, and use of AI systems.
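To make the AI/ML BOM idea concrete, the sketch below builds a single model entry that pins a hash of the model weights and records the provenance of its data sources. The field names loosely echo CycloneDX-style component records, but the schema, model name, and S3 path here are all hypothetical, illustrative choices rather than a formal standard.

```python
import hashlib
import json

def model_bom_entry(name, version, weights, data_sources):
    """Build one illustrative AI/ML BOM entry.

    Field names loosely follow CycloneDX-style component records;
    they are illustrative here, not a formal standard.
    """
    return {
        "type": "machine-learning-model",
        "name": name,
        "version": version,
        # Pinning a hash of the weights lets you detect tampering later.
        "hashes": [{"alg": "SHA-256",
                    "content": hashlib.sha256(weights).hexdigest()}],
        # Record where training/tuning data came from (provenance).
        "dataSources": data_sources,
    }

# Hypothetical fine-tuned model; the bytes stand in for a real weights file.
entry = model_bom_entry(
    name="support-chat-ft",
    version="1.2.0",
    weights=b"\x00" * 16,
    data_sources=["s3://example-bucket/tickets-2023"],
)
print(json.dumps(entry, indent=2))
```

An inventory of such entries gives security teams the visibility into model components and data sources that the BOM process is meant to provide.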

CISO collaboration

In the RSA Conference keynote session CISO Confidential: What Separates The Best From The Rest, Trellix CEO Bryan Palma and CISO Harold Rivas noted that there are approximately 32,000 CISOs worldwide today, four times as many as a decade ago. The challenges they face include staffing shortages, liability concerns, and a rapidly evolving threat landscape. According to research conducted by the Information Systems Security Association (ISSA), nearly half of organizations (46%) report that their cybersecurity team is understaffed, and more than 80% of CISOs recently surveyed by Trellix have experienced an increase in cybersecurity threats over the past six months. When asked what would most improve their organizations’ abilities to defend against these threats, their top answer was industry peers sharing insights and best practices.

Building trusted relationships with peers and technology partners can help you gain the knowledge you need to effectively communicate the story of risk to your board of directors, keep up with technology, and build success as a CISO.

AWS CISO Circles provide a forum for cybersecurity executives from organizations of all sizes and industries to share their challenges, insights, and best practices. CISOs come together in locations around the world to discuss the biggest security topics of the moment. With NDAs in place and the Chatham House Rule in effect, security leaders can feel free to speak their minds, ask questions, and get feedback from peers through candid conversations facilitated by AWS Security leaders.

“When it comes to security, community unlocks possibilities. CISO Circles give us an opportunity to deeply lean into CISOs’ concerns, and the topics that resonate with them. Chatham House Rule gives security leaders the confidence they need to speak openly and honestly with each other, and build a global community of knowledge-sharing and support.” — Clarke Rodgers, Director of Enterprise Strategy at AWS

At RSA Conference, CISO Circle attendees discussed the challenges of adopting generative AI. When asked whether CISOs or the business own generative AI risk for the organization, the consensus was that security can help with policies and recommendations, but the business should own the risk and decisions about how and when to use the technology. Some attendees noted that they took initial responsibility for generative AI risk, before transitioning ownership to an advisory board or committee comprised of leaders from their HR, legal, IT, finance, privacy, and compliance and ethics teams over time. Several CISOs expressed the belief that quickly taking ownership of generative AI risk before shepherding it to the right owner gave them a valuable opportunity to earn trust with their boards and executive peers, and to demonstrate business leadership during a time of uncertainty.

Embrace the art of possible

There are many more RSA Conference highlights on a wide range of additional topics, including post-quantum cryptography developments, identity and access management, data perimeters, threat modeling, cybersecurity budgets, and cyber insurance trends. If there’s one key takeaway, it’s that we should never underestimate what is possible from threat actors or defenders. By harnessing AI’s potential while addressing its risks, building foundationally secure products and services, and developing meaningful collaboration, we can collectively strengthen security and establish cyber resilience.

Join us to learn more about cloud security in the age of generative AI at AWS re:Inforce 2024 June 10–12 in Pennsylvania. Register today with the code SECBLOfnakb to receive a limited time $150 USD discount, while supplies last.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Danielle Ruderman

Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.

IBM Sells Cybersecurity Group

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/05/ibm-sells-cybersecurity-group.html

IBM is selling its QRadar product suite to Palo Alto Networks, for an undisclosed—but probably surprisingly small—sum.

I have a personal connection to this. In 2016, IBM bought Resilient Systems, the startup I was a part of. It became part of IBM’s cybersecurity offerings, mostly and weirdly subservient to QRadar.

That was what seemed to be the problem at IBM. QRadar was IBM’s first acquisition in the cybersecurity space, and it saw everything through the lens of that SIEM system. I left the company two years after the acquisition, and near as I could tell, it never managed to figure the space out.

So now it’s Palo Alto’s turn.

AWS achieves Spain’s ENS High 311/2022 certification across 172 services

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-311-2022-certification-across-172-services/

Amazon Web Services (AWS) has recently renewed the Esquema Nacional de Seguridad (ENS) High certification, upgrading to the latest version regulated under Royal Decree 311/2022. The ENS establishes security standards that apply to government agencies and public organizations in Spain and service providers on which Spanish public services depend.

This security framework has been updated significantly since Royal Decree 3/2010, culminating in the current Royal Decree 311/2022, to adapt to evolving cybersecurity threats and technologies. The current scheme defines basic requirements and lists additional security reinforcements needed to meet the bar for the different security levels (Low, Medium, High).

Achieving the ENS High certification under its 311/2022 version underscores the AWS commitment to maintaining robust cybersecurity controls and highlights our proactive approach to cybersecurity.

We are happy to announce the addition of 14 services to the scope of our ENS certification, for a new total of 172 services in scope. The certification now covers 31 Regions. Some of the additional services in scope for ENS High include the following:

  • Amazon Bedrock – This fully managed service offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
  • Amazon EventBridge – Use this service to easily build loosely coupled, event-driven architectures. It creates point-to-point integrations between event producers and consumers without needing to write custom code or manage and provision servers.
  • AWS HealthOmics – This service helps healthcare and life science organizations and their software partners store, query, and analyze genomic, transcriptomic, and other omics data and then uses that data to generate insights to improve health.
  • AWS Signer – This is a fully managed code-signing service to ensure the trust and integrity of your code. AWS Signer manages the code-signing certificate’s public and private keys and enables central management of the code-signing lifecycle.
  • AWS Wickr – This service encrypts messages, calls, and files with a 256-bit end-to-end encryption protocol. Only the intended recipients and the customer organization can decrypt these communications, reducing the risk of adversary-in-the-middle attacks.
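To illustrate the Amazon EventBridge item above: with the AWS SDK, a producer publishes a custom event via the PutEvents API, and rules on the event bus route matching events to consumers without any point-to-point wiring. The sketch below only constructs the event entry (the source, detail payload, and bus name are hypothetical); the actual boto3 call is shown in a comment because it requires AWS credentials.

```python
import json

# Build the event entry that would be passed to EventBridge's PutEvents API.
entry = {
    "Source": "example.orders",                      # hypothetical producer
    "DetailType": "OrderPlaced",
    "Detail": json.dumps({"orderId": "o-123", "total": 42.5}),
    "EventBusName": "default",
}

# With AWS credentials configured, the entry would be published like this:
#   import boto3
#   events = boto3.client("events")
#   events.put_events(Entries=[entry])

# Consumers never see the producer directly; a rule on the bus matches
# events (for example, on Source and DetailType) and routes them to targets.
print(entry["DetailType"])
```

Because producers and consumers share only the event schema, either side can change or scale independently, which is the loose coupling the service description refers to.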

AWS achievement of the ENS High certification is verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at its highest level as described in Royal Decree 311/2022.

AWS has also updated the existing eight Security configuration guidelines that map the ENS controls to the AWS Well-Architected Framework and provide guidance on the following topics: compliance profile, secure configuration, Prowler quick guide, hybrid connectivity, multi-account environments, Amazon WorkSpaces, incident response, and monitoring and governance. AWS has also contributed to Prowler to add new functionality and include the latest ENS controls.

For more information about ENS High and the AWS Security configuration guidelines, see the AWS Compliance page Esquema Nacional de Seguridad High. To view the complete list of services included in the scope, see the AWS Services in Scope by Compliance Program – Esquema Nacional de Seguridad (ENS) page. You can download the ENS High Certificate from AWS Artifact in the AWS Management Console or from Esquema Nacional de Seguridad High.

As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Daniel Fuertes

Daniel is a security audit program manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has ten years of experience in security assurance and compliance, including previous experience as an auditor for the PCI DSS security framework. He also holds the CISSP, PCIP, and ISO 27001 Lead Auditor certifications.

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked at companies such as Microsoft and BBVA. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

Microsoft and Security Incentives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/04/microsoft-and-security-incentives.html

Former senior White House cyber policy director A. J. Grotto talks about the economic incentives for companies to improve their security—in particular, Microsoft:

Grotto told us Microsoft had to be “dragged kicking and screaming” to provide logging capabilities to the government by default, and given the fact the mega-corp banked around $20 billion in revenue from security services last year, the concession was minimal at best.

[…]

“The government needs to focus on encouraging and catalyzing competition,” Grotto said. He believes it also needs to publicly scrutinize Microsoft and make sure everyone knows when it messes up.

“At the end of the day, Microsoft, any company, is going to respond most directly to market incentives,” Grotto told us. “Unless this scrutiny generates changed behavior among its customers who might want to look elsewhere, then the incentives for Microsoft to change are not going to be as strong as they should be.”

Breaking up the tech monopolies is one of the best things we can do for cybersecurity.

Backdoor in XZ Utils That Almost Happened

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/04/backdoor-in-xz-utils-that-almost-happened.html

Last week, the Internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global Internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it.

Programmers dislike doing extra work. If they can find already-written code that does what they want, they’re going to use it rather than recreate the functionality. These code repositories, called libraries, are hosted on sites like GitHub. There are libraries for everything: displaying objects in 3D, spell-checking, performing complex mathematics, managing an e-commerce shopping cart, moving files around the Internet—everything. Libraries are essential to modern programming; they’re the building blocks of complex software. The modularity they provide makes software projects tractable. Everything you use contains dozens of these libraries: some commercial, some open source and freely available. They are essential to the functionality of the finished software. And to its security.

You’ve likely never heard of an open-source library called XZ Utils, but it’s on hundreds of millions of computers. It’s probably on yours. It’s certainly in whatever corporate or organizational network you use. It’s a freely available library that does data compression. It’s important, in the same way that hundreds of other similar obscure libraries are important.

Many open-source libraries, like XZ Utils, are maintained by volunteers. In the case of XZ Utils, it’s one person, named Lasse Collin. He has been in charge of XZ Utils since he wrote it in 2009. And, at least in 2022, he’s had some “longterm mental health issues.” (To be clear, he is not to blame in this story. This is a systems problem.)

Beginning in at least 2021, Collin was personally targeted. We don’t know by whom, but we have account names: Jia Tan, Jigar Kumar, Dennis Ens. They’re not real names. They pressured Collin to transfer control over XZ Utils. In early 2023, they succeeded. Tan spent the year slowly incorporating a backdoor into XZ Utils: disabling systems that might discover his actions, laying the groundwork, and finally adding the complete backdoor earlier this year. On March 25, Hans Jansen—another fake name—tried to push the various Unix systems to upgrade to the new version of XZ Utils.

And everyone was poised to do so. It’s a routine update. In the span of a few weeks, it would have been part of both Debian and Red Hat Linux, which run on the vast majority of servers on the Internet. But on March 29, another unpaid volunteer, Andres Freund—a real person who works for Microsoft but who was doing this in his spare time—noticed something weird about how much processing the new version of XZ Utils was doing. It’s the sort of thing that could be easily overlooked, and even more easily ignored. But for whatever reason, Freund tracked down the weirdness and discovered the backdoor.

It’s a masterful piece of work. It affects the SSH remote login protocol, basically by adding a hidden piece of functionality that requires a specific key to enable. Someone with that key can use the backdoored SSH to upload and execute an arbitrary piece of code on the target machine. SSH runs as root, so that code could have done anything. Let your imagination run wild.

This isn’t something a hacker just whips up. This backdoor is the result of a years-long engineering effort. The ways the code evades detection in source form, how it lies dormant and undetectable until activated, and its immense power and flexibility give credence to the widely held assumption that a major nation-state is behind this.

If it hadn’t been discovered, it probably would have eventually ended up on every computer and server on the Internet. Though it’s unclear whether the backdoor would have affected Windows and macOS, it would have worked on Linux. Remember in 2020, when Russia planted a backdoor into SolarWinds that affected 14,000 networks? That seemed like a lot, but this would have been orders of magnitude more damaging. And again, the catastrophe was averted only because a volunteer stumbled on it. And it was possible in the first place only because the first unpaid volunteer, someone who turned out to be a national security single point of failure, was personally targeted and exploited by a foreign actor.

This is no way to run critical national infrastructure. And yet, here we are. This was an attack on our software supply chain. This attack subverted software dependencies. The SolarWinds attack targeted the update process. Other attacks target system design, development, and deployment. Such attacks are becoming increasingly common and effective, and also are increasingly the weapon of choice of nation-states.

It’s impossible to count how many of these single points of failure are in our computer systems. And there’s no way to know how many of the unpaid and unappreciated maintainers of critical software libraries are vulnerable to pressure. (Again, don’t blame them. Blame the industry that is happy to exploit their unpaid labor.) Or how many more have accidentally created exploitable vulnerabilities. How many other coercion attempts are ongoing? A dozen? A hundred? It seems impossible that the XZ Utils operation was a unique instance.

Solutions are hard. Banning open source won’t work; it’s precisely because XZ Utils is open source that an engineer discovered the problem in time. Banning software libraries won’t work, either; modern software can’t function without them. For years, security engineers have been pushing something called a “software bill of materials”: an ingredients list of sorts so that when one of these packages is compromised, network owners at least know if they’re vulnerable. The industry hates this idea and has been fighting it for years, but perhaps the tide is turning.
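As a sketch of how a software bill of materials helps in practice: given a CycloneDX-style component list, a network owner can mechanically flag the compromised XZ Utils releases (5.6.0 and 5.6.1 are the backdoored versions). The SBOM contents below are hypothetical, and the check is deliberately minimal.

```python
import json

# A minimal CycloneDX-style SBOM fragment (hypothetical contents).
sbom = json.loads("""
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "xz-utils", "version": "5.6.1"},
    {"type": "library", "name": "openssl", "version": "3.0.13"}
  ]
}
""")

# Versions of XZ Utils known to contain the backdoor.
COMPROMISED = {("xz-utils", "5.6.0"), ("xz-utils", "5.6.1")}

# Flag any component whose (name, version) pair is on the compromised list.
flagged = [
    c for c in sbom["components"]
    if (c["name"], c["version"]) in COMPROMISED
]
for c in flagged:
    print(f"vulnerable component: {c['name']} {c['version']}")
```

This is exactly the "ingredients list" benefit: when a package is compromised, the question "are we running it?" becomes a lookup rather than a forensic investigation.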

The fundamental problem is that tech companies dislike spending extra money even more than programmers dislike doing extra work. If there’s free software out there, they are going to use it—and they’re not going to do much in-house security testing. Easier software development equals lower costs equals more profits. The market economy rewards this sort of insecurity.

We need some sustainable ways to fund open-source projects that become de facto critical infrastructure. Public shaming can help here. The Open Source Security Foundation (OSSF), founded in 2022 after another critical vulnerability in an open-source library—Log4j—was discovered, addresses this problem. The big tech companies pledged $30 million in funding after the critical Log4j supply chain vulnerability, but they never delivered. And they are still happy to make use of all this free labor and free resources, as a recent Microsoft anecdote indicates. The companies benefiting from these freely available libraries need to actually step up, and the government can force them to.

There’s a lot of tech that could be applied to this problem, if corporations were willing to spend the money. Liabilities will help. The Cybersecurity and Infrastructure Security Agency’s (CISA’s) “secure by design” initiative will help, and CISA is finally partnering with OSSF on this problem. Certainly the security of these libraries needs to be part of any broad government cybersecurity initiative.

We got extraordinarily lucky this time, but maybe we can learn from the catastrophe that didn’t happen. Like the power grid, communications network, and transportation systems, the software supply chain is critical infrastructure, part of national security, and vulnerable to foreign attack. The US government needs to recognize this as a national security problem and start treating it as such.

This essay originally appeared in Lawfare.

In Memoriam: Ross Anderson, 1956–2024

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/04/in-memoriam-ross-anderson-1956-2024.html

Last week, I posted a short memorial of Ross Anderson. The Communications of the ACM asked me to expand it. Here’s the longer version.

EDITED TO ADD (4/11): Two weeks before he passed away, Ross gave an 80-minute interview where he told his life story.

XZ Utils Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/04/xz-utils-backdoor.html

The cybersecurity world got really lucky last week. An intentionally placed backdoor in XZ Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From Ars Technica:

Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from Ars Technica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprint function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project scream nation-state operation. It’s reminiscent of SolarWinds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

Ross Anderson

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/03/ross-anderson.html

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on the Economics of Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.

I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find at least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and on the security of large language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti’s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.

My heart goes out to his wife Shireen and his family. We lost him much too soon.

EDITED TO ADD (4/10): I wrote a longer version for Communications of the ACM.

Security Vulnerability in Saflok’s RFID-Based Keycard Locks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/03/security-vulnerability-in-safloks-rfid-based-keycard-locks.html

It’s pretty devastating:

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.
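The reported flow—extract a property-wide secret from any one legitimate card, then present two forged cards in sequence, where the first rewrites a piece of the lock's state and the second opens it—can be sketched as a toy simulation. Everything below (the `Card` and `Lock` classes, the `clone_attack` helper, the `REWRITE:` payload convention) is a hypothetical model of the published *description* of the attack, not Dormakaba's actual key-derivation scheme or MIFARE Classic internals:

```python
import hashlib

# Toy model of the reported Unsaflok flow. All names and logic here are
# hypothetical illustrations; they mirror the attack as described, not
# the real Saflok protocol.

class Card:
    def __init__(self, property_id: str, payload: bytes):
        self.property_id = property_id
        self.payload = payload

class Lock:
    """A lock that trusts any card keyed to its property."""
    def __init__(self, property_id: str):
        self.property_id = property_id
        self.config = b"factory"

    def _accepts(self, card: Card) -> bool:
        # Weakness being modeled: the per-property secret is shared by
        # every card, so obtaining any one card reveals it.
        return card.property_id == self.property_id

    def present(self, card: Card) -> bool:
        if not self._accepts(card):
            return False
        if card.payload.startswith(b"REWRITE:"):
            # Card 1: rewrites a piece of the lock's stored data.
            self.config = card.payload.split(b":", 1)[1]
            return False  # door does not open on the first tap
        # Card 2: opens the door if it matches the rewritten state.
        return self.config == card.payload

def clone_attack(legit: Card) -> tuple[Card, Card]:
    """Given any one card from the property, forge the two attack cards."""
    # Modeled extraction step: read the shared secret off the card
    # (in the real attack, via a ~$300 RFID read-write device).
    secret = legit.property_id
    opener = hashlib.sha256(b"attacker-chosen").digest()[:8]
    return (Card(secret, b"REWRITE:" + opener), Card(secret, opener))
```

The point of the sketch is the structural flaw: because one card discloses a secret shared across the whole property, the attacker never needs the target room's card—any card, even an expired one from a used-key box, suffices.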

Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

If ever. My guess is that for many locks, this is a permanent vulnerability.