Tag Archives: Cryptography

Improving the Cryptanalysis of Lattice-Based Public-Key Algorithms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/improving-the-cryptanalysis-of-lattice-based-public-key-algorithms.html

The winner of the Best Paper Award at Crypto this year was a significant improvement to lattice-based cryptanalysis.

This is important, because a bunch of NIST’s post-quantum options base their security on lattice problems.

I worry about standardizing on post-quantum algorithms too quickly. We are still learning a lot about the security of these systems, and this paper is an example of that learning.

News story.

Building a security-first mindset: three key themes from AWS re:Invent 2023

Post Syndicated from Clarke Rodgers original https://aws.amazon.com/blogs/security/building-a-security-first-mindset-three-key-themes-from-aws-reinvent-2023/

Amazon CSO Stephen Schmidt

AWS re:Invent drew 52,000 attendees from across the globe to Las Vegas, Nevada, November 27 to December 1, 2023.

Now in its 12th year, the conference featured 5 keynotes, 17 innovation talks, and over 2,250 sessions and hands-on labs offering immersive learning and networking opportunities.

With dozens of service and feature announcements—and innumerable best practices shared by AWS executives, customers, and partners—the air of excitement was palpable. We were on site to experience all of the innovations and insights, but summarizing highlights isn’t easy. This post details three key security themes that caught our attention.

Security culture

When we think about cybersecurity, it’s natural to focus on technical security measures that help protect the business. But organizations are made up of people—not technology. The best way to protect ourselves is to foster a proactive, resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

In Sustainable security culture: Empower builders for success, AWS Global Services Security Vice President Hart Rossman and AWS Global Services Security Organizational Excellence Leader Sarah Currey presented practical strategies for building a sustainable security culture.

Rossman noted that many customers who meet with AWS about security challenges are attempting to manage security as a project, a program, or a side workstream. To strengthen your security posture, he said, you have to embed security into your business.

“You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS

Three best practices can help:

  1. Be consistently persistent. Routinely and emphatically thank employees for raising security issues. It might feel repetitive, but treating security events and escalations as learning opportunities helps create a positive culture—and it’s a practice that can spread to other teams. An empathetic leadership approach encourages your employees to see security as everyone’s responsibility, share their experiences, and feel like collaborators.
  2. Brief the board. Engage executive leadership in regular, business-focused meetings. By providing operational metrics that tie your security culture to the impact that it has on customers, crisply connecting data to business outcomes, and providing an opportunity to ask questions, you can help build the support of executive leadership, and advance your efforts to establish a sustainable proactive security posture.
  3. Have a mental model for creating a good security culture. Rossman presented a diagram (Figure 1) that highlights three elements of security culture he has observed at AWS: a student, a steward, and a builder. If you want to be a good steward of security culture, you should be a student who is constantly learning, experimenting, and passing along best practices. As your stewardship grows, you can become a builder, and progress the culture in new directions.
Figure 1: Sample mental model for building security culture

Thoughtful investment in the principles of inclusivity, empathy, and psychological safety can help your team members to confidently speak up, take risks, and express ideas or concerns. This supports an escalation-friendly culture that can reduce employee burnout, and empower your teams to champion security at scale.

In Shipping securely: How strong security can be your strategic advantage, AWS Enterprise Strategy Director Clarke Rodgers reiterated the importance of security culture to building a security-first mindset.

Rodgers highlighted three pillars of progression (Figure 2)—aware, bolted-on, and embedded—that are based on meetings with more than 800 customers. As organizations mature from a reactive security posture to a proactive, security-first approach, he noted, security culture becomes a true business enabler.

“When organizations have a strong security culture and everyone sees security as their responsibility, they can move faster and achieve quicker and more secure product and service releases.” — Clarke Rodgers, Director of Enterprise Strategy at AWS
Figure 2: Shipping with a security-first mindset

Human-centric AI

CISOs and security stakeholders are increasingly pivoting to a human-centric focus to establish effective cybersecurity, and ease the burden on employees.

According to Gartner, by 2027, 50% of large enterprise CISOs will have adopted human-centric security design practices to minimize cybersecurity-induced friction and maximize control adoption.

As Amazon CSO Stephen Schmidt noted in Move fast, stay secure: Strategies for the future of security, focusing on technology first is fundamentally wrong. Security is a people challenge, for threat actors and defenders alike. To keep up with evolving threats and securely support the businesses we serve, we need to focus on dynamic problems that software can’t solve.

Maintaining that focus means providing security and development teams with the tools they need to automate and scale some of their work.

“People are our most constrained and most valuable resource. They have an impact on every layer of security. It’s important that we provide the tools and the processes to help our people be as effective as possible.” — Stephen Schmidt, CSO at Amazon

Organizations can use artificial intelligence (AI) to impact all layers of security—but AI doesn’t replace skilled engineers. When used in coordination with other tools, and with appropriate human review, it can help make your security controls more effective.

Schmidt highlighted the internal use of AI at Amazon to accelerate our software development process, as well as new generative AI-powered Amazon Inspector, Amazon Detective, AWS Config, and Amazon CodeWhisperer features that complement the human skillset by helping people make better security decisions, using a broader collection of knowledge. This pattern of combining sophisticated tooling with skilled engineers is highly effective, because it positions people to make the nuanced decisions required for effective security that AI can’t make on its own.

In How security teams can strengthen security using generative AI, AWS Senior Security Specialist Solutions Architects Anna McAbee and Marshall Jones, and Principal Consultant Fritz Kunstler featured a virtual security assistant (chatbot) that can address common security questions and use cases based on your internal knowledge bases, and trusted public sources.

Figure 3: Generative AI-powered chatbot architecture

The generative AI-powered solution depicted in Figure 3—which includes Retrieval Augmented Generation (RAG) with Amazon Kendra, Amazon Security Lake, and Amazon Bedrock—can help you automate mundane tasks, expedite security decisions, and increase your focus on novel security problems.

It’s available on GitHub with ready-to-use code, so you can start experimenting with a variety of large and multimodal language models, settings, and prompts in your own AWS account.
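As a rough illustration of the retrieval step in such a solution, the following sketch uses the AWS SDK for Java v2 to query an Amazon Kendra index for passages that could ground a model’s answer. The index ID and question are placeholders, and the generation step (passing the excerpts to a model through Amazon Bedrock) is omitted; see the GitHub repository for the actual implementation.

import software.amazon.awssdk.services.kendra.KendraClient;
import software.amazon.awssdk.services.kendra.model.QueryRequest;
import software.amazon.awssdk.services.kendra.model.QueryResponse;

public class SecurityAssistantRetrieval {
    public static void main(String[] args) {
        try (KendraClient kendra = KendraClient.create()) {
            // Retrieve passages from the internal knowledge base (index ID is a placeholder)
            QueryResponse response = kendra.query(QueryRequest.builder()
                    .indexId("YOUR-KENDRA-INDEX-ID")
                    .queryText("How do we rotate IAM access keys?")
                    .build());
            // In a RAG design, these excerpts would be passed to the model as context
            response.resultItems().forEach(item ->
                    System.out.println(item.documentExcerpt().text()));
        }
    }
}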

Secure collaboration

Collaboration is key to cybersecurity success, but evolving threats, flexible work models, and a growing patchwork of data protection and privacy regulations have made maintaining secure and compliant messaging a challenge.

An estimated 3.09 billion mobile phone users access messaging apps to communicate, and this figure is projected to grow to 3.51 billion users in 2025.

The use of consumer messaging apps for business-related communications makes it more difficult for organizations to verify that data is being adequately protected and retained. This can lead to increased risk, particularly in industries with unique recordkeeping requirements.

In How the U.S. Army uses AWS Wickr to deliver lifesaving telemedicine, Matt Quinn, Senior Director at the U.S. Army Telemedicine & Advanced Technology Research Center (TATRC), Laura Baker, Senior Manager at Deloitte, and Arvind Muthukrishnan, AWS Wickr Head of Product, highlighted how the TATRC National Emergency Tele-Critical Care Network (NETCCN) was integrated with AWS Wickr—a HIPAA-eligible secure messaging and collaboration service—and AWS Private 5G, a managed service for deploying and scaling private cellular networks.

During the session, Quinn, Baker, and Muthukrishnan described how TATRC achieved a low-resource, cloud-enabled, virtual health solution that facilitates secure collaboration between onsite and remote medical teams for real-time patient care in austere environments. Using Wickr, medics on the ground were able to treat injuries that exceeded their previous training (Figure 4) with the help of end-to-end encrypted video calls, messaging, and file sharing with medical professionals, and securely retain communications in accordance with organizational requirements.

“Incorporating Wickr into Military Emergency Tele-Critical Care Platform (METTC-P) not only provides the security and privacy of end-to-end encrypted communications, it gives combat medics and other frontline caregivers the ability to gain instant insight from medical experts around the world—capabilities that will be needed to address the simultaneous challenges of prolonged care, and the care of large numbers of casualties on the multi-domain operations (MDO) battlefield.” — Matt Quinn, Senior Director at TATRC
Figure 4: Telemedicine workflows using AWS Wickr

In a separate Chalk Talk titled Bolstering Incident Response with AWS Wickr and Amazon EventBridge, Senior AWS Wickr Solutions Architects Wes Wood and Charles Chowdhury-Hanscombe demonstrated how to integrate Wickr with Amazon EventBridge and Amazon GuardDuty to strengthen incident response capabilities with an integrated workflow (Figure 5) that connects your AWS resources to Wickr bots. Using this approach, you can quickly alert appropriate stakeholders to critical findings through a secure communication channel, even on a potentially compromised network.
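For context, here is a minimal sketch of the EventBridge side of such a workflow, using the AWS SDK for Java v2. It creates a rule that matches GuardDuty findings and routes them to a Lambda function (the ARN is a placeholder) that would in turn call a Wickr bot; the Wickr bot integration itself is not shown, and the names are illustrative rather than taken from the session.

import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutRuleRequest;
import software.amazon.awssdk.services.eventbridge.model.PutTargetsRequest;
import software.amazon.awssdk.services.eventbridge.model.Target;

public class GuardDutyToWickrRule {
    public static void main(String[] args) {
        try (EventBridgeClient events = EventBridgeClient.create()) {
            // Match all GuardDuty findings published to the default event bus
            events.putRule(PutRuleRequest.builder()
                    .name("guardduty-findings-to-wickr")
                    .eventPattern("{\"source\":[\"aws.guardduty\"],"
                            + "\"detail-type\":[\"GuardDuty Finding\"]}")
                    .build());
            // Route matched findings to a Lambda function that invokes a Wickr bot
            // (the function ARN below is a placeholder)
            events.putTargets(PutTargetsRequest.builder()
                    .rule("guardduty-findings-to-wickr")
                    .targets(Target.builder()
                            .id("wickr-bot-lambda")
                            .arn("arn:aws:lambda:us-east-1:123456789012:function:notify-wickr")
                            .build())
                    .build());
        }
    }
}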

Figure 5: AWS Wickr integration for incident response communications

Security is our top priority

AWS re:Invent featured many more highlights on a variety of topics, including adaptive access control with Zero Trust, AWS cyber insurance partners, Amazon CTO Dr. Werner Vogels’ popular keynote, and the security partnerships showcased on the Expo floor. It was a whirlwind experience, but one thing is clear: AWS is working hard to help you build a security-first mindset, so that you can meaningfully improve both technical and business outcomes.

To watch on-demand conference sessions, visit the AWS re:Invent Security, Identity, and Compliance playlist on YouTube.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Clarke Rodgers

Clarke is a Director of Enterprise Security at AWS. Clarke has more than 25 years of experience in the security industry, and works with enterprise security, risk, and compliance-focused executives to strengthen their security posture, and understand the security capabilities of the cloud. Prior to AWS, Clarke was a CISO for the North American operations of a multinational insurance company.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than 13 years of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Improving Shor’s Algorithm

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/01/improving-shors-algorithm.html

We don’t have a useful quantum computer yet, but we do have quantum algorithms. Shor’s algorithm has the potential to factor large numbers faster than otherwise possible, which—if the run times are actually feasible—could break both the RSA and Diffie-Hellman public-key algorithms.

Now, computer scientist Oded Regev has a significant speed-up to Shor’s algorithm, at the cost of more storage.

Details are in this article. Here’s the result:

The improvement was profound. The number of elementary logical steps in the quantum part of Regev’s algorithm is proportional to n^1.5 when factoring an n-bit number, rather than n^2 as in Shor’s algorithm. The algorithm repeats that quantum part a few dozen times and combines the results to map out a high-dimensional lattice, from which it can deduce the period and factor the number. So the algorithm as a whole may not run faster, but speeding up the quantum part by reducing the number of required steps could make it easier to put it into practice.

Of course, the time it takes to run a quantum algorithm is just one of several considerations. Equally important is the number of qubits required, which is analogous to the memory required to store intermediate values during an ordinary classical computation. The number of qubits that Shor’s algorithm requires to factor an n-bit number is proportional to n, while Regev’s algorithm in its original form requires a number of qubits proportional to n^1.5—a big difference for 2,048-bit numbers.
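To put those exponents in perspective, here is the rough arithmetic for a 2,048-bit modulus, ignoring constant factors and logarithmic terms:

Quantum steps (Shor):           n^2   = 2048^2       = 4,194,304
Quantum steps (Regev):          n^1.5 = 2048 × √2048 ≈ 92,682

Qubits (Shor):                  ∝ n     = 2,048
Qubits (Regev, original form):  ∝ n^1.5 ≈ 92,682

In other words, Regev’s construction shrinks the quantum circuit by a factor of roughly 45, while in its original form inflating the qubit count by about the same factor.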

Again, this is all still theoretical. But now it’s theoretically faster.

Oded Regev’s paper.

This is me from 2018 on the potential for quantum cryptanalysis. I still believe now what I wrote then.

AWS Speaker Profile: Zach Miller, Senior Worldwide Security Specialist Solutions Architect

Post Syndicated from Roger Park original https://aws.amazon.com/blogs/security/aws-speaker-profile-zach-miller-senior-worldwide-security-specialist-solutions-architect/

In the AWS Speaker Profile series, we interview Amazon Web Services (AWS) thought leaders who help keep our customers safe and secure. This interview features Zach Miller, Senior Worldwide Security Specialist Solutions Architect and re:Invent 2023 presenter of Securely modernize payment applications with AWS and Centrally manage application secrets with AWS Secrets Manager. Zach shares thoughts on the data protection and cloud security landscape, his unique background, his upcoming re:Invent sessions, and more.


How long have you been at AWS?

I’ve been at AWS for more than four years, and I’ve enjoyed every minute of it! I started as a consultant in Professional Services, and I’ve been a Security Solutions Architect for around three years.

How do you explain your job to your non-tech friends?

Well, my mother doesn’t totally understand my role, and she’s been known to tell her friends that I’m the cable company technician that installs your internet modem and router. I usually tell my non-tech friends that I help AWS customers protect their sensitive data. If I mention cryptography, I typically only get asked questions about cryptocurrency—which I’m not qualified to answer. If someone asks what cryptography is, I usually say it’s protecting data by using mathematics.

How did you get started in data protection and cryptography? What about it piqued your interest?

I originally went to school to become a network engineer, but I discovered that moving data packets from point A to point B wasn’t as interesting to me as securing those data packets. Early in my career, I was an intern at an insurance company, and I had a mentor who set up ethical hacking lessons for me—for example, I’d come into the office and he’d have a compromised workstation preconfigured. He’d ask me to do an investigation and determine how the workstation was compromised and what could be done to isolate it and collect evidence. Other times, I’d come in and find my desk cabinets were locked with a padlock, and he wanted me to pick the lock. Security is particularly interesting because it’s an ever-evolving field, and I enjoy learning new things.

What’s been the most dramatic change you’ve seen in the data protection landscape?

One of the changes that I’ve been excited to see is an emphasis on encrypting everything. When I started my career, we’d often have discussions about encryption in the context of tradeoffs. If we needed to encrypt sensitive data, we’d have a conversation with application teams about the potential performance impact of encryption and decryption operations on their systems (for example, their databases), when to schedule downtime for the application to encrypt the data or rotate the encryption keys protecting the data, how to ensure the durability of their keys and make sure they didn’t lose data, and so on.

When I talk to customers about encryption on AWS today—of course, it’s still useful to talk about potential performance impact—but the conversation has largely shifted from “Should I encrypt this data?” to “How should I encrypt this data?” This is due to services such as AWS Key Management Service (AWS KMS) making it simpler for customers to manage encryption keys and encrypt and decrypt data in their applications with minimal performance impact or application downtime. AWS KMS has also made it simple to enable encryption of sensitive data—with over 120 AWS services integrated with AWS KMS, and services such as Amazon Simple Storage Service (Amazon S3) encrypting new S3 objects by default.
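As a simple illustration of how little code that shift requires today, here is a hedged sketch using the AWS SDK for Java v2 to encrypt and decrypt a small payload with an AWS KMS key. The key alias is a placeholder, and for large payloads you would use envelope encryption with data keys rather than calling Encrypt directly.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DecryptRequest;
import software.amazon.awssdk.services.kms.model.EncryptRequest;

public class KmsQuickstart {
    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create()) {
            // Encrypt under a customer managed key (the alias is a placeholder)
            SdkBytes ciphertext = kms.encrypt(EncryptRequest.builder()
                    .keyId("alias/my-application-key")
                    .plaintext(SdkBytes.fromUtf8String("sensitive data"))
                    .build()).ciphertextBlob();
            // Decrypt; KMS identifies the key from metadata embedded in the ciphertext
            String plaintext = kms.decrypt(DecryptRequest.builder()
                    .ciphertextBlob(ciphertext)
                    .build()).plaintext().asUtf8String();
            System.out.println(plaintext);
        }
    }
}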

You are a frequent contributor to the AWS Security Blog. What were some of your recent posts about?

My last two posts covered how to use AWS Identity and Access Management (IAM) condition context keys to create enterprise controls for certificate management and how to use AWS Secrets Manager to securely manage and retrieve secrets in hybrid or multicloud workloads. I like writing posts that show customers how to use a new feature, or highlight a pattern that many customers ask about.

You are speaking in a couple of sessions at AWS re:Invent; what will your sessions focus on? What do you hope attendees will take away from your session?

I’m delivering two sessions at re:Invent this year. The first is a chalk talk, Centrally manage application secrets with AWS Secrets Manager (SEC221), that I’m delivering with Ritesh Desai, who is the General Manager of Secrets Manager. We’re discussing how you can securely store and manage secrets in your workloads inside and outside of AWS. We will highlight some recommended practices for managing secrets, and answer your questions about how Secrets Manager integrates with services such as AWS KMS to help protect application secrets.

The second session is also a chalk talk, Securely modernize payment applications with AWS (SEC326). I’m delivering this talk with Mark Cline, who is the Senior Product Manager of AWS Payment Cryptography. We will walk through an example scenario on creating a new payment processing application. We will discuss how to use AWS Payment Cryptography, as well as other services such as AWS Lambda, to build a simple architecture to help process and secure credit card payment data. We will also include common payment industry use cases such as tokenization of sensitive data, and how to include basic anti-fraud detection, in our example app.

What are you currently working on that you’re excited about?

My re:Invent sessions are definitely something that I’m excited about. Otherwise, I spend most of my time talking to customers about AWS Cryptography services such as AWS KMS, AWS Secrets Manager, and AWS Private Certificate Authority. I also lead a program at AWS that enables our subject matter experts to create and publish videos to demonstrate new features of AWS Security Services. I like helping people create videos, and I hope that our videos provide another mechanism for viewers who prefer information in a video format. Visual media can be more inclusive for customers with certain disabilities or for neurodiverse customers who find it challenging to focus on written text. Plus, you can consume videos differently than a blog post or text documentation. If you don’t have the time or desire to read a blog post or AWS public doc, you can listen to an instructional video while you work on other tasks, eat lunch, or take a break. I invite folks to check out the AWS Security Services Features Demo YouTube video playlist.

Is there something you wish customers would ask you about more often?

I always appreciate when customers provide candid feedback on our services. AWS is a customer-obsessed company, and we build our service roadmaps based on what our customers tell us they need. You should feel comfortable letting AWS know when something could be easier, more efficient, or less expensive. Many customers I’ve worked with have provided actionable feedback on our services and influenced service roadmaps, just by speaking up and sharing their experiences.

How about outside of work, any hobbies?

I have two toddlers that keep me pretty busy, so most of my hobbies are what they like to do. So I tend to spend a lot of time building elaborate toy train tracks, pushing my kids on the swings, and pretending to eat wooden toy food that they “cook” for me. Outside of that, I read a lot of fiction and indulge in binge-worthy TV.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Roger Park

Roger is a Senior Security Content Specialist at AWS Security focusing on data protection. He has worked in cybersecurity for almost ten years as a writer and content producer. In his spare time, he enjoys trying new cuisines, gardening, and collecting records.

Zach Miller

Zach is a Senior Worldwide Security Specialist Solutions Architect at AWS. His background is in data protection and security architecture, focused on a variety of security domains, including cryptography, secrets management, and data classification. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

New SSH Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/new-ssh-vulnerability.html

This is interesting:

For the first time, researchers have demonstrated that a large portion of cryptographic keys used to protect data in computer-to-server SSH traffic are vulnerable to complete compromise when naturally occurring computational errors occur while the connection is being established.

[…]

The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined. That translates to roughly 1 billion signatures out of the 3.2 billion signatures examined. Of the roughly 1 billion RSA signatures, about one in a million exposed the private key of the host.

Research paper:

Passive SSH Key Compromise via Lattices

Abstract: We demonstrate that a passive network attacker can opportunistically obtain private RSA host keys from an SSH server that experiences a naturally arising fault during signature computation. In prior work, this was not believed to be possible for the SSH protocol because the signature included information like the shared Diffie-Hellman secret that would not be available to a passive network observer. We show that for the signature parameters commonly in use for SSH, there is an efficient lattice attack to recover the private key in case of a signature fault. We provide a security analysis of the SSH, IKEv1, and IKEv2 protocols in this scenario, and use our attack to discover hundreds of compromised keys in the wild from several independently vulnerable implementations.
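The paper’s contribution is a lattice attack that works in the passive setting, but the underlying fault observation is the classic RSA-CRT one due to Boneh, DeMillo, and Lipton (and Lenstra): if a signer using the Chinese Remainder Theorem computes one half of a signature incorrectly, the faulty signature shares a prime factor with the modulus. Here is a minimal sketch of that classic check—not the paper’s attack, just the idea it builds on—where message stands for the encoded (padded) message representative:

import java.math.BigInteger;

public class RsaCrtFaultCheck {
    // If a faulty signature s' is correct mod p but wrong mod q, then
    // s'^e - m is divisible by p but not by q, so gcd(s'^e - m, n) = p.
    static BigInteger recoverFactor(BigInteger faultySig, BigInteger message,
                                    BigInteger e, BigInteger n) {
        BigInteger diff = faultySig.modPow(e, n).subtract(message).mod(n);
        BigInteger p = diff.gcd(n);
        boolean nontrivial = p.compareTo(BigInteger.ONE) > 0 && p.compareTo(n) < 0;
        // null means the fault did not isolate exactly one CRT branch
        return nontrivial ? p : null;
    }
}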

Bounty to Recover NIST’s Elliptic Curve Seeds

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/bounty-to-recover-nists-elliptic-curve-seeds.html

This is a fun challenge:

The NIST elliptic curves that power much of modern cryptography were generated in the late ’90s by hashing seeds provided by the NSA. How were the seeds generated? Rumor has it that they are in turn hashes of English sentences, but the person who picked them, Dr. Jerry Solinas, passed away in early 2023 leaving behind a cryptographic mystery, some conspiracy theories, and an historical password cracking challenge.

So there’s a $12K prize to recover the hash seeds.

Some backstory:

Some of the backstory here (it’s the funniest fucking backstory ever): it’s lately been circulating—though I think this may have been somewhat common knowledge among practitioners, though definitely not to me—that the “random” seeds for the NIST P-curves, generated in the 1990s by Jerry Solinas at NSA, were simply SHA1 hashes of some variation of the string “Give Jerry a raise”.

At the time, the “pass a string through SHA1” thing was meant to increase confidence in the curve seeds; the idea was that SHA1 would destroy any possible structure in the seed, so NSA couldn’t have selected a deliberately weak seed. Of course, NIST/NSA then set about destroying its reputation in the 2000s, and this explanation wasn’t nearly enough to quell conspiracy theories.

But when Jerry Solinas went back to reconstruct the seeds, so NIST could demonstrate that the seeds really were benign, he found that he’d forgotten the string he used!

If you’re a true conspiracist, you’re certain nobody is going to find a string that generates any of these seeds. On the flip side, if anyone does find them, that’ll be a pretty devastating blow to the theory that the NIST P-curves were maliciously generated—even for people totally unfamiliar with basic curve math.

Note that these are not the constants used in the Dual_EC_DRBG random-number generator that the NSA backdoored. This is something different.

AWS-LC is now FIPS 140-3 certified

Post Syndicated from Nevine Ebeid original https://aws.amazon.com/blogs/security/aws-lc-is-now-fips-140-3-certified/

AWS Cryptography is pleased to announce that today, the National Institute of Standards and Technology (NIST) awarded AWS-LC its validation certificate as a Federal Information Processing Standards (FIPS) 140-3, level 1, cryptographic module. This important milestone enables AWS customers that require FIPS-validated cryptography to leverage AWS-LC as a fully owned AWS implementation.

AWS-LC is an open source cryptographic library that is a fork from Google’s BoringSSL. It is tailored by the AWS Cryptography team to meet the needs of AWS services, which can require a combination of FIPS-validated cryptography, speed of certain algorithms on the target environments, and formal verification of the correctness of implementation of multiple algorithms. FIPS 140 is the technical standard for cryptographic modules for the U.S. and Canadian Federal governments. FIPS 140-3 is the most recent version of the standard, which introduced new and more stringent requirements over its predecessor, FIPS 140-2. The AWS-LC FIPS module underwent extensive code review and testing by a NIST-accredited lab before we submitted the results to NIST, where the module was further reviewed by the Cryptographic Module Validation Program (CMVP).

Our goal in designing the AWS-LC FIPS module was to create a validated library without compromising on our standards for both security and performance. AWS-LC is validated on AWS Graviton2 (c6g, 64-bit AWS custom Arm processor based on Neoverse N1) and Intel Xeon Platinum 8275CL (c5, x86_64) running Amazon Linux 2 or Ubuntu 20.04. Specifically, it includes low-level implementations that target 64-bit Arm and x86 processors, which are essential to meeting—and even exceeding—the performance that customers expect of AWS services. For example, in the integration of the AWS-LC FIPS module with AWS s2n-tls for TLS termination, we observed a 27% decrease in handshake latency in Amazon Simple Storage Service (Amazon S3), as shown in Figure 1.

Figure 1: Amazon S3 TLS termination time after using AWS-LC

AWS-LC integrates CPU-Jitter as the source of entropy, which works on widely available modern processors with high-resolution timers by measuring the tiny time variations of CPU instructions. Users of AWS-LC FIPS can have confidence that the keys it generates adhere to the required security strength. As a result, the library can be run with no uncertainty about the impact of a different processor on the entropy claims.

AWS-LC is a high-performance cryptographic library that provides an API for direct integration with C and C++ applications. To support a wider developer community, we’re providing integrations of a future version of the AWS-LC FIPS module, v2.0, into the AWS Libcrypto for Rust (aws-lc-rs) and ACCP 2.0 libraries. aws-lc-rs is API-compatible with the popular Rust library named ring, with additional performance enhancements and support for FIPS. Amazon Corretto Crypto Provider 2.0 (ACCP) is an open source OpenJDK implementation interfacing with low-level cryptographic algorithms that equips Java developers with fast cryptographic services. AWS-LC FIPS module v2.0 is currently submitted to an accredited lab for FIPS validation testing, and upon completion will be submitted to NIST for certification.

Today’s AWS-LC FIPS 140-3 certificate is an important milestone for AWS-LC, as a performant and verified library. It’s just the beginning; AWS is committed to adding more features, supporting more operating environments, and continually validating and maintaining new versions of the AWS-LC FIPS module as it grows.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Nevine Ebeid

Nevine is a Senior Applied Scientist at AWS Cryptography where she focuses on algorithms development, machine-level optimizations and FIPS 140-3 requirements for AWS-LC, the cryptographic library of AWS. Prior to joining AWS, Nevine worked in the research and development of various cryptographic libraries and protocols in automotive and mobile security applications.

New Revelations from the Snowden Documents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/09/new-revelations-from-the-snowden-documents.html

Jake Appelbaum’s PhD thesis contains several new revelations from the classified NSA documents provided to journalists by Edward Snowden. Nothing major, but a few more tidbits.

Kind of amazing that all of that happened ten years ago. At this point, those documents are more historical than anything else.

And it’s unclear who has those archives anymore. According to Appelbaum, The Intercept destroyed their copy.

I recently published an essay about my experiences ten years ago.

Reduce the security and compliance risks of messaging apps with AWS Wickr

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/reduce-the-security-and-compliance-risks-of-messaging-apps-with-aws-wickr/

Effective collaboration is central to business success, and employees today depend heavily on messaging tools. An estimated 3.09 billion mobile phone users access messaging applications (apps) to communicate, and this figure is projected to grow to 3.51 billion users in 2025.

This post highlights the risks associated with messaging apps and describes how you can use enterprise solutions — such as AWS Wickr — that combine end-to-end encryption with data retention to drive positive security and business outcomes.

The business risks of messaging apps

Evolving threats, flexible work models, and a growing patchwork of data protection and privacy regulations have made maintaining secure and compliant enterprise messaging a challenge.

The use of third-party apps for business-related messages on both corporate and personal devices can make it more difficult to verify that data is being adequately protected and retained. This can lead to business risk, particularly in industries with unique record-keeping requirements. Organizations in the financial services industry, for example, are subject to rules that include Securities and Exchange Commission (SEC) Rule 17a-4 and Financial Industry Regulatory Authority (FINRA) Rule 3120, which require them to preserve all pertinent electronic communications.

A recent Gartner report on the viability of mobile bring-your-own-device (BYOD) programs noted, “It is now logical to assume that most financial services organizations with mobile BYOD programs for regulated employees could be fined due to a lack of compliance with electronic communications regulations.”

In the public sector, U.S. government agencies are subject to records requests under the Freedom of Information Act (FOIA) and various state sunshine statutes. For these organizations, effectively retaining business messages is about more than supporting security and compliance—it’s about maintaining public trust.

Securing enterprise messaging

Enterprise-grade messaging apps can help you protect communications from unauthorized access and facilitate desired business outcomes.

Security — Critical security protocols protect messages and files that contain sensitive and proprietary data — such as personally identifiable information, protected health information, financial records, and intellectual property — in transit and at rest to decrease the likelihood of a security incident.

Control — Administrative controls allow you to add, remove, and invite users, and organize them into security groups with restricted access to features and content at their level. Passwords can be reset and profiles can be deleted remotely, helping you reduce the risk of data exposure stemming from a lost or stolen device.

Compliance — Information can be preserved in a customer-controlled data store to help meet requirements such as those that fall under the Federal Records Act (FRA) and National Archives and Records Administration (NARA), as well as SEC Rule 17a-4 and Sarbanes-Oxley (SOX).

Marrying encryption with data retention

Enterprise solutions bring end-to-end encryption and data retention together in support of a comprehensive approach to secure messaging that balances people, process, and technology.

End-to-end encryption

Many messaging apps offer some form of encryption, but not all of them use end-to-end encryption. End-to-end encryption is a secure communication method that protects data from unauthorized access, interception, or tampering as it travels from one endpoint to another.

In end-to-end encryption, encryption and decryption take place locally, on the device. Every call, message, and file is encrypted with unique keys and remains indecipherable in transit. Unauthorized parties cannot access communication content because they don’t have the keys required to decrypt the data.

Encryption in transit compared to end-to-end encryption

Encryption in transit encrypts data over a network from one point to another (typically between one client and one server); data might remain stored in plaintext at the source and destination storage systems. End-to-end encryption combines encryption in transit and encryption at rest to secure data at all times, from being generated and leaving the sender’s device, to arriving at the recipient’s device and being decrypted.

“Messaging is a critical tool for any organization, and end-to-end encryption is the security technology that provides organizations with the confidence they need to rely on it.” — CJ Moses, CISO and VP of Security Engineering at AWS

Data retention

While data retention is often thought of as being incompatible with end-to-end encryption, leading enterprise-grade messaging apps offer both, giving you the option to configure a data store of your choice to retain conversations without exposing them to outside parties. No one other than the intended recipients and your organization has access to the message content, giving you full control over your data.

How AWS can help

AWS Wickr is an end-to-end encrypted messaging and collaboration service that was built from the ground up with features designed to help you keep internal and external communications secure, private, and compliant. Wickr protects one-to-one and group messaging, voice and video calling, file sharing, screen sharing, and location sharing with 256-bit Advanced Encryption Standard (AES) encryption, and provides data retention capabilities.

Figure 1: How Wickr works

With Wickr, each message gets a unique AES encryption key, and a unique Elliptic-curve Diffie–Hellman (ECDH) public key to negotiate the key exchange with recipients. Message content — including text, files, audio, or video — is encrypted on the sending device (your iPhone, for example) using the message-specific AES key. This key is then exchanged via the ECDH key exchange mechanism, so that only intended recipients can decrypt the message.
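The general pattern described here (a fresh symmetric key per message, negotiated via an elliptic-curve key agreement) can be sketched with standard JCA primitives. This is a minimal illustration of the pattern, not Wickr’s protocol; among other simplifications, it derives the AES key with a bare hash instead of a proper KDF such as HKDF, and it omits authentication of the public keys.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyAgreement;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class PerMessageKeySketch {
    public static void main(String[] args) throws Exception {
        // Each party holds an elliptic-curve key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair sender = kpg.generateKeyPair();
        KeyPair recipient = kpg.generateKeyPair();

        // ECDH: the sender derives a shared secret from the recipient's public key
        KeyAgreement ka = KeyAgreement.getInstance("ECDH");
        ka.init(sender.getPrivate());
        ka.doPhase(recipient.getPublic(), true);
        byte[] sharedSecret = ka.generateSecret();

        // Derive a per-message AES-256 key (real protocols use HKDF, not a bare hash)
        SecretKeySpec aesKey = new SecretKeySpec(
                MessageDigest.getInstance("SHA-256").digest(sharedSecret), "AES");

        // Encrypt the message with AES-GCM under the per-message key
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}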

“As former employees of federal law enforcement, the intelligence community, and the military, Qintel understands the need for enterprise-federated, secure communication messaging capabilities. When searching for our company’s messaging application we evaluated the market thoroughly and while there are some excellent capabilities available, none of them offer the enterprise security and administrative flexibility that Wickr does.”
Bill Schambura, CEO at Qintel

Wickr network administrators can configure and apply data retention to both internal and external communications in a Wickr network. This includes conversations with guest users, external teams, and other partner networks, so you can retain messages and files sent to and from the organization to help meet internal, legal, and regulatory requirements.

Figure 2: Data retention process

Data retention is implemented as an always-on recipient that is added to conversations, not unlike the blind carbon copy (BCC) feature in email. The data-retention process participates in the key exchange, allowing it to decrypt messages. The process can run anywhere: on-premises, on an Amazon Elastic Compute Cloud (Amazon EC2) instance, or at a location of your choice.

Wickr is a Health Insurance Portability and Accountability Act of 1996 (HIPAA)-eligible service, helping healthcare organizations and medical providers to conduct secure telehealth visits, send messages and files that contain protected health information, and facilitate real-time patient care.

Wickr networks can be created through the AWS Management Console, and workflows can be automated with Wickr bots. Wickr is currently available in the AWS US East (Northern Virginia), AWS GovCloud (US-West), AWS Canada (Central), and AWS Europe (London) Regions.

Keep your messages safe

Employees will continue to use messaging apps to chat with friends and family, and boost productivity at work. While many of these apps can introduce risks if not used properly in business settings, Wickr combines end-to-end encryption with data-retention capabilities to help you achieve security and compliance goals. Incorporating Wickr into a comprehensive approach to secure enterprise messaging that includes clear policies and security awareness training can help you to accelerate collaboration, while protecting your organization’s data.

To learn more and get started, visit the AWS Wickr webpage, or contact us.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Tanvi Jain

Tanvi is a Senior Technical Product Manager at AWS, based in New York. She focuses on building security-first features for customers, and is passionate about improving collaboration by building technology that is easy to use, scalable, and interoperable.

Accelerating JVM cryptography with Amazon Corretto Crypto Provider 2

Post Syndicated from Will Childs-Klein original https://aws.amazon.com/blogs/security/accelerating-jvm-cryptography-with-amazon-corretto-crypto-provider-2/

Earlier this year, Amazon Web Services (AWS) released Amazon Corretto Crypto Provider (ACCP) 2, a cryptography provider built by AWS for Java virtual machine (JVM) applications. ACCP 2 delivers comprehensive performance enhancements, with some algorithms (such as elliptic curve key generation) seeing a greater than 13-fold improvement over ACCP 1. The new release also brings official support for the AWS Graviton family of processors. In this post, I’ll discuss a use case for ACCP, then review performance benchmarks to illustrate the performance gains. Finally, I’ll show you how to get started using ACCP 2 in applications today.

This release changes the backing cryptography library for ACCP from OpenSSL (used in ACCP 1) to the AWS open source cryptography library, AWS libcrypto (AWS-LC). AWS-LC has extensive formal verification, as well as traditional testing, to assure the correctness of cryptography that it provides. While AWS-LC and OpenSSL are largely compatible, there are some behavioral differences that required the ACCP major version increment to 2.

The move to AWS-LC also allows ACCP to leverage performance optimizations in AWS-LC for modern processors. I’ll illustrate the ACCP 2 performance enhancements through the use case of establishing a secure communications channel with Transport Layer Security version 1.3 (TLS 1.3). Specifically, I’ll examine cryptographic components of the connection’s initial phase, known as the handshake. TLS handshake latency particularly matters for large web service providers, but reducing the time it takes to perform various cryptographic operations is an operational win for any cryptography-intensive workload.

TLS 1.3 requires ephemeral key agreement, which means that a new key pair is generated and exchanged for every connection. During the TLS handshake, each party generates an ephemeral elliptic curve key pair, exchanges public keys using Elliptic Curve Diffie-Hellman (ECDH), and agrees on a shared secret. Finally, the client authenticates the server by verifying the Elliptic Curve Digital Signature Algorithm (ECDSA) signature in the certificate presented by the server after key exchange. All of this needs to happen before you can send data securely over the connection, so these operations directly impact handshake latency and must be fast.
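The three operations map directly onto standard JCA calls, which is also how you would exercise them in a microbenchmark. Here is a rough sketch over secp384r1; it is illustrative only, not the s2n-tls implementation or the benchmark harness used below.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.security.spec.ECGenParameterSpec;
import javax.crypto.KeyAgreement;

public class HandshakePrimitives {
    public static void main(String[] args) throws Exception {
        // 1. Ephemeral elliptic curve key generation (one pair per party, per connection)
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp384r1"));
        KeyPair client = kpg.generateKeyPair();
        KeyPair server = kpg.generateKeyPair();

        // 2. ECDH key agreement on the exchanged public keys
        KeyAgreement ka = KeyAgreement.getInstance("ECDH");
        ka.init(client.getPrivate());
        ka.doPhase(server.getPublic(), true);
        byte[] sharedSecret = ka.generateSecret();

        // 3. ECDSA: the server signs; the client verifies the certificate signature
        byte[] transcript = "handshake transcript".getBytes(StandardCharsets.UTF_8);
        Signature signer = Signature.getInstance("SHA384withECDSA");
        signer.initSign(server.getPrivate());
        signer.update(transcript);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA384withECDSA");
        verifier.initVerify(server.getPublic());
        verifier.update(transcript);
        System.out.println("verified: " + verifier.verify(sig)
                + ", secret bytes: " + sharedSecret.length);
    }
}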

Figure 1 shows benchmarks for the three elliptic curve algorithms that implement the TLS 1.3 handshake: elliptic curve key generation (up to 1,298% latency improvement in ACCP 2.0 over ACCP 1.6), ECDH key agreement (up to 858% latency improvement), and ECDSA digital signature verification (up to 260% latency improvement). These algorithms were benchmarked over three common elliptic curves with different key sizes on both ACCP 1 and ACCP 2. The choice of elliptic curve determines the size of the key used or generated by the algorithm, and key size correlates to performance. The following benchmarks were measured under the Amazon Corretto 11 JDK on a c7g.large instance running Amazon Linux with a Graviton 3 processor.

Figure 1: Percentage improvement of ACCP 2.0 over 1.6 performance benchmarks on c7g.large Amazon Linux Graviton 3

The performance improvements due to the optimization of secp384r1 in AWS-LC are particularly noteworthy.

Getting started

Whether you’re introducing ACCP to your project or upgrading from ACCP 1, start the onboarding process for ACCP 2 by updating your dependency manager configuration in your development or testing environment. The Maven and Gradle examples below assume that you’re using Linux on an ARM64 processor. If you’re using an x86 processor, substitute linux-x86_64 for linux-aarch64. After you’ve performed this update, sync your application’s dependencies and install ACCP in your JVM process. ACCP can be installed either by specifying our recommended security.properties file in your JVM invocation or programmatically at runtime. The following sections provide more details about all of these steps.

After ACCP has been installed, the Java Cryptography Architecture (JCA) will look for cryptographic implementations in ACCP first before moving on to other providers. So, as long as your application and dependencies obtain algorithms supported by ACCP from the JCA, your application should gain the benefits of ACCP 2 without further configuration or code changes.

Maven

If you’re using Maven to manage dependencies, add or update the following dependency configuration in your pom.xml file.

<dependency>
  <groupId>software.amazon.cryptools</groupId>
  <artifactId>AmazonCorrettoCryptoProvider</artifactId>
  <version>[2.0,3.0)</version>
  <classifier>linux-aarch64</classifier>
</dependency>

Gradle

For Gradle, add or update the following dependency in your build.gradle file.

dependencies {
    implementation 'software.amazon.cryptools:AmazonCorrettoCryptoProvider:2.+:linux-aarch64'
}

Install through security properties

After updating your dependency manager, you’ll need to install ACCP. You can install ACCP using security properties as described in our GitHub repository. This installation method is a good option for users who have control over their JVM invocation.

Install programmatically

If you don’t have control over your JVM invocation, you can install ACCP programmatically. For Java applications, add the following code to your application’s initialization logic (optionally performing a health check).

com.amazon.corretto.crypto.provider.AmazonCorrettoCryptoProvider.install();
com.amazon.corretto.crypto.provider.AmazonCorrettoCryptoProvider.INSTANCE.assertHealthy();
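To confirm that ACCP is actually serving your application’s cryptographic requests, you can check which provider the JCA resolves for an algorithm ACCP supports. A small hedged check follows; we expect the printed name to be ACCP’s provider name rather than a default JDK provider, but verify the exact string against your installed version.

java.security.MessageDigest md = java.security.MessageDigest.getInstance("SHA-256");
System.out.println(md.getProvider().getName()); // expect ACCP's provider name, not a default JDK provider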

Migrating from ACCP 1 to ACCP 2

Although the migration path to version 2 is straightforward for most ACCP 1 users, ACCP 2 ends support for some outdated algorithms: finite-field Diffie-Hellman key agreement, finite-field DSA signatures, and a National Institute of Standards and Technology (NIST)-specified random number generator. The removal of these algorithms is not backwards compatible, so you’ll need to check your code for their usage and, if you do find usage, either migrate to more modern algorithms provided by ACCP 2 or obtain implementations from a different provider, such as one of the default providers that ships with the JDK.

Check your code

Search for unsupported algorithms in your application code by their JCA names:

  • DH: Finite-field Diffie-Hellman key agreement
  • DSA: Finite-field Digital Signature Algorithm
  • NIST800-90A/AES-CTR-256: NIST-specified random number generator

Use ACCP 2 supported algorithms

Where possible, use these supported algorithms in your application code (see the sketch after this list):

  • ECDH for key agreement instead of DH
  • ECDSA or RSA for signatures instead of DSA
  • Default SecureRandom instead of NIST800-90A/AES-CTR-256
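In code, the swap is usually a one-line change per call site. A hedged sketch of the replacements, using only standard JCA algorithm names (no ACCP-specific APIs are involved):

// Before (removed in ACCP 2):              After (supported by ACCP 2):
// KeyAgreement.getInstance("DH")      ->   KeyAgreement.getInstance("ECDH")
// Signature.getInstance("DSA")        ->   Signature.getInstance("SHA256withECDSA")
// SecureRandom.getInstance(
//     "NIST800-90A/AES-CTR-256")      ->   new SecureRandom()
javax.crypto.KeyAgreement ecdh = javax.crypto.KeyAgreement.getInstance("ECDH");
java.security.Signature ecdsa = java.security.Signature.getInstance("SHA256withECDSA");
java.security.SecureRandom rng = new java.security.SecureRandom();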

If your use case requires the now-unsupported algorithms, check whether any of those algorithms are explicitly requested from ACCP.

  • If ACCP is not explicitly named as the provider, then you should be able to transparently fall back to another provider without a code change.
  • If ACCP is explicitly named as the provider, then remove that provider specification and register a different provider that offers the algorithm. This will allow the JCA to obtain an implementation from another registered provider without breaking backwards compatibility in your application.

Test your code

Some behavioral differences exist between ACCP 2 and other providers, including ACCP 1 (backed by OpenSSL). After onboarding or migrating, it’s important that you test your application code thoroughly to identify potential incompatibilities between cryptography providers.

Conclusion

Integrate ACCP 2 into your application today to benefit from AWS-LC’s security assurance and performance improvements. For a full list of changes, see the ACCP CHANGELOG on GitHub. Linux builds of ACCP 2 are now available on Maven Central for aarch64 and x86-64 processor architectures. If you encounter any issues with your integration, or have any feature suggestions, please reach out to us on GitHub by filing an issue.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Will Childs-Klein

Will is a Senior Software Engineer at AWS Cryptography, where he focuses on developing cryptographic libraries, optimizing software performance, and deploying post-quantum cryptography. Previously at AWS, he worked on data storage and transfer services including Storage Gateway, Elastic File System, and DataSync.

You Can’t Rush Post-Quantum-Computing Cryptography Standards

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/you-cant-rush-post-quantum-computing-standards.html

I just read an article complaining that NIST is taking too long in finalizing its post-quantum-computing cryptography standards.

This process has been going on since 2016, and since that time there has been a huge increase in quantum technology and an equally large increase in quantum understanding and interest. Yet seven years later, we have only four algorithms, although last week NIST announced that a number of other candidates are under consideration, a process that is expected to take “several years.”

The delay in developing quantum-resistant algorithms is especially troubling given the time it will take to get those products to market. It generally takes four to six years with a new standard for a vendor to develop an ASIC to implement the standard, and it then takes time for the vendor to get the product validated, which seems to be taking a troubling amount of time.

Yes, the process will take several years, and you really don’t want to rush it. I wrote this last year:

Ian Cassels, British mathematician and World War II cryptanalyst, once said that “cryptography is a mixture of mathematics and muddle, and without the muddle the mathematics can be used against you.” This mixture is particularly difficult to achieve with public-key algorithms, which rely on the mathematics for their security in a way that symmetric algorithms do not. We got lucky with RSA and related algorithms: their mathematics hinge on the problem of factoring, which turned out to be robustly difficult. Post-quantum algorithms rely on other mathematical disciplines and problems—code-based cryptography, hash-based cryptography, lattice-based cryptography, multivariate cryptography, and so on—whose mathematics are both more complicated and less well-understood. We’re seeing these breaks because those core mathematical problems aren’t nearly as well-studied as factoring is.

[…]

As the new cryptanalytic results demonstrate, we’re still learning a lot about how to turn hard mathematical problems into public-key cryptosystems. We have too much math and an inability to add more muddle, and that results in algorithms that are vulnerable to advances in mathematics. More cryptanalytic results are coming, and more algorithms are going to be broken.

As to the long time it takes to get new encryption products to market, work on shortening it:

The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required.

Whatever NIST comes up with, expect that it will get broken sooner than we all want. It’s the nature of these trap-door functions we’re using for public-key cryptography.

Backdoor in TETRA Police Radios

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/backdoor-in-tetra-police-radios.html

Seems that there is a deliberate backdoor in the twenty-year-old TErrestrial Trunked RAdio (TETRA) standard used by police forces around the world.

The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.

The researchers said they worked around this limitation by purchasing a TETRA-powered radio from eBay. In order to then access the cryptographic component of the radio itself, Wetzels said the team found a vulnerability in an interface of the radio.

[…]

Most interesting are the researchers’ findings of what they describe as the backdoor in TEA1. Ordinarily, radios using TEA1 used an 80-bit key. But Wetzels said the team found a “secret reduction step” which dramatically lowers the amount of entropy the initial key offered. An attacker who followed this step would then be able to decrypt intercepted traffic with consumer-level hardware and a cheap software-defined radio dongle.

Looks like the encryption algorithm was intentionally weakened by intelligence agencies to facilitate easy eavesdropping.

Specifically on the researchers’ claims of a backdoor in TEA1, Boyer added “At this time, we would like to point out that the research findings do not relate to any backdoors. The TETRA security standards have been specified together with national security agencies and are designed for and subject to export control regulations which determine the strength of the encryption.”

And I would like to point out that that’s the very definition of a backdoor.

Why aren’t we done with secret, proprietary cryptography? It’s just not a good idea.

Details of the security analysis. Another news article.

Three ways to accelerate incident response in the cloud: insights from re:Inforce 2023

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/three-ways-to-accelerate-incident-response-in-the-cloud-insights-from-reinforce-2023/

AWS re:Inforce took place in Anaheim, California, on June 13–14, 2023. AWS customers, partners, and industry peers participated in hundreds of technical and non-technical security-focused sessions across six tracks, an Expo featuring AWS experts and AWS Security Competency Partners, and keynote and leadership sessions.

The threat detection and incident response track showcased how AWS customers can get the visibility they need to help improve their security posture, identify issues before they impact business, and investigate and respond quickly to security incidents across their environment.

With dozens of service and feature announcements—and innumerable best practices shared by AWS experts, customers, and partners—distilling highlights is a challenge. From an incident response perspective, three key themes emerged.

Proactively detect, contextualize, and visualize security events

When it comes to effectively responding to security events, rapid detection is key. Among the launches announced during the keynote was the expansion of Amazon Detective finding groups to include Amazon Inspector findings in addition to Amazon GuardDuty findings.

Detective, GuardDuty, and Inspector are part of a broad set of fully managed AWS security services that help you identify potential security risks, so that you can respond quickly and confidently.

Using machine learning, Detective finding groups can help you conduct faster investigations, identify the root cause of events, and map to the MITRE ATT&CK framework to quickly run security issues to ground. The finding group visualization panel shown in the following figure displays findings and entities involved in a finding group. This interactive visualization can help you analyze, understand, and triage the impact of finding groups.

Figure 1: Detective finding groups visualization panel

With the expanded threat and vulnerability findings announced at re:Inforce, you can prioritize where to focus your time by answering questions such as “was this EC2 instance compromised because of a software vulnerability?” or “did this GuardDuty finding occur because of unintended network exposure?”

In the session Streamline security analysis with Amazon Detective, AWS Principal Product Manager Rich Vorwaller, AWS Senior Security Engineer Rima Tanash, and AWS Program Manager Jordan Kramer demonstrated how to use graph analysis techniques and machine learning in Detective to identify related findings and resources, and investigate them together to accelerate incident analysis.

In addition to Detective, you can also use Amazon Security Lake to contextualize and visualize security events. Security Lake became generally available on May 30, 2023, and several re:Inforce sessions focused on how you can use this new service to assist with investigations and incident response.

As detailed in the following figure, Security Lake automatically centralizes security data from AWS environments, SaaS providers, on-premises environments, and cloud sources into a purpose-built data lake stored in your account. Security Lake makes it simpler to analyze security data, gain a more comprehensive understanding of security across an entire organization, and improve the protection of workloads, applications, and data. Security Lake automates the collection and management of security data from multiple accounts and AWS Regions, so you can use your preferred analytics tools while retaining complete control and ownership over your security data. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard. With OCSF support, the service normalizes and combines security data from AWS and a broad range of enterprise security data sources.

Figure 2: How Security Lake works

To date, 57 AWS security partners have announced integrations with Security Lake, and we now have more than 70 third-party sources, 16 analytics subscribers, and 13 service partners.

In Gaining insights from Amazon Security Lake, AWS Principal Solutions Architect Mark Keating and AWS Security Engineering Manager Keith Gilbert detailed how to get the most out of Security Lake. Addressing questions such as, “How do I get access to the data?” and “What tools can I use?,” they demonstrated how analytics services and security information and event management (SIEM) solutions can connect to and use data stored within Security Lake to investigate security events and identify trends across an organization. They emphasized how bringing together logs in multiple formats and normalizing them into a single format empowers security teams to gain valuable context from security data, and more effectively respond to events. Data can be queried with Amazon Athena, or pulled by Amazon OpenSearch Service or your SIEM system directly from Security Lake.
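If you script your investigations, querying the OCSF-normalized tables with Athena takes only a few lines. The following is a minimal boto3 sketch; the database, table, column, and bucket names are placeholders (actual Security Lake table names vary by Region and data source), so adjust them to match your deployment.

import boto3

# Minimal sketch: run an Athena query against Security Lake's
# OCSF-normalized tables. All names below are placeholders.
athena = boto3.client("athena", region_name="us-west-2")

query = """
SELECT time, activity_name, src_endpoint.ip AS source_ip
FROM amazon_security_lake_table_us_west_2_vpc_flow_1_0
LIMIT 100
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_west_2"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print("Query execution ID:", response["QueryExecutionId"])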

Build your security data lake with Amazon Security Lake featured AWS Product Manager Jonathan Garzon, AWS Product Solutions Architect Ross Warren, and Global CISO of Interpublic Group (IPG) Troy Wilkinson demonstrating how Security Lake helps address common challenges associated with analyzing enterprise security data, and detailing how IPG is using the service. Wilkinson noted that IPG’s objective is to bring security data together in one place, improve searches, and gain insights from their data that they haven’t been able to before.

“With Security Lake, we found that it was super simple to bring data in. Not just the third-party data and Amazon data, but also our on-premises data from custom apps that we built.” — Troy Wilkinson, global CISO, Interpublic Group

Use automation and machine learning to reduce mean time to response

Incident response automation can help free security analysts from repetitive tasks, so they can spend their time identifying and addressing high-priority security issues.

In How LLA reduces incident response time with AWS Systems Manager, telecommunications provider Liberty Latin America (LLA) detailed how they implemented a security framework to detect security issues and automate incident response in more than 180 AWS accounts accessed by internal stakeholders and third-party partners by using AWS Systems Manager Incident Manager, AWS Organizations, Amazon GuardDuty, and AWS Security Hub.

LLA operates in over 20 countries across Latin America and the Caribbean. After completing multiple acquisitions, LLA needed a centralized security operations team to handle incidents and notify the teams responsible for each AWS account. They used GuardDuty, Security Hub, and Systems Manager Incident Manager to automate and streamline detection and response, and they configured the services to initiate alerts whenever there was an issue requiring attention.

Speaking alongside AWS Principal Solutions Architect Jesus Federico and AWS Principal Product Manager Sarah Holberg, LLA Senior Manager of Cloud Services Joaquin Cameselle noted that when GuardDuty identifies a critical issue, it generates a new finding in Security Hub. This finding is then forwarded to Systems Manager Incident Manager through an Amazon EventBridge rule. This configuration helps ensure the involvement of the appropriate individuals associated with each account.
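The following sketch shows the shape of such an EventBridge rule. It is an illustration rather than LLA’s actual implementation: the rule name is made up, and the response plan and role ARNs are placeholders you would replace with your own.

import json
import boto3

events = boto3.client("events")

# Match high-severity GuardDuty findings that Security Hub imports.
pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "ProductName": ["GuardDuty"],
            "Severity": {"Label": ["HIGH", "CRITICAL"]},
        }
    },
}

events.put_rule(
    Name="securityhub-critical-to-incident-manager",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Route matching findings to an Incident Manager response plan.
events.put_targets(
    Rule="securityhub-critical-to-incident-manager",
    Targets=[{
        "Id": "IncidentManagerResponsePlan",
        "Arn": "arn:aws:ssm-incidents::111122223333:response-plan/security-critical",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeIncidentManagerRole",
    }],
)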

“We have deployed a security framework in Liberty Latin America to identify security issues and streamline incident response across over 180 AWS accounts. The framework, which leverages AWS Systems Manager Incident Manager, Amazon GuardDuty, and AWS Security Hub, enabled us to detect and respond to incidents with greater efficiency. As a result, we have reduced our reaction time by 90%, ensuring prompt engagement of the appropriate teams for each AWS account and facilitating visibility of issues for the central security team.” — Joaquin Cameselle, senior manager, cloud services, Liberty Latin America

How Citibank (Citi) advanced their containment capabilities through automation outlined how the National Institute of Standards and Technology (NIST) Incident Response framework is applied to AWS services, and highlighted Citi’s implementation of a highly scalable cloud incident response framework designed to support the 28 AWS services in their cloud environment.

After describing the four phases of the incident response process—preparation and prevention; detection and analysis; containment, eradication, and recovery; and post-incident activity—AWS ProServe Global Financial Services Senior Engagement Manager Harikumar Subramonion noted that, to fully benefit from the cloud, you need to embrace automation. Automation benefits the third phase of the incident response process by speeding up containment, and reducing mean time to response.

Citibank Head of Cloud Security Operations Elvis Velez and Vice President of Cloud Security Damien Burks described how Citi built the Cloud Containment Automation Framework (CCAF) from the ground up by using AWS Step Functions and AWS Lambda, enabling them to respond to events 24/7 without human error, and reduce the time it takes to contain resources from 4 hours to 15 minutes. Velez described how Citi uses adversary emulation exercises that use the MITRE ATT&CK Cloud Matrix to simulate realistic attacks on AWS environments, and continuously validate their ability to effectively contain incidents.
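To illustrate the pattern (this is not Citi’s CCAF code), a containment step orchestrated by Step Functions might look like the following Lambda handler, which quarantines an EC2 instance by swapping its security groups. The quarantine group ID and the event field name are hypothetical, and a production framework would add steps such as snapshotting, tagging, and credential revocation.

import boto3

ec2 = boto3.client("ec2")

# Placeholder: a security group with no inbound or outbound rules.
QUARANTINE_SG = "sg-0123456789abcdef0"

def handler(event, context):
    """Illustrative containment step: cut an EC2 instance off from the
    network by replacing its security groups with a quarantine group.
    A Step Functions state machine could invoke this after triage."""
    instance_id = event["instance_id"]
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[QUARANTINE_SG],
    )
    return {"contained": instance_id}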

Innovate and do more with less

Security operations teams are often understaffed, making it difficult to keep up with alerts. According to data from CyberSeek, there are currently 69 workers available for every 100 cybersecurity job openings.

Effectively evaluating security and compliance posture is critical, despite resource constraints. In Centralizing security at scale with Security Hub and Intuit’s experience, AWS Senior Solutions Architect Craig Simon, AWS Senior Security Hub Product Manager Dora Karali, and Intuit Principal Software Engineer Matt Gravlin discussed how to ease security management with Security Hub. Fortune 500 financial software provider Intuit has approximately 2,000 AWS accounts, 10 million AWS resources, and receives 20 million findings a day from AWS services through Security Hub. Gravlin detailed Intuit’s Automated Compliance Platform (ACP), which combines Security Hub and AWS Config with an internal compliance solution to help Intuit reduce audit timelines, effectively manage remediation, and make compliance more consistent.

“By using Security Hub, we leveraged AWS expertise with their regulatory controls and best practice controls. It helped us keep up to date as new controls are released on a regular basis. We like Security Hub’s aggregation features that consolidate findings from other AWS services and third-party providers. I personally call it the super aggregator. A key component is the Security Hub to Amazon EventBridge integration. This allowed us to stream millions of findings on a daily basis to be inserted into our ACP database.” — Matt Gravlin, principal software engineer, Intuit

At AWS re:Inforce, we launched a new Security Hub capability for automating actions to update findings. You can now use rules to automatically update various fields in findings that match defined criteria. This allows you to automatically suppress findings, update the severity of findings according to organizational policies, change the workflow status of findings, and add notes. With automation rules, Security Hub provides a simplified way to build automations directly from the Security Hub console and API. This reduces repetitive work for cloud security and DevOps engineers and can reduce mean time to response.
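As a sketch of what an automation rule looks like in practice, the following boto3 call suppresses informational findings from a sandbox account and leaves a note. The account ID, rule name, and note text are placeholders.

import boto3

securityhub = boto3.client("securityhub")

# Sketch: suppress INFORMATIONAL findings from a sandbox account
# and record why. The account ID below is a placeholder.
securityhub.create_automation_rule(
    RuleName="suppress-sandbox-informational",
    RuleOrder=1,
    RuleStatus="ENABLED",
    Description="Suppress informational findings in the sandbox account",
    Criteria={
        "AwsAccountId": [{"Value": "111122223333", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "INFORMATIONAL", "Comparison": "EQUALS"}],
    },
    Actions=[{
        "Type": "FINDING_FIELDS_UPDATE",
        "FindingFieldsUpdate": {
            "Workflow": {"Status": "SUPPRESSED"},
            "Note": {
                "Text": "Auto-suppressed: sandbox account",
                "UpdatedBy": "automation-rule",
            },
        },
    }],
)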

In Continuous innovation in AWS detection and response services, AWS Worldwide Security Specialist Senior Manager Himanshu Verma and GuardDuty Senior Manager Ryan Holland highlighted new features that can help you gain actionable insights that you can use to enhance your overall security posture. After mapping AWS security capabilities to the core functions of the NIST Cybersecurity Framework, Verma and Holland provided an overview of AWS threat detection and response services that included a technical demonstration.

Bolstering incident response with AWS Wickr enterprise integrations highlighted how incident responders can collaborate securely during a security event, even on a compromised network. AWS Senior Security Specialist Solutions Architect Wes Wood demonstrated an innovative approach to incident response communications by detailing how you can integrate the end-to-end encrypted collaboration service AWS Wickr Enterprise with GuardDuty and AWS WAF. Using Wickr Bots, you can build integrated workflows that incorporate GuardDuty and third-party findings into a more secure, out-of-band communication channel for dedicated teams.

Evolve your incident response maturity

AWS re:Inforce featured many more highlights on incident response, including How to run security incident response in your Amazon EKS environment and Investigating incidents with Amazon Security Lake and Jupyter notebooks code talks, as well as the announcement of our Cyber Insurance Partners program. Content presented throughout the conference made one thing clear: AWS is working harder than ever to help you gain the insights that you need to strengthen your organization’s security posture, and accelerate incident response in the cloud.

To watch AWS re:Inforce sessions on demand, see the AWS re:Inforce playlists on YouTube.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Himanshu Verma

Himanshu is a Worldwide Specialist for AWS Security Services. In this role, he leads the go-to-market creation and execution for AWS Security Services, field enablement, and strategic customer advisement. Prior to AWS, he held several leadership roles in product management, engineering, and development, working on various identity, information security, and data protection technologies. He obsesses over brainstorming disruptive ideas, venturing outdoors, photography, and trying various “hole in the wall” food and drinking establishments around the globe.

Jesus Federico

Jesus is a Principal Solutions Architect for AWS in the telecommunications vertical, working to provide guidance and technical assistance to communication service providers on their cloud journey. He supports CSPs in designing and implementing secure, resilient, scalable, and high-performance applications in the cloud.

Power LED Side-Channel Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/power-led-side-channel-attack.html

This is a clever new side-channel attack:

The first attack uses an Internet-connected surveillance camera to take a high-speed video of the power LED on a smart card reader—or of an attached peripheral device—during cryptographic operations. This technique allowed the researchers to pull a 256-bit ECDSA key off the same government-approved smart card used in Minerva. The other allowed the researchers to recover the private SIKE key of a Samsung Galaxy S8 phone by training the camera of an iPhone 13 on the power LED of a USB speaker connected to the handset, in a similar way to how Hertzbleed pulled SIKE keys off Intel and AMD CPUs.

There are lots of limitations:

When the camera is 60 feet away, the room lights must be turned off, but they can be turned on if the surveillance camera is at a distance of about 6 feet. (An attacker can also use an iPhone to record the smart card reader power LED.) The video must be captured for 65 minutes, during which the reader must constantly perform the operation.

[…]

The attack assumes there is an existing side channel that leaks power consumption, timing, or other physical manifestations of the device as it performs a cryptographic operation.

So don’t expect this attack to recover keys in the real world anytime soon. But, still, really nice work.

More details from the researchers.

Post-quantum hybrid SFTP file transfers using AWS Transfer Family

Post Syndicated from Panos Kampanakis original https://aws.amazon.com/blogs/security/post-quantum-hybrid-sftp-file-transfers-using-aws-transfer-family/

Amazon Web Services (AWS) prioritizes security, privacy, and performance. Encryption is a vital part of privacy. To help provide long-term protection of encrypted data, AWS has been introducing quantum-resistant key exchange in common transport protocols used by AWS customers. In this blog post, we introduce post-quantum hybrid key exchange with Kyber, the National Institute of Standards and Technology’s chosen quantum-resistant key encapsulation algorithm, in the Secure Shell (SSH) protocol. We explain why it’s important and show you how to use it with Secure File Transfer Protocol (SFTP) file transfers in AWS Transfer Family, the AWS file transfer service.

Why use PQ-hybrid key establishment in SSH

Although not available today, a cryptanalytically relevant quantum computer (CRQC) could theoretically break the standard public key algorithms currently in use. Today’s network traffic could be recorded now and then decrypted in the future with a CRQC. This is known as harvest-now-decrypt-later.

With such concerns in mind, the U.S. Congress recently passed the Quantum Computing Cybersecurity Preparedness Act, and the White House issued National Security Memoranda (NSM-8, NSM-10) to prepare for a timely and equitable transition to quantum-resistant cryptography. The National Security Agency (NSA) also announced its quantum-resistant algorithm requirements and timelines in its CNSA 2.0 release. Many other governments, including Canada, Germany, and France, and organizations like ISO/IEC and IEEE have also been prioritizing preparations and experiments with quantum-resistant cryptography technologies.

AWS is migrating to post-quantum cryptography. AWS Key Management Service (AWS KMS), AWS Certificate Manager (ACM), and AWS Secrets Manager TLS endpoints already include support for post-quantum hybrid (PQ-hybrid) key establishment with Elliptic Curve Diffie-Hellman (ECDH) and Kyber, the key encapsulation mechanism (KEM) chosen by NIST’s Post-Quantum Cryptography (PQC) project. Although PQ-hybrid TLS 1.3 key exchange has received a lot of attention, there has been limited work on SSH.

SSH is a protocol widely used by AWS customers for various tasks ranging from moving files between machines to managing Amazon Elastic Compute Cloud (Amazon EC2) instances. Considering the importance of the SSH protocol, its ubiquitous use, and the data it transfers, we introduced PQ-hybrid key exchange with Kyber in it.

How PQ-hybrid key exchange works in Transfer Family SFTP

AWS just announced support for post-quantum key exchange in SFTP file transfers in AWS Transfer Family. Transfer Family securely scales business-to-business file transfers to AWS Storage services using SFTP and other protocols. SFTP is a secure version of the File Transfer Protocol (FTP) that runs over SSH. The post-quantum key exchange support of Transfer Family raises the security bar for data transfers over SFTP.

PQ-hybrid key establishment in SSH introduces post-quantum KEMs used in conjunction with classical key exchange. The client and server still do an ECDH key exchange. Additionally, the server encapsulates a post-quantum shared secret to the client’s post-quantum KEM public key, which is advertised in the client’s SSH key exchange message. This strategy combines the high assurance of a classical key exchange with the security of the proposed post-quantum key exchanges, to help ensure that the handshakes are protected as long as the ECDH or the post-quantum shared secret cannot be broken.
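Conceptually, the combination can be sketched in a few lines of Python. This is an illustration only, not the wire format: the exact encoding and hash are defined by the SSH PQ-hybrid key exchange draft, and SHA-384 is shown here simply because it mirrors the “sha384” in the Kyber-768 method name.

import hashlib

def hybrid_shared_secret(kem_secret: bytes, ecdh_secret: bytes) -> bytes:
    # Conceptual only: derive the session secret from both the
    # post-quantum KEM secret and the classical ECDH secret, so an
    # attacker must break both to compromise the handshake.
    return hashlib.sha384(kem_secret + ecdh_secret).digest()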

More specifically, the PQ-hybrid key exchange SFTP support in Transfer Family includes combining post-quantum Kyber-512, Kyber-768, and Kyber-1024, with ECDH over P256, P384, P521, or Curve25519 curves. The corresponding SSH key exchange methods — ecdh-nistp256-kyber-512r3-sha256-d00@openquantumsafe.org, ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org, ecdh-nistp521-kyber-1024r3-sha512-d00@openquantumsafe.org, and x25519-kyber-512r3-sha256-d00@openquantumsafe.org — are specified in the PQ-hybrid SSH key exchange draft.

Why Kyber?

AWS is committed to supporting standardized interoperable algorithms, so we wanted to introduce Kyber to SSH. Kyber was chosen for standardization by NIST’s Post-Quantum Cryptography (PQC) project. Some standards bodies are already integrating Kyber in various protocols.

We also wanted to encourage interoperability by adopting, making available, and submitting for standardization, a draft that combines Kyber with NIST-approved curves like P256 for SSH. To help enhance security for our customers, the AWS implementation of the PQ key exchange in SFTP and SSH follows that draft.

Interoperability

The new key exchange methods — ecdh-nistp256-kyber-512r3-sha256-d00@openquantumsafe.org, ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org, ecdh-nistp521-kyber-1024r3-sha512-d00@openquantumsafe.org, and x25519-kyber-512r3-sha256-d00@openquantumsafe.org — are supported in two new security policies in Transfer Family. These might change as the draft evolves towards standardization or when NIST ratifies the Kyber algorithm.

Is PQ-hybrid SSH key exchange aligned with cryptographic requirements like FIPS 140?

For customers that require FIPS compliance, Transfer Family provides FIPS cryptography in SSH by using AWS-LC, an open-source cryptographic library. The PQ-hybrid key exchange methods supported in the TransferSecurityPolicy-PQ-SSH-FIPS-Experimental-2023-04 policy in Transfer Family continue to meet FIPS requirements as described in SP 800-56Cr2 (section 2). BSI Germany and ANSSI France also recommend such PQ-hybrid key exchange methods.

How to test PQ SFTP with Transfer Family

To enable PQ-hybrid SFTP in Transfer Family, you need to enable one of the two security policies that support PQ-hybrid key exchange in your SFTP-enabled endpoint. You can choose the security policy when you create a new SFTP server endpoint in Transfer Family, as explained in the documentation, or by editing the Cryptographic algorithm options in an existing SFTP endpoint. The following figure shows an example of the AWS Management Console where you update the security policy.

Figure 1: Use the console to set the PQ-hybrid security policy in the Transfer Family endpoint

The security policy names that support PQ key exchange in Transfer Family are TransferSecurityPolicy-PQ-SSH-Experimental-2023-04 and TransferSecurityPolicy-PQ-SSH-FIPS-Experimental-2023-04. For more details on Transfer Family policies, see Security policies for AWS Transfer Family.
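If you script your infrastructure, you can also apply the policy with a single API call. The following is a minimal boto3 sketch, assuming an existing server; replace the server ID with your own.

import boto3

transfer = boto3.client("transfer")

# Apply the PQ-hybrid security policy to an existing SFTP endpoint.
transfer.update_server(
    ServerId="s-9999999999999999999",
    SecurityPolicyName="TransferSecurityPolicy-PQ-SSH-Experimental-2023-04",
)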

After you choose the right PQ security policy in your SFTP Transfer Family endpoint, you can experiment with post-quantum SFTP in Transfer Family with an SFTP client that supports PQ-hybrid key exchange by following the guidance in the aforementioned draft specification. AWS tested and confirmed interoperability between the Transfer Family PQ-hybrid key exchange in SFTP and the SSH implementations of our collaborators on the NIST NCCOE Post-Quantum Migration project, namely OQS OpenSSH and wolfSSH.

OQS OpenSSH client

OQS OpenSSH is an open-source fork of OpenSSH that adds quantum-resistant cryptography to SSH by using liboqs. liboqs is an open-source C library that implements quantum-resistant cryptographic algorithms. OQS OpenSSH and liboqs are part of the Open Quantum Safe (OQS) project.

To test PQ-hybrid key exchange in Transfer Family SFTP with OQS OpenSSH, you first need to build OQS OpenSSH, as explained in the project’s README. Then you can run the example SFTP client to connect to your AWS SFTP endpoint (for example, s-9999999999999999999.server.transfer.us-west-2.amazonaws.com) by using the PQ-hybrid key exchange methods, as shown in the following command. Make sure to replace <user_priv_key_PEM_file> with the PEM-encoded SFTP user private key used for user authentication and <username> with the username, and update the SFTP-enabled endpoint with the one that you created in Transfer Family.

./sftp -S ./ssh -v -o \
   KexAlgorithms=ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org \
   -i <user_priv_key_PEM_file> \
   <username>@s-9999999999999999999.server.transfer.us-west-2.amazonaws.com

wolfSSH client

wolfSSH is an SSHv2 client and server library that uses wolfCrypt for its cryptography. For more details and a link to download, see wolfSSL’s product licensing information.

To test PQ-hybrid key exchange in Transfer Family SFTP with wolfSSH, you first need to build wolfSSH. When built with liboqs, the open-source library that implements post-quantum algorithms, wolfSSH automatically negotiates ecdh-nistp256-kyber-512r3-sha256-d00@openquantumsafe.org. Run the example SFTP client to connect to your AWS SFTP server endpoint, as shown in the following command. Make sure to replace <user_priv_key_DER_file> with the DER-encoded SFTP user private key used for user authentication, <user_public_key_PEM_file> with the corresponding PEM-formatted SSH user public key, and <username> with the username. Also replace the s-9999999999999999999.server.transfer.us-west-2.amazonaws.com SFTP endpoint with the one that you created in Transfer Family.

./examples/sftpclient/wolfsftp -p 22 -u <username> \
      -i <user_priv_key_DER_file> -j <user_public_key_PEM_file> -h \
      s-9999999999999999999.server.transfer.us-west-2.amazonaws.com

As we migrate to a quantum-resistant future, we expect that more SFTP and SSH clients will add support for PQ-hybrid key exchanges that are standardized for SSH.

How to confirm PQ-hybrid key exchange in SFTP

To confirm that PQ-hybrid key exchange was used in an SSH connection for SFTP to Transfer Family, check the client output and optionally use packet captures.

OQS OpenSSH client

The client output (omitting irrelevant information for brevity) should look similar to the following:

$./sftp -S ./ssh -v -o KexAlgorithms=ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org -i panos_priv_key_PEM_file panos@s-9999999999999999999.server.transfer.us-west-2.amazonaws.com
OpenSSH_8.9-2022-01_p1, Open Quantum Safe 2022-08, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/lab/openssh/oqs-test/tmp/ssh_config
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: Connecting to s-9999999999999999999.server.transfer.us-west-2.amazonaws.com [xx.yy.zz..12] port 22.
debug1: Connection established.
[...]
debug1: Local version string SSH-2.0-OpenSSH_8.9-2022-01_
debug1: Remote protocol version 2.0, remote software version AWS_SFTP_1.1
debug1: compat_banner: no match: AWS_SFTP_1.1
debug1: Authenticating to s-9999999999999999999.server.transfer.us-west-2.amazonaws.com:22 as 'panos'
debug1: load_hostkeys: fopen /home/lab/.ssh/known_hosts2: No such file or directory
[...]
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: aes192-ctr MAC: [email protected] compression: none
debug1: kex: client->server cipher: aes192-ctr MAC: [email protected] compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: SSH2_MSG_KEX_ECDH_REPLY received
debug1: Server host key: ssh-ed25519 SHA256:BY3gNMHwTfjd4n2VuT4pTyLOk82zWZj4KEYEu7y4r/0
[...]
debug1: rekey out after 4294967296 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 4294967296 blocks
[...]
Authenticated to s-9999999999999999999.server.transfer.us-west-2.amazonaws.com ([xx.yy.zz..12]:22) using "publickey".
debug1: channel 0: new [client-session]
[...]
Connected to s-9999999999999999999.server.transfer.us-west-2.amazonaws.com. sftp>

The output shows that the client negotiated the PQ-hybrid ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org method and successfully established an SFTP session.

To view the negotiated PQ-hybrid key, you can use a packet capture in Wireshark or a similar network traffic analyzer. The key exchange method negotiation offered by the client should look similar to the following:

Figure 2: View the client proposed PQ-hybrid key exchange method in Wireshark

Figure 2 shows that the client is offering the PQ-hybrid key exchange method ecdh-nistp384-kyber-768r3-sha384-d00@openquantumsafe.org. The Transfer Family SFTP server negotiates the same method, and the client offers a PQ-hybrid public key.

Figure 3: View the client P384 ECDH and Kyber-768 public keys

As shown in Figure 3, the client sent 1281 bytes for the PQ-hybrid public key. These are the 97-byte uncompressed ECDH P384 public key and the 1184-byte Kyber-768 public key. The server response is structured similarly and includes the 97-byte P384 public key and the 1088-byte Kyber-768 ciphertext.

wolfSSH client

The client output (omitting irrelevant information for brevity) should look similar to the following:

$ ./examples/sftpclient/wolfsftp -p 22 -u panos -i panos_priv_key_DER_file -j panos_public_key_PEM_file -h s-9999999999999999999.server.transfer.us-west-2.amazonaws.com
[...]
2023-05-25 17:37:24 [DEBUG] SSH-2.0-wolfSSHv1.4.12
[...]
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = ecdh-nistp256-kyber-512r3-sha256-d00@openquantumsafe.org
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = unknown
2023-05-25 17:37:24 [DEBUG] DNL: name ID = diffie-hellman-group-exchange-sha256
[...]
2023-05-25 17:37:24 [DEBUG] connect state: SERVER_KEXINIT_DONE
[...]
2023-05-25 17:37:24 [DEBUG] connect state: CLIENT_KEXDH_INIT_SENT
[...]
2023-05-25 17:37:24 [DEBUG] Decoding MSGID_KEXDH_REPLY
2023-05-25 17:37:24 [DEBUG] Entering DoKexDhReply()
2023-05-25 17:37:24 [DEBUG] DKDR: Calling the public key check callback
Sample public key check callback
  public key = 0x24d011a
  public key size = 104
  ctx = s-9999999999999999999.server.transfer.us-west-2.amazonaws.com
2023-05-25 17:37:24 [DEBUG] DKDR: public key accepted
[...]
2023-05-25 17:37:26 [DEBUG] Entering wolfSSH_get_error()
2023-05-25 17:37:26 [DEBUG] Entering wolfSSH_get_error()
wolfSSH sftp>

The output shows that the client negotiated the PQ-hybrid ecdh-nistp256-kyber-512r3-sha256-d00@openquantumsafe.org method and successfully established a quantum-resistant SFTP session. A packet capture of this session would be very similar to the previous one.

Conclusion

In this blog post, we introduced the importance of both migrating to post-quantum cryptography and adopting standardized algorithms and protocols. We also shared our approach for bringing PQ-hybrid key exchange to SSH, and how you can use it today for SFTP file transfers with Transfer Family. Additionally, AWS employees are collaborating with other cryptography experts on a draft for PQ-hybrid SSH key exchange, which is the draft specification that Transfer Family follows.

If you have questions about how to use Transfer Family PQ key exchange, start a new thread in the Transfer Family for SFTP forum. If you want to learn more about post-quantum cryptography with AWS, contact the post-quantum cryptography team.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security, Identity, & Compliance re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Panos Kampanakis

Panos is a Principal Security Engineer in the AWS Cryptography organization. He has extensive experience in cybersecurity, applied cryptography, security automation, and vulnerability management. He has co-authored cybersecurity publications and participated in various security standards bodies to provide common interoperable protocols and languages for security information sharing, cryptography, and PKI. Currently, he works with engineers and industry standards partners to provide cryptographic implementations, protocols, and standards.

Torben Hansen

Torben is a cryptographer on the AWS Cryptography team. He is focused on developing and deploying cryptographic libraries. He also contributes to the design and analysis of cryptographic solutions across AWS.

Alex Volanis

Alex is a Software Development Engineer at AWS with a background in distributed systems, cryptography, authentication, and build tools. He currently works with the AWS Transfer Family team to provide scalable, secure, and high-performing data transfer solutions for internal and external customers. He is a passionate coder and problem solver, and occasionally a pretty good skier.

Gerardo Ravago

Gerardo is a Senior Software Development Engineer in the AWS Cryptography organization, where he contributes to post-quantum cryptography and the Amazon Corretto Crypto Provider. In prior AWS roles, he’s worked on Storage Gateway and DataSync. Though a software developer by day, he enjoys diving deep into food, art, culture, and history through travel on his off days.

AI-Generated Steganography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/ai-generated-steganography.html

New research suggests that AIs can produce perfectly secure steganographic images:

Abstract: Steganography is the practice of encoding secret information into innocuous content in such a manner that an adversarial third party would not realize that there is hidden meaning. While this problem has classically been studied in security literature, recent advances in generative models have led to a shared interest among security and machine learning researchers in developing scalable steganography techniques. In this work, we show that a steganography procedure is perfectly secure under Cachin (1998)’s information theoretic-model of steganography if and only if it is induced by a coupling. Furthermore, we show that, among perfectly secure procedures, a procedure is maximally efficient if and only if it is induced by a minimum entropy coupling. These insights yield what are, to the best of our knowledge, the first steganography algorithms to achieve perfect security guarantees with non-trivial efficiency; additionally, these algorithms are highly scalable. To provide empirical validation, we compare a minimum entropy coupling-based approach to three modern baselines—arithmetic coding, Meteor, and adaptive dynamic grouping—using GPT-2, WaveRNN, and Image Transformer as communication channels. We find that the minimum entropy coupling-based approach achieves superior encoding efficiency, despite its stronger security constraints. In aggregate, these results suggest that it may be natural to view information-theoretic steganography through the lens of minimum entropy coupling.

News article.

EDITED TO ADD (6/13): Comments.

AWS Security Profile: Matthew Campagna, Senior Principal, Security Engineering, AWS Cryptography

Post Syndicated from Roger Park original https://aws.amazon.com/blogs/security/security-profile-matthew-campagna-aws-cryptography/

In the AWS Security Profile series, we interview Amazon Web Services (AWS) thought leaders who help keep our customers safe and secure. This interview features Matt Campagna, Senior Principal, Security Engineering, AWS Cryptography, and re:Inforce 2023 session speaker, who shares thoughts on data protection, cloud security, post-quantum cryptography, and more. Matthew was first profiled on the AWS Security Blog in 2019. This is part 1 of 3 in a series of interviews with our AWS Cryptography team.


What do you do in your current role and how long have you been at AWS?

I started at Amazon in 2013 as the first cryptographer at AWS. Today, my focus is on the cryptographic security of our customers’ data. I work across AWS to make sure that our cryptographic engineering meets our most sensitive customer needs. I lead our migration to quantum-resistant cryptography, and help make privacy-preserving cryptography techniques part of our security model.

How did you get started in the data protection and cryptography space? What about it piqued your interest?

I first learned about public-key cryptography (for example, RSA) during a math lesson about group theory. I found the mathematics intriguing and the idea of sending secret messages using only a public value astounding. My undergraduate and graduate education focused on group theory, and I started my career at the National Security Agency (NSA) designing and analyzing cryptologics. But what interests me most about cryptography is its ability to enable business by reducing risks. I look at cryptography as a financial instrument that affords new business cases, like e-commerce, digital currency, and secure collaboration. What enables Amazon to deliver for our customers is rooted in cryptography; our business exists because cryptography enables trust and confidentiality across the internet. I find this the most intriguing aspect of cryptography.

AWS has invested in the migration to post-quantum cryptography by contributing to post-quantum key agreement and post-quantum signature schemes to protect the confidentiality, integrity, and authenticity of customer data. What should customers do to prepare for post-quantum cryptography?

Our focus at AWS is to help ensure that customers can migrate to post-quantum cryptography as fast as prudently possible. This work started with inventorying our dependencies on algorithms that aren’t known to be quantum-resistant, like integer-factorization-based cryptography, and discrete-log-based cryptography, like ECC. Customers can rely on AWS to assist with transitioning to post-quantum cryptography for their cloud computing needs.

We recommend customers begin inventorying their dependencies on algorithms that aren’t quantum-resistant, and consider developing a migration plan, to understand if they can migrate directly to new post-quantum algorithms or if they should re-architect them. For the systems that are provided by a technology provider, customers should ask what their strategy is for post-quantum cryptography migration.

AWS offers post-quantum TLS endpoints in some security services. Can you tell us about these endpoints and how customers can use them?

Our open source TLS implementation, s2n-TLS, includes post-quantum hybrid key exchange (PQHKEX) in its mainline, and it’s deployed everywhere that s2n-TLS is deployed. AWS Key Management Service, AWS Secrets Manager, and AWS Certificate Manager have enabled PQHKEX cipher suites in our commercial AWS Regions. Today, customers can use the AWS SDK for Java 2.0 to enable PQHKEX on their connections to AWS; on the services that also have it enabled, they will negotiate a post-quantum key exchange method. As we enable these cipher suites on additional services, customers will also be able to connect to those services using PQHKEX.

You are a frequent contributor to the Amazon Science Blog. What were some of your recent posts about?

In 2022, we published a post on preparing for post-quantum cryptography, which provides general information on the broader industry development and deployment of post-quantum cryptography. The post links to a number of additional resources to help customers understand post-quantum cryptography. The AWS Post-Quantum Cryptography page and the Science Blog are great places to start learning about post-quantum cryptography.

We also published a post highlighting the security of post-quantum hybrid key exchange. Amazon believes in evidencing the cryptographic security of the solutions that we vend. We are actively participating in cryptographic research to validate the security that we provide in our services and tools.

What’s been the most dramatic change you’ve seen in the data protection and post-quantum cryptography landscape since we talked to you in 2019?

Since 2019, there have been two significant advances in the development of post-quantum cryptography.

First, the National Institute of Standards and Technology (NIST) announced their selection of PQC algorithms for standardization. NIST expects to finish the standardization of a post-quantum key encapsulation mechanism (Kyber) and digital signature scheme (Dilithium) by 2024 as part of the Federal Information Processing Standard (FIPS). NIST will also work on standardization of two additional signature standards (FALCON and SPHINCS+), and continue to consider future standardization of the key encapsulation mechanisms BIKE, HQC, and Classical McEliece.

Second, the NSA announced their Commercial National Security Algorithm (CNSA) Suite 2.0, which includes their timelines for National Security Systems (NSS) to migrate to post-quantum algorithms. The NSA will begin preferring post-quantum solutions in 2025 and expects that systems will have completed migration by 2033. Although this timeline might seem far away, it’s an aggressive strategy. Experience shows that it can take 20 years to develop and deploy new high-assurance cryptographic algorithms. If technology providers are not already planning to migrate their systems and services, they will be challenged to meet this timeline.

What makes cryptography exciting to you?

Cryptography is a dynamic area of research. In addition to the business applications, I enjoy the mathematics of cryptography. The state-of-the-art is constantly progressing in terms of new capabilities that cryptography can enable, and the potential risks to existing cryptographic primitives. This plays out in the public sphere of cryptographic research across the globe. These advancements are made public and are accessible for companies like AWS to innovate on behalf of our customers, and protect our systems in advance of the development of new challenges to our existing crypto algorithms. This is happening now as we monitor the advancements of quantum computing against our ability to define and deploy new high-assurance quantum-resistant algorithms. For me, it doesn’t get more exciting than this.

Where do you see the cryptography and post-quantum cryptography space heading to in the future?

While NIST transitions from their selection process to standardization, the broader cryptographic community will be more focused on validating the cryptographic assurances of these proposed schemes for standardization. This is a critical part of the process. I’m optimistic that we will enter 2025 with new cryptographic standards to deploy.

There is a lot of additional cryptographic research and engineering ahead of us. Applying these new primitives to the cryptographic applications that use classical asymmetric schemes still needs to be done. Some of this work is happening in parallel, like in the IETF TLS working group, and in the ETSI Quantum-Safe Cryptography Technical Committee. The next five years should see the adoption of PQHKEX in protocols like TLS, SSH, and IKEv2 and certification of new FIPS hardware security modules (HSMs) for establishing new post-quantum, long-lived roots of trust for code-signing and entity authentication.

I expect that the selected primitives for standardization will also be used to develop novel uses in fields like secure multi-party communication, privacy preserving machine learning, and cryptographic computing.

With AWS re:Inforce 2023 around the corner, what will your session focus on? What do you hope attendees will take away from your session?

Session DAP302 – “Post-quantum cryptography migration strategy for cloud services” is about the challenge quantum computers pose to currently used public-key cryptographic algorithms and how the industry is responding. Post-quantum cryptography (PQC) offers a solution to this challenge, providing security to help protect against quantum computer cybersecurity events. We outline current efforts in PQC standardization and migration strategies. We want our customers to leave with a better understanding of the importance of PQC and the steps required to migrate to it in a cloud environment.

Is there something you wish customers would ask you about more often?

The question I am most interested in hearing from our customers is, “when will you have a solution to my problem?” If customers have a need for a novel cryptographic solution, I’m eager to try to solve that with them.

How about outside of work, any hobbies?

My main hobbies outside of work are biking and running. I wish I was as consistent attending to my hobbies as I am to my work desk. I am happier being able to run every day for a constant speed and distance as opposed to running faster or further tomorrow or next week. Last year I was fortunate enough to do the Cycle Oregon ride. I had registered for it twice before without being able to find the time to do it.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Roger Park

Roger is a Senior Security Content Specialist at AWS Security focusing on data protection. He has worked in cybersecurity for almost ten years as a writer and content producer. In his spare time, he enjoys trying new cuisines, gardening, and collecting records.

Matthew Campagna

Matthew is a Sr. Principal Engineer in the Amazon Web Services Cryptography Group. He manages the design and review of cryptographic solutions across AWS. He is an affiliate of the Institute for Quantum Computing at the University of Waterloo, a member of the ETSI Security Algorithms Group Experts (SAGE), and ETSI TC CYBER’s Quantum Safe Cryptography group. Previously, Matthew led the Certicom Research group at BlackBerry, managing cryptographic research, standards, and IP, and participated in various standards organizations, including ANSI, ZigBee, SECG, ETSI’s SAGE, and the 3GPP-SA3 working group. He holds a Ph.D. in mathematics (group theory) from Wesleyan University and a bachelor’s degree in mathematics from Fordham University.

AWS Security Profile – Cryptography Edition: Valerie Lambert, Senior Software Development Engineer

Post Syndicated from Roger Park original https://aws.amazon.com/blogs/security/aws-security-profile-cryptography-edition-valerie-lambert-senior-software-development-engineer/

In the AWS Security Profile series, we interview Amazon Web Services (AWS) experts who help keep our customers safe and secure. This interview features Valerie Lambert, Senior Software Development Engineer, Crypto Tools, and upcoming AWS re:Inforce 2023 speaker, who shares thoughts on data protection, cloud security, cryptography tools, and more.


What do you do in your current role and how long have you been at AWS?
I’m a Senior Software Development Engineer on the AWS Crypto Tools team in AWS Cryptography. My team focuses on building open source, client-side encryption solutions, such as the AWS Encryption SDK. I’ve been working in this space for the past four years.

How did you get started in cryptography? What about it piqued your interest?
When I started on this team back in 2019, I knew very little about the specifics of cryptography. I only knew its importance for securing our customers’ data and that security was our top priority at AWS. As a developer, I’ve always taken security and data protection very seriously, so when I learned about this particular team from one of my colleagues, I was immediately interested and wanted to learn more. It also helped that I’m a very math-oriented person. I find this domain endlessly interesting, and love that I have the opportunity to work with some truly amazing cryptography experts.

Why do cryptography tools matter today?
Customers need their data to be secured, and builders need to have tools they can rely on to help provide that security. It’s well known that no one should cobble together their own encryption scheme. However, even if you use well-vetted, industry-standard cryptographic primitives, there are still many considerations when applying those primitives correctly. By using tools that are simple to use and hard to misuse, builders can be confident in protecting their customers’ most sensitive data, without cryptographic expertise required.

What’s been the most dramatic change you’ve seen in the data protection and cryptography space?
In the past few years, I’ve seen more and more formal verification used to help prove various properties about complex systems, as well as build confidence in the correctness of our libraries. In particular, the AWS Crypto Tools team is using Dafny, a formal verification-aware programming language, to implement the business logic for some of our libraries. Given the high bar for correctness of cryptographic libraries, having formal verification as an additional tool in the toolbox has been invaluable. I look forward to how these tools mature in the next couple years.

You are speaking in Anaheim June 13-14 at AWS re:Inforce 2023 — what will your session focus on?
Our team has put together a workshop (DAP373) that will focus on strategies to use client-side encryption with Amazon DynamoDB, specifically focusing on solutions for effectively searching on data that has been encrypted on the client side. I hope that attendees will see that, with a bit of forethought put into their data access patterns, they can still protect their data on the client side.

Where do you see the cryptography tools space heading in the future?
More and more customers have been coming to us with use cases that involve client-side encryption with different database technologies. Although my team currently vends an out-of-the-box solution for DynamoDB, customers working with other database technologies have to build their own solutions to help keep their data safe. There are many, many considerations that come with encrypting data on the client side for use in a database, and it’s very expensive for customers to design, build, and maintain these solutions. The AWS Crypto Tools team is actively investigating this space—both how we can expand the usability of client-side encrypted data in DynamoDB, and how to bring our tools to more database technologies.

Is there something you wish customers would ask you about more often?
Customers shouldn’t need to understand the cryptographic details that underpin the security properties that our tools provide to protect their end users’ data. However, I love when our customers are curious and ask questions and are themselves interested in the nitty-gritty details of our solutions.

How about outside of work, any hobbies?
A couple years ago, I picked up aerial circus arts as a hobby. I’m still not very good, but it’s a lot of fun to play around on silks and trapeze. And it’s great exercise!

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Roger Park

Roger is a Senior Security Content Specialist at AWS Security focusing on data protection. He has worked in cybersecurity for almost ten years as a writer and content producer. In his spare time, he enjoys trying new cuisines, gardening, and collecting records.

Valerie Lambert

Valerie is a Senior Software Development Engineer at Amazon Web Services on the Crypto Tools team. She focuses on the security and usability of high-level cryptographic libraries. Outside of work, she enjoys drawing, hiking, and finding new indie video games to play.

New eBook: 5 Keys to Secure Enterprise Messaging

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/new-ebook-5-keys-to-secure-enterprise-messaging/

AWS is excited to announce a new eBook, 5 Keys to Secure Enterprise Messaging. The eBook includes best practices for addressing the security and compliance risks associated with messaging apps.

An estimated 3.09 billion mobile phone users access messaging apps to communicate, and this figure is projected to grow to 3.51 billion users in 2025.

Legal and regulatory requirements for data protection, privacy, and data retention have made protecting business communications a priority for organizations across the globe. Although consumer messaging apps are convenient and support real-time communication with colleagues, customers, and partners, they often lack the robust security and administrative controls many businesses require.

The eBook details five keys to secure enterprise messaging that balance people, process, and technology.

We encourage you to read the eBook, and learn about:

  • Establishing messaging policies and guidelines that are effective for your workforce
  • Training employees to use messaging apps in a way that doesn’t increase organizational risk
  • Building a security-first culture
  • Using true end-to-end encryption (E2EE) to secure communications
  • Retaining data to help meet requirements, without exposing it to outside parties

Download 5 Keys to Secure Enterprise Messaging.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.