The following is background on ransomware, CIS, and the initiatives that led to the publication of this new blueprint.
The Ransomware Task Force
In April 2021, the Institute for Security and Technology (IST) launched the Ransomware Task Force (RTF), which has the mission of uniting key stakeholders across industry, government, and civil society to create new solutions, break down silos, and find effective new methods of countering the ransomware threat. The RTF has since published several progress reports with specific recommendations, including the development of the RTF Blueprint for Ransomware Defense, which provides a framework of practical steps to mitigate, respond to, and recover from ransomware. AWS is a member of the RTF, and we have taken action to create our own AWS Blueprint for Ransomware Defense, which maps actionable and foundational security controls to AWS services and features that customers can use to implement those controls. The AWS Blueprint for Ransomware Defense is based on the CIS Controls framework.
Center for Internet Security
The Center for Internet Security (CIS) is a community-driven nonprofit, globally recognized for establishing best practices for securing IT systems and data. To help establish foundational defense mechanisms, a subset of the CIS Critical Security Controls (CIS Controls) has been identified as an important first step in implementing a robust program to prevent, respond to, and recover from ransomware events. This list of controls was established to provide safeguards against the most impactful and well-known internet security issues. The controls are further organized into three implementation groups (IGs) to help guide adoption. IG1, considered “essential cyber hygiene,” provides foundational safeguards. IG2 builds on IG1 by including the controls in IG1 plus a number of additional considerations. Finally, IG3 includes the controls in IG1 and IG2, with an additional layer of controls that protect against more sophisticated security issues.
CIS recommends that organizations use the CIS IG1 controls as basic preventative steps against ransomware events. We’ve produced a mapping of AWS services that can help you implement aspects of these controls in your AWS environment. Ransomware is a complex event, and the best course of action to mitigate risk is to apply a thoughtful strategy of defense in depth. The mitigations and controls outlined in this mapping document are general security best practices, not an exhaustive list.
Because data is often vital to the operation of mission-critical services, ransomware can severely disrupt business processes and applications that depend on this data. For this reason, many organizations are looking for effective security controls that will improve their security posture against these types of events. We hope you find the information in the AWS Blueprint for Ransomware Defense helpful and incorporate it as a tool to provide additional layers of security to help keep your data safe.
Let us know if you have any feedback through the AWS Security Contact Us page. Please reach out if there is anything we can do to add to the usefulness of the blueprint or if you have any additional questions on security and compliance. You can find more information about ransomware and how to protect yourself on the IST website.
Amazon Web Services (AWS) has re-published the whitepaper Architecting for PCI DSS Scoping and Segmentation on AWS to provide guidance on how to properly define the scope of your Payment Card Industry (PCI) Data Security Standard (DSS) workloads that are running in the AWS Cloud. The whitepaper has been refreshed to include updated AWS best practices and technologies, and updates that are applicable to the new PCI DSS v4.0 requirements. The whitepaper looks at how to define segmentation boundaries between your in-scope and out-of-scope resources by using cloud-based AWS services.
The whitepaper is intended for engineers and solution builders, but it also serves as a guide for Qualified Security Assessors (QSAs) and internal security assessors (ISAs) to better understand the different segmentation controls that are available within AWS products and services, along with associated scoping considerations.
Compared to on-premises environments, software-defined networking on AWS transforms the scoping process for applications by providing additional segmentation controls beyond network segmentation. Thoughtful design of your applications and selection of security-impacting services for implementing your required controls can reduce the number of systems and services in your cardholder data environment (CDE).
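To make the idea of cloud-native segmentation concrete, here is a minimal sketch, using boto3, of a security group that allows only the in-scope application tier to reach a CDE database port. The VPC ID, security group IDs, and port are hypothetical placeholders; this is illustrative only and not guidance taken from the whitepaper itself.

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"    # placeholder: VPC hosting the CDE
APP_SG_ID = "sg-0aaaaaaaaaaaaaaaa"  # placeholder: in-scope application tier security group

# Security group for the cardholder data environment (CDE) database tier
cde_sg = ec2.create_security_group(
    GroupName="cde-database-tier",
    Description="Restrict CDE database access to the in-scope application tier",
    VpcId=VPC_ID,
)

# Allow only the application tier to reach the database port; no other ingress is defined
ec2.authorize_security_group_ingress(
    GroupId=cde_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": APP_SG_ID}],
    }],
)
```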
In the AWS Security Profile series, we interview Amazon Web Services (AWS) thought leaders who help keep our customers safe and secure. This interview features Ritesh Desai, General Manager, AWS Secrets Manager, and re:Inforce 2023 session speaker, who shares thoughts on data protection, cloud security, secrets management, and more.
What do you do in your current role and how long have you been at AWS?
I’ve been in the tech industry for more than 20 years and joined AWS about three years ago. Currently, I lead our Secrets Management organization, which includes the AWS Secrets Manager service.
How did you get started in the data protection and secrets management space? What about it piqued your interest?
I’ve always been excited at the prospect of solving complex customer problems with simple technical solutions. Working across multiple small to large organizations in the past, I’ve seen similar challenges with secret sprawl and lack of auditing and monitoring tools. Centralized secrets management is a challenge for customers. As organizations evolve from start-up to enterprise level, they can end up with multiple solutions across organizational units to manage their secrets.
Being part of the Secrets Manager team gives me the opportunity to learn about our customers’ unique needs and help them protect access to their most sensitive digital assets in the cloud, at scale.
Why does secrets management matter to customers today?
Customers use secrets like database passwords and API keys to protect their most sensitive data, so it’s extremely important for them to invest in a centralized secrets management solution. Through secrets management, customers can securely store, retrieve, rotate, and audit secrets.
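As a minimal illustration of that store-and-retrieve workflow, the following boto3 sketch creates a secret and reads it back at runtime. The secret name, Region, and credential values are placeholders, not a recommendation for any particular naming scheme.

```python
import json
import boto3

# Placeholder Region and secret name for illustration only
client = boto3.client("secretsmanager", region_name="us-east-1")

# Store a database credential as a secret (one-time setup)
client.create_secret(
    Name="prod/app/db-credentials",
    SecretString=json.dumps({"username": "app_user", "password": "example-only"}),
)

# Retrieve the secret at runtime instead of hard-coding credentials
response = client.get_secret_value(SecretId="prod/app/db-credentials")
credentials = json.loads(response["SecretString"])
```

Rotation and auditing are configured separately, for example with a rotation Lambda function and AWS CloudTrail.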
What’s been the most dramatic change you’ve seen in the data protection and secrets management space?
Secrets management is becoming increasingly important for customers, but customers now have to deal with complex environments that include single cloud providers like AWS, multi-cloud setups, hybrid (cloud and on-premises) environments, and on-premises-only deployments.
Customers tell us that they want centralized secrets management solutions that meet their expectations across these environments. They have two distinct goals here. First, they want a central secrets management tool or service to manage their secrets. Second, they want their secrets to be available closer to the applications where they run. IAM Roles Anywhere provides a secure way for on-premises servers to obtain temporary AWS credentials and removes the need to create and manage long-term AWS credentials. Now, customers can use IAM Roles Anywhere to access their Secrets Manager secrets from workloads running outside of AWS. Secrets Manager also launched a program that lets customers who manage secrets in third-party secrets management solutions replicate those secrets to Secrets Manager for their AWS workloads. We’re continuing to invest in these areas to make it simpler for customers to manage their secrets in their tools of choice, while providing access to their secrets closer to where their applications run.
With AWS re:Inforce 2023 around the corner, what will your session focus on? What do you hope attendees will take away from your session?
I’m speaking in a session called “Using AWS data protection services for innovation and automation” (DAP305) alongside one of our senior security specialist solutions architects on the topic of secrets management at scale. In the session, we’ll walk through a sample customer use case that highlights how to use data protection services like AWS Key Management Service (AWS KMS), AWS Private Certificate Authority (AWS Private CA), and Secrets Manager to help build securely and help meet organizational security and compliance expectations. Attendees will walk away with a clear picture of the services that AWS offers to protect sensitive data, and how they can use these services together to protect secrets at scale.
Where do you see the secrets management space heading in the future?
Traditionally, secrets management was addressed after development, rather than being part of the design and development process. This placement created an inherent clash between development teams who wanted to put the application in the hands of end users, and the security admins who wanted to verify that the application met security expectations. This resulted in longer timelines to get to market. Involving security only after development is complete is simply too late. Security should enable business, not restrict it.
Organizations are slowly adopting the culture that “Security is everyone’s responsibility.” I expect more and more organizations will take the step to “shift-left” and embed security early in the development lifecycle. In the near future, I expect to see organizations prioritize the automation of security capabilities in the development process to help detect, remediate, and eliminate potential risks by taking security out of human hands.
Is there something you wish customers would ask you about more often?
I’m always happy to talk to customers to help them think through how to incorporate secure-by-design in their planning process. There are many situations where decisions could end up being expensive to reverse. AWS has a lot of experience working across a multitude of use cases for customers as they adopt secrets management solutions. I’d love to talk more to customers early in their cloud adoption journey, about the best practices that they should adopt and potential pitfalls to avoid, when they make decisions about secrets management and data protection.
How about outside of work—any hobbies?
I’m an avid outdoors person, and living in Seattle has given me and my family the opportunity to trek and hike through the beautiful landscapes of the Pacific Northwest. I’ve also been a consistent Tough Mudder-er for the last 5 years. The other thing that I spend my time on is working as an amateur actor for a friend’s nonprofit theater production, helping in any way I can.
Today we are thrilled to announce the general availability of Amazon Security Lake, first announced as a preview release at re:Invent 2022. Security Lake centralizes security data from Amazon Web Services (AWS) environments, software as a service (SaaS) providers, on-premises sources, and other cloud sources into a purpose-built data lake that is stored in your AWS account. With Open Cybersecurity Schema Framework (OCSF) support, the service normalizes and combines security data from AWS and a broad range of other security data sources. This gives your analysts and security engineers broad visibility to investigate and respond to security events, which can facilitate timely responses and help improve your security across multicloud and hybrid environments.
Figure 1 shows how Security Lake works, step by step. In this post, we discuss these steps, highlight some of the most popular use cases for Security Lake, and share the latest enhancements and updates that we have made since the preview launch.
Figure 1: How Security Lake works
Target use cases
In this section, we showcase some of the use cases that customers have found to be most valuable while the service was in preview.
Facilitate your security investigations with elevated visibility
Amazon Security Lake helps to streamline security investigations by aggregating, normalizing, and optimizing data storage in a single security data lake. Security Lake automatically normalizes AWS logs and security findings to the OCSF schema. This includes AWS CloudTrail management events, Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, Amazon Route 53 Resolver query logs, and AWS Security Hub findings from AWS security services, including Amazon GuardDuty, Amazon Inspector, and AWS IAM Access Analyzer, as well as security findings from over 50 partner solutions. With security-related logs and findings in a centralized location and in the same format, Security Operations teams can streamline their processes and devote more time to investigating security issues, rather than spending valuable time collecting and normalizing logs into a specific format.
Figure 2 shows the Security Lake activation page, which presents users with options to enable log sources, AWS Regions, and accounts.
Figure 2: Security Lake activation page with options to enable log sources, Regions, and accounts
Figure 3 shows another section of the Security Lake activation page, which presents users with options to set rollup Regions and storage classes.
Figure 3: Security Lake activation page with options to select a rollup Region and set storage classes
Simplify your compliance monitoring and reporting
With Security Lake, customers can centralize security data into one or more rollup Regions, which can help teams to simplify their regional compliance and reporting obligations. Teams often face challenges when monitoring for compliance across multiple log sources, Regions, and accounts. By using Security Lake to collect and centralize this evidence, security teams can significantly reduce the time spent on log discovery and allocate more time towards compliance monitoring and reporting.
Analyze multiple years of security data quickly
Security Lake offers integration with third-party security services such as security information and event management (SIEM) and extended detection and response (XDR) tools, as well as popular data analytics services like Amazon Athena and Amazon OpenSearch Service, to quickly analyze petabytes of data. This enables security teams to gain deep insights into their security data and act quickly to help protect their organization. Security Lake also helps enforce least-privilege access for teams across organizations by centralizing data and applying access policies that are scoped to the required subscribers and sources. Data custodians can use the built-in features to create and enforce granular access controls, such as restricting access to the data in the security lake to only those who require it.
Figure 4 depicts the process of creating a data access subscriber within Security Lake.
Figure 4: Creating a data access subscriber in Security Lake
Unify security data management across hybrid environments
The centralized data repository in Security Lake provides a comprehensive view of security data across hybrid and multicloud environments, helping security teams to better understand and respond to threats. You can use Security Lake to store security-related logs and data from various sources, including cloud-based and on-premises systems, making it simpler to collect and analyze security data. Additionally, by using automation and machine learning solutions, security teams can help identify anomalies and potential security risks more efficiently. This can ultimately lead to better risk management and enhance the overall security posture for the organization. Figure 5 illustrates the process of querying AWS CloudTrail and Microsoft Azure audit logs simultaneously by using Amazon Athena.
Figure 5: Querying AWS CloudTrail and Microsoft Azure audit logs together in Amazon Athena
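If you want to explore the normalized data yourself, a query along the following lines can be submitted through Athena. The Glue database and table names shown are assumptions based on typical Security Lake naming; check the names Security Lake actually created in your account, and point the output location at a results bucket you own.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder database and table names -- verify them in your AWS Glue Data Catalog
query = """
SELECT *
FROM amazon_security_lake_glue_db_us_east_1.amazon_security_lake_table_us_east_1_cloud_trail_mgmt_1_0
LIMIT 25
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},
    ResultConfiguration={"OutputLocation": "s3://YOUR-ATHENA-RESULTS-BUCKET/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution / get_query_results to read rows
```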
Updates since preview launch
Security Lake automatically normalizes logs and events from natively supported AWS services to the OCSF schema. With the general availability release, Security Lake now supports the latest version of OCSF, which is version 1 rc2. CloudTrail management events are now normalized into three distinct OCSF event classes: Authentication, Account Change, and API Activity.
We made various improvements to resource names and schema mapping to enhance the usability of logs. Onboarding is made simpler with automated AWS Identity and Access Management (IAM) role creation from the console. Additionally, you have the flexibility to collect CloudTrail sources independently including management events, Amazon Simple Storage Service (Amazon S3) data events, and AWS Lambda events.
To enhance query performance, we made a transition from hourly to daily time partitioning in Amazon S3, resulting in faster and more efficient data retrieval. Also, we added Amazon CloudWatch metrics to enable proactive monitoring of your log ingestion process to facilitate the identification of collection gaps or surges.
New Security Lake account holders are eligible for a 15-day free trial in supported Regions. Security Lake is now generally available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), and South America (São Paulo).
The AWS Professional Services organization is a global team of experts that can help customers realize their desired business outcomes when using AWS. Our teams of data architects and security engineers engage with customer Security, IT, and business leaders to develop enterprise solutions. We follow current best practices to support customers in their journey to integrate data into Security Lake. We integrate prebuilt data transformations, visualizations, and AI/machine learning (ML) workflows that help Security Operations teams rapidly realize value. If you are interested in learning more, reach out to your AWS Professional Services account representative.
Summary
We invite you to explore the benefits of using Amazon Security Lake by taking advantage of our 15-day free trial and providing your feedback on your experiences, use cases, and solutions. We have several resources to help you get started and build your first data lake, including comprehensive documentation, demo videos, and webinars. By giving Security Lake a try, you can experience firsthand how it helps you centralize, normalize, and optimize your security data, and ultimately streamline your organization’s security incident detection and response across multicloud and hybrid environments.
At Amazon Web Services (AWS), we strive to continuously improve customer experience by delivering a cloud computing environment that supports the most modern security technologies. To improve the overall performance of your connections, we have already started to enable TLS version 1.3 globally across our AWS service API endpoints, and will complete this process by December 31, 2023. By using TLS 1.3, you can decrease your connection time by removing one network round trip for every connection request, and can benefit from some of the most modern and secure cryptographic cipher suites available today.
If you are using current software tools (2014 or later), including our AWS SDKs or the AWS Command Line Interface (AWS CLI), you will automatically receive the benefits of TLS 1.3 with no action required on your part. This is because AWS services will negotiate the highest TLS protocol version that your client software supports. If you want to continue using TLS 1.2, you will still have full control through your client configurations. AWS will retain support for TLS 1.2, in addition to TLS 1.3, for the foreseeable future. Meanwhile, here’s the latest information on the ongoing deprecation of TLS 1.0/1.1.
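If you want to confirm which protocol version your client negotiates with a given AWS endpoint, a quick check along these lines can help. The endpoint shown is just an example, and raising the minimum version is optional.

```python
import socket
import ssl

host = "sts.us-east-1.amazonaws.com"  # example endpoint; any AWS API endpoint works

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # optional floor; raise to TLSv1_3 to require it

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Prints the negotiated protocol, for example 'TLSv1.3' once the endpoint has it enabled
        print(tls.version())
```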
If you have any questions, start a new thread on AWS re:Post, or contact AWS Support or your technical account manager. If you have feedback about this post, submit comments in the Comments section below.
RSA Conference 2023 brought thousands of cybersecurity professionals to the Moscone Center in San Francisco, California from April 24 through 27.
The keynote lineup was eclectic, with more than 30 presentations across two stages featuring speakers ranging from renowned theoretical physicist and futurist Dr. Michio Kaku to Grammy-winning musician Chris Stapleton. Topics aligned with this year’s conference theme, “Stronger Together,” and focused on actions that can be taken by everyone, from the C-suite to those of us on the front lines of security, to strengthen collaboration, establish new best practices, and make our defenses more diverse and effective.
With over 400 sessions and 500 exhibitors discussing the latest trends and technologies, it’s impossible to recap every highlight. Now that the dust has settled and we’ve had time to reflect, here’s a glimpse of what caught our attention.
We announced three new capabilities for our Amazon GuardDuty threat detection service to help customers secure container, database, and serverless workloads. These include GuardDuty Elastic Kubernetes Service (EKS) Runtime Monitoring, GuardDuty RDS Protection for data stored in Amazon Aurora, and GuardDuty Lambda Protection for serverless applications. The new capabilities are designed to provide actionable, contextual, and timely security findings with resource-specific details.
Artificial intelligence
It was hard to find a single keynote, session, or conversation that didn’t touch on the impact of artificial intelligence (AI).
In “AI: Law, Policy and Common Sense Suggestions on How to Stay Out of Trouble,” privacy and gaming attorney Behnam Dayanim highlighted ambiguity around the definition of AI. Referencing a quote from University of Washington School of Law’s Ryan Calo, Dayanim pointed out that AI may be best described as “…a set of techniques aimed at approximating some aspect of cognition,” and should therefore be thought of differently than a discrete “thing” or industry sector.
Dayanim noted examples of skepticism around the benefits of AI. A recent Monmouth University poll, for example, found that 73% of Americans believe AI will make jobs less available and harm the economy, and a surprising 55% believe AI may one day threaten humanity’s existence.
Equally skeptical, he noted, is a joint statement made by the Federal Trade Commission (FTC) and three other federal agencies during the conference, reminding the public that their enforcement authority applies to AI. The statement takes a pessimistic view, saying that AI is “…often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” but has the potential to produce negative outcomes.
Dayanim covered existing and upcoming legal frameworks around the world that are aimed at addressing AI-related risks related to intellectual property (IP), misinformation, and bias, and how organizations can design AI governance mechanisms to promote fairness, competence, transparency, and accountability.
Many other discussions focused on the immense potential of AI to automate and improve security practices. RSA Security CEO Rohit Ghai explored the intersection of progress in AI with human identity in his keynote. “Access management and identity management are now table stakes features,” he said. In the AI era, we need an identity security solution that will secure the entire identity lifecycle—not just access. To be successful, he believes, the next generation of identity technology needs to be powered by AI, open and integrated at the data layer, and pursue a security-first approach. “Without good AI,” he said, “zero trust has zero chance.”
Mark Ryland, director at the Office of the CISO at AWS, spoke with Infosecurity about improving threat detection with generative AI.
“We’re very focused on meaningful data and minimizing false positives. And the only way to do that effectively is with machine learning (ML), so that’s been a core part of our security services,” he noted.
“Machine learning and artificial intelligence will add a critical layer of automation to cloud security. AI/ML will help augment developers’ workstreams, helping them create more reliable code and drive continuous security improvement.” — CJ Moses, CISO and VP of security engineering at AWS
The human element
Dozens of sessions focused on the human element of security, with topics ranging from the psychology of DevSecOps to the NIST Phish Scale. In “How to Create a Breach-Deterrent Culture of Cybersecurity, from Board Down,” Andrzej Cetnarski, founder, chairman, and CEO of Cyber Nation Central, and Marcus Sachs, deputy director for research at Auburn University, made a data-driven case for CEOs, boards, and business leaders to set a tone of security in their organizations, so they can address “cyber insecure behaviors that lead to social engineering” and keep up with the pace of cybercrime.
Lisa Plaggemier, executive director of the National Cybersecurity Alliance, and Jenny Brinkley, director of Amazon Security, stressed the importance of compelling security awareness training in “Engagement Through Entertainment: How To Make Security Behaviors Stick.” Education is critical to building a strong security posture, but as Plaggemier and Brinkley pointed out, we’re “living through an epidemic of boringness” in cybersecurity training.
According to a recent report, just 28% of employees say security awareness training is engaging, and only 36% say they pay full attention during such training.
Citing a United Airlines preflight safety video and Amazon’s Protect and Connect public service announcement (PSA) as examples, they emphasized the need to make emotional connections with users through humor and unexpected elements in order to create memorable training that drives behavioral change.
Plaggemier and Brinkley detailed five actionable steps for security teams to improve their awareness training:
Brainstorm with staff throughout the company (not just the security people)
Find ideas and inspiration from everywhere else (TV episodes, movies… anywhere but existing security training)
Be relatable, and include insights that are relevant to your company and teams
Start small; you don’t need a large budget to add interest to your training
Don’t let naysayers deter you — change often prompts resistance
“You’ve got to make people care. And so you’ve got to find out what their personal motivators are, and how to develop the type of content that can make them care to click through the training and…remember things as they’re walking through an office.” — Jenny Brinkley, director of Amazon Security
Cloud security
Cloud security was another popular topic. In “Architecting Security for Regulated Workloads in Hybrid Cloud,” Mark Buckwell, cloud security architect at IBM, discussed the architectural thinking practices—including zero trust—required to integrate security and compliance into regulated workloads in a hybrid cloud environment.
Maor highlighted common tactics focused on identity theft, including MFA push fatigue, phishing, business email compromise, and adversary-in-the-middle attacks. After detailing techniques that are used to establish persistence in SaaS environments and deliver ransomware, Maor emphasized the importance of forensic investigation and threat hunting to gaining the knowledge needed to reduce the impact of SaaS security incidents.
Sarah Currey, security practice manager, and Anna McAbee, senior solutions architect at AWS, provided complementary guidance in “Top 10 Ways to Evolve Cloud Native Incident Response Maturity.” Currey and McAbee highlighted best practices for addressing incident response (IR) challenges in the cloud — no matter who your provider is:
Define roles and responsibilities in your IR plan
Train staff on AWS (or your provider)
Develop cloud incident response playbooks
Develop account structure and tagging strategy
Run simulations (red team, purple team, tabletop)
Prepare access
Select and set up logs
Enable managed detection services in all available AWS Regions (see the sketch after this list)
Determine containment strategy for resource types
Develop cloud forensics capabilities
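For the step on enabling managed detection services in every available Region, a small script can take care of the repetition. This sketch uses Amazon GuardDuty as the example service and assumes no detector exists yet in each Region; Regions that are not opted in, or that already have a detector, are simply reported and skipped.

```python
import boto3

session = boto3.session.Session()

# Enable Amazon GuardDuty in every Region where the service is offered
for region in session.get_available_regions("guardduty"):
    guardduty = session.client("guardduty", region_name=region)
    try:
        detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
        print(f"{region}: enabled detector {detector_id}")
    except Exception as error:  # Region not opted in, or a detector already exists
        print(f"{region}: skipped ({error})")
```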
Speaking to BizTech, Clarke Rodgers, director of enterprise strategy at AWS, noted that tools and services such as Amazon GuardDuty and AWS Key Management Service (AWS KMS) are available to help advance security in the cloud. When organizations take advantage of these services and use partners to augment security programs, they can gain the confidence they need to take more risks, and accelerate digital transformation and product development.
Security takes a village
There are more highlights than we can mention on a variety of other topics, including post-quantum cryptography, data privacy, and diversity, equity, and inclusion. We’ve barely scratched the surface of RSA Conference 2023. If there is one key takeaway, it is that no single organization or individual can address cybersecurity challenges alone. By working together and sharing best practices as an industry, we can develop more effective security solutions and stay ahead of emerging threats.
A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.
AWS re:Inforce is back, and we can’t wait to welcome security builders to Anaheim, CA, on June 13 and 14. AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As an attendee, you will have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring AWS Security leadership. re:Inforce 2023 features content across the following six areas:
Data protection
Governance, risk, and compliance
Identity and access management
Network and infrastructure security
Threat detection and incident response
Application security
The threat detection and incident response track is designed to showcase how AWS, customers, and partners can intelligently detect potential security risks, centralize and streamline security management at scale, investigate and respond quickly to security incidents across their environment, and unlock security innovation across hybrid cloud environments.
Breakout sessions, chalk talks, and lightning talks
TDR201 | Breakout session | How Citi advanced their containment capabilities through automation
Incident response is critical for maintaining the reliability and security of AWS environments. To support the 28 AWS services in their cloud environment, Citi implemented a highly scalable cloud incident response framework specifically designed for their workloads on AWS. Using AWS Step Functions and AWS Lambda, Citi’s automated and orchestrated incident response plan follows NIST guidelines and has significantly improved its response time to security events. In this session, learn from real-world scenarios and examples on how to use AWS Step Functions and other core AWS services to effectively build and design scalable incident response solutions.
TDR202 | Breakout session | Wix’s layered security strategy to discover and protect sensitive data
Wix is a leading cloud-based development platform that empowers users to get online with a personalized, professional web presence. In this session, learn how the Wix security team layers AWS security services including Amazon Macie, AWS Security Hub, and AWS Identity and Access Management Access Analyzer to maintain continuous visibility into proper handling and usage of sensitive data. Using AWS security services, Wix can discover, classify, and protect sensitive information across terabytes of data stored on AWS and in public clouds as well as SaaS applications, while empowering hundreds of internal developers to drive innovation on the Wix platform.
TDR203 | Breakout session | Vulnerability management at scale drives enterprise transformation
Automating vulnerability management at scale can help speed up mean time to remediation and identify potential business-impacting issues sooner. In this session, explore key challenges that organizations face when approaching vulnerability management across large and complex environments, and consider the innovative solutions that AWS provides to help overcome them. Learn how customers use AWS services such as Amazon Inspector to automate vulnerability detection, streamline remediation efforts, and improve compliance posture. Whether you’re just getting started with vulnerability management or looking to optimize your existing approach, gain valuable insights and inspiration to help you drive innovation and enhance your security posture with AWS.
TDR204 | Breakout session | Continuous innovation in AWS detection and response services
Join this session to learn about the latest advancements and most recent AWS launches in detection and response. This session focuses on use cases such as automated threat detection, continual vulnerability management, continuous cloud security posture management, and unified security data management. Through these examples, gain a deeper understanding of how you can seamlessly integrate AWS services into your existing security framework to gain greater control and insight, quickly address security risks, and maintain the security of your AWS environment.
TDR205 | Breakout session | Build your security data lake with Amazon Security Lake, featuring Interpublic Group
Security teams want greater visibility into security activity across their entire organizations to proactively identify potential threats and vulnerabilities. Amazon Security Lake automatically centralizes security data from cloud, on-premises, and custom sources into a purpose-built data lake stored in your account and allows you to use industry-leading AWS and third-party analytics and ML tools to gain insights from your data and identify security risks that require immediate attention. Discover how Security Lake can help you consolidate and streamline security logging at scale and speed, and hear from an AWS customer, Interpublic Group (IPG), on their experience.
TDR209 | Breakout session | Centralizing security at scale with Security Hub & Intuit’s experience
As organizations move their workloads to the cloud, it becomes increasingly important to have a centralized view of security across their cloud resources. AWS Security Hub is a powerful tool that allows organizations to gain visibility into their security posture and compliance status across their AWS accounts and Regions. In this session, learn about Security Hub’s new capabilities that help simplify centralizing and operationalizing security. Then, hear from Intuit, a leading financial software company, as they share their experience and best practices for setting up and using Security Hub to centralize security management.
TDR210 | Breakout session | Streamline security analysis with Amazon Detective
Join us to discover how to streamline security investigations and perform root-cause analysis with Amazon Detective. Learn how to leverage the graph analysis techniques in Detective to identify related findings and resources and investigate them together to accelerate incident analysis. Also hear a customer story about their experience using Detective to analyze findings automatically ingested from Amazon GuardDuty, and walk through a sample security investigation.
TDR310 | Breakout session | Developing new findings using machine learning in Amazon GuardDuty
Amazon GuardDuty provides threat detection at scale, helping you quickly identify and remediate security issues with actionable insights and context. In this session, learn how GuardDuty continuously enhances its intelligent threat detection capabilities using purpose-built machine learning models. Discover how new findings are developed for new data sources using novel machine learning techniques and how they are rigorously evaluated. Get a behind-the-scenes look at GuardDuty findings from ideation to production, and learn how this service can help you strengthen your security posture.
TDR311 | Breakout session | Securing data and democratizing the alert landscape with an event-driven architecture
Security event monitoring is a unique challenge for businesses operating at scale and seeking to integrate detections into their existing security monitoring systems while using multiple detection tools. Learn how organizations can triage and raise relevant cloud security findings across a breadth of detection tools and provide results to downstream security teams in a serverless manner at scale. We discuss how to apply a layered security approach to evaluate the security posture of your data, protect your data from potential threats, and automate response and remediation to help with compliance requirements.
TDR231 | Chalk talk | Operationalizing security findings at scale
You enabled AWS Security Hub standards and checks across your AWS organization and in all AWS Regions. What should you do next? Should you expect zero critical and high findings? What is your ideal state? Is achieving zero findings possible? In this chalk talk, learn about a framework you can implement to triage Security Hub findings. Explore how this framework can be applied to several common critical and high findings, and take away mechanisms to prioritize and respond to security findings at scale.
TDR232 | Chalk talk | Act on security findings using Security Hub’s automation capabilities
Alert fatigue, a shortage of skilled staff, and keeping up with dynamic cloud resources are all challenges that exist when it comes to customers successfully achieving their security goals in AWS. In order to achieve their goals, customers need to act on security findings associated with cloud-based resources. In this session, learn how to automatically, or semi-automatically, act on security findings aggregated in AWS Security Hub to help you secure your organization’s cloud assets across a diverse set of accounts and Regions.
TDR233 | Chalk talk | How LLA reduces incident response time with AWS Systems Manager
Liberty Latin America (LLA) is a leading telecommunications company operating in over 20 countries across Latin America and the Caribbean. LLA offers communications and entertainment services, including video, broadband internet, telephony, and mobile services. In this chalk talk, discover how LLA implemented a security framework to detect security issues and automate incident response in more than 180 AWS accounts accessed by internal stakeholders and third-party partners using AWS Systems Manager Incident Manager, AWS Organizations, Amazon GuardDuty, and AWS Security Hub.
TDR432 | Chalk talk | Deep dive into exposed credentials and how to investigate them
In this chalk talk, sharpen your detection and investigation skills to spot and explore common security events like unauthorized access with exposed credentials. Learn how to recognize the indicators of such events, as well as logs and techniques that unauthorized users use to evade detection. The talk provides knowledge and resources to help you immediately prepare for your own security investigations.
TDR332 | Chalk talk | Speed up zero-day vulnerability response
In this chalk talk, learn how to scale vulnerability management for Amazon EC2 across multiple accounts and AWS Regions. Explore how to use Amazon Inspector, AWS Systems Manager, and AWS Security Hub to respond to zero-day vulnerabilities, and leave knowing how to plan, perform, and report on proactive and reactive remediations.
TDR333 | Chalk talk | Gaining insights from Amazon Security Lake
You’ve created a security data lake, and you’re ingesting data. Now what? How do you use that data to gain insights into what is happening within your organization or assist with investigations and incident response? Join this chalk talk to learn how analytics services and security information and event management (SIEM) solutions can connect to and use data stored within Amazon Security Lake to investigate security events and identify trends across your organization. Leave with a better understanding of how you can integrate Amazon Security Lake with other business intelligence and analytics tools to gain valuable insights from your security data and respond more effectively to security events.
TDR431 | Chalk talk | The anatomy of a ransomware event
Ransomware events can cost governments, nonprofits, and businesses billions of dollars and interrupt operations. Early detection and automated responses are important steps that can limit your organization’s exposure. In this chalk talk, examine the anatomy of a ransomware event that targets data residing in Amazon RDS and get detailed best practices for detection, response, recovery, and protection.
TDR221 | Lightning talk | Streamline security operations and improve threat detection with OCSF
Security operations centers (SOCs) face significant challenges in monitoring and analyzing security telemetry data from a diverse set of sources. This can result in a fragmented and siloed approach to security operations that makes it difficult to identify and investigate incidents. In this lightning talk, get an introduction to the Open Cybersecurity Schema Framework (OCSF) and its taxonomy constructs, and see a quick demo on how this normalized framework can help SOCs improve the efficiency and effectiveness of their security operations.
TDR222 | Lightning talk | Security monitoring for connected devices across OT, IoT, edge & cloud
With the responsibility to stay ahead of cybersecurity threats, CIOs and CISOs are increasingly tasked with managing cybersecurity risks for their connected devices including devices on the operational technology (OT) side of the company. In this lightning talk, learn how AWS makes it simpler to monitor, detect, and respond to threats across the entire threat surface, which includes OT, IoT, edge, and cloud, while protecting your security investments in existing third-party security tools.
TDR223 | Lightning talk | Bolstering incident response with AWS Wickr enterprise integrations
Every second counts during a security event. AWS Wickr provides end-to-end encrypted communications to help incident responders collaborate safely during a security event, even on a compromised network. Join this lightning talk to learn how to integrate AWS Wickr with AWS security services such as Amazon GuardDuty and AWS WAF. Learn how you can strengthen your incident response capabilities by creating an integrated workflow that incorporates GuardDuty findings into a secure, out-of-band communication channel for dedicated teams.
TDR224 | Lightning talk | Securing the future of mobility: Automotive threat modeling
Many existing automotive industry cybersecurity threat intelligence offerings lack the connected mobility insights required for today’s automotive cybersecurity threat landscape. Join this lightning talk to learn about AWS’s approach to developing an automotive industry in-vehicle, domain-specific threat intelligence solution using AWS AI/ML services that proactively collect, analyze, and deduce threat intelligence insights for use and adoption across automotive value chains.
Hands-on sessions (builders’ sessions and workshops)
TDR251 | Builders’ session | Streamline and centralize security operations with AWS Security Hub
AWS Security Hub provides you with a comprehensive view of the security state of your AWS resources by collecting security data from across AWS accounts, Regions, and services. In this builders’ session, explore best practices for using Security Hub to manage security posture, prioritize security alerts, generate insights, automate response, and enrich findings. Come away with a better understanding of how to use Security Hub features and practical tips for getting the most out of this powerful service.
TDR351 | Builders’ session | Broaden your scope: Analyze and investigate potential security issues
In this builders’ session, learn how you can more efficiently triage potential security issues with a dynamic visual representation of the relationship between security findings and associated entities such as accounts, IAM principals, IP addresses, Amazon S3 buckets, and Amazon EC2 instances. With Amazon Detective finding groups, you can group related Amazon GuardDuty findings to help reduce time spent in security investigations and in understanding the scope of a potential issue. Leave this hands-on session knowing how to quickly investigate and discover the root cause of an incident.
TDR352 | Builders’ session | How to automate containment and forensics for Amazon EC2
In this builders’ session, learn how to deploy and scale the self-service Automated Forensics Orchestrator for Amazon EC2 solution, which gives you a standardized and automated forensics orchestration workflow capability to help you respond to Amazon EC2 security events. Explore the prerequisites and ways to customize the solution to your environment.
TDR353 | Builders’ session | Detecting suspicious activity in Amazon S3
Have you ever wondered how to uncover evidence of unauthorized activity in your AWS account? In this builders’ session, join the AWS Customer Incident Response Team (CIRT) for a guided simulation of suspicious activity within an AWS account involving unauthorized data exfiltration and Amazon S3 bucket and object data deletion. Learn how to detect and respond to this malicious activity using AWS services like AWS CloudTrail, Amazon Athena, Amazon GuardDuty, Amazon CloudWatch, and nontraditional threat detection services like AWS Billing to uncover evidence of unauthorized use.
TDR354 | Builders’ session | Simulate and detect unwanted IMDS access due to SSRF
Using appropriate security controls can greatly reduce the risk of unauthorized use of web applications. In this builders’ session, find out how the server-side request forgery (SSRF) vulnerability works, how unauthorized users may try to use it, and most importantly, how to detect it and prevent it from being used to access the instance metadata service (IMDS). Also, learn some of the detection activities that the AWS Customer Incident Response Team (CIRT) performs when responding to security events of this nature.
TDR341 | Code talk | Investigating incidents with Amazon Security Lake & Jupyter notebooks
In this code talk, watch as experts live code and build an incident response playbook for your AWS environment using Jupyter notebooks, Amazon Security Lake, and Python code. Leave with a better understanding of how to investigate and respond to a security event and how to use these technologies to more effectively and quickly respond to disruptions.
TDR441 | Code talk | How to run security incident response in your Amazon EKS environment
Join this Code Talk to get both an adversary’s and a defender’s point of view as AWS experts perform live exploitation of an application running on multiple Amazon EKS clusters, invoking an alert in Amazon GuardDuty. Experts then walk through incident response procedures to detect, contain, and recover from the incident in near real-time. Gain an understanding of how to respond and recover to Amazon EKS-specific incidents as you watch the events unfold.
TDR271-R | Workshop | Chaos Kitty: Gamifying incident response with chaos engineering
When was the last time you simulated an incident? In this workshop, learn to build a sandbox environment to gamify incident response with chaos engineering. You can use this sandbox to test out detection capabilities, play with incident response runbooks, and illustrate how to integrate AWS resources with physical devices. Walk away understanding how to get started with incident response and how you can use chaos engineering principles to create mechanisms that can improve your incident response processes.
TDR371-R | Workshop | Threat detection and response on AWS
Join AWS experts for a hands-on threat detection and response workshop using Amazon GuardDuty, AWS Security Hub, and Amazon Detective. This workshop simulates security events for different types of resources and behaviors and illustrates both manual and automated responses with AWS Lambda. Dive in and learn how to improve your security posture by operationalizing threat detection and response on AWS.
TDR372-R | Workshop | Container threat detection with AWS security services
Join AWS experts for a hands-on container security workshop using AWS threat detection and response services. This workshop simulates scenarios and security events while using Amazon EKS and demonstrates how to use different AWS security services to detect and respond to events and improve your security practices. Dive in and learn how to improve your security posture when running workloads on Amazon EKS.
Browse the full re:Inforce catalog to get details on additional sessions and content at the event, including gamified learning, leadership sessions, partner sessions, and labs.
If you want to learn the latest threat detection and incident response best practices and updates, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!
At Amazon Web Services (AWS), we’re committed to providing our customers with continued assurance over the security, availability, confidentiality, and privacy of the AWS control environment.
We’re proud to deliver the Spring 2023 System and Organization Controls (SOC) 1, 2 and 3 reports, which cover October 1, 2022, to March 31, 2023, to support your confidence in AWS services. SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives.
In the past, the Privacy SOC 2 report was issued separately from the other reports. However, starting with this Spring 2023 reporting cycle, the SOC 2 report is now consolidated and covers the Security, Availability, Confidentiality, and Privacy Trust Service Criteria.
The Spring 2023 SOC reports include four additional services in scope, for a total of 158 services. See the full list on our Services in Scope by Compliance Program page.
Five additional AWS Regions have been added to the scope, for a total of 29 Regions. The following are the five additional Regions now in scope for the Spring 2023 SOC reports:
Australia: Asia Pacific (Melbourne) (ap-southeast-4)
India: Asia Pacific (Hyderabad) (ap-south-2)
Spain: Europe (Spain) (eu-south-2)
Switzerland: Europe (Zurich) (eu-central-2)
United Arab Emirates: Middle East (UAE) (me-central-1)
AWS strives to bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. If there are additional AWS services you would like to see added to the scope of our SOC reports (or other compliance programs), reach out to your AWS representatives.
As always, we value your feedback and questions. Feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.
Cyber Essentials Plus is a UK Government-backed, industry-supported certification scheme that helps organizations demonstrate their defenses against common cyber attacks. An independent third-party auditor certified by the Information Assurance for Small and Medium Enterprises (IASME) completed the audit. The scope of our Cyber Essentials Plus certificate covers the AWS Europe (London), AWS Europe (Ireland), and AWS Europe (Frankfurt) Regions.
The NHS DSPT is a self-assessment that organizations use to measure their performance against data security and information governance requirements. The UK Department of Health and Social Care sets these requirements.
When customers move to the AWS Cloud, AWS is responsible for protecting the global infrastructure that runs our services offered in the AWS Cloud. AWS customers are the data controllers for patient health and care data, and are responsible for anything they put in the cloud or connect to the cloud. For more information, see the AWS Shared Responsibility Model.
As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit a comment in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.
In this post, we’d like to introduce you to the cryptographic computing feature of AWS Clean Rooms. With AWS Clean Rooms, customers can run collaborative data-query sessions on sensitive data sets that live in different AWS accounts, and can do so without having to share, aggregate, or replicate the data. When customers also use the cryptographic computing feature, their data remains cryptographically protected even while it is being processed by an AWS Clean Rooms collaboration.
Where would AWS Clean Rooms be useful? Consider a scenario where two different insurance companies want to identify duplicate claims so they can identify potential fraud. This would be simple if they could compare their claims with each other, but they might not be able to do so due to privacy constraints.
Alternatively, consider an advertising network and a client that want to measure the effectiveness of an advertising campaign. To that end, they would like to know how many of the people who saw the campaign (exposures) went on to make a purchase from the client (purchasers). However, confidentiality concerns might prevent the advertising network from sharing their list of exposures with the client or prevent the client from sharing their list of purchasers with the advertising network.
As these examples show, there can be many situations in which different organizations want to collaborate on a joint analysis of their pooled data, but cannot share their individual datasets directly. One solution to this problem is a data clean room, which is a service trusted by a collaboration’s participants to do the following:
Hold the data of individual parties
Enforce access-control rules that collaborators specify regarding their data
Perform analyses over the pooled data
To serve customers with these needs, AWS recently launched a new data clean-room service called AWS Clean Rooms. This service provides AWS customers with a way to collaboratively analyze data (stored in other AWS services as SQL tables) without having to replicate the data, move the data outside of the AWS Cloud, or allow their collaborators to see the data itself.
Additionally, AWS Clean Rooms provides a feature that gives customers even more control over their data: cryptographic computing. This feature allows AWS Clean Rooms to operate over data that customers encrypt themselves and that the service cannot actually read. Specifically, customers can use this feature to select which portions of their data should be encrypted and to encrypt that data themselves. Collaborators can still analyze that data as if it were in the clear, even though it remains encrypted while it is being processed in AWS Clean Rooms collaborations. In this way, customers can use AWS Clean Rooms to securely collaborate on data that they might not have been able to share due to internal policies or regulations.
Cryptographic computing
Using the cryptographic computing feature of AWS Clean Rooms involves these steps:
Users create AWS Clean Rooms collaborations and set collaboration-wide encryption settings. They then invite collaborators to support the analysis process.
Outside of AWS Clean Rooms, those collaborators agree on a shared secret: a common, secret, cryptographic key.
Collaborators individually encrypt their tables outside of the AWS Cloud (typically on their own premises) using the shared secret, the collaboration ID of the intended collaboration, and the Cryptographic Computing for Clean Rooms (C3R) encryption client (which AWS provides as an open-source package). Collaborators then provide the encrypted tables to AWS Clean Rooms, just as they would have provided plaintext tables.
Collaborators continue to use AWS Clean Rooms for their data analysis. They impose access-control rules on their tables, submit SQL queries over the tables in the collaboration, and retrieve results.
These results might contain encrypted columns, and so collaborators decrypt the results by using the shared secret and the C3R encryption client.
As a result, data that enters AWS Clean Rooms in encrypted format will remain encrypted from input tables to intermediate values to result sets. AWS Clean Rooms will be unable to decrypt or read the data even while performing the queries.
Note: For those interested in the academic aspects of this process, the cryptographic computing feature of AWS Clean Rooms is based on server-aided private set intersection (PSI). Server-aided PSI allows two or more participants to submit sets of values to a server and learn which elements are found in all sets, but without (1) allowing the participants to learn anything about the other (non-shared) elements, or (2) allowing the server to learn anything about the underlying data (aside from the degrees to which the sets overlap). PSI is just one example of the field of cryptographic computing, which provides a variety of new methods by which encrypted data can be processed for various purposes and without decryption. These techniques allow our customers to use the scale and power of AWS systems on data that AWS will not be able to read. See our Cryptographic Computing webpage for more about our work in this area.
Let’s dive deeper into each new step in the process for using cryptographic computing in AWS Clean Rooms.
Key agreement. Each collaboration needs its own shared secret: a secure cryptographic secret (of at least 256 bits). Customers sometimes have a regulatory need to maintain ownership of their encryption keys. Therefore, the cryptographic computing feature supports the case where customers generate, distribute, and store their collaboration’s secret themselves. In this way, customers’ encryption keys are never stored on an AWS system.
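For illustration, here is a minimal Python sketch of generating and encoding such a shared secret. How you distribute the secret to collaborators and supply it to the C3R encryption client (for example, through an environment variable) happens outside of AWS, and the environment variable name shown here is an assumption for illustration only.

# Minimal sketch: generate a 256-bit shared secret for one collaboration.
# Distribution to collaborators and the exact way the C3R client consumes
# the key are up to you; the environment variable name below is illustrative.
import base64
import secrets

shared_secret = secrets.token_bytes(32)                  # 256 bits of randomness
encoded_secret = base64.b64encode(shared_secret).decode("ascii")

print("Share this value with collaborators over a secure channel:")
print(encoded_secret)
# Example (hypothetical): export C3R_SHARED_SECRET="<encoded_secret>"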
Encryption. AWS Clean Rooms allows table owners to control how tables are encrypted on a column-by-column basis. In particular, each column in an encrypted table will be one of three types: cleartext, sealed, or fingerprint. These types map directly to both how columns are used in queries and how they are protected with cryptography, described as follows:
Cleartext columns are not cryptographically processed at all. They are copied to encrypted tables verbatim, and can be used anywhere in a SQL query.
Sealed columns are encrypted. The encryption scheme used (AES-GCM) is randomized, meaning that encrypting the same value multiple times yields different ciphertexts each time. This helps prevent the statistical analysis of these columns, but also means that these columns cannot be used in JOIN clauses. They can be used in SELECT clauses, however, which allows them to appear in query results.
Fingerprint columns are hashed using the Hash-based Message Authentication Code (HMAC) algorithm. There is no way to decrypt these values, and therefore no reason for them to appear in the SELECT clause of a query. They can, however, be used in JOIN clauses: HMAC will map a given value to the same fingerprint every time, meaning that JOINs will be able to unify common values across different fingerprint columns. The short sketch after this list illustrates how this deterministic fingerprinting differs from the randomized encryption used for sealed columns.
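To make the distinction between sealed and fingerprint columns concrete, the following Python sketch contrasts randomized encryption (AES-GCM with a fresh nonce) with deterministic keyed hashing (HMAC). It is an illustration of the underlying ideas only, not the C3R implementation, and it uses the third-party cryptography package.

# Illustration only (not the C3R implementation): why sealed columns cannot
# be joined while fingerprint columns can.
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

enc_key = AESGCM.generate_key(bit_length=256)   # key for the "sealed" example
mac_key = os.urandom(32)                        # key for the "fingerprint" example
value = b"California"

# Sealed style: AES-GCM with a random nonce produces a different ciphertext
# every time, so equal plaintexts cannot be matched (no JOINs), but the value
# is recoverable and can appear in SELECT results.
aesgcm = AESGCM(enc_key)
nonce1, nonce2 = os.urandom(12), os.urandom(12)
ct1 = aesgcm.encrypt(nonce1, value, None)
ct2 = aesgcm.encrypt(nonce2, value, None)
print(ct1 == ct2)                               # False: randomized
print(aesgcm.decrypt(nonce1, ct1, None))        # b'California': decryptable

# Fingerprint style: HMAC maps the same value to the same tag every time, so
# equality checks (and therefore JOINs) work, but tags cannot be decrypted.
fp1 = hmac.new(mac_key, value, hashlib.sha256).digest()
fp2 = hmac.new(mac_key, value, hashlib.sha256).digest()
print(fp1 == fp2)                               # True: deterministic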
Encryption settings. This last point—that fingerprint values will always map a given plaintext value to the same fingerprint—might give pause to some readers. If this is true, won’t the encrypted table be vulnerable to statistical analysis? That is absolutely correct: it will. For this reason, users might wish to set collaboration-wide encryption settings to control these forms of analysis.
To see how statistical analysis might be a concern, imagine a table where one fingerprint column is named US_State. In this case, a simple frequency analysis will reverse-engineer the plaintext values relatively quickly: the most common fingerprint is almost certain to be “California”, followed by “Texas”, “Florida”, and so on. Also, imagine that the same table has another fingerprint column called US_City, and that a given fingerprint appears in both columns. In that case, the fingerprint in question is almost certain to be “New York”. If a row has a fingerprint in the US_City column but a NULL in the US_State column, furthermore, it’s very likely that the fingerprint is for “District of Columbia”. And finally, imagine that the table has a cleartext column called Time_Zone. In this case, values of “HST” (Hawaii standard time) or “AKST” (Alaska standard time) reveal the value in the US_State column regardless of the cryptography.
Not all datasets will be vulnerable to these kinds of statistical analysis, but some will. Only customers can determine which types of analysis may reveal their data and which may not. Because of this, the cryptographic computing feature allows the customer to decide which protections will be needed. That is, at the time of collaboration creation, the creator of the AWS Clean Rooms collaboration can configure the following collaboration-wide encryption settings:
Whether or not fingerprint columns can contain duplicate plaintext values (addressing the “California” example)
Whether or not fingerprint columns with different names should fingerprint values in the same way (addressing the “New York” example)
Whether or not NULL values in the plaintext table should be left as NULL in the encrypted table (addressing the “District of Columbia” example)
Whether or not encrypted tables should be allowed to have cleartext columns at all (addressing the time zone example)
Security is maximized when all of these options are set to “no,” but each “no” will limit the queries that C3R will be able to support. For example, the choice of whether or not encrypted tables should be allowed to have cleartext columns will determine which WHERE clauses will be supported: If cleartext columns are not supported, then the Time_Zone column must be cryptographically processed, meaning that the clause WHERE Time_Zone='EST' will not act as intended. There might be reasons to set these options to “yes” in order to enable a wider variety of queries, which we discuss in the Query behavior section later in this post.
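To summarize the trade-off, the four decisions can be pictured as a small configuration object fixed at collaboration creation time. The setting names below are hypothetical and chosen only to mirror the list above; consult the AWS Clean Rooms and C3R documentation for the actual parameter names and defaults.

# Hypothetical setting names for illustration; not the actual AWS Clean Rooms
# or C3R parameters. "False" is the most protective choice in every case.
collaboration_encryption_settings = {
    "allow_duplicate_fingerprint_values": False,                    # the "California" example
    "allow_joins_on_differently_named_fingerprint_columns": False,  # the "New York" example
    "preserve_null_values": False,                                  # the "District of Columbia" example
    "allow_cleartext_columns": False,                               # the time zone example
}
# Each False narrows the SQL that behaves as expected; for example, with
# allow_cleartext_columns=False a clause such as WHERE Time_Zone = 'EST'
# cannot match, because Time_Zone is no longer stored in the clear.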
Decryption. AWS Clean Rooms will write query results to an Amazon Simple Storage Service (Amazon S3) bucket. The recipient copies these results from the bucket to some on-premises storage and then runs the C3R encryption client. The client will find encrypted elements of the output and decrypt them. Note that the client can only decrypt elements from sealed columns. If the output contains elements from a fingerprint column, the client will warn you, but will also leave these elements untouched, as cryptographic fingerprints can’t be decrypted.
Having finished our overview, let’s return to the discussion regarding how encryption can affect the behavior of queries.
Query behavior
Implicit in the discussion so far is something worth calling out explicitly: AWS Clean Rooms runs queries over the data that is provided to it. If the data given to AWS Clean Rooms is encrypted, therefore, queries will be run on the ciphertexts and not the plaintexts. This will not affect the results returned, so long as the columns are used for their intended purposes:
Fingerprint columns are used in JOIN clauses
Sealed columns are used in SELECT clauses
(Cleartext columns can be used anywhere.) Queries might produce unexpected results, however, if the columns are used outside of their intended purposes:
Sometimes queries will fail when they would have succeeded on the plaintext. For example, ciphertexts and fingerprints will be string values, even if the original plaintext values were another type. Therefore, SUM() or AVG() calls on fingerprint or sealed columns will yield errors even if the corresponding plaintext columns were numeric.
Sometimes queries will omit results that would have been found by querying the plaintext. For example, attempting to JOIN on sealed columns will yield empty result sets: no two ciphertexts will be the same, even if they encrypt the same plaintext value. (Also, performing a JOIN on fingerprint columns with different names will exhibit the same behavior, if the collaboration-wide encryption settings specified that fingerprint columns of different names should fingerprint values differently.)
Sometimes results will include rows that would not be found by querying the plaintext. As mentioned, ciphertexts and fingerprints will be string values—base64 encodings of random-looking bytes, specifically. This means that a clause such as WHERE 'US_State' CONTAINS 'CA' will match some ciphertexts or fingerprints even when they would not match the plaintext.
To avoid these issues, fingerprint and sealed columns should only be used for their intended purposes (JOIN and SELECT clauses, respectively).
Conclusion
In this blog post, you have learned how AWS Clean Rooms can help you harness the power of AWS services to query and analyze your most sensitive data. By using cryptographic computing, you can work with collaborators to perform joint analyses over pooled data without sharing your “raw” data with each other—or with AWS. If you believe that you can benefit from cryptographic computing (in AWS Clean Rooms or elsewhere), we’d like to hear from you. Please contact us with any questions or feedback. Also, we invite you to learn more about AWS Clean Rooms (including its use of cryptographic computing). Finally, the C3R client is open source, and can be downloaded from its GitHub page.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
AWS is pleased to announce the publication of a checklist to help customers align with the requirements of the European Union’s electronic identification, authentication, and trust services (eIDAS) regulation regarding the use of electronic identities and trust services. The eIDAS regulation covers electronic identification and trust services for electronic transactions in the European single market.
This checklist is intended as a reference and supporting document to help institutions align with the requirements of eIDAS and the European Telecommunications Standards Institute (ETSI). Where applicable, under the AWS Shared Responsibility Model, this checklist provides supporting details and references in relation to AWS to assist institutions when adopting eIDAS and ETSI for their workloads on AWS services.
For the controls that AWS is fully or partially responsible for, the checklist compares the eIDAS and ETSI requirements to the following:
This checklist is valid until the current eIDAS EU regulation 910/2014, published July 23rd, 2014, ceases to be in force. The checklist is available upon request.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.
AWS re:Inforce 2023 is fast approaching, and this post can help you plan your agenda with a look at the sessions in the identity and access management track. AWS re:Inforce is a learning conference where you can learn more about cloud security, compliance, identity, and privacy. You have access to hundreds of technical and non-technical sessions, an AWS Partner expo featuring security partners with AWS Security Competencies, and keynote and leadership sessions featuring AWS Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, California, on June 13 and 14. re:Inforce 2023 features content in the following six areas:
Data Protection
Governance, Risk, and Compliance
Identity and Access Management
Network and Infrastructure Security
Threat Detection and Incident Response
Application Security
The identity and access management track will share recommended practices and learnings for identity management and governance in AWS environments. You will hear from other AWS customers about how they are building customer identity and access management (CIAM) patterns for great customer experiences and new approaches for managing standard, elevated, and privileged workforce access. You will also hear from AWS leaders about accelerating the journey to least privilege with access insights and the role of identity within a Zero Trust architecture.
This post highlights some of the identity and access management sessions that you can sign up for, including breakout sessions, chalk talks, code talks, lightning talks, builders’ sessions, and workshops. For the full catalog, see the AWS re:Inforce catalog preview.
Breakout sessions
Lecture-style presentations that cover topics at all levels and delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.
IAM201: A first-principles approach: AWS Identity and Access Management (IAM) Learning how to build effectively and securely on AWS starts with a strong working knowledge of AWS Identity and Access Management (IAM). In this session aimed at engineers who build on AWS, explore a no-jargon, first-principles approach to IAM. Learn the fundamental concepts of IAM authentication and authorization policies as well as concrete techniques that you can immediately apply to the workloads you run on AWS.
IAM301: Establishing a data perimeter on AWS, featuring USAA In this session, dive deep into the data perimeter controls that help you manage your trusted identities and their access to trusted resources from expected networks. USAA shares how they use automation to embed security and AWS Identity and Access Management (IAM) baselines to empower a self-service mindset. Learn how they use data perimeters to support decentralization without compromising on security. Also, discover how USAA uses a threat-based approach to prioritize implementation of specific data perimeters.
IAM302: Create enterprise-wide preventive guardrails, featuring Inter & Co. In this session, learn how to establish permissions guardrails within your multi-account environment with AWS Organizations and service control policies (SCPs). Explore how effective use of SCPs can help your builders innovate on AWS while maintaining a high bar on security. Learn about the strategies to incorporate SCPs at different levels within your organization. In addition, Inter & Co. share their strategies for implementing enterprise-wide guardrails at scale within their multi-account environments. Discover how they use code repositories and CI/CD pipelines to manage approvals and deployments of SCPs.
IAM303: Balance least privilege & agile development, feat. Fidelity & Merck Finding a proper balance between securing multiple AWS accounts and enabling agile development to accelerate business innovation has been key to the cloud adoption journey for AWS customers. In this session, learn how Fidelity and Merck empowered their business stakeholders to quickly develop solutions while still conforming to security standards and operating within the guardrails at scale.
IAM304: Migrating to Amazon Cognito, featuring approaches from Fandango Digital transformation of customer-facing applications often involves changes to identity and access management to help improve security and user experience. This process can benefit from fast-growing technologies and open standards and may involve migration to a modern customer identity and access management solution, such as Amazon Cognito, that offers the security and scale your business requires. There are several ways to approach migrating users to Amazon Cognito. In this session, learn about options and best practices, as well as lessons learned from Fandango’s migration to Amazon Cognito.
IAM305: Scaling access with AWS IAM Identity Center, feat. Allegiant Airlines In this session, learn how to scale assignment of permission sets to users and groups by automating federated role-based access to any AWS accounts in your organization. As a highlight of this session, hear Allegiant Airlines’ success story of how this automation has benefited Allegiant by centralizing management of federated access for their organization of more than 5,000 employees. Additionally, explore how to build this automation in your environment using infrastructure as code tools like Terraform and AWS CloudFormation using a CI/CD pipeline.
IAM306: Managing hybrid workloads with IAM Roles Anywhere, featuring Hertz A key element of using AWS Identity and Access Management (IAM) Roles Anywhere is managing how identities are assigned to your workloads. In this session, learn how you can define and manage identities for your workloads, how to use those identities to control access to an AWS resource via attribute-based access control (ABAC), and how to monitor and audit activities performed by those identities. Discover key concepts, best practices, and troubleshooting tips. Hertz describes how they used IAM Roles Anywhere to secure access to AWS services from Salesforce and how it has improved their overall security posture.
IAM307: Steps towards a Zero Trust architecture on AWS Modern workplaces have evolved beyond traditional network boundaries as they have expanded to hybrid and multi-cloud environments. Identity has taken center stage for information security teams. The need for fine-grained, identity-based authorization, flexible identity-aware networks, and the removal of unneeded pathways to data has accelerated the adoption of Zero Trust principles and architecture. In this session, learn about different architecture patterns and security mechanisms available from AWS that you can apply to secure standard, sensitive, and privileged access to your critical data and workloads.
Builders’ sessions
Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.
IAM351: Sharing resources across accounts with least-privilege permissions Are you looking to manage your resource access control permissions? Learn how you can author customer managed permissions to provide least-privilege access to your resources shared using AWS Resource Access Manager (AWS RAM). Explore how to use customer managed permissions with use cases ranging from managing incident response with AWS Systems Manager Incident Manager to enhancing your IP security posture with Amazon VPC IP Address Manager.
IAM352: Cedar policy language in action Cedar is a language for defining permissions as policies that describe who should have access to what. Amazon Verified Permissions and AWS Verified Access use Cedar to define fine-grained permissions for applications and end users. In this builders’ session, come learn by building Cedar policies for access control.
IAM355: Using passwordless authentication with Amazon Cognito and WebAuthn In recent years, passwordless authentication has been on the rise. The FIDO Alliance, a first-mover for enabling passwordless in 2009, is an open industry association whose stated mission is to develop and promote authentication standards that “help reduce the world’s over-reliance on passwords.” This builders’ session allows participants to learn about and follow the steps to implement a passwordless authentication experience on a web or mobile application using Amazon Cognito.
IAM356: AWS Identity and Access Management (IAM) policies troubleshooting In this builders’ session, walk through practical examples that can help you build, test, and troubleshoot AWS Identity and Access Management (IAM) policies. Utilize a workflow that can help you create fine-grained access policies with the help of the IAM API, the AWS Management Console, and AWS CloudTrail. Also review key concepts of IAM policy evaluation logic.
Chalk talks
Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.
IAM231: Lessons learned from AWS IAM Identity Center migrations In this chalk talk, discover best practices and tips to migrate your workforce users’ access from IAM users to AWS IAM Identity Center (successor to AWS Single Sign-On). Learn how to create preventive guardrails, gain visibility into the usage of IAM users across an organization, and apply authentication solutions for common use cases.
IAM331: Leaving IAM access keys behind: A modern path forward Static credentials have been used for a long time to secure multiple types of access, including access keys for AWS Identity and Access Management (IAM) users, command line tools, secure shell access, application API keys, and pre-shared keys for VPN access. However, best practice recommends replacing static credentials with short-term credentials. In this chalk talk, learn how to identify static access keys in your environment, quantify the risk, and then apply multiple available methods to replace them with short-term credentials. The talk also covers prescriptive guidance and best practice advice for improving your overall management of IAM access keys.
IAM332: Practical identity and access management: The basics of IAM on AWS Learn from prescriptive guidance on how to build an Identity and Access Management strategy on AWS. We provide guidance on human access versus machine access using services like IAM Identity Center. You will also learn about the different IAM policy types, where each policy type is useful, and how you should incorporate each policy type in your AWS environment. This session will walk you through what you need to know to build an effective identity and access management baseline.
IAM431: A tour of the world of IAM policy evaluation This session takes you beyond the basics of IAM policy evaluation and focuses on how policy evaluation works with advanced AWS features. Hear about how policies are evaluated alongside AWS Key Management Service (AWS KMS) key grants, Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) access points, Amazon VPC Lattice, and more. You’ll leave this session with prescriptive guidance on what to do and what to avoid when designing authorization schemes.
Code talks
Engaging, code-focused sessions with a small audience. AWS experts lead an interactive discussion featuring live coding and/or code samples as they explain the “why” behind AWS solutions.
IAM341: Cedar: Fast, safe, and fine-grained access for your applications Cedar is a new policy language that helps you write fine-grained permissions in your applications. With Cedar, you can customize authorization and you can define and enforce who can access what. This code talk explains the design of Cedar, how it was built to a high standard of assurance, and its benefits. Learn what makes Cedar ergonomic, fast, and analyzable: simple syntax for expressing common authorization use cases, policy structure that allows for scalable real-time evaluation, and comprehensive auditing based on automated reasoning. Also find out how Cedar’s implementation was made safer through formal verification and differential testing.
IAM441: Enable new Amazon Cognito use cases with OAuth2.0 flows Delegated authorization without user interaction on a consumer device and reinforced passwordless authentication for higher identity assurance are advanced authentication flows achievable with Amazon Cognito. In this code talk, you can discover new OAuth2.0 flow diagrams, code snippets, and long and short demos that offer different approaches to these authentication use cases. Gain confidence using AWS Lambda triggers with Amazon Cognito, native APIs, and OAuth2.0 endpoints to help ensure greater success in customer identity and access management strategy.
Lightning talks
Short and focused theater presentations that are dedicated to either a specific customer story, service demo, or partner offering (sponsored).
IAM221: Accelerate your business with AWS Directory Service In this lightning talk, explore AWS Directory Service for Microsoft Active Directory and discover a number of use cases that provide flexibility, empower agile application development, and integrate securely with other identity stores. Join the talk to discover how you can take advantage of this managed service and focus on what really matters to your customers.
IAM321: Move toward least privilege with IAM Access Analyzer AWS Identity and Access Management (IAM) Access Analyzer provides tools that simplify permissions management by making it easy for organizations to set, verify, and refine permissions. In this lightning talk, dive into how you can detect resources shared with an external entity across one or multiple AWS accounts with IAM Access Analyzer. Find out how you can activate and use this feature and how it integrates with AWS Security Hub.
Workshops
Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!
IAM371: Building a Customer Identity and Access Management (CIAM) solution How do your customers access your application? Get a head start on customer identity and access management (CIAM) by using Amazon Cognito. Join this workshop to learn how to build CIAM solutions on AWS using Amazon Cognito, Amazon Verified Permissions, and several other AWS services. Start from the basic building blocks of CIAM and build up to advanced user identity and access management use cases in customer-facing applications.
IAM372: Consuming AWS Resources from everywhere with IAM Roles Anywhere If your workload already lives on AWS, then there is a high chance that some temporary AWS credentials have been securely distributed to perform needed tasks. But what happens when your workload is on premises? In this workshop, learn how to use AWS Identity and Access Management (IAM) Roles Anywhere. Start from the basics and follow the steps needed to securely use IAM Roles Anywhere from applications running outside of AWS.
IAM373: Building a data perimeter to allow access to authorized users In this workshop, learn how to create a data perimeter by building controls that allow access to data only from expected network locations and by trusted identities. The workshop consists of five modules, each designed to illustrate a different AWS Identity and Access Management (IAM) principle or network control. Learn where and how to implement the appropriate controls based on different risk scenarios.
If these sessions look interesting to you, join us in Anaheim by registering for AWS re:Inforce 2023. We look forward to seeing you there!
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
In today’s evolving security threat landscape, security teams increasingly require tools to detect and track security findings to protect their organizations’ assets. One objective of cloud security posture management is to identify and address security findings in a timely and effective manner. AWS Security Hub aggregates, organizes, and prioritizes security alerts and findings from various AWS services and supported security solutions from the AWS Partner Network.
As the volume of findings increases, tracking the changes and actions taken on each finding becomes more difficult, and at the same time more important for timely and effective investigations. In this post, we will show you how to use the new Finding History feature in Security Hub to track and understand the history of a security finding.
Updates to findings occur when finding providers update certain fields, such as resource details, by using the BatchImportFindings API. You, as a user, can update certain fields, such as workflow status, in the AWS Management Console or through the BatchUpdateFindings API. Ticketing, incident management, security information and event management (SIEM), and automatic remediation solutions can also use the BatchUpdateFindings API to update findings. This new capability highlights these various changes and when they occurred so that you don’t need to investigate this yourself.
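As a simple illustration, a remediation workflow might resolve a finding and attach a note through the BatchUpdateFindings API. The following boto3 sketch assumes you already know the finding’s Id and ProductArn (the values shown are placeholders); treat it as a minimal example rather than a complete solution.

# Minimal sketch: resolve a Security Hub finding and attach a note.
# The finding identifier values below are placeholders.
import boto3

securityhub = boto3.client("securityhub")

finding_identifier = {
    "Id": "arn:aws:securityhub:us-east-1:111122223333:security-control/S3.1/finding/example",
    "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
}

securityhub.batch_update_findings(
    FindingIdentifiers=[finding_identifier],
    Workflow={"Status": "RESOLVED"},
    Note={
        "Text": "Bucket policy corrected; risk mitigated.",
        "UpdatedBy": "security-team",
    },
)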
Finding History
The new Finding History feature in Security Hub helps you understand the state of a finding by providing an immutable history of changes within the finding details. By using this feature, you can track the history of each finding, including the before and after values of the fields that were changed, who or what made the changes, and when the changes were made. This simplifies how you operate on a finding by giving you visibility into the changes made to a finding over time, alongside the rest of the finding details, which removes the need for separate tooling or additional processes. This feature is available at no additional cost in AWS Regions where Security Hub is available, and appears by default for new or updated findings. Finding History is also available through the Security Hub APIs.
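For example, a minimal boto3 sketch that retrieves the history of a single finding through the GetFindingHistory API might look like the following. The finding identifier is a placeholder, and the response fields printed here should be confirmed against the current Security Hub API reference.

# Minimal sketch: retrieve the change history of one Security Hub finding.
# The finding identifier values below are placeholders.
import boto3

securityhub = boto3.client("securityhub")

history = securityhub.get_finding_history(
    FindingIdentifier={
        "Id": "arn:aws:securityhub:us-east-1:111122223333:security-control/S3.1/finding/example",
        "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
    },
    MaxResults=25,
)

for record in history.get("Records", []):
    # Each record describes one change: when it happened, who or what made it,
    # and the before/after values of the fields that changed.
    print(record.get("UpdateTime"), record.get("UpdateSource"))
    for update in record.get("Updates", []):
        print("   ", update)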
To try out this new feature, open the Security Hub console, select a finding, and choose the History tab. There you will see a chronological list of changes that have been made to the finding. The transparency of the finding history helps you quickly assess the status of the finding, understand actions already taken, and take the necessary actions to mitigate risk. For example, upon resolving a finding, you can add a note to the finding to indicate why you resolved it. Both the resolved status and note will appear in the history.
In the following example, the finding was updated and then resolved with an explanatory note left by the person who reviewed the finding. With Finding History, you can see the previous updates and events in the finding’s History tab.
Figure 1: Finding History shows recent updates to the finding
In addition, you can still view the current state of the finding in its Details tab.
Figure 2: Finding Details shows the record of a security check or security-related detection
Conclusion
With the new Finding History feature in Security Hub, you have greater visibility into the activity and updates on each finding, allowing for more efficient investigation and response to potential security risks. Next time that you start work to investigate and respond to a security finding in Security Hub, begin by checking the finding history.
At AWS, earning and maintaining customer trust is the foundation of our business. We understand that protecting customer data is key to achieving this. We also know that trust must continue to be earned through transparency and assurances.
In November 2022, we announced the new AWS Digital Sovereignty Pledge, our commitment to offering all AWS customers the most advanced set of sovereignty controls and features available in the cloud. Two pillars of this are verifiable control over data access, and the ability to encrypt everything everywhere. We already offer a range of data protection features, accreditations, and contractual commitments that give customers control over where they locate their data, who can access it, and how it is used. Today, I’d like to update you on how we are continuing to earn your trust with verifiable control over customer data access and external control of your encryption keys.
AWS Nitro System achieves independent third-party validation
We are committed to helping our customers meet evolving sovereignty requirements and to providing greater transparency and assurances about how AWS services are designed and operated. With the AWS Nitro System, which is the foundation of the AWS compute service Amazon EC2, we designed and delivered a first-of-its-kind innovation by eliminating any mechanism for AWS personnel to access customer data on Nitro. Our removal of an operator access mechanism was unique in 2017 when we first launched the Nitro System.
As we continue to deliver on our digital sovereignty pledge of customer control over data access, I’m excited to share with you an independent report on the security design of the AWS Nitro System. We engaged NCC Group, a global cybersecurity consulting firm, to conduct an architecture review of our security claims of the Nitro System and produce a public report. This report confirms that the AWS Nitro System, by design, has no mechanism for anyone at AWS to access your data on Nitro hosts. The report evaluates the architecture of the Nitro System and our claims about operator access. It concludes that “As a matter of design, NCC Group found no gaps in the Nitro System that would compromise these security claims.” It also goes on to state, “NCC Group finds…there is no indication that a cloud service provider employee can obtain such access…to any host.” Our computing infrastructure, the Nitro System, has no operator access mechanism, and is now supported by a third-party analysis of those data controls. Read more in the NCC Group report.
New AWS Service Term
At AWS, security is our top priority. The NCC report shows the Nitro System is an exceptional computing backbone for AWS, with security at its core. The Nitro controls that prevent operator access are so fundamental to the Nitro System that we’ve added them to our AWS Service Terms, which are applicable to anyone who uses AWS.
Our AWS Service Terms now include the following on the Nitro System:
AWS personnel do not have access to Your Content on AWS Nitro System EC2 instances. There are no technical means or APIs available to AWS personnel to read, copy, extract, modify, or otherwise access Your Content on an AWS Nitro System EC2 instance or encrypted-EBS volume attached to an AWS Nitro System EC2 instance. Access to AWS Nitro System EC2 instance APIs – which enable AWS personnel to operate the system without access to Your Content – is always logged, and always requires authentication and authorization.
External control of your encryption keys with AWS KMS External Key Store
As part of our promise to continue to make the AWS Cloud sovereign-by-design, we pledged to continue to invest in an ambitious roadmap of capabilities, which includes our encryption capabilities. At re:Invent 2022, we took further steps to deliver on this roadmap of encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud by announcing the availability of the AWS Key Management Service (AWS KMS) External Key Store (XKS). This innovation supports our customers who have a regulatory need to store and use their encryption keys outside the AWS Cloud. The open source XKS specification offers customers the flexibility to adapt to different HSM deployment use cases. While AWS KMS also prevents AWS personnel from accessing customer keys, this new capability may help some customers demonstrate compliance with specific regulations or industry expectations requiring storage and use of encryption keys outside of an AWS data center for certain workloads.
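For readers who want a feel for the API surface, the following boto3 sketch outlines creating an external key store and a KMS key whose key material stays in an external key manager. The endpoint, credential, and key ID values are placeholders, and the parameter names reflect our understanding of the AWS KMS API at the time of writing; verify them against the current API reference before use.

# Rough sketch (placeholder values throughout): create an external key store
# and a KMS key backed by key material held outside AWS. Parameter names
# should be verified against the current AWS KMS API reference.
import boto3

kms = boto3.client("kms")

store = kms.create_custom_key_store(
    CustomKeyStoreName="example-external-key-store",
    CustomKeyStoreType="EXTERNAL_KEY_STORE",
    XksProxyConnectivity="PUBLIC_ENDPOINT",
    XksProxyUriEndpoint="https://xks.example.com",
    XksProxyUriPath="/example-prefix/kms/xks/v1",
    XksProxyAuthenticationCredential={
        "AccessKeyId": "EXAMPLEACCESSKEYID12",
        "RawSecretAccessKey": "example-raw-secret-access-key",
    },
)

# After the key store is connected (connect_custom_key_store), create a key
# whose cryptographic material exists only in the external key manager.
key = kms.create_key(
    Origin="EXTERNAL_KEY_STORE",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    XksKeyId="example-external-key-id",
)
print(key["KeyMetadata"]["KeyId"])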
In order to accelerate our customers’ ability to adopt XKS for regulatory purposes, we collaborated with external HSM, key management, and integration service providers that our customers trust. To date, Thales, Entrust, Fortanix, DuoKey, and HashiCorp have launched XKS implementations, and Salesforce, Atos, and T-Systems have announced that they are building integrated service offerings around XKS. In addition, many SaaS solutions offer integration with AWS KMS for key management of their encryption offerings. Customers using these solutions, such as the offerings from Databricks, MongoDB, Reltio, Slack, Snowflake, and Zoom, can now utilize keys in external key managers via XKS to secure data. This allows customers to simplify their key management strategies across AWS as well as certain SaaS solutions by providing a centralized place to manage access policies and audit key usage.
We remain committed to helping our customers meet security, privacy, and digital sovereignty requirements. We will continue to innovate sovereignty features, controls, and assurances within the global AWS Cloud and deliver them without compromise to the full power of AWS.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
In the AWS Security Profile series, I interview some of the humans who work in AWS Security and help keep our customers safe and secure. In this profile, I interviewed Tatyana Yatskevich, Principal Solutions Architect for AWS Identity.
How long have you been at AWS and what do you do in your current role?
I’ve been at AWS for about five and a half years now. I’ve had several different roles, but I’m currently part of the Identity Solutions team, which is a team of solutions architects who are embedded into the Identity and Control Service. Our team focuses on staying current with customer use cases and emerging problems in the identity space so that we can facilitate the development of new capabilities and prescriptive guidance from AWS.
To keep up with the demand in certain industries, we work with some enterprise customers that operate large cloud environments on AWS. Knowing what these customers need to do to achieve their business outcomes while operating under stringent regulatory compliance requirements helps us provide valuable input into our service and feature development process and support customers in their cloud journey in the most efficient manner.
How did you get started in security?
At the beginning of my career, I mostly just happened to work on security-related projects. I performed security and vulnerability assessments, facilitated remediation work, and managed traditional on-premises security solutions such as web proxies, firewalls, and VPNs. Through these projects, I developed an interest in the security field because of its wide reach and impact, and because it presents a lot of opportunities for growth and problem solving as new challenges arise almost daily. My roles at AWS have been a logical continuation of my security-focused career. Here, I’ve mostly been motivated by empowering security teams to become business enablers, rather than being perceived as blockers to innovation and agility.
How do you explain your job to non-technical friends and family?
I usually give an example of a service or feature that most of us interact with on a regular basis, such as a banking application. I explain that it takes a lot of engineering work to build that application from the ground up and deliver on the user experience and security. That engineering work involves the use of many different technologies that support the user sign-in process, or storage of your personal information like your social security or credit card numbers. My job is to help companies that provide these services implement the proper security controls so that your personal information is used in accordance with local laws and isn’t disclosed for unauthorized use.
In your opinion, what’s one of the coolest things happening in identity right now?
I think it’s the increased role of identity, authentication, and authorization controls in the overall security model of newly built applications. It spans from helping to ensure secure workforce mobility now that providing access to business applications from anywhere is critical to business competitiveness, to keeping Internet of Things (IoT) infrastructure protected and operated in accordance with zero trust. The realization of the power and the increasing usage of identity-specific controls to manage access to digital assets is the coolest trend in identity right now.
What are you currently working on that you’re excited about?
One of the areas that I’m highly invested in is data perimeters. A data perimeter is a set of capabilities that help customers keep their data within their organizational boundary and mitigate the risks of data exfiltration or unintended access to data. We have customers in a wide variety of industries, such as the financial sector, telecom, media and entertainment, and public sector. There are compliance and regulatory requirements that they operate under. A lot of those requirements emphasize controls that guard sensitive data from unauthorized access and prevent movement of that data to places outside of company’s control.
To help customers meet these requirements in a scalable way, we continuously invest in the development of new capabilities. I talk to some of our largest enterprise customers on a regular basis to understand their challenges in this area, and I work with service teams to introduce new capabilities to meet new requirements. I also lead efforts to extend customer-facing guidance and solutions so that customers can design and implement data perimeters on their own. And I present at AWS events to reach more customers, with the most recent being our presentation with Goldman Sachs at re:Invent 2022.
Tell me about that presentation.
I co-presented a chalk talk with Shubham Shukla, Vice President of Cloud Enablement at Goldman Sachs, called Establishing a Data Perimeter on AWS. The session gave an overview of data perimeter capabilities and showcased Goldman Sachs’ experience implementing data perimeter controls at scale in their multi-account AWS environment. What’s cool about that session, I think, is that it’s always good to present about AWS best practices and our view of how certain things should be done, but it’s extra powerful when we include a customer. This is especially true when a large enterprise customer such as Goldman Sachs shares their experience and talks about how they do certain things in practice, like mapping specific requirements to the actual implementation and talking through lessons learned and their perspective on the problem and solution. A lot of our customers are interested in learning from other customers how to build and operate enterprise security controls at scale. We did a similar presentation with Vanguard at re:Inforce 2022, and I look forward to future opportunities to showcase the awesome work being done by our customers.
What is your favorite Amazon Leadership Principle and why?
Customer Obsession. For me, the core of it is building deeper, longer lasting relationships with our customers and taking their learnings back to our business to work backwards from the actual customer needs. Building better products, helping customers meet their business goals, and having wide-reaching impact is what makes me so excited to come to work every day.
What’s the thing you’re most proud of in your career?
As part of my former role as a security consultant in the AWS Professional Services organization, I led security-related projects to either help customers migrate their workloads to AWS or perform security assessments of their existing AWS environment. Part of that role involved developing mechanisms to better engage with customers on security-related topics and help them develop their own security strategy for running workloads on AWS. That work sometimes involved challenging conversations with customers. I would explain the value of the technology that AWS provides and help customers figure out how to implement AWS services to meet both their business and security needs. I took learnings from these conversations and developed some internal assets that helped newer AWS security consultants conduct those conversations more effectively, and I mentored them throughout the process.
If you had to pick an industry outside of security, what would you want to do?
I would be in the travel industry. I absolutely love visiting new places and exploring nature. I love learning the history and culture of different regions, and trying out different cuisines. It’s something that helps me learn more about myself through new experiences and ultimately be a happier person.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
Amazon Web Services (AWS) is pleased to announce that we have achieved an AAA rating from Pinakes. The scope of this qualification covers 166 services in 25 global AWS Regions.
The Spanish banking association Centro de Cooperación Interbancaria (CCI) developed Pinakes, a rating framework intended to manage and monitor the cybersecurity controls of service providers that Spanish financial entities depend on. The requirements arise from the European Banking Authority guidelines (EBA/GL/2019/02).
Pinakes evaluates the cybersecurity levels of service providers through 1,315 requirements across 4 categories (confidentiality, integrity, availability of information, and general) and 14 domains:
Information security management program
Facility security
Third-party management
Normative compliance
Network controls
Access control
Incident management
Encryption
Secure development
Monitoring
Malware protection
Resilience
Systems operation
Staff safety
Each requirement is associated with a rating level (A+, A, B, C, D), ranging from the highest A+ (the provider has implemented the most diligent measures and controls for cybersecurity management) to the lowest D (minimum security requirements are met).
An independent third-party auditor has verified the implementation status for each section. As a result, AWS has been qualified with A ratings for Confidentiality, Integrity, and Availability, earning an overall rating of AAA.
Our Spanish financial customers can refer to the AWS Pinakes rating to confirm that the AWS control environment is appropriately designed and implemented. By receiving an AAA rating, AWS demonstrates its commitment to meeting the heightened security expectations for cloud service providers set by the CCI. The full evaluation report is available on AWS Artifact upon request. Pinakes participants who are AWS customers can contact their AWS account manager to request access to it.
As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. To learn more about our other compliance and security programs, see AWS Compliance Programs.
If you have feedback about this post, please submit comments in the Comments section below.
Want more AWS Security news? Follow us on Twitter.
A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.
AWS re:Inforce is fast approaching, and this post can help you plan your agenda. AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As a re:Inforce attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, CA, on June 13 and 14. re:Inforce 2023 features content in the following six areas:
Data Protection
Governance, Risk, and Compliance
Identity and Access Management
Network and Infrastructure Security
Threat Detection and Incident Response
Application Security
The data protection track will showcase services and tools that you can use to help achieve your data protection goals in an efficient, cost-effective, and repeatable manner. You will hear from AWS customers and partners about how they protect data in transit, at rest, and in use. Learn how experts approach data management, key management, cryptography, data security, data privacy, and encryption. This post highlights some of the data protection offerings that you can add to your agenda. To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.
“re:Inforce is a great opportunity for us to hear directly from our customers, understand their unique needs, and use customer input to define solutions that protect sensitive data. We also use this opportunity to deliver content focused on the latest security research and trends, and I am looking forward to seeing you all there. Security is everyone’s job, and at AWS, it is job zero.” — Ritesh Desai, General Manager, AWS Secrets Manager
Breakout sessions, chalk talks, and lightning talks
DAP301: Moody’s database secrets management at scale with AWS Secrets Manager Many organizations must rotate database passwords across fleets of on-premises and cloud databases to meet various regulatory standards and enforce security best practices. One-time solutions such as scripts and runbooks for password rotation can be cumbersome. Moody’s sought a custom solution that satisfies the goal of managing database passwords using well-established DevSecOps processes. In this session, Moody’s discusses how they successfully used AWS Secrets Manager and AWS Lambda, along with open-source CI/CD system Jenkins, to implement database password lifecycle management across their fleet of databases spanning nonproduction and production environments.
DAP401: Security design of the AWS Nitro System The AWS Nitro System is the underlying platform for all modern Amazon EC2 instances. In this session, learn about the inner workings of the Nitro System and discover how it is used to help secure your most sensitive workloads. Explore the unique design of the Nitro System’s purpose-built hardware and software components and how they operate together. Dive into specific elements of the Nitro System design, including eliminating the possibility of operator access and providing a hardware root of trust and cryptographic system integrity protections. Learn important aspects of the Amazon EC2 tenant isolation model that provide strong mitigation against potential side-channel issues.
DAP322: Integrating AWS Private CA with SPIRE and Ottr at Coinbase Coinbase is a secure online platform for buying, selling, transferring, and storing cryptocurrency. This lightning talk provides an overview of how Coinbase uses AWS services, including AWS Private CA, AWS Secrets Manager, and Amazon RDS, to build out a Zero Trust architecture with SPIRE for service-to-service authentication. Learn how short-lived certificates are issued safely at scale for X.509 client authentication (i.e., Amazon MSK) with Ottr.
DAP331: AWS Private CA: Building better resilience and revocation techniques In this chalk talk, explore the concept of PKI resiliency and certificate revocation for AWS Private CA, and discover the reasons behind multi-Region resilient private PKI. Dive deep into different revocation methods like certificate revocation list (CRL) and Online Certificate Status Protocol (OCSP) and compare their advantages and limitations. Leave this talk with the ability to better design resiliency and revocations.
DAP231: Securing your application data with AWS storage services Critical applications that enterprises have relied on for years were designed for the database block storage and unstructured file storage prevalent on premises. Now, organizations are growing with cloud services and want to bring their security best practices along. This chalk talk explores the features for securing application data using Amazon FSx, Amazon Elastic File System (Amazon EFS), and Amazon Elastic Block Store (Amazon EBS). Learn about the fundamentals of securing your data, including encryption, access control, monitoring, and backup and recovery. Dive into use cases for different types of workloads, such as databases, analytics, and content management systems.
Hands-on sessions (builders’ sessions and workshops)
DAP353: Privacy-enhancing data collaboration with AWS Clean Rooms Organizations increasingly want to protect sensitive information and reduce or eliminate raw data sharing. To help companies meet these requirements, AWS has built AWS Clean Rooms. This service allows organizations to query their collective data without needing to expose the underlying datasets. In this builders’ session, get hands-on with AWS Clean Rooms preventative and detective privacy-enhancing controls to mitigate the risk of exposing sensitive data.
DAP371: Post-quantum crypto with AWS KMS TLS endpoints, SDKs, and libraries This hands-on workshop demonstrates post-quantum cryptographic algorithms and compares their performance and size to classical ones. Learn how to use AWS Key Management Service (AWS KMS) with the AWS SDK for Java to establish a quantum-safe tunnel to transfer the most critical digital secrets and protect them from a theoretical quantum computer targeting these communications in the future. Find out how the tunnels use classical and quantum-resistant key exchanges to offer the best of both worlds, and discover the performance implications.
DAP271: Data protection risk assessment for AWS workloads Join this workshop to learn how to simplify the process of selecting the right tools to mitigate your data protection risks while reducing costs. Follow the data protection lifecycle by conducting a risk assessment, selecting the effective controls to mitigate those risks, deploying and configuring AWS services to implement those controls, and performing continuous monitoring for audits. Leave knowing how to apply the right controls to mitigate your business risks using AWS advanced services for encryption, permissions, and multi-party processing.
If these sessions look interesting to you, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.
AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As a re:Inforce attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, CA, on June 13 and 14.
In these sessions, you’ll hear directly from AWS leaders and get hands-on with the tools that help you ship securely. You’ll hear how organization and culture help security accelerate your business, and you’ll dive deep into the technology that helps you more swiftly get features to customers. You might even find some new tools that make it simpler to empower your builders to move quickly and ship securely.
Breakout sessions, chalk talks, and lightning talks
APS221: From code to insight: Amazon Inspector & AWS Lambda in action In this lightning talk, see a demo of Amazon Inspector support for AWS Lambda. Inspect a Java application running on Lambda for security vulnerabilities using Amazon Inspector for an automated and continual vulnerability assessment. In addition, explore how Amazon Inspector can help you identify package and code vulnerabilities in your serverless applications.
APS302: From IDE to code review, increasing quality and security Attend this session to discover how to improve the quality and security of your code early in the development cycle. Explore how you can integrate Amazon CodeWhisperer, Amazon CodeGuru Reviewer, and Amazon Inspector into your development workflow, which can help you identify potential issues and automate your code review process.
APS201: How AWS and MongoDB raise the security bar with distributed ownership In this session, explore how AWS and MongoDB have approached creating their Security Guardians and Security Champions programs. Learn practical advice on scoping, piloting, measuring, scaling, and managing a program with the goal of accelerating development with a high security bar, and discover tips on how to bridge the gap between your security and development teams. Learn how a guardians or champions program can improve security outside of your dev teams for your company as a whole when applied broadly across your organization.
APS202: AppSec tooling & culture tips from AWS & Toyota Motor North America In this session, AWS and Toyota Motor North America (TMNA) share how they scale AppSec tooling and culture across enterprise organizations. Discover practical advice and lessons learned from the AWS approach to threat modeling across hundreds of service teams. In addition, gain insight on how TMNA has embedded security engineers into business units working on everything from mainframes to serverless. Learn ways to support teams at varying levels of maturity, the importance of differentiating between risks and vulnerabilities, and how to balance the use of cloud-native, third-party, and open-source tools at scale.
APS331: Shift security left with the AWS integrated builder experience As organizations start building their applications on AWS, they use a wide range of highly capable builder services that address specific parts of the development and management of these applications. The AWS integrated builder experience (IBEX) brings those separate pieces together to create innovative solutions for both customers and AWS Partners building on AWS. In this chalk talk, discover how you can use IBEX to shift security left by designing applications with security best practices built in and by helping prevent security issues and bottlenecks in agile software delivery.
Hands-on sessions (builders’ sessions and workshops)
APS271: Threat modeling for builders Attend this facilitated workshop to get hands-on experience creating a threat model for a workload. Learn some of the background and reasoning behind threat modeling, and explore tools and techniques for modeling systems, identifying threats, and selecting mitigations. In addition, explore the process for creating a system model and corresponding threat model for a serverless workload. Learn how AWS performs threat modeling internally, and discover the principles to effectively perform threat modeling on your own workloads.
APS371: Integrating open-source security tools with the AWS code services AWS, open-source, and partner tooling works together to accelerate your software development lifecycle. In this workshop, learn how to use Automated Security Helper (ASH), an open-source application security tool, to quickly integrate various security testing tools into your software build and deployment flows. AWS experts guide you through the process of security testing locally on your machines and within the AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline services. In addition, discover how to identify potential security issues in your applications through static analysis, software composition analysis, and infrastructure-as-code testing.
APS352: Practical shift left for builders Providing early feedback in the development lifecycle maximizes developer productivity and enables engineering teams to deliver quality products. In this builders’ session, learn how to use AWS Developer Tools to empower developers to make good security decisions early in the development cycle. Tools such as Amazon CodeGuru, Amazon CodeWhisperer, and Amazon DevOps Guru can provide continuous, real-time feedback on each code commit to a source repository and supplement this with ML capabilities integrated into the code review stage.
APS351: Secure software factory on AWS through the DoD DevSecOps lens Modern information systems within regulated environments are driven by the need to develop software with security at the forefront. Increasingly, organizations are adopting DevSecOps and secure software factory patterns to improve the security of their software delivery lifecycle. In this builders’ session, we will explore the options available on AWS to create a secure software factory aligned with the DoD Enterprise DevSecOps initiative. We will focus on the security of the software supply chain as we deploy code from source to an Amazon Elastic Kubernetes Service (Amazon EKS) environment.
If these sessions look interesting to you, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!
This is a guest post by Preet Jassi from Alexa Smart Properties.
Alexa Smart Properties (ASP) is powered by a set of technologies that property owners, property managers, and third-party solution providers can use to deploy and manage Alexa-enabled devices at scale. Alexa can simplify tasks like playing music, controlling lights, or communicating with on-site staff. Our team got its start by building products for hospitality and residential properties, but we have since expanded our products to serve senior living and healthcare properties.
With Alexa now available in hotels, hospitals, senior living homes, and other facilities, we hear stories from our customers every day about how much they love Alexa: everything from helping veterans with visual impairments gain access to information to enabling a senior living home resident who had fallen and sustained an injury to immediately alert staff. It’s a great feeling when you can say, “The product I work on every day makes a difference in people’s lives!”
Our team builds the software that leading hospitality, healthcare, and senior living facilities use to manage Alexa devices in their properties. We partner directly with organizations that manage their own properties as well as third-party solution providers to provide comprehensive strategy and deployment support for Alexa devices and skills, making sure that they are ready for end-user customers. Our primary goal is to create value for properties through improved customer satisfaction, cost savings, and incremental revenue. We wanted a fast, efficient, and easily accessible way to measure that impact from a return on investment (ROI) perspective.
After we had established what capabilities we needed to close our analytics gap, we got in touch with the Amazon QuickSight team to help. In this post, we discuss our requirements and why Amazon QuickSight Embedded was the right fit for what we needed.
Telling the ROI story with data
As a business-to-business-to-consumer product, our team serves the needs of two customers: the end-users who enjoy Alexa-enabled devices at the properties, and the property managers or solution providers that manage the Alexa deployment. We needed to prove to the latter group of customers that deploying Alexa would not only help them delight their customers, but save money as well.
We had the data necessary to tell that ROI story, but we needed an analytics solution that would allow us to provide insights that could be communicated to leadership.
These were our requirements:
Embeddable dashboards – We wanted to embed analytics into our Alexa Smart Properties management console, used by both enterprise customers and solution providers. With QuickSight, we embed dashboards that present aggregated Alexa usage analytics directly in the console.
Easy access to insights – We wanted a tool that was accessible to all of our customers, whether they had a technical background or not. QuickSight provides a beautiful, user-friendly user interface (UI) that our customers can use to interpret their data and analytics.
Customizable and rich visuals – Our customers needed to be able to dive deep. QuickSight allows you to drill down into the data to easily create and change whatever visuals you need. Our customers love the look of the visuals and how easy it is to share them with their customers.
Analytics drive engagement
With QuickSight, we can now show detailed device usage information, including quantity and frequency, with insights that connect the dots between that engagement and cost savings. For example, property managers can look at total dialog counts to determine that their guests are using Alexa often, which validates their investment.
The following screenshots show an example of the dashboard our solution providers can access, which they can use to send reports to staff at the properties they serve.
The following screenshots show an example of the Communications tab, which illustrates how properties use communications features to save costs (both in terms of time and equipment). Customers save time and money on protective equipment by using Alexa’s remote communication features, which enable caretakers to virtually check in on patients instead of visiting a property in person. These metrics help our customers calculate the cost savings from using Alexa.
In the last year, the Analytics page in our management console has had over 20,000 page views from customers who are accessing the data and insights there to understand the impact Alexa has had on their businesses.
Insights validate investment
With QuickSight embedded dashboards, our direct-property customers and solution providers now have an easy-to-understand visual representation of how Alexa is making a difference for the guests and patients at each property. Embedded dashboards simplify viewing, analyzing, and gathering insights from the key usage metrics that help both enterprise property owners and solution providers connect the dots between Alexa’s use and money saved. Because we use Amazon Redshift to house our data, QuickSight’s seamless Redshift integration made it a fantastic choice.
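As an illustration of the embedding flow, here is a minimal sketch, assuming the AWS SDK for Java 2.x QuickSight client and using hypothetical account, user, dashboard, and domain values, of generating a registered-user embed URL that a management console could load in an iframe:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.quicksight.QuickSightClient;
import software.amazon.awssdk.services.quicksight.model.GenerateEmbedUrlForRegisteredUserRequest;
import software.amazon.awssdk.services.quicksight.model.GenerateEmbedUrlForRegisteredUserResponse;
import software.amazon.awssdk.services.quicksight.model.RegisteredUserDashboardEmbeddingConfiguration;
import software.amazon.awssdk.services.quicksight.model.RegisteredUserEmbeddingExperienceConfiguration;

public class EmbedDashboardExample {
    public static void main(String[] args) {
        QuickSightClient quickSight = QuickSightClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // Request a short-lived embed URL for one dashboard, scoped to a registered QuickSight user.
        // The account ID, user ARN, dashboard ID, and allowed domain are hypothetical placeholders.
        GenerateEmbedUrlForRegisteredUserResponse response = quickSight.generateEmbedUrlForRegisteredUser(
                GenerateEmbedUrlForRegisteredUserRequest.builder()
                        .awsAccountId("111122223333")
                        .userArn("arn:aws:quicksight:us-east-1:111122223333:user/default/console-user")
                        .sessionLifetimeInMinutes(60L)
                        .allowedDomains("https://console.example.com")
                        .experienceConfiguration(RegisteredUserEmbeddingExperienceConfiguration.builder()
                                .dashboard(RegisteredUserDashboardEmbeddingConfiguration.builder()
                                        .initialDashboardId("dashboard-id")
                                        .build())
                                .build())
                        .build());

        // The URL is rendered in the application (for example, in an iframe or with the
        // QuickSight embedding SDK) and expires after the configured session lifetime.
        System.out.println(response.embedUrl());

        quickSight.close();
    }
}

Because the URL is tied to a specific QuickSight user, the embedded view stays scoped to that user's permissions, which helps keep each property's or solution provider's view limited to the data they are entitled to see.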
Going forward, we plan to expand and improve upon the analytics foundation we’ve built with QuickSight by providing programmatic access to data—for example, a CSV file that can be sent to a customer’s Amazon Simple Storage Service (Amazon S3) bucket—as well as adding more data to our dashboards, thereby creating new opportunities for deeper insights.
To learn more about how you can embed customized data visuals, interactive dashboards, and natural language querying into any application, visit Amazon QuickSight Embedded.
About the Author
Preet Jassi is a Principal Product Manager Technical with Alexa Smart Properties. Preet fell in love with technology in Grade 5, when he built his first website for his elementary school. Prior to completing his MBA at Cornell, Preet was a UI Team Lead with over 6 years of experience as a software engineer after his BSc. His passion is combining his love of technology (specifically analytics and artificial intelligence) with design and business strategy to build products that customers love; outside of work, he enjoys spending time with family and keeping active. He currently manages the Developer Experience for Alexa Smart Properties, focusing on making it quick and easy to deploy Alexa devices in properties, and he loves hearing quotes from end customers on how Alexa has changed their lives.
At Amazon Web Services (AWS), we move fast and continually iterate to meet the evolving needs of our customers. We design services that can help our customers meet even the most stringent security and compliance requirements. Additionally, our service teams work closely with our AWS Security Guardians program to coordinate security efforts and to maintain a high quality bar. We also have internal compliance teams that continually monitor security control requirements from all over the world and engage with external auditors to achieve third-party validation of our services against these requirements.
In this post, I’ll cover some key strategies and best practices that we use to scale security and compliance while maintaining a culture of innovation.
Security as the foundation
At AWS, security is our top priority. Although compliance might be challenging, treating security as an integral part of everything we do at AWS makes it possible for us to adhere to a broad range of compliance programs, to document our compliance, and to successfully demonstrate our compliance status to our auditors and customers.
Over time, as the auditors get deeper into what we’re doing, we can also help improve and refine their approach. This increases the depth and quality of the reports that we provide directly to our customers.
The challenge of scaling securely
Many customers struggle with balancing security, compliance, and production. These customers have applications that they want to quickly make available to their own customer base. They might need to audit these applications. The traditional process can include writing the application, putting it into production, and then having the audit team take a look to make sure it meets compliance standards. This approach can cause issues, because retroactively adding compliance requirements can result in rework and churn for the development team.
Enforcing compliance requirements in this way doesn’t scale and eventually causes more complexity and friction between teams. So how do you scale quickly and securely?
Speak their language
The first way to earn trust with development teams is to speak their language. It’s critical to use terms and references that developers use, and to know what tools they are using to develop, deploy, and secure code. It’s not efficient or realistic to ask the engineering teams to do the translation of diverse (and often vague) compliance requirements into engineering specs. The compliance teams must do the hard work of translating what is required into what specifically must be done, using language that engineers are familiar with.
Another strategy to scale is to embed compliance requirements into the way developers do their daily work. It’s important that compliance teams enable developers to do their work just as they normally do, without compliance needing to intervene. If you’re successful at that strategy—and the compliant path becomes the simplest and most natural path—then that approach can lead to a very scalable compliance program that fosters understanding between teams and increased collaboration. This approach has helped break down the barriers between the developer and audit/compliance organizations.
Treat auditors and regulators as partners
I believe that you should treat auditors and regulators as true business partners. An independent auditor or regulator understands how a wide range of customers will use the security assurance artifacts that you are producing, and therefore will have valuable insights into how your reports can best be used. I think people can fall into the trap of treating regulators as adversaries. The best approach is to communicate openly with regulators, helping them understand your business and the value you bring to your customers, and getting them ramped up on your technology and processes.
At AWS, we help auditors and regulators get ramped up in various ways. For example, we have the Digital Audit Symposium, which contains presentations on how we control and operate particular services in terms of security and compliance. We also offer the Cloud Audit Academy, a learning path that provides both cloud-agnostic and AWS-specific training to help existing and prospective auditing, risk, and compliance professionals understand how to audit regulated cloud workloads. We’ve learned that being a partner with auditors and regulators is key in scaling compliance.
Conclusion
Having security as a foundation is essential to driving and scaling compliance efforts. Speaking the language of developers helps them continue to work without disruption, and makes the simple path the compliant path. Although some barriers still exist, especially for organizations in highly regulated industries such as financial services and healthcare, treating auditors like partners is a positive strategic shift in perspective. The more proactive you are in helping them accomplish what they need, the faster you will realize the value they bring to your business.