
AWS completes the first cloud audit by the Ingelheim Kreis Initiative Joint Audits group for the pharmaceutical and life sciences sector

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-completes-the-first-cloud-audit-by-the-ingelheim-kreis-initiative-joint-audits-group-for-the-pharmaceutical-and-life-sciences-sector/

We’re excited to announce that Amazon Web Services (AWS) has completed the first cloud service provider (CSP) audit by the Ingelheim Kreis (IK) Initiative Joint Audits group. The audit group represents quality and compliance professionals from some of our largest pharmaceutical and life sciences customers who collectively perform audits on their key suppliers.

As customers embrace the scalability and flexibility of AWS, we’re helping them evolve security, identity, and compliance into key business enablers. At AWS, we’re obsessed with earning and maintaining customer trust. We work hard to provide our pharmaceutical and life sciences customers and their regulatory bodies with the assurance that AWS has the necessary controls in place to help protect their most sensitive data and regulated workloads.

Our collaboration with the IK Joint Audits Group to complete the first CSP audit is a good example of how we support your risk management and regulatory efforts. Regulated pharmaceutical and life sciences customers are required by GxP to employ a risk-based approach to design, develop, and maintain computerized systems. GxP is a collection of quality guidelines and regulations that are designed to ensure the safe development and manufacturing of medical devices, pharmaceuticals, biologics, and other food and medical products. Currently, no specific GxP compliance certification exists for CSPs, so pharmaceutical companies must perform their own supplier assessments to determine the adequacy of their development and support processes.

The joint audit thoroughly assessed the AWS controls that are designed to protect your data and material workloads and help satisfy your regulatory requirements. As more pharmaceutical and life sciences companies use cloud technology for their operations, the industry is experiencing greater regulatory oversight. Because the joint audit of independent auditors represented a group of companies, both AWS and our customers were able to streamline common controls and increase transparency, and use audit resources more efficiently to help decrease the organizational burden on both the companies and the supplier (in this case, AWS).

Audit results

The IK audit results provide IK members with assurance regarding the AWS controls environment, enabling members to work to remove compliance blockers, accelerate their adoption of AWS services, and obtain confidence and trust in the security controls of AWS.

As stated by the IK auditors, “…in the course of the Audit it became obvious that AWS within their service development, their data center operation and with their employees acts highly professional with a clear customer focus. The amount of control that AWS implemented and continuously extends exceeds our expectations of a qualified CSP”.

The report is confidential and only available to IK members who signed the NDA with AWS prior to the start of the audit in 2023. Members can access the report and assess their own residual risk. To participate in a future audit cycle, contact [email protected].

To learn more about our commitment to safeguard customer data, see AWS Cloud Security. For more information about the robust controls that are in place at AWS, see the AWS Compliance Program page. By integrating governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance helps you set up and operate in an AWS control environment. Customers can also access AWS Artifact to download other compliance reports that independent auditors have evaluated.

Further reading

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Janice Leung

Janice is a Security Assurance Program Manager at AWS, based in New York. She leads audit programs for industries including the automobile, healthcare, and telecommunications sectors across Europe. She previously worked in security assurance and technology risk management in the financial industry for 10 years.

Ian Sutcliffe

Ian is a Global Solution Architect with more than 30 years of experience in IT, primarily in the life sciences industry. A thought leader in the area of regulated cloud computing, one of his areas of focus is IT operating model and process optimization and automation with the intent of helping customers become regulated cloud natives.

Senthil Gurumoorthi

Senthil is the Principal, Global Security Assurance Lead – HCLS at AWS. He has over 20 years of experience in global biopharmaceutical healthcare and life sciences business technologies with leadership expertise in technology delivery, risk, security, health authority inspection, audits, and quality management. He is an experienced speaker, panelist, and moderator on HCLS security, quality, and compliance topics.

Latest PCI DSS v4.0 compliance package available in AWS Artifact

Post Syndicated from Nivetha Chandran original https://aws.amazon.com/blogs/security/latest-pci-dss-v4-0-compliance-package-available-in-aws-artifact/

Amazon Web Services is pleased to announce that eight additional AWS services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) v4.0 certification.

Coalfire, a third-party Qualified Security Assessor (QSA), evaluated AWS. For the full list of services in scope, see AWS Services in Scope by Compliance Program.

Customers can access the PCI DSS package in AWS Artifact. The package includes the following:

  • Attestation of Compliance (AoC) — shows that AWS has been successfully validated against the PCI DSS standard.
  • AWS Responsibility Summary — provides information to help you effectively manage a PCI cardholder environment on AWS and better understand your responsibility for operating the controls needed to develop and run a secure environment on AWS.

To learn more about our PCI program and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

Want more AWS Security news? Follow us on Twitter.

Author

Nivetha Chandran

Nivetha is a Security Assurance Manager at Amazon Web Services. She leads multiple security and compliance initiatives within AWS. Nivetha has over ten years of experience in security assurance and holds a Master’s degree in Information Management from the University of Washington.

Building a security-first mindset: three key themes from AWS re:Invent 2023

Post Syndicated from Clarke Rodgers original https://aws.amazon.com/blogs/security/building-a-security-first-mindset-three-key-themes-from-aws-reinvent-2023/

Amazon CSO Stephen Schmidt

AWS re:Invent drew 52,000 attendees from across the globe to Las Vegas, Nevada, November 27 to December 1, 2023.

Now in its 12th year, the conference featured 5 keynotes, 17 innovation talks, and over 2,250 sessions and hands-on labs offering immersive learning and networking opportunities.

With dozens of service and feature announcements—and innumerable best practices shared by AWS executives, customers, and partners—the air of excitement was palpable. We were on site to experience all of the innovations and insights, but summarizing highlights isn’t easy. This post details three key security themes that caught our attention.

Security culture

When we think about cybersecurity, it’s natural to focus on technical security measures that help protect the business. But organizations are made up of people—not technology. The best way to protect ourselves is to foster a proactive, resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

In Sustainable security culture: Empower builders for success, AWS Global Services Security Vice President Hart Rossman and AWS Global Services Security Organizational Excellence Leader Sarah Currey presented practical strategies for building a sustainable security culture.

Rossman noted that many customers who meet with AWS about security challenges are attempting to manage security as a project, a program, or a side workstream. To strengthen your security posture, he said, you have to embed security into your business.

“You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS

Three best practices can help:

  1. Be consistently persistent. Routinely and emphatically thank employees for raising security issues. It might feel repetitive, but treating security events and escalations as learning opportunities helps create a positive culture—and it’s a practice that can spread to other teams. An empathetic leadership approach encourages your employees to see security as everyone’s responsibility, share their experiences, and feel like collaborators.
  2. Brief the board. Engage executive leadership in regular, business-focused meetings. By providing operational metrics that tie your security culture to the impact that it has on customers, crisply connecting data to business outcomes, and providing an opportunity to ask questions, you can help build the support of executive leadership, and advance your efforts to establish a sustainable proactive security posture.
  3. Have a mental model for creating a good security culture. Rossman presented a diagram (Figure 1) that highlights three elements of security culture he has observed at AWS: a student, a steward, and a builder. If you want to be a good steward of security culture, you should be a student who is constantly learning, experimenting, and passing along best practices. As your stewardship grows, you can become a builder, and progress the culture in new directions.
Figure 1: Sample mental model for building security culture

Thoughtful investment in the principles of inclusivity, empathy, and psychological safety can help your team members to confidently speak up, take risks, and express ideas or concerns. This supports an escalation-friendly culture that can reduce employee burnout, and empower your teams to champion security at scale.

In Shipping securely: How strong security can be your strategic advantage, AWS Enterprise Strategy Director Clarke Rodgers reiterated the importance of security culture to building a security-first mindset.

Rodgers highlighted three pillars of progression (Figure 2)—aware, bolted-on, and embedded—that are based on meetings with more than 800 customers. As organizations mature from a reactive security posture to a proactive, security-first approach, he noted, security culture becomes a true business enabler.

“When organizations have a strong security culture and everyone sees security as their responsibility, they can move faster and achieve quicker and more secure product and service releases.” — Clarke Rodgers, Director of Enterprise Strategy at AWS
Figure 2: Shipping with a security-first mindset

Human-centric AI

CISOs and security stakeholders are increasingly pivoting to a human-centric focus to establish effective cybersecurity, and ease the burden on employees.

According to Gartner, by 2027, 50% of large enterprise CISOs will have adopted human-centric security design practices to minimize cybersecurity-induced friction and maximize control adoption.

As Amazon CSO Stephen Schmidt noted in Move fast, stay secure: Strategies for the future of security, focusing on technology first is fundamentally wrong. Security is a people challenge for threat actors, and for defenders. To keep up with evolving changes and securely support the businesses we serve, we need to focus on dynamic problems that software can’t solve.

Maintaining that focus means providing security and development teams with the tools they need to automate and scale some of their work.

“People are our most constrained and most valuable resource. They have an impact on every layer of security. It’s important that we provide the tools and the processes to help our people be as effective as possible.” — Stephen Schmidt, CSO at Amazon

Organizations can use artificial intelligence (AI) to impact all layers of security—but AI doesn’t replace skilled engineers. When used in coordination with other tools, and with appropriate human review, it can help make your security controls more effective.

Schmidt highlighted the internal use of AI at Amazon to accelerate our software development process, as well as new generative AI-powered Amazon Inspector, Amazon Detective, AWS Config, and Amazon CodeWhisperer features that complement the human skillset by helping people make better security decisions, using a broader collection of knowledge. This pattern of combining sophisticated tooling with skilled engineers is highly effective, because it positions people to make the nuanced decisions required for effective security that AI can’t make on its own.

In How security teams can strengthen security using generative AI, AWS Senior Security Specialist Solutions Architects Anna McAbee and Marshall Jones, and Principal Consultant Fritz Kunstler featured a virtual security assistant (chatbot) that can address common security questions and use cases based on your internal knowledge bases, and trusted public sources.

Figure 3: Generative AI-powered chatbot architecture

The generative AI-powered solution depicted in Figure 3—which includes Retrieval Augmented Generation (RAG) with Amazon Kendra, Amazon Security Lake, and Amazon Bedrock—can help you automate mundane tasks, expedite security decisions, and increase your focus on novel security problems.

It’s available on GitHub with ready-to-use code, so you can start experimenting with a variety of large and multimodal language models, settings, and prompts in your own AWS account.
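The core of the RAG pattern is simple: retrieved passages from trusted sources are stitched into the prompt so that the model answers from grounded context rather than from memory alone. The sketch below illustrates that prompt-assembly step only; the function names, prompt format, and sample passages are illustrative assumptions, not code from the repository.

```python
# Minimal sketch of the prompt-assembly step in Retrieval Augmented
# Generation (RAG): retrieved passages are numbered, attributed to their
# sources, and prepended to the user's question. Names are illustrative.

def build_rag_prompt(question, passages, max_passages=3):
    """Assemble a grounded prompt from retrieved passages."""
    context = "\n\n".join(
        f"[{i + 1}] {p['text']} (source: {p['source']})"
        for i, p in enumerate(passages[:max_passages])
    )
    return (
        "Answer the security question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hand-written sample passages standing in for Amazon Kendra results.
passages = [
    {"text": "Rotate IAM access keys every 90 days.", "source": "internal-wiki"},
    {"text": "Use IAM roles instead of long-lived keys.", "source": "aws-docs"},
]
prompt = build_rag_prompt("How should we manage access keys?", passages)
print(prompt)
```

In the full architecture, the assembled prompt would be sent to a model through Amazon Bedrock, with Amazon Kendra supplying the retrieved passages.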

Secure collaboration

Collaboration is key to cybersecurity success, but evolving threats, flexible work models, and a growing patchwork of data protection and privacy regulations have made maintaining secure and compliant messaging a challenge.

An estimated 3.09 billion mobile phone users access messaging apps to communicate, and this figure is projected to grow to 3.51 billion users in 2025.

The use of consumer messaging apps for business-related communications makes it more difficult for organizations to verify that data is being adequately protected and retained. This can lead to increased risk, particularly in industries with unique recordkeeping requirements.

In How the U.S. Army uses AWS Wickr to deliver lifesaving telemedicine, Matt Quinn, Senior Director at the U.S. Army Telemedicine & Advanced Technology Research Center (TATRC), Laura Baker, Senior Manager at Deloitte, and Arvind Muthukrishnan, AWS Wickr Head of Product, highlighted how the TATRC National Emergency Tele-Critical Care Network (NETCCN) was integrated with AWS Wickr—a HIPAA-eligible secure messaging and collaboration service—and AWS Private 5G, a managed service for deploying and scaling private cellular networks.

During the session, Quinn, Baker, and Muthukrishnan described how TATRC achieved a low-resource, cloud-enabled, virtual health solution that facilitates secure collaboration between onsite and remote medical teams for real-time patient care in austere environments. Using Wickr, medics on the ground were able to treat injuries that exceeded their previous training (Figure 4) with the help of end-to-end encrypted video calls, messaging, and file sharing with medical professionals, and securely retain communications in accordance with organizational requirements.

“Incorporating Wickr into Military Emergency Tele-Critical Care Platform (METTC-P) not only provides the security and privacy of end-to-end encrypted communications, it gives combat medics and other frontline caregivers the ability to gain instant insight from medical experts around the world—capabilities that will be needed to address the simultaneous challenges of prolonged care, and the care of large numbers of casualties on the multi-domain operations (MDO) battlefield.” — Matt Quinn, Senior Director at TATRC
Figure 4: Telemedicine workflows using AWS Wickr

In a separate Chalk Talk titled Bolstering Incident Response with AWS Wickr and Amazon EventBridge, Senior AWS Wickr Solutions Architects Wes Wood and Charles Chowdhury-Hanscombe demonstrated how to integrate Wickr with Amazon EventBridge and Amazon GuardDuty to strengthen incident response capabilities with an integrated workflow (Figure 5) that connects your AWS resources to Wickr bots. Using this approach, you can quickly alert appropriate stakeholders to critical findings through a secure communication channel, even on a potentially compromised network.

Figure 5: AWS Wickr integration for incident response communications

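An integration along these lines typically starts with an EventBridge rule whose event pattern matches GuardDuty findings, with a target (such as a Lambda function) that relays matched findings to a Wickr bot. The sketch below shows such a pattern plus a tiny local matcher for testing; the severity threshold and the bot wiring are assumptions for illustration, not details from the talk.

```python
import json

# Sketch of an EventBridge event pattern that matches high-severity
# GuardDuty findings. A rule with this pattern could target a Lambda
# function that forwards the finding to a Wickr bot. The severity
# threshold (7.0 and up is "High" in GuardDuty) is an assumption.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
        "severity": [{"numeric": [">=", 7]}],
    },
}

def matches(pattern, event):
    """Tiny matcher for this sketch only (EventBridge's real matching is
    richer): checks source, detail-type, and the numeric severity rule."""
    if event.get("source") not in pattern["source"]:
        return False
    if event.get("detail-type") not in pattern["detail-type"]:
        return False
    threshold = pattern["detail"]["severity"][0]["numeric"][1]
    return event.get("detail", {}).get("severity", 0) >= threshold

finding = {
    "source": "aws.guardduty",
    "detail-type": "GuardDuty Finding",
    "detail": {"severity": 8.2},
}
print(json.dumps(event_pattern))
print(matches(event_pattern, finding))  # True for a high-severity finding
```

In a live setup you would pass the pattern to EventBridge (for example, via `put_rule` in boto3) rather than matching events yourself; the local matcher exists only to make the pattern's intent concrete.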
Security is our top priority

AWS re:Invent featured many more highlights on a variety of topics, including adaptive access control with Zero Trust, AWS cyber insurance partners, Amazon CTO Dr. Werner Vogels’ popular keynote, and the security partnerships showcased on the Expo floor. It was a whirlwind experience, but one thing is clear: AWS is working hard to help you build a security-first mindset, so that you can meaningfully improve both technical and business outcomes.

To watch on-demand conference sessions, visit the AWS re:Invent Security, Identity, and Compliance playlist on YouTube.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Clarke Rodgers

Clarke is a Director of Enterprise Security at AWS. Clarke has more than 25 years of experience in the security industry, and works with enterprise security, risk, and compliance-focused executives to strengthen their security posture, and understand the security capabilities of the cloud. Prior to AWS, Clarke was a CISO for the North American operations of a multinational insurance company.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than 13 years of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

AWS named as a Leader in 2023 ISG Provider Lens report for Multi Public Cloud Services – Sovereign Cloud Infrastructure Services (EU)

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/aws-named-as-a-leader-in-2023-isg-provider-lens-report-for-multi-public-cloud-services-sovereign-cloud-infrastructure-services-eu/

Amazon Web Services (AWS) is named as a Leader in the 2023 ISG Provider Lens Quadrant Report for Multi Public Cloud Services – Sovereign Cloud Infrastructure Services (EU), published on January 8, 2024. This first-ever Information Services Group (ISG) report evaluates providers of sovereign cloud infrastructure services in the multi public cloud environment and examines how they address the key challenges that confront enterprise clients in the European Union (EU). ISG defines Leaders as representing innovative strength and competitive stability.

AWS received the highest score among the providers that ISG evaluated on portfolio attractiveness, which was assessed on multiple factors, including:

  • Scope of portfolio – breadth and depth of offering
  • Portfolio quality – technology and skills, customer satisfaction, and security
  • Strategy and vision – product roadmap, thought leadership, and investments
  • Local characteristics – product support and infrastructure

According to ISG, “AWS’ network of data centers across the EU provides sovereign cloud services that are highly scalable. The AWS Nitro System, the foundation of AWS’ cloud services, ensures data residency, privacy, and sovereignty.”

Read the report to:

  • Gain perspective on the factors that ISG believes will influence the sovereign cloud market in the EU.
  • Discover some of the factors that enterprises in the EU should consider when evaluating sovereign cloud infrastructure services.
  • Learn how the AWS Cloud is sovereign-by-design and how we continue to innovate without compromising on the full power of AWS.

The recognition of AWS as a Leader in this report highlights the work that we have undertaken to help address the complexity that European customers are facing in the evolving sovereignty landscape. AWS continues to deliver on the AWS Digital Sovereignty Pledge by investing in a comprehensive and ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience to provide customers with more choice in meeting their unique needs. Our recent innovations to help customers address their local regulatory requirements and sovereignty needs include AWS Dedicated Local Zones and the announcement of plans to launch the AWS European Sovereign Cloud. Download the full 2023 ISG Provider Lens Quadrant Report for Multi Public Cloud Services – Sovereign Cloud Infrastructure Services (EU) from AWS.

If you have feedback about this post, submit comments in the Comments section below.

Author

Marta Taggart

Marta is a Seattle native and Principal Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services and digital sovereignty. Outside of work, you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels (with limited success).

AWS Certificate Manager will discontinue WHOIS lookup for email-validated certificates

Post Syndicated from Lerna Ekmekcioglu original https://aws.amazon.com/blogs/security/aws-certificate-manager-will-discontinue-whois-lookup-for-email-validated-certificates/

AWS Certificate Manager (ACM) is a managed service that you can use to provision, manage, and deploy public and private TLS certificates for use with Amazon Web Services (AWS) and your internal connected resources. Today, we’re announcing that ACM will be discontinuing the use of WHOIS lookup for validating domain ownership when you request email-validated TLS certificates.

WHOIS lookup is commonly used to query registration information for a given domain name. This information includes details such as when the domain was originally registered, and contact information for the domain owner and the technical and administrative contacts. Domain owners create and maintain domain registration information outside of ACM in WHOIS, which is a publicly available directory that contains information about domains sponsored by domain registrars and registries. You can use WHOIS lookup to view information about domains that are registered with Amazon Route 53.

Starting June 2024, ACM will no longer send domain validation emails by using WHOIS lookup for new email-validated certificates that you request. Starting October 2024, ACM will no longer send domain validation emails to mailboxes associated with WHOIS lookup for renewal of existing email-validated certificates. ACM will continue to send validation emails to the five common system addresses for the requested domain—we provide a list of these common system addresses in the next section of this post.

In this blog post, we share important details about this change and how you can prepare. Note that if you currently use DNS validation for your certificates requested from ACM, this change doesn’t affect you. These changes only apply to certificates that use email validation.

Background

When you request public certificates through ACM, you need to prove that you own or control the domain before ACM can issue the public certificate. ACM provides two options to validate ownership of a domain: DNS validation and email validation.

AWS recommends that you use DNS validation whenever possible so that ACM can automatically renew certificates that are requested from ACM without requiring action on your part. Email validation is another option that you can use to prove ownership of the domain, but you must manually validate ownership of the domain by using a link provided in an email. Figure 1 is a sample validation email from ACM for the AWS account 111122223333 and AWS US West (Oregon) Region (us-west-2) to validate ownership of the example.com domain.

Figure 1: Sample validation email for example.com domain

How does ACM know where to send the validation email? Today, as part of the email validation process, ACM sends domain validation emails to the three contact addresses associated with the domain listed in the WHOIS database. These contact addresses are the domain registrant, technical contact, and administrative contact. You create and maintain domain registration information, including these contact addresses, outside of ACM—in the WHOIS database that your domain registrar provides.

Note: If you use Route 53, see Updating contact information for a domain to update the contact information for your domain.

ACM also sends validation emails to the following five common system addresses for each domain: 

  • administrator@your_domain_name
  • hostmaster@your_domain_name
  • postmaster@your_domain_name
  • webmaster@your_domain_name
  • admin@your_domain_name

To prove that you own the domain, you must select the validation link included in these emails. ACM also sends validation emails to these same addresses to renew the certificate when the certificate is 45 days from expiry.
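Because the five system addresses are a fixed set, they can be generated programmatically for any domain, which is handy when configuring a catch-all mailbox or mail-routing rule so that renewal emails are not missed. A minimal sketch:

```python
# The five common system addresses that ACM sends validation email to,
# for a given domain (per the list above).
SYSTEM_MAILBOXES = (
    "administrator",
    "hostmaster",
    "postmaster",
    "webmaster",
    "admin",
)

def acm_system_addresses(domain):
    """Return the five common system addresses for a domain."""
    return [f"{mailbox}@{domain}" for mailbox in SYSTEM_MAILBOXES]

print(acm_system_addresses("example.com"))
```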

What’s changing?

If you currently use email validation for certificates requested from ACM, there are two important dates that you should be aware of:

  1. Starting June 2024, ACM will no longer send domain validation emails by using WHOIS lookup for new email-validated certificates that you request. ACM will continue to send validation emails to the three WHOIS lookup contact addresses for renewal of existing certificates, until October 2024.
  2. Starting October 2024, ACM will no longer send the validation emails to mailboxes associated with WHOIS lookup for existing certificates. After this date, ACM will not send validation emails to the three WHOIS lookup addresses for new or existing certificates.

ACM will continue to send validation emails to the five common system addresses that we listed in the previous section of this post.

Why are we making this change?

We’re making this change to mitigate a potential availability risk for ACM customers. A TLS certificate that ACM issues is valid for up to 395 days, and if you want to keep using it, you need to renew it prior to expiry. To renew an email-validated certificate, you must approve an email that ACM sends. ACM sends the first renewal email 45 days prior to certificate renewal, and if you don’t respond to this email, ACM sends additional reminders prior to expiry. If a certificate bound to one of your AWS resources—such as an Application Load Balancer—expires without being renewed, this could cause an outage for your application.

Some domain registrars that support WHOIS have made changes to the data that they publish to support their compliance with various privacy laws and recommended practices. Over the past several years, we’ve observed that the WHOIS lookup success rate has declined to less than 5 percent. If you rely on the contact addresses listed in the WHOIS database provided by your domain registrar to validate your domain ownership, this might create an availability risk. With a 5 percent success rate for WHOIS lookup, you might not receive validation emails for renewals of your certificates around 95 percent of the time. To provide a consistent mechanism for validating domain ownership when renewing certificates, ACM will only send validation emails to the five common system addresses that we listed in the Background section of this post.

What should you do to prepare?

If you currently monitor one of the five common system addresses (listed previously) for your domains, you don’t need to take any action. Otherwise, we strongly recommend that you create new DNS-validated certificates rather than creating and using email-validated certificates. ACM can automatically renew a DNS-validated certificate, without you taking any action, as long as the CNAME record is configured correctly.

Alternatively, if you want to continue using email-validated certificates, we recommend that you monitor at least one of the five common system addresses listed previously. ACM sends the validation emails during certificate issuance for new ACM-issued certificates and during renewal of existing certificates. You can use the ACM describe-certificate API or check the certificate details on the ACM console to see whether ACM previously sent validation emails to the relevant system addresses.
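The describe-certificate response includes a DomainValidationOptions list with, for email-validated domains, the addresses ACM targeted. The sketch below extracts those targets from a hand-written sample response (it is not live API output); in practice you would obtain the response from `boto3.client("acm").describe_certificate(CertificateArn=...)`.

```python
# Hand-written sample of the relevant slice of a DescribeCertificate
# response, for illustration only.
sample_response = {
    "Certificate": {
        "DomainName": "example.com",
        "DomainValidationOptions": [
            {
                "DomainName": "example.com",
                "ValidationMethod": "EMAIL",
                "ValidationStatus": "SUCCESS",
                "ValidationEmails": [
                    "admin@example.com",
                    "administrator@example.com",
                ],
            }
        ],
    }
}

def validation_email_targets(response):
    """Return {domain: [emails]} for every EMAIL-validated domain
    in a DescribeCertificate response."""
    targets = {}
    for option in response["Certificate"]["DomainValidationOptions"]:
        if option.get("ValidationMethod") == "EMAIL":
            targets[option["DomainName"]] = option.get("ValidationEmails", [])
    return targets

print(validation_email_targets(sample_response))
```

Checking these targets against the system addresses you actually monitor is a quick way to find certificates whose renewal emails would otherwise go unread.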

In addition, we strongly recommend that you use ACM Certificate Approaching Expiration events to monitor your certificates for expiry and help ensure that you’re notified about certificates that require an action from you to renew. For additional guidance, see How to manage certificate lifecycles using ACM event-driven workflows.
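An EventBridge rule for these expiration events can feed an alerting workflow. The sketch below shows a pattern for ACM's expiry events together with a simple local filter for a chosen window; the `DaysToExpiry` field name and the 45-day threshold are taken as working assumptions, so verify them against the current ACM event schema before relying on them.

```python
import json

# Sketch of an EventBridge event pattern for ACM certificate expiry
# events, plus a local filter for certificates expiring soon.
expiry_pattern = {
    "source": ["aws.acm"],
    "detail-type": ["ACM Certificate Approaching Expiration"],
}

def needs_attention(event, max_days=45):
    """Flag events whose certificate expires within max_days.
    Events without a DaysToExpiry field are not flagged."""
    return event.get("detail", {}).get("DaysToExpiry", max_days + 1) <= max_days

event = {
    "source": "aws.acm",
    "detail-type": "ACM Certificate Approaching Expiration",
    "detail": {"DaysToExpiry": 31},
}
print(json.dumps(expiry_pattern))
print(needs_attention(event))  # True: 31 days is inside the 45-day window
```

A rule with this pattern could target an SNS topic or a Lambda function so the right team is notified while there is still time to approve the renewal email.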

Conclusion

In this blog post, we outlined the changes coming to the email validation process when requesting and renewing certificates from ACM. We also shared the steps that you can take to prepare for this change, including monitoring at least one of the five relevant email addresses for your domains. Remember that these changes only apply to certificates that use email validation, not certificates that use DNS validation. For more information about certificate management on AWS, see the ACM documentation or get started using ACM today in the AWS Management Console.

If you have questions, contact AWS Support or your technical account manager (TAM), or start a new thread on the AWS re:Post ACM Forum. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Lerna Ekmekcioglu

Lerna is a Senior Solutions Architect at AWS, where she helps customers build secure, scalable, and highly available workloads. She has over 19 years of platform engineering experience, including authentication systems, distributed caching, and multi-Region deployments using IaC and CI/CD. In her spare time, she enjoys hiking, sightseeing, and backyard astronomy.

Zach Miller

Zach is a Senior Worldwide Security Specialist Solutions Architect at AWS. His background is in data protection and security architecture, focused on a variety of security domains, including cryptography, secrets management, and data classification. Today, he’s focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Georgy Sebastian

Georgy is a Senior Software Development Engineer at AWS Cryptography. He has a background in secure system architecture, PKI management, and key distribution. In his free time, he’s an amateur gardener and tinkerer.

Khyati Makim

Khyati is a Software Development Manager at AWS Cryptography where she’s focused on building secure solutions with cryptographic best practices. She has 25 years of experience building secure solutions in financial and retail businesses in distributed systems. In her spare time, she enjoys traveling, reading, painting, and spending time with family.

How Transfer Family can help you build a secure, compliant managed file transfer solution

Post Syndicated from John Jamail original https://aws.amazon.com/blogs/security/how-transfer-family-can-help-you-build-a-secure-compliant-managed-file-transfer-solution/

Building and maintaining a secure, compliant managed file transfer (MFT) solution to securely send and receive files inside and outside of your organization can be challenging. Working with a competent, vigilant, and diligent MFT vendor can help you address this challenge. In this blog post, I will share how AWS Transfer Family can help you in that process, and I’ll cover five ways to use the security features of Transfer Family to get the most out of this service. AWS Transfer Family is a fully managed service for file transfers over SFTP, AS2, FTPS, and FTP for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS).

Benefits of building your MFT on top of Transfer Family

As outlined in the AWS Shared Responsibility Model, security and compliance are a shared responsibility between you and Transfer Family. This shared model can help relieve your operational burden because AWS operates, manages, and controls the components from the application, host operating system, and virtualization layer down to the physical security of the facilities in which the service operates. You are responsible for the management and configuration of your Transfer Family server and the associated applications outside of Transfer Family.

AWS follows industry best practices, such as automated patch management and continuous third-party penetration testing, to enhance the security of Transfer Family. This third-party validation and the compliance of Transfer Family with various regulatory regimes (such as SOC, PCI, HIPAA, and FedRAMP) integrate with your organization’s larger secure, compliant architecture.

One example of a customer who benefited from using Transfer Family is Regeneron. Due to their needs for regulatory compliance and security, and their desire for a scalable architecture, they moved their file transfer solution to Transfer Family. Through this move, they achieved their goal of a secure, compliant architecture and lowered their overall costs by 90%. They were also able to automate their malware scanning process for the intake of files. For more information on their success story, see How Regeneron built a secure and scalable file transfer service using AWS Transfer Family. There are many other documented success stories from customers, including Liberty Mutual, Discover, and OpenGamma.

Steps you can take to improve your security posture with Transfer Family

Although many of the security improvements that Transfer Family makes don’t require action on your part to use, you do need to take action on a few for compatibility reasons. In this section, I share five steps that you should take to adopt a secure, compliant architecture on Transfer Family.

  • Use strong encryption for data in transit — The first step in building a secure, compliant MFT service is to use strong encryption for data in transit. To help with this, Transfer Family now offers a strong set of available ciphers, including post-quantum ciphers that have been designed to resist decryption from future, fault-tolerant quantum computers that are still several years from production. Transfer Family will offer this capability by default for newly created servers after January 31, 2024. Existing customers can select this capability today by choosing the latest Transfer Family security policy. We review the choice of the default security policy for Transfer Family periodically to help ensure the best security posture for customers. For information about how to check what security policy you’re using and how to update it, see Security policies for AWS Transfer Family.
  • Protect your server’s host key — You need to make sure that a threat actor can’t impersonate your server by duplicating your server’s host key. Your server’s host key is a vital component of your secure, compliant architecture to help prevent man-in-the-middle style events where a threat actor can impersonate your server and convince your users to provide sensitive login information and data. To help prevent this possibility, we recommend that Transfer Family SFTP servers use at least a 4,096-bit RSA host key, or an ED25519 or ECDSA host key. As part of our shared responsibility model to help you build a secure global infrastructure, Transfer Family will increase its default host key size to 4,096 bits for newly created servers after January 31, 2024. To make key rotation as simple as possible for those with weaker keys, Transfer Family supports the use of multiple host keys of multiple types on a single server. However, you should deprecate the weaker keys as soon as possible because your server is only as secure as its weakest key. To learn what keys you’re using and how to rotate them, see Key management.
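The two steps above can be sketched with the boto3 Transfer Family client. The security policy name, server ID, and key file below are assumptions for illustration; check the Transfer Family documentation for the current policy names:

```python
# Sketch: decide whether a Transfer Family server needs a newer security
# policy, based on a describe_server response. The policy name below is
# an assumption; look up the latest in the Transfer Family documentation.
LATEST_POLICY = "TransferSecurityPolicy-2023-05"  # assumed latest name

def needs_policy_update(server: dict, latest: str = LATEST_POLICY) -> bool:
    """True if the server is not yet on the latest security policy."""
    return server.get("SecurityPolicyName") != latest

# With boto3 (credentials and a real ServerId required):
#   transfer = boto3.client("transfer")
#   server = transfer.describe_server(ServerId="s-1234567890abcdef0")["Server"]
#   if needs_policy_update(server):
#       transfer.update_server(ServerId="s-1234567890abcdef0",
#                              SecurityPolicyName=LATEST_POLICY)
#
# To rotate onto a stronger host key, generate one locally, for example:
#   ssh-keygen -t rsa -b 4096 -N "" -f transfer-host-key
# then import it alongside the existing key before retiring the weak one:
#   transfer.import_host_key(ServerId="s-1234567890abcdef0",
#                            HostKeyBody=open("transfer-host-key").read(),
#                            Description="4096-bit RSA rotation")
```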

The next three steps apply if you use the custom authentication option in Transfer Family, which helps you use your existing identity providers to lift and shift workflows onto Transfer Family.

  • Require both a password and a key — To increase your security posture, you can require the use of both a password and key to help protect your clients from password scanners and a threat actor that might have stolen their key. For details on how to view and configure this, see Create an SFTP-enabled server.
  • Use Base64 encoding for passwords — The next step to improve your security posture is to use or update your custom authentication templates to use Base64 encoding for your passwords. This allows for a wider variety of characters and makes it possible to create more complex passwords. In this way, you can be more inclusive of a global audience that might prefer to use different character sets for their passwords. A more diverse character set for your passwords also makes your passwords more difficult for a threat actor to guess and compromise. The example templates for Transfer Family make use of Base64 encoding for passwords. For more details on how to check and update your templates’ password encoding to use Base64, see Authenticating using an API Gateway method.
  • Set your API Gateway method’s authorizationType property to AWS_IAM — The final recommended step is to make sure that you set your API Gateway method’s authorizationType property to AWS_IAM to require that the caller submit the user’s credentials to be authenticated. With IAM authorization, you sign your requests with a signing key derived from your secret access key, instead of your secret access key itself, helping to ensure that authorization requests to your identity provider use AWS Signature Version 4. This provides an extra layer of protection for your secret access key. For details on how to set up AWS_IAM authorization, see Control access to an API with IAM permissions.
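As a minimal sketch of the last two steps, the helper below Base64-encodes a password, and the commented portion shows how a SigV4-signed request to the identity provider API could be built with botocore. The `Password` header name is an assumption for illustration; match it to your own custom authentication template:

```python
import base64

def encode_password(password: str) -> str:
    """Base64-encode a password for the custom identity provider template."""
    return base64.b64encode(password.encode("utf-8")).decode("ascii")

# Signing the API Gateway call with SigV4 for AWS_IAM authorization
# (requires botocore; header name and URL are illustrative assumptions):
#   from botocore.auth import SigV4Auth
#   from botocore.awsrequest import AWSRequest
#   from botocore.credentials import Credentials
#
#   req = AWSRequest(method="GET", url=idp_url,
#                    headers={"Password": encode_password("s3cret!")})
#   SigV4Auth(Credentials(access_key, secret_key),
#             "execute-api", region).add_auth(req)
#   # req.headers now includes the Authorization header that API Gateway
#   # validates when the method's authorizationType is AWS_IAM
```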

Conclusion

Transfer Family offers many benefits to help you build a secure, compliant MFT solution. By following the steps in this post, you can get the most out of Transfer Family to help protect your file transfers. As the requirements for a secure, compliant architecture for file transfers evolve and threats become more sophisticated, Transfer Family will continue to offer optimized solutions and provide actionable advice on how you can use them. For more information, see Security in AWS Transfer Family.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

John Jamail

John is the Head of Engineering for AWS Transfer Family. Prior to joining AWS, he spent eight years working in data security focused on security incident and event monitoring (SIEM), governance, risk, and compliance (GRC), and data loss prevention (DLP).

AWS successfully renews GSMA security certification for US East (Ohio) and Europe (Paris) Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-successfully-renews-gsma-security-certification-for-us-east-ohio-and-europe-paris-regions-2/

Amazon Web Services is pleased to announce that the AWS US East (Ohio) and Europe (Paris) Regions have been recertified through October 2024 by the GSM Association (GSMA) under its Security Accreditation Scheme for Subscription Management (SAS-SM) with scope Data Centre Operations and Management (DCOM).

The US East (Ohio) Region first obtained GSMA certification in September 2021, and the Europe (Paris) Region first obtained GSMA certification in October 2021. This renewal demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers. AWS customers who provide an embedded Universal Integrated Circuit Card (eUICC) for mobile devices can run their remote provisioning applications with confidence in the AWS Cloud in the GSMA-certified Regions.

For up-to-date information related to the certification, see the AWS Compliance Program page and choose GSMA under Europe, Middle East & Africa.

AWS was evaluated by independent third-party auditors that GSMA selected. The Certificate of Compliance that shows AWS achieved GSMA compliance status is available on the GSMA website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Janice Leung

Janice is a Security Audit Program Manager at Amazon Web Services, based in New York, U.S. She leads security audits across Europe, and she previously worked in security assurance and technology risk management in the financial industry for 12 years.

Tammy He

Tammy is a Security Assurance Europe Lead at Amazon Web Services, based in London, UK. She builds EMEA customer trust through security compliance and assurance, and she has over 7 years of experience in technology and cybersecurity audit in the financial industry.

Fall 2023 SOC reports now available in Spanish

Post Syndicated from Ryan Wilks original https://aws.amazon.com/blogs/security/fall-2023-soc-reports-now-available-in-spanish/

Spanish version »

We continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs at Amazon Web Services (AWS). We’re pleased to announce that the Fall 2023 System and Organization Controls (SOC) 1, SOC 2, and SOC 3 reports are now available in Spanish. These translated reports will help drive greater engagement and alignment with customer and regulatory requirements across Latin America and Spain. The reports cover the period October 1, 2022, to September 30, 2023. We extended the period of coverage to 12 months so that you have a full year of assurance from a single report.

The Spanish language version of the reports doesn’t contain the independent opinion issued by the auditors or the control test results, but you can find this information in the English language version. Stakeholders should use the English version as a complement to the Spanish version.

Translated SOC reports in Spanish are available to customers through AWS Artifact. Translated SOC reports in Spanish will be published twice a year, in alignment with the Fall and Spring reporting cycles.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.
 


Spanish version

Los informes SOC de Otoño de 2023 ahora están disponibles en español

Seguimos escuchando a nuestros clientes, reguladores y partes interesadas para comprender sus necesidades en relación con los programas de auditoría, garantía, certificación y atestación en Amazon Web Services (AWS). Nos complace anunciar que los informes SOC 1, SOC 2 y SOC 3 de AWS de Otoño de 2023 ya están disponibles en español. Estos informes traducidos ayudarán a impulsar un mayor compromiso y alineación con los requisitos regulatorios y de los clientes en las regiones de América Latina y España. Los informes cubren el período del 1 de octubre de 2022 al 30 de septiembre de 2023. Ampliamos el período de cobertura a 12 meses para que tenga un año completo de garantía con un solo informe.

La versión en español de los informes no contiene la opinión independiente emitida por los auditores ni los resultados de las pruebas de controles, pero puede encontrar esta información en la versión en inglés. Las partes interesadas deben utilizar la versión en inglés como complemento de la versión en español.

Los informes SOC traducidos en español están disponibles en AWS Artifact. Los informes SOC traducidos en español se publicarán dos veces al año según los ciclos de informes de Otoño y Primavera.

Valoramos sus comentarios y preguntas; no dude en ponerse en contacto con nuestro equipo o enviarnos sus comentarios sobre esta publicación a través de nuestra página Contáctenos.

Si tiene comentarios sobre esta publicación, envíelos en la sección Comentarios a continuación.

Ryan Wilks

Ryan is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS and has 13 years of experience in information security. Ryan has a bachelor of arts degree from Rutgers University and holds ITIL, CISM, and CISA certifications.

Nathan Samuel

Nathan is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Nathan has a bachelor of commerce degree from the University of the Witwatersrand, South Africa, and has over 20 years of experience in security assurance. He holds the CISA, CRISC, CGEIT, CISM, CDPSE, and Certified Internal Auditor certifications.

Brownell Combs

Brownell is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Brownell holds a master’s degree in computer science from the University of Virginia and a bachelor’s degree in computer science from Centre College. He has over 20 years of experience in information technology risk management and holds CISSP, CISA, and CRISC certifications.

Paul Hong

Paul is a Compliance Program Manager at AWS. He leads multiple security, compliance, and training initiatives within AWS, and has ten years of experience in security assurance. Paul is a CISSP, CEH, and CPA, and holds a master’s degree in accounting information systems and a bachelor’s degree in business administration from James Madison University, Virginia.

Rodrigo Fiuza

Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo previously worked in risk management, security assurance, and technology audits for 12 years.

AWS Security Profile: Arynn Crow, Sr. Manager for AWS User AuthN

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-arynn-crow-sr-manager-for-aws-user-authn/

In the AWS Security Profile series, I interview some of the humans who work in AWS Security and help keep our customers safe and secure. In this profile, I interviewed Arynn Crow, senior manager for AWS User AuthN in AWS Identity.


How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over 10 years now, and AWS for three of those years. I lead a team of product managers in AWS Identity who define the strategy for our user authentication related services. This includes internal and external services that handle AWS sign-in, account creation, threat mitigation, and underlying authentication components that support other AWS services. It’s safe to say that I’m thinking about something different nearly every day, which keeps it fun.

How do you explain your job to non-technical friends and family?

I tell people that my job is about figuring out how to make sure that people are who they say they are online. If they want to know a bit more, sometimes I will relate this to examples they’re increasingly likely to encounter in their everyday lives—getting text or email messages for additional security when they try to sign in to their favorite website, or using their fingerprint or facial scan to sign in instead of entering a password. There’s a lot more to identity and authentication, of course, but this usually gets the point across!

You haven’t always been in security. Tell me a little bit about your journey and how you got started in this space?

More than 10 years ago now, I started in one of our call centers as a temporary customer service agent. I was handling Kindle support calls (this was back when our Kindles still had physical keyboards on them, and “Alexa” wasn’t even part of our lexicon yet). After New Year’s 2013, I was converted to a full-time employee and resumed my college education—I earned both of my degrees (a BA in International Affairs, and MA in political science) while working at Amazon. Over the next few years, I moved into different positions including our Back Office team, a Kindle taskforce role supporting the launch of a new service, and Executive Customer Relations. Throughout these roles, I continued to manage projects related to anti-abuse and security. I got a lot of fulfillment out of these projects—protecting our customers, employees, and business against fraud and data loss is very gratifying work. When a position opened up in our Customer Service Security team, I got the role thanks in part to my prior experience working with that team to deliver security solutions within our operations centers.

After that, things moved fast—I started first with a project on account recovery and access control for our internal workforce, and continuously expanded my portfolio into increasingly broad and more technical projects that all related to what I now know is the field of Identity and Access Management. Eventually, I started leading our identity strategy for customer service as a whole, including our internal authentication and access management as well as external customer authentication to our call centers. I also began learning about and engaging more with the security and identity community that existed outside of Amazon by attending conferences and getting involved with organizations working on standards development like the FIDO Alliance. Moving to AWS Identity a few years later was an obvious next step to gain exposure to broader applications of identity.

What advice do you have for people who want to get into security but don’t have the traditional background?

First, it can be hard. This journey wasn’t easy for me, and I’m still working to learn more every day. I want to say that because if someone is having trouble landing their first security job, or feeling like they still don’t “fit” at first when they do get the job, they should know it doesn’t mean they’re failing. There are a lot of inspiring stories out there about people who seemingly naturally segued into this field from other projects and work, but there are just as many people working very hard to find their footing. Everyone doubts themselves sometimes. Don’t let it hold you back.

Next for the practical advice, whatever you’re doing now, there are probably opportunities to begin looking at your space with a security lens, and start helping wherever you find problems to address or processes to improve by bringing them to your security teams. This will help your organization while also helping you build relationships. Be insatiably curious! Cybersecurity is community-oriented, and I find that people in this field are very passionate about what we do. Many people I met were excited that I was interested in learning about what they do and how they do it. Sometimes, they’d agree to take a couple hours with me each month for me to ask questions about how things worked, and narrow down what resources were the best use of my time.

Finally, there are a lot of resources for learning. We have highly competent, successful security professionals that learned on the job and don’t hold a roster of certifications, so I don’t think these are essential for success. But, I do think these programs can be beneficial to familiarize you with basic concepts and give you access to a common language. Various certification and training courses exist, from basic, free computer science courses online to security-specific ones like CISSP, SANS, COMPTIA Security+, and CIDPro, to name just a few. AWS offers AWS-specific cloud security training, too, like our Ramp-Up Guide. You don’t have to learn to code beautifully to succeed in security, but I think developing a working understanding of systems and principles will help build credibility and extract deeper learning out of experiences you have.

In your opinion, why is it important to have people with different backgrounds working in security?

Our backgrounds color the way we think about and approach problems, and considering all of these different approaches helps make us well-rounded. And particularly in the current context, in which women and marginalized communities are underrepresented in STEM, expanding our thinking about what skills make a good security practitioner makes room for more people at the table while giving us a more comprehensive toolkit to tackle our toughest problems. As for myself, I apply my training in political science. Security sometimes looks like a series of technical challenges and solutions, but it’s interwoven with a complex array of regulatory and social considerations, too—this makes the systems-based and abstract thinking I honed in my education useful. I know other folks who came to identity from social science, mathematics, and biology backgrounds who feel the same about skills learned from their respective fields.

Pivoting a bit, what’s something that you’re working on right now that you’re excited about?

It’s a very interesting time to be working on authentication. Many people who aren’t working in enterprises or regulated industries are still hesitant to adopt controls like multi-factor authentication. And beyond MFA, organizations like NIST and CISA are emphasizing the importance of phishing-resistant MFA. So, at the same time we’re continuously working to innovate in our MFA and other authentication offerings to customers, we’re collaborating with the rest of the industry to advance technologies for strong authentication and their adoption across sectors. I represent Amazon to the FIDO Alliance, which is an industry association that supports the development of a set of protocols collectively known as FIDO2 for strong, phishing-resistant authentication. With FIDO and its various member companies, we’re working to increase the usability, awareness, and adoption of FIDO2 security keys and passkeys, which are a newer implementation of FIDO2 that improves ease of use by enabling customers to use phishing-resistant keys across devices and platforms.

In your opinion, what is the coolest thing happening in identity right now?

What I think is the most important thing happening in identity is the convergence of digital and “traditional” identities. The industry is working through challenging questions with emerging technology right now to bring forth innovation balanced with concern for equity, privacy, and sustainability. Ease of use and improved security for users, as well as abuse prevention for businesses, are driving the conversion of real-life identities and credentials (such as peoples’ driver’s licenses, as one example) to a digital format, such as digital driver’s licenses, wallets, and emerging verifiable credentials.

What are you most proud of in your career?

I’m most grateful for the opportunities I’ve had to help define the next chapter of the AWS account protection strategy. Some of our work also translates to features we get to ship to customers, like when we extended support for multiple MFA devices for AWS Identity and Access Management (IAM) late last year, and this year we announced that in 2024 we will require MFA when customers sign in to the AWS Management Console. Seeing how excited people were for a security feature was really awesome. Account protection has always been important, but this is especially true in the years following the COVID-19 outbreak when we saw a rapid acceleration of resources going digital. This kind of work definitely isn’t a one-person show, and as fulfilling as it is to see the impact I have here, what I’m really proud of is that I get to work with and learn from so many really smart, competent, and kind team members that are just as passionate about this space as I am.

If you were to do anything other than security, what would you want to do?

Before I discovered my interest for security, I was trying to decide if I would continue on from my master’s program in political science to do a PhD in either political science or public health. Towards the end of my degree program, I became really interested in how research-driven public policy could drive improvements in maternal and infant health outcomes in areas with acute opioid-related health crises, which is an ongoing struggle for my home place. I’m still very invested in that topic and try to keep on top of the latest research—I could easily see myself moving back towards that if I ever decide it’s time to close this chapter.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.

Arynn Crow

Arynn Crow is a Manager of Product Management for AWS Identity. Arynn started at Amazon in 2012, trying out many different roles over the years before finding her happy place in security and identity in 2017. Arynn now leads the product team responsible for developing user authentication services at AWS.

New report released – Centralized Trust for Decentralized Uses: Revisiting Private Certificate Authorities

Post Syndicated from Katie Collins original https://aws.amazon.com/blogs/security/new_report_released_centralized_trust_for_decentralized_uses/

Amazon Web Services (AWS) is excited to announce the availability of a new sponsored report from S&P Global Market Intelligence 451 Research, Centralized Trust for Decentralized Uses: Revisiting Private Certificate Authorities. We heard from customers actively seeking centralized management solutions for multi-cloud environments and worked with 451 Research, a technology research solution that provides a holistic view of opportunities and risks across the enterprise technology landscape, to dive into this topic.

In the report, 451 Research highlights the need for centralized trust as organizations build applications across multiple cloud providers, local infrastructure, and distributed hosting environments. For security practitioners familiar with certificate authorities (CAs), this report looks at some of the wider business implications of using cryptographic certificates to establish trust in highly decentralized and dynamic environments.

451 Research explains how decentralized architectures, including technologies such as Kubernetes, service meshes, and Internet of Things (IoT) networks, are fueling the need to modernize the legacy approach to CAs. The growing adoption of cloud native solutions from a multitude of vendors leads to a greater decentralization of applications. According to the survey Voice of the Enterprise: DevOps, Developer Experience 2023, 59% of respondent companies have architected more than 50% of their applications with cloud-native technologies.

Organizations and developers can use the report to consider the following:

For more information, read the full report Centralized Trust for Decentralized Uses: Revisiting Private Certificate Authorities. You can also learn more about how to create private certificates to identify resources and protect data with AWS Private Certificate Authority.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Katie Collins

Katie is a Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

Rushir Patel

Rushir is a Senior Security Specialist at AWS, focused on data protection and cryptography services. His goal is to make complex topics simple for customers and help them adopt better security practices. Before joining AWS, he worked in security product management at IBM and Bank of America.

Strengthening customer third-party due diligence with renewed AWS CyberGRX assessment

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/strengthening-customer-third-party-due-diligence-with-renewed-aws-cybergrx-assessment/

Amazon Web Services (AWS) is pleased to announce the successful renewal of the AWS CyberGRX cyber risk assessment report. This third-party validated report helps customers perform effective cloud supplier due diligence on AWS and enhances customers’ third-party risk management process.

With the increase in adoption of cloud products and services across multiple sectors and industries, AWS has become a critical component of customers’ environments. Regulated customers are held to high standards by regulators and auditors when it comes to exercising effective due diligence on third parties.

Many customers use third-party cyber risk management (TPCRM) services such as CyberGRX to better manage risks from their evolving third-party environments and to drive operational efficiencies. To help with such efforts, AWS has completed the CyberGRX assessment of its security posture. CyberGRX security analysts perform the assessment and validate the results annually.

The CyberGRX assessment applies a dynamic approach to third-party risk assessment. This approach integrates advanced analytics, threat intelligence, and sophisticated risk models with vendors’ responses to provide an in-depth view of how a vendor’s security controls help protect against potential threats.

Vendor profiles are continuously updated as the risk level of cloud service providers changes, or as AWS updates its security posture and controls. This approach eliminates outdated static spreadsheets for third-party risk assessments, in which the risk matrices are not updated in near real time.

In addition, AWS customers can use the CyberGRX Framework Mapper to map AWS assessment controls and responses to well-known industry standards and frameworks, such as National Institute of Standards and Technology (NIST) 800-53, NIST Cybersecurity Framework, International Organization for Standardization (ISO) 27001, Payment Card Industry Data Security Standard (PCI DSS), and the U.S. Health Insurance Portability and Accountability Act (HIPAA). This mapping can reduce customers’ third-party supplier due-diligence burden.

Customers can access the AWS CyberGRX report at no additional cost. Customers can request access to the report by completing an access request form, available on the AWS CyberGRX page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is an Audit Lead for Canada. He has experience leading audits, attestations, certifications, and assessments across the Americas. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, and utilities industries.

Federate IAM-based single sign-on to Amazon Redshift role-based access control with Okta

Post Syndicated from Debu Panda original https://aws.amazon.com/blogs/big-data/federate-iam-based-single-sign-on-to-amazon-redshift-role-based-access-control-with-okta/

Amazon Redshift accelerates your time to insights with fast, easy, and secure cloud data warehousing at scale. Tens of thousands of customers rely on Amazon Redshift to analyze exabytes of data and run complex analytical queries.

You can use your preferred SQL client to analyze data in an Amazon Redshift data warehouse. By connecting through an identity provider (IdP) with single sign-on (SSO) credentials, users reuse their existing credentials and avoid additional user setup and configuration. With role-based access control (RBAC), you can simplify the management of user privileges and database permissions in Amazon Redshift. You can also use Redshift database roles to define a set of elevated permissions, such as for a system monitor or database administrator.

Using AWS Identity and Access Management (IAM) with RBAC, organizations can simplify user management because they no longer need to create users and map them to database roles manually. You can define the mapped database roles as a principal tag on the IdP groups or IAM role, so users who are members of those IdP groups are granted the matching Redshift database roles automatically.

Earlier in 2023, we launched support for Okta integration with Amazon Redshift Serverless using database roles. In this post, we focus on Okta as the IdP and provide step-by-step guidance to integrate a Redshift provisioned cluster with Okta using the Redshift Query Editor v2 and SQL clients such as SQL Workbench/J. You can use the same mechanism with other IdPs, such as Azure Active Directory or Ping, and with any application or tool that uses the Amazon JDBC, ODBC, or Python driver.

We also recently announced Amazon Redshift integration with AWS IAM Identity Center, which supports trusted identity propagation and allows you to use third-party IdPs such as Microsoft Entra ID (Azure AD), Okta, Ping, and OneLogin. This integration simplifies authentication and authorization for Amazon Redshift users in Query Editor V2 or Amazon QuickSight, making it easier for them to securely access your data warehouse. AWS IAM Identity Center offers automatic user and group provisioning from Okta by using the System for Cross-domain Identity Management (SCIM) 2.0 protocol, which keeps information synchronized between the two services and up to date in AWS IAM Identity Center. To learn more about setting up SSO this way, refer to the blog post Integrate Okta with Amazon Redshift Query Editor V2 using AWS IAM Identity Center for seamless Single Sign-On.

If you are interested in using IAM-based single sign-on with Amazon Redshift database roles, read on.

Solution overview

The following diagram illustrates the authentication flow of Okta with a Redshift provisioned cluster using federated IAM roles and automatic database role mapping.

Architecture Diagram

The workflow contains the following steps:

  1. Either the user chooses an IdP app in their browser, or the SQL client initiates a user authentication request to the IdP (Okta).
  2. Upon a successful authentication, Okta submits a request to the AWS federation endpoint with a SAML assertion containing the principal tags.
  3. The AWS federation endpoint validates the SAML assertion and invokes the AWS Security Token Service (AWS STS) API AssumeRoleWithSAML. The SAML assertion contains the IdP user and group information that is stored in the RedshiftDbUser and RedshiftDbRoles principal tags, respectively. Temporary IAM credentials are returned to the SQL client or, if using the Query Editor v2, the user’s browser is redirected to the Query Editor v2 console using the temporary IAM credentials.
  4. The temporary IAM credentials are used by the SQL client or Query Editor v2 to call the Redshift API GetClusterCredentialsWithIAM. This API uses the principal tags to determine the user and database roles that the user belongs to. An associated database user is created if the user is signing in for the first time and is granted the matching database roles automatically. A temporary password is returned to the SQL client.
  5. Using the database user and temporary password, the SQL client or Query Editor v2 connects to Amazon Redshift. Upon login, the user is authorized based on the Redshift database roles that were assigned in Step 4.
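The handoff in steps 2 through 4 hinges on the two principal tags carried inside the SAML assertion. The following sketch is a toy example, not the federation endpoint's actual logic; the assertion fragment and the user/role values are invented for illustration, while the attribute names follow the PrincipalTag convention used for RedshiftDbUser and RedshiftDbRoles:

```python
import xml.etree.ElementTree as ET

# Minimal, unsigned assertion fragment for illustration only; a real SAML
# assertion is signed and carries many more attributes.
SAML_FRAGMENT = """\
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <AttributeStatement>
    <Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:RedshiftDbUser">
      <AttributeValue>ethan</AttributeValue>
    </Attribute>
    <Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:RedshiftDbRoles">
      <AttributeValue>sales</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>
"""

SAML_NS = "{urn:oasis:names:tc:SAML:2.0:assertion}"
TAG_PREFIX = "https://aws.amazon.com/SAML/Attributes/PrincipalTag:"

def extract_principal_tags(assertion_xml: str) -> dict:
    """Return {tag_name: value} for each PrincipalTag attribute."""
    root = ET.fromstring(assertion_xml)
    tags = {}
    for attr in root.iter(SAML_NS + "Attribute"):
        name = attr.get("Name", "")
        if name.startswith(TAG_PREFIX):
            value = attr.find(SAML_NS + "AttributeValue")
            tags[name[len(TAG_PREFIX):]] = value.text
    return tags

print(extract_principal_tags(SAML_FRAGMENT))
# {'RedshiftDbUser': 'ethan', 'RedshiftDbRoles': 'sales'}
```

The tags extracted this way are what GetClusterCredentialsWithIAM later uses to pick the database user and roles for the session.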

Prerequisites

You need the following prerequisites to set up this solution:

Connect with a Redshift provisioned cluster as a federated user using Query Editor v2

To connect using Query Editor v2, complete the following steps:

  1. Follow all the steps described in the sections Set up your Okta application and Set up AWS configuration in the following post.
  2. For the Amazon Redshift access IAM policy, replace the policy with the following JSON to use the GetClusterCredentialsWithIAM API:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "redshift:GetClusterCredentialsWithIAM",
                "Resource": "arn:aws:redshift:us-west-2:123456789012:dbname:redshift-cluster-1/dev"
            }
        ]
    }

Now you’re ready to connect to your Redshift provisioned cluster using Query Editor v2 and federated login.

  3. Use the SSO URL from Okta and log in to your Okta account with your user credentials. For this demo, we log in with user Ethan.
  4. In Query Editor v2, choose your Redshift provisioned cluster (right-click) and choose Create connection.
  5. For Authentication, select Temporary credentials using your IAM identity.
  6. For Database, enter the database name you want to connect to.
  7. Choose Create connection.
  8. Run the following command to validate that you are logged in as a federated user and also to get the list of roles associated with that user for the current session:
SELECT current_user,* FROM pg_get_session_roles() eff_ro(name name, roleid integer);

Because Ethan is part of the sales group and has been granted permissions to access tables in the sales_schema, he should be able to access those tables without any issues. However, if he tries to access tables in the finance_schema, he would receive a permission denied error because Ethan is not part of the finance group in Okta.


Connect with a Redshift provisioned cluster as a federated user via a third-party client

To connect as a federated user via a third-party client, complete the following steps:

  1. Follow steps 1 and 2 described in the previous section (Connect with a Redshift provisioned cluster as a federated user using Query Editor v2).
  2. Use Redshift JDBC driver v2.1.0.18 or later, which supports authentication with IAM group federation. For the URL, enter jdbc:redshift:iam://<cluster endpoint>:<port>/<database name>?groupfederation=true. For example, jdbc:redshift:iam://redshift-cluster-1.abdef0abc0ab.us-west-2.redshift.amazonaws.com:5439/dev?groupfederation=true

In the preceding URL, groupfederation is a mandatory parameter that allows you to authenticate with IAM credentials on the Redshift provisioned cluster. Without the groupfederation parameter, the driver will not use Redshift database roles.

  3. For Username and Password, enter your Okta credentials.


  4. To set up extended properties, follow steps 4–9 in the section Configure the SQL client (SQL Workbench/J) in the following post.

User Ethan will be able to access the sales_schema tables. If Ethan tries to access the tables in the finance_schema, he will get a permission denied error.
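As a quick sanity check on the JDBC URL format used in step 2, it can be assembled programmatically. This is a hypothetical helper, not part of the Redshift JDBC driver; the cluster endpoint is the placeholder from the example:

```python
def build_redshift_iam_url(endpoint: str, port: int, database: str,
                           group_federation: bool = True) -> str:
    """Assemble a JDBC IAM URL for a Redshift provisioned cluster.
    groupfederation=true tells the driver to use IAM group federation,
    which enables Redshift database roles."""
    url = f"jdbc:redshift:iam://{endpoint}:{port}/{database}"
    if group_federation:
        url += "?groupfederation=true"
    return url

print(build_redshift_iam_url(
    "redshift-cluster-1.abdef0abc0ab.us-west-2.redshift.amazonaws.com",
    5439,
    "dev",
))
# jdbc:redshift:iam://redshift-cluster-1.abdef0abc0ab.us-west-2.redshift.amazonaws.com:5439/dev?groupfederation=true
```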


Troubleshooting

If your connection didn’t work, consider the following:

  • Enable logging in the driver. For instructions, see Configure logging.
  • Make sure to use the latest Amazon Redshift JDBC driver version.
  • If you’re getting errors while setting up the application on Okta, make sure you have admin access.
  • If you can authenticate via the SQL client but get a permission issue or can’t see objects, grant the relevant permission to the role.

Clean up

When you’re done testing the solution, clean up the resources to avoid incurring future charges:

  1. Delete the Redshift provisioned cluster.
  2. Delete the IAM roles, IAM IdPs, and IAM policies.

Conclusion

In this post, we provided step-by-step instructions to integrate a Redshift provisioned cluster with Okta using the Redshift Query Editor v2 and SQL Workbench/J with the help of federated IAM roles and automatic database role mapping. You can use a similar setup with other SQL clients (such as DBeaver or DataGrip). We also showed how Okta group membership is mapped automatically with Redshift provisioned cluster roles to use role-based authentication seamlessly.

If you have any feedback or questions, please leave them in the comments.


About the Authors

Debu Panda is a Senior Manager, Product Management at AWS. He is an industry leader in analytics, application platform, and database technologies, and has more than 25 years of experience in the IT world.

Ranjan Burman is an Analytics Specialist Solutions Architect at AWS. He specializes in Amazon Redshift and helps customers build scalable analytical solutions. He has more than 16 years of experience in different database and data warehousing technologies. He is passionate about automating and solving customer problems with cloud solutions.

Maneesh Sharma is a Senior Database Engineer at AWS with more than a decade of experience designing and implementing large-scale data warehouse and analytics solutions. He collaborates with various Amazon Redshift Partners and customers to drive better integration.

2023 ISO and CSA STAR certificates now available with ISO 27001 transition from 2013 to 2022 version

Post Syndicated from Atulsing Patil original https://aws.amazon.com/blogs/security/2023-iso-and-csa-star-certificates-now-available-with-iso-27001-transition-from-2013-to-2022-version/

Amazon Web Services (AWS) successfully completed a surveillance audit for ISO 9001:2015, 27001:2022, 27017:2015, 27018:2019, 27701:2019, 20000-1:2018, and 22301:2019, and Cloud Security Alliance (CSA) STAR Cloud Controls Matrix (CCM) v4.0. Ernst and Young Certify Point auditors conducted the audit and reissued the certificates on Nov 22, 2023. The objective of the audit was to assess the level of compliance for AWS services and Regions with the requirements of the applicable international standards.

For ISO 27001, we updated our ISO 27001:2013 certification to the 27001:2022 version.

For ISO 9001:2015, 27001:2022, 27017:2015, 27018:2019, 27701:2019, and 22301:2019, we added seven additional AWS services to the scope of this surveillance audit since the last certification issued on May 23, 2023. The seven additional services are:

For 20000-1:2018, we added 65 additional services (including the seven noted previously) to the scope of this surveillance audit since the last certification, issued on Dec 13, 2022.

For a full list of AWS services that are certified under ISO and CSA STAR, please see the AWS ISO and CSA STAR Certified page. Customers can also access the certifications in the AWS Management Console through AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Atulsing Patil

Atulsing is a Compliance Program Manager at AWS. He has 27 years of consulting experience in information technology and information security management. Atulsing holds a Master of Science in Electronics degree and professional certifications such as CCSP, CISSP, CISM, CDPSE, ISO 27001 Lead Auditor, HITRUST CSF, Archer Certified Consultant, and AWS CCP.

Nimesh Ravasa

Nimesh is a Compliance Program Manager at AWS, where he leads multiple security and privacy initiatives. Nimesh has 15 years of experience in information security and holds CISSP, CDPSE, CISA, PMP, CSX, AWS Solutions Architect – Associate, and AWS Security Specialty certifications.

Fall 2023 SOC reports now available with 171 services in scope

Post Syndicated from Ryan Wilks original https://aws.amazon.com/blogs/security/fall-2023-soc-reports-now-available-with-171-services-in-scope/

At Amazon Web Services (AWS), we’re committed to providing our customers with continued assurance over the security, availability, confidentiality, and privacy of the AWS control environment.

We’re proud to deliver the Fall 2023 System and Organization Controls (SOC) 1, 2, and 3 reports to support your confidence in AWS services. The reports cover the period October 1, 2022, to September 30, 2023. We extended the period of coverage to 12 months so that you have a full year of assurance from a single report. We also updated the associated infrastructure supporting our in-scope products and services to reflect new edge locations, AWS Wavelength zones, and AWS Local Zones.

The SOC 2 report includes the Security, Availability, Confidentiality, and Privacy Trust Service Criteria that cover both the design and operating effectiveness of controls over a period of time. The SOC 2 Privacy Trust Service Criteria, developed by the American Institute of Certified Public Accountants (AICPA), establish the criteria for evaluating controls over how personal information is collected, used, retained, disclosed, and disposed of. For more information about our privacy commitments supporting the SOC 2 Type 2 report, see the AWS Customer Agreement.

The scope of the Fall 2023 SOC 2 Type 2 report includes information about how we handle the content that you upload to AWS, and how we protect that content across the services and locations that are in scope for the latest AWS SOC reports.

The Fall 2023 SOC reports include an additional 13 services in scope, for a total of 171 services. See the full list on our Services in Scope by Compliance Program page.

Here are the 13 additional services in scope for the Fall 2023 SOC reports:

Customers can download the Fall 2023 SOC reports through AWS Artifact in the AWS Management Console. You can also download the SOC 3 report as a PDF file from AWS.

AWS strives to bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. If there are additional AWS services that you would like us to add to the scope of our SOC reports (or other compliance programs), reach out to your AWS representatives.

We value your feedback and questions. Feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ryan Wilks

Ryan is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Ryan has 13 years of experience in information security. He has a bachelor of arts degree from Rutgers University and holds ITIL, CISM, and CISA certifications.

Nathan Samuel

Nathan is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Nathan has a bachelor of commerce degree from the University of the Witwatersrand, South Africa, and has over 20 years of experience in security assurance. He holds the CISA, CRISC, CGEIT, CISM, CDPSE, and Certified Internal Auditor certifications.

Brownell Combs

Brownell is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Brownell holds a master of science degree in computer science from the University of Virginia and a bachelor of science degree in computer science from Centre College. He has over 20 years of experience in IT risk management and holds CISSP, CISA, and CRISC certifications.

Paul Hong

Paul is a Compliance Program Manager at AWS. He leads multiple security, compliance, and training initiatives within AWS, and has 10 years of experience in security assurance. Paul holds CISSP, CEH, and CPA certifications, and holds a master’s degree in accounting information systems and a bachelor’s degree in business administration from James Madison University, Virginia.

Improve performance of workloads containing repetitive scan filters with multidimensional data layout sort keys in Amazon Redshift

Post Syndicated from Yanzhu Ji original https://aws.amazon.com/blogs/big-data/improve-performance-of-workloads-containing-repetitive-scan-filters-with-multidimensional-data-layout-sort-keys-in-amazon-redshift/

Amazon Redshift, the most widely used cloud data warehouse, has evolved significantly to meet the performance requirements of the most demanding workloads. This post covers one such new feature—the multidimensional data layout sort key.

Amazon Redshift now improves your query performance by supporting multidimensional data layout sort keys, a new type of sort key that sorts a table’s data by filter predicates instead of by physical columns of the table. Multidimensional data layout sort keys can significantly improve the performance of table scans, especially when your query workload contains repetitive scan filters.

Amazon Redshift already provides the capability of automatic table optimization (ATO), which automatically optimizes the design of tables by applying sort and distribution keys without the need for administrator intervention. In this post, we introduce multidimensional data layout sort keys as an additional capability offered by ATO and fortified by Amazon Redshift’s sort key advisor algorithm.

Multidimensional data layout sort keys

When you define a table with the AUTO sort key, Amazon Redshift ATO will analyze your query history and automatically select either a single-column sort key or multidimensional data layout sort key for your table, based on which option is better for your workload. When multidimensional data layout is selected, Amazon Redshift will construct a multidimensional sort function that co-locates rows that are typically accessed by the same queries, and the sort function is subsequently used during query runs to skip data blocks and even skip scanning the individual predicate columns.

Consider the following user query, which is a dominant query pattern in the user’s workload:

SELECT season, sum(metric2) AS "__measure__0"
FROM titles
WHERE lower(subregion) like '%United States%'
GROUP BY 1
ORDER BY 1;

Amazon Redshift stores data for each column in 1 MB disk blocks and stores the minimum and maximum values in each block as part of the table’s metadata. If a query uses a range-restricted predicate, Amazon Redshift can use the minimum and maximum values to rapidly skip over large numbers of blocks during table scans. However, this query’s filter on the subregion column can’t be used to determine which blocks to skip based on minimum and maximum values, and as a result, Amazon Redshift scans all rows from the titles table:

SELECT table_name, input_rows, step_attribute
FROM sys_query_detail
WHERE query_id = 123456789;

When the user’s query was run with titles using a single-column sort key on subregion, the result of the preceding query is as follows:

  table_name | input_rows | step_attribute
-------------+------------+---------------
  titles     | 2164081640 | 
(1 rows)

This shows that the table scan read 2,164,081,640 rows.
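The block-skipping behavior described above can be illustrated with a toy model (invented values, not Redshift internals): each block keeps min/max metadata, which prunes whole blocks for a range predicate but is useless for a '%substring%' filter like the one on subregion:

```python
# Toy zone maps: each "1 MB block" records the min and max of one column.
blocks = [
    {"min": "Argentina", "max": "Egypt",
     "rows": ["Argentina", "Brazil", "Egypt"]},
    {"min": "France", "max": "Mexico",
     "rows": ["France", "Japan", "Mexico"]},
    {"min": "Peru", "max": "United States",
     "rows": ["Peru", "Spain", "United States"]},
]

def blocks_to_scan_range(blocks, lo, hi):
    """Range predicate lo <= x <= hi: min/max metadata prunes whole blocks."""
    return [i for i, b in enumerate(blocks)
            if not (b["max"] < lo or b["min"] > hi)]

def blocks_to_scan_like(blocks, substring):
    """'%substring%' predicate: min/max says nothing about containment,
    so every block must be read."""
    return list(range(len(blocks)))

print(blocks_to_scan_range(blocks, "France", "Japan"))  # [1]
print(blocks_to_scan_like(blocks, "United States"))     # [0, 1, 2]
```

This is why the single-column sort key on subregion cannot help the LIKE filter, and why co-locating matching rows into a dedicated region is needed instead.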

To improve scans on the titles table, Amazon Redshift might automatically decide to use a multidimensional data layout sort key. All rows that satisfy the lower(subregion) like '%United States%' predicate would be co-located to a dedicated region of the table, and therefore Amazon Redshift will only scan data blocks that satisfy the predicate.

When the user’s query is run with titles using a multidimensional data layout sort key that includes lower(subregion) like '%United States%' as a predicate, the result of the sys_query_detail query is as follows:

  table_name | input_rows | step_attribute
-------------+------------+---------------
  titles     | 152324046  | multi-dimensional
(1 rows)

This shows that the table scan read 152,324,046 rows, which is only 7% of the original, and it used the multidimensional data layout sort key.

Note that this example uses a single query to showcase the multidimensional data layout feature, but Amazon Redshift will consider all the queries running against the table and can create multiple regions to satisfy the most commonly run predicates.

Let’s take another example, with more complex predicates and multiple queries this time.

Imagine having a table items (cost int, available int, demand int) with four rows as shown in the following example.

#id  cost  available  demand
  1     4          3       3
  2     2         23       6
  3     5          4       5
  4     1          1       2

Your dominant workload consists of two queries:

  • 70% queries pattern:
    select * from items where cost > 3 and available < demand

  • 20% queries pattern:
    select avg(cost) from items where available < demand

With traditional sorting techniques, you might choose to sort the table over the cost column, such that the evaluation of cost > 3 will benefit from the sort. So, the items table after sorting using a single cost column will look like the following.

#id  cost  available  demand
Region #1, with cost <= 3:
  4     1          1       2
  2     2         23       6
Region #2, with cost > 3:
  1     4          3       3
  3     5          4       5

By using this traditional sort, we can immediately exclude the top two rows, with ID 4 and ID 2, because they don’t satisfy cost > 3.

On the other hand, with a multidimensional data layout sort key, the table will be sorted based on a combination of the two commonly occurring predicates in the user’s workload, which are cost > 3 and available < demand. As a result, the table’s rows are sorted into four regions.

#id  cost  available  demand
Region #1, with cost <= 3 and available < demand:
  4     1          1       2
Region #2, with cost <= 3 and available >= demand:
  2     2         23       6
Region #3, with cost > 3 and available < demand:
  3     5          4       5
Region #4, with cost > 3 and available >= demand:
  1     4          3       3

This concept is even more powerful when applied to entire blocks instead of single rows, when applied to complex predicates that use operators not suitable for traditional sorting techniques (such as like), and when applied to more than two predicates.
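The four-region layout above can be reproduced with a short sketch (illustrative only; the actual sort function is constructed by ATO and operates on blocks, not rows): each workload predicate contributes one bit, and the combined bits select the region:

```python
# The four sample rows from the items table (toy data from the example above).
rows = [
    {"id": 1, "cost": 4, "available": 3,  "demand": 3},
    {"id": 2, "cost": 2, "available": 23, "demand": 6},
    {"id": 3, "cost": 5, "available": 4,  "demand": 5},
    {"id": 4, "cost": 1, "available": 1,  "demand": 2},
]

def region(row) -> int:
    """0-based region number matching the table above: the high bit is
    cost > 3, the low bit is available >= demand."""
    return 2 * (row["cost"] > 3) + (row["available"] >= row["demand"])

# Sorting by region co-locates rows that the workload predicates select.
print([r["id"] for r in sorted(rows, key=region)])
# [4, 2, 3, 1]
```

With this layout, the 70% query pattern (cost > 3 and available < demand) touches only Region #3, and the 20% pattern (available < demand) touches only Regions #1 and #3.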

System tables

The following Amazon Redshift system tables will show users if multidimensional data layouts are used on their tables and queries:

  • To determine if a particular table is using a multidimensional data layout sort key, you can check whether sortkey1 in svv_table_info is equal to AUTO(SORTKEY(padb_internal_mddl_key_col)).
  • To determine if a particular query uses multidimensional data layout to accelerate table scans, you can check step_attribute in the sys_query_detail view. The value will be equal to multi-dimensional if the table’s multidimensional data layout sort key was used during the scan.

Performance benchmarks

We performed internal benchmark testing for multiple workloads with repetitive scan filters and see that introducing multidimensional data layout sort keys produced the following results:

  • A 74% total runtime reduction compared to having no sort key.
  • A 40% total runtime reduction compared to having the best single-column sort key on each table.
  • An 80% reduction in total rows read from tables compared to having no sort key.
  • A 47% reduction in total rows read from tables compared to having the best single-column sort key on each table.

Feature comparison

With the introduction of multidimensional data layout sort keys, your tables can now be sorted by expressions based on the commonly occurring filter predicates in your workload. The following table provides a feature comparison for Amazon Redshift against two competitors.

Feature                                                             Amazon Redshift   Competitor A   Competitor B
Support for sorting on columns                                      Yes               Yes            Yes
Support for sorting by expression                                   Yes               Yes            No
Automatic column selection for sorting                              Yes               No             Yes
Automatic expression selection for sorting                          Yes               No             No
Automatic selection between column sorting or expression sorting    Yes               No             No
Automatic use of sorting properties for expressions during scans    Yes               No             No

Considerations

Keep in mind the following when using a multidimensional data layout:

  • Multidimensional data layout is enabled when you set your table as SORTKEY AUTO.
  • Amazon Redshift Advisor will automatically choose either a single-column sort key or multidimensional data layout for the table by analyzing your historical workload.
  • Amazon Redshift ATO adjusts the multidimensional data layout sorting results based on the manner in which ongoing queries interact with the workload.
  • Amazon Redshift ATO maintains multidimensional data layout sort keys the same way as it currently does for existing sort keys. Refer to Working with automatic table optimization for more details on ATO.
  • Multidimensional data layout sort keys will work with both provisioned clusters and serverless workgroups.
  • Multidimensional data layout sort keys will work with your existing data as long as the AUTO sort key is enabled on your table and a workload with repetitive scan filters is detected. The table will be reorganized based on the results of the multidimensional sort function.
  • To disable multidimensional data layout sort keys for a table, use alter table: ALTER TABLE table_name ALTER SORTKEY NONE. This disables the AUTO sort key feature on the table.
  • Multidimensional data layout sort keys are preserved when restoring or migrating your provisioned cluster to a serverless cluster or vice versa.

Conclusion

In this post, we showed that multidimensional data layout sort keys can significantly improve query runtime performance for workloads where dominant queries have repetitive scan filters.

To create a preview cluster from the Amazon Redshift console, navigate to the Clusters page and choose Create preview cluster. You can create a cluster in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Stockholm) Regions and test your workloads.

We would love to hear your feedback on this new feature and look forward to your comments on this post.


About the authors

Yanzhu Ji is a Product Manager in the Amazon Redshift team. She has experience in product vision and strategy in industry-leading data products and platforms. She has outstanding skill in building substantial software products using web development, system design, database, and distributed programming techniques. In her personal life, Yanzhu likes painting, photography, and playing tennis.

Milind Oke is a Data Warehouse Specialist Solutions Architect based out of New York. He has been building data warehouse solutions for over 15 years and specializes in Amazon Redshift.

Jialin Ding is an Applied Scientist in the Learned Systems Group, specializing in applying machine learning and optimization techniques to improve the performance of data systems such as Amazon Redshift.

AWS Security Profile: Chris Betz, CISO of AWS

Post Syndicated from Chris Betz original https://aws.amazon.com/blogs/security/aws-security-profile-chris-betz-ciso-of-aws/

In the AWS Security Profile series, we feature the people who work in Amazon Web Services (AWS) Security and help keep our customers safe and secure. This interview is with Chris Betz, Chief Information Security Officer (CISO), who began his role as CISO of AWS in August of 2023.


How did you get started in security? What prompted you to pursue this field?

I’ve always had a passion for technology, and for keeping people out of harm’s way. When I found computer science and security in the Air Force, this world opened up to me that let me help others, be a part of building amazing solutions, and engage my competitive spirit. Security has the challenges of the ultimate chess game, though with real and impactful consequences. I want to build reliable, massively scaled systems that protect people from malicious actors. This is really hard to do and a huge challenge I undertake every day. It’s an amazing team effort that brings together the smartest folks that I know, competing with threat actors.

What are you most excited about in your new role?

One of the most exciting things about my role is that I get to work with some of the smartest people in the field of security, people who inspire, challenge, and teach me something new every day. It’s exhilarating to work together to make a significant difference in the lives of people all around the world who trust us at AWS to keep their information secure. Security is constantly changing; we get to learn, adapt, and get better every single day. I get to spend my time helping to build a team and culture that customers can depend on, and I’m constantly impressed and amazed at the caliber of the folks I get to work with here.

How does being a former customer influence your role as AWS CISO?

I was previously the CISO at Capital One and was an AWS customer. As a former customer, I know exactly what it’s like to be a customer who relies on a partner for significant parts of their security. There needs to be a lot of trust, a lot of partnership across the shared responsibility model, and consistent focus on what’s being done to keep sensitive data secure. Every moment that I’m here at AWS, I’m reminded about things from the customer perspective and how I can minimize complexity, and help customers leverage the “super powers” that the cloud provides for CISOs who need to defend the breadth of their digital estate. I know how important it is to earn and keep customer trust, just like the trust I needed when I was in their shoes. This mindset influences me to learn as much as I can, never be satisfied with “good enough,” and grab every opportunity I can to meet and talk with customers about their security.

What’s been the most dramatic change you’ve seen in the security industry recently?

This is pretty easy to answer: artificial intelligence (AI). This is a really exciting time. AI is dominating the news and is on the mind of every security professional, everywhere. We’re witnessing something very big happening, much like when the internet came into existence and we saw how the world dramatically changed because of it. Every single sector was impacted, and AI has the same potential. Many customers use AWS machine learning (ML) and AI services to help improve signal-to-noise ratio, take over common tasks to free up valuable time to dig deeper into complex cases, and analyze massive amounts of threat intelligence to determine the right action in less time. The combination of Data + Compute power + AI is a huge advantage for cloud companies.

AI and ML have been a focus for Amazon for more than 25 years, and we get to build on an amazing foundation. And it’s exciting to take advantage of and adapt to the recent big changes and the impact this is having on the world. At AWS, we’re focused on choice and broadening access to generative AI and foundation models at every layer of the ML stack, including infrastructure (chips), developer tools, and AI services. What a great time to be in security!

What’s the most challenging part of being a CISO?

Maintaining a culture of security involves each person, each team, and each leader. That’s easy to say, but the challenge is making it tangible—making sure that each person sees that, even though their title doesn’t have “security” in it, they are still an integral part of security. We often say, “If you have access, you have responsibility.” We work hard to limit that access. And CISOs must constantly work to build and maintain a culture of security and help every single person who has access to data understand that security is an important part of their job.

What’s your short- and long-term vision for AWS Security?

Customers trust AWS to protect their data so they can innovate and grow quickly, so in that sense, our vision is for security to be a growth lever for our customers, not added friction. Cybersecurity is key to unlocking innovation, so managing risk and aligning the security posture of AWS with our business objectives will continue for the immediate future and long term. For our customers, my vision is to continue helping them understand that investing in security helps them move faster and take the right risks—the kind of risks they need to remain competitive and innovative. When customers view security as a business accelerator, they achieve new technical capabilities and operational excellence. Strong security is the ultimate business enabler.

If you could give one piece of advice to all CISOs, what would it be?

Nail Zero Trust. Zero Trust is the path to the strongest, most effective security, and getting back to the core concepts is important. While Zero Trust is a different journey for every organization, it’s a natural evolution of cybersecurity and defense in depth in particular. No matter what’s driving organizations toward Zero Trust—policy considerations or the growing patchwork of data protection and privacy regulations—Zero Trust meaningfully improves security outcomes through an iterative process. When companies get this right, they can quickly identify and investigate threats and take action to contain or disrupt unwanted activity.

What are you most proud of in your career?

I’m proud to have worked with—and to still be working with—such talented, capable, and intelligent security professionals who care deeply about security and are passionate about making the world a safer place. Being among the world’s top security experts makes me grateful and humbled for all the amazing opportunities I’ve had to work alongside them, solving problems together and helping create a legacy that makes security better.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Chris Betz

Chris is CISO at AWS. He oversees security teams and leads the development and implementation of security policies, with the aim of managing risk and aligning the company’s security posture with business objectives. Chris joined Amazon in August 2023, after holding CISO and security leadership roles at leading companies. He lives in Northern Virginia with his family.

Lisa Maher

Lisa Maher joined AWS in February 2022 and leads AWS Security Thought Leadership PR. Before joining AWS, she led crisis communications for clients experiencing data security incidents at two global PR firms. Lisa, a former journalist, is a graduate of Texas A&M School of Law, where she specialized in Cybersecurity Law & Policy, Risk Management & Compliance.

AWS achieves SNI 27001 certification for the AWS Asia Pacific (Jakarta) Region

Post Syndicated from Airish Mariano original https://aws.amazon.com/blogs/security/aws-achieves-sni-27001-certification-for-the-aws-asia-pacific-jakarta-region/

Amazon Web Services (AWS) is proud to announce the successful completion of its first Standar Nasional Indonesia (SNI) certification for the AWS Asia Pacific (Jakarta) Region in Indonesia. SNI is the Indonesian National Standard, and it comprises a set of standards that are nationally applicable in Indonesia. AWS is now certified according to the SNI 27001 requirements. An independent third-party auditor accredited by the Komite Akreditasi Nasional (KAN, the National Accreditation Committee) assessed AWS in accordance with Indonesian regulations.

SNI 27001 is based on the ISO/IEC 27001 standard, which provides a framework for the development and implementation of an effective information security management system (ISMS). An ISMS that is implemented according to this standard is a tool for risk management, cyber-resilience, and operational excellence.

AWS achieved the certification for compliance with SNI 27001 on October 28, 2023. The SNI 27001 certification covers the Asia Pacific (Jakarta) Region in Indonesia. For a full list of AWS services that are certified under SNI 27001, see the SNI 27001 compliance page. Customers can also download the latest SNI 27001 certificate on AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

AWS is committed to bringing new services into the scope of its compliance programs to help you meet your architectural, business, and regulatory needs. If you have questions about the SNI 27001 certification, contact your AWS account team.


Airish Mariano

Airish is an Audit Specialist at AWS based in Singapore. She leads security audit engagements in the Asia-Pacific region. Airish also drives the execution and delivery of compliance programs that provide security assurance for customers to accelerate their cloud adoption.

2023 Canadian Centre for Cyber Security Assessment Summary report available with 20 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2023-canadian-centre-for-cyber-security-assessment-summary-report-available-with-20-additional-services/

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of current and new AWS services and features. We are pleased to announce the availability of the 2023 Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS. This assessment covers a total of 150 AWS services and features in the Canada (Central) Region, including 20 newly added AWS services and features. The assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment is available on the Services in Scope page. The 20 new services and features are the following:

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.  

The CCCS cloud service provider information technology security assessment process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach, Annex 3 – Security control catalogue). As of November 2023, 150 AWS services in the Canada (Central) Region have been assessed by CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including Medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

 

Naranjan Goklani

Naranjan is an Audit Lead for Canada. He has experience leading audits, attestations, certifications, and assessments across the Americas. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, and utilities industries.

AWS Speaker Profile: Zach Miller, Senior Worldwide Security Specialist Solutions Architect

Post Syndicated from Roger Park original https://aws.amazon.com/blogs/security/aws-speaker-profile-zach-miller-senior-worldwide-security-specialist-solutions-architect/

In the AWS Speaker Profile series, we interview Amazon Web Services (AWS) thought leaders who help keep our customers safe and secure. This interview features Zach Miller, Senior Worldwide Security Specialist Solutions Architect and re:Invent 2023 presenter of Securely modernize payment applications with AWS and Centrally manage application secrets with AWS Secrets Manager. Zach shares thoughts on the data protection and cloud security landscape, his unique background, his upcoming re:Invent sessions, and more.


How long have you been at AWS?

I’ve been at AWS for more than four years, and I’ve enjoyed every minute of it! I started as a consultant in Professional Services, and I’ve been a Security Solutions Architect for around three years.

How do you explain your job to your non-tech friends?

Well, my mother doesn’t totally understand my role, and she’s been known to tell her friends that I’m the cable company technician that installs your internet modem and router. I usually tell my non-tech friends that I help AWS customers protect their sensitive data. If I mention cryptography, I typically only get asked questions about cryptocurrency—which I’m not qualified to answer. If someone asks what cryptography is, I usually say it’s protecting data by using mathematics.

How did you get started in data protection and cryptography? What about it piqued your interest?

I originally went to school to become a network engineer, but I discovered that moving data packets from point A to point B wasn’t as interesting to me as securing those data packets. Early in my career, I was an intern at an insurance company, and I had a mentor who set up ethical hacking lessons for me—for example, I’d come into the office and he’d have a compromised workstation preconfigured. He’d ask me to do an investigation and determine how the workstation was compromised and what could be done to isolate it and collect evidence. Other times, I’d come in and find my desk cabinets were locked with a padlock, and he wanted me to pick the lock. Security is particularly interesting because it’s an ever-evolving field, and I enjoy learning new things.

What’s been the most dramatic change you’ve seen in the data protection landscape?

One of the changes that I’ve been excited to see is an emphasis on encrypting everything. When I started my career, we’d often have discussions about encryption in the context of tradeoffs. If we needed to encrypt sensitive data, we’d have a conversation with application teams about the potential performance impact of encryption and decryption operations on their systems (for example, their databases), when to schedule downtime for the application to encrypt the data or rotate the encryption keys protecting the data, how to ensure the durability of their keys and make sure they didn’t lose data, and so on.

When I talk to customers about encryption on AWS today, the conversation has largely shifted from “Should I encrypt this data?” to “How should I encrypt this data?”—although it’s still useful to talk about potential performance impact. This shift is due to services such as AWS Key Management Service (AWS KMS) making it simpler for customers to manage encryption keys and encrypt and decrypt data in their applications with minimal performance impact or application downtime. AWS KMS has also made it simple to enable encryption of sensitive data—with over 120 AWS services integrated with AWS KMS, and services such as Amazon Simple Storage Service (Amazon S3) encrypting new S3 objects by default.
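The low overhead described above comes from the envelope-encryption pattern that AWS KMS uses: generate a per-object data key, encrypt the bulk data locally, and store only the small data key wrapped by a master key. The sketch below illustrates the mechanics only—the SHA-256 keystream is a deliberately toy stand-in, not real cryptography (production systems use AES-GCM via KMS or the AWS Encryption SDK, and the wrap/unwrap steps would be KMS API calls rather than local functions):

```python
import hashlib
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher for illustration ONLY -- never use this in practice.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def envelope_encrypt(master_key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    data_key = os.urandom(32)                       # fresh per-object data key
    ciphertext = xor_stream(data_key, plaintext)    # bulk data encrypted locally
    wrapped_key = xor_stream(master_key, data_key)  # only 32 bytes touch the "KMS" key
    return wrapped_key, ciphertext

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    data_key = xor_stream(master_key, wrapped_key)  # unwrap with the master key
    return xor_stream(data_key, ciphertext)

master = os.urandom(32)
wrapped, ct = envelope_encrypt(master, b"cardholder data")
assert envelope_decrypt(master, wrapped, ct) == b"cardholder data"
```

The design point this illustrates is why KMS adds so little overhead: the bulk data never leaves the application, and only the small data key is sent for wrapping or unwrapping.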

You are a frequent contributor to the AWS Security Blog. What were some of your recent posts about?

My last two posts covered how to use AWS Identity and Access Management (IAM) condition context keys to create enterprise controls for certificate management and how to use AWS Secrets Manager to securely manage and retrieve secrets in hybrid or multicloud workloads. I like writing posts that show customers how to use a new feature, or highlight a pattern that many customers ask about.
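One recurring pattern in secrets retrieval, whether in AWS or a hybrid workload, is avoiding a network call on every request by caching the secret in memory with a time-to-live. The AWS-provided client-side caching libraries do this for you; the sketch below shows the idea with an injected `fetch` callable, where the function names and TTL value are illustrative assumptions, not a specific AWS API:

```python
import time
from typing import Callable

class SecretCache:
    """Cache secret values in memory and refetch only after ttl_seconds."""

    def __init__(self, fetch: Callable[[str], str], ttl_seconds: float = 300.0):
        self._fetch = fetch  # e.g. a wrapper around a get-secret-value API call
        self._ttl = ttl_seconds
        self._values: dict[str, tuple[str, float]] = {}

    def get(self, secret_id: str) -> str:
        cached = self._values.get(secret_id)
        now = time.monotonic()
        if cached and now - cached[1] < self._ttl:
            return cached[0]  # still fresh: no network round trip
        value = self._fetch(secret_id)
        self._values[secret_id] = (value, now)
        return value
```

With boto3, `fetch` might wrap `get_secret_value` on a Secrets Manager client; keeping the TTL short bounds how long a rotated secret can be served stale.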

You are speaking in a couple of sessions at AWS re:Invent; what will your sessions focus on? What do you hope attendees will take away from your session?

I’m delivering two sessions at re:Invent this year. The first is a chalk talk, Centrally manage application secrets with AWS Secrets Manager (SEC221), that I’m delivering with Ritesh Desai, who is the General Manager of Secrets Manager. We’re discussing how you can securely store and manage secrets in your workloads inside and outside of AWS. We will highlight some recommended practices for managing secrets, and answer your questions about how Secrets Manager integrates with services such as AWS KMS to help protect application secrets.

The second session is also a chalk talk, Securely modernize payment applications with AWS (SEC326). I’m delivering this talk with Mark Cline, who is the Senior Product Manager of AWS Payment Cryptography. We will walk through an example scenario on creating a new payment processing application. We will discuss how to use AWS Payment Cryptography, as well as other services such as AWS Lambda, to build a simple architecture to help process and secure credit card payment data. We will also include common payment industry use cases such as tokenization of sensitive data, and how to include basic anti-fraud detection, in our example app.
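Tokenization, one of the payment use cases mentioned for SEC326, replaces a card number (PAN) with a random surrogate so downstream systems never handle the real number. A minimal in-memory vault sketch of the idea follows; real deployments would back the mapping with an HSM-protected store or AWS Payment Cryptography, and the class and token format here are purely illustrative:

```python
import secrets

class TokenVault:
    """Map card numbers (PANs) to random, non-derivable tokens."""

    def __init__(self):
        self._token_to_pan: dict[str, str] = {}
        self._pan_to_token: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:  # return a stable token per PAN
            return self._pan_to_token[pan]
        # Keep the last four digits so receipts/support can reference the card.
        token = f"tok_{secrets.token_hex(8)}_{pan[-4:]}"
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]  # only the vault can reverse a token
```

Because the token is random rather than encrypted, nothing outside the vault can recover the PAN from it, which is what shrinks the compliance scope of downstream systems.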

What are you currently working on that you’re excited about?

My re:Invent sessions are definitely something that I’m excited about. Otherwise, I spend most of my time talking to customers about AWS Cryptography services such as AWS KMS, AWS Secrets Manager, and AWS Private Certificate Authority. I also lead a program at AWS that enables our subject matter experts to create and publish videos to demonstrate new features of AWS Security Services. I like helping people create videos, and I hope that our videos provide another mechanism for viewers who prefer information in a video format. Visual media can be more inclusive for customers with certain disabilities or for neurodiverse customers who find it challenging to focus on written text. Plus, you can consume videos differently than a blog post or text documentation. If you don’t have the time or desire to read a blog post or AWS public doc, you can listen to an instructional video while you work on other tasks, eat lunch, or take a break. I invite folks to check out the AWS Security Services Features Demo YouTube video playlist.

Is there something you wish customers would ask you about more often?

I always appreciate when customers provide candid feedback on our services. AWS is a customer-obsessed company, and we build our service roadmaps based on what our customers tell us they need. You should feel comfortable letting AWS know when something could be easier, more efficient, or less expensive. Many customers I’ve worked with have provided actionable feedback on our services and influenced service roadmaps, just by speaking up and sharing their experiences.

How about outside of work, any hobbies?

I have two toddlers that keep me pretty busy, so most of my hobbies are what they like to do. So I tend to spend a lot of time building elaborate toy train tracks, pushing my kids on the swings, and pretending to eat wooden toy food that they “cook” for me. Outside of that, I read a lot of fiction and indulge in binge-worthy TV.


Roger Park

Roger is a Senior Security Content Specialist at AWS Security focusing on data protection. He has worked in cybersecurity for almost ten years as a writer and content producer. In his spare time, he enjoys trying new cuisines, gardening, and collecting records.

Zach Miller

Zach is a Senior Worldwide Security Specialist Solutions Architect at AWS. His background is in data protection and security architecture, focused on a variety of security domains, including cryptography, secrets management, and data classification. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

AWS Security Profile: Tom Scholl, VP and Distinguished Engineer, AWS

Post Syndicated from Tom Scholl original https://aws.amazon.com/blogs/security/aws-security-profile-tom-scholl-vp-and-distinguished-engineer-aws/


In the AWS Security Profile series, we feature the people who work in Amazon Web Services (AWS) Security and help keep our customers safe and secure. This interview is with Tom Scholl, VP and Distinguished Engineer for AWS.

What do you do in your current role and how long have you been at AWS?

I’m currently a vice president and distinguished engineer in the infrastructure organization at AWS. My role includes working on the AWS global network backbone, as well as focusing on denial-of-service detection and mitigation systems. I’ve been with AWS for over 12 years.

What initially got you interested in networking and how did that lead you to the anti-abuse and distributed denial of service (DDoS) space?

My interest in large global network infrastructure started when I was a teenager in the 1990s. I remember reading a magazine at the time that cataloged all the large IP transit providers on the internet, complete with network topology maps and sizes of links. It inspired me to want to work on the engineering teams that supported that. Over time, I was fortunate enough to move from working at a small ISP to a telecom carrier, where I was able to work on their point-of-presence (POP) and backbone designs. It was there that I learned about the internet peering ecosystem and started collaborating with network operators from around the globe.

For the last 20-plus years, DDoS was always something I had to deal with to some extent—mainly from the networking lens of preventing network congestion through traffic engineering and capacity planning, as well as supporting the integration of DDoS traffic scrubbers into network infrastructure.

About three years ago, I became especially intrigued by the network abuse and DDoS space after using AWS network telemetry to observe the size of malicious events in the wild. I started to be interested in how mitigation could be improved, and how to break down the problem into smaller pieces to better understand the true sources of attack traffic. Instead of merely being an observer, I wanted to be a part of the solution and make it better. This required me to immerse myself into the domain, both from the perspective of learning the technical details and by getting hands-on and understanding the DDoS industry and environment as a whole. Part of how I did this was by engaging with my peers in the industry at other companies and absorbing years of knowledge from them.

How do you explain your job to your non-technical friends and family?

I try to explain both areas that I work on. First, that I help build the global network infrastructure that connects AWS and its customers to the rest of the world. I explain that for a home user to reach popular destinations hosted on AWS, data has to traverse a series of networks and physical cables that are interconnected so that the user’s home computer or mobile phone can send packets to another part of the world in less than a second. All that requires coordination with external networks, which have their own practices and policies on how they handle traffic. AWS has to navigate that complexity and build and operate our infrastructure with customer availability and security in mind. Second, when it comes to DDoS and network abuse, I explain that there are bad actors on the internet that use DDoS to cause impairment for a variety of reasons. It could be someone wanting to disrupt online gaming, video conferencing, or regular business operations for any given website or company. I work to prevent those events from causing any sort of impairment, and I trace attacks back to their source and disrupt the infrastructure launching them so it can’t be effective in the future.

Recently, you were awarded the J.D. Falk Award by the Messaging Malware Mobile Anti-Abuse Working Group (M3AAWG) for IP Spoofing Mitigation. Congratulations! Please tell us more about the efforts that led to this.

Basically, there are three main types of DDoS attacks we observe: botnet-based unicast floods, spoofed amplification/reflection attacks, and proxy-driven HTTP request floods. The amplification/reflection aspect is interesting because it requires DDoS infrastructure providers to acquire compute resources behind providers that permit IP spoofing. IP spoofing itself has a long history on the internet, with a best current practice (BCP 38, RFC 2827) first published back in 2000 recommending that providers prevent this from occurring. However, adoption of this practice is still spotty on the internet.

At NANOG76, there was a proposal that these sorts of spoofed attacks could be traced by network operators in the path of the pre-amplification/reflection traffic (before it bounced off the reflectors). I personally started getting involved in this effort about two years ago. AWS operates a large global network and has network telemetry data that helps me identify pre-amplification/reflection traffic entering our network, which allows me to triangulate the source network generating it. I then started engaging directly with the various networks that we connect to, providing them timestamps, spoofed source IP addresses, and the specific protocols and ports involved, in the hope that they could use their own network telemetry to identify the customer generating the traffic. From there, they’d engage with their customer to get the source shut down or, failing that, implement packet filters on their customer’s connection to prevent spoofing.
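The packet filters described here implement ingress filtering (BCP 38): a provider drops customer traffic whose source address falls outside the prefixes assigned to that customer. The check itself is simple, as this sketch with Python's `ipaddress` module shows (the customer prefixes are made-up documentation ranges, purely for illustration):

```python
import ipaddress

# Prefixes the provider has assigned/routed to this customer port (illustrative).
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permitted_source(src: str) -> bool:
    """BCP 38 check: may this source address legitimately originate here?"""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in CUSTOMER_PREFIXES)

assert permitted_source("203.0.113.57") is True
assert permitted_source("8.8.8.8") is False  # spoofed source: drop at ingress
```

The hard part, as the interview makes clear, isn't the filter logic—it's getting every edge network to deploy it, since spoofed packets can only be stopped close to their origin.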

Initially, only a few networks were capable of doing this well. This meant I had to spend a fair amount of energy in educating various networks around the globe on what spoofed traffic is, how to use their network telemetry to find it, and how to handle it. This was the most complicated and challenging part because this wasn’t on the radar of many networks out there. Up to this time, frontline network operations and abuse teams at various networks, including some very large ones, were not proficient in dealing with this.

The education I did included a variety of engagements: sharing drawings of the day in the life of a spoofed packet in a reflection attack, providing instructions on how to use their network telemetry tools, connecting them with their network telemetry vendors to help them, and even going so far as using other more exotic methods to identify which of their customers were spoofing and pointing out who they needed to analyze more deeply. In the end, it’s about getting other networks to be responsive and take action and, in the best cases, find spoofing on their own and act upon it.

Incredible! How did it feel accepting the award at the M3AAWG General Meeting in Brooklyn?

It was an honor to accept it and see some acknowledgement for the behind-the-scenes work that goes on to make the internet a better place.

What’s next for you in your work to suppress IP spoofing?

I’ll continue tracing exercises and engaging with external providers—in particular, the network providers that struggle to deal with spoofing—to help improve their operations. I’m also working out more effective ways to educate the hosting providers where IP spoofing is a common issue and to get them to implement proper default controls that disallow this behavior. Another aspect is being a force multiplier: enabling others to spread the word and be part of the education process.

Looking ahead, what are some of your other goals for improving users’ online experiences and security?

Continually focusing on improving our DDoS defense strategies and working with customers to build tailored solutions that address some of their unique requirements. Across AWS, we have many services that are architected in different ways, so a key part of this is figuring out how to raise the bar from a DDoS defense perspective across each of them. AWS customers also have their own unique architectures and protocols that can require developing new solutions to address their specific needs. On the disruption front, we will continue to focus on disrupting DDoS-as-a-service provider infrastructure, going beyond disrupting spoofing to disrupting botnets and the infrastructure associated with HTTP request floods.

With HTTP request floods now much more popular than byte-heavy and packet-heavy threat methods, it’s important to highlight the risks that open proxies on the internet pose. This is partly why software packages need safe defaults to prevent misuse, and why network operators should proactively identify open proxies and take appropriate action. Hosting providers should also recognize when their customers’ resources are communicating with large fleets of proxies and consider appropriate mitigations.

What are the most critical skills you would advise people need to be successful in network security?

I’m a huge proponent of being hands-on and diving into problems to truly understand how things are operating. Putting yourself outside your comfort zone, diving deep into the data to understand something, and translating that into outcomes and actions is something I highly encourage. After you immerse yourself in a particular domain, you can be much more effective at developing strategies and rapid prototyping to move forward. You can make incremental progress with small actions. You don’t have to wait for the perfect and complete solution to make some progress. I also encourage collaboration with others because there is incredible value in seeking out diverse opinions. There are resources out there to engage with, provided you’re willing to put in the work to learn and determine how you want to give back. The best people I’ve worked with don’t do it for public attention, blog posts, or social media status. They work in the background and don’t expect anything in return. They do it because of their desire to protect their customers and, where possible, the internet at large.

Lastly, if you had to pick an industry outside of security for your career, what would you be doing?

I’m over the maximum age allowed to start as an air traffic controller, so I suppose an air transport pilot or a locomotive engineer would be pretty neat.


Tom Scholl

Tom is Vice President and Distinguished Engineer at AWS.

Amanda Mahoney

Amanda is the public relations lead for the AWS portfolio of security, identity, and compliance services. She joined AWS in September 2022.