Tag Archives: Foundational (100)

Spring 2021 PCI DSS report now available with nine services added in scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/spring-2021-pci-dss-report-now-available-with-nine-services-added-in-scope/

We’re continuing to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that nine new services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This provides our customers with more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in AWS.

You can see the full list of services, including the nine newly added services, on our Services in Scope by Compliance Program page.

AWS Local Zones sites were newly assessed as additional infrastructure deployments as part of the spring 2021 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) that shows AWS PCI compliance status is available through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

Five reasons why I’m excited to attend AWS re:Inforce 2021 in Houston, TX

Post Syndicated from Clarke Rodgers original https://aws.amazon.com/blogs/security/five-reasons-why-im-excited-to-attend-aws-reinforce-2021-in-houston-tx/

You may have seen the recent invitation from Stephen Schmidt, Chief Information Security Officer (CISO) at Amazon Web Services, to join us at AWS re:Inforce in Houston, TX on August 24 and 25. I’d like to dive a little bit deeper into WHY you should attend and HOW to make the most of your time there.

Why listen to me? As an AWS Enterprise Strategist focused on security, risk, and compliance (and a former CISO), I spend most of my time with customers helping them navigate the world of security in the cloud. I also help them learn how security fits into the bigger picture of digital transformation and cloud adoption. Virtually every question and problem set I’m presented with as part of my day job is addressed at AWS re:Inforce. It is a learning conference designed to educate our customers through hands-on workshops and sessions, whether they’re executives, managers, engineers, or developers. Focused on the security mission, AWS re:Inforce is the event we’ve been looking for as security professionals, free of the fear, uncertainty, and doubt (FUD) and “magic potions” presented at other security events.

Although many attendees come to learn from AWS leaders and security practitioners, I believe some of the best knowledge comes from our customers (your peers) and the amazing security solutions they’ve built using AWS services. The customer sessions present practical solutions that have worked at their respective organizations. Struggling to solve a particular security challenge? Chances are one of your peers has already solved it. And during the ample networking opportunities, you may be able to teach and inspire your peers as well! View the listings for customer-led and AWS-led sessions.

I’ve written about how to think broadly about security solutions by focusing on capabilities rather than specific tools or vendor solutions. AWS re:Inforce underscores the point that with the cloud, you are in charge of your own security destiny.

Some of the sessions and experiences I’m particularly excited about at this year’s AWS re:Inforce:

  1. Stephen Schmidt’s keynote with special guest speaker, AWS CEO Adam Selipsky. AWS re:Inforce 2021 kicks off with the keynote on Tuesday, August 24. Stephen and Adam will take the stage with industry-leading guest speakers to share best practices for managing security, compliance, identity, and governance in the cloud. In addition to attending in person, you can register for the livestream of the keynote.
  2. Leadership sessions from some of the top security, risk, and compliance minds at AWS. The sessions cover the latest in Data Protection & Privacy, Governance, Risk & Compliance, Identity & Access Management, Network & Infrastructure, and Threat Detection & Incident Response.
  3. Security Jams, which are gamified security exercises based on real-world security problems. We’re offering two gamified learning opportunities at re:Inforce: AWS Security Jams and Capture the Flag. Security Jams are scheduled sessions, and the Capture the Flag experience in the Expo is self-paced. Both offer you an opportunity to showcase your knowledge of general security concepts as well as AWS security best practices. Create a team or work solo—all you need to bring is your desire to learn and a laptop.
  4. The Security Learning Hub, where you can learn at your own pace about the subjects YOU are interested in, from the experts who either built or regularly support your favorite AWS security solutions. The Security Learning Hub is the central location for learning and engagement at re:Inforce. Not only will you learn from AWS Security Partners, but you’ll engage in unique demos and experiences, connect with AWS experts, and network with the security community.
  5. Networking—meet with peers, customers, AWS Partners, and AWS Security experts under one roof (and yes, in person!).

Still not sure how to best plan your time? Take a look at these curated attendee guides by some of our security-focused AWS Heroes, who are experts in their own right. Whether you identify as a builder, executive, or industry/security professional, there’s a path for you to follow for your time at AWS re:Inforce.

If you’re already registered, I look forward to seeing you in Houston and wish you an awesome AWS re:Inforce 2021. Not registered yet…what are you waiting for? Register today and use the code “RFSALUwi70xfx” for a $300 discount.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clarke Rodgers

Clarke is an Enterprise Security Strategist with AWS. In this role, Clarke works with enterprise security, risk, and compliance focused executives on how AWS can strengthen their security posture and helps them understand the security capabilities and possibilities of the cloud. Prior to AWS, Clarke was a CISO for the North American operations of a multinational insurance/reinsurance company, where he took a strategic division all-in to AWS for security reasons, including achieving SOC 2 Type 2 attestation. Clarke’s 20+ year career in IT operations and security-focused roles helps him align with the needs of today’s enterprise customers during their cloud transformation journeys. Clarke attended the University of North Carolina and served as a United States Marine.

New 2021 H1 IRAP report is now available on AWS Artifact for Australian customers

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/new-2021-h1-irap-report-is-now-available-on-aws-artifact-for-australian-customers/

We are excited to announce that an additional 15 AWS services are now assessed to be in scope for the Information Security Registered Assessors Program (IRAP), after a successful incremental audit completed in June 2021 by an independent Australian Signals Directorate (ASD) certified IRAP assessor. This brings the total to 112 services assessed at the IRAP PROTECTED level. The new IRAP report is now available in AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For the full list of these services, see the AWS Services in Scope page on the IRAP tab. All services in scope are available in the Asia Pacific (Sydney) Region.

The following are the 15 newly added services in scope:

  1. Amazon Connect Customer Profiles
  2. Amazon Detective
  3. Amazon Fraud Detector
  4. Amazon Kendra
  5. Amazon Keyspaces (for Apache Cassandra)
  6. Amazon Lex
  7. Amazon Textract
  8. AWS App Mesh
  9. AWS Cloud Map
  10. AWS Cloud9
  11. AWS Ground Station
  12. AWS OpsWorks for Chef Automate
  13. AWS OpsWorks for Puppet Enterprise
  14. AWS Personal Health Dashboard
  15. AWS Resource Groups

The IRAP documentation pack is developed in accordance with the Australian Cyber Security Centre (ACSC) Cloud Security Guidance and their Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Australian Government Information Security Manual (ISM), the Attorney-General’s Department Protective Security Policy Framework (PSPF), and the Digital Transformation Agency (DTA) Secure Cloud Strategy.

The IRAP package on AWS Artifact also includes the AWS Consumer Guide and the whitepaper Reference Architectures for ISM PROTECTED Workloads in the AWS Cloud.

We created the IRAP documentation pack to help Australian government agencies and their partners plan, architect, and risk-assess their workloads when using AWS Cloud services. Please reach out to your AWS representatives to let us know which additional services you would like to see in scope for upcoming IRAP assessments. We strive to bring more services into the scope of the IRAP PROTECTED level, based on your requirements.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the Audit Program Manager for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

OSPAR 2021 report now available with 127 services in scope

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/ospar-2021-report-now-available-with-127-services-in-scope/

We are excited to announce the completion of the third Outsourced Service Provider Audit Report (OSPAR) audit cycle on July 1, 2021. The latest OSPAR report includes 19 newly added services, bringing the total number of services in scope to 127 in the Asia Pacific (Singapore) Region.

You can download our latest OSPAR report in AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The list of services in scope for OSPAR is available in the report, and is also available on the AWS Services in Scope by Compliance Program webpage.

Some of the newly added services in scope include the following:

  • AWS Outposts, a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to any data center, co-location space, or on-premises facility for a consistent hybrid experience.
  • Amazon Connect, an easy-to-use omnichannel cloud contact center that helps customers provide superior customer service across voice, chat, and tasks at lower cost than traditional contact center systems.
  • Amazon Lex, a service for building conversational interfaces into any application using voice and text.
  • Amazon Macie, a fully managed data security and data privacy service that uses machine learning and pattern matching to help customers discover, monitor, and protect their sensitive data in AWS.
  • Amazon Quantum Ledger Database (QLDB), a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority.

The OSPAR assessment is performed by an independent third-party auditor, selected from the Association of Banks in Singapore (ABS) list of approved auditors. The assessment demonstrates that AWS has a system of controls in place that meets the ABS Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines demonstrates to customers our commitment to meet the security expectations for cloud service providers set by the financial services industry in Singapore. You can leverage the OSPAR assessment report to conduct due diligence and to help reduce the effort and costs required for compliance. The AWS OSPAR reporting period is now updated in the ABS list of OSPAR Audited Outsourced Service Providers.

As always, we are committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the OSPAR report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the Audit Program Manager for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

How AWS is helping EU customers navigate the new normal for data protection

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/how-aws-is-helping-eu-customers-navigate-the-new-normal-for-data-protection/

Achieving compliance with the European Union’s data protection regulations is critical for hundreds of thousands of Amazon Web Services (AWS) customers. Many of them are subject to the EU’s General Data Protection Regulation (GDPR), which ensures individuals’ fundamental right to privacy and the protection of personal data. In February, we announced strengthened commitments to protect customer data, such as challenging law enforcement requests for customer data that conflict with EU law.

Today, we’re excited to announce that we’ve launched two new online resources to help customers more easily complete data transfer assessments and comply with the GDPR, taking into account the European Data Protection Board (EDPB) recommendations. These resources will also assist AWS customers in other countries to understand whether their use of AWS services involves a data transfer.

Using AWS’s new “Privacy Features for AWS Services,” customers can determine whether their use of an individual AWS service involves the transfer of customer data (the personal data they’ve uploaded to their AWS account). Knowing this information enables customers to choose the right action for their applications, such as opting out of the data transfer or creating an appropriate disclosure of the transfer for end user transparency.

We’re also providing additional information on the processing activities and locations of the limited number of sub-processors that AWS engages to provide services that involve the processing of customer data. AWS engages three types of sub-processors:

  • Local AWS entities that provide the AWS infrastructure.
  • AWS entities that process customer data for specific AWS services.
  • Third parties that AWS contracts with to provide processing activities for specific AWS services.

The enhanced information available on our updated Sub-processors page enables customers to assess if a sub-processor is relevant to their use of AWS services and AWS Regions.

These new resources make it easier for AWS customers to conduct their data transfer assessments as set out in the EDPB recommendations and, as a result, comply with GDPR. After completing their data transfer assessments, customers will also be able to determine whether they need to implement supplemental measures in line with the EDPB’s recommendations.

These resources support our ongoing commitment to giving customers control over where their data is stored, how it’s stored, and who has access to it.

Since we opened our first Region in the EU in 2007, customers have been able to choose to store customer data with AWS in the EU. Today, customers can store their data in our AWS Regions in France, Germany, Ireland, Italy, and Sweden, and we’re adding Spain in 2022. AWS will never transfer data outside a customer’s selected AWS Region without the customer’s agreement.

AWS customers control how their data is stored, and they have a variety of tools at their disposal to enhance security. For example, AWS CloudHSM and AWS Key Management Service (AWS KMS) allow customers to encrypt data in transit and at rest and securely generate and manage encryption keys that they control.
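As a small illustration of this capability, the following sketch encrypts a payload under a customer managed key with AWS KMS. It assumes the AWS SDK for JavaScript (v2); the key alias is a placeholder, not a value from this post:

const AWS = require('aws-sdk')
const kms = new AWS.KMS()

// Encrypt a payload under a customer managed KMS key
const encryptSecret = async (plaintext) => {
  const { CiphertextBlob } = await kms.encrypt({
    KeyId: 'alias/my-application-key', // hypothetical key alias
    Plaintext: plaintext
  }).promise()
  return CiphertextBlob
}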

Finally, our customers control who can access their data. We never use customer data for marketing or advertising purposes. We also prohibit, and our systems are designed to prevent, remote access by AWS personnel to customer data for any purpose, including service maintenance, unless requested by a customer, required to prevent fraud and abuse, or to comply with the law.

As previously mentioned, we challenge law enforcement requests for customer data from governmental bodies, whether inside or outside the EU, where the request conflicts with EU law, is overbroad, or where we otherwise have appropriate grounds to do so.

Earning customer trust is the foundation of our business at AWS, and we know protecting customer data is key to achieving this. We also know that helping customers protect their data in a world with constantly changing regulations, technology, and risks takes teamwork. We would never expect our customers to go it alone.

As we continue to enhance the capabilities customers have at their fingertips, they can be confident that choosing AWS will ensure they have the tools necessary to help them meet the most stringent security, privacy, and compliance requirements.

If you have questions or need more information, visit our EU Data Protection page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Author

Donna Dodson

Donna is a Senior Principal Scientist at AWS focusing on security and privacy capabilities including cryptography, risk management, standards, and assessments. Before joining AWS, Donna was the Chief Cybersecurity Advisor at the National Institute of Standards and Technology (NIST). She led NIST’s comprehensive cybersecurity research and development to cultivate trust in technology for stakeholders nationally and internationally.

AWS achieves Spain’s ENS High certification across 149 services

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-149-services/

Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). We continually add more services to our ENS certification scope, which helps assure public sector organizations in Spain that want to build secure applications and services on AWS that the security standards required for ENS certification are being met.

ENS certification establishes security standards that apply to all government agencies and public organizations in Spain, and to service providers that the public services are dependent on. Spain’s National Security Framework is regulated under Royal Decree 3/2010 and was developed through close collaboration between Entidad Nacional de Acreditación (ENAC), the Ministry of Finance and Public Administration, and the National Cryptologic Centre (CCN), as well as other administrative bodies.

We’re excited to announce the addition of 44 new services to the scope of our Spain Esquema Nacional de Seguridad (ENS) High certification, for a total of 149 services. The certification covers all AWS Regions. Some of the new security services included in ENS High scope are:

  • Amazon Macie is a data security and data privacy service that uses machine learning and pattern matching to help you discover, monitor, and protect your sensitive data in AWS.
  • AWS Control Tower is a service you can use to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS’s experience working with thousands of enterprises as they move to the cloud.
  • Amazon Fraud Detector is a fully managed machine learning (ML) fraud detection solution that provides everything needed to build, deploy, and manage fraud detection models.
  • AWS Network Firewall is a managed service that makes it easier to deploy essential network protections for all your Amazon Virtual Private Clouds (Amazon VPC).

AWS’s achievement of the ENS High certification is verified by BDO España, which conducted an independent audit and attested that AWS meets the required confidentiality, integrity, and availability standards.

For more information about ENS High, see the AWS Compliance page Esquema Nacional de Seguridad High. To view which services are covered, see the ENS High tab on the AWS Services in Scope by Compliance Program page. You can download the Esquema Nacional de Seguridad (ENS) Certificate from AWS Artifact in the AWS Management Console or from the Compliance page Esquema Nacional de Seguridad High.

As always, we’re committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. Please reach out to your AWS account team or [email protected] if you have questions about the ENS program.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is a Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across the Asia Pacific, Japan, and Europe Regions. During his professional career, he has helped multiple cloud service providers obtain global and regional security certifications. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

AWS Verified episode 6: A conversation with Reeny Sondhi of Autodesk

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-6-a-conversation-with-reeny-sondhi-of-autodesk/

I’m happy to share the latest episode of AWS Verified, where we bring you global conversations with leaders about issues impacting cybersecurity, privacy, and the cloud. We take this opportunity to meet with leaders from various backgrounds in security, technology, and leadership.

For our latest episode of Verified, I had the opportunity to meet virtually with Reeny Sondhi, Vice President and Chief Security Officer of Autodesk. In her role, Reeny drives security-related strategy and decisions across the company. She leads the teams responsible for the security of Autodesk’s infrastructure, cloud, products, and services, as well as the teams dedicated to security governance, risk & compliance, and security incident response.

Reeny and I touched on a variety of subjects, from her career journey, to her current stewardship of Autodesk’s security strategy based on principles of trust. Reeny started her career in product management, having conceptualized, created, and brought multiple software and hardware products to market. “My passion as a product manager was to understand customer problems and come up with either innovative products or features to help solve them. I tell my team I entered the world of security by accident from product management, but staying in this profession has been my choice. I’ve been able to take the same passion I had when I was a product manager for solving real world customer problems forward as a security leader. Even today, sitting down with my customers, understanding what their problems are, and then building a security program that directly solves these problems, is core to how I operate.”

Autodesk has customers across a wide variety of industries, so Reeny and her team work to align the security program with customer experience and expectations. Reeny has also worked to drive security awareness across Autodesk, empowering employees throughout the organization to act as security owners. “One lesson is consistency in approach. And another key lesson that I’ve learned over the last few years is to demystify security as much as possible for all constituents in the organization. We have worked pretty hard to standardize security practices across the entire organization, which has helped us in scaling security throughout Autodesk.”

Reeny and Autodesk are setting a great example of how to innovate on behalf of their customers, securely. I encourage you to learn more about her perspective on this, and other aspects of how to manage and scale a modern security program, by watching the interview.

Watch my interview with Reeny, and visit the Verified webpage for previous episodes, including conversations with security leaders at Netflix, Comcast, and Vodafone. If you have any suggestions for topics you’d like to see featured in future episodes, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Join us in person for AWS re:Inforce 2021

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/join-us-in-person-for-aws-reinforce-2021/

I’d like to personally invite you to attend our security conference, AWS re:Inforce 2021 in Houston, TX on August 24–25. This event will offer interactive educational content to address your security, compliance, privacy, and identity management needs.

As the Chief Information Security Officer of Amazon Web Services (AWS), my primary job is to help our customers navigate their security journey while keeping the AWS environment safe. AWS re:Inforce will help you understand how you can change to accelerate the pace of innovation in your business while staying secure. With recent headlines around ransomware, misconfigurations, and unintended privacy consequences, this is your chance to learn the tactical and strategic lessons that will help keep your systems and tools protected.

AWS re:Inforce 2021 will kick off with my keynote on Tuesday, August 24. You’ll hear about the latest innovations in cloud security from AWS and learn what you can do to foster a culture of security in your business. Take a look at my re:Invent 2020 presentation, AWS Security: Where we’ve been, where we’re going, or this short overview of the top 10 areas security groups should focus on for examples of the type of content to expect.

For those who are just getting started on AWS and for our more tenured customers, AWS re:Inforce offers you an opportunity to learn how to prioritize your security posture and investments. Using the Security pillar of the AWS Well-Architected Framework, sessions will address how you can build practical and prescriptive measures to protect your data, systems, and assets.

Sessions are offered at all levels and for all backgrounds, from business to technical, and there are learning opportunities in over 100 sessions across five tracks: Data Protection & Privacy; Governance, Risk & Compliance; Identity & Access Management; Network & Infrastructure Security; and Threat Detection & Incident Response. In these sessions, you’ll connect with and learn from AWS experts, customers, and partners who share actionable insights that you can apply in your everyday work. AWS re:Inforce is interactive, with sessions like chalk talks and lecture-style breakout content available to suit your learning style and goals. Sessions will be available from the intermediate (200) through expert (400) levels, so you can grow your skills, no matter where you are in your career. Finally, there will be a leadership session for each track, where AWS leaders will share best practices and trends in each of these areas.

At re:Inforce, AWS developers and experts will cover the latest advancements in AWS security, compliance, privacy, and identity solutions—including actionable insights your business can use right now. Plus, you’ll learn from AWS customers and partners who are using AWS services in innovative ways to protect their data, achieve security at scale, and stay ahead of bad actors in this rapidly evolving security landscape.

We hope you can join us in Houston, and we want you to feel safe if you do. The health and safety of our customers, partners, and employees remains our top priority. If you want to learn more about health measures that are being taken at re:Inforce, visit our Health Measures page on the conference website. If you’re not yet comfortable attending in person, or if local travel restrictions prevent you from doing so, register to access a livestream of my keynote for free. Also, a selection of sessions will be recorded and available to watch after the event. Keep checking the AWS re:Inforce website for additional updates.

A full conference pass is $1,099. However, if you register today with the code “RFSALUwi70xfx” you’ll receive a $300 discount (while supplies last).

We’re excited to get back to re:Inforce; it is emblematic of our commitment to giving customers direct access to the latest security research and trends. We’ll continue to release additional details about the event on our website, and we look forward to seeing you in Houston!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

AWS welcomes Wickr to the team

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-welcomes-wickr-to-the-team/

We’re excited to share that AWS has acquired Wickr, an innovative company that has developed the industry’s most secure, end-to-end encrypted communication technology. With Wickr, customers and partners benefit from advanced security features not available with traditional communications services – across messaging, voice and video calling, file sharing, and collaboration. This gives security-conscious enterprises and government agencies the ability to implement important governance and security controls to help them meet their compliance requirements.

Today, public sector customers use Wickr for a diverse range of missions, from securely communicating with office-based employees to providing service members at the tactical edge with encrypted communications. Enterprise customers use Wickr to keep communications between employees and business partners private, while remaining compliant with regulatory requirements.

The need for this type of secure communications is accelerating. With the move to hybrid work environments, due in part to the COVID-19 pandemic, enterprises and government agencies have a growing desire to protect their communications across many remote locations. Wickr’s secure communications solutions help enterprises and government organizations adapt to this change in their workforces, and Wickr is a welcome addition to the growing set of collaboration and productivity services that AWS offers customers and partners.

AWS is offering Wickr services effective immediately, and Wickr customers, channel partners, and business partners can continue to use Wickr’s services as they do today. To get started with Wickr, visit www.wickr.com.

Want more AWS Security news? Follow us on Twitter.

Author

Stephen Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Security is the top priority for Amazon S3

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/security-is-the-top-priority-for-amazon-s3/

Amazon Simple Storage Service (Amazon S3) launched 15 years ago in March 2006, and became the first generally available service from Amazon Web Services (AWS). AWS marked the fifteenth anniversary with AWS Pi Week—a week of in-depth streams and live events. During AWS Pi Week, AWS leaders and experts reviewed the history of AWS and Amazon S3, and some of the key decisions involved in building and evolving S3.

As part of this celebration, Werner Vogels, VP and CTO for Amazon.com, and Eric Brandwine, VP and Distinguished Engineer with AWS Security, had a conversation about the role of security in Amazon S3 and all AWS services. They touched on why customers come to AWS, and how AWS services grow with customers by providing built-in security that can progress to protections that are more complex, based on each customer’s specific needs. They also touched on how, starting with Amazon S3 over 15 years ago and continuing to this day, security is the top priority at AWS, and how nothing can proceed at AWS without security that customers can rely on.

“In security, there are constantly challenging tradeoffs,” Eric says. “The path that we’ve taken at AWS is that our services are usable, but secure by default.”

To learn more about how AWS helps secure its customers’ systems and information through a culture of security first, watch the video, and be sure to check out AWS Pi Week 2021: The Birth of the AWS Cloud.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful, inclusive content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Announcing the AWS Security and Privacy Knowledge Hub for Australia and New Zealand

Post Syndicated from Phil Rodrigues original https://aws.amazon.com/blogs/security/announcing-the-aws-security-and-privacy-knowledge-hub-for-australia-and-new-zealand/

Cloud technology provides organizations across Australia and New Zealand with the flexibility to adapt quickly and scale their digital presences up or down in response to consumer demand. In 2021 and beyond, we expect to see cloud adoption continue to accelerate as organizations of all sizes realize the agility, operational, and financial benefits of moving to the cloud.

To fully harness the benefits of the digital economy, it’s important that you remain vigilant about the security of your technology resources in order to protect the confidentiality, integrity, and availability of your systems and data. Security is our top priority at AWS, and more than ever we believe it’s critical for everyone to understand the best practices to use cloud technology securely. Organizations of all sizes can benefit by implementing automated guardrails that allow you to innovate while maintaining the highest security standards. We want to help you move fast and innovate quickly while staying secure.

This is why we are excited to announce the new AWS Security and Privacy Knowledge Hub for Australia and New Zealand.

The new website offers many resources specific to Australia and New Zealand, including:

  • The latest local security and privacy updates from AWS security experts in Australia and New Zealand.
  • How customers can use AWS to help meet the requirements of local privacy laws, government security standards, and banking security guidance.
  • Local customer stories about Australian and New Zealand companies and agencies that focus on security, privacy, and compliance.
  • Details about AWS infrastructure in Australia and New Zealand, including the upcoming AWS Region in Melbourne.
  • General FAQs on security and privacy in the cloud.

AWS maintains the highest security and privacy practices, which is one reason we are trusted by governments and organizations around the world to deliver services to millions of individuals. In Australia and New Zealand, we have hundreds of thousands of active customers using AWS each month, with many building mission-critical applications for their business. For example, National Australia Bank (NAB) provides banking platforms like NAB Connect that offer services to businesses of all sizes, built on AWS. The Australian Taxation Office (ATO) offers the flexibility and speed for all Australians to lodge their tax returns electronically on the MyTax application, built on AWS. The University of Auckland runs critical teaching and learning applications relied on by its 18,000 students around the world, built on AWS. AWS Partner Versent helps businesses like Transurban and government agencies like Service NSW operate in the cloud securely, built on AWS.

Security is a shared responsibility between AWS and our customers. You should review the security features that we provide with our services, and be familiar with how to implement your security requirements within your AWS environment. To help you with your responsibility, we offer security services and partner solutions that you can utilize to implement automated and effective security in the cloud. This allows you to focus on your business while keeping your content and applications secure.

We’re inspired by the rapid rate of innovation as customers of all sizes use the cloud to create new business models and work to improve our communities, now and into the future. We look forward to seeing what you will build next on AWS – with security as your top priority.

The AWS Security and Privacy Knowledge Hub for Australia and New Zealand launched today.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Phil Rodrigues

Phil is the Head of the Security Team, Australia & New Zealand for AWS, based in Sydney. He and his team work with AWS’s largest customers to improve their security, risk and compliance in the cloud. Phil is a frequent speaker at AWS and cloud security events across Australia. Prior to AWS he worked for over 20 years in Information Security in the US, Europe, and Asia-Pacific.

Getting Started with serverless for developers: Part 4 – Local developer workflow

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/getting-started-with-serverless-for-developers-part-4-local-developer-workflow/

This blog is part 4 of the “Getting started with serverless for developers” series, helping developers start building serverless applications from their IDE.

Many “getting started” guides demonstrate how to build serverless applications from within the AWS Management Console. However, most developers spend the majority of their time building from within their local integrated development environment (IDE).

The next two blog posts in this series focus on the serverless developer workflow. They describe how to check logs, test, and iterate on business logic while building locally, and how this differs from traditional applications.

This blog post explains how the developer workflow for building serverless applications differs from a traditional developer workflow. It shows the methods that I use to test business logic locally before deploying to a sandbox AWS developer account, and how to test against cloud services as you build. This eliminates the need to deploy to the AWS Cloud each time you want to test a code change.

Traditional developer workflow

Developers typically use the following workflow cycle before committing code to the main branch:

  1. Write code.
  2. Save code.
  3. Run code.
  4. Check results.

This is sometimes referred to as the inner loop.

With traditional (non-serverless) applications, developers commonly create a development environment on their local machine. This local development environment keeps parity with the staging and production environments. It allows developers to test their application locally end-to-end before committing code to the main branch.

Serverless developer workflow

Part 2: the business logic, explains how serverless applications use managed services that abstract away the need for developers to patch and scale their application infrastructure. This means that the code base in a serverless application is focused purely on business logic, allowing managed services to handle other important layers such as:

  • Authorization
  • Presentation
  • Database
  • Application integration
  • Notification

A good serverless developer workflow enables developers to test and iterate on business logic quickly. It allows them to check that the business logic runs correctly with the managed services that compose an application.

To achieve this, the best approach is not to try to emulate managed services on your local development machine. Instead, your local code should interact directly with real cloud services in a sandboxed AWS account.

This approach lets you rapidly test and iterate on business logic locally, deploying to the development environment to test for infrastructure, security, and environment configuration changes. Once the business logic is ready in the development environment, it can be deployed to other environments (test, staging, production) via CI/CD automation or manually triggered commands.

The following sections explain how I test business logic locally using a custom-written test harness. Each time I create a Lambda function, I create a directory to hold:

  1. The function code.
  2. A relative package.json.
  3. A file called testharness.js.

The test harness file is used to run the Lambda function code on my local development machine. It is configured to mock environment variables loaded from a file named env.json and to load a JSON test event payload from the events directory.
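Based on the relative paths the harness requires (shown in the next section), the directory layout looks roughly like the following; the directory names here are illustrative:

part_4/
├── env.json                  (mock environment variables)
├── events/
│   └── testEvent.json        (JSON test event payload)
└── src/
    ├── app.js                (Lambda function code)
    ├── package.json
    └── testharness.js        (local test harness)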

Testing Lambda function code locally with a test harness

Part 2: the business logic, explains the anatomy of a Lambda function and its handler:

A small Lambda function

Note that the function handler receives an input payload called an event object. To test the local function code effectively, use a test event object that represents the production event object.

In the serverless application introduced in getting started with serverless part 1, Amazon API Gateway invokes the Lambda function.

This invocation contains an event object with a JSON representation of the HTTP request. The event object has a defined structure. Follow the steps in this GitHub repository to see how to create a test event object.
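For reference, a test event for an API Gateway proxy integration has a shape like the following abbreviated sketch; the method, headers, and body values are placeholders, and fields such as requestContext are omitted for brevity:

{
  "resource": "/",
  "path": "/",
  "httpMethod": "POST",
  "headers": { "content-type": "application/json" },
  "queryStringParameters": null,
  "body": "{\"message\": \"hello\"}",
  "isBase64Encoded": false
}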

Before you start

To deploy this stage of the application, follow the steps from post 1 to clone and deploy the sample application.

  1. Run the following commands from the root directory of the cloned repository:
    cd ./part_4/src_starred
    npm install

The following steps show how I configure my testharness.js file to run my function code:

// Mock event
const event = require('../events/testEvent.json')
// Mock environment variables
const environmentVars = require('../env.json')
process.env.AWS_REGION = environmentVars.AWS_REGION
process.env.localTest = true
process.env.slackEndpoint = environmentVars.slackEndpoint
process.env.bucket = environmentVars.bucket
// Lambda handler
const { handler } = require('./app')
const main = async () => {
  console.time('localTest')
  console.dir(await handler(event))
  console.timeEnd('localTest')
}
main().catch(error => console.error(error))
  1. A test event object is loaded into a variable called event.
  2. The environment variables required by the Lambda function are defined in env.json and loaded.
  3. The Lambda function code is loaded into a variable called handler.
  4. The Lambda function code is run, and the harness awaits its result.
  5. The console shows output from the Lambda function code, along with any errors that occur.

I run my test harness file by entering the following command in a terminal window:

$ node testharness.js

The output indicates that the function code ran without error: it returned a 200 status code and completed in 30.999 ms.

To generate an error in the function code, I change the app.js file by commenting out the following line:

//const axios = require('axios');

I save the file and run the test harness again. The terminal now shows a ReferenceError, which indicates an error in my function code that I must resolve before deploying to my AWS account.

By iteratively updating and locally testing my code, I maintain a fast inner loop. This eliminates the need to deploy to the AWS Cloud each time I want to test a code change.

Testing against cloud resources

In many instances, a Lambda function interacts with other cloud resources. This could be via an SDK or some other native integration. In this case, you should deploy those resources to a sandboxed AWS developer account.

In the following example, I update a Lambda function to log each inbound HTTP request to a bucket in Amazon S3, a highly scalable object storage service. To do this, I use the AWS SDK for JavaScript:

s3.putObject(params, function (err, data) { /* handle error or result */ })
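Putting that call in context, a minimal sketch of the logging step might look like the following. It assumes the AWS SDK for JavaScript (v2) and the bucket environment variable that the test harness already mocks; the object key naming is hypothetical:

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

// Store the inbound request payload as a JSON object in S3
const logRequest = async (event) => {
  const params = {
    Bucket: process.env.bucket,        // bucket name from env.json or the SAM stack output
    Key: `request-${Date.now()}.json`, // hypothetical object key
    Body: JSON.stringify(event)
  }
  await s3.putObject(params).promise()
}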

I update the AWS Serverless Application Model (AWS SAM) template to define a new S3 bucket:

SrcBucket:
  Type: AWS::S3::Bucket

I change into the part_4 directory, then build and deploy the application with the following commands:

cd ../part_4
sam build
sam deploy --guided --config-file ../samconfig.toml

After deploying, the stack outputs include the name of the new bucket. I copy this value to the environment variable defined in /part_4/env.json:

{
  "slackEndpoint": "Insert_Slack_Endpoint",
  "bucket": "Insert_S3_Bucket_Name"
}

Now I am ready to test the local Lambda function code against cloud resources in my sandbox AWS development account. I run a new local test with the following command:

node testharness.js

The terminal output indicates that the Lambda function code completed without error. I can verify this by checking the contents of the S3 bucket using the AWS Command Line Interface (AWS CLI):

aws s3 ls s3://githubtoslackapp-srcbucket-ge4wkt9dljwa

The listing confirms that the request object has been saved to S3 and that the application code is running correctly:

Conclusion

Using managed services to build serverless applications helps developers focus on business logic. It also changes the developer workflow compared with building traditional (non-serverless) applications.

A good serverless developer workflow enables developers to test and iterate on business logic quickly while still being able to interact with cloud services. This blog post shows how I achieve this by using a test harness to run function code locally and deploying application resources to a sandboxed developer account.

In the next blog post, I show how to invoke Lambda functions deployed to a sandboxed developer account, without leaving your IDE. This lets you test for infrastructure, security, and environment configuration changes while building.

AWS Shield threat landscape review: 2020 year-in-review

Post Syndicated from Mario Pinho original https://aws.amazon.com/blogs/security/aws-shield-threat-landscape-review-2020-year-in-review/

AWS Shield is a managed service that protects applications that are running on Amazon Web Services (AWS) against external threats, such as bots and distributed denial of service (DDoS) attacks. Shield detects network and web application-layer volumetric events that may indicate a DDoS attack, web content scraping, or other unauthorized non-human traffic that is interacting with AWS resources.

In this blog post, I’ll show you some of the volumetric event trends from network traffic and web request patterns that we observed in 2020 as more workloads moved to the cloud. It includes insights that are broadly applicable to cloud applications and insights that are specific to gaming applications. I will also share tips and best practices that you can follow to protect the availability of the applications that you run on AWS.

DDoS trends as more developers rely on the cloud

In 2020, we saw an increase in developers building applications on AWS and protecting their availability with AWS Shield Advanced, which includes AWS WAF at no additional cost. The DDoS threat vectors we observed were similar to the ones that were observed in 2019, but they occurred with greater frequency. Between February 2020 and April 2020, we observed a 72% increase in the monthly number of events that were detected by Shield.

TCP SYN floods and UDP reflection attacks, which attempt to reflect and amplify packets off legitimate services running on the internet, were among the most common infrastructure-layer events detected by AWS Shield in 2020. (In this blog post, we’ll use the term infrastructure layer to refer to Layers 3 and 4 of the OSI model.) These tactics attempt to affect the availability of an application by overwhelming its ability to process packets or establish new connections on behalf of legitimate users. One of the oldest UDP reflection vectors, DNS reflection, remains the most common, at 15.5% of all infrastructure-layer events detected by Shield. TCP SYN floods were the second most common, at 13.8%. This is unsurprising, because web applications commonly rely upon both DNS and TCP traffic. Due to the properties of these protocols and widespread system misconfiguration, bad actors can find a consistent supply of systems on the internet that can be used as reflectors.

Bad actors may use application-layer requests, in isolation or together with infrastructure-layer attacks, in their attempt to affect the availability of an application. The most common application-layer attack observed by Shield in 2020 was the web request flood, an observation that is consistent with prior years. This vector gives a bad actor more leverage, meaning that they can have a greater effect with less traffic and effort. Instead of having to exhaust the capacity of a network path, device, or other lower-level component, they only need to send more web requests than the application is able to handle. This attack vector was a significant cause of increased volumetric events detected by Shield in the first half of 2020. For more information about events detected by Shield during 2020, see Figure 1.
 

Figure 1: Monthly number of volumetric events detected by AWS Shield in 2020

A closer look at web application-layer attacks

The request volume of web application-layer events that are detected by AWS Shield has increased, an indication that bad actors are making greater investments in tactics that are more challenging to detect and mitigate than infrastructure-layer events. Shield continuously monitors DDoS activity and alerts customers if there is an elevated threat at any point in time. In 2020, Shield reported elevated threats on 53 days, 33 of which were caused by high-volume web request floods. There were 55 events with a volume of greater than 500,000 requests per second (RPS), some of which reached millions of RPS. The RPS of the 99th percentile (P99) of the volume of web request floods detected by Shield nearly doubled between the first and second halves of the year. (The 99th percentile is the request volume in RPS below which 99% of request floods were observed.) For more information about the volume of web request floods detected by Shield in 2020, see Figure 2.
 

Figure 2: Quarterly P90 and P99 volume of web request floods detected by AWS Shield in 2020

It’s important to protect web applications against DDoS attacks of any size. The more common request floods are relatively small, but smaller attacks can affect an application if it isn’t architected for DDoS resiliency. You can follow these best practices to help protect your web application against request floods and other DDoS attacks:

  • Protect internet-facing resources with AWS Shield Advanced. You can use AWS Shield Advanced to protect your applications that are running on AWS against most common, frequently occurring network and transport layer DDoS attacks. When you add protected resources in AWS Shield Advanced, network volumetric attacks against those resources are detected and mitigated more quickly. You also receive visibility into security events by using the AWS Shield console, API, or Amazon CloudWatch metrics. If you need assistance during an active event, you can quickly engage with AWS Shield experts or escalate to the AWS Shield Response Team (SRT).
  • Access greater network and request capacity with Amazon CloudFront and Amazon Route 53. You can use these services to serve static and dynamic web content, as well as DNS answers, by using the global network of AWS edge locations. This provides you with greater capacity to help mitigate large volumetric attacks. Applications that are fronted by Amazon CloudFront and Amazon Route 53 also benefit from inline mitigation that continually inspects all traffic and mitigates most infrastructure-layer DDoS attempts in less than one second. CloudFront and the AWS Shield DDoS mitigation systems use SYN cookies to verify new connections, which protects against SYN floods and other traffic floods that aren’t valid for the application. (A SYN cookie is a technique by which the Shield infrastructure encodes connection setup information into the SYN response (SYN-ACK packet) in such a way that the TCP connection resources are only consumed for legitimate clients who complete the TCP handshake.)
  • Use AWS WAF and rate-based rules to mitigate application-layer attacks. AWS Shield Advanced provides you with protection against infrastructure-layer attacks that can be mitigated with network-based DDoS mitigation systems. When you add Shield Advanced protection to CloudFront or Application Load Balancer (ALB) for serving web content, you receive AWS WAF at no additional cost. AWS Managed Rules for AWS WAF makes it easy to select and apply pre-configured rules, depending on your specific requirements. You also receive web request flood detection and can mitigate security events by configuring rate-based rules to match and temporarily block IP addresses that are sending traffic above a rate that you define (see the sketch after this list). For larger applications, or applications that span multiple AWS accounts, you can use AWS Firewall Manager to deploy and manage rules across all of your resources.
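To make the rate-based rule idea concrete, here is a minimal sketch of an AWS WAFv2 web ACL in CloudFormation-style YAML that temporarily blocks source IP addresses exceeding a chosen request rate. The resource names and the 2,000-requests-per-5-minutes limit are assumptions for illustration, not values from this post:

Resources:
  RateLimitWebACL:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: rate-limit-web-acl      # hypothetical name
      Scope: REGIONAL               # use CLOUDFRONT for CloudFront distributions
      DefaultAction:
        Allow: {}
      VisibilityConfig:
        SampledRequestsEnabled: true
        CloudWatchMetricsEnabled: true
        MetricName: rateLimitWebACL
      Rules:
        - Name: rate-limit-rule
          Priority: 0
          Action:
            Block: {}               # temporarily block IPs above the rate
          Statement:
            RateBasedStatement:
              AggregateKeyType: IP  # track request rate per source IP
              Limit: 2000           # requests per 5-minute window (assumed value)
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: rateLimitRule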

Considerations unique to gaming use cases

On AWS, you can build and protect any kind of application. Internet-facing applications are more likely to receive DDoS attacks, particularly if a bad actor is motivated to disrupt the normal function of the application. We looked across AWS Shield data and found that one type of application stood out as the most likely to be targeted by DDoS attacks: gaming servers. Gaming servers host matches between players on their personal computers or gaming consoles. In 2020, 16% of infrastructure-layer events detected by Shield targeted gaming applications. An application might be targeted simply out of malice, or to gain an advantage in the game. Between Q1 2020 and Q2 2020, we observed a 46% increase in the frequency of events detected on behalf of gaming applications. This increase aligns with the increased use of residential internet networks during the same time.

There are unique considerations for protecting a gaming application against DDoS attacks. Many gaming applications rely upon UDP traffic, which makes it infeasible to block UDP as a countermeasure against the most common DDoS attacks, like UDP reflection attacks or UDP floods. You can nevertheless protect your gaming application and the experience of your players by using Elastic IP addresses and protecting these resources with AWS Shield Advanced. Shield Advanced can perform deep packet inspection of all traffic, even at extremely high packets-per-second (PPS) rates. Using that capability, the AWS Shield Response Team (SRT) can work with you to understand your application and build a custom mitigation that allows only valid player traffic.
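
If you'd like to see what this looks like in practice, the following sketch adds Shield Advanced protection to an Elastic IP address through boto3. It assumes an active Shield Advanced subscription, and the account number, allocation ID, and protection name are placeholders.

```python
import boto3

shield = boto3.client("shield", region_name="us-east-1")

# Shield Advanced protects Elastic IPs by their EC2 allocation ARN.
eip_arn = ("arn:aws:ec2:us-east-1:111122223333:"
           "eip-allocation/eipalloc-0123456789abcdef0")

protection = shield.create_protection(
    Name="game-server-eip",   # hypothetical protection name
    ResourceArn=eip_arn,
)
print(protection["ProtectionId"])
```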

Reacting to extortion attempts

From August 2020 through November 2020, we saw a revival of DDoS extortion attempts, a tactic that is now more than six years old. Each extortion attempt reported by customers to the AWS SRT had familiar characteristics. A malicious actor would target an application that wasn’t running on AWS as a proof of concept and then threaten a larger, follow-on attack if a ransom wasn’t paid. Although it’s very uncommon for the follow-on attack to actually occur, application owners take these threats seriously and use the opportunity to assess their own protection and operational readiness. In approximately 90% of AWS support cases related to these attempts, the SRT assisted the application owners directly with their preparation. We also assisted Shield Advanced customers who weren’t directly targeted by extortion attempts but were aware of other extortion campaigns.

One question that we frequently hear is how AWS can help developers monitor their applications and take quick action if a possible DDoS attack is detected. When you protect your resources with AWS Shield Advanced, you have the option to associate an Amazon Route 53 health check. The status of the health check is used to improve the decisions that are made by the Shield detection system. If you have Shield Advanced proactive engagement enabled, the SRT is automatically engaged any time a Shield event corresponds to an unhealthy Route 53 health check that is associated with your protected resource. Based on the contact information provided in the Shield console, an SRT engineer will contact you to coordinate a response to the detected event. If you're running a web application, you can choose to delegate access to your Shield Advanced and AWS WAF APIs to the SRT and provide the team with copies of your AWS WAF logs. During an escalation, an SRT engineer will evaluate your logs for DDoS signatures and robotic patterns and assist in building effective mitigations.
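
As a rough illustration of the health check association, the sketch below creates a Route 53 health check and ties it to an existing Shield Advanced protection. The domain, health endpoint, and protection ID are placeholders.

```python
import uuid
import boto3

route53 = boto3.client("route53")
shield = boto3.client("shield", region_name="us-east-1")

# Create a health check that probes the application's health endpoint.
health_check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
hc_arn = "arn:aws:route53:::healthcheck/" + health_check["HealthCheck"]["Id"]

# Associate the health check so its status informs Shield detection.
shield.associate_health_check(
    ProtectionId="11111111-2222-3333-4444-555555555555",  # placeholder
    HealthCheckArn=hc_arn,
)
```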

Summary

In this blog post, I shared some of the trends that were observed by AWS Shield in 2020, as well as steps that you can take to protect the availability of your applications against DDoS attacks. If you’d like to learn more about DDoS protection on AWS and configuring AWS Shield Advanced, check out the following resources:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Shield forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mário Pinho

Mário is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time, he pretends to be an artist by playing piano and doing landscape photography.

AWS Verified episode 5: A conversation with Eric Rosenbach of Harvard University’s Belfer Center

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-5-a-conversation-with-eric-rosenbach-of-harvard-universitys-belfer-center/

I am pleased to share the latest episode of AWS Verified, where we bring you conversations with global cybersecurity leaders about important issues, such as how to create a culture of security, cyber resiliency, Zero Trust, and other emerging security trends.

Recently, I got the opportunity to experience distance learning when I took the AWS Verified series back to school. I got a taste of life as a Harvard grad student, meeting (virtually) with Eric Rosenbach, Co-Director of the Belfer Center of Science and International Affairs at Harvard University’s John F. Kennedy School of Government. I call it, “Verified meets Veritas.” Harvard’s motto may never be the same again.

In this video, Eric shared with me the Belfer Center’s focus as the hub of the Harvard Kennedy School’s research, teaching, and training at the intersection of cutting-edge, interdisciplinary topics, such as international security, environmental and resource issues, and science and technology policy. In recognition of the Belfer Center’s consistently stellar work and its six consecutive years ranked as the world’s #1 university-affiliated think tank, in 2021 it was named a center of excellence by the University of Pennsylvania’s Think Tanks and Civil Societies Program.

Eric’s deep connection to the students reflects the Belfer Center’s mission to prepare future generations of leaders to address critical areas in practical ways. Eric says, “I’m a graduate of the school, and now that I’ve been out in the real world as a policy practitioner, I love going into the classroom, teaching students about the way things work, both with cyber policy and with cybersecurity/cyber risk mitigation.”

In the interview, I talked with Eric about his varied professional background. Prior to the Belfer Center, he was the Chief of Staff to US Secretary of Defense, Ash Carter. Eric was also the Assistant Secretary of Defense for Homeland Defense and Global Security, where he was known around the US government as the Pentagon’s cyber czar. He has served as an officer in the US Army, written two books, been the Chief Security Officer for the European ISP Tiscali, and was a professional committee staff member in the US Senate.

I asked Eric to share his opinion on what the private sector and government can learn from each other. I’m excited to share Eric’s answer to this with you as well as his thoughts on other topics, because the work that Eric and his Belfer Center colleagues are doing is important for technology leaders.

Watch my interview with Eric Rosenbach, and visit the AWS Verified webpage for previous episodes, including interviews with security leaders from Netflix, Vodafone, Comcast, and Lockheed Martin. If you have an idea or a topic you’d like covered in this series, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

C5 Type 2 attestation report now available with one new Region and 123 services in scope

Post Syndicated from Mercy Kanengoni original https://aws.amazon.com/blogs/security/c5-type-2-attestation-report-available-one-new-region-123-services-in-scope/

Amazon Web Services (AWS) is pleased to announce the issuance of the 2020 Cloud Computing Compliance Controls Catalogue (C5) Type 2 attestation report. We added one new AWS Region, Europe (Milan), and 21 additional services and service features to the scope of the 2020 report.

Germany’s national cybersecurity authority, Bundesamt für Sicherheit in der Informationstechnik (BSI), established C5 to define a reference standard for German cloud security requirements. Customers in Germany and other European countries can use AWS’s attestation report to help them meet local security requirements of the C5 framework.

The C5 Type 2 report covers the time period October 1, 2019, through September 30, 2020. It was issued by an independent third-party attestation organization and assesses the design and the operational effectiveness of AWS’s controls against C5’s basic and additional criteria. This attestation demonstrates our commitment to meet the security expectations for cloud service providers set by the BSI in Germany.

We continue to add new Regions and services to the C5 compliance scope so that you have more services to choose from that meet regulatory and compliance requirements. AWS has added the Europe (Milan) Region and the following 21 services and service features to this year’s C5 scope:

You can see a current list of the services in scope for C5 on the AWS Services in Scope by Compliance Program page. The C5 report and Continuing Operations Letter are available to AWS customers through AWS Artifact. For more information, see Cloud Computing Compliance Controls Catalogue (C5).

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mercy Kanengoni

Mercy is a Security Audit Program Manager at AWS. She leads security audits across Europe, and she has previously worked in security assurance and technology risk management.

How AWS SSO Active Directory sync enhances AWS application experiences

Post Syndicated from Sharanya Ramakrishnan original https://aws.amazon.com/blogs/security/how-aws-sso-active-directory-sync-enhances-aws-application-experiences/

Identity management is easiest when you can manage identities in a centralized location and use these identities across various accounts and applications. You also want to be able to use these identities for other purposes within applications, like searching through groups, finding members of a certain group, and sharing projects with other users or groups. For example, when you use AWS Systems Manager Change Manager, you might want to search for groups or distinguish a user from a list of users with the same name based on their email address. You expect that the user and group details you see are consistent with the details that appear in a different application.

AWS Single Sign-On (AWS SSO) streamlines identity management by enabling you to connect an identity provider (IdP), such as the AWS internal directory or one from a range of supported partners, and use the IdP’s identity information for access and collaboration within applications. Now you can get the same benefits when you connect your Microsoft Active Directory (AD) as your AWS SSO identity source. With the release of AWS SSO AD sync, you’ll be able to access AD groups, along with AD users, from AWS SSO-integrated applications, and use these groups and users for collaborative experiences. AD sync automatically brings identity information from your Active Directory into AWS SSO and makes this information available to you within applications. It makes sure that the user and group details you access in Amazon Web Services (AWS) stay consistent with information in Active Directory through periodic synchronizations.

In this post, I’ll walk you through key use cases that highlight how applications use the user and group information that is synchronized from Active Directory and how the AD synchronization capability works to make this possible.

Access control

Your ability to manage who can access which parts of an application or who has the necessary permissions to drive certain tasks within an application relies on the application’s ability to retrieve user and group information. It’s also important that any access that you configure is updated dynamically when there are any changes at the source. For example, if you define approval access to a group in an application and a member leaves the group when they change roles within the company, their group-based access within the application should be revoked. With AD sync, AWS SSO-integrated applications can utilize user and group information that is periodically updated, and therefore stays current.

Suppose you’ve set up an approval template in Systems Manager Change Manager for patching instances and want to require that all members of the IT Security Operations team approve any change requests created with this template. AD sync enhances this process by giving you the option to define approvers at the AD group level. If you have an IT Security Operations group in Active Directory and the group has permissions set up to access AWS SSO, this group will be available to you in Change Manager to select as an approver in your template. If a member of the IT Security Operations group switches roles and leaves the team, AD sync helps to ensure that the member’s access to approve patching-related change requests is revoked, by dynamically updating the IT Security Operations group in Change Manager once the member is removed from the group in Active Directory.

It’s common for teams at companies to work on cross-functional initiatives that involve sharing projects, reports, or dashboards with members of different teams for their review and feedback, or for collaboration. In such cases, you want to be able to easily search for users and groups within the application and share out relevant artifacts. AD sync makes it possible to access users and groups within AWS SSO-integrated applications, and you can then use this information for searching and sharing.

For example, if you use an AWS SSO-integrated application like AWS IoT SiteWise to create and share dashboards for metrics reviews with leadership or to collaborate with other teams in your organization, you’ll now be able to see all users with access to AWS. AD sync makes it possible for AWS IoT SiteWise to access all users, rather than only the users who signed in to AWS at least once.

Administrative efficiency

If you’re a platform admin or cloud admin who manages access to AWS SSO in your company, assigning users and groups with access to AWS accounts and resources is a routine task that requires administrative effort. Because AD sync periodically syncs AD groups into AWS SSO, you only need to pre-define access to resources for an AD group once. After that point, any new member, such as a new employee, who is added to the AD group in Active Directory will gain access to resources tied to the AD group. The new employee will also be added to AWS SSO through AD sync, and their information will stay current through periodic syncs. Therefore, the administrative effort involved on your end for managing users is reduced.

Similarly, if an employee leaves the company, you will no longer have to worry about deleting their information in AWS, because AD sync automatically deletes user and group objects that you delete in Active Directory. This simplifies your user lifecycle management and reduces the manual effort involved in the process.

How Active Directory sync works in the background

This new AD sync feature is for customers who want to use their AD identities with AWS SSO, without setting up a separate IdP, such as Active Directory Federation Services (AD FS) or Azure AD. To use this capability, you must connect AWS SSO to your Active Directory by using AWS SSO with either AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) or AD Connector. Learn more about using AWS Managed Microsoft AD and AD Connector.

AD sync brings in user and group information from your Active Directory and stores it in the AWS SSO identity store. Once this information is synchronized, AWS SSO-integrated applications can use the user and group information to deliver collaborative experiences, such as sharing a dashboard with other users.

AD sync obtains a list of users and groups to be synchronized from Active Directory based on the assignments that you make to AWS accounts and applications. It then syncs those users and groups (including the group members) into the AWS identity store, keeping the information updated through periodic syncs, as shown in Figure 1.

Figure 1: Active Directory synchronization of users and groups

If a user has assignments based on attribute-based access-control (ABAC) and changes departments, attributes will automatically update at the next sync. If a user happens to sign in before the next sync, the attributes will be updated at sign-in to maintain consistency. The user will now see their assignments updated based on their new department.

AD sync also syncs in all members of a group, including sub-groups (nested groups). It flattens members of the nested groups; that is, it adds them to the parent group in the AWS SSO identity store. For example, if Group B is a member (nested group) of Group A in Active Directory, then members of Group B are also synced into AWS SSO and added directly to Group A, as shown in Figure 2. As a result, only Group A can be used in AWS SSO accounts and applications.

Figure 2: Members of nested Group B flattened and added to parent Group A
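
To reason about the flattening behavior, here is an illustrative sketch in Python. This is not the AD sync implementation; it simply models the result you will observe in the AWS SSO identity store.

```python
def flatten_group(group, membership):
    """Return all users in `group`, expanding nested groups recursively."""
    users = set()
    for member in membership.get(group, []):
        if member in membership:            # member is itself a group
            users |= flatten_group(member, membership)
        else:                               # member is a user
            users.add(member)
    return users

# Group B is nested inside Group A.
membership = {
    "GroupA": ["alice", "GroupB"],
    "GroupB": ["bob", "carol"],
}

# AD sync would surface GroupA in AWS SSO with all three users as
# direct members; GroupB is not separately assignable.
print(sorted(flatten_group("GroupA", membership)))  # ['alice', 'bob', 'carol']
```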

If you delete a user or group in Active Directory, AD sync automatically deletes the user or group from the AWS SSO identity store. You won’t see the deleted identity appear in AWS SSO-integrated applications, either. However, if you only delete the assignments for a user or group, the user or group will remain in AWS SSO and won’t be automatically deleted.

Summary

In this blog post, I explained how user and group synchronization can help deliver better application experiences with less administrative effort. I also covered how the AWS SSO AD sync capability delivers this benefit for applications such as AWS Systems Manager and AWS IoT SiteWise. AD sync capability is available to you at no additional cost in all AWS Regions supported by AWS SSO. If you want to get started with AWS SSO or learn more about AD sync, see the AWS SSO User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sharanya Ramakrishnan

Sharanya is a Senior Technical Product Manager in the AWS Identity team. She enjoys solving customer problems through meaningful products, particularly in the dynamic security and identity space. Outside of work, Sharanya likes to travel and enjoys hiking and reading.

Essential security for everyone: Building a secure AWS foundation

Post Syndicated from Byron Pogson original https://aws.amazon.com/blogs/security/essential-security-for-everyone-building-a-secure-aws-foundation/

In this post, I will show you how teams of all sizes can gain access to world-class security in the cloud without a dedicated security person in their organization. I look at how small teams can build securely on Amazon Web Services (AWS) in a way that’s cost effective and time efficient. I show you the key elements to create a foundation with good security controls, and how you can then use that foundation as a base to build a secure workload upon. In this post, I will also share a lab guide to get you started today. It may look like a lot of work, but I ran this as a day-long workshop across Australia in 2019, reaching many start-ups and small businesses, and the majority of them implemented the guide by mid-afternoon.

Many large organizations run their regulated workloads on AWS, and they have gone through rigorous processes to ensure that the right security controls are in place. Customers of all sizes have those same security controls available to them. If you go to the AWS Startups Blog, you can read the story of two Australian customers and their journeys to set up a secure foundation on AWS: Tic:Toc, an Australian scaleup in the financial services industry, and FYI, a start-up with their document and process management system for accounting practices.

The Well-Architected Framework has been developed to help cloud architects build secure, high performance, resilient, and efficient infrastructure for their applications. Based on five pillars—operational excellence, security, reliability, performance efficiency, and cost optimization—the Framework provides a consistent approach for customers and partners to evaluate architectures and implement designs that will scale over time. In this post, I will discuss the key areas from the security pillar to help you build a secure foundation. These areas are:

  • Security foundations. You can use an AWS account as a coarse boundary for isolating resources and use cross-account roles to share common infrastructure. Protect your AWS accounts and use tools like AWS Control Tower to help you get started quickly.
  • Identity and access management. Be deliberate about who has access to what.
  • Detection. Start with the implementation of baseline logging and monitoring. Do this in a way that’s implemented automatically so it is scalable. When incidents occur, this will help to ensure that basic log data is in place to aid your investigations. Configure alerts for key events and define your response plan so you are prepared to take action.
  • Infrastructure and data protection. Apply defense in depth, starting with the features that AWS provides you, to help build a secure application.
  • Incident response. Ensure that your team is prepared to respond to incidents by educating your team, creating a response plan, simulating scenarios so your team knows what to do before an incident happens, and iterating to improve your plan.

Small teams want to move fast and deliver value. To support that, you want to build a secure foundation. This post focuses on the key initial steps to help you achieve that. To help guide you through the content in this post and implement your foundation faster, we have a Quick Steps to Security Success quest in our Well-Architected Labs.

Security foundations

With a strong foundation in place to support your workload, you can look at how to build securely on top of it. Security is part of every feature, not a separate feature to be implemented later. Teams need to be comfortable with the idea that a feature isn’t complete just when it’s tested and in production. Adjust your culture to think of complete as meaning tested and secure in production.

An AWS account is a boundary within which resources are deployed. You can open multiple AWS accounts for different purposes. For example, you can split different workloads across multiple environments in different accounts, provide developer sandbox accounts, or isolate resources such as a security account. A workload (a collection of systems and applications that meets a specific business objective) can be a useful guide for determining what needs to be deployed into separate accounts. From a security point of view, being able to use an AWS account as a boundary helps isolate different parts of your workloads. The account boundary acts as a coarse isolation boundary, and you have to be deliberate about how you allow access to resources in it. For human access, this can form a basis for providing least privilege access, an IAM best practice for ensuring that users only have the permissions required to fulfill their tasks.

A best practice is to keep users away from data – least privilege could start with not providing access to the production environment. One way to achieve this is to create a separate account for your production workload and ensure that all regular operations are performed at a distance through tools such as pipelines or ticketing systems. Where human access is essential, only grant temporary human access for a fixed period of time. In addition to limited IAM policies, you can give people access only to AWS accounts containing the workload they need access to. For machine-to-machine access you can apply the same concepts and use cross-account access.

At a minimum, it’s a best practice to have a separate organizational management account that’s only used to establish controls across your set of accounts and for configuring identity and access management within your organization. The same identity configuration is then used across accounts. Also, set up a dedicated account for logging to more securely store data such as audit logs. To increase security, create an audit account that has read-only access to the logs and other accounts used by your security team. Then create different accounts for different environments and workloads.

The easiest way to get started creating and organizing accounts is to use AWS Control Tower, which will set up separate logging and audit accounts, an AWS Single Sign-On (AWS SSO) directory—which supports identity federation with SAML 2.0—as well as a few basic guardrails. AWS SSO can also give users a single view of all the accounts and roles within those accounts that they have access to. AWS Control Tower also includes a basic account-creation tool—the Account Factory—that you can use to create additional accounts within your AWS account structure.

Guardrails are an important mechanism that customers can implement to help maintain security in the cloud. AWS Control Tower provides two types of guardrails: preventive and detective.

Preventive guardrails are designed to prevent users from performing certain actions; for example, preventing a user from disabling security logging. AWS Control Tower implements preventive guardrails by using service control policies (SCPs), a feature of AWS Organizations that you can use to set the maximum boundary for what is allowed in an account. These guardrails are either enforced or disabled.
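
As a rough illustration of the mechanism underneath a preventive guardrail, the sketch below creates and attaches an SCP through the AWS Organizations API. AWS Control Tower manages its own guardrail policies, so treat this as the underlying mechanism rather than a Control Tower API; the policy name and OU ID are placeholders.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny any principal in the targeted accounts from disabling CloudTrail.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-cloudtrail-disable",          # hypothetical name
    Description="Prevent users from disabling security logging",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the policy to an organizational unit (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```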

Detective guardrails look at the state of resources in an account by using AWS Config rules and indicate whether those resources are compliant with the rules: for example, a rule that checks for Amazon Simple Storage Service (Amazon S3) buckets that are publicly accessible. If you need to have data publicly available, be deliberate about how you do it.
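
Here is a minimal sketch of a detective control of this kind, enabling the AWS managed Config rule that flags publicly readable S3 buckets. It assumes AWS Config is already recording resources in the account and Region.

```python
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Description": "Flags S3 buckets that allow public read access",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)
```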

AWS Control Tower has a number of mandatory guardrails that are necessary for the operation of AWS Control Tower as well as a number of strongly recommended and elective guardrails. The strongly recommended and elective guardrails help to ensure that you’re building a strong security posture as soon as you enable them.

There is no additional charge to use AWS Control Tower. However, when you set up AWS Control Tower, you will begin to incur costs for AWS services configured to set up your landing zone and mandatory guardrails. For further details see the AWS Control Tower pricing.

Identity and access management

Identity forms the basis of validating that users are who they say they are and how you give them permission to operate in your environment.

When you sign up for an AWS account, the first login you receive is the root user credentials. The root user credentials are very powerful and allow full access to all resources in the account. It’s critical that you protect your root account from unauthorized access, starting with multi-factor authentication. Multi-factor authentication uses a password (something you know) plus something you have (such as a one-time key or a hardware token) to create a more secure login. After you set up multi-factor authentication, both factors are required to access the root account. After that, use the root account only in emergencies, not in day-to-day operations. Moving from the root account to centralized identities allows you to manage your identities centrally and tie every action taken in your environment back to an individual. The most effective way to enable connecting all actions to individual users is through federation.

Federation lets you reuse your existing identities, such as those you have in your organization’s identity directory. When a user joins your organization, the first thing you’re likely to do is give them an identity (so they can do things like access your email systems), and when they leave, you would remove that identity and therefore the access. By federating your AWS accounts with your existing identity directory, you can use the same mechanisms that are tied to your business processes to provide AWS access. Using tools like AWS Single Sign-On (AWS SSO) enables you to quickly federate access for your users and maintain a mapping of the AWS IAM roles (an identity with specific permissions that can be assigned to or assumed by other identities) they have access to across accounts in your organization. If you don’t have an existing identity store, you can still centralize identities by using the built-in identity store in AWS SSO. When you are assigning permissions, be deliberate with what access you give different users. Ensure that you’re creating and assigning roles based on least privilege—giving only as much access as users need to perform their tasks.

IAM is a feature of your AWS account, and both IAM and AWS SSO are offered at no additional charge. Implementing SSO is a low-effort way to build a strong identity foundation. If you’ve been operating for a while on AWS, you should perform an audit of your existing AWS Identity and Access Management (IAM) users with a goal of moving to a centralized model. An audit of your IAM resources (and centralized identities) will help you to understand who has access to your AWS environment, clear unused credentials, and check that users are assigned permissions that are relevant for their role. IAM access advisor will allow you to see when services were last accessed. Tools such as IAM Access Analyzer will help you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. At the same time, make sure that your account contacts are up to date so that you don’t miss any important information from AWS. You can update these details under AWS billing and management in the console.
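
One low-effort way to start such an audit is the IAM credential report, sketched below with boto3. The columns printed are a small subset of what the report contains.

```python
import csv
import io
import time
import boto3

iam = boto3.client("iam")

# Report generation is asynchronous; poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    # Surface when each user's password and first access key were last used.
    print(row["user"],
          row["password_last_used"],
          row["access_key_1_last_used_date"])
```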

Detection

After some baseline controls are in place, you need to add controls to ensure that you are aware of what is happening in the environment and that actions are logged. To help you with governance, compliance, and auditing your AWS environment, you can configure AWS CloudTrail. A CloudTrail log shows you who attempted to take what actions against resources in your AWS account and whether the action was allowed or denied. Having a secure store of these logs provides you with an audit history of who did what in your environment. AWS Control Tower configures a secure log store for you in the logging account.
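
If you’re not using AWS Control Tower, a minimal trail can be configured directly, as in the sketch below. The trail and bucket names are placeholders, and the S3 bucket policy must already grant CloudTrail write access.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",                 # hypothetical trail name
    S3BucketName="example-log-archive",     # hypothetical bucket
    IsMultiRegionTrail=True,                # capture activity in all Regions
    EnableLogFileValidation=True,           # detect tampering with log files
)
cloudtrail.start_logging(Name="org-audit-trail")
```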

Amazon GuardDuty is a security service that uses intelligent threat detection to alert you to unusual activity in your environment. GuardDuty analyzes CloudTrail logs, VPC Flow Logs, and DNS logs to alert you to malicious activity and unauthorized behavior in your workload. GuardDuty builds a baseline of activity in your account over time and alerts you when behavior that strays from the baseline is detected. For example, GuardDuty sends an alert when a user tries to escalate their privileges. These events can be configured in Amazon CloudWatch Events for alerting and for triggering automatic actions—for example, triggering an AWS Lambda function to disable the user trying to escalate their privileges until you can contact them.
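
Turning GuardDuty on is a single API call per account and Region; the sketch below is a minimal example.

```python
import boto3

guardduty = boto3.client("guardduty")

detector_id = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)["DetectorId"]
print("GuardDuty detector:", detector_id)
```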

Implementing dashboards through Amazon CloudWatch, or using those provided with detective tools such as Amazon GuardDuty, can give you a clear idea of what’s happening in your environment, but you should also configure alerting for key events. An initial, temporary way of achieving this is to create an Amazon CloudWatch Events rule with an Amazon SNS topic as the destination and have your team subscribe their email addresses to the SNS topic. As part of setting up alerts, ensure that there’s a remediation process defined for each alert that includes what action to take when the alert is triggered. In the longer term, as your cloud skills mature, you can evolve this to filter out alerts appropriately and iterate your response and remediation processes.
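
As a rough sketch of that temporary alerting path, the following wires GuardDuty findings to an email subscription through a CloudWatch Events (EventBridge) rule and an SNS topic. The names and email address are placeholders, and the topic’s access policy must also allow events.amazonaws.com to publish, which isn’t shown.

```python
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="security-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="security-team@example.com")

# Match every GuardDuty finding and fan it out to the topic.
events.put_rule(
    Name="guardduty-findings-to-email",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
)
events.put_targets(
    Rule="guardduty-findings-to-email",
    Targets=[{"Id": "sns-target", "Arn": topic_arn}],
)
```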

Having a single view of what’s happening in your infrastructure across all accounts and relevant Regions gives you a clear picture of the overall state of your environment. Consider using AWS Security Hub to bring together alerts from GuardDuty, other AWS services such as Amazon Inspector (for network availability and common vulnerabilities and exposures analysis), and partner products. Security Hub lets you consolidate findings from multiple sources and normalize them so they are comparable. This allows you to have a single view of where you need to take action and what high-priority actions are required. Security Hub also allows you to enable compliance checks on your AWS infrastructure to help you adhere to best practices. A great starting point is the AWS Foundational Security Best Practices standard.
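
The sketch below enables Security Hub and the AWS Foundational Security Best Practices standard in one Region. The standard’s ARN is Region-specific, so verify it for your Region before using it.

```python
import boto3

region = "us-east-1"
securityhub = boto3.client("securityhub", region_name=region)

securityhub.enable_security_hub()
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[{
        "StandardsArn": (
            f"arn:aws:securityhub:{region}::standards/"
            "aws-foundational-security-best-practices/v/1.0.0"
        )
    }]
)
```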

Both GuardDuty and Security Hub include a free trial period and scale with usage after you turn them on. You can use the trial period to estimate what they will cost to use in all your AWS accounts.

Infrastructure and data protection

Build protection in layers and be aware of what features are available in the services that you’re using. Many AWS services include a specific section on security as part of their developer documentation. Before you add an AWS service, read the security section of the documentation and understand what options are available to you. Make sure that you understand the cloud-native AWS security services that integrate with the services you use. AWS Key Management Service integrates with many AWS services to enable encryption at rest; for example, you can enable default encryption for all EBS volumes in a Region. AWS Certificate Manager provides public certificates, which integrate with Elastic Load Balancing and Amazon CloudFront to encrypt data in transit. Public SSL/TLS certificates provisioned through AWS Certificate Manager are free; you pay only for the AWS resources you create to run your application. You can implement AWS WAF (web application firewall) and AWS Shield to protect your HTTPS endpoints. Where possible, use managed services such as Amazon RDS, AWS Lambda, and Amazon ECS to reduce your security maintenance tasks as part of the shared responsibility model.
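
For example, here is a minimal sketch of enabling default EBS encryption for a Region. This is a per-Region setting and affects newly created volumes only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

ec2.enable_ebs_encryption_by_default()
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])  # True
```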

Incident response

Once you have your baseline security controls in place, your team needs to be prepared to respond effectively during an incident. This includes defining your incident response goals, educating your team, and preparing to respond. Simulating events helps the team learn your processes and tools. Always iterate to improve the process for the future. As a start, consider using the GuardDuty finding types as the basis for what you should be able to respond to. Look through the finding types, identify which findings are most applicable, and write a runbook outlining the steps for how you would respond. For each finding type, test your response process. In doing so, your team will ensure that they have the right tools available, have the right emergency access, and know who they need to escalate to and collaborate with. By simulating your response process, your team becomes practiced in how to respond and will reduce the time to recovery if an incident should occur.

When comfortable with the process, automate it. For example, create a Lambda function to perform remediation without you having to wake up in the middle of the night to take action. This can be built up over time as you build a baseline of events. Spending some time thinking through priority events for your environment will help you develop a playbook to respond to them. When you’re comfortable with what incident responses you need, you can automate those responses so remediation is triggered when an event occurs, though you may also want a human to verify before triggering a potentially impactful response.

For example, one of the GuardDuty finding types identifies when an EC2 instance is querying an IP address that is associated with cryptocurrency-related activity. The suggested remediation is to investigate the instance, create a snapshot, consider stopping it and starting a new instance, and raise a support case. Your runbook could outline how to do each of those steps, or you could use CloudWatch Events to trigger a Lambda function that places the instance in an isolated security group with no internet access for later investigation. Further examples of automation can be found in the Getting Hands on with Amazon GuardDuty labs.
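
As a rough sketch of such an automated response, the Lambda handler below snapshots the instance’s volumes and then moves the instance into a quarantine security group. The quarantine group (with no inbound or outbound rules) is assumed to exist and is supplied through an environment variable.

```python
import os
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # GuardDuty EC2 findings carry the instance ID in the resource details.
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]

    # Preserve evidence before changing anything.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description=f"Forensic snapshot for {instance_id}",
        )

    # Swap the instance into an isolated security group for investigation.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[os.environ["QUARANTINE_SG_ID"]],
    )
    return {"isolated": instance_id}
```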

Conclusion

In this post, I’ve shown you some of the techniques and services that can be used to build a secure foundation. Build a strong security foundation and have a multi-account strategy that allows you to isolate different workloads within your organization. A strong identity foundation ensures that you know who is doing what in your environment. Logging and monitoring ensures that you are ready to take action. Building security in layers and using the service features available as you build ensures that you’re using the security controls available to all customers on the platform. Be prepared to respond to incidents and regularly practice your response process so your team is ready if an incident should occur.

A secure foundation is just the start. Remember that security is not a separate feature, and new features are not complete until they’re tested and securely in production. Build a security culture of continuous improvement, and take action to ensure that you remain secure as you build out your workloads. Iterate to continue to reduce risk. Use the AWS Well-Architected Tool, which allows you and your team to review your workload against best practices, and pair it with the Well-Architected labs for hands-on learning. As mentioned above, a lab to help you implement the content in this blog post yourself can be found in the Quick Steps to Security Success lab. Don’t forget that you can also read stories from two AWS customers—Tic:Toc and FYI—on the AWS Startups Blog.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on one of the AWS Security, Identity, and Compliance forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Byron Pogson

Byron helps organizations use technology to solve their biggest problems. With Amazon Web Services, he works with teams across Western Australia and South Australia to help refine their technology strategies in this fast-changing environment. You can often find him talking about security best practices to help organizations of all sizes move fast and stay secure.

Fall 2020 PCI DSS report now available with eight additional services in scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2020-pci-dss-report-now-available-with-eight-additional-services-in-scope/

We continue to expand the scope of our assurance programs and are pleased to announce that eight additional services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This gives our customers more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in Amazon Web Services (AWS).

You can see the full list on Services in Scope by Compliance Program. The eight additional services are:

  1. Amazon Augmented AI (Amazon A2I) (excluding public workforce and vendor workforce)
  2. Amazon Kendra
  3. Amazon Keyspaces (for Apache Cassandra)
  4. Amazon Timestream
  5. AWS App Mesh
  6. AWS Cloud Map
  7. AWS Glue DataBrew
  8. AWS Ground Station

Private AWS Local Zones and AWS Wavelength sites were newly assessed as additional infrastructure deployments as part of the fall 2020 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) evidencing AWS PCI compliance status is available through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions. You can contact the compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS. He has over 15 years of experience managing information technology risk and control for Fortune 500 companies covering security compliance, auditing, and control framework implementation. He has a bachelor’s degree in Finance, master’s degree in Business Administration, and industry certifications including CISA and ISSPCS. Outside of work, he loves singing and reading.

Updated whitepaper available: Encrypting File Data with Amazon Elastic File System

Post Syndicated from Joe Travaglini original https://aws.amazon.com/blogs/security/updated-whitepaper-available-encrypting-file-data-with-amazon-elastic-file-system/

We’re sharing an update to the Encrypting File Data with Amazon Elastic File System whitepaper to provide customers with guidance on enforcing encryption of data at rest and in transit in Amazon Elastic File System (Amazon EFS). Amazon EFS provides simple, scalable, highly available, and highly durable shared file systems in the cloud. The file systems you create by using Amazon EFS are elastic, which allows them to grow and shrink automatically as you add and remove data. They can grow to petabytes in size, distributing data across an unconstrained number of storage servers in multiple Availability Zones.

Read the updated whitepaper to learn about best practices for encrypting Amazon EFS. Learn how to enforce encryption at rest while you create an Amazon EFS file system in the AWS Management Console and in the AWS Command Line Interface (AWS CLI), and how to enforce encryption of data in transit at the client connection layer by using AWS Identity and Access Management (IAM).
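
As a brief sketch of both controls, the following uses boto3 to create an encrypted file system and attach a file system policy that denies connections that don’t use TLS. The creation token is a placeholder, and the policy shown is one common pattern rather than the whitepaper’s exact text.

```python
import json
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="example-encrypted-fs",   # idempotency token (placeholder)
    Encrypted=True,                         # uses the AWS managed key unless
                                            # you pass KmsKeyId explicitly
)

# Deny any client access that is not wrapped in TLS.
efs.put_file_system_policy(
    FileSystemId=fs["FileSystemId"],
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": "*",
            "Resource": fs["FileSystemArn"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)
```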

Download and read the updated whitepaper.

If you have questions or want to learn more, contact your account executive or contact AWS Support. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Joseph Travaglini

For over four years, Joe has been a product manager on the Amazon Elastic File System team, responsible for the Amazon EFS security and compliance roadmap, and a product lead for the launch of EFS Infrequent Access. Prior to joining the Amazon EFS team, Joe was Director of Products at Sqrrl, a cybersecurity analytics startup acquired by AWS in 2018.

Author

Peter Buonora

Pete is a Principal Solutions Architect for AWS, with a focus on enterprise cloud strategy and information security. Pete has worked with the largest customers of AWS to accelerate their cloud adoption and improve their overall security posture.

Author

Siva Rajamani

Siva is a Boston-based Enterprise Solutions Architect for AWS. He enjoys working closely with customers and supporting their digital transformation and AWS adoption journey. His core areas of focus are security, serverless computing, and application integration.

AWS and EU data transfers: strengthened commitments to protect customer data

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-and-eu-data-transfers-strengthened-commitments-to-protect-customer-data/

Last year we published a blog post describing how our customers can transfer personal data in compliance with both GDPR and the new “Schrems II” ruling. In that post, we set out some of the robust and comprehensive measures that AWS takes to protect customers’ personal data.

Today, we are announcing strengthened contractual commitments that go beyond what’s required by the Schrems II ruling and currently provided by other cloud providers to protect the personal data that customers entrust AWS to process (customer data). Significantly, these new commitments apply to all customer data subject to GDPR processed by AWS, whether it is transferred outside the European Economic Area (EEA) or not. These commitments are automatically available to all customers using AWS to process their customer data, with no additional action required, through a new supplementary addendum to the AWS GDPR Data Processing Addendum.

Our strengthened contractual commitments include:

  • Challenging law enforcement requests: We will challenge law enforcement requests for customer data from governmental bodies, whether inside or outside the EEA, where the request conflicts with EU law, is overbroad, or where we otherwise have any appropriate grounds to do so.
  • Disclosing the minimum amount necessary: We also commit that if, despite our challenges, we are ever compelled by a valid and binding legal request to disclose customer data, we will disclose only the minimum amount of customer data necessary to satisfy the request.

These strengthened commitments to our customers build on our long track record of challenging law enforcement requests. AWS rigorously limits – or rejects outright – law enforcement requests for data coming from any country, including the United States, where they are overly broad or we have any appropriate grounds to do so.

These commitments further demonstrate AWS’s dedication to securing our customers’ data: it is AWS’s highest priority. We implement rigorous contractual, technical, and organizational measures to protect the confidentiality, integrity, and availability of customer data regardless of which AWS Region a customer selects. Customers have complete control over their data through powerful AWS services and tools that allow them to determine where data will be stored, how it is secured, and who has access.

For example, customers using our latest generation of EC2 instances automatically gain the protection of the AWS Nitro System. Using purpose-built hardware, firmware, and software, AWS Nitro provides unique and industry-leading security and isolation by offloading the virtualization of storage, security, and networking resources to dedicated hardware and software. This enhances security by minimizing the attack surface and prohibiting administrative access while improving performance. Nitro was designed to operate in the most hostile network we could imagine, building in encryption, secure boot, a hardware-based root of trust, a decreased Trusted Computing Base (TCB) and restrictions on operator access. The newly announced AWS Nitro Enclaves feature enables customers to create isolated compute environments with cryptographic controls to assure the integrity of code that is processing highly sensitive data.

AWS encrypts all data in transit, including secure and private connectivity between EC2 instances of all types. Customers can rely on our industry-leading encryption features and take advantage of AWS Key Management Service (AWS KMS) to control and manage their own keys within FIPS 140-2 certified hardware security modules. Regardless of whether data is encrypted or unencrypted, we will always work vigilantly to protect data from any unauthorized access. Find out more about our approach to data privacy.

AWS is constantly working to ensure that our customers can enjoy the benefits of AWS everywhere they operate. We will continue to update our practices to meet the evolving needs and expectations of customers and regulators, and fully comply with all applicable laws in every country in which we operate. With these changes, AWS continues our customer obsession by offering tooling, capabilities, and contractual rights that nobody else does.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.