
Spring 2020 PCI DSS report now available with 124 services in scope

Post Syndicated from Nivetha Chandran original https://aws.amazon.com/blogs/security/spring-2020-pci-dss-report-available-124-services-in-scope/

Amazon Web Services (AWS) continues to expand the scope of our PCI compliance program to support our customers’ most important workloads. We are pleased to announce that six services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) compliance program. These services were validated by Coalfire, our independent Qualified Security Assessor (QSA).

The Spring 2020 PCI DSS attestation of compliance covers 124 services that you can use to securely architect your Cardholder Data Environment (CDE) in AWS. You can see the full list of services on the AWS Services in Scope by Compliance Program page. The six newly added services are:

The compliance reports, including the Spring 2020 PCI DSS report, are available on demand through AWS Artifact. The PCI DSS package available in AWS Artifact includes the PCI DSS v3.2.1 Attestation of Compliance (AOC) and the Shared Responsibility Guide.

You can learn more about our PCI program and other compliance and security programs on the AWS Compliance Programs page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nivetha Chandran

Nivetha is a Security Assurance Manager at Amazon Web Services on the Global Audits team, managing the PCI compliance program. Nivetha holds a Master’s degree in Information Management from the University of Washington.

Accreditation models for secure cloud adoption

Post Syndicated from Jana Kay original https://aws.amazon.com/blogs/security/accreditation-models-for-secure-cloud-adoption/

Today, as part of its Secure Cloud Adoption series, AWS released new strategic outlook recommendations to support decision makers in any sector considering or planning for secure cloud adoption. “Accreditation Models for Secure Cloud Adoption” provides best practices with respect to cloud accreditation to help organizations capitalize on the security benefits of commercial cloud computing, while maximizing efficiency, scalability, and cost reduction.

Many organizations are looking to modernize their IT investments and transition quickly to the cloud. However, determining how to accredit cloud services can be a challenge. If the organizational model is too laborious or is seen as an obstacle to cloud adoption and cloud-first policies, this can delay the transition to the cloud. Understanding the best practices of early cloud adopters and the organizational models that support their accreditation programs helps leaders make well-informed decisions.

“Accreditation Models for Secure Cloud Adoption” provides an overview of three organizational models for cloud accreditation: decentralized, centralized, and hybrid. It differentiates them based on who determines and approves risk decisions. Regardless of the organizational model used, four recommended best practices help cloud adopters balance speed, efficiency, and cost of adoption with security.

Ultimately, cloud adoption depends on a multitude of factors unique to each situation. Organizations should have a thorough understanding of the shared responsibility between the cloud service provider and the consumer to create a more secure, robust, and transparent environment. Examining a range of options and understanding how each can facilitate successful cloud adoption empowers organizations to make the best choice.

If you have questions or want to learn more, contact your account executive or AWS Support.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jana Kay

Since 2018, Jana Kay has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security table top exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

The importance of encryption and how AWS can help

Post Syndicated from Ken Beer original https://aws.amazon.com/blogs/security/importance-of-encryption-and-how-aws-can-help/

Encryption is a critical component of a defense-in-depth strategy, a security approach that layers a series of defensive mechanisms so that if one mechanism fails, at least one more is still operating. As more organizations look to operate faster and at scale, they need ways to meet critical compliance requirements and improve data security. Encryption, when used correctly, can provide an additional layer of protection above basic access control.

How and why does encryption work?

Encryption works by using an algorithm with a key to convert data into unreadable data (ciphertext) that can only become readable again with the right key. For example, a simple phrase like “Hello World!” may look like “1c28df2b595b4e30b7b07500963dc7c” when encrypted. There are several different types of encryption algorithms, all using different types of keys. A strong encryption algorithm relies on mathematical properties to produce ciphertext that can’t be decrypted using any practically available amount of computing power without also having the necessary key. Therefore, protecting and managing the keys becomes a critical part of any encryption solution.
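
To make this concrete, here is a minimal sketch of authenticated symmetric encryption with AES-256 in Python, using the open source cryptography package (an illustration of the concept, not an AWS service API):

```python
# A minimal illustration of symmetric authenticated encryption (AES-256-GCM)
# using the open source "cryptography" Python package. Illustrative only --
# in practice the key itself must be protected, which is the topic below.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a 256-bit key, as in AES-256
nonce = os.urandom(12)                     # GCM nonce; must be unique per key

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"Hello World!", None)
print(ciphertext.hex())                    # unreadable without the key

plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"Hello World!"
```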

Encryption as part of your security strategy

An effective security strategy begins with stringent access control and continuous work to define the least privilege necessary for persons or systems accessing data. AWS requires that you manage your own access control policies, and also supports defense in depth to achieve the best possible data protection.

Encryption is a critical component of a defense-in-depth strategy because it can mitigate weaknesses in your primary access control mechanism. What if an access control mechanism fails and allows access to the raw data on disk or traveling along a network link? If the data is encrypted using a strong key, as long as the decryption key is not on the same system as your data, it is computationally infeasible for an attacker to decrypt your data. To show how infeasible it is, let’s consider the Advanced Encryption Standard (AES) with 256-bit keys (AES-256). It’s the strongest industry-adopted and government-approved algorithm for encrypting data. AES-256 is the technology we use to encrypt data in AWS, including Amazon Simple Storage Service (S3) server-side encryption. It would take at least a trillion years to break using current computing technology. Current research suggests that even the future availability of quantum-based computing won’t sufficiently reduce the time it would take to break AES encryption.

But what if you mistakenly create overly permissive access policies on your data? A well-designed encryption and key management system can also prevent this from becoming an issue, because it separates access to the decryption key from access to your data.

Requirements for an encryption solution

To get the most from an encryption solution, you need to think about two things:

  1. Protecting keys at rest: Are the systems using encryption keys secured so the keys can never be used outside the system? In addition, do these systems implement encryption algorithms correctly to produce strong ciphertexts that cannot be decrypted without access to the right keys?
  2. Independent key management: Is the authorization to use encryption independent from how access to the underlying data is controlled?

There are third-party solutions that you can bring to AWS to meet these requirements. However, these systems can be difficult and expensive to operate at scale. AWS offers a range of options to simplify encryption and key management.

Protecting keys at rest

When you use third-party key management solutions, it can be difficult to gauge the risk of your plaintext keys leaking and being used outside the solution. The keys have to be stored somewhere, and you can’t always know or audit all the ways those storage systems are secured from unauthorized access. The combination of technical complexity and the necessity of making the encryption usable without degrading performance or availability means that choosing and operating a key management solution can present difficult tradeoffs. The best practice to maximize key security is using a hardware security module (HSM). This is a specialized computing device that has several security controls built into it to prevent encryption keys from leaving the device in a way that could allow an adversary to access and use those keys.

One such control in modern HSMs is tamper response, in which the device detects physical or logical attempts to access plaintext keys without authorization, and destroys the keys before the attack succeeds. Because you can’t install and operate your own hardware in AWS datacenters, AWS offers two services using HSMs with tamper response to protect customers’ keys: AWS Key Management Service (KMS), which manages a fleet of HSMs on the customer’s behalf, and AWS CloudHSM, which gives customers the ability to manage their own HSMs. Each service can create keys on your behalf, or you can import keys from your on-premises systems to be used by each service.

The keys in AWS KMS or AWS CloudHSM can be used to encrypt data directly, or to protect other keys that are distributed to applications that directly encrypt data. The technique of encrypting encryption keys is called envelope encryption, and it enables encryption and decryption to happen on the computer where the plaintext customer data exists, rather than sending the data to the HSM each time. For very large data sets (e.g., a database), it’s not practical to move gigabytes of data between the data set and the HSM for every read/write operation. Instead, envelope encryption allows a data encryption key to be distributed to the application when it’s needed. The “master” keys in the HSM are used to encrypt a copy of the data key so the application can store the encrypted key alongside the data encrypted under that key. Once the application encrypts the data, the plaintext copy of the data key can be deleted from its memory. The only way for the data to be decrypted is if the encrypted data key, which is only a few hundred bytes in size, is sent back to the HSM and decrypted.
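
Here is a minimal sketch of that envelope-encryption flow using Python (boto3); the key alias and the choice of AES-GCM for the local encryption step are illustrative assumptions:

```python
# Envelope encryption sketch with AWS KMS and boto3. The KMS key alias and
# the use of AES-GCM for local encryption are illustrative assumptions.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# 1. Ask KMS for a data key; we get a plaintext copy and an encrypted copy.
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]

# 2. Encrypt the data locally, then discard the plaintext key from memory.
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"customer data", None)
del plaintext_key

# 3. Store ciphertext, nonce, and encrypted_key together. To decrypt later,
#    send only the few-hundred-byte encrypted data key back to KMS.
data_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
recovered = AESGCM(data_key).decrypt(nonce, ciphertext, None)
```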

The process of envelope encryption is used in all AWS services in which data is encrypted on a customer’s behalf (which is known as server-side encryption) to minimize performance degradation. If you want to encrypt data in your own applications (client-side encryption), you’re encouraged to use envelope encryption with AWS KMS or AWS CloudHSM. Both services offer client libraries and SDKs that let you add encryption functionality to your application code and use the cryptographic functionality of each service. The AWS Encryption SDK is an example of a tool that can be used anywhere, not just in applications running in AWS.

Because it is critical to implement encryption algorithms and HSMs correctly, all vendors of HSMs should have their products validated by a trusted third party. HSMs in both AWS KMS and AWS CloudHSM are validated under the National Institute of Standards and Technology’s FIPS 140-2 program, the standard for evaluating cryptographic modules. This validates the secure design and implementation of cryptographic modules, including functions related to ports and interfaces, authentication mechanisms, physical security and tamper response, operational environments, cryptographic key management, and electromagnetic interference/electromagnetic compatibility (EMI/EMC). Encryption using a FIPS 140-2 validated cryptographic module is often a requirement for other security-related compliance schemes like FedRAMP and HIPAA-HITECH in the U.S., or the international payment card industry standard (PCI DSS).

Independent key management

While AWS KMS and AWS CloudHSM can protect plaintext master keys on your behalf, you are still responsible for managing access controls to determine who can cause which encryption keys to be used under which conditions. One advantage of using AWS KMS is that the policy language you use to define access controls on keys is the same one you use to define access to all other AWS resources. Note that the language is the same, not the actual authorization controls. You need a mechanism for managing access to keys that is different from the one you use for managing access to your data. AWS KMS provides that mechanism by allowing you to assign one set of administrators who can only manage keys and a different set of administrators who can only manage access to the underlying encrypted data. Configuring your key management process in this way helps provide the separation of duties you need to avoid accidentally escalating the privilege to decrypt data to unauthorized users. For even further separation of control, AWS CloudHSM offers an independent policy mechanism to define access to keys.
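
As an illustration of that separation of duties, a KMS key policy can grant one role only key-administration actions and another role only key-usage actions. A sketch follows; the account ID and role names are hypothetical:

```python
# Sketch of a KMS key policy that separates key administrators from key
# users. The account ID and role names are hypothetical placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Account root retains full access so the key stays manageable.
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Key administrators: manage the key, but cannot use it to decrypt.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdminRole"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                       "kms:Put*", "kms:Disable*", "kms:Delete*"],
            "Resource": "*",
        },
        {   # Key users: encrypt and decrypt, but cannot change the key policy.
            "Sid": "AllowKeyUse",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppDataRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}

kms = boto3.client("kms")
key = kms.create_key(Policy=json.dumps(policy), Description="Demo key")
```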

Even with the ability to separate key management from data management, you can still verify that you have configured access to encryption keys correctly. AWS KMS is integrated with AWS CloudTrail so you can audit who used which keys, for which resources, and when. This provides granular visibility into your encryption management processes, which is typically much more in-depth than on-premises audit mechanisms. Audit events from AWS CloudHSM can be sent to Amazon CloudWatch, the AWS service for monitoring and alarming third-party solutions you operate in AWS.
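
For example, here is a short boto3 sketch that pulls recent AWS KMS Decrypt events from CloudTrail to see who used keys and when (filtering on the Decrypt event name is one possible choice):

```python
# Sketch: list recent AWS KMS Decrypt calls recorded by AWS CloudTrail,
# showing who used which key and when.
import boto3

cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "Decrypt"}],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```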

Encrypting data at rest and in motion

All AWS services that handle customer data encrypt data in motion and provide options to encrypt data at rest. All AWS services that offer encryption at rest using AWS KMS or AWS CloudHSM use AES-256. None of these services store plaintext encryption keys at rest — that’s a function that only AWS KMS and AWS CloudHSM may perform using their FIPS 140-2 validated HSMs. This architecture helps minimize the unauthorized use of keys.

When encrypting data in motion, AWS services use the Transport Layer Security (TLS) protocol to provide encryption between your application and the AWS service. Most commercial solutions use an open source project called OpenSSL for their TLS needs. OpenSSL has roughly 500,000 lines of code with at least 70,000 of those implementing TLS. The code base is large, complex, and difficult to audit. Moreover, when OpenSSL has bugs, the global developer community is challenged to not only fix and test the changes, but also to ensure that the resulting fixes themselves do not introduce new flaws.

AWS’s response to the challenges with the TLS implementation in OpenSSL was to develop our own implementation of TLS, known as s2n, or signal to noise. We released s2n in June 2015 and designed it to be small and fast. The goal of s2n is to provide you with network encryption that is easier to understand and that is fully auditable. We released it under the Apache 2.0 license and host it on GitHub.

We also designed s2n to be analyzed using automated reasoning to test for safety and correctness using mathematical logic. Through this process, known as formal methods, we verify the correctness of the s2n code base every time we change the code. We also automated these mathematical proofs, which we regularly re-run to ensure the desired security properties are unchanged with new releases of the code. Automated mathematical proofs of correctness are an emerging trend in the security industry, and AWS uses this approach for a wide variety of our mission-critical software.

Implementing TLS requires using encryption keys and digital certificates that assert the ownership of those keys. AWS Certificate Manager and AWS Private Certificate Authority are two services that can simplify the issuance and rotation of digital certificates across your infrastructure that needs to offer TLS endpoints. Both services use a combination of AWS KMS and AWS CloudHSM to generate and/or protect the keys used in the digital certificates they issue.
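
As a brief sketch, requesting a public TLS certificate from AWS Certificate Manager takes a single call; the domain names below are placeholders:

```python
# Sketch: request a public TLS certificate from ACM with DNS validation.
# The domain names are placeholders.
import boto3

acm = boto3.client("acm")
resp = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)
print(resp["CertificateArn"])  # attach this ARN to a load balancer or CloudFront
```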

Summary

At AWS, security is our top priority and we aim to make it as easy as possible for you to use encryption to protect your data above and beyond basic access control. By building and supporting encryption tools that work both on and off the cloud, we help you secure your data and ensure compliance across your entire environment. We put security at the center of everything we do to make sure that you can protect your data using best-of-breed security technology in a cost-effective way.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS KMS forum or the AWS CloudHSM forum, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ken Beer

Ken is the General Manager of the AWS Key Management Service. Ken has worked in identity and access management, encryption, and key management for over 7 years at AWS. Before joining AWS, Ken was in charge of the network security business at Trend Micro. Before Trend Micro, he was at Tumbleweed Communications. Ken has spoken on a variety of security topics at events such as the RSA Conference, the DoD PKI User’s Forum, and AWS re:Invent.

AWS achieves its first PCI 3DS attestation

Post Syndicated from Nivetha Chandran original https://aws.amazon.com/blogs/security/aws-achieves-first-pci-3ds-attestation/

We are pleased to announce that Amazon Web Services (AWS) has achieved its first PCI 3-D Secure (3DS) certification. Financial institutions and payment providers are implementing EMV® 3-D Secure services to support application-based authentication, integration with digital wallets, and browser-based e-commerce transactions. Although AWS doesn’t perform 3DS functions directly, the AWS PCI 3DS attestation of compliance enables customers to attain their own PCI 3DS compliance for their services running on AWS.

All AWS regions in scope for PCI DSS were included in the 3DS attestation. AWS was assessed by Coalfire, an independent Qualified Security Assessor (QSA).

AWS compliance reports, including this latest PCI 3DS attestation, are available on demand through AWS Artifact. The 3DS package available in AWS Artifact includes the 3DS Attestation of Compliance (AOC) and Shared Responsibility Guide.

To learn more about our PCI program and other compliance and security programs, please visit the AWS Compliance Programs page.

We value your feedback and questions. Feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nivetha Chandran

Nivetha is a Security Assurance Manager at Amazon Web Services on the Global Audits team, managing the PCI compliance program. Nivetha holds a Master’s degree in Information Management from the University of Washington.

Getting Ready for Your AWS Certified Solutions Architect – Associate Exam

Post Syndicated from Ed Van Sickle original https://aws.amazon.com/blogs/architecture/getting-ready-for-your-aws-certified-solutions-architect-associate-exam/

So, you decided you want to earn your AWS Certified Solutions Architect – Associate certification. Good decision! Maybe you’ve read through our exam preparation recommendations (or maybe just opened that link to do so), explored our Architect Learning Path, and are now contemplating next steps.

You may be thinking:

  • I know myself well enough to know that if I’m going to get this done on time, I need a class with an instructor on my calendar.
  • Taking Architecting on AWS and an exam readiness course seems like a good idea, but can I schedule those together?
  • Note to self: take a practice exam or two.

If these resonate, our new five-day classroom training course, Exam Readiness Intensive Workshop: AWS Certified Solutions Architect – Associate, was built with you in mind. This course combines expert, AWS-accredited, instructor-led training and exam readiness with AWS service deep dives and quizzes exclusive to this course. This intensive, focused approach has already helped APN Partners and customers get ready for their certification exams on their way to achieving their AWS Certification goals.

Dive deep with architectural best practices

The course starts with learning the fundamentals of building an IT infrastructure on AWS. You’ll spend three days on the Architecting on AWS course, which builds that architectural mindset and foundation. Throughout the course, you’ll get in a “getting ready for my exam” mindset with end-of-module quizzes to help reinforce learning and provide practice in answering test questions.

Complement architectural best practices with additional AWS services and exam taking skills

Day four provides additional deep dives on AWS services not covered in Architecting on AWS and a half-day dedicated to the Exam Readiness: AWS Certified Solutions Architect – Associate course. You’ll review sample exam questions in each topic area and learn how to interpret the concepts being tested, so that you can more easily eliminate incorrect responses. At the end of the day, you’ll put your knowledge to the test with the first of two quizzes with guided reviews with your instructor.

Test your knowledge

Day five starts with a guided review of the practice exam. You’ll get to see if your answers are right, and if not, have an opportunity to engage the instructor to understand why your answers are wrong, and how you could have eliminated some of the options. You’ll finish day five with one more quiz to test your knowledge. With instructor-guided reviews of two quizzes and a practice exam, you can gain a boost of confidence at the end of the course before sitting for the exam.

The right course for the right candidate

This course is tailored for the individual who wants to combine learning architectural concepts and exam-taking skills from an instructor. If that’s you, find a class today delivered by AWS or one of our AWS APN Training Partners.

AWS Shield Threat Landscape report is now available

Post Syndicated from Mario Pinho original https://aws.amazon.com/blogs/security/aws-shield-threat-landscape-report-now-available/

AWS Shield is a managed threat protection service that safeguards applications running on AWS against exploitation of application vulnerabilities, bad bots, and Distributed Denial of Service (DDoS) attacks. The AWS Shield Threat Landscape Report (TLR) provides you with a summary of threats detected by AWS Shield. This report is curated by the AWS Threat Response Team (TRT), who continually monitors and assesses the threat landscape to build protections on behalf of AWS customers. This includes rules and mitigations for services like AWS Managed Rules for AWS WAF and AWS Shield Advanced. You can use this information to expand your knowledge of external threats and improve the security of your applications running on AWS.

Here are some of our findings from the most recent report, which covers Q1 2020:

Volumetric Threat Analysis

AWS Shield detects network and web application-layer volumetric events that may indicate a DDoS attack, web content scraping, account takeover bots, or other unauthorized, non-human traffic. In Q1 2020, we observed significant increases in the frequency and volume of network volumetric threats, including a CLDAP reflection attack with a peak volume of 2.3 Tbps.

You can find a summary of the volumetric events detected in Q1 2020, compared to the same quarter in 2019, in the following table:

Metric                       Same Quarter, Prior Year (Q1 2019)   Most Recent Quarter (Q1 2020)   Change
Total number of events       253,231                              310,954                         +23%
Largest bit rate (Tbps)      0.8                                  2.3                             +188%
Largest packet rate (Mpps)   260.1                                293.1                           +13%
Largest request rate (rps)   1,000,414                            694,201                         -31%
Days of elevated threat*     1                                    3                               +200%

* Days of elevated threat indicates the number of days during which the volume or frequency of events was unusually high.

Malware Threat Analysis

AWS operates a threat intelligence platform that monitors Internet traffic and evaluates potentially suspicious interactions. We observed significant increases in both the total number of events and the number of unique suspects, relative to the prior quarter. The most common interactions observed in Q1 2020 were Remote Code Execution (RCE) attempts on Apache Hadoop YARN applications, where the suspect attempts to exploit the API of a Hadoop cluster’s resource management system and execute code without authorization. In March 2020, these interactions accounted for 31% of all events detected by the threat intelligence platform.

You can find a summary of the events detected by the threat intelligence platform in Q1 2020, compared to the prior quarter, in the following table:

Metric                             Prior Quarter (Q4 2019)   Most Recent Quarter (Q1 2020)   Change
Total number of events (billion)   0.7                       1.1                             +57%
Unique suspects (million)          1.2                       1.6                             +33%

For more information about the threats detected by AWS Shield in Q1 2020 and steps that you can take to protect your applications running on AWS, download the AWS Shield Threat Landscape Report.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Shield forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mario Pinho

Mario Pinho is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time he pretends to be an artist by playing piano and doing landscape photography.

Measure Effectiveness of Virtual Training in Real Time with AWS AI Services

Post Syndicated from Rajeswari Malladi original https://aws.amazon.com/blogs/architecture/measure-effectiveness-of-virtual-training-in-real-time-with-aws-ai-services/

According to International Data Corporation (IDC), worldwide spending on digital transformation will reach $2.3 trillion in 2023. As organizations adopt digital transformation, training becomes an important part of the journey. Whether they are internal trainings to upskill an existing workforce or packaged content for commercial use, these trainings need to be efficient and cost effective. With the advent of education technology, it is common practice to deliver trainings via digital platforms. This makes them accessible to a larger population and is cost effective, but it is important that the trainings remain interactive and effective. According to a recent article published by Forbes, immersive education and data-driven insights are among the top five Education Technology (EdTech) innovations. These are the key characteristics of an effective training experience.

An earlier blog series explored how to build a virtual trainer on AWS using Amazon Sumerian. This series illustrated how to easily build an immersive and highly engaging virtual training experience without needing additional devices or a complex virtual reality platform management. These trainings are easy to maintain and are cost effective.

In this blog post, we will further extend the architecture to gather real-time feedback about the virtual trainings and create data-driven insights to measure its effectiveness with the help of Amazon artificial intelligence (AI) services.

Architecture and its benefits

Virtual training on AWS and AI Services – Architecture

Consider a scenario where you are a vendor in the health care sector. You’ve developed a cutting-edge device, such as patient vital monitoring hardware, that goes through frequent software upgrades and is about to be rolled out across different U.S. hospitals. The nursing staff needs to be well trained before it can begin using the device. Let’s take a look at an architecture that solves this problem. We will first explain the architecture for building the training, and then we will show how to measure its effectiveness.

At the core of the architecture is Amazon Sumerian. Sumerian is a managed service that lets you create and run 3D, Augmented Reality (AR), and Virtual Reality (VR) applications. Within Sumerian, real-life scenes from a hospital environment can be created by importing assets from the asset library. Scenes include one or more hosts: AI-driven animated characters with built-in animation, speech, and behavior. The hosts act as virtual trainers that interact with the nursing staff. The speech component assigns text to the virtual trainer for playback with Amazon Polly. Polly helps convert training content from Sumerian to lifelike speech in real time and ensures the nursing staff receives the latest content for the equipment on which it’s being trained.

The nursing staff accesses the training via web browsers on iOS or Android mobile devices or laptops, and authenticates using Amazon Cognito. Cognito is a service that lets you easily add user sign-up and authentication to your mobile and web apps. Sumerian then uses the Cognito identity pool to create temporary credentials to access AWS services.

The flow of the interactions within Sumerian is controlled using a visual state machine in the Sumerian editor. Within the editor, the dialogue component assigns an Amazon Lex chatbot to an entity, in this case the virtual trainer or host. Lex is a service for building conversational interfaces with voice and text. It provides you the ability to have interactive conversations with the nursing staff, understand its areas of interest, and deliver appropriate training material. This is an important aspect of the architecture where you can customize the training per users’ needs.

Lex has native interoperability with AWS Lambda, a serverless compute offering where you just write and run your code in Lambda functions. Lambda can be used to validate user inputs or apply any business logic, such as fetching the user selected training material from Amazon DynamoDB (or another database) in real time. This material is then delivered to Lex as a response to user queries.
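
To make this step concrete, here is a minimal sketch of such a Lambda fulfillment function for a Lex (V1) bot; the DynamoDB table name, slot name, and response wording are hypothetical:

```python
# Sketch of a Lambda fulfillment handler for an Amazon Lex (V1) bot that
# fetches user-selected training material from DynamoDB. The table name,
# slot name, and response wording are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("TrainingMaterials")

def lambda_handler(event, context):
    topic = event["currentIntent"]["slots"].get("TrainingTopic", "general")
    item = table.get_item(Key={"topic": topic}).get("Item", {})
    content = item.get("content", "Sorry, no material found for that topic.")
    # Return the material to Lex in its V1 fulfillment response format.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": content},
        }
    }
```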

You can extend the state machine within the Sumerian editor to introduce new interactive flows that collect user feedback. Amazon Lex collects this feedback, which is saved in Amazon Simple Storage Service (S3) and analyzed by Amazon Comprehend. Amazon Comprehend is a natural language processing service that uses AI to find meaning, insights, and sentiment in text. The insights from user feedback are stored in S3, which is a highly scalable, durable, and highly available object storage service.
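
As a sketch of that analysis step, a single boto3 call returns the sentiment of a piece of feedback (the feedback text here is invented):

```python
# Sketch: analyze a piece of saved trainee feedback with Amazon Comprehend.
import boto3

comprehend = boto3.client("comprehend")
feedback = "The device walkthrough was clear, but the alarms module felt rushed."

result = comprehend.detect_sentiment(Text=feedback, LanguageCode="en")
print(result["Sentiment"])        # e.g. MIXED
print(result["SentimentScore"])   # confidence score per sentiment class
```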

You can analyze the insights from user feedback using Amazon Athena, an interactive query service which analyzes data in S3 using standard SQL. You can then easily build visualizations using Amazon QuickSight.
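
And here is a sketch of querying those stored insights with Athena; the database, table, and output bucket names are placeholders:

```python
# Sketch: query the feedback insights stored in S3 with Amazon Athena.
# Database, table, and output bucket names are placeholders.
import boto3

athena = boto3.client("athena")
resp = athena.start_query_execution(
    QueryString="""
        SELECT sentiment, COUNT(*) AS responses
        FROM feedback_insights
        GROUP BY sentiment
    """,
    QueryExecutionContext={"Database": "training_feedback"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])
```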

By using this architecture, you not only deliver the virtual training to your nursing staff in an immersive environment created by Amazon Sumerian, but you can also gather the feedback interactively. You can gain insights from this feedback and iterate over it to make the training experience more effective.

Conclusion and next steps

In this blog post, we reviewed an architecture to build interactive trainings and measure their effectiveness. The serverless nature of this architecture makes it cost effective, agile, and easy to manage, and you can apply it to a number of use cases. For example, an educational institution can develop training content designed for multiple learning levels, and the training level can be adjusted in real time based on live interactions with the students. In a manufacturing scenario, you can build a digital twin of your process and train your resources to handle different scenarios with full interactions. You can integrate AWS services just like Lego blocks, and you can further expand this architecture to integrate with Amazon Kendra to build an interactive FAQ or with Amazon Comprehend Medical to build trainings for the healthcare industry. Happy building!

AWS Artifact service launches new user interface

Post Syndicated from Dhiraj Mehta original https://aws.amazon.com/blogs/security/aws-artifact-service-launches-new-user-interface/

The AWS Artifact service introduces a new user interface (UI) that provides a more intuitive experience when searching for and saving AWS compliance reports and accepting agreements. The new UI includes an AWS Artifact home page equipped with information and videos on how to use the AWS Artifact service for your compliance needs. Additionally, the Reports and Agreements consoles now provide a keyword search capability, allowing you to find the artifact you are looking for rather than scrolling through the entire page. The new UI is supported on a smartphone, tablet, laptop, or widescreen monitor, resizing the on-screen content dynamically.

Check out the new AWS Artifact UI and connect with us on our new AWS Artifact Forum for any questions, feedback or suggestions for new features. To learn more about AWS Artifact, please visit our product page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dhiraj Mehta

Dhiraj Mehta is a Senior Technical Program Manager at AWS and product owner of AWS Artifact. He has extensive experience in security, risk, and compliance domains. He received his MBA from University of California, Irvine, and completed his undergrad in Computer Science from Kurukshetra University in India. Outside of work, Dhiraj likes to travel, cook, and play ping-pong.

Spring 2020 SOC 2 Type I Privacy report now available

Post Syndicated from Ninad Naik original https://aws.amazon.com/blogs/security/spring-2020-soc-2-type-i-privacy-report-now-available/

We continue to be customer focused in addressing privacy requirements, enabling you to be confident in how your content is protected while using Amazon Web Services. Our latest SOC 2 Type I Privacy report is now available to demonstrate our privacy compliance commitments to you.

Our spring 2020 SOC 2 Type I Privacy report provides you with a third-party attestation of our systems (services) and the suitability of the design of our privacy controls. The SOC 2 Privacy Trust Principle, developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls related to how personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives. Additional information supporting our SOC 2 Type I report can be found in our Privacy Notice and Customer Agreement documentation.

The scope of our privacy report includes information about how we handle the content that you upload to AWS and how it is protected in all of the services and locations that are in scope for the latest AWS SOC reports. You can download the latest SOC 2 Type I Privacy report through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ninad Naik

Ninad is a Security Assurance Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Ninad holds a Master’s degree in Information Systems from Syracuse University, NY and a Bachelor’s of Engineering degree in Information Technology from Mumbai University, India. Ninad has 10 years of experience in security assurance and holds ITIL, CISA, CGEIT, and CISM certifications.

Spring 2020 SOC reports now available with 122 services in scope

Post Syndicated from Ashutosh Sawant original https://aws.amazon.com/blogs/security/spring-2020-soc-reports-now-available-122-services-in-scope/

At AWS, our customers’ security is of the highest importance and we continue to provide transparency into our security posture.

We’re proud to deliver the System and Organizational Controls (SOC) 1, 2, and 3 reports to our AWS customers. The SOC program continues to enable our global customer base to maintain confidence in our secured control environments, with a focus on information security, confidentiality, and availability. For the spring 2020 SOC reports, covering the period from 10/1/2019 to 3/31/2020, we are excited to announce six new services in scope, for a total of 122 services. Additionally, we have updated how the scope of AWS locations is represented in our SOC reports to provide better clarity to our customers.

These SOC reports are now available through AWS Artifact in the AWS Management Console. The SOC 3 report can also be downloaded online as a PDF.

Here are the six new services in scope (followed by their SDK names):

  • Amazon Chime (chime)
  • AWS Data Exchange (dataexchange)
  • AWS Elemental MediaLive (medialive)
  • AWS Elemental MediaConvert (mediaconvert)
  • AWS Personal Health Dashboard (health)
  • Amazon Textract (textract)

As always, AWS strives to bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS representatives to let us know what additional services you would like to see in scope across any of our compliance programs.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ashutosh Sawant

Ashutosh is a Security Assurance Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Prior to joining AWS, Ashutosh spent over 7 years at Ernst & Young as a Manager in the Risk Advisory Practice. Ashutosh holds a Master’s degree in Information Systems from Northeastern University, Boston and a Bachelor’s degree in Information Technology from Gujarat University, India.

AWS achieves Spain’s ENS High certification across 105 services

Post Syndicated from Borja Larrumbide original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-105-services/

AWS achieved Spain’s Esquema Nacional de Seguridad (ENS) High certification across 105 services in all AWS Regions. To successfully achieve the ENS High certification, BDO España conducted an independent audit and attested that AWS meets confidentiality, integrity, and availability standards. This provides assurance to Spain’s public sector organizations wanting to build secure applications and services on AWS.

Spain’s National Security Framework is regulated under Royal Decree 3/2010, and was developed through close collaboration between Entidad Nacional de Acreditación (ENAC), the Ministry of Finance and Public Administration, and the National Cryptologic Centre (CCN), as well as other administrative bodies.

The following AWS services are ENS High certified across all AWS Regions:

  • Amazon API Gateway
  • Amazon AppStream 2.0
  • Amazon Athena
  • Amazon Chime
  • Amazon Cloud Directory
  • Amazon CloudFront
  • Amazon CloudWatch
  • Amazon CloudWatch Events
  • Amazon CloudWatch Logs
  • Amazon Cognito
  • Amazon Comprehend
  • Amazon Connect
  • Amazon DocumentDB (with MongoDB compatibility)
  • Amazon DynamoDB
  • Amazon Elastic Block Store (Amazon EBS)
  • Amazon Elastic Compute Cloud (Amazon EC2)
  • Amazon Elastic Container Registry (Amazon ECR)
  • Amazon Elastic Container Service (Amazon ECS)
  • Amazon Elastic File System (Amazon EFS)
  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Amazon ElastiCache
  • Amazon Elasticsearch Service (Amazon ES)
  • Amazon EMR
  • Amazon FSx
  • Amazon GuardDuty
  • Amazon Inspector
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Data Streams
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Pinpoint
  • Amazon Polly
  • Amazon Redshift
  • Amazon Rekognition
  • Amazon Relational Database Service (Amazon RDS)
  • Amazon Route 53
  • Amazon Route 53 Resolver
  • Amazon Simple Storage Service Glacier
  • Amazon SageMaker
  • Amazon Simple Notification Service (Amazon SNS)
  • Amazon Simple Queue Service (Amazon SQS)
  • Amazon Simple Storage Service (Amazon S3)
  • Amazon Simple Workflow Service (Amazon SWF)
  • Amazon Transcribe
  • Amazon Translate
  • Amazon Virtual Private Cloud (Amazon VPC)
  • Amazon WorkSpaces
  • AWS Amplify
  • AWS AppSync
  • AWS Artifact
  • AWS Auto Scaling
  • AWS Backup
  • AWS Batch
  • AWS Certificate Manager (ACM)
  • AWS CloudFormation
  • AWS CloudHSM
  • AWS CloudTrail
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS CodeDeploy
  • AWS CodePipeline
  • AWS CodeStar
  • AWS Config
  • AWS Database Migration Service (AWS DMS)
  • AWS DataSync
  • AWS Direct Connect
  • AWS Directory Service
  • AWS Elastic Beanstalk
  • AWS Elemental MediaConnect
  • AWS Elemental MediaConvert
  • AWS Elemental MediaLive
  • AWS Firewall Manager
  • AWS Global Accelerator
  • AWS Glue
  • AWS Identity and Access Management (IAM)
  • AWS IoT
  • AWS Key Management Service (AWS KMS)
  • AWS Lambda
  • AWS License Manager
  • AWS Managed Services
  • AWS OpsWorks for CM
  • AWS OpsWorks Stacks
  • AWS Organizations
  • AWS Outposts
  • AWS Secrets Manager
  • AWS Security Hub
  • AWS Server Migration Service (AWS SMS)
  • AWS Serverless Application Repository
  • AWS Service Catalog
  • AWS Shield
  • AWS Single Sign-On
  • AWS Snowball
  • AWS Snowball Edge
  • AWS Snowmobile
  • AWS Storage Gateway
  • AWS Systems Manager
  • AWS Transfer for SFTP
  • AWS Transit Gateway
  • AWS Trusted Advisor
  • AWS WAF
  • AWS X-Ray
  • Elastic Load Balancing
  • VM Import/Export

For more information about ENS High, see the AWS Compliance page Esquema Nacional de Seguridad High. To view which services are covered, see the ENS High tab on the AWS Services in Scope by Compliance Program page. You may download the Esquema Nacional de Seguridad (ENS) Certificate from AWS Artifact in the AWS Console or from the Compliance page Esquema Nacional de Seguridad High.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked at companies such as Microsoft and BBVA, where he has served in different roles and sectors. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

AWS IAM introduces updated policy defaults for IAM user passwords

Post Syndicated from Mark Burr original https://aws.amazon.com/blogs/security/aws-iam-introduces-updated-policy-defaults-for-iam-user-passwords/

To improve the default security for all AWS customers, we are adding a default password policy for AWS Identity and Access Management (IAM) users in AWS accounts. This update will be made globally to the IAM service on August 3rd, 2020. You can implement this change today by creating an IAM password policy in your AWS account. AWS accounts with an existing IAM password policy will not be affected by this change, but it is important to review the details below so you can evaluate any necessary changes to your environment.

What is an IAM password policy?

The IAM password policy is an account-level setting that applies to all IAM users, excluding the root user. You can create a policy to do things like require a minimum password length and specific character types, along with setting mandatory rotation periods. These password settings apply only to passwords assigned to IAM users and do not affect any access keys they might have.

What is the new default policy?

The new default IAM password policy has the following minimum requirements. Passwords must:

  • be a minimum of 8 characters
  • include at least three of the following character types: uppercase, lowercase, numbers, and non-alphanumeric symbols, for example !@#$%^&*()_+-[]{}|'
  • not be identical to your AWS account name or email address

You can determine your own password requirements by setting a custom policy. Please note that this change does not apply to the root user, which has a separate password policy.
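
As a sketch, a custom password policy can be set with a single boto3 call; the specific values below are illustrative, not recommendations:

```python
# Sketch: set a custom IAM account password policy with boto3. The values
# shown are illustrative; choose requirements that fit your own standards.
import boto3

iam = boto3.client("iam")
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    PasswordReusePrevention=24,   # disallow reuse of the last 24 passwords
    MaxPasswordAge=90,            # require rotation every 90 days
)
```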

What should customers do to prepare for this update?

For AWS accounts with no password policy applied — the experience will be unchanged until you update user passwords. The new password will need to align with the minimum requirements of the default policy. Likewise, when you create new IAM users in these AWS accounts, the passwords must meet the new minimum requirements of the default policy. A default password policy will be set for all AWS accounts that do not currently have one.

For AWS accounts with an existing password policy — there is no change for any new and existing user passwords, and they will not be affected by this update. If you disable the existing password policy, then any new IAM users created from that point onward will require passwords that meet the minimum requirements of the default policy.

For AWS accounts using automation workflows that create IAM users — if you have implemented an automated user-creation workflow that does not produce passwords meeting the new required complexity, and you have not implemented your own custom policy, you will be affected. Inspect and evaluate your existing workflows, and either update them to meet the default password policy or set a custom policy prior to August 3rd to ensure continued operation.

When will these changes happen?

To give you time to evaluate the potential impact of this change, AWS is updating the default password policy in 90 days; the change takes effect on August 3rd, 2020. We encourage all customers to be proactive about assessing and modifying any automation workflows that create IAM users and passwords without a corresponding password policy.

How do I check if a policy is already set?

You can navigate to the AWS IAM console and choose Account settings, which states whether a password policy has been set for the account. You can also check this via the AWS Command Line Interface (AWS CLI) or the API; for further information, please refer to the documentation.
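
Here is a minimal boto3 sketch of the same check:

```python
# Sketch: check whether an IAM password policy is set for the account.
import boto3

iam = boto3.client("iam")
try:
    policy = iam.get_account_password_policy()["PasswordPolicy"]
    print("Password policy set:", policy)
except iam.exceptions.NoSuchEntityException:
    print("No password policy set; the new default will apply.")
```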

AWS Single Sign-On (AWS SSO)

Note: if you are primarily using IAM users as the source of your identities across multiple accounts, you may want to evaluate AWS SSO, which simplifies the user experience and improves security by eliminating individual passwords in each account. It also allows you to quickly and easily assign your employees access to AWS accounts managed with AWS Organizations, business cloud applications, and custom applications that support Security Assertion Markup Language (SAML) 2.0. To learn more, visit the AWS Single Sign-On page.

Need more assistance?

AWS IQ enables AWS customers to find, securely collaborate with, and pay AWS Certified third-party experts for on-demand project work. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log into your console and select Get Started with AWS IQ to start a request.

The AWS Technical Support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support does not include code development for client applications.

If you have any questions or issues, please start a new thread on the AWS IAM forum, or contact AWS Support or your Technical Account Manager (TAM). If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mark Burr

Mark is a Principal Consultant with the Worldwide Public Sector Professional Services team. He specializes in security, automation, large-scale migrations, enterprise transformation, and executive strategy. Mark enjoys helping global customers achieve amazing outcomes in AWS. When he’s not in the cloud, he’s on a bicycle or drinking a Belgian ale.

16 additional AWS services authorized at DoD Impact Level 4 for AWS GovCloud (US) Regions

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/16-additional-aws-services-authorized-at-dod-impact-level-4-for-aws-govcloud-us-regions/

I’m excited to share that the Defense Information Systems Agency (DISA) has authorized 16 additional AWS services at Impact Level 4 and one service at Impact Level 5 in the AWS GovCloud (US) Regions. With these additional 16 services, AWS now offers a total of 72 services and features authorized to process data at Impact Level (IL) 4 and 56 services and features at IL 5 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). DISA’s authorization demonstrates that AWS effectively implemented over 370 security controls using applicable criteria from NIST SP 800-53 Rev. 4, the US General Services Administration’s FedRAMP Moderate baseline, and DoD CC SRG for Impact Level 4.

The authorization at DoD IL 4 allows DoD Mission Owners and their contractors to process controlled unclassified information (CUI) in the AWS GovCloud (US) Regions. This authorization supplements the full range of U.S. Government data classifications supported on AWS. AWS remains the only Cloud Service Provider accredited to address the full range, including Unclassified, Secret, and Top Secret.

The newly authorized AWS services and features provide additional choices for DoD Mission Owners and their contractors to optimize and modernize their database and data analytics operations, conduct machine learning to build insights into relationships within stored text, train and deploy machine-learning models, accurately transcribe and translate large volumes of text, efficiently route traffic to Internet applications, build out and connect Internet of Things (IoT) environments, manage software licenses and IT catalogs, optimize real-time workload provisioning guidance, provide scalable workload management, and protect web applications using advanced web application firewalls.

Recently authorized AWS services and features at DoD Impact Level 4

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler Harding is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

AWS Online Tech Talks for April 2020

Post Syndicated from Jimmy Cooper original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-for-april-2020/

Join us for live, online presentations led by AWS solutions architects and engineers. AWS Online Tech Talks cover a range of topics and expertise levels, and feature technical deep dives, demonstrations, customer examples, and live Q&A with AWS experts.

Note – All sessions are free and in Pacific Time. Can’t join us live? Access webinar recordings and slides on our On-Demand Portal.

Tech talks this month are:

April 20, 2020 | 9:00 AM – 10:00 AM PT – Save Costs Running Kubernetes Clusters with EC2 Spot Instances – ​Learn how you can lower costs and improve application resiliency by running Kubernetes workloads on Amazon EKS with Spot Instances.​

April 20, 2020 | 11:00 AM – 12:00 PM PT – Hadoop 3.0 and Docker on Amazon EMR 6.0 – A deep dive into what’s new in EMR 6.0 including Apache Hadoop 3.0, Docker containers & Apache Hive performance improvements​.

​April 20, 2020 | 1:00 PM – 2:00 PM PT – Infrastructure as Code on AWS – ​Join this tech talk to learn how to use AWS CloudFormation and AWS CDK to provision and manage infrastructure, deploy code, and automate your software-release processes.

April 21, 2020 | 9:00 AM – 10:00 AM PT – How to Maximize Results with a Cloud Contact Center, Featuring Aberdeen Research – ​Learn how to maximize results with a cloud contact center, featuring Aberdeen Research and Amazon Connect​.

April 21, 2020 | 11:00 AM – 12:00 PM PT – Connecting Microcontrollers to the Cloud for IoT Applications – ​Learn how you can connect microcontrollers to the cloud for IoT applications​.

April 21, 2020 | 1:00 PM – 2:00 PM PT – Reducing Machine Learning Inference Cost for PyTorch Models – ​Join us for a tech talk to learn about deploying your PyTorch models for low latency at low cost.​

April 22, 2020 | 11:00 AM – 12:00 PM PT – Top 10 Security Items to Improve in Your AWS Account – Learn about the top 10 security items to improve in your AWS environment and how you can automate them.​

April 22, 2020 | 1:00 PM – 2:00 PM PT – Building Your First Application with AWS Lambda – ​Learn how to build your first serverless application with AWS Lambda, including basic design patterns and best practices.​

April 23, 2020 | 9:00 AM – 10:00 AM PT – Persistent Storage for Containers with Amazon EFS – ​Learn how to securely store your containers in the cloud with Amazon EFS​.

April 23, 2020 | 11:00 AM – 12:00 PM PT – Build Event Driven Graph Applications with AWS Purpose-Built Databases – ​Learn how to build event driven graph applications using AWS purpose-built database services including Amazon Neptune, Amazon DynamoDB, and Amazon ElastiCache.​

April 23, 2020 | 1:00 PM – 2:00 PM PT – Migrate with AWS – ​Introduction to best practice driven process for migrations to AWS, developed by the experience in helping thousands of enterprises migrate.

April 27, 2020 | 9:00 AM – 10:00 AM PT – Best Practices for Modernizing On-Premise Big Data Workloads Using Amazon EMR – ​Learn about best practices to migrate from on-premises big data (Apache Spark and Hadoop) to Amazon EMR.​

April 27, 2020 | 11:00 AM – 12:00 PM PT – Understanding Game Changes and Player Behavior with Graph Databases – ​Learn how to solve problems with highly connected data in game datasets with Amazon Neptune.

​​April 27, 2020 | 1:00 PM – 2:00 PM PT – Assess, Migrate, and Modernize from Legacy Databases to AWS: Oracle to Amazon Aurora PostgreSQL Migration – ​Punitive licensing and high cost of on-premises legacy databases could hold you back. Join this tech talk to learn how to assess, migrate, and modernize your Oracle workloads over to Amazon Aurora PostgreSQL, using Amazon Database Migration Service (DMS).​

April 28, 2020 | 9:00 AM – 10:00 AM PT – Implementing SAP in the Cloud with AWS Tools and Services – ​This tech talk will help architects and administrators to understand the automation capabilities available that can assist your SAP migration.​

April 28, 2020 | 11:00 AM – 12:00 PM PT – Choosing Events, Queues, Topics, and Streams in Your Serverless Application – ​Learn how to choose between common Lambda event sources like EventBridge, SNS, SQS, and Kinesis Data Streams.​

April 30, 2020 | 9:00 AM – 10:00 AM PT – Inside Amazon DocumentDB: The Makings of a Managed Non-relational Database – Join Rahul Pathak, GM of Emerging Databases and Blockchain at AWS, to learn about the inner workings of Amazon DocumentDB and how it provides better performance, scalability, and availability while reducing operational overhead for managing your own non-relational databases.

2019 C5 attestation is now available

Post Syndicated from Kevin Quaid original https://aws.amazon.com/blogs/security/2019-c5-attestation-is-now-available/

AWS has completed its 2019 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Germany’s national cybersecurity authority—Bundesamt für Sicherheit in der Informationstechnik (BSI)—established C5 to define a reference standard for German cloud security requirements. With C5, customers in German states can use the work performed under this BSI compliance catalog to help them comply with their stringent local requirements.

AWS has added four regions (London, Paris, Stockholm and Singapore); Edge locations in England, France, Germany, Ireland and Singapore; and 30 services to this year’s scope:

  • Amazon Comprehend
  • Amazon DocumentDB
  • Amazon Elastic Container Service for Kubernetes
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Neptune
  • Amazon Pinpoint
  • Amazon Translate
  • Amazon WorkLink
  • AWS Amplify Console
  • AWS Backup
  • AWS CodeDeploy
  • AWS DataSync
  • AWS Elemental MediaConnect
  • AWS Global Accelerator
  • AWS Glue
  • AWS IoT Greengrass
  • AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise
  • AWS Organizations
  • AWS Resource Groups
  • AWS RoboMaker
  • AWS Secrets Manager
  • AWS Security Hub
  • AWS Server Migration Service
  • AWS Serverless Application Repository
  • AWS Transfer for SFTP

AWS now has 101 services in scope for C5. You can find the current list of services in scope on the AWS Services in Scope by Compliance Program page. For more information, see Cloud Computing Compliance Controls Catalog (C5).

The English version of the C5 report is available through AWS Artifact.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kevin Quaid

Kevin leads Regional Audits and Region Expansions for Security Assurance, supporting customers using and migrating to AWS. He previously managed datacenter site selection and qualification for AWS Infrastructure. He is passionate about leveraging his decade-plus risk management experience at Amazon to drive innovation and cloud adoption.

Selecting and migrating a Facebook API version for Amazon Cognito

Post Syndicated from James Li original https://aws.amazon.com/blogs/security/selecting-and-migrating-a-facebook-api-version-for-amazon-cognito/

On May 1, 2020, Facebook will remove version 2.12 of the Facebook Graph API. This change impacts Amazon Cognito customers who are using version 2.12 of the Facebook Graph API in their identity federation configuration. In this post, I explain how to migrate your Amazon Cognito configuration to use the latest version of the Facebook API.

Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party, such as Facebook, Amazon, Google, or Apple.

An Amazon Cognito User Pool is a user directory that helps you manage identities. It’s also where users can sign into your web or mobile app. User pools support federation through third-party identity providers, such as Google, Facebook, and Apple, as well as Amazon’s own Login with Amazon. Additionally, federation can use identity providers that work with OpenID Connect (OIDC) or Security Assertion Markup Language (SAML) 2.0. Federating a user through the third-party identity provider streamlines the user experience, because users don’t need to sign up directly for your web or mobile app.

Amazon Cognito User Pools now enable you to select the version of the Facebook API used for federated login. Previously, version 2.12 of Facebook’s Graph API was automatically used for federated login and to retrieve user attributes from Facebook. By selecting a specific version of Facebook’s API, you can now upgrade versions and test changes, with a mechanism to revert to earlier versions if necessary.

To help ease this transition for our customers, we are doing two phases of mitigation. In the first phase, already underway, you can choose which Facebook version to use for federated login. You can test out the new API version and discover the impact upgrading has on your application. If you must make changes, you can revert to the older version, and you have until May 1, 2020 to perform updates. In the second phase, starting sometime in April, we will automatically migrate customers to version 5.0 if they haven’t selected an API version.

There are benefits to having access to newer versions of Facebook APIs. For instance, if customers who use version 5.0 store a Facebook access token and use it to call the Messenger API, they can use webhook events. This type of benefit is useful for users who react or reply to messages from businesses. You can also use business asset groups to manage a large number of assets with Facebook API v4.0 and the Facebook Marketing API.

How to use different Facebook API versions with Amazon Cognito

These instructions assume you’re familiar with Amazon Cognito User Pools and the User Pool clients. You also need a User Pool domain already set up with the appropriate settings for a hosted UI. If you haven’t set up a user pool yet, you can find the instructions in the Amazon Cognito Developer Guide. You need your User Pool domain information when you set up your Facebook app.

Set up the Facebook app

  1. Go to the Facebook for Developers website and sign in, or sign up if you do not have an account. Create a new Facebook app if needed, or reuse an existing one.
  2. Navigate to the App Dashboard and select your App.
  3. On the navigation menu, select Products, then Facebook Login, and then Settings.
  4. In the Valid OAuth Redirect URLs field, add your user pool domain with the endpoint /oauth2/idpresponse. As shown in Figure 1, it should look like https://<yourDomainPrefix>.auth.<region>.amazoncognito.com/oauth2/idpresponse.

    Figure 1

  5. In the navigation menu, select Settings, then choose Basic.
  6. Note your App ID and your App Secret for the next step.

Adding your Facebook app to your Amazon Cognito user pool

Next, you need to add your Facebook app to your user pool. This can be done either through the AWS Management Console or the command line interface (CLI), and I will show you both methods.

Adding the Facebook app to a user pool using the AWS Management Console

    1. On the AWS Management Console, navigate to Amazon Cognito, then select Manage Pools. From the list that shows up, select your user pool.
    2. On the navigation menu, select Federation, then Identity Providers.
    3. Select Facebook. Enter the Facebook App ID and App Secret from step 6 above. Then, under Authorize Scopes, enter the appropriate scopes.
    4. In the navigation menu, select Federation and go to Attributes Mapping.
    5. Now select the version of the Facebook API you want to use. By default, the highest available version (v6.0) for newly created Facebook identity providers is pre-selected for you.
    6. After choosing your API version and attribute mapping, click Save.

Figure 2

Adding the Facebook app to a user pool through the CLI

The command below adds the Facebook app configuration to your user pool. Use the values for <USER_POOL_ID>, <FACEBOOK_APP_ID>, and <FACEBOOK_APP_SECRET> that you noted earlier:


aws cognito-idp create-identity-provider --cli-input-json '{
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0"
    },
    "AttributeMapping": {
        "email": "email"
    }
}'

The command below updates the Facebook app configuration in your user pool. Use the values for <USER_POOL_ID>, <FACEBOOK_APP_ID>, and <FACEBOOK_APP_SECRET> that you noted earlier:


aws cognito-idp update-identity-provider --cli-input-json '{
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0"
    },
    "AttributeMapping": {
        "email": "email"
    }
}'

You can verify that the create or update was successful by checking the version returned in the describe-identity-provider call:


aws cognito-idp describe-identity-provider --user-pool-id "<USER_POOL_ID>" --provider-name "Facebook"
{
    "IdentityProvider": {
        "UserPoolId": "<USER_POOL_ID>",
        "ProviderName": "Facebook",
        "ProviderType": "Facebook",
        "ProviderDetails": {
            "api_version": "v5.0",
            "attributes_url": "https://graph.facebook.com/v5.0/me?fields=",
            "attributes_url_add_attributes": "true",
            "authorize_scopes": "email",
            "authorize_url": "https://www.facebook.com/v5.0/dialog/oauth",
            "client_id": "<FACEBOOK_APP_ID>",
            "client_secret": "<FACEBOOK_APP_SECRET>",
            "token_request_method": "GET",
            "token_url": "https://graph.facebook.com/v5.0/oauth/access_token"
        },
        "AttributeMapping": {
            "email": "email",
            "username": "id"
        },
        ...
    }
}

Use the updated configuration with the Cognito Hosted UI:

  1. On the AWS Console for Amazon Cognito, navigate to your user pool and go to the navigation menu. Under App integration, go to App client settings, find your app, and select Facebook under Enabled Identity Providers.
  2. Select Launch Hosted UI.
  3. Select Continue with Facebook.
  4. If you aren’t automatically signed in at this point, the URL displays your selected version. For example, if v5.0 was selected, the URL starts with: https://www.facebook.com/v5.0/dialog/oauth. If you would like to disable automatic sign-in, simply remove your app from Facebook so that the sign-in prompts for permissions again. Follow these instructions to learn more.
  5. The browser returns to your redirect URL with a code issued by Amazon Cognito if it was successful.

Notes on testing

Facebook will redirect your API call to a more recent version if your app is not allowed to call it. For example, if you created your Facebook app in November 2018, the latest available version at the time was version 3.2. If you were to call the Graph API using version 3.0, the call is upgraded to version 3.2. You can tell which version you are using by referring to the facebook-api-version header in Facebook’s response headers.
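
To see which version actually served a request, you can inspect the response headers directly. A minimal sketch, where <ACCESS_TOKEN> is a placeholder for a valid user access token:

# Print only the facebook-api-version response header for a Graph API call
curl -s -D - -o /dev/null \
  "https://graph.facebook.com/v5.0/me?access_token=<ACCESS_TOKEN>" \
  | grep -i "facebook-api-version"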

If an attribute was not marked as required and the attribute is missing from Facebook, federation still succeeds, but the attribute is empty in the user pool. Facebook has deprecated various fields since Facebook federation was launched for Amazon Cognito. For instance, the gender and birthday attributes must now be explicitly requested through their own separate permissions rather than being granted by default, and the cover attribute has been deprecated entirely. You can confirm that an attribute federated successfully on the user’s detail page in the user pools section of the AWS Management Console for Amazon Cognito. As part of your migration, you should validate that the attributes you work with are passed in the way you expect.

Summary

In this post, I explained how to select the version of Facebook’s Graph API for federated login. If you already use Amazon Cognito for federated login with Facebook, you should migrate to the most recent version as soon as possible. Use this process to make sure you get all the attributes you need for your application. New customers can immediately take advantage of the latest API version.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Cognito Forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

James Li

James is a Software Development Engineer at Amazon Cognito. He values operational excellence and security. James is from Toronto, Canada, where he has worked as a software developer for 4 years.

TLS 1.2 to become the minimum for all AWS FIPS endpoints

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/tls-1-2-to-become-the-minimum-for-all-aws-fips-endpoints/

To improve security for data in transit, AWS will update all of our AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) version 1.2 over the next year. This update will deprecate the ability to use TLS 1.0 and TLS 1.1 on all FIPS endpoints across all AWS Regions by March 31, 2021. No other AWS endpoints are affected by this change.

As outlined in the AWS Shared Responsibility Model, security and compliance is a shared responsibility between AWS and our customers. When a customer makes a connection from their client application to an AWS service endpoint, the client provides its TLS minimum and TLS maximum version. The AWS service endpoint selects the maximum version offered.

What should customers do to prepare for this update?

Customers should confirm that their client applications support TLS 1.2 by verifying that TLS 1.2 falls within the client’s configured minimum and maximum TLS versions. We encourage customers to be proactive with security standards in order to avoid any impact to availability and to protect the integrity of their data in transit. We also recommend testing configuration changes in a staging environment before introducing them into production workloads.
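
One lightweight way to verify negotiation from a client host is to pin OpenSSL to a single protocol version when connecting. The sketch below uses the AWS KMS FIPS endpoint in us-east-1 as an example target; substitute the endpoint your application calls:

# Expect the handshake to succeed and report "Protocol  : TLSv1.2"
openssl s_client -connect kms-fips.us-east-1.amazonaws.com:443 -tls1_2 </dev/null 2>/dev/null | grep "Protocol"

# After the deprecation, a TLS 1.1-only handshake like this one should fail
# (requires an OpenSSL build that still supports TLS 1.1)
openssl s_client -connect kms-fips.us-east-1.amazonaws.com:443 -tls1_1 </dev/null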

When will these changes happen?

To minimize the impact to our customers who use TLS 1.0 and TLS 1.1, AWS is rolling out changes on a service-by-service basis between now and the end of March 2021. For each service, after a 30-day period during which no TLS 1.0 or TLS 1.1 connections are detected, AWS will deploy a configuration change to remove support for those versions. After March 31, 2021, AWS may update the endpoint configuration to remove TLS 1.0 and 1.1, even if we detect customer connections. Additional reminders will be provided before these updates are final.

What are AWS FIPS endpoints?

All AWS services offer Transport Layer Security (TLS) 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints for customers that require use of FIPS validated cryptographic libraries.

What is Transport Layer Security (TLS)?

Transport Layer Security (TLS) is a cryptographic protocol designed to provide secure communication across a computer network. API calls to AWS services are secured using TLS.

Is there more assistance available to help verify or update client applications?

Customers using an AWS Software Development Kit (AWS SDK) can find information about how to properly configure their client’s minimum and maximum TLS versions in the documentation for their SDK. Or see Tools to Build on AWS, and browse by programming language to find the relevant SDK.

Additionally, AWS IQ enables customers to find, securely collaborate with, and pay AWS Certified third-party experts for on-demand project work. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log into your console and select Get Started with AWS IQ to start a request.

The AWS Technical Support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support does not include code development for client applications.

If you have any questions or issues, please start a new thread on one of the AWS Forums, or contact AWS Support or your Technical Account Manager (TAM). If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sincerely,
Amazon Web Services

Author

Janelle Hopper

Janelle Hopper is a Senior Technical Program Manager in AWS Security with over 15 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve AWS’ security posture.

Top 10 security items to improve in your AWS account

Post Syndicated from Nathan Case original https://aws.amazon.com/blogs/security/top-10-security-items-to-improve-in-your-aws-account/

If you’re looking to improve your cloud security, a good place to start is to follow the top 10 most important cloud security tips that Stephen Schmidt, Chief Information Security Officer for AWS, laid out at AWS re:Invent 2019. Below are the tips, expanded to help you take action.

10 most important security tips

1) Accurate account information

When AWS needs to contact you about your AWS account, we use the contact information defined in the AWS Management Console, including the email address used to create the account and those listed under Alternate Contacts. All email addresses should be set up to go to aliases that are not dependent on a single person. You should also have a process for regularly checking that these email addresses work, and that you are responding to emails—especially security notifications you might receive from AWS. Learn how to set the alternate contacts to help ensure someone is receiving important messages, even when you are unavailable.

Figure: Alternate Contacts user interface

2) Use multi-factor authentication (MFA)

MFA is the best way to protect accounts from inappropriate access. Always set up MFA on your Root user and AWS Identity and Access Management (IAM) users. If you use AWS Single Sign-On (SSO) to control access to AWS or to federate your corporate identity store, you can enforce MFA there. Implementing MFA at the federated identity provider (IdP) means that you can take advantage of existing MFA processes in your organization. To get started, see Using Multi-Factor Authentication (MFA) in AWS.
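
As an illustration, you can create a virtual MFA device and attach it to an IAM user from the CLI. This is a sketch with placeholder values; the user name, account ID, and one-time codes are illustrative:

# Create a virtual MFA device and save a QR code to scan with an authenticator app
aws iam create-virtual-mfa-device --virtual-mfa-device-name Alice \
  --outfile alice-mfa-qr.png --bootstrap-method QRCodePNG

# Attach the device to the user with two consecutive codes from the app
aws iam enable-mfa-device --user-name Alice \
  --serial-number arn:aws:iam::111122223333:mfa/Alice \
  --authentication-code1 123456 --authentication-code2 789012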

3) No hard-coding secrets

When you build applications on AWS, you can use AWS IAM roles to deliver temporary, short-lived credentials for calling AWS services. However, some applications require longer-lived credentials, such as database passwords or other API keys. If this is the case, you should never hard code these secrets in the application or store them in source code.

You can use AWS Secrets Manager to control the information in your application. Secrets Manager allows you to rotate, manage, and retrieve database credentials, API keys, and other secrets through their lifecycle. Users and applications can retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hard code sensitive information in plain text.

You should also learn how to use AWS IAM roles for applications running on Amazon EC2. Also, for best results, learn how to securely provide database credentials to AWS Lambda functions by using AWS Secrets Manager.
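
For example, an application (or an operator debugging one) can fetch a credential at runtime rather than embedding it in code. A minimal sketch, where the secret name prod/app/db is hypothetical:

# Retrieve the current value of a stored database credential
aws secretsmanager get-secret-value \
  --secret-id prod/app/db \
  --query SecretString --output text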

4) Limit security groups

Security groups are a key way that you can enable network access to resources you have provisioned on AWS. Ensuring that only the required ports are open and that connections are enabled only from known network ranges is a foundational approach to security. You can use services such as AWS Config or AWS Firewall Manager to programmatically ensure that the virtual private cloud (VPC) security group configuration is what you intended. The Network Reachability rules package in Amazon Inspector analyzes your Amazon Virtual Private Cloud (Amazon VPC) network configuration to determine whether your Amazon EC2 instances can be reached from external networks, such as the internet, a virtual private gateway, or AWS Direct Connect. AWS Firewall Manager can also be used to automatically apply AWS WAF rules to internet-facing resources across your AWS accounts. Learn more about detecting and responding to changes in VPC Security Groups.
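
As a starting point, you can audit for security groups that are open to the entire internet and revoke overly broad rules. A sketch, with a hypothetical group ID:

# List security groups that have a rule allowing traffic from anywhere
aws ec2 describe-security-groups \
  --filters Name=ip-permission.cidr,Values='0.0.0.0/0' \
  --query "SecurityGroups[].[GroupId,GroupName]" --output text

# Remove a world-open SSH rule from one of those groups
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0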

5) Intentional data policies

Not all data is created equal, which means classifying data properly is crucial to its security. It’s important to accommodate the complex tradeoffs between a strict security posture and a flexible agile environment. A strict security posture, which requires lengthy access-control procedures, creates stronger guarantees about data security. However, such a security posture can work counter to agile and fast-paced development environments, where developers require self-service access to data stores. Design your approach to data classification to meet a broad range of access requirements.

How you classify data doesn’t have to be as binary as public or private. Data comes in various degrees of sensitivity and you might have data that falls in all of the different levels of sensitivity and confidentiality. Design your data security controls with an appropriate mix of preventative and detective controls to match data sensitivity appropriately. In the suggestions below, we deal mostly with the difference between public and private data. If you have no classification policy currently, public versus private is a good place to start.

To protect your data once it has been classified, or while you are classifying it:

  1. If you have Amazon Simple Storage Service (Amazon S3) buckets that are for public usage, move all of that data into a separate AWS account set aside for public access. Set up policies to allow only processes — not humans — to move data into those buckets. This lets you block the ability to make a public Amazon S3 bucket in any other AWS account.
  2. Use Amazon S3 Block Public Access in any account that should not be able to share data through Amazon S3 (see the CLI sketch after this list).
  3. Use two different IAM roles for encryption and decryption with KMS. This lets you separate the data entry (encryption) and data review (decryption), and it allows you to do threat detection on the failed decryption attempts by analyzing that role.
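
As mentioned in step 2, the account-wide Block Public Access settings can be enforced with a single call. A sketch, using a placeholder account ID:

# Turn on all four S3 Block Public Access settings for an entire account
aws s3control put-public-access-block --account-id 111122223333 \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true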

6) Centralize CloudTrail logs

Logging and monitoring are important parts of a robust security plan. Being able to investigate unexpected changes in your environment or perform analysis to iterate on your security posture relies on having access to data. AWS recommends that you write logs, especially AWS CloudTrail, to an S3 bucket in an AWS account designated for logging (Log Archive). The permissions on the bucket should prevent deletion of the logs, and they should also be encrypted at rest. Once the logs are centralized, you can integrate with SIEM solutions or use AWS services to analyze them. Learn how to use AWS services to visualize AWS CloudTrail logs. Once you have CloudTrail logs centralized, you can also use the same Log Archive account to centralize logs from other sources, such as CloudWatch Logs and AWS load balancers.
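
A minimal sketch of that trail setup, assuming a central bucket named central-log-archive already exists in the Log Archive account with a bucket policy that allows CloudTrail to write to it:

# Create a multi-Region trail that writes to the Log Archive bucket,
# with log file validation enabled to help detect tampering
aws cloudtrail create-trail --name org-trail \
  --s3-bucket-name central-log-archive \
  --is-multi-region-trail --enable-log-file-validation

# Start recording events
aws cloudtrail start-logging --name org-trail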

7) Validate IAM roles

As you operate your AWS accounts to iterate and build capability, you may end up creating multiple IAM roles that you discover later you don’t need. Use AWS IAM Access Analyzer to review access to your internal AWS resources and determine where you have shared access outside your AWS accounts. Routinely reevaluating AWS IAM roles and permissions with Security Hub or open source products such as Prowler will give you the visibility needed to validate compliance with your Governance, Risk, and Compliance (GRC) policies. If you’re already past this point, and have already created multiple roles, you can search for unused IAM roles and remove them.
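
Getting started takes one call to create an account-level analyzer; the analyzer name and ARN below are placeholders:

# Create an analyzer that flags resources shared outside this account
aws accessanalyzer create-analyzer \
  --analyzer-name account-analyzer --type ACCOUNT

# Review its findings (use the ARN returned by the call above)
aws accessanalyzer list-findings \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:111122223333:analyzer/account-analyzer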

8) Take actions on findings (This isn’t just GuardDuty anymore!)

AWS Security Hub, Amazon GuardDuty, and AWS Identity and Access Management Access Analyzer are managed AWS services that provide you with actionable findings in your AWS accounts. They are easy to turn on and can integrate across multiple accounts. Turning them on is the first step. You also need to take action when you see findings. The action(s) to take are determined by your own incident response policy. For each finding, ensure that you have determined what your required response actions should be.

Action can be notifying a human to respond, but as you get more experienced in AWS services, you will want to automate the response to the findings generated by Security Hub or GuardDuty. Learn more about how to automate your response and remediation from Security Hub findings.
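
One common first step is routing every GuardDuty finding to a notification target with an EventBridge rule. This sketch assumes an existing SNS topic named security-alerts:

# Match all GuardDuty findings in this account and Region
aws events put-rule --name guardduty-findings \
  --event-pattern '{"source":["aws.guardduty"],"detail-type":["GuardDuty Finding"]}'

# Deliver matched findings to an SNS topic for triage
aws events put-targets --rule guardduty-findings \
  --targets Id=1,Arn=arn:aws:sns:us-east-1:111122223333:security-alerts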

9) Rotate keys

One of the things that Security Hub provides is a view of the compliance posture of your AWS accounts using the CIS Benchmarks. One of these checks is to look for IAM users with access keys more than 90 days old. If you need to use access keys rather than roles, you should rotate them regularly. Review best practices for managing AWS access keys for more guidance. If your users access AWS via federation, then you can remove the need to issue AWS access keys for your users. Users authenticate to the IdP and assume an IAM role in the target AWS account. The result is that long-term credentials are not needed, and your user will have short-term credentials associated with an IAM role.
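
A typical rotation flow from the CLI looks like the following sketch, where the user name and key ID are placeholders: create the new key, switch the application over, deactivate the old key, then delete it once nothing breaks.

# See which keys a user has and when they were created
aws iam list-access-keys --user-name alice

# Create the replacement key, then update the application to use it
aws iam create-access-key --user-name alice

# Deactivate (don't delete yet) the old key and watch for breakage
aws iam update-access-key --user-name alice \
  --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive

# Once you're confident nothing depends on it, delete the old key
aws iam delete-access-key --user-name alice \
  --access-key-id AKIAIOSFODNN7EXAMPLE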

10) Be involved in the dev cycle

All of the guidance to this point has been focused on the technology configuration that you can implement. The last piece of advice, “be involved in the dev cycle,” is about people, and can be broadly summarized as “raise the security culture of your organization.” The role of people in all parts of the organization is to help the business launch their solutions securely. As people focused on security, we can guide and educate the rest of our organization to understand what they need to do to raise the bar for security in everything they build. Security is everyone’s job — not just for those folks with it in their job title.

What the security people in every organization can do is make security easier, by shifting the process so that the easiest and most desirable action is also the most secure one. For example, each team should not build their own identity federation or logging solution. We are stronger when we work together, and this applies to securing the cloud as well. The goal is to make security more approachable so that co-workers want to talk to the security team because they know it is the place to get help. For more about creating this type of security team, read Cultivating Security Leadership.

Now that you’ve revisited the top 10 things to make your cloud more secure, make sure you have them set up in your AWS accounts — and go build securely!

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nathan Case

Nathan Case

Nathan is a Security Strategist and geek. He joined AWS in 2016. You can learn more about him here.

15 additional AWS services authorized at DoD Impact Level 6 for the AWS Secret Region

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/15-additional-aws-services-authorized-dod-impact-level-6-aws-secret-region/

The Defense Information Systems Agency (DISA) has authorized 15 additional AWS services in the AWS Secret Region for production workloads at the Department of Defense (DoD) Impact Level (IL) 6 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). The authorization at DoD IL 6 allows DoD Mission Owners to process classified and mission-critical workloads for National Security Systems in the AWS Secret Region. The AWS Secret Region was built as part of the Commercial Cloud Services (C2S) contract and is available to the DoD on the AWS GSA IT70 schedule.

The AWS services successfully completed an independent evaluation by members of the Intelligence Community (IC), which confirmed that the AWS services effectively implemented 859 security controls using applicable criteria from NIST SP 800-53 Rev 4, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the Moderate Confidentiality, Moderate Integrity, and Moderate Availability impact levels.

The 15 AWS services newly authorized by DISA at IL 6 provide additional choices for DoD Mission Owners to leverage the capabilities of the AWS Cloud in service areas such as compute, storage, database, networking, and security, bringing our total IL 6 authorizations to 26 services as listed below.

Authorized AWS services and features at DoD Impact Level 6

  1. Amazon CloudWatch
  2. Amazon DynamoDB
  3. Amazon Elastic Block Store
  4. Amazon Elastic Compute Cloud (including VM Import/Export)
  5. Amazon EC2 Auto Scaling
  6. Amazon ElastiCache
  7. Amazon Kinesis Data Streams
  8. Amazon Redshift
  9. Amazon Relational Database Service (including MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server)
  10. Amazon S3 Glacier
  11. Amazon Simple Notification Service
  12. Amazon Simple Queue Service
  13. Amazon Simple Storage Service
  14. Amazon Simple Workflow
  15. Amazon Virtual Private Cloud
  16. AWS CloudFormation
  17. AWS CloudTrail
  18. AWS Config
  19. AWS Database Migration Service
  20. AWS Direct Connect
  21. AWS Identity and Access Management
  22. AWS Key Management Service
  23. AWS Snowball
  24. AWS Step Functions
  25. AWS Trusted Advisor
  26. Elastic Load Balancing (Classic and Application Load Balancer)

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler Harding is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

How financial institutions can approve AWS services for highly confidential data

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/how-financial-institutions-can-approve-aws-services-for-highly-confidential-data/

As a Principal Solutions Architect within the Worldwide Financial Services industry group, one of the most frequently asked questions I receive is whether a particular AWS service is financial-services-ready. In a regulated industry like financial services, moving to the cloud isn’t a simple lift-and-shift exercise. Instead, financial institutions use a formal service-by-service assessment process, often called whitelisting, to demonstrate how cloud services can help address their regulatory obligations. When this process is not well defined, it can delay efforts to migrate data to the cloud.

In this post, I will provide a framework consisting of five key considerations that financial institutions should focus on to help streamline the whitelisting of cloud services for their most confidential data. I will also outline the key AWS capabilities that can help financial services organizations during this process.

Here are the five key considerations:

  1. Achieving compliance
  2. Data protection
  3. Isolation of compute environments
  4. Automating audits with APIs
  5. Operational access and security

For many of the business and technology leaders that I work with, agility and the ability to innovate quickly are the top drivers for their cloud programs. Financial services institutions migrate to the cloud to help develop personalized digital experiences, break down data silos, develop new products, drive down margins for existing products, and proactively address global risk and compliance requirements. AWS customers who use a wide range of AWS services achieve greater agility as they move through the stages of cloud adoption. Using a wide range of services enables organizations to offload undifferentiated heavy lifting to AWS and focus on their core business and customers.

My goal is to guide financial services institutions as they move their company’s highly confidential data to the cloud — in both production environments and mission-critical workloads. The following considerations will help financial services organizations determine cloud service readiness and achieve success in the cloud.

1. Achieving compliance

For financial institutions that use a whitelisting process, the first step is to establish that the underlying components of the cloud service provider’s (CSP’s) services can meet baseline compliance needs. A key prerequisite to gaining this confidence is to understand the AWS shared responsibility model. Shared responsibility means that the secure functioning of an application on AWS requires action on the part of both the customer and AWS as the CSP. AWS customers are responsible for their security in the cloud. They control and manage the security of their content, applications, systems, and networks. AWS manages security of the cloud, providing and maintaining proper operations of services and features, protecting AWS infrastructure and services, maintaining operational excellence, and meeting relevant legal and regulatory requirements.

In order to establish confidence in the AWS side of the shared responsibility model, customers can regularly review the AWS System and Organization Controls 2 (SOC 2) Type II report prepared by an independent, third-party auditor. The AWS SOC 2 report contains confidential information that can be obtained by customers under an AWS non-disclosure agreement (NDA) through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

Key takeaway: Currently, 116 AWS services are in scope for SOC compliance, which will help organizations streamline their whitelisting process. For more information about which services are in scope, see AWS Services in Scope by Compliance Program.

2. Data protection

Financial institutions use comprehensive data loss prevention strategies to protect confidential information. Customers using AWS data services can employ encryption to mitigate the risk of disclosure, alteration of sensitive information, or unauthorized access. The AWS Key Management Service (AWS KMS) allows customers to manage the lifecycle of encryption keys and control how they are used by their applications and AWS services. Allowing encryption keys to be generated and maintained in the FIPS 140-2 validated hardware security modules (HSMs) in AWS KMS is the best practice and most cost-effective option.

For AWS customers who want added flexibility for key generation and storage, AWS KMS allows them to either import their own key material into AWS KMS and keep a copy in their on-premises HSM, or generate and store keys in dedicated AWS CloudHSM instances under their control. For each of these key material generation and storage options, AWS customers can control all the permissions to use keys from any of their applications or AWS services. In addition, every use of a key or modification to its policy is logged to AWS CloudTrail for auditing purposes. This level of control and audit over key management is one of the tools organizations can use to address regulatory requirements for using encryption as a data privacy mechanism.
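
As a simple illustration of that lifecycle, a customer-managed key can be created, aliased, and used to encrypt a payload from the CLI. The alias and file name below are hypothetical:

# Create a customer-managed key and give it a friendly alias
aws kms create-key --description "Key for confidential customer records"
aws kms create-alias --alias-name alias/confidential-records \
  --target-key-id <KEY_ID_FROM_CREATE_KEY_OUTPUT>

# Encrypt a local file under that key; this use is logged to CloudTrail
aws kms encrypt --key-id alias/confidential-records \
  --plaintext fileb://record.json \
  --query CiphertextBlob --output text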

All AWS services offer encryption features, and most AWS services that financial institutions use integrate with AWS KMS to give organizations control over their encryption keys used to protect their data in the service. AWS offers customer-controlled key management features in twice as many services as any other CSP.

Financial institutions also encrypt data in transit to ensure that it is accessed only by the intended recipient. Encryption in transit must be considered in several areas, including API calls to AWS service endpoints, encryption of data in transit between AWS service components, and encryption in transit within applications. The first two considerations fall within the AWS scope of the shared responsibility model, whereas the latter is the responsibility of the customer.

All AWS services offer Transport Layer Security (TLS) 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints in selected AWS Regions. These FIPS 140-2 endpoints use a cryptographic library that has been validated under the Federal Information Processing Standards (FIPS) 140-2 standard. For financial institutions that operate workloads on behalf of the US government, using FIPS 140-2 endpoints helps them to meet their compliance requirements.

To simplify configuring encryption in transit within an application, which falls under the customer’s responsibility, customers can use the AWS Certificate Manager (ACM) service. ACM enables easy provisioning, management, and deployment of X.509 certificates used for TLS to critical application endpoints hosted in AWS. These integrations provide automatic certificate and private key deployment and automated rotation for Amazon CloudFront, Elastic Load Balancing, Amazon API Gateway, AWS CloudFormation, and AWS Elastic Beanstalk. ACM offers both publicly trusted and private certificate options to meet the trust model requirements of an application. Organizations may also import their existing public or private certificates to ACM to make use of existing public key infrastructure (PKI) investments.
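
For example, requesting a publicly trusted certificate for an application endpoint is a single call (the domain name is a placeholder); once DNS validation completes, ACM renews the certificate automatically:

# Request a public TLS certificate, validated through a DNS record
aws acm request-certificate \
  --domain-name api.example.com \
  --validation-method DNS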

Key takeaway: AWS KMS allows organizations to manage the lifecycle of encryption keys and control how encryption keys are used for over 50 services. For more information, see AWS Services Integrated with AWS KMS. AWS ACM simplifies the deployment and management of PKI as compared to self-managing in an on-premises environment.

3. Isolation of compute environments

Financial institutions have strict requirements for isolation of compute resources and network traffic control for workloads with highly confidential data. One of the core competencies of AWS as a CSP is to protect and isolate customers’ workloads from each other. Amazon Virtual Private Cloud (Amazon VPC) allows customers to control their AWS environment and keep it separate from other customers’ environments. Amazon VPC enables customers to create a logically separate network enclave within the Amazon Elastic Compute Cloud (Amazon EC2) network to house compute and storage resources. Customers control the private environment, including IP addresses, subnets, network access control lists, security groups, operating system firewalls, route tables, virtual private networks (VPNs), and internet gateways.

Amazon VPC provides robust logical isolation of customers’ resources. For example, every packet flow on the network is individually authorized to validate the correct source and destination before it is transmitted and delivered. It is not possible for information to pass between multiple tenants without specifically being authorized by both the transmitting and receiving customers. If a packet is being routed to a destination without a rule that matches it, the packet is dropped. AWS has also developed the AWS Nitro System, a purpose-built hypervisor with associated custom hardware components that allocates central processing unit (CPU) resources for each instance and is designed to protect the security of customers’ data, even from operators of production infrastructure.

For more information about the isolation model for multi-tenant compute services, such as AWS Lambda, see the Security Overview of AWS Lambda whitepaper. When Lambda executes a function on a customer’s behalf, it manages both provisioning and the resources necessary to run code. When a Lambda function is invoked, the data plane allocates an execution environment to that function or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment. Each function runs in one or more dedicated execution environments that are used for the lifetime of the function and are then destroyed. Execution environments run on hardware-virtualized lightweight micro-virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. Execution environments are never shared across functions, and microVMs are never shared across AWS accounts. AWS continues to innovate in the area of hypervisor security, and resource isolation enables our financial services customers to run even the most sensitive workloads in the AWS Cloud with confidence.

Most financial institutions require that traffic stay private whenever possible and not leave the AWS network unless specifically required (for example, in internet-facing workloads). To keep traffic private, customers can use Amazon VPC to carve out an isolated and private portion of the cloud for their organizational needs. A VPC allows customers to define their own virtual networking environments with segmentation based on application tiers.

To connect to regional AWS services outside of the VPC, organizations may use VPC endpoints, which allow private connectivity between resources in the VPC and supported AWS services. Endpoints are managed virtual devices that are highly available, redundant, and scalable. Endpoints enable private connection between a customer’s VPC and AWS services using private IP addresses. With VPC endpoints, Amazon EC2 instances running in private subnets of a VPC have private access to regional resources without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Furthermore, when customers create an endpoint, they can attach a policy that controls the use of the endpoint to access only specific AWS resources, such as specific Amazon Simple Storage Service (Amazon S3) buckets within their AWS account. Similarly, by using resource-based policies, customers can restrict access to their resources to only allow access from VPC endpoints. For example, by using bucket policies, customers can restrict access to a given Amazon S3 bucket only through the endpoint. This ensures that traffic remains private and only flows through the endpoint without traversing public address space.
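
A sketch of that plumbing for Amazon S3, using placeholder VPC and route table IDs; a gateway endpoint keeps S3 traffic on the AWS network, and an optional endpoint policy can scope it to specific buckets:

# Create a gateway VPC endpoint for S3 and associate it with a route table;
# add --policy-document file://endpoint-policy.json to restrict it to specific buckets
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234def567890 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0abc1234def567890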

Key takeaway: To help customers keep traffic private, more than 45 AWS services have support for VPC Endpoints.

4. Automating audits with APIs

Visibility into user activities and resource configuration changes is a critical component of IT governance, security, and compliance. On-premises logging solutions require installing agents, setting up configuration files and log servers, and building and maintaining data stores to store the data. This complexity can result in poor visibility and fragmented monitoring stacks, which in turn make issues take longer to troubleshoot and resolve. CloudTrail provides a simple, centralized solution to record AWS API calls and resource changes in the cloud, helping to alleviate this burden.

CloudTrail provides a history of activity in a customer’s AWS account to help them meet compliance requirements for their internal policies and regulatory standards. CloudTrail helps identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help customers analyze and respond to activity in their AWS account. CloudTrail management events provide insights into the management (control plane) operations performed on resources in an AWS account. For example, customers can log administrative actions, such as creation, deletion, and modification of Amazon EC2 instances. For each event, they receive details such as the AWS account, IAM user role, and IP address of the user that initiated the action as well as time of the action and which resources were affected.

CloudTrail data events provide insights into the resource (data plane) operations performed on or within the resource itself. Data events are often high-volume activities and include operations, such as Amazon S3 object-level APIs, and AWS Lambda function Invoke APIs. For example, customers can log API actions on Amazon S3 objects and receive detailed information, such as the AWS account, IAM user role, IP address of the caller, time of the API call, and other details. Customers can also record activity of their Lambda functions and receive details about Lambda function executions, such as the IAM user or service that made the Invoke API call, when the call was made, and which function was executed.
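
For instance, an auditor can answer “who terminated that instance?” directly from the CLI. A minimal sketch:

# Show the most recent TerminateInstances calls recorded in this Region
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
  --max-items 5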

To help customers simplify continuous compliance and auditing, AWS uniquely offers the AWS Config service to help them assess, audit, and evaluate the configurations of AWS resources. AWS Config continuously monitors and records AWS resource configurations, and allows customers to automate the evaluation of recorded configurations against internal guidelines. With AWS Config, customers can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories.

Key takeaway: Over 160 AWS services are integrated with CloudTrail, which helps customers ensure compliance with their internal policies and regulatory standards by providing a history of activity within their AWS account. For more information about how to use CloudTrail with specific AWS services, see AWS Service Topics for CloudTrail in the CloudTrail user guide. For more information on how to enable AWS Config in an environment, see Getting Started with AWS Config.

5. Operational access and security

In our discussions with financial institutions, they’ve told AWS that they are required to have a clear understanding of access to their data. This includes knowing what controls are in place to ensure that unauthorized access does not occur. AWS has implemented layered controls that use preventative and detective measures to ensure that only authorized individuals have access to production environments where customer content resides. For more information about access and security controls, see the AWS SOC 2 report in AWS Artifact.

One of the foundational design principles of AWS security is to keep people away from data to minimize risk. As a result, AWS created an entirely new virtualization platform called the AWS Nitro System. This highly innovative system combines new hardware and software that dramatically increases both performance and security. The AWS Nitro System enables enhanced security with a minimized attack surface because virtualization and security functions are offloaded from the main system board where customer workloads run to dedicated hardware and software. Additionally, the locked-down security model of the AWS Nitro System prohibits all administrative access, including that of Amazon employees, which eliminates the possibility of human error and tampering.

Key takeaway: Review third-party auditor reports (including SOC 2 Type II) available in AWS Artifact, and learn more about the AWS Nitro System.

Conclusion

AWS can help simplify and expedite the whitelisting process for financial services institutions moving to the cloud. By taking advantage of a wide range of AWS services, organizations can use the security and compliance measures already built into those services to complete whitelisting, maximizing agility so they can focus on their core business and customers.

After organizations have completed the whitelisting process and determined which cloud services can be used as part of their architecture, the AWS Well-Architected Framework can then be implemented to help build and operate secure, resilient, performant, and cost-effective architectures on AWS.

AWS also has a dedicated team of financial services professionals to help customers navigate a complex regulatory landscape, as well as other resources to guide them in their migration to the cloud – no matter where they are in the process. For more information, see the AWS Financial Services page, or fill out this AWS Financial Services Contact form.

Additional resources

  • AWS Security Documentation
    The security documentation repository shows how to configure AWS services to help meet security and compliance objectives. Cloud security at AWS is the highest priority. AWS customers benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.
  • AWS Compliance Center
    The AWS Compliance Center is an interactive tool that provides customers with country-specific requirements and any special considerations for cloud use in the geographies in which they operate. The AWS Compliance Center has quick links to AWS resources to help with navigating cloud adoption in specific countries, and includes details about the compliance programs that are applicable in these jurisdictions. The AWS Compliance Center covers many countries, and more countries continue to be added as they update their regulatory requirements related to technology use.
  • AWS Well-Architected Framework and AWS Well-Architected Tool
    The AWS Well-Architected Framework helps customers understand the pros and cons of decisions they make while building systems on AWS. The AWS Well-Architected Tool helps customers review the state of their workloads and compares them to the latest AWS architectural best practices. For more information about the AWS Well-Architected Framework and security, see the Security Pillar – AWS Well-Architected Framework whitepaper.

If you have feedback about this blog post, submit comments in the Comments section below.

Author

Ilya Epshteyn

Ilya is a solutions architect with AWS. He helps customers to innovate on the AWS platform by building highly available, scalable, and secure architectures. He enjoys spending time outdoors and building Lego creations with his kids.