Tag Archives: Compliance

New SOC 2 Report Available: Privacy

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-soc-2-report-available-privacy/

Maintaining your trust is an ongoing commitment of ours, and your voice drives our growing portfolio of compliance reports, attestations, and certifications. As a result of your feedback and deep interest in privacy and data security, we are happy to announce the publication of our new SOC 2 Type I Privacy report.

Keeping you informed of the privacy and data security policies, practices, and technologies we’ve put in place is important to us. The SOC 2 Type I Privacy report complements that effort. The SOC 2 Privacy Trust Principle, developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls related to how personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives. The AWS SOC 2 Type I Privacy report provides you with a third-party attestation of our systems and the suitability of the design of our privacy controls, as stated in our Privacy Notice.

The scope of the privacy report includes systems AWS uses to collect personal information and all 72 services and locations in scope for the latest AWS SOC reports. You can download the new SOC 2 Type I Privacy report now through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

New podcast: VP of Security answers your compliance and data privacy questions

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/new-podcast-vp-of-security-answers-your-compliance-and-data-privacy-questions/

Does AWS comply with X program? How about GDPR? What about after Brexit? And what happens with machine learning data?

In the latest AWS Security & Compliance Podcast, we sit down with VP of Security Chad Woolf, who answers your compliance and data privacy questions, including one of the most frequently asked questions from customers around the world: how many compliance programs does AWS have, attest to, and audit against?

Chad also shares what it was like to work at AWS in the early days. When he joined, AWS was housed on just a handful of floors, in a single building. Over the course of nearly nine years with the company, he has witnessed tremendous growth of the business and industry.

Listen to the podcast to hear about company history and get answers to your tough questions. If you have a compliance or data privacy question, you can submit it through our Contact Us form.

Want more AWS news? Follow us on Twitter.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter.

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went bigger than ever before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. You can enable Security Hub for an account with one click in the AWS Security Hub console or a single API call (see the sketch after this list); once enabled, Security Hub begins aggregating and prioritizing findings.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the setup of an AWS landing zone that is secure, well-architected, and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation here.
  • Next up, AWS IoT Greengrass now offers hardware root of trust security through private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster (a sketch follows this list). Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to two years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, Red Hat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), Okta, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity, and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified when registration opens.
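
As a rough illustration of that single API call for Security Hub, here’s a minimal boto3 sketch. It’s a sketch under assumptions, not a prescription: the region and severity filter are illustrative values, and error handling is reduced to the already-enabled case.

```python
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Enable Security Hub for this account (the single-API-call path
# mentioned above). If it's already enabled, the service raises
# ResourceConflictException, which we can safely ignore here.
try:
    securityhub.enable_security_hub()
except securityhub.exceptions.ResourceConflictException:
    pass

# Pull a page of aggregated findings, filtered to high severity.
# The filter shape is real; the choice of filter is illustrative.
response = securityhub.get_findings(
    Filters={"SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}]},
    MaxResults=25,
)

for finding in response["Findings"]:
    print(finding["Title"], "-", finding["ProductArn"])
```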
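
The KMS custom key store flow can be sketched at the API level, too. This assumes you’ve already created and activated a CloudHSM cluster; the cluster ID, certificate path, and password below are placeholders, and because connecting is asynchronous, a real setup would poll until the store reports CONNECTED before creating keys.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Register an active CloudHSM cluster as a custom key store.
# All identifiers here are placeholders.
with open("customerCA.crt") as f:
    trust_anchor = f.read()

store = kms.create_custom_key_store(
    CustomKeyStoreName="example-key-store",
    CloudHsmClusterId="cluster-1234567890abcdef0",
    TrustAnchorCertificate=trust_anchor,
    KeyStorePassword="kmsuser-password-placeholder",
)

# Connecting is asynchronous; poll describe_custom_key_stores until
# ConnectionState is CONNECTED before creating keys (omitted here).
kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# New keys can then have their key material generated inside your
# dedicated CloudHSM cluster rather than in KMS-managed HSMs.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    Description="Key backed by a dedicated CloudHSM cluster",
)
print(key["KeyMetadata"]["KeyId"])
```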

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission-critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, AWS WAF, Amazon S3, AWS CloudTrail, and AWS Config, to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves (one such workflow is sketched after this list).
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.
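
To make the remediation workflow from takeaway 2 concrete, here is a minimal sketch of a Lambda handler that a CloudWatch Events rule matching GuardDuty findings could invoke to quarantine a flagged EC2 instance. The quarantine security group ID is a placeholder, and the event handling covers only the EC2 case; treat it as one possible shape of such a workflow, not a drop-in runbook.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder: a pre-created security group with no inbound or
# outbound rules, used to isolate suspect instances.
QUARANTINE_SG = "sg-0123456789abcdef0"

def handler(event, context):
    """Invoked by a CloudWatch Events rule matching GuardDuty findings."""
    finding = event["detail"]
    instance_id = (
        finding.get("resource", {})
        .get("instanceDetails", {})
        .get("instanceId")
    )

    if not instance_id:
        return  # Not an EC2-related finding; nothing to quarantine.

    # Swap the instance's security groups for the quarantine group so
    # traffic is cut off while the finding is investigated.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    print(f"Quarantined {instance_id} for finding {finding['id']}")
```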

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

2018 ISO certificates are here, with a 70% increase in in-scope services

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/2018-iso-certificates-are-here-with-a-70-increase-of-in-scope-services/

In just the last year, we’ve increased the number of ISO services in scope by 70%. That makes 114 services in total that have been validated against ISO 9001, 27001, 27017, and 27018.

The following services are new to our ISO program:

  • Amazon AppStream 2.0
  • Amazon Athena
  • Amazon Chime
  • Amazon CloudWatch Events
  • Amazon CloudWatch
  • Amazon Comprehend
  • Amazon Elastic Container Service for Kubernetes
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx*
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Pinpoint
  • Amazon Polly
  • Amazon Rekognition
  • Amazon Transcribe
  • Amazon Translate
  • AWS Amplify*
  • AWS AppSync
  • AWS Artifact
  • AWS Certificate Manager
  • AWS CodeStar
  • AWS DataSync*
  • AWS Device Farm
  • AWS Elemental MediaConnect*
  • AWS Elemental MediaConvert
  • AWS Elemental MediaLive
  • AWS Firewall Manager
  • AWS Global Accelerator*
  • AWS Glue
  • AWS IoT Greengrass
  • AWS IoT 1-Click
  • AWS IoT Analytics
  • AWS License Manager*
  • AWS OpsWorks CM (includes Chef Automate and Puppet Enterprise)
  • AWS Organizations
  • AWS RoboMaker*
  • AWS Secrets Manager
  • AWS Server Migration Service
  • AWS Serverless Application Repository
  • AWS Service Catalog
  • AWS Single Sign-On
  • AWS Transfer for SFTP*
  • AWS Trusted Advisor
  • Amazon Route 53 Resolver*

*New Service

The latest certificates for ISO 9001, 27001, 27017, and 27018 are now available, giving you insight into our information security management system from third-party auditors. They contain the full list of AWS locations in scope and reference the ISO Certified webpage, which includes all services in scope. For convenience, you can also download the certificates in the AWS Management Console via AWS Artifact.

We’re clearly accelerating the pace at which we add services in scope, but our ultimate goal is to eliminate your wait for compliant services altogether. To that end, in this latest audit cycle, 9 of the 51 services added launched as generally available with these certifications at re:Invent 2018.

Want more AWS Security news? Follow us on Twitter.

New PCI DSS report now available, 31 services added to scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-pci-dss-report-now-available-31-services-added-to-scope/

In just the last 6 months, we’ve increased the number of Payment Card Industry Data Security Standard (PCI DSS) certified services by 50%. We were evaluated by third-party auditors from Coalfire and the latest report is now available on AWS Artifact.

I would like to especially call out the six new services (marked with asterisks) that just launched generally available at re:Invent with PCI certification. We’re increasing the rate we add existing services in scope and are also launching new services PCI certified, enabling you to use them for regulated workloads sooner. The goal is for all of our services to have compliance certifications so you never have to wait to verify their security and compliance posture. Additional work to that end is already underway, and we’ll be updating you about our progress at every significant milestone.

With the addition of the following 31 services, you can now select from a total of 93 PCI-compliant services. To see the full list, go to our Services in Scope by Compliance Program page.

  • Amazon Athena
  • Amazon Comprehend
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx*
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Rekognition
  • Amazon Transcribe
  • Amazon Translate
  • AWS AppSync
  • AWS Certificate Manager (ACM)
  • AWS DataSync*
  • AWS Elemental MediaConnect*
  • AWS Global Accelerator*
  • AWS Glue
  • AWS Greengrass
  • AWS IoT Core (includes Device Management)
  • AWS OpsWorks for Chef Automate (includes Puppet Enterprise)
  • AWS RoboMaker*
  • AWS Secrets Manager
  • AWS Serverless Application Repository
  • AWS Server Migration Service (SMS)
  • AWS Step Functions
  • AWS Transfer for SFTP*
  • VM Import/Export

*New Service

If you want to know more about our compliance programs or provide feedback, please contact us. Your feedback helps us prioritize our decisions and innovate our programs.

Want more AWS Security news? Follow us on Twitter.

Scaling a governance, risk, and compliance program for the cloud, emerging technologies, and innovation

Post Syndicated from Michael South original https://aws.amazon.com/blogs/security/scaling-a-governance-risk-and-compliance-program-for-the-cloud/

Governance, risk, and compliance (GRC) programs are sometimes looked upon as the bureaucracy getting in the way of exciting cybersecurity work. But a good GRC program establishes the foundation for meeting security and compliance objectives. It is the proactive approach to cybersecurity that, if done well, minimizes reactive incident response.

Of the three components of cybersecurity—people, processes, and technology—technology is viewed as the “easy button” because, in relative terms, it’s simpler than drafting a policy with the right balance of flexibility and specificity or managing countless organizational principles and human behavior. Still, as much as we promote technology and automation at AWS, we also understand that automating a bad process with the latest technology doesn’t make the process or outcome better. Cybersecurity must incorporate all three aspects with a programmatic approach that scales. To reach that goal, an effective GRC program is essential because it ensures a holistic view has been taken while tackling the daunting mission of cybersecurity.

Although governance, risk, and compliance are oftentimes viewed as separate functions, there is a symbiotic relationship between them. Governance establishes the strategy and guardrails for meeting specific requirements that align with and support the business. Risk management connects specific controls to governance and assessed risks, and provides business leaders with the information they need to prioritize resources and make risk-informed decisions. Compliance is the adherence to and monitoring of controls against specific governance requirements. It is also the “ante” to play the game in certain industries and, with continuous monitoring, closes the feedback loop regarding effective governance. Security architecture, engineering, and operations are built upon the GRC foundation.

Without a GRC program, people tend to solely focus on technology and stove-pipe processes. For example, say a security operations employee is faced with four events to research and mitigate. In the absence of a GRC program, the staffer would have no context about the business risk or compliance impact of the events, which could lead them to prioritize the least important issue.

Figure: GRC relationship model (governance, risk, and compliance have a symbiotic relationship)

The breadth and depth of a GRC program varies with each organization. Regardless of its simplicity or complexity, there are opportunities to transform or scale that program for the adoption of cloud services, emerging technologies, and other future innovations.

Below is a checklist of best practices to help you on your journey. The key takeaways of the checklist are: base governance on objectives and capabilities, include risk context in decision-making, and automate monitoring and response.

Governance

Identify compliance requirements

__ Identify required compliance frameworks (such as HIPAA or PCI) and contract/agreement obligations.

__ Identify restrictions/limitations to cloud or emerging technologies.

__ Identify required or chosen standards to implement (for example, NIST, ISO, COBIT, CSA, or CIS).

Conduct program assessment

__ Conduct a program assessment based on industry processes such as the NIST Cyber Security Framework (CSF) or ISO/IEC TR 27103:2018 to understand the capability and maturity of your current profile.

__ Determine desired end-state capability and maturity, also known as target profile.

__ Document and prioritize gaps (people, process, and technologies) for resource allocation.

__ Build a Cloud Center of Excellence (CCoE) team.

__ Draft and publish a cloud strategy that includes procurement, DevSecOps, management, and security.

__ Define and assign functions, roles, and responsibilities.

Update and publish policies, processes, procedures

__ Update policies based on objectives and desired capabilities that align to your business.

__ Update processes for modern organization and management techniques such as DevSecOps and Agile, specifying how to upgrade old technologies.

__ Update procedures to integrate cloud services and other emerging technologies.

__ Establish technical governance standards that will be used to select controls and monitor compliance.

Risk management

Conduct a risk assessment*

__ Conduct or update an organizational risk assessment (e.g., market, financial, reputation, legal).

__ Conduct or update a risk assessment for each business line (such as mission, market, products/services, and financial).

__ Conduct or update a risk assessment for each asset type.

* The use of pre-established threat models can simplify the risk assessment process, both initial and updates.

Draft risk plans

__ Implement plans to mitigate, avoid, transfer, or accept risk at each tier, business line, and asset (for example, a business continuity plan, a continuity of operations plan, a systems security plan).

__ Implement plans for specific risk areas (such as supply chain risk, insider threat).

Authorize systems

__ Use the NIST Risk Management Framework (RMF) or another process to authorize and track systems.

__ Use NIST Special Publication 800-53, ISO/IEC 27002:2013, or another control set to select, implement, and assess controls based on risk.

__ Implement continuous monitoring of controls and risk, employing automation to the greatest extent possible.

Incorporate risk information into decisions

__ Link system risk to business and organizational risk.

__ Automate translation of continuous system risk monitoring and status to business and organizational risk.

__ Incorporate “What’s the risk?” (financial, cyber, legal, reputation) into leadership decision-making.

Compliance

Monitor compliance with policy, standards, and security controls

__ Automate technical control monitoring and reporting (advanced maturity will lead to AI/ML).

__ Implement manual monitoring of non-technical controls (for example, periodic review of visitor logs).

__ Link compliance monitoring with security information and event management (SIEM) and other tools.

Continually self-assess

__ Automate application security testing and vulnerability scans.

__ Conduct periodic self-assessments, ranging from sampling of controls to entire functional areas, as well as penetration tests.

__ Be overly critical of assumptions, perspectives, and artifacts.

Respond to events and changes to risk

__ Integrate security operations with the compliance team for response management.

__ Establish standard operating procedures to respond to unintentional changes in controls.

__ Mitigate impact and reset affected control(s); automate as much as possible.

Communicate events and changes to risk

__ Establish a reporting tree and thresholds for each type of incident.

__ Include general counsel in reporting.

__ Ensure applicable regulatory authorities are notified when required.

__ Automate where appropriate.

Want more AWS Security news? Follow us on Twitter.

Michael South

Michael joined AWS in 2017 as the Americas Regional Leader for public sector security and compliance business development. He supports customers who want to achieve business objectives and improve their security and compliance in the cloud. His customers span the public sector, including federal governments, militaries, state/provincial governments, academic institutions, and non-profits from North to South America. Prior to AWS, Michael was the Deputy Chief Information Security Officer for the city of Washington, DC and the U.S. Navy’s Chief Information Officer for Japan.

How federal agencies can leverage AWS to extend CDM programs and CIO Metric Reporting

Post Syndicated from Darren House original https://aws.amazon.com/blogs/security/how-federal-agencies-can-leverage-aws-to-extend-cdm-programs-and-cio-metric-reporting/

Continuous Diagnostics and Mitigation (CDM), a U.S. Department of Homeland Security cybersecurity program, is gaining new visibility as part of the federal government’s overall focus on securing its information and networks. How an agency performs against CDM will soon be regularly reported in the updated Federal Information Technology Acquisition Reform Act (FITARA) scorecard. That’s in addition to updates in the President’s Management Agenda. Because of this additional scrutiny, there are many questions about how cloud service providers can enable the CDM journey.

This blog will explain how you can implement a CDM program—or extend an existing one—within your AWS environment, and how you can use AWS capabilities to provide real-time CDM compliance and FISMA reporting.

When it comes to compliance for departments and agencies, the AWS Shared Responsibility Model describes how AWS is responsible for security of the cloud and the customer is responsible for security in the cloud. The Shared Responsibility Model is segmented into three categories: (1) infrastructure services (see Figure 1 below), (2) container services, and (3) abstracted services, each having a different level of controls customers can inherit to minimize their effort in managing and maintaining compliance and audit requirements. For example, when designing your workloads in AWS that fall under infrastructure services in the Shared Responsibility Model, AWS helps relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. This also relates to IT controls you can inherit through AWS compliance programs. Before their journey to the AWS Cloud, a customer may have been responsible for the entire control set in their compliance and auditing program. With AWS, you can inherit controls from AWS compliance programs, allowing you to focus on workloads and data you put in the cloud.

Figure 1: AWS Infrastructure Services

For example, if you deploy infrastructure services such as Amazon Virtual Private Cloud (Amazon VPC) networking and Amazon Elastic Compute Cloud (Amazon EC2) instances, you can base your CDM compliance controls on the AWS controls you inherit for network infrastructure, virtualization, and physical security. You would be responsible for managing things like change management for Amazon EC2 AMIs, operating system patching, AWS Config management, AWS Identity and Access Management (IAM), and encryption of services at rest and in transit.

If you deploy container services such as Amazon Relational Database Service (Amazon RDS) or Amazon EMR that build on top of infrastructure services, AWS manages the underlying infrastructure virtualization, physical controls, and the OS and associated IT controls, like patching and change management. You inherit the IT security controls from AWS compliance programs and can use them as artifacts in your compliance program. For example, you can request a SOC report for one of our 62 SOC-compliant services available from AWS Artifact. You are still responsible for the controls of what you put in the cloud.

Another example is if you deploy abstracted services such as Amazon Simple Storage Service (S3) or Amazon DynamoDB. AWS is responsible for the IT controls applicable to the services we provide. You are responsible for managing your data, classifying your assets, using IAM to apply permissions to individual resources at the platform level, or managing permissions based on user identity or user responsibility at the IAM user/group level.

For agencies struggling to comply with FISMA requirements and thus CDM reporting, leveraging abstracted and container services means you now have the ability to inherit more controls from AWS, reducing the resource drain and cost of FISMA compliance.

So, what does this all mean for your CDM program on AWS? It means that AWS can help agencies realize CDM’s vision of near-real-time FISMA reporting for infrastructure, container, and abstracted services. The following paragraphs explain how you can leverage existing AWS services and solutions that support CDM.

1.0 Identify

AWS can meet CDM Asset Management (AM) 1.1 – 1.3 by using AWS Config to track AWS resource configurations, using the resource groups tagging API to manage FISMA classifications of your cloud infrastructure, and automatically tagging Amazon EC2 resources.

For AM 1.4, AWS Config identifies all cloud assets in your AWS account, which can be stored in a DynamoDB table in the reporting format required. AWS Config rules can enforce and report on compliance, ensuring all Amazon Elastic Block Store (EBS) volumes (block level storage) and Amazon S3 buckets (object storage) are encrypted.
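
As a sketch of that AM 1.4 flow, the snippet below inventories EC2 instances known to AWS Config and writes them to a DynamoDB reporting table. The table name and key schema are hypothetical; adjust the resource types and attributes to the reporting format your program requires.

```python
import boto3

config = boto3.client("config")
# Hypothetical reporting table with "resourceId" as its partition key.
table = boto3.resource("dynamodb").Table("cdm-asset-inventory")

# Enumerate every EC2 instance AWS Config has discovered in this
# account and region, then land each one in the reporting table.
paginator = config.get_paginator("list_discovered_resources")
for page in paginator.paginate(resourceType="AWS::EC2::Instance"):
    for resource in page["resourceIdentifiers"]:
        table.put_item(
            Item={
                "resourceId": resource["resourceId"],
                "resourceType": resource["resourceType"],
            }
        )
```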

2.0 Protect

For CIO metrics 2.1, 2.2, and 2.14, Amazon Inspector assesses against Common Vulnerabilities and Exposures (CVEs) based on the National Vulnerability Database (NVD) Common Vulnerability Scoring System (CVSS). Using Amazon Inspector, customers can set up continuous golden AMI vulnerability assessments, then take the Inspector findings and store them in a CIO metrics DynamoDB table for reporting.

For metrics 2.3 – 2.7, AWS provides federation with on-premises Active Directory (AD) services, allowing customers to map IAM roles to AD groups, report on IAM privileged system access, and identify which IAM roles have access to particular services.

To provide reporting for metric 2.10.1, AWS offers FIPS 140-2 validated endpoints for US East/West regions and AWS GovCloud (US).

For metric 2.9, AWS Config provides relationship information for all resources, which means you can use AWS Config relationship data to report on CIO metrics focused on the segmentation of resources. AWS Config can identify things like the network interfaces, security groups (firewalls), subnets, and VPCs related to an instance.

3.0 Detect

For detection that covers 3.3, 3.6, 3.8, 3.9, 3.11, and 3.12, AWS Web Application Firewall (WAF) and Amazon GuardDuty have native automation through Amazon CloudWatch, AWS Step Functions (orchestration), and AWS Lambda (serverless) to provide adaptive computing capabilities such as automating the application of AWS WAF rules, recovering from incidents, mitigating OWASP top 10 threats, and automating the import of third-party threat intelligence feeds.

4.0 Respond

The focus here is on you to develop policies and procedures to respond to cyber “incidents.” AWS provides capabilities to automate policies and procedures in a policy-driven manner to improve detection times, shorten response times, and reduce attack surface. FISMA defines “incident” as “an occurrence that (A) actually or imminently jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information or an information system; or (B) constitutes a violation or imminent threat of violation of law, security policies, security procedures, or acceptable use policies.”

Enabling GuardDuty in your accounts and writing finding results to a reporting database complies with CIO “Respond” metrics 4.1 and 4.2. Requirement 4.3 mandates that you “automatically disable the system or relevant asset upon the detection of a given security violation or vulnerability,” per NIST SP 800-53r4 IR-4(2). You can comply by enabling automation to remediate instances.

5.0 Recover

Responding and recovering are closely related processes. Enabling automation to remediate instances also complies with 5.1 and 5.2, which focus on recovering from incidents. While it is your responsibility to develop an Information System Contingency Plan (ISCP), AWS can enable you to translate that plan into automated, machine-readable policy through AWS CloudFormation, and you can use AWS Config to report on the number of HVA systems for which an ISCP has been developed (5.1). For both 5.1 and 5.2, which measure the “mean time for the organization to restore operations following the containment of a system intrusion or compromise,” you can use AWS multi-Region architectures, inter-Region VPC peering, S3 cross-region replication, AWS Landing Zones, and the global AWS network (with services like AWS Direct Connect gateway) to develop an architecture (5.1.1) in which HVA systems have an alternate processing site identified and provisioned, enabling global recovery in minutes.

Conclusion

There are many ways to report and visualize the data for all the solutions identified. A simple way is to write sensor data to a DynamoDB table. You can enhance your basic reporting to provide advanced analytics and visualization by integrating DynamoDB data with Amazon Athena. You can log all your services directly to an Amazon S3 bucket, use Amazon QuickSight for the dashboard and create custom analyses from your dashboards. You could also enrich your CIO metric dashboard data with other normalized datasets.

The end result is that you can build a new CDM program or extend an existing one using AWS. We are relentlessly focused on innovating for you and providing a comprehensive platform of secure IT services to meet your unique needs. Federal customers required to comply with CDM can use these innovations to incorporate services like AWS Config and AWS Systems Manager to provide asset and software management for AWS and on-premises systems, answering the question, “What is on the network?”

For real-time compliance and reporting, you can leverage services like GuardDuty to analyze AWS CloudTrail, DNS, and VPC flow logs in your account so you really know “who is on your network” and “what is happening on the network.” VPC, security groups, Amazon CloudFront, and AWS WAF allow you to protect the boundary of your environment, while services like AWS Key Management Service (KMS), AWS CloudHSM, AWS Certificate Manager (ACM), and HTTPS endpoints enable you to “protect the data on the network.”

The services discussed in this blog provide you with the ability to design a cloud architecture that can help you move from ad hoc to optimized in your CDM reporting. If you want more data, send us feedback and please let us know how we can help with your CDM journey! If you have feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Darren House

Darren is a Senior Solutions Architect at AWS who is passionate about designing business- and mission-aligned cloud solutions that enable government customers to achieve objectives and improve desired outcomes. Outside work, Darren enjoys hiking, camping, motorcycles, and snowboarding.

Fall 2018 SOC reports now available with 73 services in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/fall-2018-soc-reports-now-available-with-73-services-in-scope/

Seventy-three. That’s the number of AWS services now available to our customers under our System and Organization Controls (SOC) 1, 2, and 3 audits, with 11 additional services included during this most recent audit cycle. The SOC reports are now available to you on demand in the AWS Management Console. The SOC 3 report can also be downloaded online as a PDF.

As you can see from the list of new services added below, we’re now including service namespaces in our assessment documentation. We’ll be including namespaces going forward to have a standard naming convention for services across our audits. Knowing service namespaces also helps you identify services when creating IAM policies, working with Amazon Resource Names (ARNs), and reading AWS CloudTrail logs.

The 11 services newly added to our SOC scope:

As always, my team strives to include services in the scope of our compliance programs based on your architectural and regulatory needs. Please reach out to your AWS representatives to let us know which additional services you would like to see in scope across any of our compliance programs. To see our current list, go to the Services in Scope page.

Want more AWS Security news? Follow us on Twitter.

New Podcast: Preview the security track at re:Invent, learn what’s new and maximize your time

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/previewing-the-security-track-at-reinvent-learn-whats-new-and-maximize-your-time/

There are about 60 security-focused sessions and talks at re:Invent this year. That’s in addition to more than 2,000 other sessions, activities, chalk talks, and demos planned throughout the week. We want to help you get the most out of the event and maximize your time. That’s why we’re previewing the security track and highlighting what’s new in the latest AWS Security & Compliance podcast.

Staffers developing security track content offer their advice for navigating the learning conference that is expected to draw 50,000 people from around the world. Listen to the podcast and learn about the newest hands-on session, which was designed to give you deep technical insight within a small-group setting. Plus, find out about the event change that is meant to make it easier to attend more of the talks that interest you.

Three key trends in financial services cloud compliance

Post Syndicated from Igor Kleyman original https://aws.amazon.com/blogs/security/three-key-trends-in-financial-services-cloud-compliance/

As financial institutions increasingly move their technology infrastructure to the cloud, financial regulators are tailoring their oversight to the unique features of a cloud environment. Regulators have followed a variety of approaches, sometimes issuing new rules and guidance tailored to the cloud. Other times, they have updated existing guidelines for managing technology providers to be more applicable for emerging technologies. In each case, however, policymakers’ heightened focus on cybersecurity and privacy has led to increased scrutiny on how financial institutions manage security and compliance.

Because we strive to ensure you can use AWS to meet the highest security standards, we also closely monitor regulatory developments and look for trends to help you stay ahead of the curve. Here are three common themes we’ve seen emerge in the regulatory landscape:

Data security and data management

Regulators expect financial institutions to implement controls and safety measures to protect the security and confidentiality of data stored in the cloud. AWS services are content agnostic—we treat all customer data and associated assets as highly confidential. We have implemented sophisticated technical and physical measures against unauthorized access. Encryption is an important step to help protect sensitive information. You can use AWS Key Management Service (KMS), which is integrated into many services, to encrypt data. KMS also makes it easy to create and control your encryption keys.
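
As a minimal sketch of that pattern, the snippet below encrypts a small payload directly with a KMS key; the key alias is a placeholder you would create yourself, and for payloads over 4 KB you would switch to envelope encryption with data keys.

```python
import boto3

kms = boto3.client("kms")

# "alias/financial-data" is a placeholder for a key you control.
ciphertext = kms.encrypt(
    KeyId="alias/financial-data",
    Plaintext=b"account: 1234, balance: 100.00",
)["CiphertextBlob"]

# Decryption doesn't need the key ID; KMS derives it from the blob.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"account: 1234, balance: 100.00"
```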

Cybersecurity

Financial regulators expect financial institutions to maintain a strong cybersecurity posture. In the cloud, security is a shared responsibility between the cloud provider and the customer: AWS manages security of the cloud, and customers are responsible for managing security in the cloud. To manage security of the cloud, AWS has developed and implemented a security control environment designed to protect the confidentiality, integrity, and availability of your systems and content. AWS infrastructure complies with global and regional regulatory requirements and best practices. You can help ensure security in the cloud by leveraging AWS services. Some new services strive to automate security. Amazon Inspector performs automated security assessments to scan cloud environments for vulnerabilities or deviations from best practices. AWS is also on the cutting edge of using automated reasoning to ensure established security protocols are in place. You can leverage automated proofs with a tool called Zelkova, which is integrated within certain AWS services. Zelkova helps you obtain higher levels of security assurance about your most sensitive systems and operations. Financial institutions can also perform vulnerability scans and penetration testing on their AWS environments—another recurring expectation of financial regulators.

Risk management

Regulators expect financial institutions to have robust risk management processes when using the cloud. Continuous monitoring is key to ensuring that you are managing the risk of your cloud environment, and AWS offers financial institutions a number of tools for governance and traceability. You can have complete visibility of your AWS resources by using services such as AWS CloudTrail, Amazon CloudWatch, and AWS Config to monitor, analyze, and audit events that occur in your cloud environment. You can also use AWS CloudTrail to log and retain account activity related to actions across your AWS infrastructure.
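
For a flavor of that traceability, here is a minimal sketch that pulls the last day of console sign-in events from CloudTrail; the event name filter is just one of several lookup attributes CloudTrail supports.

```python
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Look back over the last 24 hours of management events for console
# sign-ins, as one small input to a continuous monitoring pipeline.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```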

We understand how important security and compliance are for financial institutions, and we strive to ensure that you can use AWS to meet the highest regulatory standards. Here is a selection of resources we created to help you make sense of the changing regulatory landscape around the world:

You can go to our security and compliance resources page for additional information. Have more questions? Reach out to your Account Manager or request to be contacted.

Want more AWS Security news? Follow us on Twitter.

AWS completes TISAX high assessment

Post Syndicated from Gerald Boyne original https://aws.amazon.com/blogs/security/aws-completes-tisax-high-assessment/

We have completed the European automotive industry’s TISAX high assessment for 43 services. To successfully complete the TISAX high assessment, EY Germany conducted an independent audit and attested that our information management system meets industry-set standards. This provides automotive industry organizations the assurance needed to build secure applications and services on AWS.

TISAX was established by the German Association of the Automotive Industry (VDA) and is governed by the European Network Exchange (ENX), which is an association of 15 companies within the European automotive industry.

The following AWS services were assessed at TISAX high in our Dublin and Frankfurt Regions:

  • Amazon API Gateway
  • Amazon CloudFront
  • Amazon CloudWatch Logs
  • Amazon Cognito
  • Amazon Connect
  • Amazon DynamoDB
  • Amazon ElastiCache
  • Amazon Elastic Block Store (EBS)
  • Amazon Elastic Container Registry (ECR)
  • Amazon Elastic Container Service (ECS)
  • Amazon Elastic Compute Cloud (EC2)
  • Amazon Elastic File System (EFS)
  • Amazon Elastic Load Balancing
  • Amazon Elastic MapReduce (EMR)
  • Amazon Glacier
  • Amazon Kinesis Data Streams
  • Amazon Redshift
  • Amazon Relational Database Service (RDS)
  • Amazon Route 53
  • Amazon S3 Transfer Acceleration
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Storage Service (S3)
  • Amazon Simple Workflow Service (SWF)
  • Amazon Virtual Private Cloud (VPC)
  • Amazon WorkSpaces
  • AWS CloudFormation
  • AWS CloudHSM
  • AWS CloudTrail
  • AWS Database Migration Service (DMS)
  • AWS Direct Connect
  • AWS Directory Service for Microsoft Active Directory
  • AWS Elastic Beanstalk
  • AWS Identity and Access Management (IAM)
  • AWS IoT Core
  • AWS Key Management Service (KMS)
  • AWS Lambda
  • AWS Lambda@Edge
  • AWS Shield
  • AWS Step Functions
  • AWS Storage Gateway
  • AWS Systems Manager
  • VM Import/Export

AWS Compliance Center for financial services now available

Post Syndicated from Frank Fallon original https://aws.amazon.com/blogs/security/aws-compliance-center-financial-services/

On Tuesday, September 4, AWS announced the launch of an AWS Compliance Center for our Financial Services (FS) customers. This addition to our compliance offerings gives you a central location to research cloud-related regulatory requirements that impact the financial services industry. Prior to the launch of the AWS Compliance Center, customers preparing to adopt AWS for their FS workloads typically had to browse multiple in-depth sources to understand the expectations of regulatory agencies in each country.

The AWS Compliance Center is designed to make this process easier. It aggregates any given country’s regulatory position regarding the adoption and operation of cloud services. Key components of the FS industry—including regulatory approvals, data privacy, and data protection—are explained, along with the steps you must take throughout your adoption of AWS services to help satisfy regulatory requirements. You can browse the information in the portal and export it as printable documents.

We expect the AWS Compliance Center to evolve as our customers’ compliance needs change and as regulators begin to address the challenges and opportunities that cloud services create in the FS industry. The AWS Compliance Center covers 13 countries, and we’ll continue to enhance it with additional countries and information based on your needs.

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 14 Services in the AWS US East/West and GovCloud Regions

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-moderate-provisional-authorization/

Since I launched our FedRAMP program way back in 2013, it has always excited me to talk about how we’re continually expanding the scope of our compliance programs because that means you’re able to use more of our services for sensitive and regulated workloads. Up to this point, we’ve had 22 services in our US East/West Regions under FedRAMP Moderate and 21 services in our GovCloud Region under FedRAMP High.

Today, I’m happy to tell you about the latest expansion of our FedRAMP program, which makes for a 64% overall increase in FedRAMP-covered services. We’ve achieved JAB authorizations for an additional 14 FedRAMP Moderate services in our US East/West Regions, and three of those services also received FedRAMP High in our GovCloud Region. Check out the services below. All the services are available in the US East/West Regions, and the services with asterisks are also available in GovCloud.

  • Amazon API Gateway
  • Amazon Cloud Directory
  • Amazon Cognito
  • Amazon ElastiCache*
  • Amazon Inspector
  • Amazon Macie
  • Amazon QuickSight
  • Amazon Route 53
  • AWS WAF
  • AWS Config
  • AWS Database Migration Service*
  • AWS Lambda
  • AWS Shield Advanced
  • AWS Snowball/Snowball Edge*

You can now see our updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our site. As always, our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.

Our customer obsession starts with you. It’s been a personal goal of mine, and a point of direct feedback from you, to accelerate the pace at which we’re onboarding services into all of our compliance programs, not just FedRAMP. So, we’ll continue to work with you and with regulatory and compliance bodies around the world to ensure that we’re raising the bar on your security and compliance needs and continually earning the trust you place in us.

To learn about what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. And certainly, stay tuned for more exciting future FedRAMP updates.

Want more AWS Security news? Follow us on Twitter.

How to use AWS Secrets Manager to rotate credentials for all Amazon RDS database types, including Oracle

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/

You can now use AWS Secrets Manager to automatically rotate credentials for Oracle, Microsoft SQL Server, and MariaDB databases hosted on Amazon Relational Database Service (Amazon RDS). Previously, I showed how to automatically rotate credentials for a MySQL database hosted on Amazon RDS with AWS Secrets Manager. With today’s launch, you can use Secrets Manager to automatically rotate credentials for all types of databases hosted on Amazon RDS.

In this post, I review the key features of Secrets Manager. You’ll then learn:

  1. How to store the database credential for the superuser of an Oracle database hosted on Amazon RDS
  2. How to store the Oracle database credential used by an application
  3. How to configure Secrets Manager to rotate both Oracle credentials automatically on a schedule that you define

Key features of Secrets Manager

AWS Secrets Manager makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. The key features of this service include the ability to:

  1. Secure and manage secrets centrally. You can store, view, and manage all your secrets centrally. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. You can use fine-grained IAM policies or resource-based policies to control access to your secrets. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  2. Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for all Amazon RDS databases (MySQL, PostgreSQL, Oracle, Microsoft SQL Server, MariaDB, and Amazon Aurora). You can also extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets (a minimal API sketch follows this list).
  3. Transmit securely. Secrets are transmitted securely over the Transport Layer Security (TLS) 1.2 protocol. You can also use Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink to keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity.
  4. Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts, licensing fees, or infrastructure and personnel costs. For example, a typical production-scale web application will generate an estimated monthly bill of $6. If you follow the instructions in this blog post, your estimated monthly bill for Secrets Manager will be $1. Note: you may incur additional charges for using Amazon RDS and AWS Lambda, if you’ve already consumed the free tier for these services.
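
As the API-level sketch promised in feature 2 above, here is roughly what storing a secret and enabling 60-day rotation look like with boto3. Every name, value, and ARN is a placeholder, and the rotation Lambda function is assumed to already exist (the console walkthrough below creates one for you).

```python
import json

import boto3

secretsmanager = boto3.client("secretsmanager")

# Store a database credential. All names and values are placeholders.
secret = secretsmanager.create_secret(
    Name="Database/Development/Example-User",
    Description="Example application credential",
    SecretString=json.dumps({"username": "app_user", "password": "CHANGE_ME"}),
)

# Turn on automatic rotation every 60 days, delegated to a rotation
# Lambda function that is assumed to exist already.
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:example-rotator",
    RotationRules={"AutomaticallyAfterDays": 60},
)
```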

Now that you’re familiar with Secrets Manager features, I’ll show you how to store and automatically rotate credentials for an Oracle database hosted on Amazon RDS. I divided these instructions into three phases:

  1. Phase 1: Store and configure rotation for the superuser credential
  2. Phase 2: Store and configure rotation for the application credential
  3. Phase 3: Retrieve the credential from Secrets Manager programmatically

Prerequisites

To follow along, your AWS Identity and Access Management (IAM) principal (user or role) requires the SecretsManagerReadWrite AWS managed policy to store the secrets. Your principal also requires the IAMFullAccess AWS managed policy to create and configure permissions for the IAM role used by Lambda for executing rotations. You can use IAM permissions boundaries to grant an employee the ability to configure rotation without also granting them full administrative access to your account.

Phase 1: Store and configure rotation for the superuser credential

From the Secrets Manager console, on the right side, select Store a new secret.

Since I’m storing credentials for a database hosted on Amazon RDS, I select Credentials for RDS database. Next, I input the user name and password for the superuser. I start by securing the superuser because it’s the most powerful database credential and has full access to the database.
 

Figure 1: For "Select secret type," choose "Credentials for RDS database"

Figure 1: For “Select secret type,” choose “Credentials for RDS database”

For this example, I choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS Key Management Service (AWS KMS). To learn more, read the Using Your AWS KMS CMK documentation.
 

Figure 2: Choose either DefaultEncryptionKey or use a CMK

Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance oracle-rds-database from the list, and then I select Next.

I then specify values for Secret name and Description. For this example, I use Database/Development/Oracle-Superuser as the name and enter a description of this secret, and then select Next.
 

Figure 3: Provide values for “Secret name” and “Description”

Since this database is not yet being used, I choose to enable rotation. To do so, I select Enable automatic rotation, and then set the rotation interval to 60 days. Remember, if this database credential is currently being used, first update the application (see phase 3) to use Secrets Manager APIs to retrieve secrets before enabling rotation.
 

Figure 4: Select “Enable automatic rotation”

Next, Secrets Manager requires permissions to rotate this secret on my behalf. Because I'm storing the credentials for the superuser, Secrets Manager can use this credential to perform rotations. Therefore, on the same screen, I select Use this secret, and then select Next.

Finally, I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored a secret in Secrets Manager.

Note: Secrets Manager will now create a Lambda function in the same VPC as my Oracle database and trigger this function periodically to change the password for the superuser. I can view the name of the Lambda function in the Rotation configuration section of the Secret details page.

The banner on the next screen confirms that I’ve successfully configured rotation and the first rotation is in progress, which enables me to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
 

Figure 5: The confirmation notification
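
If you'd like to confirm the rotation configuration programmatically, here's a minimal sketch using Python and boto3. The language and Region are my assumptions for illustration; the API call is a standard Secrets Manager operation:

import boto3

client = boto3.client("secretsmanager", region_name="us-west-2")  # Region is an assumption

# DescribeSecret reports whether rotation is enabled, which Lambda function
# performs it, and the configured rotation interval
detail = client.describe_secret(SecretId="Database/Development/Oracle-Superuser")
print(detail["RotationEnabled"])                          # True once rotation is configured
print(detail["RotationLambdaARN"])                        # rotation function created by Secrets Manager
print(detail["RotationRules"]["AutomaticallyAfterDays"])  # 60 in this walkthrough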

Phase 2: Store and configure rotation for the application credential

The superuser is a powerful credential that should be used only for administrative tasks. To enable your applications to access a database, create a unique database credential per application and grant these credentials limited permissions. You can use these database credentials to read or write to database tables required by the application. As a security best practice, deny the ability to perform management actions, such as creating new credentials.

In this phase, I will store the credential that my application will use to connect to the Oracle database. To get started, from the Secrets Manager console, on the right side, select Store a new secret.

Next, I select Credentials for RDS database, and input the user name and password for the application credential.

I continue to use the default encryption key. I select the DB instance oracle-rds-database, and then select Next.

I specify values for Secret name and Description. For this example, I use Database/Development/Oracle-Application-User as the name, enter a description of this secret, and then select Next.

I now configure rotation. Once again, since my application is not using this database credential yet, I’ll configure rotation as part of storing this secret. I select Enable automatic rotation, and set the rotation interval to 60 days.

Next, Secrets Manager requires permissions to rotate this secret on behalf of my application. Earlier in the post, I mentioned that application credentials have limited permissions and are unable to change their own passwords. Therefore, I will use the superuser credential, Database/Development/Oracle-Superuser, that I stored in Phase 1 to rotate the application credential. With this configuration, Secrets Manager creates a clone of the application user.
 

Figure 6: Select the superuser credential

Note: Creating a clone of the application user is the preferred rotation mechanism because the old version of the secret continues to operate and handle service requests while the new version is prepared and tested. There's no application downtime when switching between versions.

I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored the application credential in Secrets Manager.

As mentioned in Phase 1, AWS Secrets Manager creates a Lambda function in the same VPC as the database and then triggers this function periodically to rotate the secret. Since I chose to use the existing superuser secret to rotate the application secret, I will grant the rotation Lambda function permission to retrieve the superuser secret. To grant this permission, I first select the role link in the confirmation banner.
 

Figure 7: Select the “role” link that’s in the confirmation notification

Next, in the Permissions tab, I select SecretsManagerRDSMySQLRotationMultiUserRolePolicy0. Then I select Edit policy.
 

Figure 8: Edit the policy on the “Permissions” tab

In this step, I update the policy (see below) and select Review policy. When following along, remember to replace the placeholder ARN-OF-SUPERUSER-SECRET with the ARN of the secret you stored in Phase 1.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Sid": "GrantPermissionToUse",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "ARN-OF-SUPERUSER-SECRET"
    }
  ]
}

Here’s what it will look like:
 

Figure 9: Edit the policy

Next, I select Save changes. I have now completed all the steps required to configure rotation for the application credential, Database/Development/Oracle-Application-User.

Phase 3: Retrieve the credential from Secrets Manager programmatically

Now that I have stored the secret in Secrets Manager, I add code to my application to retrieve the database credential from Secrets Manager. This code sets up the client, then retrieves and decrypts the secret Database/Development/Oracle-Application-User, as the sketch below shows.
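
Here's a minimal sketch of that retrieval in Python with boto3. The language, the helper function name, and the Region are my assumptions for illustration; only the secret name comes from this walkthrough:

import json
import boto3

# Hypothetical helper; the Region is assumed
def get_db_credential(secret_name="Database/Development/Oracle-Application-User",
                      region_name="us-west-2"):
    """Retrieve and decrypt a database credential from AWS Secrets Manager."""
    client = boto3.client("secretsmanager", region_name=region_name)
    # GetSecretValue returns the secret already decrypted with the associated KMS key
    response = client.get_secret_value(SecretId=secret_name)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

Because the application fetches the credential at connection time instead of reading a hard-coded password, rotations take effect without redeploying the application.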

Remember, applications require permissions to retrieve the secret, Database/Development/Oracle-Application-User, from Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permission to read the secret from Secrets Manager. It also uses the Resource element to limit my application to reading only the Database/Development/Oracle-Application-User secret. You can refer to the Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.


{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "RetrieveDbCredentialFromSecretsManager",
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "arn:aws:secretsmanager:<AWS-REGION>:<ACCOUNT-NUMBER>:secret:Database/Development/Oracle-Application-User"
  }
}

In the above policy, remember to replace the placeholder <AWS-REGION> with the AWS region that you're using and the placeholder <ACCOUNT-NUMBER> with your AWS account number.

Summary

I explained the key benefits of Secrets Manager as they relate to RDS and showed you how to help meet your compliance requirements by configuring Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from the University of Kentucky.

New guide helps financial services customers in Brazil navigate cloud requirements

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/new-guide-helps-financial-services-customers-in-brazil-navigate-cloud-requirements/

We have a new resource to help our financial services customers in Brazil navigate regulatory requirements for using the cloud. The AWS User Guide to Financial Services Regulations in Brazil is a deep dive into the Brazilian National Monetary Council's Resolution No. 4,658, a cybersecurity cloud resolution that is the first of its kind from regulators in Brazil. The guide details how our services can help you meet these security expectations.

The resolution covers topics such as implementing a cybersecurity policy, incident response, entering into agreements with cloud service providers, subcontracting, business continuity, and notification requirements. Our guide addresses each of these issues and provides specific guidance on how you can use AWS to satisfy requirements.

The AWS User Guide to Financial Services Regulations in Brazil is part of a series of publications that seek to facilitate customer compliance. It’s available in English and Portuguese. We’ll continue to monitor the regulatory environment in Brazil and around the world and to publish additional resources.

If you have any questions, please contact your account executive.

Amazon ElastiCache for Redis now PCI DSS compliant, allowing you to process sensitive payment card data in-memory for faster performance

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/amazon-elasticache-redis-now-pci-dss-compliant-payment-card-data-in-memory/

Amazon ElastiCache for Redis has achieved compliance with the Payment Card Industry Data Security Standard (PCI DSS). This means that you can now use ElastiCache for Redis for low-latency and high-throughput in-memory processing of sensitive payment card data, such as Customer Cardholder Data (CHD). ElastiCache for Redis is a Redis-compatible, fully managed, in-memory data store and caching service in the cloud. It delivers sub-millisecond response times at millions of requests per second.

To create a PCI-compliant ElastiCache for Redis cluster, you must use Redis engine version 4.0.10 or higher and current-generation node types. The service offers various data security controls to store, process, and transmit sensitive financial data. These controls include in-transit encryption (TLS), at-rest encryption, and Redis AUTH. There's no additional charge for PCI DSS-compliant ElastiCache for Redis.
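
To illustrate those controls in use, here's a minimal sketch of a Python client connecting with TLS and Redis AUTH via the open-source redis-py library; the endpoint and auth token are placeholders, not values from this post:

import redis

# ssl=True gives in-transit encryption; password supplies the cluster's Redis AUTH token.
# The host and token below are placeholders for your own cluster's values.
client = redis.Redis(
    host="my-cluster.abc123.usw2.cache.amazonaws.com",
    port=6379,
    ssl=True,
    password="my-redis-auth-token",
)
client.set("session:12345", "tokenized-card-reference", ex=300)  # expires after 5 minutes
print(client.get("session:12345"))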

In addition to PCI, ElastiCache for Redis is a HIPAA eligible service. If you want to use your existing Redis clusters that process healthcare information to also process financial information while meeting PCI requirements, you must upgrade your Redis clusters from 3.2.6 to 4.0.10. For more details, see Upgrading Engine Versions and ElastiCache for Redis Compliance.

Meeting these high bars for security and compliance means ElastiCache for Redis can be used for secure database and application caching, session management, queues, chat/messaging, and streaming analytics in industries as diverse as financial services, gaming, retail, e-commerce, and healthcare. For example, you can use ElastiCache for Redis to build an internet-scale, ride-hailing application and add digital wallets that store customer payment card numbers, thus enabling people to perform financial transactions securely and at industry standards.

To get started, see ElastiCache for Redis Compliance Documentation.

Want more AWS Security news? Follow us on Twitter.

U.K. National Health Services IGToolkit Assessment report now available

Post Syndicated from Stephen McDermid original https://aws.amazon.com/blogs/security/u-k-national-health-services-igtoolkit-assessment-report-now-available/

We know that customers often seek out third-party tools to baseline and benchmark their environments. Additionally, healthcare and life sciences (HCLS) customers have specific needs, which is why we continually strive to meet relevant global standards that validate our security and compliance. Today, we'd like to take a look at our latest compliance report, from the National Health Services (NHS) Information Governance (IG) Toolkit for UK Health. This report has a unique element: it allows you to assess us, as well as compare AWS to other service providers.

The AWS IGToolkit Assessment Report gives you the ability to evaluate how we performed against IGToolkit requirements. Our overall score of 98 percent and grade of “Satisfactory,” which is the highest grade awarded, mean you can use our services to host and process NHS patient data with assurance that we are following exacting requirements for data security.

The NHS IGToolkit site is straightforward to use, and reports can be downloaded as a spreadsheet. AWS is categorised as a “Commercial Third Party.” For more information about the NHS IGToolkit, you can download this PDF.

As always, security is a shared responsibility, so while this report allows you to use AWS, you are required to complete your own NHS IGToolkit assessment. Finally, we are also working on addressing the Data Security and Protection Toolkit, which will replace the IGToolkit, and will release further news over the coming months as we move towards submitting our assessment.

Powering HIPAA-compliant workloads using AWS Serverless technologies

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/powering-hipaa-compliant-workloads-using-aws-serverless-technologies/

This post courtesy of Mayank Thakkar, AWS Senior Solutions Architect

Serverless computing refers to an architecture discipline that allows you to build and run applications or services without thinking about servers. You can focus on your applications, without worrying about provisioning, scaling, or managing any servers. You can use serverless architectures for nearly any type of application or backend service. AWS handles the heavy lifting around scaling, high availability, and running those workloads.

The AWS HIPAA program enables covered entities—and those business associates subject to the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA)—to use the secure AWS environment to process, maintain, and store protected health information (PHI). Based on customer feedback, AWS is working to add more services to the HIPAA program, including serverless technologies.

AWS recently announced that AWS Step Functions has achieved HIPAA-eligibility status and has been added to the AWS Business Associate Addendum (BAA), adding to a growing list of HIPAA-eligible services. The BAA is an AWS contract that is required under HIPAA rules to ensure that AWS appropriately safeguards PHI. The BAA also serves to clarify and limit, as appropriate, the permissible uses and disclosures of PHI by AWS, based on the relationship between AWS and customers and the activities or services being performed by AWS.

Along with HIPAA eligibility for most of the rest of the AWS serverless platform, the inclusion of Step Functions is a major win for organizations looking to process PHI using serverless technologies, opening up numerous new use cases and patterns. You can still use non-eligible services to orchestrate the storage, transmission, and processing of the metadata around PHI, but not the PHI itself.

In this post, I examine some common serverless use cases that I see in the healthcare and life sciences industry and show how AWS Serverless can be used to build powerful, cost-efficient, HIPAA-eligible architectures.

Provider directory web application

Running HIPAA-compliant web applications (like provider directories) on AWS is a common use case in the healthcare industry. Healthcare providers are often looking for ways to build and run applications and services without thinking about servers. They are also looking for ways to provide the most cost-effective and scalable delivery of secure health-related information to members, providers, and partners worldwide.

Unpredictable access patterns and spiky workloads often force organizations to provision for peak in these cases, and they end up paying for idle capacity. AWS Auto Scaling solves this challenge to a great extent but you still have to manage and maintain the underlying servers from a patching, high availability, and scaling perspective. AWS Lambda (along with other serverless technologies from AWS) removes this constraint.

The above architecture shows a serverless way to host a customer-facing website, with Amazon S3 hosting the static files (.js, .css, images, and so on). If your website is based on client-side technologies, you can eliminate the need to run a web server farm. In addition, you can use S3 features like server-side encryption and bucket access policies to lock down access to the content.
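
As one example of locking down that bucket, here's a minimal sketch (Python with boto3; the bucket name is hypothetical) that turns on default server-side encryption for the static assets:

import boto3

s3 = boto3.client("s3")

# Enforce SSE-S3 default encryption on the static-assets bucket;
# the bucket name is a placeholder
s3.put_bucket_encryption(
    Bucket="provider-directory-static-assets",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)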

Using Amazon CloudFront, a global content delivery network, with S3 origins can bring your content closer to the end user and cut down S3 access costs by caching the content at the edge. In addition, Lambda@Edge gives you the ability to run your own code to customize the content that CloudFront delivers. That significantly reduces latency and improves the end user experience while maintaining the same Lambda development model. Some common examples include checking cookies, inspecting headers or authorization tokens, rewriting URLs, and making calls to external resources to confirm user credentials and generate HTTP responses.

You can power the APIs needed for your client application with Amazon API Gateway, which takes care of creating, publishing, maintaining, monitoring, and securing APIs at any scale. API Gateway also handles traffic management, authorization and access control, monitoring, API version management, and the other tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, which allows you to focus on your business logic. Direct, secure, and authenticated integration with Lambda functions allows this serverless architecture to scale up and down seamlessly with incoming traffic.
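
To make the integration concrete, here's a minimal sketch of a Lambda proxy-integration handler behind API Gateway, written in Python; the resource shape and field names are illustrative assumptions:

import json

def handler(event, context):
    """Minimal API Gateway proxy-integration handler for a provider lookup."""
    # With proxy integration, path parameters arrive in the event payload
    provider_id = (event.get("pathParameters") or {}).get("id", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"providerId": provider_id, "status": "active"}),
    }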

The CloudFront integration with AWS WAF provides a reliable way to protect your application against common web exploits that could affect application availability, compromise security, or consume excessive resources.

API Gateway can integrate directly with Lambda, which by default can access public resources. Lambda functions can also be configured to access your Amazon VPC resources. If you have extended your data center to AWS using AWS Direct Connect or a VPN connection, Lambda can access your on-premises resources, with the traffic flowing over your VPN connection (or Direct Connect) instead of the public internet.

All the services mentioned above (except Amazon EC2) are fully managed by AWS in terms of high availability, scaling, provisioning, and maintenance, giving you a cost-effective way to host your web applications. It's pay-as-you-go vs. pay-as-you-provision. Spikes in demand, typically encountered during the enrollment season, are handled gracefully, with these services scaling automatically to meet demand and then scaling back down, keeping your costs under control.

All AWS services referenced in the above architecture are HIPAA-eligible, thus enabling you to store, process, and transmit PHI, as long as it complies with the BAA.

Medical device telemetry (ingesting data @ scale)

The ever-increasing presence of IoT devices in the healthcare industry has created the challenges of ingesting this data at scale and making it available for processing as soon as it is produced. Processing this data in real time (or near-real time) is key to delivering urgent care to patients.

Lambda's low startup times and (theoretically) infinite scalability make it a great candidate for these kinds of use cases. Balancing ballooning healthcare costs against the timely delivery of care is a never-ending challenge. With subsecond billing and no charge when code isn't running, Lambda becomes the best choice for AWS customers.

These end-user medical devices emit a lot of telemetry data, which requires constant analysis and real-time tracking and updating. For example, devices like infusion pumps, personal use dialysis machines, and so on require tracking and alerting of device consumables and calibration status. They also require updates for these settings. Consider the following architecture:

Typically, these devices are connected to an edge node or collector, which provides sufficient computing resources to authenticate itself to AWS and start streaming data to Amazon Kinesis Streams. The collector uses the Kinesis Producer Library to simplify high throughput to a Kinesis data stream. You can also use the server-side encryption feature, supported by Kinesis Streams, to achieve encryption-at-rest. Kinesis provides a scalable, highly available way to achieve loose coupling between data-producing (medical devices) and data-consuming (Lambda) layers.

After the data is transported via Kinesis, Lambda can then be used to process this data in real time, storing derived insights in Amazon DynamoDB, which can then power a near-real time health dashboard. Caregivers can access this real-time data to provide timely care and manage device settings.
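
Here's a minimal sketch of that consuming Lambda function in Python; the table name and payload fields are hypothetical:

import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DeviceTelemetry")  # hypothetical table name

def handler(event, context):
    """Process a batch of Kinesis records and store derived readings in DynamoDB."""
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "DeviceId": payload["device_id"],
            "Timestamp": payload["timestamp"],
            "Reading": str(payload["reading"]),
        })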

End-user medical devices, via the edge node, can also connect to and poll an API hosted on API Gateway to check for calibration settings, firmware updates, and so on. The modifications can be easily updated by admins, providing a scalable way to manage these devices.

For historical analysis and pattern prediction, the staged data (stored in S3) can be processed in batches. Use AWS Batch, Amazon EMR, or any custom logic running on a fleet of Amazon EC2 instances to gain actionable insights. Lambda can also be used to process data in a MapReduce fashion, as detailed in the Ad Hoc Big Data Processing Made Simple with Serverless MapReduce post.

You can also build high-throughput batch workflows or orchestrate Apache Spark applications using Step Functions, as detailed in the Orchestrate Apache Spark applications using AWS Step Functions and Apache Livy post. These insights can then be used to calibrate the medical devices to achieve effective outcomes.

Use Lambda to load data into Amazon Redshift, a cost-effective, petabyte-scale data warehouse offering. One of my colleagues, Ian Meyers, pointed this out in his Zero-Administration Amazon Redshift Database Loader post.

Mobile diagnostics

Another use case that I see is using mobile devices to provide diagnostic care in out-patient settings. These environments typically lack the robust IT infrastructure that clinics and hospitals can provide, and they often have only intermittent internet connectivity. Various biosensors (otoscopes, thermometers, heart rate monitors, and so on) can easily talk to smartphones, which can then act as aggregators and analyzers before forwarding the data to a central processing system. After the data is in the system, caregivers and practitioners can view and act on it.

In the above diagram, an application running on a mobile device (iOS or Android) talks to various biosensors and collects diagnostic data. Using AWS mobile SDKs along with Amazon Cognito, these smart devices can authenticate themselves to AWS and access the APIs hosted on API Gateway. Amazon Cognito also offers data synchronization across various mobile devices, which helps you to build “offline” features in your mobile application. Amazon Cognito Sync resolves sync conflicts and handles intermittent network connectivity, enabling you to focus on delivering great app experiences instead of creating and managing a user data sync solution.

You can also use CloudFront and Lambda@Edge, as detailed in the first use case of this post, to cache content at edge locations and provide some light processing closer to your end users.

Lambda acts as a middle tier, processing the CRUD operations on the incoming data and storing it in DynamoDB, which is again exposed to caregivers through another set of Lambda functions and API Gateway. Caregivers can access the information through a browser-based interface, with Lambda processing the middle-tier application logic. They can view the historical data, compare it with fresh data coming in, and make corrections. Caregivers can also react to incoming data and issue alerts, which are delivered securely to the smart device through Amazon SNS.

Also, by using DynamoDB Streams and its integration with Lambda, you can implement Lambda functions that react to data modifications in DynamoDB tables (and hence, incoming device data). This gives you a way to codify common reactions to incoming data in near-real time, as the sketch below shows.
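
As a sketch of that pattern in Python, a stream-triggered function might publish an alert when a newly written reading crosses a threshold; the topic ARN, attribute names, and threshold are all assumptions for illustration:

import boto3

sns = boto3.client("sns")
ALERT_TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:device-alerts"  # placeholder
THRESHOLD = 104.0  # hypothetical alert threshold

def handler(event, context):
    """React to newly inserted telemetry items delivered by DynamoDB Streams."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        image = record["dynamodb"]["NewImage"]  # attributes arrive in DynamoDB JSON form
        reading = float(image["Reading"]["N"])
        if reading > THRESHOLD:
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Message=f"Device {image['DeviceId']['S']} reported {reading}",
            )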

Lambda ecosystem

As I discussed in the above use cases, Lambda is a powerful, event-driven, stateless, on-demand compute platform offering scalability, agility, security, and reliability, along with a fine-grained cost structure.

For some organizations, migrating from a traditional programming model to a microservices-driven model can be a steep curve. Also, to build and maintain complex applications using Lambda, you need a broad set of tools, from local debugging support to application performance monitoring. The following tools and services can assist you in building world-class applications with minimal effort:

  • AWS X-Ray is a distributed tracing system that allows developers to analyze and debug distributed applications in production, such as those built using a microservices (Lambda) architecture. AWS X-Ray was recently added to the AWS BAA, opening the doors for processing PHI workloads.
  • AWS Step Functions helps build HIPAA-compliant complex workflows using Lambda. It provides a way to coordinate the components of distributed applications and Lambda functions using visual workflows.
  • AWS SAM provides a fast and easy way of deploying serverless applications. You can write simple templates to describe your functions and their event sources (API Gateway, S3, Kinesis, and so on). AWS recently relaunched the AWS SAM CLI, which allows you to create a local testing environment that simulates the AWS runtime environment for Lambda. It allows faster, iterative development of your Lambda functions by eliminating the need to redeploy your application package to the Lambda runtime.

For more details, see the Serverless Application Developer Tooling webpage.

Conclusion

There are numerous other health care and life science use cases that customers are implementing, using Lambda with other AWS services. AWS is committed to easing the effort of implementing health care solutions in the cloud. Making Lambda HIPAA-eligible is just another milestone in the journey. For more examples of use cases, see Serverless. For the latest list of HIPAA-eligible services, see HIPAA Eligible Services Reference.

Accept a BAA with AWS for all accounts in your organization

Post Syndicated from Kristen Haught original https://aws.amazon.com/blogs/security/accept-a-baa-with-aws-for-all-accounts-in-your-organization/

I’m excited to announce to our healthcare customers and partners that you can now accept a single AWS Business Associate Addendum (BAA) for all accounts within your organization. Once accepted, all current and future accounts created or added to your organization will immediately be covered by the BAA.

Our team is always thinking about how we can reduce manual processes related to your compliance tasks. That's why I've been looking forward to the release of AWS Artifact Organization Agreements, which was designed to simplify the BAA process and improve your experience when designating AWS accounts as HIPAA accounts. Previously, if you wanted to designate several AWS accounts, you had to sign in to each account individually to accept the BAA, or email us. Now, an authorized master account user can accept the BAA once to automatically designate all existing and future member accounts in the organization as HIPAA accounts for use with protected health information (PHI). This release addresses a frequent customer request: the ability to quickly designate multiple HIPAA accounts and confirm those accounts are covered under the terms of the BAA.

If you already have a BAA in place and want to leverage this new capability, a master account user can accept the new AWS Organizations BAA in AWS Artifact today. To get started, your organization must use AWS Organizations to manage your accounts, and “all features” must be enabled. Learn more about creating an organization here.

Once you are using AWS Organizations with all features enabled, and you have the necessary user permissions, then accepting the AWS Organizations BAA takes about two minutes. We’ve created a video that shows you the process, step-by-step.

If your organization prefers to continue managing HIPAA accounts individually, you can still do that.  We have streamlined the process for accepting an individual account BAA as well. It takes less than two minutes to designate a single account as a HIPAA account in AWS Artifact. You can watch the new video here to learn how.

As with all AWS Artifact features, there is no additional cost to use AWS Artifact to review, accept, and manage individual account BAAs or the new organization BAA. To learn more, go to the FAQ page.

How to connect to AWS Secrets Manager service within a Virtual Private Cloud

Post Syndicated from Divya Sridhar original https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/

You can now use AWS Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink and keep traffic between your VPC and Secrets Manager within the AWS network.

AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. When your application running within an Amazon VPC communicates with Secrets Manager, this communication traverses the public internet. By using Secrets Manager with Amazon VPC endpoints, you can now keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity. You can start using Secrets Manager with Amazon VPC endpoints by creating an Amazon VPC endpoint for Secrets Manager with a few clicks in the VPC console or via the AWS CLI. Once you create the VPC endpoint, you can start using it without making any code or configuration changes in your application.

The diagram demonstrates how Secrets Manager works with Amazon VPC endpoints. It shows how I retrieve a secret stored in Secrets Manager from an Amazon EC2 instance. When the request is sent to Secrets Manager, the entire data flow is contained within the VPC and the AWS network.

Figure 1: How Secrets Manager works with Amazon VPC endpoints

Solution overview

In this post, I show you how to use Secrets Manager with an Amazon VPC endpoint. In this example, we have an application running on an EC2 instance in a VPC named vpc-5ad42b3c. This application requires the password for an RDS database running in the same VPC. I have stored the database password in Secrets Manager. I will now show how to:

  1. Create an Amazon VPC endpoint for Secrets Manager using the VPC console.
  2. Use the Amazon VPC endpoint via AWS CLI to retrieve the RDS database secret stored in Secrets Manager from an application running on an EC2 instance.

Step 1: Create an Amazon VPC endpoint for Secrets Manager

  1. Open the Amazon VPC console, select Endpoints, and then select Create Endpoint.
  2. Select AWS Services as the Service category, and then, in the Service Name list, select the Secrets Manager endpoint service named com.amazonaws.us-west-2.secretsmanager.
     
    Figure 2: Options to select when creating an endpoint

  3. Specify the VPC you want to create the endpoint in. For this post, I chose the VPC named vpc-5ad42b3c where my RDS instance and application are running.
  4. To create a VPC endpoint, you need to specify the private IP address range in which the endpoint will be accessible. To do this, select the subnet for each Availability Zone (AZ). This restricts the VPC endpoint to the private IP address range specific to each AZ and also creates an AZ-specific VPC endpoint. Specifying more than one subnet-AZ combination helps improve fault tolerance and make the endpoint accessible from a different AZ in case of an AZ failure. Here, I specify subnet IDs for availability zones us-west-2a, us-west-2b, and us-west-2c:
     
    Figure 3: Specifying subnet IDs

  5. Select the Enable Private DNS Name checkbox for the VPC endpoint. Private DNS resolves the standard Secrets Manager DNS hostname https://secretsmanager.<region>.amazonaws.com to the private IP addresses associated with the VPC endpoint-specific DNS hostname. As a result, you can access the Secrets Manager VPC endpoint via the AWS Command Line Interface (AWS CLI) or AWS SDKs without making any code or configuration changes to update the Secrets Manager endpoint URL.
     
    Figure 4: The “Enable Private DNS Name” checkbox

  6. Associate a security group with this endpoint. The security group enables you to control the traffic to the endpoint from resources in your VPC. For this post, I chose to associate the security group named sg-07e4197d that I created earlier. This security group has been set up to allow all instances running within VPC vpc-5ad42b3c to access the Secrets Manager VPC endpoint. Select Create endpoint to finish creating the endpoint.
     
    Figure 5: Associate a security group and create the endpoint

  7. To view the details of the endpoint you created, select the link on the console.
     
    Figure 6: Viewing the endpoint details

  8. The Details tab shows all the DNS hostnames generated while creating the Amazon VPC endpoint that can be used to connect to Secrets Manager. I can now use the standard endpoint secretsmanager.us-west-2.amazonaws.com or one of the VPC-specific endpoints to connect to Secrets Manager within vpc-5ad42b3c, where my RDS instance and application also reside. (If you prefer to script these steps rather than use the console, see the sketch after this list.)
     
    Figure 7: The “Details” tab
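
Here's a minimal sketch of the scripted alternative using Python and boto3; the subnet and security group IDs are placeholders standing in for the values chosen above:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create an interface endpoint equivalent to the console steps above;
# the subnet and security group IDs are placeholders
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-5ad42b3c",
    ServiceName="com.amazonaws.us-west-2.secretsmanager",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
    SecurityGroupIds=["sg-07e4197d"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])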

Step 2: Access Secrets Manager through the VPC endpoint

Now that I have created the VPC endpoint, all traffic between my application running on an EC2 instance hosted within VPC named vpc-5ad42b3c and Secrets Manager will be within the AWS network. This connection will use the VPC endpoint and I can use it to retrieve my RDS database secret stored in Secrets Manager. I can retrieve the secret via the AWS SDK or CLI. As an example, I can use the CLI command shown below to retrieve the current version of my RDS database secret:

$ aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT

Since my AWS CLI is configured for the us-west-2 region, it uses the standard Secrets Manager endpoint URL https://secretsmanager.us-west-2.amazonaws.com. This standard endpoint automatically routes to the VPC endpoint because I enabled private DNS hostname support when creating the VPC endpoint. The above command results in the following output:


{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDatabaseSecret-a1b2c3",
  "Name": "MyDatabaseSecret",
  "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
  "SecretString": "{\n  \"username\":\"david\",\n  \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": 1523477145.713
} 

Summary

I've shown you how to create a VPC endpoint for AWS Secrets Manager and retrieve an RDS database secret using the VPC endpoint. Secrets Manager VPC endpoints help you meet compliance and regulatory requirements to limit public internet connectivity within your VPC. They enable your applications running within a VPC to use Secrets Manager while keeping traffic between the VPC and Secrets Manager within the AWS network. You can start using Amazon VPC endpoints for Secrets Manager by creating endpoints in the VPC console or via the AWS CLI. Once created, your applications that interact with Secrets Manager do not require any code or configuration changes.

To learn more about connecting to Secrets Manager through a VPC endpoint, read the Secrets Manager documentation. For guidance about your overall VPC network structure, see Practical VPC Design.

If you have questions about this feature or anything else related to Secrets Manager, start a new thread in the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.