Tag Archives: Security, Identity & Compliance

How to deploy CloudHSM to securely share your keys with your SaaS provider

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/security/how-to-deploy-cloudhsm-securely-share-keys-with-saas-provider/

If your organization is using software as a service (SaaS), your data is likely stored and protected by the SaaS provider. However, depending on the type of data that your organization stores and the compliance requirements that it must meet, you might need more control over how the encryption keys are stored, protected, and used. In this post, I’ll show you two options for deploying and managing your own CloudHSM cluster to secure your keys, while still allowing trusted third-party SaaS providers to securely access your HSM cluster in order to perform cryptographic operations. You can also use this architecture when you want to share your keys with another business unit or with an application that’s running in a separate AWS account.

AWS CloudHSM is one of several cryptography services provided by AWS to help you secure your data and keys in the AWS cloud. AWS CloudHSM provides single-tenant HSMs based on third-party FIPS 140-2 Level 3 validated hardware, under your control, in your Amazon Virtual Private Cloud (Amazon VPC). You can generate and use keys on your HSM using CloudHSM command line tools or standards-compliant C, Java, and OpenSSL SDKs.

A related, more widely used service is AWS Key Management Service (KMS). KMS is generally easier to use, cheaper to operate, and is natively integrated with most AWS services. However, there are some use cases for which you may choose to rely on CloudHSM to meet your security and compliance requirements.

Solution Overview

There are two ways you can set up your VPC and CloudHSM clusters to allow trusted third-party SaaS providers to use the HSM cluster for cryptographic operations. The first option is to use VPC peering to allow traffic to flow between the SaaS provider’s HSM client VPC and your CloudHSM VPC, and to use a custom application to access the HSM.

The second option is to use KMS to manage the keys, specifying a custom key store to generate and store the keys. AWS KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage.

Decision Criteria: VPC Peering vs Custom Key Store

The right solution for you will depend on factors like your VPC configuration, security requirements, network setup, and the type of cryptographic operations you need. The following table provides a high-level summary of how these two options compare. Later in this post, I’ll go over both options in detail and explain the design considerations you need to be aware of before deploying the solution in your environment.

Technical Consideration | VPC Peering | Custom Key Store
Are you able to peer or connect your HSM VPC with your SaaS provider? | ✔ |
Is your SaaS provider sensitive to costs from KMS usage in their AWS account? | ✔ |
Do you need CloudHSM-specific cryptographic tasks like signing, HMAC, or random number generation? | ✔ |
Does your SaaS provider need to encrypt your data directly with the Master Key? | ✔ |
Does your application rely on a PKCS#11-compliant or JCE-compliant SDK? | ✔ |
Does your SaaS provider need to use the keys in AWS services? | | ✔
Do you need to log all key usage activities when SaaS providers use your HSM keys? | | ✔

Option 1: VPC Peering

 

Figure 1: Architecture diagram showing VPC peering between the SaaS provider’s HSM client VPC and the customer’s HSM VPC

Figure 1 shows how you can deploy a CloudHSM cluster in a dedicated HSM VPC and peer this HSM VPC with your service provider’s VPC to allow them to access the HSM cluster through the client/application. I recommend that you deploy the CloudHSM cluster in a separate HSM VPC to limit the scope of resources running in that VPC. Since VPC peering is not transitive, service providers will not have access to any resources in your application VPCs or any other VPCs that are peered with the HSM VPC.

It’s possible to use the HSM cluster for other purposes and applications, but you should be aware of the potential drawbacks before you do. This approach could make it harder for you to find non-overlapping CIDR ranges for use with your SaaS provider. It would also mean that your SaaS provider could accidentally overwrite HSM account credentials or lock out your HSMs, causing an availability issue for your other applications. For these reasons, I recommend that you dedicate a CloudHSM cluster for use with your SaaS providers and use small VPC and subnet sizes, like /27, so that you’re not wasting IP space and it’s easier to find IP ranges that don’t overlap with your SaaS provider’s.

If you’re using VPC peering, your HSM VPC CIDR cannot overlap with your SaaS provider’s VPC CIDR. Deploying the HSM cluster in a separate VPC gives you flexibility in selecting a CIDR range that doesn’t overlap with the service provider’s, since you don’t have to account for your other applications. Also, because you’re only hosting the HSM cluster in this VPC, you can choose a relatively small CIDR range.
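If you script the peering setup, it might look something like the following boto3 sketch. It’s illustrative only: the VPC, route table, and security group IDs, the account ID, and the CIDR ranges are placeholders, the SaaS provider still has to accept the peering connection from their own account and add their own routes, and the port range reflects what the CloudHSM client documentation lists at the time of writing.

# A minimal boto3 sketch of the Option 1 plumbing: peer the SaaS provider's
# HSM client VPC with your HSM VPC and open the CloudHSM ports to their CIDR.
# All IDs, the account number, and the CIDR ranges below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from your HSM VPC to the SaaS provider's VPC
# (the provider accepts it from their own account).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0hsm1111111111111",          # your HSM VPC
    PeerVpcId="vpc-0saas222222222222",      # SaaS provider's HSM client VPC
    PeerOwnerId="111122223333",             # SaaS provider's AWS account ID
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Route the provider's CIDR through the peering connection from your HSM subnets.
ec2.create_route(
    RouteTableId="rtb-0aaaabbbbccccdddd",
    DestinationCidrBlock="10.1.0.0/27",     # SaaS provider's (non-overlapping) CIDR
    VpcPeeringConnectionId=pcx_id,
)

# Allow only the CloudHSM client ports from the provider's CIDR on the
# cluster's security group (2223-2225 is what the CloudHSM client uses).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",         # CloudHSM cluster security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2223,
        "ToPort": 2225,
        "IpRanges": [{"CidrIp": "10.1.0.0/27"}],
    }],
)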

Design considerations

Here are additional considerations to think about when deploying this solution in your environment:

  • VPC peering allows resources in either VPC to communicate with each other as long as security groups, NACLs, and routing allow it. To improve security, place only the resources that are meant to be shared in the VPC, and secure communication at the port/protocol level by using security groups.
  • If you decide to revoke the SaaS provider’s access to your CloudHSM, you have two choices:
    • At the network layer, you can remove connectivity by deleting the VPC peering or by modifying the CloudHSM security groups to disallow the SaaS provider’s CIDR ranges.
    • Alternatively, you can log in to the CloudHSM as Crypto Officer (CO) and change the password or delete the crypto user that the SaaS provider is using.
  • If you’re deploying CloudHSM across multiple accounts or VPCs within your organization, you can also use AWS Transit Gateway to connect the CloudHSM VPC to your application VPCs. Transit Gateway is ideal when you have multiple application VPCs that need CloudHSM access, as it scales easily and you don’t have to worry about VPC peering limits or the number of peering connections to manage.
  • If you’re the SaaS provider, and you have multiple clients who might be interested in this solution, you must make sure that no customer’s IP space overlaps with yours. You must also make sure that each customer’s HSM VPC doesn’t overlap with any of the others. One solution is to dedicate one VPC per customer, keep the client/application dedicated to that customer, and peer this VPC with your application VPC. This reduces the CIDR-overlap dependency among your customers.

Option 2: Custom Key Store

As the AWS KMS documentation explains, KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage. When you use a CMK in a custom key store, the cryptographic operations are performed in the HSMs in the cluster. This feature combines the convenience and widespread integration of AWS KMS with the added control of an AWS CloudHSM cluster in your AWS account. This option allows you to keep your master key in the CloudHSM cluster while letting your SaaS provider use it securely through KMS.

Each custom key store is associated with an AWS CloudHSM cluster in your AWS account. When you connect the custom key store to its cluster, AWS KMS creates the network infrastructure to support the connection. Then it logs in to the AWS CloudHSM cluster using the credentials of a dedicated crypto user in the cluster. All of this is automatically set up, with no need to peer VPCs or connect to your SaaS provider’s VPC.

You create and manage your custom key stores in AWS KMS, and you create and manage your HSM clusters in AWS CloudHSM. When you create CMKs in an AWS KMS custom key store, you view and manage the CMKs in AWS KMS. But you can also view and manage their key material in AWS CloudHSM, just as you would do for other keys in the cluster.
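For reference, here’s a minimal boto3 sketch of that flow. It assumes you’ve already initialized the cluster, created the kmsuser crypto user, and saved the cluster’s trust anchor certificate locally; the cluster ID, file name, and password are placeholders.

# Minimal boto3 sketch of wiring a CloudHSM cluster to KMS as a custom key
# store and creating a CMK whose key material lives in your HSMs.
# The cluster ID, file path, and password are placeholders.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

with open("customerCA.crt") as f:
    trust_anchor = f.read()                  # cluster's trust anchor certificate

store = kms.create_custom_key_store(
    CustomKeyStoreName="saas-signing-keystore",
    CloudHsmClusterId="cluster-1234abcd",
    TrustAnchorCertificate=trust_anchor,
    KeyStorePassword="kmsuser-password",     # password of the kmsuser CU
)
store_id = store["CustomKeyStoreId"]

# Connecting is asynchronous; wait until the store reports CONNECTED before
# creating keys in it.
kms.connect_custom_key_store(CustomKeyStoreId=store_id)

# Once connected, CMKs created with Origin=AWS_CLOUDHSM have their
# non-extractable key material generated inside your cluster.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store_id,
    Description="Master key shared with SaaS provider via KMS",
)
print(key["KeyMetadata"]["Arn"])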

The following diagram shows how some keys can be located in a CloudHSM cluster but be visible through AWS KMS. These are the keys that AWS KMS can use for crypto operations performed through KMS.
 

Figure 2: High level overview of KMS custom key store

While this option eliminates many of the networking components you need to set up for Option 1, it does limit the types of cryptographic operations that your SaaS provider can perform. Since the SaaS provider doesn’t have direct access to CloudHSM, the crypto operations are limited to the encrypt and decrypt operations supported by KMS, and your SaaS provider must use KMS APIs for all of their operations. This is straightforward if they’re using AWS services that already integrate with KMS, but if they perform cryptographic operations within their application before storing the data in AWS storage services, this approach could be challenging, because KMS doesn’t support all of the cryptographic operations that CloudHSM does.
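From the SaaS provider’s side, using your master key then looks like any other KMS call, just made against a key ARN in your account. A minimal sketch follows; the key ARN is a placeholder, and the provider’s principal must be authorized as described later in this post.

# Sketch of the SaaS provider's side once access has been granted: they call
# the standard KMS APIs against your key ARN. The ARN is a placeholder, and
# their IAM principal must be allowed in your key policy.
import boto3

kms = boto3.client("kms", region_name="us-east-1")
key_arn = "arn:aws:kms:us-east-1:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab"

ciphertext = kms.encrypt(KeyId=key_arn, Plaintext=b"customer record")["CiphertextBlob"]

# Decryption of a symmetric CMK doesn't need the key ID; KMS derives it from
# the ciphertext, but the caller still needs kms:Decrypt on your CMK.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]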

Figure 3 illustrates the various components that make up a custom key store and shows how a CloudHSM cluster can connect to KMS to create a customer controlled key store.
 

Figure 3: A cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store

Design Considerations

  • Note that when you use a custom key store, you create a kmsuser CU account in your AWS CloudHSM cluster and provide the kmsuser account credentials to AWS KMS.
  • This option requires your service provider to be able to use KMS as the key management option within their application. Because your SaaS provider cannot communicate directly with the CloudHSM cluster, they must instead use KMS APIs to encrypt the data. If your SaaS provider is encrypting within their application without using KMS, this option may not work for you.
  • When deploying a custom key store, you must not only control access to the CloudHSM cluster, you must also control access to AWS KMS.
  • Because the custom key store and KMS are located in your account, you must grant the SaaS provider permission to use specific KMS keys. You can do this by enabling cross-account access; a minimal sketch follows this list. For more information, please refer to the blog post “Share custom encryption keys more securely between accounts by using AWS Key Management Service.”
  • I recommend dedicating an AWS account to the CloudHSM cluster and custom key store, as this simplifies setup. For more information, please refer to Controlling Access to Your Custom Key Store.
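Here’s what granting that cross-account access might look like in a boto3 sketch. It’s illustrative only: the account IDs and key ID are placeholders, in practice you’d scope the SaaS provider principal down to a specific IAM role, and the provider’s own account still needs matching IAM policies before its principals can use the key.

# Hedged sketch of granting a SaaS provider's account use of a CMK in your
# custom key store via the key policy. Account IDs and the key ID are
# placeholders; scope the principal down to a specific role in practice.
import json
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # retain full control for your own account
            "Sid": "EnableRootPermissions",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # allow the SaaS provider's account to use (not manage) the key
            "Sid": "AllowSaaSProviderUse",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PolicyName="default",            # the only policy name KMS supports
    Policy=json.dumps(key_policy),
)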

Network architecture that is not supported by CloudHSM

Figure 4: Diagram showing the network anti-pattern for deploying CloudHSM

Figure 4 shows various networking technologies, like AWS PrivateLink, Network Address Translation (NAT), and AWS Load Balancers, that cannot be used with CloudHSM when placed between the CloudHSM cluster and the client/application. All of these methods mask the real IPs of the HSM cluster nodes from the client, which breaks the communication between the CloudHSM client and the HSMs.

When the CloudHSM client successfully connects to the HSM cluster, it downloads a list of HSM IP addresses, which it stores and uses for subsequent connections. When one of the HSM nodes is unavailable, the client/application automatically tries the IP addresses of the other HSM nodes it knows about. When HSMs are added to or removed from the cluster, the client is automatically reconfigured. Because the client relies on a current list of IP addresses to transparently handle high availability and failover within the cluster, masking the real IP address of an HSM node breaks communication between the cluster and the client.
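If you want to see the addresses the client depends on, you can list the elastic network interface IPs of the HSMs in your cluster. The following boto3 sketch is illustrative; the cluster ID is a placeholder.

# List the ENI IP addresses of the HSMs in a cluster. These are the real
# addresses the CloudHSM client stores and reconnects to, which is why they
# can't be hidden behind NAT, PrivateLink, or a load balancer.
import boto3

hsm = boto3.client("cloudhsmv2", region_name="us-east-1")

resp = hsm.describe_clusters(Filters={"clusterIds": ["cluster-1234abcd"]})
for cluster in resp["Clusters"]:
    for node in cluster["Hsms"]:
        print(node["HsmId"], node["AvailabilityZone"], node["EniIp"], node["State"])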

You can read more about how the CloudHSM client works in the AWS CloudHSM User Guide.

Summary

In this blog post, I’ve shown you two options for deploying CloudHSM to store your key material while allowing your SaaS provider to access and use those keys on your behalf. This allows you to remain in control of your encryption keys and use a SaaS solution without compromising security.

It’s important to understand the security requirements, network setup, and types of cryptographic operations for each approach, and to choose the option that aligns best with your goals. As a best practice, it’s also important to understand how to secure your CloudHSM and KMS deployment and to apply role-based access control with least privilege. Read more about AWS KMS Best Practices and CloudHSM Best Practices.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Vinod Madabushi

Vinod is an Enterprise Solutions Architect with AWS. He works with customers on building highly available, scalable, and secure applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

AWS achieves OSPAR outsourcing standard for Singapore financial industry

Post Syndicated from Brandon Lim original https://aws.amazon.com/blogs/security/aws-achieves-ospar-outsourcing-standard-for-singapore-financial-industry/

AWS has achieved the Outsourced Service Provider Audit Report (OSPAR) attestation for 66 services in the Asia Pacific (Singapore) Region. The OSPAR assessment is performed by an independent third-party auditor. AWS’s OSPAR demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines).

The ABS Guidelines are intended to assist financial institutions in understanding approaches to due diligence, vendor management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The ABS Guidelines are closely aligned with the Monetary Authority of Singapore’s Outsourcing Guidelines, and they’re one of the standards that the financial services industry in Singapore uses to assess the capability of their outsourced service providers (including cloud service providers).

AWS’s alignment with the ABS Guidelines demonstrates to customers AWS’s commitment to meeting the high expectations for cloud service providers set by the financial services industry in Singapore. Customers can leverage OSPAR to conduct their due diligence, minimizing the effort and costs required for compliance. AWS’s OSPAR report is now available in AWS Artifact.

You can find additional resources about regulatory requirements in the Singapore financial industry at the AWS Compliance Center. If you have questions about AWS’s OSPAR, or if you’d like to inquire about how to use AWS for your material workloads, please contact your AWS account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Brandon Lim

Brandon is the Head of Security Assurance for Financial Services, Asia-Pacific. Brandon leads AWS’s regulatory and security engagement efforts for the Financial Services industry across the Asia Pacific region. He is passionate about working with Financial Services Regulators in the region to drive innovation and cloud adoption for the financial industry.

Introducing the “Preparing for the California Consumer Privacy Act” whitepaper

Post Syndicated from Julia Soscia original https://aws.amazon.com/blogs/security/introducing-the-preparing-for-the-california-consumer-privacy-act-whitepaper/

AWS has published a whitepaper, Preparing for the California Consumer Privacy Act, to provide guidance on designing and updating your cloud architecture to follow the requirements of the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020.

The whitepaper is intended for engineers and solution builders, but it also serves as a guide for qualified security assessors (QSAs) and internal security assessors (ISAs) so that you can better understand the range of AWS products and services that are available for you to use.

The CCPA was enacted into law on June 28, 2018, and grants California consumers certain privacy rights. The CCPA grants consumers the right to request that a business disclose the categories and specific pieces of personal information collected about the consumer, the categories of sources from which that information is collected, the “business purposes” for collecting or selling the information, and the categories of third parties with whom the information is shared. The whitepaper addresses the three main subsections of the CCPA: data collection, data retrieval and deletion, and data awareness.

To read the text of the CCPA, please visit the website for California Legislative Information.

If you have questions or want to learn more, contact your account executive or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Julia Soscia

Julia is a Solutions Architect at Amazon Web Services based out of New York City. Her main focus is to help customers create well-architected environments on the AWS cloud platform. She is an experienced data analyst with a focus in Big Data and Analytics.


Anthony Pasquariello

Anthony is a Solutions Architect at Amazon Web Services. He’s based in New York City. His main focus is providing customers technical guidance and consultation during their cloud journey. Anthony enjoys delighting customers by designing well-architected solutions that drive value and provide growth opportunity for their business.


Justin De Castri

Justin is a Manager of Solutions Architecture at Amazon Web Services based in New York City. His primary focus is helping customers build secure, scalable, and cost-optimized solutions that are aligned with their business objectives.

Spring 2019 PCI DSS report now available, 12 services added in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-pci-dss-report-now-available-12-services-added-in-scope/

At AWS Security, continuously raising the cloud security bar for our customers is central to all that we do. Part of that work is focused on our formal compliance certifications, which enable our customers to use the AWS cloud for highly sensitive and/or regulated workloads. We see our customers constantly developing creative and innovative solutions—and in order for them to continue to do so, we need to increase the availability of services within our certifications. I’m pleased to tell you that in the past year, we’ve increased our Payment Card Industry – Data Security Standard (PCI DSS) certification scope by 79%, from 62 services to 111 services, including 12 newly added services in our latest PCI report (listed below), and we were audited by our third-party auditor, Coalfire.

The PCI DSS report and certification cover the 111 services currently in scope that are used by our customers to architect a secure Cardholder Data Environment (CDE) to protect important workloads. The full list of PCI DSS certified AWS services is available on our Services in Scope by Compliance program page. The 12 newly added services for our Spring 2019 report are:

Our compliance reports, including this latest PCI report, are available on-demand through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, please visit the AWS Compliance Programs page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profile: Rustan Leino, Senior Principal Applied Scientist

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-rustan-leino-senior-principal-applied-scientist/



I recently sat down with Rustan from the Automated Reasoning Group (ARG) at AWS to learn more about the prestigious Computer Aided Verification (CAV) Award that he received, and to understand the work that led to the prize. CAV is a top international conference on formal verification of software and hardware. It brings together experts in this field to discuss groundbreaking research and applications of formal verification in both academia and industry. Rustan received this award as a result of his work developing program-verification technology. Rustan and his team have taken his research and applied it in unique ways to protect AWS core infrastructure on which customers run their most sensitive applications. He shared details about his journey in the formal verification space, the significance of the CAV award, and how he plans to continue scaling formal verification for cloud security at AWS.

Congratulations on your CAV Award! Can you tell us a little bit about the significance of the award and why you received it?

Thanks! I am thrilled to jointly receive this award with Jean-Christophe Filliâtre, who works at the CNRS Research Laboratory in France. The CAV Award recognizes fundamental contributions to program verification, that is, the field of mathematically proving the correctness of software and hardware. Jean-Christophe and I were recognized for the building of intermediate verification languages (IVL), which are a central building block of modern program verifiers.

It’s like this: the world relies on software, and the world relies on that software to function correctly. Software is written by software engineers using some programming language. If the engineers want to check, with mathematical precision, that a piece of software always does what it is intended to do, then they use a program verifier for the programming language at hand. IVLs have accelerated the building of program verifiers for many languages. So, IVLs aid the construction of program verifiers which, in turn, improve software quality that, in turn, makes technology more reliable for all.

What is your role at AWS? How are you applying technologies you’ve been recognized by CAV for at AWS?

I am building and applying proof tools to ensure the correctness and security of various critical components of AWS. This lets us deliver a better and safer experience for our customers. Several tools that we apply are based on IVLs. Among them are the SideTrail verifier for timing-based attacks, the VCC verifier for concurrent systems code, and the verification-aware programming language Dafny, all of which are built on my IVL named Boogie.

What does an automated program verification tool do?

An automated program verifier is a tool that checks if a program behaves as intended. More precisely, the verifier tries to construct a correctness proof that shows that the code meets the given specification. Specifications include things like “data at rest on disk drives is always encrypted,” or “the event-handler always eventually returns control back to the caller,” or “the API method returns a properly formatted buffer encrypted under the current session key.” If the verifier detects a discrepancy (that is, a bug), the developer responds by fixing the code. Sometimes, the verifier can’t determine what the answer is. In this case, the developer can respond by helping the tool with additional information, so-called proof hints, until the tool is able to complete the correctness proof or find another discrepancy.

For example, picture a developer who is writing a program. The program is like a letter written in a word processor, but the letter is written in a language that the computer can understand. For cloud security, say the program manages a set of data keys and takes requests to encrypt data under those keys. The developer writes down the intention that each encryption request must use a different key. This is the specification: the what.

Next, the developer writes code that instructs the computer how to respond to a request. The code separates the keys into two lists. An encryption request takes a key from the “not used” list, encrypts the given data, and then places the key on the “used” list.

To see that the code in this example meets the specification, it is crucial to understand the roles of the two lists. A program verifier might not figure this out by itself and would then indicate the part of the code it can’t verify, much like a spell-checker underlines spelling and grammar mistakes in a letter you write. To help the program verifier along, the developer provides a proof hint that says that the keys on the “not used” list have never been returned. The verifier checks that the proof hint is correct and then, using this hint, is able to construct the proof that the code meets the specification.
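The example above is deliberately language-agnostic, but its shape can be sketched in ordinary Python, with the specification written as a runtime assertion and the proof hint modeled as extra bookkeeping (“ghost” state). This is only an illustration: a verifier such as Dafny proves the assertion can never fail on any execution, rather than checking it on one run the way Python does, and the toy “encryption” below is a stand-in.

# Illustrative sketch of the spec/code/proof-hint example: every encryption
# request must use a key that has never been used before. The assertion plays
# the role of the specification; a program verifier would prove it can never
# fail, instead of checking it at runtime as Python does here.
import secrets

class KeyManager:
    def __init__(self, initial_keys):
        self.not_used = list(initial_keys)   # keys never handed out
        self.used = []                       # keys already consumed
        self._ever_returned = set()          # "ghost" state for the proof hint

    def encrypt(self, data: bytes) -> bytes:
        key = self.not_used.pop()
        # Specification: each request uses a key no previous request has used.
        assert key not in self._ever_returned, "key reuse detected"
        self._ever_returned.add(key)
        self.used.append(key)
        return bytes(d ^ k for d, k in zip(data, key))  # stand-in for real encryption

keys = [secrets.token_bytes(32) for _ in range(3)]
km = KeyManager(keys)
km.encrypt(b"first request")
km.encrypt(b"second request")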

You’ve designed several verification tools in your career. Can you share how you’re using verification tools such as Dafny and Boogie to provide higher assurances for AWS infrastructure?

Dafny is a Java-like programming language that was designed with verification in mind. Whereas most programming languages only allow you to write code, Dafny allows you to write specifications and code at the same time. In addition, Dafny allows you to write proof hints (in fact, you can write entire proofs). Having specifications, code, and proofs in one language sets you up for an integrated verification experience. But this would remain an intellectual exercise without an automated program verifier. The Dafny language was designed alongside its automated program verifier. When you write a Dafny program, the verifier constantly runs in the background and points out mistakes as you go along, very much like the spell-checker underlines I alluded to. Internally, the Dafny verifier is based on the Boogie IVL.

At AWS, we’re currently using Dafny to write and prove a variety of security-critical libraries. For example: encryption libraries. Encryption is vital for keeping customer data safe, so it makes for a great place to focus energy on formal verification.

You spent time in scientific research roles before joining AWS. Has your experience at AWS caused you to see scientific challenges in a different way now?

I began my career in 1989 in the Microsoft Windows LAN Manager team. Based on my experiences helping network computers together, I became convinced that formally proving the correctness of programs was going to go from a “nice to have” to a “must have” in the future, because of the need for more security in a world where computers are so interconnected. At the time, the tools and techniques for proving programs correct were so rudimentary that the only safe harbor for this type of work was in esoteric research laboratories. Thus, that’s where I could be found. But these days, the tools are increasingly scalable and usable, so finally I made the jump back into development where I’m leading efforts to apply and operationalize this approach, and also to continue my research based on the problems that arise as we do so.

One of the challenges we had in the 1990s and 2000s was that few people knew how to use the tools, even if they did exist. Thus, while in research laboratories, an additional focus of mine has been on making tools that are so easy to use that they can be used in university education. Now, with dozens of universities using my tools and after several eye-opening successes with the Dafny language and verifier, I’m scaling these efforts up with development teams in AWS that can hire the students who are trained with Dafny.

I alluded to continuing research. There are still scientific challenges to make specifications more expressive and more concise, to design programming languages more streamlined for verification, and to make tools more automated, faster, and more predictable. But there’s an equally large challenge in influencing the software engineering process. The two are intimately linked, and cannot be teased apart. Only by changing the process can we hope for larger improvements in software engineering. Our application of formal verification at AWS is teaching us a lot about this challenge. We like to think we’re changing the software engineering world.

What are the next big challenges that we need to tackle in cloud security? How will automated reasoning play a role?

There is a lot of important software to verify. This excites me tremendously. As I see it, the only way we can scale is to distribute the verification effort beyond the verification community, and to get usable verification tools into the hands of software engineers. Tooling can help put the concerns of security engineers into everyday development. To meet this challenge, we need to provide appropriate training and we need to make tools as seamless as possible for engineers to use.

I hear your YouTube channel, Verification Corner, is loved by engineering students. What’s the next video you’ll be creating?

[Rustan laughs] Yes, Verification Corner has been a fun way for me to teach about verification and I receive appreciation from people around the world who have learned something from these videos. The episodes tend to focus on learning concepts of program verification. These concepts are important to all software engineers, and Verification Corner shows the concepts in the context of small (and sometimes beautiful) programs. Beyond learning the concepts in isolation, it’s also important to see the concepts in use in larger programs, to help engineers apply the concepts. I want to devote some future Verification Corner episodes to showing verification “in the trenches;” that is, the application of verification in larger, real-life (and sometimes not so beautiful) programs for cloud security, as we’re continuing to do at AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

How to get specific security information about AWS services

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/how-to-get-specific-security-information-about-aws-services/

We’re excited to announce the launch of dedicated security chapters in the AWS documentation for over 40 services. Security is a key component of your decision to use the cloud. These chapters can help your organization get in-depth information about both the built-in and the configurable security of AWS services. This information goes beyond “how-to.” It can help developers—as well as Security, Risk Management, Compliance, and Product teams—assess a service prior to use, determine how to use a service securely, and get updated information as new features are released.

This initiative is a direct result of customer requests for easy-to-find, easy-to-consume security documentation. Our new chapters provide information about the security of the cloud and in the cloud, as outlined in the AWS Shared Responsibility Model, for each service. The chapters align with the Cloud Adoption Framework: Security Perspective and include information about the following topics, as applicable:

  • Data protection
  • Identity and access management
  • Logging and monitoring
  • Compliance validation
  • Resilience
  • Infrastructure security
  • Configuration and vulnerability analysis
  • Security best practices

You can find links to the security chapters on the AWS Security Documentation page, which will be updated as more security chapters become available. Here are links to the new Security chapters we’ve released so far:

You can give us your feedback by selecting the Feedback button in the lower right corner of any documentation page. We look forward to learning how you use this information within your organization and how we can continue to provide useful resources to you.


Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.


Kristen Haught

Kristen is a Security and Compliance Business Development Manager focused on strategic initiatives that enable financial services customers to adopt Amazon Web Services for regulated workloads. She cares about sharing strategies that help customers adopt a culture of innovation, while also strengthening their security posture and minimizing risk in the cloud.

AWS Security Profile: John Backes, Senior Software Development Engineer

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-john-backes-senior-software-development-engineer/



AWS scientists and engineers believe in partnering closely with the academic and research community to drive innovation in a variety of areas of our business, including cloud security. One of the ways they do this is through participating in and sponsoring scientific conferences, where leaders in fields such as automated reasoning, artificial intelligence, and machine learning come together to discuss advancements in their field. The International Conference on Computer Aided Verification (CAV) is one such conference, sponsored and—this year—co-chaired by the AWS Automated Reasoning Group (ARG). CAV is dedicated to the advancement of the theory and practice of computer-aided formal analysis methods for hardware and software systems. This conference will take place next week, July 13-18, 2019, at The New School in New York City.

CAV covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation. CAV also publishes scientific papers from the research community that it considers vital to continue spurring advances in hardware and software verification. One of the papers accepted this year, Reachability Analysis for AWS-based Networks, is authored by John Backes of AWS. I sat down with him to talk about the unique network-based analysis service, Tiros, that’s described in the paper and how it’s helping to set new standards for cloud network security.

Tell me about yourself: what made you decide to become a software engineer in the automated reasoning space?

It sounds cliche, but I have wanted to work with computers since I was a child. I recently was looking through my old school work, and I found an assignment from the second grade where I wrote about “What I wanted to be when I grow up.” I had drawn a crude picture of someone working on a computer and wrote “I want to be a computer programmer.” At university, I took a class on discrete mathematics where I learned about mathematical induction for the first time; it seemed like magic to me. I struggled a bit to develop proofs for the homework assignments and tests in the course. So the idea of writing a program to perform induction for me automatically became very compelling.

I decided to go to graduate school to do research related to proving the correctness of digital circuits. After graduating, I built automated reasoning tools for proving the correctness of software that controls airplanes and helicopters. I joined AWS because I wanted to prove properties about systems that are used by almost everyone.

I understand that your research paper on Tiros was recently published by CAV. What does the research paper cover?

Many influential papers in the space of automated reasoning have been published in CAV over the past three decades. We are publishing a paper at CAV 2019 about three different types of automated reasoning tools we used in the development of Tiros. It discusses different formal reasoning tools and techniques we used, and what tools and techniques were able to scale and which were not. The paper gives readers a blueprint for how they could build their own automated reasoning services on AWS.

What is Tiros? How is it being used in Amazon Inspector?

Tiros answers reachability questions about Amazon Virtual Private Cloud (Amazon VPC) networks. It allows customers to answer questions like “Which of my EC2 instances are reachable from the internet?” and “Is it possible for this Elastic Network Interface (ENI) to send traffic to that ENI?” Amazon Inspector uses Tiros to power its recently launched Network Reachability Rules package. Customers can use this rules package to produce findings about how traffic originating from outside their accounts can reach their Amazon EC2 instances (for example, via an internet gateway, elastic load balancer, or virtual private gateway) and via which ports. Inspector also makes suggestions about how to remediate findings that a customer would like to eliminate. For example, if a customer has an EC2 instance that has port 22 (commonly associated with SSH) open to the internet, Amazon Inspector will suggest what security group needs to be changed to eliminate this finding.

Why are networks difficult to understand? How is Tiros helping to solve that problem?

As customers add more components and open them up to access from more addresses, the number of possible paths that traffic can take through a network increases exponentially. It may be feasible to test all of the paths through a network with a dozen computers, but it would take longer than the heat death of the universe to test all possible paths of a network with hundreds of components (elastic load balancers, NAT gateways, network access control lists, EC2 instances, and so on). Tiros reasons about all possible network paths completely, using “symbolic methods,” where it does not send any packets but instead treats the network as a mathematical object. It does this by gathering information about how a VPC is configured using the describe APIs of relevant services. It takes this information and generates a set of logical constraints. It then proves properties about these sets of constraints using something called an SMT solver [Editor’s note: discussed below].

Tiros relies on the use of automated reasoning techniques and SMT solvers to provide customers with a better understanding of potential network vulnerabilities. Can you explain what these concepts are and how they’re being used in Tiros?

SMT stands for Satisfiability Modulo Theories. SMT solvers are general-purpose software tools that solve a collection of mathematical constraints. The algorithms and heuristics that power these tools have been steadily improving over the past three decades. This means that if you can translate a problem into a form that can be solved by an SMT solver, then you can take advantage of highly optimized algorithms that have been continuously improved over decades. There are tutorials online about how to use SMT solvers to provide solutions to all sorts of interesting constraint problems. Another AWS service called Zelkova uses SMT solvers to answer questions about IAM policies. Tiros uses an SMT solver called MonoSAT to encode reachability constraints about VPC networks. The figure below shows how we encode constraints about what types of packets are allowed to flow from a subnet to an ENI:
 
[Equation from the CAV paper: constraints governing packet flow from a subnet to an ENI]

This diagram is from the CAV paper. It illustrates the constraints that Tiros generates to reason about packets moving from subnets to ENIs. Informally, these constraints say that a packet is allowed to flow from an ENI out to its subnet’s route table if the source IP address of the packet is the same as the source IP address of the ENI. Likewise, a packet can flow from a subnet to an ENI if the destination IP address of the packet is the same as that of the ENI.

Tiros generates all sorts of constraints like this to represent the rules of routing in VPCs. If the SMT solver is able to find a solution to satisfy all of the constraints, then this corresponds to a valid path that a packet can flow through the VPC from some source to some destination. Someone using Tiros can then inspect these paths to determine the source of a potential network misconfiguration.
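To make that concrete, here’s a toy encoding of the subnet-to-ENI rule using the Z3 SMT solver’s Python bindings (the z3-solver package). The addresses are invented, and real Tiros constraints also cover route tables, security groups, NACLs, and more; the sketch only shows the general pattern of declaring variables, adding constraints, and asking the solver whether a satisfying packet exists.

# Toy SMT encoding of one Tiros-style rule with the z3-solver package:
# a packet can flow from the subnet to an ENI only if its destination IP
# equals that ENI's IP. Z3 then either finds a concrete packet (a "path")
# or reports that none exists.
from z3 import BitVec, BitVecVal, Bool, Implies, Solver, sat

dst_ip = BitVec("dst_ip", 32)                   # packet's destination address
eni_ip = BitVecVal(0x0A000105, 32)              # 10.0.1.5, the ENI's address
flow_subnet_to_eni = Bool("flow_subnet_to_eni")

s = Solver()
s.add(Implies(flow_subnet_to_eni, dst_ip == eni_ip))  # the flow constraint
s.add(flow_subnet_to_eni)                              # ask: can such a flow exist?

if s.check() == sat:
    print("reachable with packet:", s.model()[dst_ip])
else:
    print("unreachable")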

Is Tiros helping customers meet their compliance requirements? How?

Many customers need to meet compliance standards such as PCI, FedRAMP, and HIPAA. The requirements in these standards call for evidence of properly configured network controls. For example, Requirement 11 of the PCI DSS gives guidance to regularly perform penetration testing and network vulnerability scans. Customers can use Amazon Inspector to automatically schedule assessments on a regular cadence to generate evidence that they can use to help meet this requirement.
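As a rough illustration of what kicking off such an assessment looks like programmatically, here is a hedged boto3 sketch against the Amazon Inspector (Classic) APIs. The target and template names are arbitrary, the rules package is looked up by its display name rather than hard-coding a regional ARN, and a production setup would trigger the run on a schedule (for example, with a CloudWatch Events rule) rather than ad hoc.

# Sketch: run the Inspector Network Reachability rules package against the
# EC2 instances in this account and region. Names are arbitrary; a real setup
# would trigger start_assessment_run on a schedule.
import boto3

inspector = boto3.client("inspector", region_name="us-east-1")

# Find the Network Reachability rules package ARN for this region by its
# display name (as listed at the time of writing).
arns = inspector.list_rules_packages()["rulesPackageArns"]
packages = inspector.describe_rules_packages(rulesPackageArns=arns)["rulesPackages"]
reachability_arn = next(p["arn"] for p in packages if p["name"] == "Network Reachability")

# Omitting a resource group targets all EC2 instances in the account/region.
target = inspector.create_assessment_target(assessmentTargetName="all-instances")
template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="network-reachability-weekly",
    durationInSeconds=3600,
    rulesPackageArns=[reachability_arn],
)
run = inspector.start_assessment_run(
    assessmentTemplateArn=template["assessmentTemplateArn"],
    assessmentRunName="network-reachability-run",
)
print(run["assessmentRunArn"])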

What do you tell your friends and family about what you do?

I tell them that AWS is responsible for the security of the cloud, and AWS customers are responsible for their security in the cloud. AWS refers to this concept as the Shared Responsibility Model. I explain that I work on a technology called Tiros that automatically produces mathematical proofs to enable AWS customers to build secure applications in the cloud.

What’s next for Tiros? For automated reasoning at AWS?

AWS is constantly adding new networking features. For example, we recently announced support for Direct Connect in Transit Gateway. Tiros is continuously updated to reason about these new services and features so customers who use the service can see new reachability results as they use new VPC features. Right now, we are really focused on how Tiros can be used to help customers with compliance. We plan to integrate Tiros results into other services to help produce evidence of compliance that customers can provide to auditors.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

How to migrate a digital signing workload to AWS CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-migrate-a-digital-signing-workload-to-aws-cloudhsm/

Is your on-premises Hardware Security Module (HSM) at end-of-life? Does continued maintenance of your on-premises hardware take a lot of time and cost a lot of money? Do you want or need all of your workloads to be performed on AWS? By migrating these workloads to AWS CloudHSM, you receive automated backups, low-cost HSMs, managed maintenance, automatic recovery in the event of a hardware failure, integrated fault tolerance, and high availability. One such workload you might consider migrating is secret key material used for digital signing operations.

Enterprise certificate authority (CA) or public key infrastructure (PKI) applications use the private portion of an asymmetric key pair generated and stored in a hardware security module (HSM) to perform signing operations. Examples of such operations include the creation of digital certificates for web-servers or IoT devices, file signatures, or when negotiating a TLS session. Migrating this type of workload to AWS may save you time and money. If your HSM is at end of life and you need an alternative, you can migrate the digital signing workload to AWS CloudHSM in just a few steps.

This post will focus on a workload that allows you to create and use a digital certificate to digitally sign an arbitrary file. I’ll show you how to create a new asymmetric key pair and generate the corresponding certificate signing request (CSR) on AWS CloudHSM. This CSR, once signed by the appropriate issuing CA, allows your new key pair and the associated certificate to be trusted in the same way as the key pairs in your original HSM. You could then move traffic related to signing operations or issuing certificates to your AWS CloudHSM cluster.

Background

Before I walk you through the steps of migrating a certificate signing workload into CloudHSM, I’ll provide a little background information so you’ll know how CloudHSM, PKI, and CAs work together. Every certificate is associated with a key pair made up of a private (secret) key and a public key. The private key associated with a certificate needs to be kept confidential, so it typically resides on a hardware security module (HSM). The public portion of the key pair is not confidential, is included in the certificate, and can be shared with anyone who wants to verify a digital signature made with the corresponding private key. In a PKI, a CA is the trusted entity that issues digital certificates on behalf of end-entities. At the top of the trust hierarchy is a root CA, which is implicitly trusted when it is established because it acts as the root of trust for intermediate CAs and end-entity certificates that may be issued underneath it. Intermediate CAs are trusted because their certificates are signed by the root CA. Intermediate CAs in turn sign end-entity certificates, which are used to authenticate identities of various actors across the data transfer process. A common use case for end-entity certificates is for web servers so that connecting clients can verify the server’s identity. Generally, end-entity certificates are valid for 1-3 years, intermediate CA certificates are valid for 5-10 years, and root CAs are valid for 30 years or more.

Beyond solving for the non-repudiation of objects signed by end-entity certificates to ensure the owner of the private key performed the signing operation, there is still the problem of trusting that the owner of the private key is the identity they claim to be. When evaluating trust in this way, there are generally two options: relying on public CAs or private CAs. Public CAs widely distribute the public keys of their root certificates into popular client trust stores (for example, browsers and operating systems). This allows users to verify that the identity of the end-entity has been attested to by a publicly trusted CA. This helps when the signer and the verifier of the digital asset don’t know each other and haven’t shared cryptographic material with each other in advance to perform future validations. Private CAs are those for which there are no widely distributed copies of their associated public keys. The verifier has to retrieve the public key from the private CA and has to explicitly trust the certificate without any third-party attestation of the signer’s identity. This is appropriate for cases when signers and verifiers are in the same company or know each other. Examples of when to use a private CA are securing virtual private networks, data or file replication between internal servers, remote backups, file-sharing, email, or other personal accounts.
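The sign-and-verify mechanics described above are easy to see in a few lines of code. The following sketch uses the Python cryptography package and generates a software key purely for illustration; in the workflow this post walks through, the private key is created and kept inside AWS CloudHSM and signing happens through the Windows KSP/CNG provider and SignTool instead.

# Illustrative sign/verify round trip with the "cryptography" package.
# In the workflow described below, the private key never leaves the HSM;
# here it's generated in software purely to show the concept.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"contents of the file to be signed"
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# The verifier only needs the public key (normally shipped in a certificate);
# verify() raises InvalidSignature if the document or signature was altered.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")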

Regardless of the certificate trust model you need, AWS CloudHSM can be used to create the initial key pair and CSR for both public and private CA requests. Note that AWS offers some alternatives for certificate management that may simplify your workloads without having to use AWS CloudHSM directly. AWS Certificate Manager (ACM) automatically creates key pairs and issues public or private certificates to identify resources within your organization. For use cases that need capabilities not yet supported by ACM, or in unusual situations in which a single-tenant HSM under your control is required for compliance reasons, you can use AWS CloudHSM directly for key generation and signing operations.

Organizations currently using an on-premises HSM for the creation of asymmetric keys used in digital certificates often use a vendor-proprietary mechanism to replicate key material across multiple HSMs for resiliency. However, this method prevents the key material from ever being transferred to an HSM offered by a different vendor. Consider it “vendor lock-in” by design. So, the private keys corresponding to the certificates you use for signing and authentication are locked inside that HSM. But if they are locked inside, how do you move to AWS CloudHSM? The answer is that you don’t have to rely on these inaccessible keys: you can create a new key pair and use it within AWS CloudHSM to begin issuing end-entity certificates.

Solution overview

I will go over creating a new private key in AWS CloudHSM using the Windows client and using Microsoft certreq to generate a corresponding CSR. You provide this CSR to your private or public CA to receive a signed certificate in return. This certificate and its public key then needs to be propagated to wherever your signatures are verified. At the end of this post, I will show you how to verify your digital signatures using Microsoft SignTool. SignTool is provided by Microsoft to allow Windows users to digitally sign files, verify file signatures, and file timestamps.
 

Figure 1: Procedural diagram

As shown in the diagram above, the steps followed in this post are:

  1. Create a new RSA private key using KSP/CNG through the AWS CloudHSM Windows client.
  2. Using Microsoft certreq, create your CSR.
  3. Provide the CSR to your CA for signing.
  4. Use Microsoft SignTool to sign files in your environment.

Note: You may have to register this new certificate with any partners that do not automatically verify the entire certificate chain. This could be third-party applications, vendors, or outside entities that utilize your certificates to determine trust.

Prerequisites

In this walkthrough, I assume that you already have an AWS CloudHSM cluster set up and initialized with at least one HSM device, and an Amazon Elastic Compute Cloud (EC2) Windows-based instance with the AWS CloudHSM client, PowerShell, and Windows SDK with Microsoft SignTool installed. You must have a crypto user (CU) on the HSM to perform the steps in this post.

Deploying the solution

Step 1: Create a new private key using KSP/CNG using the AWS CloudHSM Windows client

On your Windows server where the AWS CloudHSM Windows client is installed, use a text editor to create a certificate request file named IISCertRequest.inf. For the purpose of this post, I have filled out an example file below.


[Version]
Signature = "$Windows NT$"
[NewRequest]
Subject = "CN=example.com,C=US,ST=Washington,L=Seattle,O=ExampleOrg,OU=WebServer"
HashAlgorithm = SHA256
KeyAlgorithm = RSA
KeyLength = 2048
ProviderName = "Cavium Key Storage Provider"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
MachineKeySet = True    

Step 2: Using Microsoft certreq, create your CSR

On the same server, open PowerShell and, at the PowerShell prompt, create a CSR from the IISCertRequest.inf file by using the Windows certreq command. Here’s an example of the command. Remember to change out the text in red italics with your own file name.


PS C:\>certreq -new <IISCertRequest.inf IISCertRequest.csr> 
	SDK Version: 2.03
CertReq: Request Created

If successful, you’ll see the “Request Created” message above, as well as the new file <IISCertRequest.csr> on your server. This CSR will be provided to your choice of public CA for certificate issuance. This will need to be completed manually via your public CA’s suggested method of certificate request.

Step 3: Provide the CSR to your CA for signing

The CA that had been signing your existing end-entity certificates with keys generated by your original HSM is the one you use to sign the new certificates with keys generated by AWS CloudHSM, as well. There are many CAs to choose from, such as Digicert, Trustwave, GoDaddy, and so on. You will want to follow their steps for submitting your CSR to receive your certificate in return.

Step 4: Use Microsoft SignTool to sign files in your environment

When you receive your signed certificate back from your chosen CA, save a copy locally on your Windows server. Then, move the certificate file to the Personal Certificate Store in Windows so it can be used by other applications, such as Microsoft SignTool. Here’s an example of the command. Be sure to replace the value in <red italics> with your actual certificate name.
PS C:\>certreq -accept <signedCertificate.cer>

Now, the certificate is ready for use, and I’ll show you how to use it to sign a file. First, you have to get the thumbprint of your certificate. To do this, open PowerShell as an Administrator (right-click the app and choose Run as Administrator). Type this command:
PS C:\>Get-ChildItem -path cert:\LocalMachine\My

If successful, you should see an output similar to this. Copy the thumbprint that is returned. You’ll need it when you perform the actual signing operation on a file.


Thumbprint				                Subject
---------------						-----------
49DF7HDJT84723FDKCURLSXYRF9830568CXHSUB2		CN=WINDOWS-CA
VJFU57E6DI9DKMCHAKLDFJA8E73739Q04730QU7A		CN=www.example.com, OU=Certif….

To open the SignTool application, navigate to the app’s directory within PowerShell. By default, this is typically:
C:\Program Files (x86)\Windows Kits\<SDK Version>\bin\<version number>\<CPU architecture>

For example, if you had downloaded the Microsoft Windows SDK 10 version, the application would be stored in:

C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64

When you’ve located the directory, sign your file by running the command below. Remember to replace the values in <red italics> with your own values. The test.exe file in this example can be any valid executable file in your directory.
PS C:\>.\signtool.exe sign /v /fd sha256 /sha1 <thumbprint> /sm /as C:\Users\Administrator\Desktop\<test.exe>

You should see a message like this:


Done Adding Additional Store
Successfully signed C:\Users\Administrator\Desktop\<test.exe>

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

One last optional item you can do is verify the signature on the file using the command below. Again, replace the values in red italics with your own.
PS C:\>.\signtool.exe verify /v /pa C:\Users\Administrators\Desktop\<test.exe>

You’ve now successfully migrated your file signing workload to AWS CloudHSM. If your signing certificate was not issued by a publicly trusted CA but instead by a private CA, make sure to deploy a copy of the root CA certificate and any intermediate certs from the private CA on any systems you want to verify the integrity of your signed file.

Conclusion

In this post, I walked you through creating a new RSA asymmetric key pair and using it to generate a CSR. After supplying the CSR to your chosen CA and receiving a signing certificate in return, I showed you how to use Microsoft SignTool with AWS CloudHSM to sign files in your environment. You can now use AWS CloudHSM to sign code, documents, or other certificates in the same way as with your original HSMs.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Top 10 Security Blog posts in 2019 so far

Post Syndicated from Tom Olsen original https://aws.amazon.com/blogs/security/top-10-security-blog-posts-in-2019-so-far/

Twice a year, we like to share what’s been popular to let you know what everyone’s reading and so you don’t miss something interesting.

One of the top posts so far this year has been the registration announcement for the re:Inforce conference that happened last week. We hope you attended or watched the keynote live stream. Because the conference is now over, we omitted this from the list.

As always, let us know what you want to read about in the Comments section below – we read them all and appreciate the feedback.

The top 10 posts from 2019 based on page views

  1. How to automate SAML federation to multiple AWS accounts from Microsoft Azure Active Directory
  2. How to centralize and automate IAM policy creation in sandbox, development, and test environments
  3. AWS awarded PROTECTED certification in Australia
  4. Setting permissions to enable accounts for upcoming AWS Regions
  5. How to use service control policies to set permission guardrails across accounts in your AWS Organization
  6. Alerting, monitoring, and reporting for PCI-DSS awareness with Amazon Elasticsearch Service and AWS Lambda
  7. Updated whitepaper now available: Aligning to the NIST Cybersecurity Framework in the AWS Cloud
  8. How to visualize Amazon GuardDuty findings: serverless edition
  9. Guidelines for protecting your AWS account while using programmatic access
  10. How to quickly find and update your access keys, password, and MFA setting using the AWS Management Console

If you’re new to AWS and are just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful.

The top 10 posts of all time based on page views

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Securely Connect to Linux Instances Running in a Private Amazon VPC
  5. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  6. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  9. A New and Standardized Way to Manage Credentials in the AWS SDKs
  10. How to Control Access to Your Amazon Elasticsearch Service Domain

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tom Olsen

Tom shares responsibility for the AWS Security Blog with Becca Crockett. If you’ve got feedback about the blog, he wants to hear it in the Comments here or in any post. In his free time, you’ll either find him hanging out with his wife and their frog, in his woodshop, or skateboarding.


Becca Crockett

Becca co-manages the Security Blog with Tom Olsen. She enjoys guiding first-time blog contributors through the writing process, and she likes to interview people. In her free time, she drinks a lot of coffee and reads things. At work, she also drinks a lot of coffee and reads things.

Re:Inforce 2019 wrap-up and session links

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/reinforce-2019-wrap-up-and-session-links/

re:Inforce conference

A big thank you to the attendees of the inaugural AWS re:Inforce conference for two successful days of cloud security learning. As you head home and look toward next steps for your organization (or if you weren’t able to attend and want to know what all the fuss was about), check out some of the session videos. You can watch the keynote to hear from our AWS CISO Steve Schmidt, view the full list of recorded conference sessions on the AWS YouTube channel, or check out popular sessions by track below.

Re:Inforce leadership sessions

Listen to cloud security leaders talk about key concepts from each track:

Popular sessions by track

View sessions that you might have missed or want to re-watch. (“Popular” determined by number of video views at the time this post was published.)

Security Deep Dive

View the full list of Security Deep Dive break-out sessions.

The Foundation

View the full list of The Foundation break-out sessions.

Governance, Risk & Compliance

View the full list of Governance, Risk & Compliance break-out sessions.

Security Pioneers

View the full list of Security Pioneers break-out sessions.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

How to set up an outbound VPC proxy with domain whitelisting and content filtering

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/how-to-set-up-an-outbound-vpc-proxy-with-domain-whitelisting-and-content-filtering/

Controlling outbound communication from your Amazon Virtual Private Cloud (Amazon VPC) to the internet is an important part of your overall preventive security controls. By limiting outbound traffic to certain trusted domains (called “whitelisting”) you help prevent instances from downloading malware, communicating with bot networks, or attacking internet hosts. It’s not practical to prevent all outbound web traffic, though. Often, you want to allow access to certain well-known domains (for example, to communicate with partners, to download software updates, or to communicate with AWS API endpoints). In this post, I’ll show you how to limit outbound web connections from your VPC to the internet, using a web proxy with custom domain whitelists or DNS content filtering services. The solution is scalable, highly available, and deploys in a fully automated way.

Solution benefits and deliverables

This solution is based on the open source HTTP proxy Squid. The proxy can be used for all workloads running in the VPC, like Amazon Elastic Compute Cloud (EC2) and AWS Fargate. The solution provides you with the following benefits:

  • An outbound proxy that permits connections to whitelisted domains that you define, while presenting customizable error messages when connections are attempted to unapproved domains.
  • Optional domain content filtering based on DNS, delivered by external services like OpenDNS, Quad9, CleanBrowsing, Yandex.DNS or others. For this option, you do need to be a customer of these external services.
  • Transparent handling of encrypted traffic: the domain information is extracted from the Server Name Indication (SNI) extension in TLS, so encryption in transit is preserved and end-to-end encryption is maintained.
  • An auto-scaling group with Elastic Load Balancing (ELB) Network Load Balancers that spread over several of your existing subnets (and Availability Zones) and scale based on CPU load.
  • One Elastic IP address per proxy instance for internet communication. Sometimes the websites that you’re communicating with want to know your IP address so they can accept traffic from you. Assigning Elastic IP addresses to the proxies lets you know which IP addresses your web connections will come from.
  • Proxy access logs delivered to CloudWatch Logs.
  • Proxy metrics, available in CloudWatch Metrics.
  • Automated solution deployment via AWS CloudFormation.

Out of scope

  • This solution does not serve applications that aren’t proxy capable. Deep packet inspection is also out of scope.
  • TLS encryption is kept end-to-end, and only the SNI extension is examined. For unencrypted traffic (HTTP), only the host header is analyzed.
  • DNS content filtering must be delivered by an external provider; this solution only integrates with it.

Services used, cost, and performance

The solution uses the following services: Amazon EC2 (for the Squid proxy instances), Elastic Load Balancing (Network Load Balancer), AWS Secrets Manager, Amazon CloudWatch (Logs and Metrics), and AWS CloudFormation.

In total, the solution costs a few dollars per day depending on the region and the bandwidth usage. If you are using a DNS filtering service, you may also be charged by the service provider.

Note: An existing VPC and internet gateway are prerequisites to this solution, and aren’t included in the pricing calculations.

Solution architecture

 

Figure 1: Solution overview

As shown in Figure 1:

  1. The solution is deployed automatically via an AWS CloudFormation template.
  2. CloudWatch Logs stores the Squid access log so that you can search and analyze it.
  3. The list of allowed (whitelisted) domains is stored in AWS Secrets Manager. The Amazon EC2 instance retrieves the domain list every 5 minutes via a cron job and updates the proxy configuration if the list has changed (see the sketch after this list). The values in Secrets Manager are provisioned by CloudFormation and can be read only by the proxy EC2 instances.
  4. The client running on the EC2 instance must have proxy settings pointing toward the Network Load Balancer. The load balancer will forward the request to the fleet of proxies in the target group.
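
As an illustration of step 3, here is a minimal sketch of the kind of refresh job the proxy instances could run. The secret name, file path, and reload command are assumptions for illustration only; the actual names and script are generated by the CloudFormation template.

#!/bin/bash
# Hypothetical sketch of the whitelist refresh cron job.
# SECRET_ID and WHITELIST_FILE are illustrative; the real values are
# provisioned by the CloudFormation stack.
SECRET_ID="proxy/whitelisted-domains"
WHITELIST_FILE="/etc/squid/whitelist.txt"

# Fetch the comma-separated domain list and convert it to one domain per line.
NEW_LIST=$(aws secretsmanager get-secret-value \
  --secret-id "$SECRET_ID" \
  --query SecretString \
  --output text | tr ',' '\n')

# Update the proxy configuration only if the list has changed.
if ! diff <(echo "$NEW_LIST") "$WHITELIST_FILE" >/dev/null 2>&1; then
  echo "$NEW_LIST" > "$WHITELIST_FILE"
  squid -k reconfigure    # the actual solution restarts the Squid processes
fi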

Prerequisites

  1. You need an already deployed VPC, with public and private subnets spreading over several Availability Zones (AZs). You can find a description of how to set up your VPC environment at Default VPC Setup.
  2. You must have an internet gateway, with routing set up so that only traffic from a public subnet can reach the internet.

You don’t need to have a NAT (network address translation) gateway deployed, since this function is provided by the outbound proxy.

Integration with content filtering DNS services

If you require content filtering from an external company, like OpenDNS or Yandex.DNS, you must register and become a customer of that service. Many have free services, in addition to paid plans if you need advanced statistics and custom categories. This is your responsibility as the customer. (Learn more about the shared responsibility between AWS and the customer.)

Your DNS service provider will assign you a list of DNS IP addresses. You’ll need to enter the IP addresses when you provision (see Installation below).

If the DNS provider requires it, you may give them the source IPs of the proxies. There are four reserved IPs that you can find in the stack output (see Output parameters below).

Installation (one-time setup)

    1. Select the Launch Stack button to launch the CloudFormation template:
      The "Launch Stack" button

      Note: You must sign in to your AWS account in order to launch the stack in the required region. The stack content can also be downloaded here.

    2. Provide the following proxy parameters, as shown in Figure 2:
      • Allowed domains: Enter your whitelisted domains. Use a leading dot (“.”) to indicate subdomains.
      • Custom DNS servers (optional): List any DNS servers that will be used by the proxy. Leave the default value to use the default Amazon DNS server.
      • Proxy Port: Enter the listener port of the proxy.
      • Instance Type: Enter the EC2 instance type that you want to use for the proxies. Instance type will affect vertical scaling capabilities and solution cost. For more information, see Amazon EC2 Instance Types.
      • AMI ID to be used: This field is prepopulated with the Amazon Machine Image (AMI) ID found in AWS Systems Manager Parameter Store. By default, it will point toward the latest Amazon Linux 2 image. You do not need to adjust this value.
      • SSH Key name (optional): Enter the name of the SSH key for your proxy EC2 instances. This is relevant only for debugging, or if you need to log in on the proxy servers. Consider using AWS Systems Manager Session Manager instead of SSH.
    3. Next, provide the following network parameters, as shown in Figure 2:
      • VPC ID: The VPC where the solution will be deployed.
      • Public subnets: The subnets where the proxies will be deployed. Select between 2 and 3 subnets.
      • Private subnets: The subnets where the Network Load Balancer will be deployed. Select between 2 and 3 subnets.
      • Allowed client CIDR: The value you enter here will be added to the proxy security group. By default, the private IP range 172.31.0.0/16 is allowed. The allowed block size is between a /32 netmask and an /8 netmask. This prevents you from using an open IP range like 0.0.0.0/0. If you were to set an open IP range, your proxies would accept traffic from anywhere on the internet, which is a bad practice.

 

Figure 2: Launching the CloudFormation template

 

    4. When you’ve entered all your proxy and network parameters, select Next. On the following wizard screens, you can keep the default values and select Next and Create Stack.

 

Find the output parameters

After the stack status has changed to CREATE_COMPLETE, you’ll need to note down the output parameters to configure your clients. Look for the following parameters in the Outputs tab of the stack:

  • The domain name of the proxy that should be configured on the client.
  • The port of the proxy that should be configured on the client.
  • The four Elastic IP addresses for the proxy instances. These are used for outbound connections to the internet.
  • The CloudWatch Log Group for the access logs.
  • The Security Group that is attached to the proxies.
  • The Linux command to set the proxy. You can copy and paste it into your shell.

Figure 3: Stack output parameters

Use the proxy

Proxy setting parameters are specific to each application. Most Linux applications use the environment variables http_proxy and https_proxy.

    1. Log in on the Linux EC2 instance that’s allowed to use the proxy.
    2. To set the shell parameter temporarily (only for the current shell session), execute the following export commands:
      
          $ export http_proxy=http://<Proxy-DOMAIN>:<Proxy-Port>
          $ export https_proxy=$http_proxy
          

      1. Replace <Proxy-DOMAIN> with the domain of the load balancer, which you can find in the stack output parameter.
      2. Replace <Proxy-Port> with the port of your proxy, which is also listed in the stack output parameter.

 

  3. Next, you can use cURL (for example) to test the connection. Replace <URL> with one of your whitelisted URLs:
    
            $ curl -k <URL>
            <!DOCTYPE html>
            …
        

  4. You can add the proxy parameters permanently to interactive and non-interactive shells. If you do this, you won’t need to set them again after reloading. Execute the following commands in your application shell:
    
            $ echo 'export http_proxy=http://<Proxy-DOMAIN>:<Proxy-Port>' >> ~/.bashrc
            $ echo 'export https_proxy=$http_proxy' >> ~/.bashrc
            
            $ echo 'export http_proxy=http://<Proxy-DOMAIN>:<Proxy-Port>' >> ~/.bash_profile
            $ echo 'export https_proxy=$http_proxy' >> ~/.bash_profile
        

    1. Replace <Proxy-DOMAIN> with the domain of the load balancer.
    2. Replace <Proxy-Port> with the port of your proxy.

Customize the access denied page

An error page will display when a user’s access is blocked or if there’s an internal error. You can adjust the look and feel of this page (HTML or styles) using the Squid error_directory configuration tag.

Use the proxy access log

The proxy access log is an important tool for troubleshooting. It contains the client IP address, the destination domain, the port, and errors with timestamps. The access logs from Squid are uploaded to CloudWatch. You can find them from the CloudWatch console under Log Groups, with the prefix Proxy, as shown in the figure below.

Figure 4: CloudWatch Log Group with access logs

You can use CloudWatch Logs Insights to query and visualize the access logs. See the following figure for an example of denied connections visualized on a timeline:

Figure 5: Access logs analysis with CloudWatch Logs Insights
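
If you prefer to run a similar query from the command line instead of the console, here is a hedged sketch that uses CloudWatch Logs Insights through the AWS CLI. The log group name and the TCP_DENIED filter are assumptions; substitute the actual log group (it starts with the prefix Proxy) and match the Squid access log format in your environment.

# Count denied connections in 5-minute buckets over the last hour.
# The log group name and filter string are illustrative assumptions.
# Note: the date syntax below is GNU date (as on Amazon Linux).
QUERY_ID=$(aws logs start-query \
  --log-group-name "Proxy-access-log" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'filter @message like /TCP_DENIED/ | stats count() as denied by bin(5m)' \
  --query queryId --output text)

# Fetch the results once the query has finished running.
aws logs get-query-results --query-id "$QUERY_ID"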

Monitor your metrics with CloudWatch

The main proxy metrics are uploaded every five minutes to CloudWatch Metrics in the proxy namespace:

  • client_http.errors /sec – errors in processing client requests per second
  • client_http.hits /sec – cache hits per second
  • client_http.kbytes_in /sec – client uploaded data per second
  • client_http.kbytes_out /sec – client downloaded data per second
  • client_http.requests /sec – number of requests per second
  • server.all.errors /sec – proxy server errors per second
  • server.all.kbytes_in /sec – proxy server uploaded data per second
  • server.all.kbytes_out /sec – proxy downloaded data per second
  • server.all.requests /sec – all requests sent by proxy server per second

In the figure below, you can see an example of metrics. For more information on metric use, see the Squid project information.

Figure 6: Example of CloudWatch metrics

Manage the proxy configuration

From time to time, you may want to add or remove domains from the whitelist. To change your whitelisted domains, you must update the input values in the CloudFormation stack. This will cause the values stored in Secrets Manager to update as well. Every five minutes, the proxies will pull the list from Secrets Manager and update as needed. This means it can take up to five minutes for your change to propagate. The change will be propagated to all instances without terminating or redeploying them.
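
For example, a whitelist change could be rolled out from the CLI roughly like this. This is only a sketch: the stack name and parameter keys (such as AllowedDomains) are assumptions and must match the names used in the template, and every other template parameter should be passed with UsePreviousValue=true.

# Update only the whitelist, reusing the existing template.
# Stack name and parameter keys are assumptions.
aws cloudformation update-stack \
  --stack-name outbound-proxy \
  --use-previous-template \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=AllowedDomains,ParameterValue=".example.com,.amazonaws.com" \
    ParameterKey=ProxyPort,UsePreviousValue=true \
    ParameterKey=InstanceType,UsePreviousValue=true
# ...repeat UsePreviousValue=true for any remaining parameters in your template.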

Note that when the whitelist is updated, the Squid proxy processes are restarted, which will interrupt ALL connections passing through them at that time. This can be disruptive, so be careful about when you choose to adjust the whitelist.

If you want to change other CloudFormation parameters, like DNS or Security Group settings, you can again update the CloudFormation stack with new values. The CloudFormation stack will launch a new instance and terminate legacy instances (a rolling update).

You can change the proxy Squid configuration by editing the CloudFormation template (section AWS::CloudFormation::Init) and updating the stack. However, you should not do this unless you have advanced AWS and Squid experience.

Update the instances

To update your AMI, you can update the stack. If the AMI has been updated with a newer version, then a rolling update will redeploy the EC2 instances and Squid software. This automates the process of patching managed instances with both security-related and other updates. If the AMI has not changed, no update will be performed.

Alternatively, you can terminate the instance, and the Auto Scaling group will launch a new instance with the latest updates for Squid and the OS, starting from scratch. This approach may lead to a short service interruption for the clients served by that instance while the load balancer switches to an active instance.

Troubleshooting

I’ve summarized a few common problems and solutions below.

Problem: I receive a timeout at the client application.
Solutions:
  • Check that you’ve configured the client application to use the proxy. (See Use the proxy, above.)
  • Check that the Security Group allows access from the client instance.
  • Verify that your NACL and routing table allow communication to and from the Network Load Balancer.

Problem: I receive an error page that access was blocked by the administrator.
Solution: Check the stack input parameter for allowed domains. The domains must be comma separated, and included subdomains must start with a dot. For example:
  • To include www.amazon.com, specify www.amazon.com
  • To include all subdomains of amazon.com as part of a list, specify .amazon.com

Problem: I received a 500 error page from the proxy.
Solutions:
  • Make sure that the proxy EC2 instance has internet access. The public subnets must have an internet gateway attached and set as the default route.
  • If you use an external DNS service, check the DNS input parameter in the CloudFormation stack, and make sure the DNS provider has the correct proxy IPs (if you were required to provide them).

Problem: The webpage doesn’t look as expected; there are fragments or styles missing.
Solution: Many pages download content from multiple domains. You need to whitelist all of these domains. Use the access logs in CloudWatch Logs to determine which domains are blocked, then update the stack.

Problem: On the proxy error page, I receive “unknown certificate issuer.”
Solution: During the setup, a self-signed certificate for the Squid error page is generated. If you need to add your own certificate, you can adapt the CloudFormation template. This requires moderate knowledge of Unix/Linux and AWS CloudFormation.

Conclusion

In this blog post, I showed you how to configure an outbound proxy for controlling internet communication from a VPC. If you need Squid support, you can find various offerings on the Squid Support page. The AWS forums provide support for Amazon Elastic Compute Cloud (EC2). When you need AWS experts to help you plan, build, or optimize your infrastructure, consider engaging AWS Professional Services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Vesselin Tzvetkov

Vesselin is senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and a M.S. in electrical engineering from Bochum University in Germany.

AWS Security Hub Now Generally Available

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/aws-security-hub-now-generally-available/

I’m a developer, or at least that’s what I tell myself while coming to terms with being a manager. I’m definitely not an infosec expert. I’ve been paged more than once in my career because something I wrote or configured caused a security concern. When systems enable frequent deploys and remove gatekeepers for experimentation, sometimes a non-compliant resource is going to sneak by. That’s why I love tools like AWS Security Hub, a service that enables automated compliance checks and aggregated insights from a variety of services. With guardrails like these in place to make sure things stay on track, I can experiment more confidently. And with a single place to view compliance findings from multiple systems, infosec feels better about letting me self-serve.

With cloud computing, we have a shared responsibility model when it comes to compliance and security. AWS handles the security of the cloud: everything from the security of our data centers up to the virtualization layer and host operating system. Customers handle security in the cloud: the guest operating system, configuration of systems, and secure software development practices.

Today, AWS Security Hub is out of preview and available for general use to help you understand the state of your security in the cloud. It works across AWS accounts and integrates with many AWS services and third-party products. You can also use the Security Hub API to create your own integrations.

Getting Started

When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. Compliance standards determine these compliance checks and rules. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year.
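
If you prefer to work from the command line, here is a minimal sketch of enabling Security Hub with the AWS CLI; run it in each region where you want the service, and note that your credentials need the appropriate Security Hub permissions.

# Enable Security Hub in the current region; compliance checks start automatically.
aws securityhub enable-security-hub

# List the compliance standards that are currently enabled.
aws securityhub get-enabled-standards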

The results of these compliance checks are called findings. Each finding tells you the severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days that should be revoked.

Findings can be grouped into insights using aggregation statements and filters.

Integrations

In addition to the compliance standards findings, AWS Security Hub also aggregates and normalizes data from a variety of services. It is a central resource for findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and from 30 AWS partner security solutions.

AWS Security Hub also supports importing findings from custom or proprietary systems. Findings must be formatted as AWS Security Finding Format JSON objects. Here’s an example of an object I created that meets the minimum requirements for the format. To make it work for your account, switch out the AwsAccountId and the ProductArn. To get your ProductArn for custom findings, replace REGION and ACCOUNT_ID in the following string: arn:aws:securityhub:REGION:ACCOUNT_ID:product/ACCOUNT_ID/default.

{
    "Findings": [{
        "AwsAccountId": "12345678912",
        "CreatedAt": "2019-06-13T22:22:58Z",
        "Description": "This is a custom finding from the API",
        "GeneratorId": "api-test",
        "Id": "us-east-1/12345678912/98aebb2207407c87f51e89943f12b1ef",
        "ProductArn": "arn:aws:securityhub:us-east-1:12345678912:product/12345678912/default",
        "Resources": [{
            "Type": "Other",
            "Id": "i-decafbad"
        }],
        "SchemaVersion": "2018-10-08",
        "Severity": {
            "Product": 2.5,
            "Normalized": 11
        },
        "Title": "Security Finding from Custom Software",
        "Types": [
            "Software and Configuration Checks/Vulnerabilities/CVE"
        ],
        "UpdatedAt": "2019-06-13T22:22:58Z"
    }]
}

Then I wrote a quick node.js script that I named importFindings.js to read this JSON file and send it off to AWS Security Hub via the AWS JavaScript SDK.

const fs    = require('fs');        // For file system interactions
const util  = require('util');      // To wrap fs API with promises
const AWS   = require('aws-sdk');   // Load the AWS SDK

AWS.config.update({region: 'us-east-1'});

// Create our Security Hub client
const sh = new AWS.SecurityHub();

// Wrap readFile so it returns a promise and can be awaited 
const readFile = util.promisify(fs.readFile);

async function getFindings(path) {
    try {
        // wait for the file to be read...
        let fileData = await readFile(path);

        // ...then parse it as JSON and return it
        return JSON.parse(fileData);
    }
    catch (error) {
        console.error(error);
    }
}

async function importFindings() {
    // load the findings from our file
    const findings = await getFindings('./findings.json');

    try {
        // call the AWS Security Hub BatchImportFindings endpoint
        response = await sh.batchImportFindings(findings).promise();
        console.log(response);
    }
    catch (error) {
        console.error(error);
    }
}

// Engage!
importFindings();

A quick run of node importFindings.js results in { FailedCount: 0, SuccessCount: 1, FailedFindings: [] }, and now I can see my custom finding in the Security Hub console.
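
If you’d rather not write a script at all, the same file can likely be imported with a single CLI call, since the JSON above already matches the shape of the BatchImportFindings request (a sketch, assuming your credentials can call that API):

# Import the findings file directly; findings.json contains the top-level
# "Findings" key expected by the BatchImportFindings API.
aws securityhub batch-import-findings --cli-input-json file://findings.json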

Custom Actions

AWS Security Hub can integrate with response and remediation workflows through the use of custom actions. With custom actions, a batch of selected findings is used to generate CloudWatch events. With CloudWatch Rules, these events can trigger other actions such as sending notifications via a chat system or paging tool, or sending events to a visualization service.

First, we open Settings from the AWS Security Hub console, and select Custom Actions. Add a custom action and note its ARN.

Then we create a CloudWatch Rule using the custom action we created as a resource in the event pattern, like this:

{
  "source": [
    "aws.securityhub"
  ],
  "detail-type": [
    "Security Hub Findings - Custom Action"
  ],
  "resources": [
    "arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"
  ]
}

Our CloudWatch Rule can have many different kinds of targets, such as Amazon Simple Notification Service (SNS) Topics, Amazon Simple Queue Service (SQS) Queues, and AWS Lambda functions. Once our action and rule are in place, we can select findings, and then choose our action from the Actions dropdown list. This will send the selected findings to Amazon CloudWatch Events. Those events will match our rule, and the event targets will be invoked.
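
As a sketch, the same rule and a target can also be created from the CLI. The rule name, custom action ARN, and SNS topic ARN below are placeholders:

# Create a rule that matches events from the Security Hub custom action.
aws events put-rule \
  --name SecurityHubCustomAction-DoThing \
  --event-pattern '{
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": ["arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"]
  }'

# Send matching events to an SNS topic (topic ARN is a placeholder).
aws events put-targets \
  --rule SecurityHubCustomAction-DoThing \
  --targets 'Id=1,Arn=arn:aws:sns:us-west-2:123456789012:security-hub-notifications'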

Important Notes

  • AWS Config must be enabled for Security Hub compliance checks to run.
  • AWS Security Hub is available in 15 regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
  • AWS Security Hub does not transfer data outside of the regions where it was generated. Data is not consolidated across multiple regions.

AWS Security Hub is already the type of service that I’ll enable on the majority of the AWS accounts I operate. As more compliance standards become available this year, I expect it will become a standard tool in many toolboxes. A 30-day free trial is available so you can try it out and get an estimate of what your costs would be. As always, we want to hear your feedback and understand how you’re using AWS Security Hub. Stay in touch, and happy building!

— Brandon

AWS Security Profiles: Mark Ryland, Director, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-mark-ryland-director-office-of-the-ciso/


Mark Ryland at the AWS Summit Berlin keynote

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS and what’s your current role?

I’ve been at AWS for almost eight years. For the first six and a half years, I built the Solutions Architecture and Professional Services teams for AWS’s worldwide public sector sales organization—from five people when I joined, to many hundreds some years later. It was an amazing ride to build such a great team of cloud technology experts.

About a year and a half ago, I transitioned to the AWS Security team. On the Security team, I run a much smaller team called the Office of the CISO. We help manage interaction between our customers and the leadership team for AWS Security. In addition, we have a number of internal projects that we work on to improve interaction and information flow between the Security team and various AWS service teams, and between the AWS security team and the Amazon.com security team.

Why is your team called “the Office of the CISO”?

A lot of people want to talk to Steve Schmidt, our Chief Information Security Officer (CISO) at AWS. If you want to talk to him, it’s very likely that you’re going to talk to me or to my team as a part of that process. There’s only one of him, and there are a few of us. We help Steve scale a bit, and help more customers have direct interaction with senior leadership in AWS Security.

We also provide guidance and leadership to the broader AWS security community, especially to the customer-facing side of AWS. For example, we’re leaders of the Security and Compliance Technical Field Community (TFC) for AWS. The Security TFC is made up of subject matter experts in solutions architecture, professional services, technical account management, and other technical disciplines. We help them to understand and communicate effectively with customers about important security and compliance topics, and to gather customer requirements and funnel them to the right places.

What’s your favorite part of your job?

I love communicating about technology — first diving deep to figure it out for myself, and then explaining it to others. And I love interacting with our customers, both to explain our platform and what we do, and, equally important, to get their feedback. We constantly get great input and great ideas from customers, and we try to leverage that feedback into continuous improvement of our products and services.

What does cloud security mean to you, personally? Why is it a topic you’re passionate about?

I remember being at a private conference on cybersecurity. It was government-oriented, and organized by a Washington DC-based think-tank. A number of senior government officials were talking about challenges in cybersecurity. In the middle of an intense discussion about the big challenges facing the industry, a former, very senior official in the U.S. Government intelligence community said (using a golfing colloquialism), “The great thing about the cloud is that it’s a Mulligan; it’s a do-over. When we make the cloud transition, we can finally do the right things when it comes to cybersecurity.”

There’s a lot of truth to that, just in terms of general IT modernization. The cloud simply makes security easier. Not “easy” — there are still challenges. But you’re much more equipped to do the right thing—to build automation, to build tooling, and to take full advantage of the base protections that are built into the platform. With a little bit of care, what you do is going to be better than what you did before. The responsibility that remains for you as the customer is still significant, but because everything is software-defined, you get far more visibility and control. Because everything is API-driven, you can automate just about everything.

Challenges remain; I want to reiterate that it’s never easy to do security right. But it’s so much easier when you don’t have to run the entire stack from the concrete floor up to the application, and when you can rely on the inherent visibility and control provided by a software-defined environment. In short, cloud migration represents the industry’s best opportunity for making big improvements in IT security. I love being in the center of that change for the better, and helping to make it real.

What initiatives are you currently working on that you’re particularly excited about?

Two things. First, we’re laser-focused on improving our AWS Identity and Access Management capabilities. They’re already very sophisticated and very powerful, but they are somewhat uneven across our services, and not as easy to use as they should be. I’m on the periphery of that work, but I’m actively involved in scoping out improvements. One recent example is a big advance in the capabilities of Service Control Policies (SCPs) within AWS Organizations. These now allow extremely fine-grained controls — as expressive as IAM policies — that can easily be applied globally across dozens or hundreds of AWS accounts. For example, you can express a global policy like “nobody but [some very special role] can attach an internet gateway to my VPCs, full stop.”

I’m also a networking geek, and another area I’ve been actively working on is improvements to our built-in networking security features. People have been asking for greater visibility and control over their VPCs. We have a lot of great features like security groups and network ACLs, but there’s a lot more we can and will do. For example, customers are looking for more visibility into what’s going on inside their VPCs beyond our existing VPC Flow Logs feature. We have an exciting announcement at our re:Inforce conference this week about some new capabilities in this area!

You’ll be speaking at re:Inforce about the security benefits of running EC2 instances on the AWS Nitro architecture. At a high level, what’s so innovative about Nitro, and how does it enable better security?

The EC2 Nitro architecture is a fundamental re-imagining of the best way to build a secure virtualization platform. I don’t think there’s anything else like it in the industry. We’ve taken a lot of the complicated software that’s needed for virtualization, which normally runs in a privileged copy of an operating system — the “domain 0,” or “dom0” to use Xen terminology, but present in all modern hypervisors — and we’ve completely eliminated it. All those features are now implemented by custom software and hardware in a set of powerful co-processor computers inside the same physical box as the main Intel processor system board. The Nitro computers present virtual devices to the mainboard as if they were actual hardware devices. You might say the main system board — despite its powerful Intel Xeon processor and big chunks of memory — is really the “co-processor” in these systems; I call it the “customer workload co-processor!” It’s the main Nitro controller and not the system mainboard that’s fundamentally in charge of the overall system, providing a root of trust and a secure layer between the mainboard and the outside world.

There are bunch of great security benefits that flow from this redesign. For example, with the elimination of the dom0 trusted operating system running on the mainboard, we’ve completely eliminated interactive access to these hosts. There’s no SSH, no RDP, no interactive software mechanisms that allow direct human access. I could go on and on, but I’ll stop there — you’ll have to come to my talk on Wednesday! And of course, we’ll post the video online afterward.

You’re also involved with a session to encourage customers to set up “state-of-the-art encryption.” In your view, what are some of the key elements of a “state-of-the-art” approach to encryption?

I came up with the original idea for the session, but was able to hand it off to an even better-suited speaker, so now I’ll just be there to enjoy it. Colm MacCarthaigh will be presenting. Colm is a senior principal engineer in the EC2 networking team, but he’s also the genius behind a number of important innovations in security and networking across AWS. For example, he did some of the original design work on the “shuffle sharding” techniques we use broadly, across AWS, to improve availability and resiliency for multi-tenanted services. Later, he came up with the idea, and, in a few weeks of intense coding, wrote the first version of S2N, our open source TLS implementation that provides far better security than the implementations typically used in the industry. He was also a significant contributor to the TLS 1.3 specification. I encourage everyone to follow him on Twitter, where you’ll learn all kinds of interesting things about cryptography, networking, and the like.

Now, to finally answer your question: Colm will be talking about how AWS does more and more encryption for you automatically, and how multiple layers of encryption can help address different kinds of threats. For example, without actually breaking TLS encryption, researchers have shown that they can figure out the content of an encrypted voice-over-IP (VOIP) call simply by analyzing the timing and size of the packets. So, wrapping TLS sessions inside of other encryption layers is a really good idea. Colm will talk about the importance of layered encryption, plus a bunch of other great topics: how AWS makes it easy to use encryption; where we do it automatically even if you don’t ask for it; how we’re inventing new, more secure means for key distribution; and fun stuff like that. It will be a blast!

What changes do you hope we’ll see across the global security and compliance landscape over the next 5 years?

I think that with innovations like the Nitro architecture for EC2, and with our commitment to continually improving and strengthening other security features and enabling greater automation around things like identity management and anomaly detection, we will come to a point where people will realize that the cloud, in almost every case, is more secure than an on-premises environment. I don’t mean to say that you couldn’t go outside of the cloud and build something secure (as long as you are willing to spend a ton of money). But as a general matter, cloud will become the default option for secure processing of very sensitive data.

We’re not quite there yet, in terms of widespread perception and understanding. There are still quite a few people who haven’t dug very far below the surface of “what is cloud.” There is still a common, visceral reaction to the idea of “public cloud” as being risky. People object to ideas like multitenancy, where you’re sharing physical infrastructure with other customers, as if it’s somehow inherently risky. There are risks, but they are so well mitigated, and we have so much experience controlling those risks, that they’re far outweighed by the big security benefits. Very consistently, as customers become more educated and experienced with the cloud, they tell us that they feel more secure in their cloud infrastructure than they did in their on-premises world. Still, that’s not currently the first reaction. People still start by thinking of the cloud as risky, and it takes time to educate them and change that perspective. So there’s still some important work ahead of us.

What’s your favorite way to relax?

It’s funny, now that I’m getting old, I’m reverting to some of the pursuits and hobbies of my youth. When I was a teenager I was passionate about cycling. I raced bicycles extensively at the regional and national level on both road and track from ages 14 to 18. A few minutes of my claim to 15 minutes of Warholian fame was used up by being in a two-man breakaway with 17-year-old Greg LeMond in a road race in Arizona, although he beat me and everyone else resoundingly in the end! I’ve ridden road bikes and done a bit of mountain biking over the years, but I’m getting back into it now and enjoying it immensely. Of course, there’s far more technology to play with these days, and I can’t resist. I splurged on an expensive pair of pedals with power meters built in, and so now I get detailed data from every ride that I can analyze to prove mathematically that I’m not in very good shape.

One of my other hobbies back in my teenage years was playing guitar — mostly folk-rock acoustic, but also electric and bass guitar in garage bands. That’s another activity I’ve started again. Fortunately, my kids, who are now around college-age plus or minus, all love the music from the 60s and 70s that I dust off and play, and they have great voices, so we have a lot of fun jamming and singing harmonies together.

What’s one thing that a visitor to your hometown of Washington, DC should experience?

The Washington DC area is famous for lots of great tourist attractions. But if you enjoy Michelin Guide-level dining experiences, I’d recommend a restaurant right in my neighborhood. It’s called L’Auberge Chez François, and it’s quite famous. It features Alsatian food (from the eastern region of France, along the German border). It’s an amazing restaurant that’s been there for almost 50 years, and it continues to draw a clientele from across the region and around the world. It’s always packed, so get reservations well in advance!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mark Ryland

Mark is the director of the Office of the CISO for AWS. He has more than 28 years of experience in the technology industry and has served in leadership roles in cybersecurity, software engineering, distributed systems, technology standardization and public policy. Prior to his current role, he served as the Director of Solution Architecture and Professional Services for the AWS World Public Sector team.

New! Set permission guardrails confidently by using IAM access advisor to analyze service-last-accessed information for accounts in your AWS organization

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/set-permission-guardrails-using-iam-access-advisor-analyze-service-last-accessed-information-aws-organization/

You can use AWS Organizations to centrally govern and manage multiple accounts as you scale your AWS workloads. With AWS Organizations, central security administrators can use service control policies (SCPs) to establish permission guardrails that all IAM users and roles in the organization’s accounts adhere to. When teams and projects are just getting started, administrators may allow access to a broader range of AWS services to inspire innovation and agility. However, as developers and applications settle into common access patterns, administrators need to set permission guardrails to remove permissions for services that have not or should not be accessed by their accounts. Whether you’re just getting started with SCPs or have existing SCPs, you can now use AWS Identity and Access Management (IAM) access advisor to help you restrict permissions confidently.

IAM access advisor uses data analysis to help you set permission guardrails confidently by providing you with service-last-accessed information for accounts in your organization. By analyzing last-accessed information, you can determine the services not used by IAM users and roles. You can implement permission guardrails using SCPs that restrict access to those services. For example, you can identify services not accessed in an organizational unit (OU) for the last 90 days, create an SCP that denies access to these services, and attach it to the OU to restrict access to all IAM users and roles across the accounts in the OU. You can view service-last-accessed information for your accounts, OUs, and your organization using the IAM console in the account you used to create your organization. You can access this information programmatically using IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.
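
To illustrate the programmatic path, here is a minimal sketch using the AWS CLI. The entity path is a placeholder; substitute your own organization, root, and OU IDs.

# Generate a service-last-accessed report for an OU (entity path is a placeholder).
JOB_ID=$(aws iam generate-organizations-access-report \
  --entity-path o-exampleorgid/r-examplerootid/ou-exampleouid \
  --query JobId --output text)

# Retrieve the report once the job completes; it lists each service allowed in
# the OU and when it was last accessed by IAM users and roles in those accounts.
aws iam get-organizations-access-report --job-id "$JOB_ID"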

In this post, I first review the service-last-accessed information provided by IAM access advisor using the IAM console. Next, I walk through an example to demonstrate how you can use this information to remove permissions for services not accessed by IAM users and roles within your production OU by creating an SCP.

Use IAM access advisor to view service-last-accessed information using the AWS management console

Access advisor provides an access report that displays a list of services and the last-accessed timestamps for when an IAM principal accessed each service. To view the access report in the console, sign in to the IAM console using the account you used to create your organization. Additionally, you need to enable SCPs on your organization root to view the access report. You can view the service-last-accessed information in two ways. First, you can use the Organization activity view to review the service-last-accessed information for an organizational entity such as an account or OU. Second, you can use the SCP view to review the service-last-accessed information for services allowed by existing SCPs attached to your organizational entities.

The Organization activity view lists your OUs and accounts. You can select an OU or account to view the services that the entity is allowed to access and the service-last-accessed information for those services. This tells you which services have not been accessed in an organizational entity. Using this information, you can remove permissions for these services by creating a new SCP and attaching it to the organizational entity, or by updating an existing SCP attached to the entity.

The SCP view lists all the SCPs in your organization. You can select an SCP to view the services allowed by the SCP and the service-last-accessed information for those services. The service-last-accessed information is the last-accessed timestamp across all the organizational entities that the SCP is attached to. This tells you which services are allowed by the SCP but have not been accessed. Using this information, you can refine your existing permission guardrails to remove permissions for services that have not been accessed.

Figure 1 shows an example of the access report for an OU. You can see the service-last-accessed information for all services that IAM users and roles can access in all the accounts in ProductionOU. You can see that services such as AWS Ground Station and Amazon GameLift have not been used in the last year. You can also see that Amazon DynamoDB was last accessed in account Application1 10 days ago.
 

Figure 1: An example access report for an OU

Now that I’ve described how to view service-last-accessed information, I will walk through an example.

Example: Restrict access to services not accessed in production by creating an SCP

For this example, assume ExampleCorp uses AWS Organizations to organize their development, test, and production environments into organizational units (OUs). Alice is a central security administrator responsible for managing the accounts in the production OU for ExampleCorp. She wants to ensure that her production OU, called ProductionOU, has permissions to only the services that are required to run existing workloads. Currently, Alice hasn’t set any permission guardrails on her production OU. I will show you how you can help Alice review the service-last-accessed information for her production OU and set a permission guardrail confidently using an SCP to restrict access to services not accessed by ExampleCorp developers and applications in production.

Prerequisites

  1. Ensure that the SCP policy type is enabled for the organization. If you haven’t enabled SCPs, you can enable it for your organization root by following the steps mentioned in Enabling and Disabling a Policy Type on a Root.
  2. Ensure that your IAM roles or users have appropriate permissions to view the access report. You can do so by attaching the IAMAccessAdvisorReadOnly managed policy.

How to review service-last-accessed information for ProductionOU in the IAM console

In this section, you’ll review the service-last-accessed information using IAM access advisor to determine the services that have not been accessed across all the accounts in ProductionOU.

  1. Start by signing in to the IAM console in the account that you used to create the organization.
  2. In the left navigation pane, under the AWS Organizations section, select the Organization activity view.

    Note: Enabling the SCP policy type does not set any permission guardrails for your organization unless you start attaching SCPs to accounts and OUs in your organization.

  3. In the Organization activity view, select ProductionOU from the organization structure displayed on the console so you can review the service last accessed information across all accounts in that OU.
     
    Figure 2: Select ‘ProductionOU’ from the organizational structure

  4. Selecting ProductionOU opens the Details and activity tab, which displays the access report for this OU. In this example, I have no permission guardrail set on the ProductionOU, so the default FullAWSAccess SCP is attached, allowing the ProductionOU to have access to all services. The access report displays all AWS services along with their last-accessed timestamps across accounts in the OU.
     
    Figure 3: The service access report

  5. Review the access report for ProductionOU to determine services that have not been accessed across accounts in this OU. In this example, there are multiple accounts in ProductionOU. Based on the report, you can identify that services Ground Station and GameLift have not been used in 365 days. Using this information, you can confidently set a permission guardrail by creating and attaching a new SCP that removes permissions for these services from ProductionOU. You can use a different time period, such as 90 days or 6 months, to determine if a service is not accessed based on your preference.
     
    Figure 4: Amazon GameLift and AWS Ground Station are not accessed

Create and attach a new SCP to ProductionOU in the AWS Organizations console

In this section, you’ll use the access insights you gained from using IAM access advisor to create and attach a new SCP to ProductionOU that removes permissions to Ground Station and GameLift.

  1. In the AWS Organizations console, select the Policies tab, and then select Create policy.
  2. In the Create new policy window, give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: ProductionGuardrail
    • Description: Restricts permissions to services not accessed in ProductionOU.
  3. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.
     
    Figure 5: SCP editor tool

  4. Next, add the services you want to restrict. Using the left panel, select the Ground Station and GameLift services. Denying access to services using SCPs is a powerful action if these services are in use. From the service-last-accessed information I reviewed in step 5 of the previous section, I know these services haven’t been used for more than 365 days, so it is safe to remove access to these services. In this example, I’m not adding any resource or condition to my policy statement.
     
    Figure 6: Add the services you want to restrict

  5. Next, use the Resource policy element, which allows you to provide specific resources. In this example, I select the resource type as All Resources.
     
    Figure 7: Select resource type as “All Resources”

  6. Select the Create Policy button to create your policy. You can see the new policy in the Policies tab.
     
    Figure 8: The new policy on the “Policies” tab

  7. Finally, attach the policy to ProductionOU where you want to apply the permission guardrail. (If you’d rather script this procedure, see the CLI sketch after these steps.)
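
Here is a hedged CLI equivalent of the steps above; the OU ID is a placeholder, and the policy content mirrors the deny statement built in the editor.

# Create the SCP that denies the services not accessed in ProductionOU.
POLICY_ID=$(aws organizations create-policy \
  --name ProductionGuardrail \
  --description "Restricts permissions to services not accessed in ProductionOU." \
  --type SERVICE_CONTROL_POLICY \
  --content '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": ["gamelift:*", "groundstation:*"],
      "Resource": "*"
    }]
  }' \
  --query Policy.PolicySummary.Id --output text)

# Attach the SCP to the production OU (OU ID is a placeholder).
aws organizations attach-policy \
  --policy-id "$POLICY_ID" \
  --target-id ou-exampleouid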

Alice can now review the service-last-accessed information for the ProductionOU and set permission guardrails for her production accounts. This ensures that the permission guardrail Alice set for her production accounts provides permissions to only the services that are required to run existing workloads.

Summary

In this post, I reviewed how access advisor provides service-last-accessed information for AWS organizations. Then, I demonstrated how you can use the Organization activity view to review service-last-accessed information and set permission guardrails to restrict access only to the services that are required to run existing workloads. You can also retrieve service-last-accessed information programmatically. To learn more, visit the documentation for retrieving service last accessed information using APIs.

If you have comments about using IAM access advisor for your organization, submit them in the Comments section below. For questions related to reviewing the service last accessed information through the console or programmatically, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

How to host and manage an entire private certificate infrastructure in AWS

Post Syndicated from Josh Rosenthol original https://aws.amazon.com/blogs/security/how-to-host-and-manage-an-entire-private-certificate-infrastructure-in-aws/

AWS Certificate Manager (ACM) Private Certificate Authority (CA) now offers the option of managing online root CAs and a full online PKI hierarchy. You can now host and manage your organization’s entire private certificate infrastructure in AWS. Support for a full hierarchy expands the capabilities of ACM Private CA.

CA administrators can use ACM Private CA to create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs. Customers can create secure and highly available CAs in any one of the AWS Regions in which ACM Private CA is available, without building and maintaining their own on-premises CA infrastructure. ACM Private CA provides essential security for operating a CA in accordance with your internal compliance rules and security best practices. ACM Private CA is secured with AWS-managed hardware security modules (HSMs), removing the operational and cost burden from customers.

An overview of CA hierarchy

Certificates are used to establish identity and secure connections. A resource presents a certificate to a server to establish its identity. If the certificate is valid, and a chain can be constructed from the certificate to a trusted root CA, the server can positively identify and trust the resource.

A CA hierarchy provides strong security and restrictive access controls for the most-trusted root CA at the top of the trust chain, while allowing more permissive access and bulk certificate issuance for subordinate CAs lower in the chain.

The root CA is a cryptographic building block (root of trust) upon which certificates can be issued. It consists of a private key for signing (issuing) certificates and a root certificate that identifies the root CA and binds the private key to the name of the CA. The root certificate is distributed to the trust stores of each entity in an environment. When resources attempt to connect with one another, they check the certificates that each entity presents. If the certificates are valid and a chain can be constructed from the certificate to a root certificate installed in the trust store, a “handshake” occurs between resources that cryptographically proves the identity of each entity to the other. This creates an encrypted communication channel (TLS/SSL) between them.
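To make the trust-store idea concrete, here is a minimal Python sketch of a client validating a server against a private root certificate. The certificate file name and host name are placeholders; any TLS client library follows the same chain-validation pattern.

import socket
import ssl

# Trust store containing only your private root CA certificate (PEM); the path is illustrative.
context = ssl.create_default_context(cafile='private-root-ca.pem')

with socket.create_connection(('internal.example.com', 443)) as sock:
    # wrap_socket performs the TLS handshake; the server's certificate chain is
    # validated against the root certificate loaded into the context above.
    with context.wrap_socket(sock, server_hostname='internal.example.com') as tls:
        print(tls.version(), tls.getpeercert()['subject'])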

How to configure a CA hierarchy with ACM Private CA

You can use root CAs to create a CA hierarchy without the need for an external root CA, and start issuing certificates to identify resources within your organizations. You can create root and subordinate CAs in nearly any configuration you want, including defining a CA structure to fit your needs or replicating an existing CA structure.

To get started, you can use the ACM Private CA console, APIs, or CLI to create a root and subordinate CA and issue certificates from the subordinate CA.
 

Figure 1: Issue certificates after creating a root and subordinate CA

Figure 1: Creating a root CA

You can create a two-level CA hierarchy in less than 10 minutes with the ACM Private CA console wizard, which walks you through each step of creating a root or subordinate CA. When you create a subordinate CA, the wizard prompts you to chain the subordinate to a parent CA.
 

Figure 2: Walk through each step with the ACM Private CA console wizard

Figure 2: The “Install subordinate CA certificate” page
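If you prefer the API route mentioned above, the flow below is a hedged boto3 sketch of creating and self-signing a root CA. The subject name, key and signing algorithms, validity, and idempotency token are illustrative, and in practice you may need to wait briefly between steps for the CSR and the issued certificate to become available.

import boto3

acmpca = boto3.client('acm-pca')

# 1. Create the root CA; it starts out waiting for its own certificate to be installed.
ca_arn = acmpca.create_certificate_authority(
    CertificateAuthorityConfiguration={
        'KeyAlgorithm': 'RSA_2048',
        'SigningAlgorithm': 'SHA256WITHRSA',
        'Subject': {'CommonName': 'Example Corp Root CA'},
    },
    CertificateAuthorityType='ROOT',
    IdempotencyToken='example-root-ca',
)['CertificateAuthorityArn']

# 2. Retrieve the CSR that the new CA generated.
csr = acmpca.get_certificate_authority_csr(CertificateAuthorityArn=ca_arn)['Csr']

# 3. Self-sign the CSR with the root CA certificate template.
cert_arn = acmpca.issue_certificate(
    CertificateAuthorityArn=ca_arn,
    Csr=csr.encode('utf-8'),
    SigningAlgorithm='SHA256WITHRSA',
    TemplateArn='arn:aws:acm-pca:::template/RootCACertificate/V1',
    Validity={'Value': 10, 'Type': 'YEARS'},
)['CertificateArn']

# 4. Install the signed certificate on the CA to activate it. In a real script,
#    poll (or use the service's waiters) until the certificate is issued.
cert = acmpca.get_certificate(
    CertificateAuthorityArn=ca_arn, CertificateArn=cert_arn
)['Certificate']
acmpca.import_certificate_authority_certificate(
    CertificateAuthorityArn=ca_arn, Certificate=cert.encode('utf-8')
)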

After creating a new root CA, you need to distribute the new root to the trust stores in your servers’ operating systems and browsers. If you want a simple, one-level CA hierarchy for development and testing, you can create a root certificate authority and start issuing private certificates directly from the root CA.

Note: The trade-off of this approach is that you can’t revoke the root CA certificate because the root CA certificate is installed in your trust stores. To effectively “untrust” the root CA in this scenario, you would need to replace the root CA in your trust stores with a new root CA.

Offline versus online root CAs

Some organizations, and all public CAs, keep their root CAs offline (that is, disconnected from the network) in a physical vault. In contrast, most organizations have root CAs that are connected to the network only when they’re used to sign the certificates of CAs lower in the chain. For example, customers might create a root CA with a 20-year lifetime and disable it under normal circumstances, so that it can only be used when a privileged administrator enables it to sign CA certificates for a child CA. Because using the root CA to issue a certificate is a rare and carefully controlled operation, customers monitor logs and audit reports, and generate alarms that notify them when their root CA is used to issue a certificate. Subordinate issuing CAs are the lowest in the hierarchy. They are typically used for bulk issuance of certificates that identify devices and resources. Subordinate issuing CAs typically have shorter lifetimes (1-2 years) and fewer policy controls and monitors.

With ACM Private CA, you can create a trusted root CA with a lifetime of 10 or more years. All CA private keys are protected by FIPS 140-2 level 3 HSMs. You can verify the CA is used only for authorized purposes by reviewing AWS CloudTrail logs and audit reports. You can further protect against mis-issuance by configuring AWS Identity and Access Management (IAM) permissions that limit access to your CA. With an ACM Private CA, you can revoke certificates issued from your CA and use the certificate revocation list (CRL) generated by ACM Private CA to provide revocation information to clients. This simplifies configuration and deployment.

Customer use cases for root CA hierarchy

There are three common use cases for root CA hierarchy.

The most common use case is customers who are advanced PKI users and already have an offline root CA protected by an HSM. However, when it comes to development and network staging, they don’t want to use the same root CA and certificate. The new root CA hierarchy feature allows them to easily stand up a PKI for their test environment that mimics production, but uses a separate root of trust.

The second use case is customers who are interested in using a private CA but don’t have strong knowledge of PKI, nor have they invested in HSMs. These customers have gotten by, generating a root CA using OpenSSL. With the offering of root CA hierarchy, they’re now able to stand up a root CA within ACM Private CA that is protected by an HSM and restricted by IAM access policy. This increases the security of their hierarchy and simplifies their deployment.

The third use case is customers who are evaluating an internal PKI and also looking at managing an offline HSM. These customers recognize the significant process, management, cost, and training investments to stand up the full infrastructure required. Customers can remove these costs by managing their organization’s entire private certificate infrastructure in AWS.

How to get started

With ACM Private CA root CA hierarchy feature, you can create a PKI hierarchy and begin issuing private certificates for identity and securing TLS communication. To get started, open the ACM Private CA console. To learn more, read getting started with AWS Certificate Manager and getting started in the ACM Private CA user guide.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Josh Rosenthol

Josh is a Product Manager who helps solve customer problems with public and private certificate and CAs from AWS. He enjoys listening to customers describe their use cases and translate them into improvements to AWS Certificate Manager and ACM Private CA.

Author

Todd Cignetti

Todd Cignetti is a Principal Product Manager at Amazon Web Services. He is responsible for AWS Certificate Manager (ACM) and ACM Private CA. He focuses on helping AWS customers identify and secure their resources and endpoints with public and private certificates.

How to prompt users to reset their AWS Managed Microsoft AD passwords proactively

Post Syndicated from Tekena Orugbani original https://aws.amazon.com/blogs/security/how-to-prompt-users-to-reset-their-aws-managed-microsoft-ad-passwords-proactively/

If you’re an AWS Directory Service administrator, you can reset your directory users’ passwords from the AWS console or the CLI when their passwords expire. However, you can improve your efficiency by reducing the number of requests for password resets. You can also help improve the security of your organization by having your users proactively reset their directory passwords before they expire. In this post, I describe the steps you can take to set up a solution to send regular reminders to your AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) users to prompt them to change their password before it expires. This will help prevent users from being locked out when their passwords expire and also reduce the number of reset requests sent to administrators.

Solution Overview

When users’ passwords expire, they typically contact their directory service administrator to help them reset their password. For security reasons, they then need to reset their password again on their computer so that the administrator has no knowledge of the new password. This process is time-consuming and impacts productivity. In this post, I present a solution that automatically reminds users to reset their AWS Managed Microsoft AD passwords before they expire. The following diagram and description explain how the solution works.
 

Figure 1: Solution architecture

Figure 1: Solution architecture

  1. A script running on an AWS Managed Microsoft AD domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance (Notification Server) searches the AWS Managed Microsoft AD for all enabled user accounts and retrieves their names, email addresses, and password expiry dates.
  2. Using the permissions of the IAM role attached to the Notification Server, the script obtains the SES SMTP credentials stored in AWS Secrets Manager.
  3. With the SMTP credentials obtained in Step 2, the script then securely connects to Amazon Simple Email Service (Amazon SES).
  4. Based on your preferences, Amazon SES sends domain password expiry notifications to the users’ mailboxes.

A separate process for updating the SES credentials stored in AWS Secrets Manager occurs as follows:

  1. A CloudWatch rule triggers a Lambda function.
  2. The Lambda function generates new SES SMTP credentials from the SES IAM Username.
  3. The Lambda function then updates AWS Secrets Manager with the new SES credentials.
  4. The Lambda function then deletes the previous IAM access key.

Prerequisites

The instructions in this post assume that you’re familiar with how to create Amazon EC2 for Windows Server instances, use Remote Desktop Protocol (RDP) to log in to the instances, and have completed the following tasks:

  1. Create an AWS Microsoft AD directory.
  2. Join an Amazon EC2 for Windows Server instance to the AWS Microsoft AD domain to use as your Notification Server.
  3. Sign up for Amazon Simple Email Service (Amazon SES).
  4. Remove Amazon EC2 throttling on port 25 for your EC2 instance.
  5. Remove your Amazon SES account from the Amazon SES sandbox so you can also send email to unverified recipients.

Note: You can use your AWS Managed Microsoft AD directory management instance as the Notification Server. For the steps below, use any account that is a member of the AWS Delegated Administrators group.

Summary of the steps

  1. Verify an Amazon SES email address.
  2. Create Amazon SES SMTP credentials.
  3. Store the Amazon SES SMTP credentials in AWS Secrets Manager.
  4. Create an IAM role with read permissions to the secret in AWS Secrets Manager.
  5. Set up and test the notification script.
  6. Set up Windows Task Scheduler.
  7. Configure automatic rotation of the SES Credentials stored in Secrets Manager.

STEP 1: Verify an Amazon SES email address

To prevent unauthorized use, Amazon SES requires that you verify the email address that you use as a “From,” “Source,” “Sender,” or “Return-Path”.

To verify the email address you will use as the sending address, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation pane, under Identity Management, select Email Addresses.
  3. Select Verify a New Email Address, and then enter the email address.
  4. Select Verify This Email Address.

An email will be sent to the specified email address with a link to verify the email address. Once you verify the email, you’ll see the Verification Status as verified in the SES console.

In the image below, I have four verified email addresses:
 

Figure 2: Verified email addresses

Figure 2: Verified email addresses

STEP 2: Create Amazon SES SMTP credentials

You must create an Amazon SES SMTP user name and password to access the Amazon SES SMTP interface and send email using the service. To do this, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation bar, select SMTP Settings.
  3. In the content pane, make a note of the Server Name as you will use this when sending the email in Step 5. Select Create My SMTP Credentials.
     
    Figure 3: Make a note of the SES SMTP Server Name

    Figure 3: Make a note of the SES SMTP Server Name

  4. Specify a value for the IAM User Name field. Make a note of this IAM User Name because you will need it in Step 7. In this post, I use the placeholder, ses-smtp-user-eu-west-1, as the user name (as shown below):
     
    Figure 4: Make a note of SES IAM User Name

    Figure 4: Make a note of SES IAM User Name

  5. Select Create.

Make a note of the SMTP Username and SMTP Password you created because you’ll use these in later steps, as shown below in my example.
 

Figure 5: Make a note of the SES SMTP Username and SMTP Password

Figure 5: Make a note of the SES SMTP Username and SMTP Password

STEP 3: Store the Amazon SES SMTP credentials in AWS Secrets Manager

In this step, use AWS Secrets Manager to store the Amazon SES SMTP credentials created in Step 2. You will reference this credential when you execute the script in the Notification Server.
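The notification script in this post is PowerShell, but for illustration, this is roughly how any AWS SDK client reads the credentials back. A minimal boto3 sketch, assuming the secret name AWS-SES used below:

import json
import boto3

sm = boto3.client('secretsmanager')

# The secret stores one key/value pair: the SES SMTP user name mapped to the SMTP password.
secret = json.loads(sm.get_secret_value(SecretId='AWS-SES')['SecretString'])
smtp_username, smtp_password = next(iter(secret.items()))
print('Retrieved SMTP credentials for', smtp_username)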

Complete the following steps to store the Amazon SES SMTP credentials in AWS Secrets Manager:

  1. Sign in to the AWS Secrets Manager Console.
  2. Select Store a new secret, and then select Other types of secrets.
  3. Under Secret Key/value, enter the Amazon SES SMTP Username in the left box and the Amazon SES SMTP Password in the right box, and then select Next.
     
    Figure 6: Enter the Amazon SES SMTP user name and password

    Figure 6: Enter the Amazon SES SMTP user name and password

  4. In the next screen, enter the string AWS-SES as the name of the secret. Enter an optional description for the secret and add an optional tag and select Next.

    Note: I recommend using AWS-SES as the name of your secret. If you choose to use some other name, you will have to update the PowerShell script in Step 5. I also recommend creating the secret in the same region as the Notification Server. If you create your secret in a different region, you will also have to update the PowerShell script in Step 5.

     

    Figure 7: Enter "AWS-SES" as the secret name

    Figure 7: Enter “AWS-SES” as the secret name

  5. On next screen, leave the default setting as Disable automatic rotation and select Next. You will come back later in Step 7 where you will use a Lambda function to rotate the secret at specified intervals.
  6. To store the secret, in the last screen, select Store. Now select the secret and make a note of the ARN of the secret, as shown in Figure 8.
     
    Figure 8: Make a note of the Secret ARN

    Figure 8: Make a note of the Secret ARN

Step 4: Create IAM role with permissions to read the secret

Create an IAM role that grants permissions to read the secret created in Step 3. Then, attach this role to the Notification Server to enable your script to read this secret. Complete the following steps:

  1. Log in to the IAM Console.
  2. In the navigation bar, select Policies.
  3. In the content pane, select Create Policy, and then select JSON.
  4. Replace the content with the following snippet while specifying the ARN of the secret you created earlier in step 3:
    
        {
            "Version": "2012-10-17",
            "Statement": {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "<arn-of-the-secret-created-in-step-3>"
            }
        }                
        

    Here is how it looks in my example after I replace with the ARN of my Secrets Manager secret:
     

    Figure 9: Example policy

    Figure 9: Example policy

  5. Select Review policy.
  6. On the next screen, specify a name for the policy. In my example, I have specified Access-Ses-Secret as the name of the policy. Also specify a description for the policy, and then select Create policy.
  7. In the navigation pane, select Roles.
  8. In the content pane, select Create role.
  9. On the next page, select EC2, and then select Next: Permissions.
  10. Select the policy you created, and then select Next: Tags.
  11. Select Next: Review, provide a name for the role. In my example, I have specified SecretsManagerReadAccessRole as the name. Select Create Role.

Now, complete the following steps to attach the role to the Notification Server:

  1. From the Amazon EC2 Console, select the Notification Server instance.
  2. Select Actions, select Instance Settings, and then select Attach/Replace IAM Role.
     
    Figure 10: Select "Attach/Replace IAM Role"

    Figure 10: Select “Attach/Replace IAM Role”

  3. On the Attach/Replace IAM Role page, choose the role to attach from the drop-down list. For this post, I choose SecretsManagerReadAccessRole and select Apply.

    Here is how it looks in my example:
     

    Figure 11: Example "Attach/Replace IAM Role"

    Figure 11: Example “Attach/Replace IAM Role”

STEP 5: Setup and Test the Notification Script

In this section, you’re going to test the script by sending a sample notification email to an end user to remind the user to change their password. To test the script, log in to your Notification Server using your AWS Managed Microsoft AD default Admin account. Then, complete the following steps:

  1. Install the PowerShell Module for Active Directory by opening PowerShell as Administrator and run the following command:

    Install-WindowsFeature -Name RSAT-AD-PowerShell

  2. Download the script to the Notification Server. In my example, I downloaded the script and stored in the location

    c:\scripts\PasswordExpiryNotify.ps1

  3. Create a new user in Active Directory and ensure you enter a valid email address for the new user.

    Note: Make sure to clear the User must change password at next logon check box when creating the user; otherwise, you will get an invalid output from the command in the next step.

    For this example, I created a test user named RandomUser in Active Directory.

  4. In the PowerShell Window, execute the following command to determine the number of days remaining before the password for the user expires. In this example, I run the following to determine the number of days remaining before the RandomUser account password expires:

    (New-TimeSpan -Start ((Get-Date).ToLongDateString()) -End ((Get-ADUser -Identity 'RandomUser' -Properties "msDS-UserPasswordExpiryTimeComputed" | Select @{Name="exp";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed").ToLongDateString()}}) | Select -ExpandProperty exp)).Days

    In my example, I get “15” as the output.

  5. To test the script, navigate to the location of the script on your Notification Server and execute the following:

    .\PasswordExpiryNotify.ps1 -smtpServer "<SES-SMTP-SERVER-NAME-NOTED-IN-STEP 2>" -from "<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>" -NotifyDays <NUMBER OF DAYS>

    In this example, I navigate to c:\scripts\ and execute:

    .\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 15

A new email will be sent to user’s mailbox. Verify the user has received the email.

Note: You can adjust the -NotifyDays parameter to send multiple email reminders to users. For example, if I want to notify users on three occasions (first notification 15 days before password expiration, then 7 days, and one more when there is only 1 day) I would execute the following:

.\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 1,7,15

Step 6: Set up a Windows Task Scheduler

Now that you have tested the script and confirmed that the solution is working as expected, you can set up a Windows Scheduled Task to execute the script daily. To do this:

  1. Open Task Scheduler.
  2. Right-click Task Scheduler Library, and then select Create Task.
  3. Specify a name for the task.
  4. On the Triggers tab, select New.
  5. Select Daily, and then select OK.
  6. On the Actions tab, select New.
  7. Inside Program/Script, type PowerShell.exe
  8. In the Add arguments (optional) box, type the following command, including the full path to the script.

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer '<SES-SMTP-SERVER-NAME-NOTED-IN-STEP 2>' -from '<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>' -NotifyDays <DAY,DAY,DAY>"

    In my example, I type the following:

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer 'email-smtp.eu-west-1.amazonaws.com' -from 'IT Servicedesk <[email protected]>' -NotifyDays 1,7,15"

  9. Select OK twice, and then enter your password when prompted to complete the steps.

The script will now run daily at the specified time and will send password expiration email notifications to your AWS Managed Microsoft AD users. In my example, a password expiration reminder email is sent to my AWS Managed Microsoft AD users 15 days before expiration, 7 days before expiration, and then 1 day before expiration.

Here is a sample email:
 

Figure 12: Sample password expiration email

Figure 12: Sample password expiration email

Note: You can edit the script to change the notification message to suit your requirements.

Step 7: Configure automatic update of the SES credentials

In this final section, you’re going to set up the configuration to automatically update the secret (that is, the SES credentials stored in AWS Secrets Manager) at regular intervals. To achieve this, you will use an AWS Lambda function that will do the following:

  1. Create a new access key using the IAM user you used to create the SES SMTP Credentials in Step 2 (ses-smtp-user-eu-west-1 in my example).
  2. Generate a new SES SMTP User password from the created IAM secret access key.
  3. Update the SES credentials stored in AWS Secrets Manager.
  4. Delete the old IAM access key.

Complete the following steps to enable automatic update of the SES credentials:

First, you will create the IAM policy that you will attach to a role assumed by the Lambda function. This policy provides the permissions to create new access keys for the SES IAM user and to update the SES credentials stored in AWS Secrets Manager.

  1. Log in to the IAM Console, and in the navigation bar, select Policies.
  2. In the content pane, select Create Policy, and then select JSON.
  3. Replace the content with the following script, specifying the ARN of the IAM user you used to create the SES SMTP credentials in Step 2 and the ARN of the secret stored in Secrets Manager that you noted in Step 3.
    
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "iam:*AccessKey*",
                    "Resource": "<arn-of-iam-user-created-in-step-2>"
                },
                {
                    "Effect": "Allow",
                    "Action": "secretsmanager:UpdateSecret",
                    "Resource": "<arn-of-secret-stored-in-secret-manager>"
                }
                ]
        }              
        

    Here is the JSON for the policy in my example:
     

    Figure 13: Example policy

    Figure 13: Example policy

  4. Select Review Policy, and then specify a name and a description for the policy. In my example, I have specified the name of the policy as iam-secretsmanager-access-for-lambda.

    Here is how it looks in my example:
     

    Figure 14: Specify a name and description for the policy

    Figure 14: Specify a name and description for the policy

  5. Select Create Policy

Now, create an IAM role and attach this policy.

  1. In the navigation bar, select Roles and select Create Role.
  2. Under the Choose the service that will use this role, select Lambda, and then select Next: Permissions.
  3. On the next page, select the policy you just created and select Next: Tags. Add an optional tag and select Next: Review.
  4. Specify a name for the role and description, and then select Create role. In my example, I have named the role: LambdaRoleRotatateSesSecret.

Now, you will create a Lambda function that will assume the created role:

  1. Log on to the AWS Lambda console and select Create Function
  2. Specify a name for the function, and then, under Runtime, select Python 3.7.
  3. Under Execution role, select Use an existing role, and then select the role you created earlier.

    Here are the settings I used in my example:
     

    Figure 15: Settings on the "Create function" page

    Figure 15: Settings on the “Create function” page

  4. Select Create function, copy the following Python code, and then paste it in the Function Code section.
    
        import boto3
        import os      #required to fetch environment variables
        import hmac    #required to compute the HMAC key
        import hashlib #required to create a SHA256 hash
        import base64  #required to encode the computed key
        import sys     #required for system functions
        
        iam = boto3.client('iam')
        sm = boto3.client('secretsmanager')
        
        SES_IAM_USERNAME = os.environ['SES_IAM_USERNAME']
        SECRET_ID = os.environ['SECRET_ID']
        
        def lambda_handler(event, context):
            print("Getting current credentials...")
            old_key = iam.list_access_keys(UserName=SES_IAM_USERNAME)['AccessKeyMetadata'][0]['AccessKeyId']
        
            print("Creating new credentials...")
            new_key = iam.create_access_key(UserName=SES_IAM_USERNAME)
            print("New credentials created...")
            
            smtp_username = '%s' % (new_key['AccessKey']['AccessKeyId'])
            iam_sec_access_key = '%s' % (new_key['AccessKey']['SecretAccessKey'])
            
             
            # These variables are used when calculating the SMTP password.
            message = 'SendRawEmail'
            version = '\x02'
            
            # Compute an HMAC-SHA256 key from the AWS secret access key.
            signatureInBytes = hmac.new(iam_sec_access_key.encode('utf-8'),message.encode('utf-8'),hashlib.sha256).digest()
            # Prepend the version number to the signature.
            signatureAndVersion = version.encode('utf-8') + signatureInBytes
            # Base64-encode the string that contains the version number and signature.
            smtpPassword = base64.b64encode(signatureAndVersion)
            # Decode the string and print it to the console.
            ses_smtp_pass = smtpPassword.decode('utf-8')
            secret_string = '{"%s": "%s"}' % (new_key['AccessKey']['AccessKeyId'], ses_smtp_pass)
            print("Updating credentials in SecretsManager...")
            sm_res = sm.update_secret(
                SecretId=SECRET_ID,
                SecretString=secret_string
                )
            print(sm_res)
            
            print("Deleting old key")
            del_res = iam.delete_access_key(
                UserName=SES_IAM_USERNAME,
                AccessKeyId=old_key
                )
            print(del_res)
        

    Here is what it will look like:
     

    Figure 16: The Python code pasted in the "Function Code" section

    Figure 16: The Python code pasted in the “Function Code” section

  5. In the Environment variables section, specify the two environment variables required by the Lambda Python code as follows:
    
                SECRET_ID: AWS-SES
                SES_IAM_USERNAME: <SES-IAM-USERNAME-NOTED-IN-STEP 2>  
        

    Here is how my environment variables look:
     

    Figure 17: Environment variables for the Lambda function

    Figure 17: Environment variables for the Lambda function

  6. Select Save.

    You have now created a Lambda function that can update the SES credentials stored in AWS Secrets Manager.

    You will now set up CloudWatch to trigger the Lambda function at scheduled intervals.

  7. Open the Amazon CloudWatch Console.
  8. In the navigation pane, select Rules and, in the content pane, select Create Rule.
  9. Under Event Source, select Schedule, and then select Fixed rate of. Specify how often you would like CloudWatch to trigger the Lambda function. In my example, I have chosen to update the SES credentials every 30 days.
  10. Under Targets, select Add Target, and then select Lambda Function.
  11. In Function, select the Lambda function you just created, and then select Configure details.
     
    Figure 18: Create new CloudWatch rule

    Figure 18: Create new CloudWatch rule

  12. Specify a name for the rule, enter a description, make sure the State check box is selected, and then select Create rule.

The SES credentials stored in AWS Secrets Manager will now be updated based on the scheduled intervals you specified in CloudWatch.
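The console steps above can also be expressed with the CloudWatch Events and Lambda APIs. The following is a hedged boto3 sketch; the rule name, function name, account ID, and ARN are placeholders.

import boto3

events = boto3.client('events')
awslambda = boto3.client('lambda')

# Create (or update) a scheduled rule that fires every 30 days.
rule = events.put_rule(
    Name='rotate-ses-smtp-credentials',
    ScheduleExpression='rate(30 days)',
    State='ENABLED',
)

# Allow CloudWatch Events to invoke the rotation function.
awslambda.add_permission(
    FunctionName='RotateSesSmtpCredentials',
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule='rotate-ses-smtp-credentials',
    Targets=[{
        'Id': 'rotate-ses-lambda',
        'Arn': 'arn:aws:lambda:eu-west-1:111122223333:function:RotateSesSmtpCredentials',
    }],
)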

Conclusion

In this post, I showed how you can set up a solution to remind your AWS Directory Service for Microsoft Active Directory users to change their passwords before expiration. I demonstrated how you can achieve this using a combination of a script and Amazon SES. I also showed you how you can configure rotation of the Amazon SES credentials on your preferred schedule.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the Amazon SES forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tekena Orugbani

Tekena is a Cloud Support Engineer at the AWS Cape Town office. He has many years of experience working with Windows Systems, virtualization/cloud technologies, and directory services. When he’s not helping customers make the most of their cloud investments, he enjoys hanging out with his family and watching Premier League football (soccer).

How to sign up for a Leadership Session at re:Inforce 2019

Post Syndicated from Ashley Nelson original https://aws.amazon.com/blogs/security/how-to-sign-up-for-a-leadership-session-at-reinforce-2019/

The first annual re:Inforce conference is one week away and with two full days of security, identity, and compliance learning ahead, I’m looking forward to the community building opportunities (such as Capture the Flag) and the hundreds of sessions that dive deep into how AWS services can help keep businesses secure in the cloud. The track offerings are built around four main topics (Governance, Risk & Compliance; Security Deep Dive; Security Pioneers; and The Foundation) and to help highlight each track, AWS security experts will headline four Leadership Sessions that cover the overall track structure and key takeaways from the conference.

Join one—or all—of these Leadership Sessions to hear AWS security experts discuss top cloud security trends. But I recommend reserving your spot now – seating is limited for these sessions. (See below for instructions on how to reserve a seat.)

Leadership Sessions at re:Inforce 2019

When you attend a Leadership Session, you’ll learn about AWS services and solutions from the folks who are responsible for them end-to-end. These hour-long sessions are presented by AWS security leads who are experts in their fields. The sessions also provide overall strategy and best practices for safeguarding your environments. See below for the list of Leadership Sessions offered at re:Inforce 2019.

Leadership Session: Security Deep Dive

Tuesday, Jun 25, 12:00 PM – 1:00 PM
Speakers: Bill Reid (Sr Mgr, Security and Platform – AWS); Bill Shinn (Sr Principal, Office of the CISO – AWS)

In this session, Bill Reid, Senior Manager of Security Solutions Architects, and Bill Shinn, Senior Principal in the Office of the CISO, walk attendees through the ways in which security leadership and security best practices have evolved, with an emphasis on advanced tooling and features. Both speakers have provided frontline support on complex security and compliance questions posed by AWS customers; join them in this master class in cloud strategy and tactics.

Leadership Session: Foundational Security

Tuesday, Jun 25, 3:15 PM – 4:15 PM
Speakers: Don “Beetle” Bailey (Sr Principal Security Engineer – AWS); Rohit Gupta (Global Segment Leader, Security – AWS); Philip “Fitz” Fitzsimons (Lead, Well-Architected – AWS); Corey Quinn (Cloud Economist – The Duckbill Group)

Senior Principal Security Engineer Don “Beetle” Bailey and Corey Quinn from the highly acclaimed “Last Week in AWS” newsletter present best practices, features, and security updates you may have missed in the AWS Cloud. With more than 1,000 service updates per year being released, having expert distillation of what’s relevant to your environment can accelerate your adoption of the cloud. As techniques for operationalizing cloud security, compliance, and identity remain a critical business need, this leadership session considers a strategic path forward for all levels of enterprises and users, from beginner to advanced.

Leadership Session: Aspirational Security

Wednesday, Jun 26, 11:45 AM – 12:45 PM
Speaker: Eric Brandwine (VP/Distinguished Engineer – AWS)

How does the cloud foster innovation? Join Vice President and Distinguished Engineer Eric Brandwine as he details why there is no better time than now to be a pioneer in the AWS Cloud, discussing the changes that next-gen technologies such as quantum computing, machine learning, serverless, and IoT are expected to make to the digital and physical spaces over the next decade. Organizations within the large AWS customer base can take advantage of security features that would have been inaccessible even five years ago; Eric discusses customer use cases along with simple ways in which customers can realize tangible benefits around topics previously considered mere buzzwords.

Leadership Session: Governance, Risk, and Compliance

Wednesday, Jun 26, 2:45 PM – 3:45 PM
Speakers: Chad Woolf (VP of Security – AWS); Rima Tanash (Security Engineer – AWS); Hart Rossman (Dir, Global Security Practice – AWS)

Vice President of Security Chad Woolf, Director of Global Security Practice Hart Rossman, and Security Engineer Rima Tanash explain how governance functionality can help ensure consistency in your compliance program. Some specific services covered are Amazon GuardDuty, AWS Config, AWS CloudTrail, Amazon CloudWatch, Amazon Macie, and AWS Security Hub. The speakers also discuss how customers leverage these services in conjunction with each other. Additional attention is paid to the concept of “elevated assurance,” including how it may transform the audit industry going forward. Finally, the speakers discuss how AWS secures its own environment, as well as talk about the control frameworks of specific compliance regulations.

How to reserve a seat

Unlike the Keynote session delivered by AWS CISO Steve Schmidt, you must reserve a seat for Leadership Sessions to guarantee entrance. Seats are limited, so put down that coffee, pause your podcast, and follow these steps to secure your spot.

  1. Log into the re:Inforce Session Catalog with your registration credentials. (Not registered yet? Head to the Registration page and sign up.)
  2. Select Event Catalog from the Dashboard.
  3. Enter “Leadership Session” in the Keyword Search box and check the “Exact Match” box to filter your results.
  4. Select the Scheduling Options dropdown to view the date and location of the session.
  5. Select the plus mark to add it to your schedule.

And that’s it! Your seat is now reserved. While you’re at it, check out the other available sessions, chalk talks, workshops, builders sessions, and security jams taking place during the event. You can customize your schedule to focus on security topics most relevant to your role, or take the opportunity to explore something new. The session catalog is subject to change, so be sure to check back to see what’s been added. And if you have any questions, email the re:Inforce team at [email protected].

Hope to see you there!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

author photo

Ashley Nelson

Ashley is a Content Manager within AWS Security. Ashley oversees both print and digital content, and has over six years of experience in editorial and project management roles. Originally from Boston, Ashley attended Lesley University where she earned her degree in English Literature with a minor in Psychology. Ashley is passionate about books, food, video games, and Oxford Commas.

Working backward: From IAM policies and principal tags to standardized names and tags for your AWS resources

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/working-backward-from-iam-policies-and-principal-tags-to-standardized-names-and-tags-for-your-aws-resources/

When organizations first adopt AWS, they have to make many decisions that will lay the foundation for their future footprint in the cloud. Part of this includes making decisions about the number of AWS accounts you choose to operate, but another fundamental task is constructing practical access control policies so that your application teams can’t affect each other’s resources within the same account. With AWS Identity and Access Management (IAM), you can customize granular access control policies that are appropriate for your organization, helping you follow security best practices such as separation-of-duties and least-privilege. As every organization is different, you’ll need to carefully consider what your cloud security policies are and how they relate to your cloud engineering teams. Things to consider include who should be authorized to perform which actions, how your teams operate with one another, and which IAM mechanisms are suitable for ensuring that only authorized access is allowed.

In this blog post, I’ll show you an approach that works backwards, starting with a set of customer requirements, then utilizing AWS features such as IAM conditions and principal tagging. Combined with an AWS resource naming and tagging strategy, this approach can help you meet your access control objectives. AWS recently enabled tags on IAM principals (users and roles), which allows you to create a single reusable policy that provides access based on the tags of the IAM principal. When you combine this feature with a standardized resource naming and tagging convention, you can craft a set of IAM roles and policies suitable for your organization.

AWS features used in this approach

To follow along, you should have a working knowledge of IAM and tagging, and familiarity with the following concepts:

Introducing Example Corporation

To illustrate the strategies I discuss, I’ll refer to a fictitious customer throughout my post: Example Corporation is a large organization that wants to use their existing Microsoft Active Directory (AD) as their identity store, with Active Directory Federation Services (AD FS) as the means to federate into their AWS accounts. They also have multiple business projects, some of which will need their own AWS accounts, and others that will share AWS accounts due to the dependencies of the applications within those projects. Each project has multiple application teams who do not need to access each other’s AWS resources.

Example Corporation’s access control requirements

Example Corporation doesn’t always dedicate a single AWS account to one team or one environment. Sometimes, multiple project teams work within the same account, and sometimes they have more than one environment in an account. Figure 1 shows how the Website Marketing and Customer Marketing project teams (each of which has multiple application teams) share two AWS accounts: a development and staging AWS account and a production AWS account. Although production has a dedicated AWS account, Example Corporation has decided that a shared development and staging account is acceptable.
 

Figure 1: AWS accounts shared by Example Corp's teams

Figure 1: AWS accounts shared by Example Corp’s teams

The development and staging environments share an AWS account, and the two teams do work closely together. All projects within an account will be allowed access to the read-only metadata of other resources, such as EC2 instance names, tags, and IAM information. However, each project team wants to prevent their application resources from being modified by the other team’s members.

Initial decisions for supporting shared account access control

Example Corporation decides to continue using their existing identity federation solution for access to AWS, as the existing processes for handling joiners, movers, and leavers can be extended to manage identities within AWS. They will enable this via Security Assertion Markup Language (SAML) provided by ADFS to allow Example Corporation’s AD users to access AWS by assuming IAM roles. Initially, they will create three IAM roles—project administrator, application administrator, and application operator—with additional roles to come later.

The company knows they need to implement access controls through IAM, and they’ve created an initial list of AWS services (EC2, RDS, S3, SNS, and Amazon CloudWatch) to secure. Infrastructure as code (IaC) is a new concept at Example Corporation, so they want to keep initial IAM roles and policies as simple as possible. IAM principal tags will help them reuse standard policies across accounts. Principal tags are global condition keys assigned to a user or role. They can be used within a condition to ensure that a new resource is tagged on creation with a value that matches your principal. They can also be used to verify that an existing resource has a matching tag prior to allowing an action against that resource.

Many, but not all, AWS services support tag-based authorization of AWS resources. For services that don’t support tag-based authorization, Example Corporation will enable access control by utilizing ARN paths with wildcards (ARN matching). The name of the resource and its ARN path will explicitly state which projects, applications, and operators have access to that resource. This will require the company to design and enforce a mandatory naming convention.

Please see the IAM user guide for an up-to-date list of resources that support tag-based authorization.

Using multiple tags to meet access control requirements

The web and marketing teams have settled on three common roles and have decided their access levels as follows:

  • Project administrator: Able to access and modify all resources for a specific project, including all the resources belonging to application teams under the project.
  • Application administrator: Able to access and modify only the resources owned by a particular application team.
  • Application operator: Able to access and modify only the resources owned by a specific application team, plus those that reside within one of three environments: development, staging, or production.

 

Figure 2: Example Corp's teams - administrators and operators with AWS access

Figure 2: Example Corp’s teams—administrators and operators with AWS access

As for the principal tags, there will be three unique tags named with the prefix access-, with tag values that differentiate the roles and their resources from other projects, applications, and environments.

Finally, because the AWS account is shared, Example Corporation needs to account for the service usage costs of the two teams. By adding a mandatory tag for “cost center,” they can determine the costs of the web team’s resources versus the marketing team’s resources in AWS Cost Explorer and AWS Cost and Usage Report.

Below is an example of the web team’s tags.

IAM principal tags used for the website project administrator role:

Tag name            Tag value
access-project      web
cost-center         123456

Tags for the website application administrator role:

Tag name            Tag value
access-project      web
access-application  nginx
cost-center         123456

Tags for the website application operator role—specifically for developer access to the dev environment:

Tag name            Tag value
access-project      web
access-application  nginx
access-environment  dev
cost-center         123456
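For example, here is a hedged boto3 sketch of applying the operator tags above to a role. The role name is illustrative; in Example Corporation’s setup, the tags would normally be assigned by whatever process provisions the federation roles.

import boto3

iam = boto3.client('iam')

# Tag values match the website application operator table above; the role name is hypothetical.
iam.tag_role(
    RoleName='web-nginx-dev-application-operator',
    Tags=[
        {'Key': 'access-project', 'Value': 'web'},
        {'Key': 'access-application', 'Value': 'nginx'},
        {'Key': 'access-environment', 'Value': 'dev'},
        {'Key': 'cost-center', 'Value': '123456'},
    ],
)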

Access control for AWS services and resources that support tag-based authorization

Example Corporation now needs to write IAM policies for their targeted resources. They begin with EC2, as that will be their most widely used service. The IAM documentation for EC2 shows that most write actions (create, modify, delete) support tag-based authorization, allowing the principal to execute the action only if the resource’s tag matches a predefined value.

For example, the following policy statement will only allow EC2 instances to be started or stopped if the resource tag value matches the “web” project name:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"web"
        }
    }
}         

However, if Example Corporation uses a policy variable instead of hardcoding the project name, the company can reuse the policy by taking advantage of the aws:PrincipalTag condition key:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"${aws:PrincipalTag/access-project}"
        }
    }
}    

Without policy variables, every IAM policy for every project would need a unique value to control access to the resource. Because the text of every policy document would be different, Example Corporation wouldn’t be able to reuse policies from one account to another or from one environment to another. Variables allow them to deploy the same policy file to all of their accounts, while allowing the effect of the policy to differ based on the tags that are used in each account.

As a result, Example Corporation will base the right to manipulate resources like EC2 on resource tags as much as possible. It is important, then, for their teams to tag each resource at the time of creation, if the resource supports it. Untagged resources won’t be manageable, but resources tagged properly will become automatically manageable. The company will use the aws:RequestTag IAM condition key to ensure that the requested access tags and cost allocation tags are assigned at the time of EC2 creation. The IAM policy associated with the application-operator role will therefore be:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "ec2:CreateAction": "RunInstances"
        }
    }
}
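
To satisfy these conditions, an operator has to supply matching tags at creation time. A hedged boto3 sketch is below; the AMI ID and instance type are placeholders, and the tag values mirror the developer role’s principal tags from the tables above.

import boto3

ec2 = boto3.client('ec2')

required_tags = [
    {'Key': 'access-project', 'Value': 'web'},
    {'Key': 'access-application', 'Value': 'nginx'},
    {'Key': 'access-environment', 'Value': 'dev'},
    {'Key': 'cost-center', 'Value': '123456'},
]

# Tag both the instance and its volumes on creation so the RequestTag conditions match.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',   # placeholder
    InstanceType='t3.micro',           # placeholder
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {'ResourceType': 'instance', 'Tags': required_tags},
        {'ResourceType': 'volume', 'Tags': required_tags},
    ],
)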

If someone tries to create an EC2 instance without setting proper tags, the RunInstances API call will fail. The application-administrator policy will be similar, with the added ability to create a resource in any environment:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-zone": [ "dev", "stg", "prd" ],   
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}    

And finally, the project-administrator policy will have the most access. Note that even though this policy is for a project administrator, the user is still limited to modifying resources only within three environments. In addition, to ensure that all resources have the required access-application tag, Example Corporation has added a null condition to verify that the tag value is non-empty:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        },
        "Null": {
            "aws:RequestTag/access-application": false
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}

Access control for AWS services and resources without tag-based authorization

Some services don’t support tag-based authorization. In those cases, Example Corporation will use ARN pattern matching. Many AWS resources use ARNs that contain a user-created name. Therefore, the company’s proposal is to name resources following a naming convention. A name will look like: [project]-[application]-[environment]-myresourcename. For resources that are globally unique, such as S3, Example Corporation additionally requires its abbreviated name, “exco,” to be at the beginning of the resource so as to avoid a naming collision with another corporation’s buckets:


arn:aws:s3:::exco-web-nginx-dev-staticassets

To enforce this naming convention, they craft a reusable IAM policy that ensures that only intended users with matching access-project, access-application, and access-environment tag values can modify their resources. In addition, using * wildcard matches, they are able to allow for custom resource name suffixes such as staticassets in the above example. Using an AWS SNS topic as an example, a snippet of the IAM policy associated with the application-operator role will look like this:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*"
    ]
} 
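
With this policy in place, an operator whose principal tags are web, nginx, and dev can only create topics under the matching prefix. A short, hedged sketch with an illustrative topic name:

import boto3

sns = boto3.client('sns')

# The name follows [project]-[application]-[environment]-suffix, so the topic ARN
# matches the wildcard pattern allowed by the operator policy above.
sns.create_topic(Name='web-nginx-dev-order-events')

# A name outside the caller's prefix, such as 'marketing-ads-dev-alerts', would
# produce an ARN the policy does not cover, so the call would be denied.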

And here’s an IAM policy for an application-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],            
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-dev-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-stg-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-prd-*"
    ]
}

And finally, here’s the IAM policy for a project-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:*" 
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-*"
    ]
}

The above policies have two caveats, however. First, they require that the principal tags have values that do not include a hyphen, as it is used as a delimiter according to Example Corporation’s new tag-based convention for access control. In addition, a forward slash cannot be used, as it is in use within ARNs by many AWS resources, such as S3 buckets:


arn:aws:s3:::awsexamplebucket/exampleobject.png

It is important that the company doesn’t let users create resources with disallowed or invalid tags. The following application admin permissions boundary policy uses a condition to permit IAM roles to be created, but only if they are tagged appropriately. Please note that these are just snippets of the boundary policy for the sake of illustration:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*"
    ]
}

And likewise, this permissions boundary policy attached to the project admin will do the same:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}

Note that the above boundary policies can also be crafted using allow statements and multiple explicit deny statements.
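For completeness, here is a hedged sketch of an application admin creating a role that satisfies both the naming convention and the tagging rules. The role name, account ID, trust policy, and boundary policy ARN are all illustrative.

import json
import boto3

iam = boto3.client('iam')

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# The role name starts with the admin's project and application prefix, and the
# permissions boundary referenced here is a hypothetical policy in the account.
iam.create_role(
    RoleName='web-nginx-dev-app-role',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    PermissionsBoundary='arn:aws:iam::111122223333:policy/exco-application-boundary',
    Tags=[
        {'Key': 'access-project', 'Value': 'web'},
        {'Key': 'access-application', 'Value': 'nginx'},
        {'Key': 'access-environment', 'Value': 'dev'},
    ],
)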

Example Corporation’s resource naming convention requirements

As shown in the above examples, Example Corporation has given project teams the ability to create resources with name-based access control for services that do not currently support tag-based authorization (such as SQS and S3). Through the use of wildcards, teams can still give their resources custom names to differentiate them from other resources created within the same team.

AWS resources have various limits on the structure and composition of names, so the company also restricts the character length of its access tags. For example, Amazon ElastiCache cluster names must be 20 alphanumeric characters or less, including hyphens. Most AWS resources have higher character limits, but to satisfy this one, Example Corporation caps its [project]-[application]-[environment] prefix at a 3-character project ID, a 5-character application ID, and a 3-character environment name. Including the three hyphens, the prefix totals 14 characters (for example, web-nginx-prd-), which leaves 6 characters for the user-specified cluster name.
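
As a rough illustration of that character budget, here's a hypothetical Python helper (not part of the original post) that assembles a conforming name and rejects values that would break the convention or exceed a service's limit:

def build_resource_name(project, application, environment, suffix, max_length=20):
    """Build a [project]-[application]-[environment]-[suffix] name and enforce a length limit."""
    for part in (project, application, environment):
        if "-" in part or "/" in part:
            raise ValueError("disallowed delimiter in tag value: %r" % part)
    name = "-".join([project, application, environment, suffix])
    if len(name) > max_length:
        raise ValueError("%r exceeds the %d-character limit" % (name, max_length))
    return name

# The "web-nginx-prd-" prefix consumes 14 of ElastiCache's 20 characters,
# which leaves 6 for the user-chosen suffix, such as "cache1".
print(build_resource_name("web", "nginx", "prd", "cache1"))  # web-nginx-prd-cache1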

Summary of Key Decisions

  • Services that support tag-based authorization (TBA) must have resources that follow a tagging convention for access control. Tagging on resource creation will be enforced where possible.
  • Services that do not support TBA must have resources that follow a naming convention. The cost center tag will still be required and will be applied after resource creation.
  • Services that do not support TBA and cannot have user-specified names in their ARN (less common) will be addressed case by case. They will either allow access for all projects and application teams sharing the same account, or allow access through a custom IAM policy so that only the desired team can reach the resource. Each IAM role should therefore stay a few policies short of the maximum number of attached policies allowed per role, to leave room for these custom policies.
  • It is acceptable to allow basic List* and Describe* IAM permissions for AWS resources for all users who log in to the account, as the company’s project teams work closely together.
  • IAM user and role names created by project and application admins must adhere to the approved resource naming conventions. Admins themselves will have a permissions boundary policy applied to their roles. This policy, in turn, will require that all users and roles the admins create also have a permissions boundary policy attached. This is especially important for roles associated with services that can create or modify IAM resources, such as roles attached to EC2 instances or Lambda functions.
  • Active Directory users who need access to AWS resources must assume different IAM roles to use the different levels of access that the project admin, application admin, and application operator each provide. Users must also assume a different role if they need access to a different project, because each role's tag has a single value; in this scheme, a single role cannot be assigned to multiple projects or application teams (see the sketch after this list).
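
As a rough illustration, here's a hypothetical boto3 sketch of switching into a single project's admin role. The account ID and role name are made up, and in Example Corporation's setup the credentials would actually arrive through SAML federation with Active Directory rather than a direct sts:AssumeRole call.

import boto3

sts = boto3.client("sts")

# Hypothetical role for the "web" project's admin; a user working on a second
# project would assume that project's role instead, since each role carries a
# single access-project tag value.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/web-project-admin",
    RoleSessionName="web-admin-session",
)["Credentials"]

web_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)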

Conclusion

Example Corporation was able to let its project teams share the same AWS account while still restricting each team's access to the majority of the account's AWS resources. Through the use of IAM principal tagging, combined with a resource naming and tagging convention, they created a reusable set of IAM policies that separate access not only between project admins, application admins, and application operators, but also between development, stage, and production users.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Professional Services Consultant who has assisted commercial and Federal customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Definitely not an AWS Security Profile: Corey Quinn, a “Cloud Economist” who doesn’t work here

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/definitely-not-an-aws-security-profile-corey-quinn-a-cloud-economist-who-doesnt-work-here/

[Image: a platypus scowling beside a cloud]

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You don’t work at AWS, but you do have deep experience with AWS Services. Can you talk about how you developed that experience and the work that you do as a “Cloud Economist?”

I see those sarcastic scare-quotes!

I’ve been using AWS for about a decade in a variety of environments. It sounds facile, but it turns out that being kinda good at something starts with being abjectly awful at it first. Once you break things enough times, you start to learn how to wield them in more constructive ways.

I have a background in SRE-style work and finance. Blending those together into a made-up thing called “Cloud Economics” made sense and focused on a business problem that I can help solve. It starts with finding low-effort cost savings opportunities in customer accounts and quickly transitions into building out costing predictions, allocating spend—and (aligned with security!) building out workable models of cloud governance that don’t get in an engineer’s way.

This all required me to be both broad and deep across AWS’s offerings. Somewhere along the way, I became something of a go-to resource for the community. I don’t pretend to understand how it happened, but I’m incredibly grateful for the faith the broader community has placed in me.

You’re known for your snarky newsletter. When you meet AWS employees, how do they tend to react to you?

This may surprise you, but the most common answer by far is that they have no idea who I am.

It turns out AWS employs an awful lot of people, most of whom have better things to do than suffer my weekly snarky slings and arrows.

Among folks who do know who I am, the response has been nearly universal appreciation. It seems that the newsletter is received in the spirit in which I intend it—namely, that 90–95% of what AWS does is awesome. The gap between that and perfection offers boundless opportunities for constructive feedback—and also hilarity.

The funniest reaction I ever got was when someone at a Summit registration booth saw “Last Week in AWS” on my badge and assumed I was an employee serving out the end of his notice period.

“Senior RageQuit Engineer” at your service, I suppose.

You’ve been invited to present during the Leadership Session for the re:Inforce Foundation Track with Beetle. What have you got planned?

Ideally not leaving folks asking incredibly pointed questions about how the speaker selection process was mismanaged! If all goes well, I plan on being able to finish my talk without being dragged off the stage by AWS security!

I kid. But my theory of adult education revolves around needing to grab people’s attention before you can teach them something. For better or worse, my method for doing that has always been humor. While I’m cognizant that messaging to a large audience of security folks requires a delicate touch, I don’t subscribe to the idea that you can’t have fun with it as well.

In short: if nothing else, it’ll be entertaining!

What’s one thing that everyone should stop reading and go do RIGHT NOW to improve their security posture?

Easy. Log into the console of your organization’s master account and enable AWS CloudTrail for all regions and all accounts in your organization. Direct that trail to a locked-down S3 bucket in a completely separate, highly restricted account, and you’ve got a forensic log of all management operations across your estate.

Worst case, you’ll thank me later. Best case, you’ll never need it.
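
If you'd rather script this than click through the console, here's a minimal boto3 sketch run from the organization's management account. The trail and bucket names are hypothetical, and the destination bucket must already exist with a bucket policy that allows CloudTrail to write to it:

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create a multi-Region, organization-wide trail that delivers to a bucket
# owned by a separate, tightly restricted logging account.
response = cloudtrail.create_trail(
    Name="org-forensic-trail",
    S3BucketName="example-restricted-logging-bucket",
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)

# Trails start out not recording events; turn on delivery explicitly.
cloudtrail.start_logging(Name=response["TrailARN"])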

It’s important, so what’s another security thing everyone should do?

Log in to your AWS accounts right now and update your security contact to your ops folks. It’s not used for marketing; it’s a point of contact for important announcements.

If you’re like many rapid-growth startups, your account is probably pointing to your founder’s personal email address—which means critical account notices are getting lost among Amazon.com sock purchase receipts.

That is not what being “SOC-compliant” means.
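
Corey is describing the console's account settings page, but the AWS Account API (released after this interview) can set the security contact programmatically. Here's a minimal boto3 sketch, with a hypothetical ops distribution list:

import boto3

account = boto3.client("account")

# Point the security alternate contact at an operations distribution list
# instead of a founder's personal inbox.
account.put_alternate_contact(
    AlternateContactType="SECURITY",
    Name="Security Operations",
    Title="Security Operations Team",
    EmailAddress="aws-security@example.com",
    PhoneNumber="+1-555-0100",
)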

From a security perspective, what recent AWS release are you most excited about?

It was largely unheralded, but I was thrilled to see AWS Systems Manager Parameter Store (it’s a great service, though the name could use some work) receive higher API rate limits; it went from 40 to 1,000 requests per second.

This is great for concurrent workloads and makes it likelier that people will manage secrets properly without having to roll their own.

Yes, I know that AWS Secrets Manager is designed around secrets, but KMS-encrypted parameters in Parameter Store also get the job done. If you keep pushing I’ll go back to using Amazon Route 53 TXT records as my secrets database… (Just kidding. Please don’t do this.)
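
For anyone following that advice, here's a minimal boto3 sketch of storing and reading a KMS-encrypted SecureString parameter; the parameter name and value are placeholders:

import boto3

ssm = boto3.client("ssm")

# Store a secret as a SecureString; without KeyId, the AWS managed SSM key is used.
ssm.put_parameter(
    Name="/example/app/db-password",
    Value="correct-horse-battery-staple",
    Type="SecureString",
    Overwrite=True,
)

# Read it back, decrypted.
secret = ssm.get_parameter(
    Name="/example/app/db-password",
    WithDecryption=True,
)["Parameter"]["Value"]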

In your opinion, what’s the biggest challenge facing cloud security right now?

The same thing that’s always been the biggest challenge in security: getting people to care before a disaster happens.

We see the same thing in cloud economics. People care about monitoring and controlling cloud spend right after they weren’t being diligent and wound up with an unpleasant surprise.

Thankfully, with an unexpectedly large bill, you have a number of options. But you don’t get a do-over with a data breach.

The time to care is now—particularly if you don’t think it’s a focus area for you. One thing that excites me about re:Inforce is that it gives an opportunity to reinforce that viewpoint.

Five years from now, what changes do you think we’ll see across the cloud security landscape?

I think we’re already seeing it now. With the advent of things like AWS Security Hub and AWS Control Tower (both currently in preview), security is moving up the stack.

Instead of having to keep track of implementing a bunch of seemingly unrelated tooling and rulesets, higher-level offerings are taking a lot of the error-prone guesswork out of maintaining an effective security posture.

Customers aren’t going to magically reprioritize security on their own. So it’s imperative that AWS continue to strive to meet them where they are.

What are the comparative advantages of being a cloud economist vs. a platypus keeper?

They’re more alike than you might expect. The cloud has sharp edges, but platypodes are venomous.

Of course, large bills are a given in either space.

You sometimes rename or reimagine AWS services. How should the Security Blog rebrand itself?

I think the Security Blog suffers from a common challenge in this space.

It talks about AWS’s security features, releases, and enhancements—that’s great! But who actually identifies as its target market?

Ideally, everyone should; security is everyone’s job, after all.

Unfortunately, no matter what user persona you envision, a majority of the content on the blog isn’t written for that user. This potentially makes it less likely that folks read the important posts that apply to their use cases, which, in turn, reinforces the false narrative that cloud security is both impossibly hard and should be someone else’s job entirely.

Ultimately, I’d like to see it split into different blogs that emphasize CISOs, engineers, and business tracks. It could possibly include an emergency “this is freaking important” feed.

And as to renaming it, here you go: you’d be doing a great disservice to your customers should you name it anything other than “AWS Klaxon.”

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Corey Quinn

Corey is the Cloud Economist at the Duckbill Group. Corey specializes in helping companies fix their AWS bills by making them smaller and less horrifying. He also hosts the AWS Morning Brief and Screaming in the Cloud podcasts and curates Last Week in AWS, a weekly newsletter summarizing the latest in AWS news, blogs, and tools, sprinkled with snark.