Tag Archives: Security, Identity & Compliance

AWS Security Profile: Ron Cully, Principal Product Manager, AWS Identity

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profile-ron-cully-principal-product-manager-aws-identity/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for nearly four years. I’m a Principal Product Manager in AWS Identity. I’ve spent most of my time covering our managed Active Directory products, and over the past year I’ve taken on management for AWS Single Sign-On and AWS Identity and Access Management (IAM).

How do you explain your job to non-tech friends?

Identity is what people use when they sign in to their services. We work on the back-end systems that authenticate users and manage their access, so that people can get to their services securely.

What are you currently working on that you’re excited about?

Wow, it’s hard to pick just one. So, I’d say I’m most excited about the work that we’re doing so that customers can use identities that they already have across all of AWS.

What’s the most challenging part of your job?

Making sure that we deliver the most important features that customers want, in the right sequence, as quickly as possible. To do that, we need to focus on the key pain points customers have right now and resolve those pain points in ways that are the most meaningful to them. We also need to make sure that we have the right roadmap and keep doing that on an iterative basis.

What’s your favorite part of your job?

I get to work with some incredibly smart people inside and outside of Amazon. It’s a really interesting space to be in. There’s a lot happening at the industry level, and we’re trying to sort out the puzzle of how we bring things together given what customers have and use today. Customers have all of this existing technology that they want to use, and they have a lot of investments in it. We want to make it possible for them to use those investments in new, innovative ways that make their lives easier.

The AWS Identity team is growing rapidly. What are some of the biggest challenges that teams face during rapid growth?

One key challenge is hiring. How do we find great people? Amazon has some pretty high bars, and we need to find the right people who can ramp up quickly to help us solve the challenges that we want to go fix. The other thing is making sure that we stay on the same page. There’s a lot of work that we’re doing across a lot of different areas, so it’s important to stay coordinated so that we deliver the most important things that solve our customers’ current pain points.

What advice would you give to people coming on board the AWS Identity team?

Make sure that you’re highly customer focused. Dive deep, because we really need to understand the details of what’s going on and what customers are trying to accomplish. Be a really effective communicator by breaking things down into the simplest terms. I find that often, people get so caught up in the technology that they get lost in it. It’s really important to remember that we’re solving problems that are very visceral to human beings. In order to get the correct results, you need to be able to communicate in a way that makes sense to anybody.

Which Amazon leadership principles have you relied on the most in your own career at AWS?

Certainly Customer Obsession. That’s absolutely imperative. Dive Deep of course. Learn and Be Curious is huge. But also a less popular principle: Have Backbone; Disagree and Commit. It’s important that we have healthy discussions. This principle isn’t about being confrontational. It’s about being smart about how you synthesize the information that you learn from your customers and bring forth your ideas and opinions in a respectful way. It’s important to have a healthy conversational debate about what’s right for customers, so that we can drive important things forward when they need to be done. At the same time, we must recognize that not all ideas or their timing are right. It’s important to understand the bigger picture of what’s going on, understand that a different approach might be better in that particular moment, and commit to moving forward as a team after the debate is finished.

What’s the most common misperception you encounter about AWS Identity?

I think there’s a huge amount of confusion in the Active Directory area about what you can and can’t do, and how it relates to what customers are doing with Azure AD. We probably have the best managed Active Directory in the cloud. But people sometimes confuse Active Directory with Azure AD, which are completely different technologies. So we try to help customers understand how our product works relative to Azure AD. They are complementary; they can work together.

Another area that’s confusing for customers is choosing which AWS identity system to use today. AWS identity systems have grown organically over time. We’ve listened to customers and added features, and so now we have a couple of different ways of approaching identity. We started out with IAM users and groups. Then over the past few years, we’ve made it possible to use Active Directory identities in AWS. We’ve also been embracing the use of standards-based federation. Federation enables customers who use identity systems like Okta, Ping, Google, or Azure AD to use those identities to sign in to AWS. Due to this organic growth, customers can choose between managing identities in IAM, creating them in AWS SSO, bringing them in from Active Directory by using AWS SSO, or using SAML federation through IAM. We also have the Cognito product that people have been adapting to use with IAM federation. Based on the state of where the technologies are now, it can be confusing for customers to know which identity system to use right now so that they’re on the right path going into the future. This is an area we are working hard to simplify and clarify for our customers.

What do you think is the biggest challenge facing the identity space right now?

I think it’s helping customers understand how to use the identity system that they have now—broadly, across all of the applications and services that they want to use—and how to provide them with a consistent experience. I think that’s one of the key industry challenges. We’ve come a long way, but there’s still a lot of road ahead of us to make that all possible at the industry level.

Looking to the future, how do you think the authorization and authentication landscape will evolve?

I think we’ll start to see more convergence on interoperable technologies for authentication. There’s some evolution already happening between the SAML model of authentication and OIDC (OpenID Connect), and I think that convergence will continue. One sticky spot in the industry right now is how to set up federation. It can be complicated and time consuming to set up, and there’s work that we’re doing in this space to help make it easier. We did a technology demonstration at Identiverse last June using the FastFed standards draft to connect identity providers (IdPs) and service providers together. In our demonstration, we showed how FastFed makes it possible to connect AWS SSO to Google in a couple of clicks. That enables customers to use the identities they already have and use AWS SSO as their AWS integrated permissions management tool to grant access to resources across all their AWS accounts. I think FastFed will really help customers because today it’s so complicated to try and connect identity providers to tens or hundreds of applications.

What does identity mean to you on a personal level?

When I think about identity, it’s about who I am, and there are different contexts for that, such as who I am as a consumer or who I am as an employee. Let’s focus on who I am as an employee: Today I may have different user identities and credentials, each to a different system. I also have to manage my passwords for each of those identities. If I make a mistake and use the wrong sign-in or password, I get blocked, and I might get locked out. These things get in the way of focusing on my job. Another example is that if I change my role within a company, I need access to new resources, and there are old resources that I should no longer be able to access. It’s really a pain today to navigate getting access to resources set up correctly. It can take a month before you have all of the different permissions to access the things you need. So when I look at what I want to do for customers, it’s about “how do I make it really easy for people to get access to the things they need without compromising security?” I want to make it so that people can have one identity to use, and when there’s a change to their identity, the system automatically gives them access to what they need and removes access to what they don’t need. People shouldn’t have to go through all the painful processes of going to websites and talking to managers to get them to change group membership.

Will you be doing anything at re:Invent this year?

I’m involved in a few sessions.

I’ll be talking about our single sign-on product, AWS Single Sign-On. It enables customers to centrally manage access to the AWS Console, accounts, roles, and applications using identities from their Active Directory, or identities they create in AWS SSO. We’ll be talking about some exciting new features that we’ve released in that product area since the last re:Invent.

I’m also involved in a session about how enterprises can use Active Directory in the cloud. Customers have a lot of investment in their Windows environments on premises, and they’re migrating their workloads into the cloud. As they do that, those Windows workloads in the cloud need access to Active Directory. Customers often don’t want to manage the Active Directory infrastructure in the cloud. The operational pain of doing that detracts from what they’re trying to do, which is to get to the cloud and convert to serverless technologies where they get better economies of scale and more flexibility. AWS offers a managed Active Directory solution that customers can use with their Windows workloads while eliminating the overhead of operating Active Directory domain controllers in the cloud.

What are you hoping that your audience will do differently as a result of attending?

I would love to see customers realize they can take advantage of the services we offer in new ways, and then go home and deploy them. I would hope that they go back and do a proof of concept—go play with it and understand what it can do, see what kind of value it can bring, and then build out from there. Armed with the right information, I think customers can streamline some processes in terms of how to get on to the cloud and take advantage of the cloud faster.

What do you recommend that first-time attendees do at re:Invent?

There’s so much amazing content that you won’t be able to see it all. So get clear about what information you’re after, go through the session list, and register for the sessions; sometimes they fill up fast! If you’re coming with a team, divide and conquer. But also leave some time to learn something new in an area you’re less familiar with. Also, take advantage of the presenters. Ask us questions! We’re here to help customers learn as much as they can. If you see me there, stop me and ask your questions!

If you had to pick any other job, what would you want to do with your life?

I would probably want to be in food safety. I used to not care about food at all. Then I went to an event where I made a life decision that got me thinking about my health and my food. So I started understanding more about food. I began realizing how much happens with our food today that we just don’t know about. There are a lot of things that I really don’t align with. I would love to see more transparency about our food so that we could have the ability to pick and choose what we want to eat based upon our values. If it wasn’t food safety, maybe politics.

Want more AWS Security news? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Ron Cully

Ron Cully is a Principal Product Manager at AWS, where he leads feature and roadmap planning for workforce identity products. Ron has over 20 years of industry experience in product and program management of networking and directory-related products. He is passionate about delivering secure, reliable solutions that help make it easier for customers to migrate directory-aware applications and workloads to the cloud.

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 18 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-moderate-provisional-authorization-18-services/

It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations.

Today, we’ve achieved FedRAMP authorizations for 6 services in our AWS US East/West Regions:

We also received 14 service authorizations in our AWS GovCloud (US) Regions:

In total, we now offer 48 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 43 services authorized in our AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope page.

Our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.

We care deeply about our customers’ needs, and compliance is my team’s priority. As we expand in the federal space, we want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


Amendaze Thomas

Amendaze is the manager of AWS Security’s Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the Federal government, and over 13 years’ experience supporting CISO teams with risk management framework (RMF) activities.

Updated whitepaper available: “Navigating GDPR Compliance on AWS”

Post Syndicated from Carmela Gambardella original https://aws.amazon.com/blogs/security/updated-whitepaper-available-navigating-gdpr-compliance-on-aws/

The European Union’s General Data Protection Regulation 2016/679 (GDPR) safeguards EU citizens’ fundamental right to privacy and to the protection of personal data. In order to make local regulations coherent and homogeneous, the GDPR introduces and defines stringent new standards for compliance, security, and data protection.

The updated version of our Navigating GDPR Compliance on AWS whitepaper (.pdf) explains the role that AWS plays in your GDPR compliance process and shows how AWS can help your organization accelerate the process of aligning your compliance programs to the GDPR by using AWS cloud services.

AWS compliance, data protection, and security experts work with customers across the world to help them run workloads in the AWS Cloud, including customers who must operate within GDPR requirements. AWS teams also review what AWS is responsible for to make sure that our operations comply with the requirements of the GDPR so that customers can continue to use AWS services. The whitepaper provides guidelines to better orient you to the wide variety of AWS security offerings and to help you identify the service that best suits your GDPR compliance needs.

If you have feedback about this blog post, please submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Carmela Gambardella

Carmela graduated in Computer Science at the Federico II University of Naples, Italy. She has worked in a variety of roles at large IT companies, including as a software engineer, security consultant, and security solutions architect. Her areas of interest include data protection, security and compliance, application security, and software engineering. In April 2018, she joined the AWS Public Sector Solution Architects team in Italy.


Giuseppe Russo

Giuseppe is a Security Assurance Manager for AWS in Italy. He has a Master’s Degree in Computer Science with a specialization in cryptography, security, and coding theory. Giuseppe is a seasoned information security practitioner with many years of experience engaging key stakeholders, developing guidelines, and influencing the security market on strategic topics such as privacy and critical infrastructure protection.

AWS Security Profile: Byron Cook, Director of the AWS Automated Reasoning Group

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-byron-cook-director-aws-automated-reasoning-group/



Byron Cook leads the AWS Automated Reasoning Group, which automates proof search in mathematical logic and builds tools that provide AWS customers with provable security. Byron has pushed boundaries in this field, delivered real-world applications in the cloud, and fostered a sense of community amongst its practitioners. In recognition of Byron’s contributions to cloud security and automated reasoning, the UK’s Royal Academy of Engineering elected him as one of 7 new Fellows in computing this year.

I recently sat down with Byron to discuss his new Fellowship, the work that it celebrates, and how he and his team continue to use automated reasoning in new ways to provide higher security assurance for customers in the AWS cloud.

Congratulations, Byron! Can you tell us a little bit about the Royal Academy of Engineering, and the significance of being a Fellow?

Thank you. I feel very honored! The Royal Academy of Engineering is focused on engineering in the broad sense; for example, aeronautical, biomedical, materials, etc. I’m one of only 7 Fellows elected this year who specialize in computing or logic, which makes the announcement really unique.

As for what the Royal Academy of Engineering is: the UK has Royal Academies for key disciplines such as music, drama, etc. The Royal Academies focus financial support and recognition on these fields, and give each a location and common meeting place. The Royal Academy of Music, for example, is near Regent’s Park in West London. The Royal Academy of Engineering’s building is in Carlton Place, one of the most exclusive locations in central London near Pall Mall and St. James’ Park. I’ve been to a number of lectures and events in that space. For example, it’s where I spoke ten years ago when I was the recipient of the Roger Needham prize. Some examples of previously elected Fellows include Sir Frank Whittle, who invented the jet engine; radar pioneer Sir George MacFarlane; and Sir Tim Berners-Lee, who developed the world-wide web.

Can you tell us a little bit about why you were selected for the award?

The letter I received from the Royal Academy says it better than I could say myself:

“Byron Cook is a world-renowned leader in the field of formal verification. For over 20 years Byron has worked to bring this field from academic hypothesis to mechanised industrial reality. Byron has made major research contributions, built influential tools, led teams that operationalised formal verification activities, and helped establish connections between others that have dramatically accelerated growth of the area. Byron’s tools have been applied to a wide array of topics, e.g. biological systems, computer operating systems, programming languages, and security. Byron’s Automated Reasoning Group at Amazon is leading the field to even greater success”.

Formal verification is the one term here that may be foreign to you, so perhaps I should explain. Formal verification is the use of mathematical logic to prove properties of systems. Euclid, for example, used formal verification in ~300 BC to prove that the Pythagorean theorem holds for all possible right-angled triangles. Today we are using formal verification to prove things about all of the possible configurations that a computer program might reach. When I founded Amazon’s Automated Reasoning Group, I gave it that name because my ambition was to automate all of the reasoning performed during formal verification.

Can you give us a bit of detail about some of the “research contributions and tools” mentioned in the text from Royal Academy of Engineering?

Probably my best-known work before joining Amazon was on the Terminator tool. Terminator was designed to reason at compile-time about what a given computer program would eventually do when running in production. For example, “Will the program eventually halt?” This is the famous “Halting problem,” proved undecidable in the 1930s. The Terminator tool piloted a new approach to the problem which is popular now, based on the idea of incrementally improving the best guess for a proof based on failed proof attempts. This was the first known approach capable of scaling termination proving to industrial problems. My colleagues and I used Terminator to find bugs in device drivers that could cause operating systems to become unresponsive. We found many bugs in device drivers that ran keyboards, mice, network devices, and video cards. The Terminator tool was also the basis of BioModelAnalyzer. It turns out that there’s a connection between diseases like Leukemia and the Halting problem: Leukemia is a termination bug in the genetic-regulatory pathways in your blood. You can think of it in the same way you think of a device driver that’s stuck in an infinite loop, causing your computer to freeze. My tools helped answer fundamental questions that no tool could solve before. Several pharmaceutical companies use BioModelAnalyzer today to understand disease and find new treatment options. And these days, there is an annual international competition with many termination provers that are much better than the Terminator. I think that this is what the Royal Academy is talking about when they say I moved the area from “academic hypothesis to mechanized industrial reality.”

I have also worked on problems related to the question of P=NP, the most famous open problem in computing theory. From 2000-2006, I built tools that made NP feel equal to P in certain limited circumstances to try and understand the problem better. Then I focused on circumstances that aligned with important industrial problems, like proving the absence of bugs in microprocessors, flight control software, telecommunications systems, and railway control systems. These days the tools in this space are incredibly powerful. You should check out the software tools CVC4 or Z3.

And, of course, there’s my work with the Automated Reasoning Group, where I’ve built a team of domain experts that develop and apply formal verification tools to a wide variety of problems, helping make the cloud more secure. We have built tools that automatically reason about the semantics of policies, networks, cryptography, virtualization, etc. We reason about the implementation of Amazon Web Services (AWS) itself, and we’ve built tools that help customers prove the correctness of their AWS-based implementations.

Could you go into a bit more detail about how this work connects to Amazon and its customers?

AWS provides cloud services globally. Cloud is shorthand for on-demand access to IT resources such as compute, storage, and analytics via the Internet with pay-as-you-go pricing. AWS has a wide variety of customers, ranging from individuals to the largest enterprises, and practically all industries. My group develops mathematical proof tools that help make AWS more secure, and helps AWS customers understand how to build in the cloud more securely.

I first became an AWS customer myself when building BioModelAnalyzer. AWS allowed those of us working on this project to solve major scientific challenges (see this Nature Scientific Report for an example) using very large datacenters, but without having to buy the machines, maintain the machines, maintain the rooms that the machines would sit in, the A/C system that would keep them cool, etc. I was also able to easily provide our customers with access to the tool via the cloud: I just pointed people to the endpoint on the internet and, presto, they were using the tool. About 5 years before developing BioModelAnalyzer, I was developing proof tools for device drivers and I gave a demo of the tool to my executive leadership. At the end of the demo, I was asked if 5,000 machines would help us do more proofs. Computationally, the answer was an obvious “yes,” but then I thought a minute about the amount of overhead required to manage a fleet of 5,000 machines and reluctantly replied “No, but thank you very much for the offer!” With AWS, it’s not even a question. Anyone with an Amazon account can provision 5,000 machines for practically nothing. In less than 5 minutes, you can be up and running and computing with thousands of machines.

What I love about working at AWS is that I can focus a very small team on proving the correctness of some aspect of AWS (for example, the cryptography) and, because of the size and importance of the customer base, we make much of the world meaningfully more secure. Just to name a few examples: s2n (the Amazon TLS implementation); the AWS Key Management Service (KMS), which allows customers to securely store crypto keys; and networking extensions to the IoT operating system Amazon FreeRTOS, which customers use to link cloud to IoT devices, such as robots in factories. We also focus on delivering service features that help customers prove the correctness of their AWS-based implementations. One example is Tiros, which powers a network reachability feature in Amazon Inspector. Another example is Zelkova, which powers features in services such as Amazon S3, AWS Config, and AWS IoT Device Defender.

When I think of mathematical logic I think of obscure theory and messy blackboards, not practical application. But it sounds like you’ve managed to balance the tension between theory and practical industrial problems?

I think that this is a common theme that great scientists don’t often talk about. Alan Turing, for example, did his best work during the war. John Snow, who made fundamental contributions to our understanding of germs and epidemics, did his greatest work while trying to figure out why people were dying in the streets of London. Christopher Strachey, one of the founders of our field, wrote:

“It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles in their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.”

Throughout my career, I’ve been at the intersection of practical and theoretical. In the early days, this was driven by necessity: I had two children during my PhD and, frankly, I needed the money. But I soon realized that my deep connection to real engineering problems was an advantage and not a disadvantage, and I’ve tried through the rest of my career to stay in that hot spot of commercially applicable problems while tackling abstract mathematical topics.

What’s next for you? For the Automated Reasoning Group? For your scientific field?

The Royal Academy of Engineering kindly said that I’ve brought “this field from academic hypothesis to mechanized industrial reality.” That’s perhaps true, but we are very far from done: it’s not yet an industrial standard. The full power of automated reasoning is not yet available to everyone because today’s tools are either difficult to use or weak. The engineering challenge is to make them both powerful and easy to use. With that I believe that they’ll become a key part of every software engineer’s daily routine. What excites me is that I believe that Amazon has a lot to teach me about how to operationalize the impossible. That’s what Amazon has done over and over again. That’s why I’m at Amazon today. I want to see these proof techniques operating automatically at Amazon scale.

Links:
Provable security webpage
Lecture: Fundamentals for Provable Security at AWS
Lecture: The evolution of Provable Security at AWS
Lecture: Automating compliance verification using provable security
Lecture: Byron speaks about Terminator at University of Colorado
https://biomodelanalyzer.org/

If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

How to migrate symmetric exportable keys from AWS CloudHSM Classic to AWS CloudHSM

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/migrate-symmetric-exportable-keys-aws-cloudhsm-classic-aws-cloudhsm/

In August 2017, we announced the “new” AWS CloudHSM service, which had a lot of improvements over AWS CloudHSM Classic (for clarity, in this post I’ll refer to the services as New CloudHSM and CloudHSM Classic). These advantages in security, scalability, usability, and economy included FIPS 140-2 Level 3 certification, fully managed high availability and backup, a management console, and lower costs.

Now, we turn another page. The Luna 5 HSMs used for CloudHSM Classic are reaching end of life, and the CloudHSM Classic service is subsequently being decommissioned, so CloudHSM Classic users must migrate their cryptographic key material to New CloudHSM.

In this post, I’ll show you how to use the RSA OAEP (Optimal Asymmetric Encryption Padding) wrapping mechanism, which was introduced in the CloudHSM client version 2.0.0, to move key material from CloudHSM Classic to New CloudHSM without exposing the plain text of the key material outside the HSM boundaries. You’ll use an RSA public key to wrap the key material (export it in encrypted form) on CloudHSM Classic, then use the corresponding RSA private key to unwrap it on New CloudHSM.

NOTE: This solution only works for symmetric exportable keys. Asymmetric keys on CloudHSM Classic can’t be exported. To replace non-exportable and asymmetric keys, you must generate new keys on New CloudHSM, then use the old keys to decrypt and the new keys to re-encrypt your data.
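To make the wrap/unwrap mechanism concrete before diving into the HSM tooling, here’s a minimal sketch of RSA OAEP key wrapping in Python using the PyCA cryptography library. This is purely illustrative and runs outside any HSM; the SHA-256 OAEP parameters are an assumption for the example (the HSM utilities select their own parameters), and in the real migration the private key never leaves New CloudHSM.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the wrapping key pair generated on New CloudHSM in step 1.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Stand-in for the 256-bit AES key material to be migrated.
aes_key = os.urandom(32)

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

wrapped = public_key.encrypt(aes_key, oaep)     # the "wrap" on CloudHSM Classic
unwrapped = private_key.decrypt(wrapped, oaep)  # the "unwrap" on New CloudHSM
assert unwrapped == aes_key

Because only the public half of the wrapping key pair ever leaves New CloudHSM, possession of the wrapped.key file alone is not enough to recover the key material.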

Solution overview

My solution shows you how to use the CKDemo utility on CloudHSM Classic, and key_mgmt_util on New CloudHSM, to generate an RSA wrapping key pair, use it to wrap keys on CloudHSM Classic, and then unwrap the keys on New CloudHSM, all via the RSA OAEP mechanism.
The following diagram provides a summary of the steps involved in the solution:

Figure 1: Solution overview

  1. Generate the RSA wrapping key pair on New CloudHSM.
  2. Export the RSA Public Key to the New CloudHSM client instance.
  3. Move the RSA public key to the CloudHSM Classic client instance.
  4. Import the RSA public key to CloudHSM Classic.
  5. Wrap the key using the imported RSA public key.
  6. Move the wrapped key to the New CloudHSM client instance.
  7. Unwrap the key on New CloudHSM with the RSA Private Key.

NOTE: You can perform the same procedure using supported libraries, such as JCE (Java Cryptography Extension) and PKCS#11. For example, you can use the wrap_with_imported_rsa_key sample to import an RSA public key into CloudHSM Classic, use that key to wrap your CloudHSM Classic keys, and then use the rsa_wrapping sample (specifically the rsa_oaep_unwrap_key function) to unwrap the keys into New CloudHSM using the RSA OAEP mechanism.

Prerequisites

  1. An active New CloudHSM cluster with at least one active hardware security module (HSM). Follow the Getting Started Guide to create and initialize a New CloudHSM cluster.
  2. An Amazon Elastic Compute Cloud (Amazon EC2) instance with the New CloudHSM client installed and configured to connect to the New CloudHSM cluster. You can refer to the Getting Started Guide to configure and connect the client instance.
  3. New CloudHSM CU (crypto user) credentials.
  4. An EC2 instance with the CloudHSM Classic client installed and configured to connect to the CloudHSM Classic partition or the high-availability (HA) partition group that contains the keys you want to migrate. You can refer to this guide to install and configure a CloudHSM Classic Client.
  5. The password of the CloudHSM Classic partition or HA partition group that contains the keys you want to migrate.
  6. The handle of the symmetric key on CloudHSM Classic you want to migrate.

Step 1: Generate the RSA wrapping key pair on New CloudHSM

1.1. On the New CloudHSM client instance, run the key_mgmt_util command line tool, and log in as the CU, as described in Getting Started with key_mgmt_util.


Command:  loginHSM -u CU -s <CU user> -p <CU password>
    
	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

1.2. Run the following genRSAKeyPair command to generate an RSA key pair with the label classic_wrap. Take note of the private and public key handles, as they’ll be used in the coming steps.


Command:  genRSAKeyPair -m 2048 -e 65537 -l classic_wrap

	Cfm3GenerateKeyPair returned: 0x00 : HSM Return: SUCCESS

	Cfm3GenerateKeyPair:    public key handle: 407    private key handle: 408

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 2: Export the RSA public key to the New CloudHSM client instance

2.1. Run the following exportPubKey command to export the RSA public key to the New CloudHSM client instance using the public key handle you received in step 1.2 (407, in my example). This will export the public key to a file named wrapping_public.pem.


Command:  exportPubKey -k <public key handle> -out wrapping_public.pem

PEM formatted public key is written to wrapping_public.pem

	Cfm3ExportPubKey returned: 0x00 : HSM Return: SUCCESS

Step 3: Move the RSA public key to the CloudHSM Classic client instance

Move the RSA Public Key to the CloudHSM Classic client instance using scp (or any other tool you prefer).

Step 4: Import the RSA public key to CloudHSM Classic

4.1. On the CloudHSM Classic instance, use the cmu command as shown below to import the RSA public key with the label classic_wrap. You’ll need the partition or HA partition group password for this command, plus the slot number of the partition or HA partition group (you can get the slot number of your partition or HA partition group using the vtl listSlots command).


# cmu import -inputFile=wrapping_public.pem -label classic_wrap
Select token
 [1] Token Label: partition1
 [2] Token Label: partition2
 [3] Token Label: partition3
 Enter choice: <slot number>
Please enter password for token in slot 1 : <password>

4.2. Run the following command to get the handle of the imported key, shown in the output below (149, in my example).


# cmu list -label classic_wrap
Select token
 [1] Token Label: partition1
 [2] Token Label: partition2
 [3] Token Label: partition3
 Enter choice: <slot number>
Please enter password for token in slot 1 : <password>
handle=149	label=classic_wrap

4.3. Run the CKDemo utility.


# ckdemo

4.4. Open a session to the partition or HA partition group slot.


Enter your choice : 1

Slots available:
	slot#1 - LunaNet Slot
	slot#2 - LunaNet Slot
	...
Select a slot: <slot number>

SO[0], normal user[1], or audit user[2]? 1

Status: Doing great, no errors (CKR_OK)

4.5. Log in using the partition or HA partition group pin.


Enter your choice : 3
Security Officer[0]
Crypto-Officer  [1]
Crypto-User     [2]:
Audit-User      [3]: 1
Enter PIN          : <password>

Status: Doing great, no errors (CKR_OK)

4.6. Change the CKA_WRAP attribute of the imported RSA public key so that it can be used for wrapping. Use the imported public key handle you received in step 4.2 above (149, in my example).


Enter your choice : 25

Which object do you want to modify (-1 to list available objects) : <imported public key handle>

Edit template for set attribute operation.

(1) Add Attribute   (2) Remove Attribute   (0) Accept Template :1

 0 - CKA_CLASS                  1 - CKA_TOKEN
 2 - CKA_PRIVATE                3 - CKA_LABEL
 4 - CKA_APPLICATION            5 - CKA_VALUE
 6 - CKA_XXX                    7 - CKA_CERTIFICATE_TYPE
 8 - CKA_ISSUER                 9 - CKA_SERIAL_NUMBER
10 - CKA_KEY_TYPE              11 - CKA_SUBJECT
12 - CKA_ID                    13 - CKA_SENSITIVE
14 - CKA_ENCRYPT               15 - CKA_DECRYPT
16 - CKA_WRAP                  17 - CKA_UNWRAP
18 - CKA_SIGN                  19 - CKA_SIGN_RECOVER
20 - CKA_VERIFY                21 - CKA_VERIFY_RECOVER
22 - CKA_DERIVE                23 - CKA_START_DATE
24 - CKA_END_DATE              25 - CKA_MODULUS
26 - CKA_MODULUS_BITS          27 - CKA_PUBLIC_EXPONENT
28 - CKA_PRIVATE_EXPONENT      29 - CKA_PRIME_1
30 - CKA_PRIME_2               31 - CKA_EXPONENT_1
32 - CKA_EXPONENT_2            33 - CKA_COEFFICIENT
34 - CKA_PRIME                 35 - CKA_SUBPRIME
36 - CKA_BASE                  37 - CKA_VALUE_BITS
38 - CKA_VALUE_LEN             39 - CKA_LOCAL
40 - CKA_MODIFIABLE            41 - CKA_ECDSA_PARAMS
42 - CKA_EC_POINT              43 - CKA_EXTRACTABLE
44 - CKA_ALWAYS_SENSITIVE      45 - CKA_NEVER_EXTRACTABLE
46 - CKA_CCM_PRIVATE           47 - CKA_FINGERPRINT_SHA1
48 - CKA_OUID                  49 - CKA_X9_31_GENERATED
50 - CKA_PRIME_BITS            51 - CKA_SUBPRIME_BITS
52 - CKA_USAGE_COUNT           53 - CKA_USAGE_LIMIT
54 - CKA_EKM_UID               55 - CKA_GENERIC_1
56 - CKA_GENERIC_2             57 - CKA_GENERIC_3
58 - CKA_FINGERPRINT_SHA256
Select which one: 16
Enter boolean value: 1

CKA_WRAP=01

(1) Add Attribute   (2) Remove Attribute   (0) Accept Template :0

Status: Doing great, no errors (CKR_OK)

Step 5: Wrap the key using the imported RSA public key

5.1. Check whether the symmetric key you want to migrate is exportable. To do this, run the command below with the handle of the key you want to migrate, and confirm that the value of the CKA_EXTRACTABLE attribute in the output is equal to 1. If it isn’t, the key can’t be exported.


Enter your choice : 27

Enter handle of object to display (-1 to list available objects): <handle of the key to be migrated>
Object handle=120
CKA_CLASS=00000004
CKA_TOKEN=01
CKA_PRIVATE=01
CKA_LABEL=Generated AES Key
CKA_KEY_TYPE=0000001f
CKA_ID=
CKA_SENSITIVE=01
CKA_ENCRYPT=01
CKA_DECRYPT=01
CKA_WRAP=01
CKA_UNWRAP=01
CKA_SIGN=01
CKA_VERIFY=01
CKA_DERIVE=01
CKA_START_DATE=
CKA_END_DATE=
CKA_VALUE_LEN=00000020
CKA_LOCAL=01
CKA_MODIFIABLE=01
CKA_EXTRACTABLE=01
CKA_ALWAYS_SENSITIVE=01
CKA_NEVER_EXTRACTABLE=00
CKA_CCM_PRIVATE=00
CKA_FINGERPRINT_SHA1=f8babf341748ba5810be21acc95c6d4d9fac75aa
CKA_OUID=29010002f90900005e850700
CKA_EKM_UID=
CKA_GENERIC_1=
CKA_GENERIC_2=
CKA_GENERIC_3=
CKA_FINGERPRINT_SHA256=7a8efcbff27703e281617be3c3d484dc58df6a78f6b144207c1a54ad32a98c00

Status: Doing great, no errors (CKR_OK)

5.2. Wrap the key using the imported RSA public key. This will create a file called wrapped.key that contains the wrapped key. Make sure to use the imported public key handle you received in step 4.2 above (149, in my example) and the handle of the key you want to migrate.


Enter your choice : 60
[1]DES-ECB        [2]DES-CBC        [3]DES3-ECB       [4]DES3-CBC
                                    [7]CAST3-ECB      [8]CAST3-CBC
[9]RSA            [10]TRANSLA       [11]DES3-CBC-PAD  [12]DES3-CBC-PAD-IPSEC
[13]SEED-ECB      [14]SEED-CBC      [15]SEED-CBC-PAD  [16]DES-CBC-PAD
[17]CAST3-CBC-PAD [18]CAST5-CBC-PAD [19]AES-ECB       [20]AES-CBC
[21]AES-CBC-PAD   [22]AES-CBC-PAD-IPSEC [23]ARIA-ECB  [24]ARIA-CBC
[25]ARIA-CBC-PAD
[26]RSA_OAEP    [27]SET_OAEP
Select mechanism for wrapping: 26

Enter filename of OAEP Source Data [0 for none]: 0

Enter handle of wrapping key (-1 to list available objects) : <imported public key handle>

Enter handle of key to wrap (-1 to list available objects) : <handle of the key to be migrated>
Wrapped key was saved in file wrapped.key

Status: Doing great, no errors (CKR_OK)

Step 6: Move the wrapped key to the New CloudHSM client instance

Move the wrapped key to the New CloudHSM client instance using scp (or any other tool you prefer).

Step 7: Unwrap the key on New CloudHSM with the RSA Private Key

7.1. On the New CloudHSM client instance, run the key_mgmt_util command line tool and log in as the CU.


Command:  loginHSM -u CU -s <CU user> -p <CU password>

	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

7.2. Run the following unWrapKey command to unwrap the key using the RSA private key handle you received in step 1.2 (408, in my example). The -kc 4 and -kt 31 options request a secret key object of the AES type, matching the CKA_CLASS=00000004 and CKA_KEY_TYPE=0000001f attribute values shown in step 5.1. The output of the command should show the handle of the newly unwrapped key (410, in my example).


Command:  unWrapKey -f wrapped.key -w <private key handle> -m 8 -noheader -l unwrapped_aes -kc 4 -kt 31

	Cfm3CreateUnwrapTemplate2 returned: 0x00 : HSM Return: SUCCESS

	Cfm2UnWrapWithTemplate3 returned: 0x00 : HSM Return: SUCCESS

	Key Unwrapped.  Key Handle: 410

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Conclusion

Using RSA OAEP for key migration ensures that your key material doesn’t leave the HSM boundary in plain text, as it’s encrypted using an RSA public key before being exported from CloudHSM Classic, and it can only be decrypted by New CloudHSM through the RSA private key that is generated and kept on New CloudHSM.

My post provides an example of how to use the ckdemo and key_mgmt_util utilities for the migration, but the same procedure can also be performed using the supported software libraries, such as the Java JCE library or the PKCS#11 library, to migrate larger volumes of keys in an automated manner.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mohamed AboElKheir

Mohamed AboElKheir is an Application Security Engineer who works with different teams to ensure AWS services, applications, and websites are designed and implemented to the highest security standards. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

Tips for building a cloud security operating model in the financial services industry

Post Syndicated from Stephen Quigg original https://aws.amazon.com/blogs/security/tips-for-building-a-cloud-security-operating-model-in-the-financial-services-industry/

My team helps financial services customers understand how AWS services operate so that you can incorporate AWS into your existing processes and security operations centers (SOCs). As soon as you create your first AWS account for your organization, you’re live in the cloud. So, from day one, you should be equipped with certain information: you should understand some basics about how our products and services work, you should know how to spot when something bad could happen, and you should understand how to recover from that situation. Below is some of the advice I frequently offer to financial services customers who are just getting started.

How to think about cloud security

Security is security – the principles don’t change. Many of the on-premises security processes that you have now can extend directly to an AWS deployment. For example, your processes for vulnerability management, security monitoring, and security logging can all be transitioned over.

That said, AWS is more than just infrastructure. I sometimes talk to customers who are only thinking about the security of their AWS Virtual Private Clouds (VPCs), and about the Amazon Elastic Compute Cloud (EC2) instances running in those VPCs. And that’s good; it’s traditional network security, and it remains quite standard. But I also ask my customers questions that focus on other services they may be using. For example:

  • How are you thinking about who has Database Administrator (DBA) rights for Amazon Aurora Serverless? Aurora Serverless is a managed database service that lets AWS do the heavy lifting for many DBA tasks.
  • Do you understand how to configure (and monitor the configuration of) your Amazon Athena service? Athena lets you query large amounts of information that you’ve stored in Amazon Simple Storage Service (S3).
  • How will you secure and monitor your AWS Lambda deployments? Lambda is a serverless platform that has no infrastructure for you to manage.

Understanding AWS security services

As a customer, it’s important to understand the information that’s available to you about the state of your cloud infrastructure. Typically, AWS delivers much of that information via the Amazon CloudWatch service. So, I encourage my customers to get comfortable with CloudWatch, alongside our AWS security services. The key services that any security team needs to understand include:

  • Amazon GuardDuty, which is a threat detection system for the cloud.
  • AWS CloudTrail, which logs calls to AWS APIs.
  • VPC Flow Logs, which enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
  • AWS Config, which records all the configuration changes that your teams have made to AWS resources, allowing you to assess those changes.
  • AWS Security Hub, which offers a “single pane of glass” that helps you assess AWS resources and collect information from across your security services. It gives you a unified view of resources per Region, so that you can more easily manage your security and compliance workflow.

These tools make it much quicker for you to get up to speed on your cloud security status and establish a position of safety.
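As a small illustration of how accessible this information is programmatically, the hypothetical Boto3 sketch below polls GuardDuty findings in the current Region. The API calls are standard Boto3, but the printed fields and loop structure are just one way to slice the data.

import boto3

guardduty = boto3.client("guardduty")

# Each Region has at most one GuardDuty detector per account.
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        print(f"Detector {detector_id}: no findings")
        continue
    # get_findings accepts up to 50 IDs per call; enough for a quick look.
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids[:50]
    )
    for finding in findings["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])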

Getting started with automation in the cloud

You don’t have to be a software developer to use AWS. You don’t have to write any code; the basics are straightforward. But to optimize your use of AWS and to get faster at automating, coding skills are a real advantage. Automation is the core of the operating model. We have a number of tutorials that can help you get up to speed.

Self-service cloud security resources for financial services customers

There are people like me who can come and talk to you. But to keep you from having to wait for us, we also offer a lot of self-service cloud security resources on our website.

We offer a free digital training course on AWS security fundamentals, plus webinars on financial services topics. We also offer an AWS security certification, which lets you show that your security knowledge has been validated by a third party.

There are also a number of really good videos you can watch. For example, we had our inaugural security conference, re:Inforce, in Boston this past June. The videos and slides from the conference are now on YouTube, so you can sit and watch at your own pace. If you’re not sure where to start, try this list of popular sessions.

Finding additional help

You can work with a number of technology partners to help extend your security tools and processes to the cloud.

  • Our AWS Professional Services team can come and help you on site. In addition, we can simulate security incidents with you to help you get comfortable with security and cloud technology and how to respond to incidents.
  • AWS security consulting partners can also help you develop processes or write the code that you might need.
  • The AWS Marketplace is a wonderful self-service location where you can get all sorts of great security solutions, including finding a consulting partner.

And if you’re interested in speaking directly to AWS, you can always get in touch. There are forms on our website, or you can reach out to your AWS account manager and they can help you find the resources that are necessary for your business.

Conclusion

Financial services customers face some tough security challenges. You handle large amounts of data, and it’s really important that this data is stored securely and that its privacy is respected. We know that our customers do lots of due diligence of AWS before adopting our services, and they have many different regulatory environments within which they have to work. In turn, we want to help customers understand how they can build a cloud security operating model that meets their needs while using our services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Stephen Quigg

Stephen Quigg is a Principal Securities Solutions Architect within AWS Financial Services. Quigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

How to use AWS Secrets Manager to securely store and rotate SSH key pairs

Post Syndicated from Maitreya Ranganath original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-securely-store-rotate-ssh-key-pairs/

AWS Secrets Manager provides full lifecycle management for secrets within your environment. In this post, Maitreya and I will show you how to use Secrets Manager to store, deliver, and rotate SSH keypairs used for communication within compute clusters. Rotation of these keypairs is a security best practice, and sometimes a regulatory requirement. Traditionally, these keypairs have been associated with a number of tough challenges, such as synchronizing key rotation across all compute nodes, enabling detailed logging and auditing, and managing which users are allowed to modify secrets.

Rotating the keypair across all of a cluster’s nodes must be done in a tightly coordinated fashion, and failures generally result in availability risks. Moreover, the keypairs themselves are highly sensitive security credentials which must be carefully controlled with fine-grained access controls, detailed monitoring, and audit logging. These are precisely the types of tough challenges that AWS Secrets Manager solves for you.

In this post, we’ll show you how to secure, rotate, and use SSH keypairs for inter-cluster communication. You’ll use an AWS CloudFormation template to launch a cluster and configure Secrets Manager. Then we’ll show you how to use Secrets Manager to deliver the keypair to the cluster and use it for management operations, such as securely copying a file between nodes. Finally, we’ll use Secrets Manager to seamlessly rotate the keypair used by the cluster without any changes or outages. We’ve highlighted compute clusters here, but you can apply this solution directly to any SSH-based use case.

Solution overview

The following architecture diagram presents an overview of the solution:
 

Figure 1: Solution architecture

The sample architecture created by CloudFormation includes one master node, three worker nodes, AWS Secrets Manager (which uses a rotation AWS Lambda function), and AWS Systems Manager. Setting up the cluster is out of scope for this post; in our walkthrough, we’ll focus on the keypair rotation architecture.

Secrets Manager uses staging labels to identify different versions of a secret during rotation. A staging label is a text string. For example, by default, AWSCURRENT is attached to the current version of the secret, while AWSPENDING will be attached to new versions of the secret before they have been verified and deployed to corresponding resources.
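For example, once a secret exists, a client can pin its read to a specific staging label. Here’s a minimal Boto3 sketch, assuming the /dev/ssh secret name used later in this post:

import boto3

secretsmanager = boto3.client("secretsmanager")

# AWSCURRENT is the default stage; it's spelled out here for clarity.
# Swap in AWSPENDING to inspect a rotation that's still in flight.
response = secretsmanager.get_secret_value(
    SecretId="/dev/ssh",
    VersionStage="AWSCURRENT",
)
secret_value = response["SecretString"]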

As shown in the diagram:

  1. A secret is created in AWS Secrets Manager. The secret holds the SSH keypair that the master node will use to connect to the other nodes in the cluster. Upon keypair rotation, Secrets Manager will invoke a Lambda function (labeled 1.a in the diagram). The Lambda function will perform four steps:
    • 1.b: createSecret – create a new SSH keypair and store the private key as a new version of the secret.
    • 1.c: setSecret – label the newly created secret version with the label AWSPENDING and copy the public key to the worker nodes with AWS Systems Manager Run Command.

    The Lambda function will also perform two steps not shown in the diagram:

    • testSecret – verify that the new SSH keypair has been successfully deployed by invoking a test SSH connection.
    • finishSecret – set the staging label AWSCURRENT to the new secret version and remove the old keys from the worker nodes. This will also set the staging label AWSPREVIOUS to the old secret, allowing your administrator to have the ‘last known password’ if something goes wrong.

    An overview of the rotation Lambda function is available in the AWS Secrets Manager user guide. You have full control over the rotation function so that you can customize it to your needs. Note that no key is installed on the master node. Instead, the function will retrieve the private key from Secrets Manager only when it needs to securely communicate with the worker nodes. That private key is not saved on the master node’s filesystem but rather in volatile memory (per best practice, the private key variable is overwritten after successful authentication and deleted before the script exits); details about keeping secret data in volatile memory will follow later in this post. A skeletal sketch of the rotation handler structure appears after this list.

  2. When the master node needs to communicate with any worker node, it will use an AWS SDK (Python Boto3) to read the SSH private key from Secrets Manager (2.a) and use the private key to establish an SSH tunnel with the worker nodes (2.b). The master node is authorized to read the private key from Secrets Manager because an AWS Identity and Access Management (IAM) role with a policy that allows it to access the secret is attached to the master node. The corresponding public key was deployed to each of the worker nodes during the rotation process in step one above. (A minimal sketch of this in-memory key retrieval follows this list.)
  3. The secrets in Secrets Manager are encrypted with AWS Key Management Service (AWS KMS), and every version of the secret is encrypted with a unique data encryption key. The SSH key pair in the cluster will periodically rotate based on a configurable rotation interval, which you’ll configure from the Secrets Manager console later in this post. Each rotation repeats the process described in steps 1-2, resulting in a new version of the secret. Each new version will be encrypted using a new KMS data key, which provides an extra layer of security.
  4. The AWS Systems Manager Run Command will use the Amazon Elastic Compute Cloud (EC2) tag RotateSSHKeys with a value of True to identify the cluster’s worker node instances. Note that if you rely on tags as a security control, you must have clear governance and control over which users are able to change the tags and tag values on your EC2 instances.
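The rotation steps above follow the standard contract that Secrets Manager imposes on rotation functions: the service invokes the Lambda function once per step, passing the step name in the event. Here’s a skeletal sketch of that handler structure; the four step implementations are hypothetical stubs (the function deployed by the CloudFormation template implements them in full).

import boto3

# Hypothetical stubs; the deployed rotation function implements these steps.
def create_secret(client, arn, token): ...   # generate keypair, store private key as AWSPENDING
def set_secret(client, arn, token): ...      # push public key to workers via SSM Run Command
def test_secret(client, arn, token): ...     # open a test SSH connection with the new key
def finish_secret(client, arn, token): ...   # move AWSCURRENT to the new version

def lambda_handler(event, context):
    """Entry point that Secrets Manager invokes once per rotation step."""
    arn = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]
    client = boto3.client("secretsmanager")

    steps = {
        "createSecret": create_secret,
        "setSecret": set_secret,
        "testSecret": test_secret,
        "finishSecret": finish_secret,
    }
    if step not in steps:
        raise ValueError(f"Unknown rotation step: {step}")
    steps[step](client, arn, token)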

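To illustrate step 2 and the volatile-memory practice described above, here’s a minimal sketch of reading the private key straight from Secrets Manager into memory and opening an SSH connection with it, without ever touching the filesystem. The worker IP, user name, and use of the paramiko library are assumptions for the example, and it assumes the secret value is the PEM-encoded private key itself (the walkthrough’s rotation function stores a JSON document, so you may need to parse out the key field first).

import io
import boto3
import paramiko  # third-party SSH library, assumed installed on the master node

def run_on_worker(worker_ip, command="hostname"):
    # The private key exists only in this process's memory, never on disk.
    secret = boto3.client("secretsmanager").get_secret_value(SecretId="/dev/ssh")
    pkey = paramiko.RSAKey.from_private_key(io.StringIO(secret["SecretString"]))

    client = paramiko.SSHClient()
    # Demo only: a production cluster should verify worker host keys.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=worker_ip, username="ec2-user", pkey=pkey)

    _, stdout, _ = client.exec_command(command)
    output = stdout.read().decode()
    client.close()
    del pkey  # drop the in-memory key reference, per the best practice above
    return output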
Solution cost

Today, this solution will cost $0.48 an hour for the four t2.micro EC2 instances that comprise the sample cluster. Secrets Manager has a 30-day trial period, after which one secret will cost $0.40 per month and $0.05 per 10,000 API calls. There is no additional charge for AWS Systems Manager.

Deploying the sample solution

In this section, you’ll deploy a test stack that demonstrates the entire solution. After deployment, you’ll log in to the master node and securely copy a file to one of the worker nodes. Finally, you’ll use Secrets Manager to rotate and deploy a new SSH keypair. The CloudFormation templates and secret rotation code are available in the AWS GitHub repository.

Set up the sample deployment by selecting the AWS CloudFormation Launch Stack button below; by default, the stack is deployed in the us-east-1 (N. Virginia) Region.
 
Select this image to open a link that starts building the CloudFormation stack

The template creates an Amazon Virtual Private Cloud (Amazon VPC), private and public subnets, EC2 instances (master node and mock cluster), and the IAM role and policies used for the EC2 instances.

  1. Select your EC2 SSH key pair and enter your IP range as stack parameters. In the YourIPRange field, enter only the CIDR of your machine or network; this ensures that only hosts from your network can access the master server. You may leave all other parameters at their defaults. This CloudFormation template launches four t2.micro instances in a new VPC. One instance is tagged MasterServer and the rest are tagged WorkerServer1-3.

    Note: The SSH keypair referenced here will be used to connect from your local computer to the master node. It is distinct from the SSH keypair used by the master node to connect to the worker nodes.

     

    Figure 2: Enter the CIDR of your machine or network

    Important: For simplicity, the master node you’ll create in this walkthrough is in a public subnet, making it accessible from the CIDR you provided in the previous step. However, this is not the most secure approach possible. Follow the guidance in the Amazon EC2 VPC documentation to securely configure your cluster in a private subnet, following the “defense in depth” principle.

  2. Monitor the status of the stack. When the status is CREATE_COMPLETE, the deployment is ready. Select the Outputs tab to find information about the newly created resources, and write down the master node’s public DNS and a worker node IP address. You’ll need both later in this post.
  3. Select the Launch Stack button to launch the AWS CloudFormation template that deploys the Lambda function used by Secrets Manager. Accept the default values for the parameters. This template is designed for reusability; it can be applied to any SSH rotation use case.
     
    Select this image to open a link that starts building the CloudFormation stack

Next, create and configure a new secret from the Secrets Manager console to store the cluster communication SSH keypair.

Configuring a secret in AWS Secrets Manager

The CloudFormation template did not deploy a secret, so follow these steps to create one from the console and configure its rotation function (a scripted equivalent using the AWS SDK for Python follows these steps). To create a new secret:

  1. Open the AWS Secrets Manager console and select Store New Secret.
  2. Select Other type of secrets, then select the Plaintext tab.
  3. As shown in Figure 3, enter {} to create an empty JSON value with no properties. This value will be initially populated with a keypair by the rotation Lambda function.
     
    Figure 3: Create an empty JSON value with no properties

  4. Keep the default encryption key and select Next. We’re keeping the default encryption key for the sake of simplicity in this example, but security best practices suggest using a Customer Master Key (CMK) that you’ve created.
  5. In Step 2: Name and description, name the secret /dev/ssh. The path of a secret can be used in the secret’s IAM policy to restrict users and roles to a secret or hierarchy of secrets. For example, the IAM policy could include /dev/* or /prod/* to control access to secrets in development or production, respectively.
  6. Add a description, then select Next.
     
    Figure 4: Add a description

  7. In Step 3: Configure rotation, choose Enable automatic rotation and select a rotation interval of your choice from the rotation interval dropdown list.
  8. Select the Choose an AWS Lambda function drop-down and choose RotateSSH. This is the Lambda function that was deployed by the CloudFormation template.
  9. Select Next, then review your configuration and select Store. When the new secret’s configuration is stored, the rotation Lambda function is invoked immediately, populating the value of the secret.
     
    Figure 5: Configure the rotation
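
If you prefer to script this configuration instead of using the console, here’s a minimal boto3 sketch of the equivalent API calls. The Lambda ARN and the 30-day interval are placeholders; use the ARN of the RotateSSH function deployed in your account and the interval you chose.

import boto3

secretsmanager = boto3.client('secretsmanager')

# Create the empty secret; the rotation function populates the key pair
secretsmanager.create_secret(Name='/dev/ssh', SecretString='{}')

# Enable rotation with the RotateSSH Lambda function (ARN is a placeholder)
secretsmanager.rotate_secret(
    SecretId='/dev/ssh',
    RotationLambdaARN='arn:aws:lambda:us-east-1:111122223333:function:RotateSSH',
    RotationRules={'AutomaticallyAfterDays': 30})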

Testing the sample solution

With the secret configuration completed and the instances up and running, you’re now going to securely copy a file from the master node to one of the worker nodes, using the SSH key stored in Secrets Manager to test the solution.

  1. Log in to the master node via SSH, using the EC2 key that you specified in the CloudFormation template.
  2. Once connected, securely copy a file from the master node to the worker node with SCP (secure copy protocol) by entering the command below. Replace <private-ip-of-worker> with the worker node IP address you noted earlier:
    
                python copy_file.py ec2-user <private-ip-of-worker>
            

Figure 6 shows the SSH login to the master node and the copy_file.py command run against a worker node.

Figure 6: The ssh login to master node, and the copy_file.py command

During execution, the Python script uses the Secrets Manager get_secret_value API to retrieve the secret, which includes the private key. It then uses this key to establish a secure SSH connection with the worker nodes, without saving the private key to the master node’s storage.

You can review copy_file.py on the master node or on GitHub. The get_private_key() function reads the secret value, which includes the private key:


    # 'client' is the Secrets Manager client created earlier in the script,
    # e.g., client = boto3.client('secretsmanager')
    get_secret_value_response = client.get_secret_value(
        SecretId=secret_name)

In the copy_file() function, the script creates a secure SSH connection using the private key held in memory, via Paramiko, a Python implementation of SSHv2:


    # Write the private key to an in-memory file-like object
    private_key_str = io.StringIO()
    private_key_str.write(private_key)
    private_key_str.seek(0)  # rewind so Paramiko reads from the start

    # Create the key object from the in-memory file
    key = paramiko.RSAKey.from_private_key(private_key_str)

    # Open a transport channel and authenticate with the key
    trans = paramiko.Transport((ip, 22))
    trans.start_client()
    trans.auth_publickey(user, key)
    del key

To demonstrate the rotation of the SSH keypair, you’ll now manually invoke the rotation function:

  1. Return to the Secrets Manager console, select your /dev/ssh secret, and choose Retrieve Secret Value to see the key pair.
  2. Select Rotate secret immediately. In the pop-up window, confirm your choice by selecting Rotate.
     
    Figure 7: Set the “Secret value” and “Rotation configuration”

  3. Choose Rotate again to complete the rotation.
     
    Figure 8: Select “Rotate”

  4. Select the Close button to refresh the view, and then choose Retrieve Secret Value again.
  5. Once the rotation has completed, you can inspect the new key pair via the Secrets Manager console. Go back to the terminal and run the same Python script to copy a file using SCP. Replace <private-ip-of-worker> with your worker node IP address:
    
                    python copy_file.py ec2-user <private-ip-of-worker>
            

The file has now been transferred successfully using the new key pair, with no manual key deployment required.

Auditing and monitoring

You can monitor and audit all APIs used to create and rotate your keys in Secrets Manager via AWS CloudTrail. To view CloudTrail events, follow these steps:

  1. Open the CloudTrail console and select Event history.
  2. From the Filter dropdown field, select Event source, enter secret in the filter field, then select secretsmanager.amazonaws.com from the dropdown menu.
  3. From here, you can review Secrets Manager events, such as GetSecretValue, PutSecretValue, UpdateSecretVersionStage (which modifies the staging labels attached to a version of a secret), and RotationSucceeded, in the CloudTrail event history. These event logs help you audit secrets configuration, rotation, and access. A scripted lookup sketch follows Figure 9.
     
    Figure 9: The “Event history” window
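
If you prefer to query these events programmatically, here’s a small boto3 sketch using CloudTrail’s LookupEvents API; the EventSource attribute filters to Secrets Manager events.

import boto3

cloudtrail = boto3.client('cloudtrail')

# Retrieve recent Secrets Manager events, newest first
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        'AttributeKey': 'EventSource',
        'AttributeValue': 'secretsmanager.amazonaws.com'
    }],
    MaxResults=50)

for event in response['Events']:
    print(event['EventTime'], event['EventName'])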

Additionally, Secrets Manager can work with CloudWatch Events to trigger alerts when administrator-specified operations occur in an organization (for example, to notify you of a secret deletion attempt), as sketched below.
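
As a sketch of that pattern, the following boto3 snippet creates a CloudWatch Events rule that matches DeleteSecret calls recorded by CloudTrail and sends them to an SNS topic. The rule name and topic ARN are placeholders.

import json
import boto3

events = boto3.client('events')

# Match secret deletion API calls recorded by CloudTrail
events.put_rule(
    Name='alert-on-secret-deletion',
    EventPattern=json.dumps({
        'source': ['aws.secretsmanager'],
        'detail-type': ['AWS API Call via CloudTrail'],
        'detail': {
            'eventSource': ['secretsmanager.amazonaws.com'],
            'eventName': ['DeleteSecret']
        }
    }),
    State='ENABLED')

# Notify an existing SNS topic when the rule matches (ARN is a placeholder)
events.put_targets(
    Rule='alert-on-secret-deletion',
    Targets=[{'Id': 'NotifySecurityTeam',
              'Arn': 'arn:aws:sns:us-east-1:111122223333:security-alerts'}])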

Cleaning up the CloudFormation Stack

To delete the entire CloudFormation stack:

  1. Select the stack named RotateSSH from the CloudFormation console.
  2. Select Actions, and then Delete Stack. This will delete all AWS resources created by the stack.
  3. Repeat the steps above to delete the stack named MasterWorkers.
  4. From the AWS Secrets Manager console, delete the secret /dev/ssh. Read more about Deleting and Restoring a Secret in the AWS Secrets Manager User Guide.

Conclusion

In this post, we demonstrated how you can use AWS Secrets Manager to store, rotate, and deliver SSH key pairs to secure communication within a compute cluster. Keys are securely encrypted and stored in AWS Secrets Manager, which also rotates the keys and installs public keys on all nodes for you. With this method, you won’t have to manually deploy SSH keys on the various EC2 instances or manually rotate them. APIs associated with secrets management and rotation are logged in CloudTrail for auditing and monitoring. This key rotation solution is serverless: it does not require any servers to maintain and can scale rapidly.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Author

Assaf Namer

Assaf is a Senior Solutions Architect. He likes coding and hackathons, and enjoys helping customers build reliable and secure cloud solutions. Outside of work, Assaf enjoys spinning and tennis.

Author

Maitreya Ranganath

Maitreya is a Solutions Architect with the Enterprise team. He has a focus on Security and Compliance and enjoys helping customers architect secure, scalable, and cost-effective solutions on AWS.

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

Learn about AWS Services & Solutions – September AWS Online Tech Talks

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

 

Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.

 

Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.

 

Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.

 

Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.

 

DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.

 

Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

 

IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.

 

Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.

 

AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.

 

Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.

 

Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.

 

Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.

 

Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your Windows workloads.

 

Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.

 

Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.

 

 

AWS and the European Banking Authority Guidelines on Outsourcing

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-european-banking-authority-guidelines-on-outsourcing/

Financial institutions across the globe use AWS to transform the way they do business. It’s exciting to watch our customers in the financial services industry innovate on AWS in unique ways, across all geos and use cases. Regulations continue to evolve in this space, and we’re working hard to help customers proactively respond to new rules and guidelines. In many cases, the AWS Cloud makes it easier than ever before for customers to comply with different regulations and frameworks around the world.

The European Banking Authority (EBA), an EU financial supervisory authority, recently provided EU financial institutions (which includes credit institutions, certain investment firms, and payment institutions) with new outsourcing guidelines (PDF), which also apply to the use of cloud services. We’re ready and able to support our customers’ compliance with their obligations under the EBA Guidelines and to help meet and exceed their regulators’ expectations. We offer our customers a wide range of services that can simplify and directly assist in complying with the new guidelines, which take effect on September 30, 2019.

What do the EBA Guidelines mean for AWS customers?

The EBA Guidelines establish technology-neutral outsourcing requirements for EU financial institutions, and there is a particular focus on the outsourcing of “critical or important functions.” For AWS and our customers, the key takeaway is that the EBA Guidelines allow for EU financial institutions to use cloud services for material, regulated workloads. When considering or using third-party services, many EU financial institutions already follow due diligence, risk management, and regulatory notification processes that are similar to those processes laid out in the EBA Guidelines. To meet and exceed the EBA Guidelines’ requirements on security, resiliency, and assurance, EU financial institutions can use a variety of AWS security and compliance services.

Risk-based approach

The EBA Guidelines incorporate a risk-based approach that expects regulated entities to identify, assess, and mitigate the risks associated with any outsourcing arrangement. The risk-based approach outlined in the EBA Guidelines is consistent with the long-standing AWS shared responsibility model. This approach applies throughout the EBA Guidelines, including the areas of risk assessment, contractual and audit requirements, data location and transfer, and security implementation.

  • Risk assessment: The EBA Guidelines emphasize the need for EU financial institutions to assess the potential impact of outsourcing arrangements on their operational risk. The AWS shared responsibility model helps customers formulate their risk assessment approach because it illustrates how their security and management responsibilities change depending on the AWS services they use. For example, AWS operates some controls on behalf of customers, such as data center security, while customers operate other controls, such as event logging. In practice, AWS services help customers assess and improve their risk profile relative to traditional, on-premises environments.
  • Contractual and audit requirements: The EBA Guidelines lay out requirements for the written agreement between an EU financial institution and its service provider, including access and audit rights. For EU financial institutions running regulated workloads on AWS services, we offer the EBA Financial Services Addendum to address the EBA Guidelines’ contractual requirements. We also provide these institutions the ability to comply with the audit requirements in the EBA Guidelines through the AWS Security & Audit Series, including participation in an Audit Symposium, to facilitate customer audits. To align with regulatory requirements and expectations, our EBA addendum and audit program incorporate feedback that we’ve received from a variety of financial supervisory authorities across EU member states. EU financial services customers interested in learning more about the addendum or about the audit engagements offered by AWS can reach out to their AWS account teams.
  • Data location and transfer: The EBA Guidelines do not put restrictions on where an EU financial institution can store and process its data, but rather state that EU financial institutions should “adopt a risk-based approach to data storage and data processing location(s) (i.e. country or region) and information security considerations.” Our customers can choose which AWS Regions they store their content in, and we will not move or replicate customer content outside of the chosen Regions unless instructed to do so. Customers can replicate and back up their content in more than one AWS Region to meet a variety of objectives, such as availability goals and geographic requirements.
  • Security implementation: The EBA Guidelines require EU financial institutions to consider, implement, and monitor various security measures. Using AWS services, customers can meet this requirement in a scalable and cost-effective way while improving their security posture. Customers can use AWS Config or AWS Security Hub to simplify auditing, security analysis, change management, and operational troubleshooting. As part of their cybersecurity measures, customers can activate Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring, to generate detailed and actionable security alerts. Amazon Inspector automatically assesses a customer’s AWS resources for vulnerabilities or deviations from best practices and then produces a detailed list of security findings prioritized by level of severity. Customers can also enhance their security by using AWS Key Management Service (creation and control of encryption keys), AWS Shield (DDoS protection), and AWS WAF (filtering of malicious web traffic). These are just a few of the 500+ services and features we offer that enable strong availability, security, and compliance for our customers.

As reflected in the EBA Guidelines, it’s important to take a balanced approach when evaluating responsibilities in a cloud implementation. We are responsible for the security of the AWS Global Infrastructure. In the EU, we currently operate AWS Regions in Ireland, Frankfurt, London, Paris, and Stockholm, with our new Milan Region opening soon. For all of our data centers, we assess and manage environmental risks, employ extensive physical and personnel security controls, and guard against outages through our resiliency and testing procedures. In addition, independent, third-party auditors test more than 2,600 standards and requirements in the AWS environment throughout the year.

Conclusion

We encourage customers to learn about how the EBA Guidelines apply to their organization. Our teams of security, compliance, and legal experts continue to work with our EU financial services customers, both large and small, to support their journey to the AWS Cloud. AWS is closely following how regulatory authorities apply the EBA Guidelines locally and will provide further updates as needed. If you have any questions about compliance with the EBA Guidelines and their application to your use of AWS, or if you require the EBA Financial Services Addendum, please reach out to your account representative or request to be contacted.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, compliance with complex privacy regulations such as GDPR and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Master’s in Information Systems Management and a Bachelor’s in Accounting from Brigham Young University, Utah. Follow Chad on Twitter.

How to add DNS filtering to your NAT instance with Squid

Post Syndicated from Nicolas Malaval original https://aws.amazon.com/blogs/security/how-to-add-dns-filtering-to-your-nat-instance-with-squid/

Note from September 4, 2019: We’ve updated this blog post, initially published on January 26, 2016. Major changes include: support of Amazon Linux 2, no longer having to compile Squid 3.5, and a high availability version of the solution across two availability zones.

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources on a virtual private network that you’ve defined. On an Amazon VPC, many people use network address translation (NAT) instances and NAT gateways to enable instances in a private subnet to initiate outbound traffic to the Internet, while preventing the instances from receiving inbound traffic initiated by someone on the Internet.

For security and compliance purposes, you might have to filter the requests initiated by these instances (also known as “egress filtering”). Using iptables rules, you could restrict outbound traffic with your NAT instance based on a predefined destination port or IP address. However, you might need to enforce more complex security policies, such as allowing requests to AWS endpoints only, or blocking fraudulent websites, which you can’t easily achieve by using iptables rules.

In this post, I discuss and give an example of how to use Squid, a leading open-source proxy, to implement a “transparent proxy” that can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet.

The solution architecture

In this section, I present the architecture of the high availability NAT solution and explain how to configure Squid to filter traffic transparently. Later in this post, I’ll provide instructions about how to implement and test the solution.

The following diagram illustrates how the components in this process interact with each other. Squid Instance 1 intercepts HTTP/S requests sent by instances in Private Subnet 1, including the Testing Instance. Squid Instance 1 then initiates a connection with the destination host on behalf of the Testing Instance, which goes through the Internet gateway. This solution spans two Availability Zones, with Squid Instance 2 intercepting requests sent from the other Availability Zone. Note that you may adapt the solution to span additional Availability Zones.
 

Figure 1: The solution spans two Availability Zones

Intercepting and filtering traffic

In each Availability Zone, the route table associated with the private subnet sends the outbound traffic to the Squid instance (see Route Tables for a NAT Device). Squid intercepts the requested domain, then applies the following filtering policy:

  • For HTTP requests, Squid retrieves the host header field included in all HTTP/1.1 request messages. This specifies the Internet host being requested.
  • For HTTPS requests, the HTTP traffic is encapsulated in a TLS connection between the instance in the private subnet and the remote host. Squid cannot retrieve the host header field because the header is encrypted. A feature called SslBump would allow Squid to decrypt the traffic, but this would not be transparent for the client because the certificate would be considered invalid in most cases. The feature I use instead, called SslPeekAndSplice, retrieves the Server Name Indication (SNI) from the TLS initiation. The SNI contains the requested Internet host. As a result, Squid can make filtering decisions without decrypting the HTTPS traffic.

Note 1: Some older client-side software stacks do not support SNI. Here are the minimum versions of some important stacks and programming languages that support SNI: Python 2.7.9 and 3.2, Java 7 JSSE, wget 1.14, OpenSSL 0.9.8j, cURL 7.18.1

Note 2: TLS 1.3 introduced an optional extension that allows the client to encrypt the SNI, which may prevent Squid from intercepting the requested domain.

The SslPeekAndSplice feature was introduced in Squid 3.5 and is implemented in the same Squid module as SslBump. To enable this module, Squid requires that you provide a certificate, though it will not be used to decode HTTPS traffic. The solution creates a certificate using OpenSSL.


mkdir /etc/squid/ssl
cd /etc/squid/ssl
# Generate a 4096-bit RSA private key
openssl genrsa -out squid.key 4096
# Create a certificate signing request with a placeholder subject
openssl req -new -key squid.key -out squid.csr -subj "/C=XX/ST=XX/L=squid/O=squid/CN=squid"
# Self-sign the certificate, valid for 10 years
openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt
# Concatenate the key and certificate into the file Squid expects
cat squid.key squid.crt >> squid.pem

The following code shows the Squid configuration file. For HTTPS traffic, note the ssl_bump directives instructing Squid to “peek” (retrieve the SNI) and then “splice” (become a TCP tunnel without decoding) or “terminate” the connection depending on the requested host.


visible_hostname squid
cache deny all

# Log format and rotation
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::>sni %Sh/%<a %mt
logfile_rotate 10
debug_options rotate=10

# Handling HTTP requests
http_port 3128
http_port 3129 intercept
acl allowed_http_sites dstdomain "/etc/squid/whitelist.txt"
http_access allow allowed_http_sites

# Handling HTTPS requests
https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
acl SSL_port port 443
http_access allow SSL_port
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist.txt"
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
ssl_bump peek step2 allowed_https_sites
ssl_bump splice step3 allowed_https_sites
ssl_bump terminate step2 all
http_access deny all       

The text file located at /etc/squid/whitelist.txt contains the list of whitelisted domains, with one domain per line. In this blog post, I’ll show you how to configure Squid to allow requests to *.amazonaws.com, which corresponds to AWS endpoints. Note that you can restrict access to a specific set of AWS services that you’ve defined (see Regions and Endpoints for a detailed list of endpoints), or you can set your own list of domains.
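
For example, a minimal /etc/squid/whitelist.txt that allows AWS endpoints could contain the single entry below; the leading dot tells Squid’s dstdomain and ssl::server_name ACLs to match the domain and all of its subdomains.

.amazonaws.com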

Note: An alternate approach is to use VPC endpoints to privately connect your VPC to supported AWS services without requiring access over the Internet (see VPC Endpoints). Some supported AWS services allow you to create a policy that controls the use of the endpoint to access AWS resources (see VPC Endpoint Policies, and VPC Endpoints for a list of supported services).

You may have noticed that Squid listens on port 3129 for HTTP traffic and 3130 for HTTPS. Because Squid cannot directly listen to 80 and 443, you have to redirect the incoming requests from instances in the private subnets to the Squid ports using iptables. You do not have to enable IP forwarding or add any FORWARD rule, as you would do with a standard NAT instance.


sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130       

The solution stores the files squid.conf and whitelist.txt in an Amazon Simple Storage Service (Amazon S3) bucket and runs the following script every minute on the Squid instances to download and update the Squid configuration from S3. This makes it easy to maintain the Squid configuration from a central location. Note that it first validates the files with squid -k parse and then reloads the configuration with squid -k reconfigure if no error was found.


        # Back up the current configuration in case the new one is invalid
        cp /etc/squid/* /etc/squid/old/
        # Download the latest configuration files from the S3 bucket
        aws s3 sync s3://<s3-bucket> /etc/squid
        # Validate and reload; restore the backup and fail if parsing errors occur
        squid -k parse && squid -k reconfigure || (cp /etc/squid/old/* /etc/squid/; exit 1)

The solution then uses the CloudWatch Agent on the Squid instances to collect and store Squid logs in Amazon CloudWatch Logs. The log group /filtering-nat-instance/cache.log contains the error and debug messages that Squid generates and /filtering-nat-instance/access.log contains the access logs.

An access log record is a space-delimited string that has the following format:

<time> <response_time> <client_ip> <status_code> <size> <method> <url> <sni> <remote_host> <mime>

The following list describes the fields of an access log record.

  • time – Request time in seconds since epoch
  • response_time – Response time in milliseconds
  • client_ip – Client source IP address
  • status_code – Squid request status and HTTP response code sent to the client. For example, an HTTP request to a disallowed domain logs TCP_DENIED/403, and an HTTPS request to a whitelisted domain logs TCP_TUNNEL/200
  • size – Total size of the response sent to the client
  • method – Request method, like GET or POST
  • url – Request URL received from the client. Logged for HTTP requests only
  • sni – Domain name intercepted in the SNI. Logged for HTTPS requests only
  • remote_host – Squid hierarchy status and remote host IP address
  • mime – MIME content type. Logged for HTTP requests only

The following are some examples of access log records:


1563718817.184 14 10.0.0.28 TCP_DENIED/403 3822 GET http://example.com/ - HIER_NONE/- text/html
1563718821.573 7 10.0.0.28 TAG_NONE/200 0 CONNECT 172.217.7.227:443 example.com HIER_NONE/- -
1563718872.923 32 10.0.0.28 TCP_TUNNEL/200 22927 CONNECT 52.216.187.19:443 calculator.s3.amazonaws.com ORIGINAL_DST/52.216.187.19 -

Designing a high availability solution

The Squid instances introduce a single point of failure for the private subnets. If a Squid instance fails, the instances in its associated private subnet cannot send outbound traffic anymore. The following diagram illustrates the architecture that I propose to address this situation within an Availability Zone.
 

Figure 2: The architecture to address if a Squid instance fails within an Availability Zone

Each Squid instance is launched in an Amazon EC2 Auto Scaling group that has a minimum size and a maximum size of one instance. A shell script is run at startup to configure the instances. That includes installing and configuring Squid (see Running Commands on Your Linux Instance at Launch).

The solution uses the CloudWatch Agent and its procstat plugin to collect the CPU usage of the Squid process every 10 seconds. For each Squid instance, the solution creates a CloudWatch alarm that watches this custom metric and goes to an ALARM state when a data point is missing. This can happen, for example, when Squid crashes or the Squid instance fails. Note that for my use case, I consider watching the Squid process a sufficient approach to determining the health status of a Squid instance, although it cannot detect eventual cases of the Squid process being alive but unable to forward traffic. As a workaround, you can use an end-to-end monitoring approach, like using witness instances in the private subnets to send test requests at regular intervals and collect the custom metric.
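
Here’s a sketch of the relevant fragment of the CloudWatch agent configuration file, assuming the agent’s standard procstat plugin syntax; the actual configuration in the solution may include additional settings and dimensions.

{
  "metrics": {
    "metrics_collected": {
      "procstat": [
        {
          "pattern": "squid",
          "measurement": ["cpu_usage"],
          "metrics_collection_interval": 10
        }
      ]
    }
  }
}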

When an alarm goes to ALARM state, CloudWatch sends a notification to an Amazon Simple Notification Service (SNS) topic which then triggers an AWS Lambda function. The Lambda function marks the Squid instance as unhealthy in its Auto Scaling group, retrieves the list of healthy Squid instances based on the state of other CloudWatch alarms, and updates the route tables that currently route traffic to the unhealthy Squid instance to instead route traffic to the first available healthy Squid instance. While the Auto Scaling group automatically replaces the unhealthy Squid instance, private instances can send outbound traffic through the Squid instance in the other Availability Zone.

When the CloudWatch agent starts collecting the custom metric again on the replacement Squid instance, the alarm reverts to OK state. Similarly, CloudWatch sends a notification to the SNS topic, which then triggers the Lambda function. The Lambda function completes the lifecycle action (see Amazon EC2 Auto Scaling Lifecycle Hooks) to indicate that the replacement instance is ready to serve traffic, and updates the route table associated to the private subnet in the same availability zone to route traffic to the replacement instance.
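
To make these two flows concrete, here’s a condensed boto3 sketch of what such a Lambda function does in each case. The function names and parameters are simplified placeholders; the actual function in the solution resolves the instance, hook, and route table identifiers from the alarm notification and your environment, and handles error cases.

import boto3

autoscaling = boto3.client('autoscaling')
ec2 = boto3.client('ec2')

def on_alarm(failed_instance_id, route_table_id, healthy_instance_id):
    # Mark the failed Squid instance unhealthy so Auto Scaling replaces it
    autoscaling.set_instance_health(
        InstanceId=failed_instance_id, HealthStatus='Unhealthy')
    # Redirect the private subnet's default route to a healthy Squid instance
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock='0.0.0.0/0',
        InstanceId=healthy_instance_id)

def on_recovery(instance_id, asg_name, hook_name, route_table_id):
    # Signal that the replacement instance is ready to serve traffic
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=hook_name,
        AutoScalingGroupName=asg_name,
        LifecycleActionResult='CONTINUE',
        InstanceId=instance_id)
    # Route the subnet's outbound traffic back through the local instance
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock='0.0.0.0/0',
        InstanceId=instance_id)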

Implementing and testing the solution

Now that you understand the architecture behind this solution, you can follow the instructions in this section to implement and test the solution in your AWS account.

Implementing the solution

First, you’ll use AWS CloudFormation to provision the required resources. Select the Launch Stack button below to open the CloudFormation console and create a stack from the template. Then, follow the on-screen instructions.

Select this image to open a link that starts building the CloudFormation stack

CloudFormation will create the following resources:

  • An Amazon Virtual Private Cloud (Amazon VPC) with an internet gateway attached.
  • Two public subnets and two private subnets on the Amazon VPC.
  • Three route tables. The first route table is associated with the public subnets to make them publicly accessible. The other two route tables are associated with the private subnets.
  • An S3 bucket to store the Squid configuration files, and two Lambda-based custom resources to add the files squid.conf and whitelist.txt to this bucket.
  • An IAM role to grant the Squid instances permissions to read from the S3 bucket and use the CloudWatch agent.
  • A security group to allow HTTP and HTTPS traffic from instances in the private subnets.
  • A launch configuration to specify the template of Squid instances. That includes commands to run at startup for automating the initial configuration.
  • Two Auto Scaling groups that use this launch configuration to launch the Squid instances.
  • A Lambda function to redirect the outbound traffic and recover a Squid instance when it fails.
  • Two CloudWatch alarms to watch the custom metric sent by Squid instances and trigger the Lambda function when the health status of Squid instances changes.
  • An EC2 instance in the first private subnet to test the solution, and an IAM role to grant this instance permissions to use the SSM agent. Session Manager, which I introduce in the next paragraph, uses this SSM agent (see Working with SSM Agent)

Testing the solution

After the stack creation has completed (it can take up to 10 minutes), connect to the Testing Instance using Session Manager, a capability of AWS Systems Manager that lets you manage instances through an interactive shell without the need to open an SSH port:

  1. Open the AWS Systems Manager console.
  2. In the navigation pane, choose Session Manager.
  3. Choose Start Session.
  4. For Target instances, choose the option button to the left of Testing Instance.
  5. Choose Start Session.

Note: Session Manager makes calls to several AWS endpoints (see Working with SSM Agent). If you prefer to restrict access to a defined set of AWS services, make sure to whitelist the associated domains.

After the connection is made, you can test the solution with the following commands. Only the last three requests should return a valid response, because Squid allows traffic to *.amazonaws.com only.


curl http://www.amazon.com
curl https://www.amazon.com
curl http://calculator.s3.amazonaws.com/index.html
curl https://calculator.s3.amazonaws.com/index.html
aws ec2 describe-regions --region us-east-1         

To find the requests you just made in the access logs, here’s how to browse the Squid logs in Amazon CloudWatch Logs:

  1. Open the Amazon CloudWatch console.
  2. In the navigation pane, choose Logs.
  3. For Log Groups, choose the log group /filtering-nat-instance/access.log.
  4. Choose Search Log Group to view and search log records.

To test how the solution behaves when a Squid instance fails, you can terminate one of the Squid instances manually in the Amazon EC2 console. Then, watch the CloudWatch alarm change its state in the Amazon CloudWatch console, or watch the solution change the default route of the impacted route table in the Amazon VPC console.

You can now delete the CloudFormation stack to clean up the resources that were just created.

Discussion: Transparent or forward proxy?

The solution that I describe in this blog is fully transparent for instances in the private subnets, which means that instances don’t need to be aware of the proxy and can make requests as if they were behind a standard NAT instance. An alternate solution is to deploy a forward proxy in your Amazon VPC and configure instances in private subnets to use it (see the blog post How to set up an outbound VPC proxy with domain whitelisting and content filtering for an example). In this section, I discuss some of the differences between the two solutions.

Supportability

A major drawback with forward proxies is that the proxy must be explicitly configured on every instance within the private subnets. For example, you can configure the HTTP_PROXY and HTTPS_PROXY environment variables on Linux instances, but some applications or services, like yum, require their own proxy configuration, or don’t support proxy usage. Note also that some AWS services and features, like Amazon EMR or Amazon SageMaker notebook instances, don’t support using a forward proxy at the time of this post. However, with TLS 1.3, a forward proxy is the only option to restrict outbound traffic if the SNI is encrypted.

Scalability

Deploying a forward proxy on AWS usually consists of a load balancer distributing traffic to a set of proxy instances launched in an Auto Scaling group. Proxy instances can be launched or terminated dynamically depending on the demand (also known as “horizontal scaling”). With transparent proxies, each route table can route traffic to a single instance at a time, so changing the type of the instance is the only way to increase or decrease the capacity (also known as “vertical scaling”).

The solution I present in this post does not dynamically adapt the instance type of the Squid instances based on the demand. However, you might consider a mechanism in which the traffic from a private subnet is temporarily redirected through another Availability Zone while the Squid instance is being relaunched by Auto Scaling with a smaller or larger instance type.

Mutualization

Deploying a centralized proxy solution and using it across multiple VPCs is a way of reducing cost and operational complexity.

With a forward proxy, instances in private subnets send IP packets to the proxy load balancer. Therefore, sharing a forward proxy across multiple VPCs only requires connectivity between the “instance VPCs” and a proxy VPC that has VPC Peering or equivalent capabilities.

With a transparent proxy, instances in private subnets send IP packets to the remote host. VPC Peering does not support transitive routing (see Unsupported VPC Peering Configurations) and cannot be used to share a transparent proxy across multiple VPCs. However, you can now use an AWS Transit Gateway that acts as a network transit hub to share a transparent proxy across multiple VPCs. I give an example in the next section.

Sharing the solution across multiple VPCs using AWS Transit Gateway

In this section, I give an example of how to share a transparent proxy across multiple VPCs using AWS Transit Gateway. The architecture is illustrated in the following diagram. For the sake of simplicity, the diagram does not include Availability Zones.
 

Figure 3: The architecture for a transparent proxy across multiple VPCs using AWS Transit Gateway

Here’s how instances in the private subnet of “VPC App” can make requests via the shared transparent proxy in “VPC Shared”:

  1. When instances in VPC App make HTTP/S requests, the network packets they send have the public IP address of the remote host as the destination address. These packets are forwarded to the transit gateway, based on the route table associated to the private subnet.
  2. The transit gateway receives the packets and forwards them to VPC Shared, based on the default route of the transit gateway route table.
  3. Note that the transit gateway attachment resides in the transit gateway subnet. When the packets arrive in VPC Shared, they are forwarded to the Squid instance because the next destination is determined by the route table associated with the transit gateway subnet.
  4. The Squid instance makes requests on behalf of the source instance (“Instances” in the schema). Then, it sends the response to the source instance. The packets that it emits have the IP address of the source instance as the destination address and are forwarded to the transit gateway according to the route table associated with the public subnet.
  5. The transit gateway receives and forwards the response packets to VPC App.
  6. Finally, the response reaches the source instance.

In a high availability deployment, you could have one transit gateway subnet per Availability Zone that sends traffic to the Squid instance that resides in the same Availability Zone, or to the Squid instance in another Availability Zone if the instance in the same Availability Zone fails.

You could also use AWS Transit Gateway to implement a transparent proxy solution that scales horizontally. This allows you to add or remove proxy instances based on the demand, instead of changing the instance type. With this approach, you must deploy a fleet of proxy instances (launched by an Auto Scaling group, for example) and establish a VPN connection between each instance and the transit gateway. The proxy instances need to support ECMP (“Equal Cost Multipath routing”; see Transit Gateways) to equally spread the outbound traffic between instances. I don’t describe this alternative architecture further in this blog post.

Conclusion

In this post, I’ve shown how you can use Squid to implement a high availability solution that filters outgoing traffic to the Internet and helps meet your security and compliance needs, while being fully transparent for the back-end instances in your VPC. I’ve also discussed the key differences between transparent proxies and forward proxies. Finally, I gave an example of how to share a transparent proxy solution across multiple VPCs using AWS Transit Gateway.

If you have any questions or suggestions, please leave a comment below or on the Amazon VPC forum.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Nicolas Malaval

Nicolas is a Solution Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technology and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.

64 AWS services achieve HITRUST certification

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/64-aws-services-achieve-hitrust-certification/

We’re excited to announce that 64 AWS services are now certified for the Health Information Trust Alliance (HITRUST) Common Security Framework (CSF).

The full list of AWS services that were audited by a third-party auditor and certified under the HITRUST CSF is available on our Services in Scope by Compliance Program page, where you can also view and download our HITRUST CSF certification.

The HITRUST certification allows AWS customers to tailor their security control baselines to a variety of factors including, but not limited to, regulatory requirements and organization type.

The HITRUST Alliance has established the CSF as a certifiable framework that can be leveraged by organizations to comply with ISO/IEC 27000 series and HIPAA related requirements. The HITRUST CSF is already widely adopted by leading organizations in a variety of industries in their approach to security and privacy. Please visit the HITRUST Alliance website for more information.

As always, we value your feedback and questions and commit to helping customers achieve and maintain the highest standard of security and compliance. Please feel free to reach out to the team through the AWS Compliance Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nine AWS Security Hub best practices

Post Syndicated from Ketan Srivastava original https://aws.amazon.com/blogs/security/nine-aws-security-hub-best-practices/

AWS Security Hub is a security and compliance service that became generally available on June 25, 2019. It provides you with extensive visibility into your security and compliance status across multiple AWS accounts, in a single dashboard per region. The service helps you monitor critical settings to ensure that your AWS accounts remain secure, allowing you to notice and react quickly to any changes in your environment.

AWS Security Hub aggregates, organizes, and prioritizes security findings from supported AWS services—that’s Amazon GuardDuty, Amazon Inspector, and Amazon Macie at the time this post was published—and from various AWS partner security solutions. AWS Security Hub also generates its own findings, based on automated, resource-level and account-level configuration and compliance checks using service-linked AWS Config rules plus other analytic techniques. These checks help you keep your AWS accounts compliant with industry standards and best practices, such as the Center for Internet Security (CIS) AWS Foundations standard.

In this post, I’ll provide nine best practices to help you use AWS Security Hub as effectively as possible.

1. Use the AWS Labs script to turn on Security Hub in all your AWS accounts in all regions and to establish your existing Amazon GuardDuty master/member hierarchy

As a best practice, you should continuously monitor all regions across all of your AWS accounts for unauthorized behavior or misconfigurations, even in regions that you don’t use heavily. AWS already recommends that you do this when using monitoring services like AWS Config and AWS CloudTrail. I recommend that you enable Security Hub in every region available in your AWS accounts.

In addition, you can also invite other AWS accounts to enable Security Hub and share findings with your AWS account. If you send an invitation and it is accepted by the other account owner, your Security Hub account is designated as the master account, and any associated Security Hub accounts become your member accounts. Users from the master account will then be able to view Security Hub findings from member accounts.

To simplify these configurations, you can utilize the AWS Labs script available on GitHub, which provides a step-by-step guide to automate this process. This script allows you to enable (and disable) AWS Security Hub simultaneously across a list of associated AWS accounts and bulk-add them to become your Security Hub members; it sends invitations from the master account and automatically accepts invitations in all member accounts. To run the script, you must have the AWS account IDs and root email addresses of the AWS accounts that you want as your Security Hub members. (Note that you should only share your root email address and account ID with AWS accounts that you trust. Visit the IAM best practices page to learn more about how to keep access to your AWS accounts secured.)

By default, the Security Hub master/member association is independent of the relationships that you’ve established between your Amazon GuardDuty or Amazon Macie accounts and other associated accounts. If you have an existing master/member hierarchy in GuardDuty or Macie, you can export that list of accounts into a CSV file and then use it with the script. For example in GuardDuty, use the ListMembers API to export the AWS Account ID and email of all member accounts, as follows:

aws guardduty list-members --detector-id <Detector ID> --query "Members[].[AccountId, Email]" --output text | awk '{print $1 "," $2}'

The output of the above command will be your GuardDuty member account IDs and their corresponding root email addresses, one per line and separated with a comma as shown below:

12345678910,user1@example.com
98765432101,user2@example.com

2. Enable AWS Config in all AWS accounts and regions and leave the AWS CIS Foundations standard check enabled

When you enable Security Hub in any region, the AWS CIS standard checks are enabled by default. I recommend leaving them enabled; they are important security measures that are applicable to all AWS accounts.

To run most of these checks, Security Hub uses service-linked AWS Config rules. Because of this, you should make sure that AWS Config is turned on and recording all supported resources, including global resources, in all accounts and regions where Security Hub is deployed. You are not charged by AWS Config for these service-linked rules. You are only charged via Security Hub’s pricing model.

3. Use specific managed IAM policies for different types of Security Hub users

You can choose to allow a large group of users to access List and Read Security Hub actions, which will permit them to view your security findings. However, you should allow only a small group of users to access the Security Hub Write actions. This will permit only authorized users to archive, resolve, or remediate the findings.

You can use AWS managed policies to give your employees the permissions they need to get started. These policies are already available in your account and are maintained and updated by AWS. To grant more granular permission to your Security Hub users, I recommend that you create your own customer managed policies. A great place to start with this is to import an existing AWS managed policy. That way, you know that the policy is initially correct, and all you need to do is customize it for your environment. A minimal read-only sketch follows.
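
As an illustration, a customer managed policy for read-only Security Hub users might look like the sketch below. The wildcard actions are illustrative; scope them to your own requirements.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:Get*",
        "securityhub:List*",
        "securityhub:Describe*"
      ],
      "Resource": "*"
    }
  ]
}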

AWS categorizes each service action into one of five access levels based on what each action does: List, Read, Write, Permissions management, or Tagging. To determine which access level to include in the IAM policies that you assign to your users, you can view the policy summary by navigating from the IAM Console to Policies, then selecting any AWS managed or customer managed policy. Next, on the Summary page, under the Permissions tab, select Policy summary (see Figure 1). For more details and examples of access level classification, see Understanding Access Level Summaries Within Policy Summaries.
 

Figure 1: Policy summary of AWSSecurityHubReadOnlyAccess AWS managed policy

4. Use tags for access controls and cost allocation

A SecurityHub::Hub resource represents the implementation of the AWS Security Hub service per region in your AWS account. Security Hub allows you to assign metadata to your SecurityHub::Hub resource in the form of tags. Each tag is a string consisting of a user-defined key and an optional value that makes it easier for you to identify and manage the AWS resources in your environment.

You can control access permissions by using tags on your SecurityHub::Hub resource. For example, you can allow a group of developer IAM entities to manage and update only the SecurityHub::Hub resources that have the tag key developer associated with them. This can help you restrict access to your production SecurityHub::Hub resources, while allowing your developers to continue testing in their developer environment.
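
As a sketch of that pattern, the following policy statement allows Security Hub actions only on resources that carry the developer tag; the Null condition tests for the tag’s presence, and the wildcard action is illustrative, so adjust both to your own tagging scheme and requirements.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "securityhub:*",
      "Resource": "*",
      "Condition": {
        "Null": {"aws:ResourceTag/developer": "false"}
      }
    }
  ]
}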

For more information on the supported tag-based conditions that you can use with the Security Hub APIs, refer to Condition Keys for AWS Security Hub. Note that when you use tag-based conditions for access control, you must also define who can modify those tags.
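To illustrate, a customer managed policy along the following lines would allow hub actions only when the resource carries the developer tag; the action list and tag value here are assumptions that you should adapt to your environment:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "securityhub:DescribeHub",
                "securityhub:TagResource"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/developer": "true"
                }
            }
        }
    ]
}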

To make it easier to categorize and track your AWS costs, you can also activate cost allocation tags. This helps you organize your SecurityHub::Hub resource costs. AWS generates a cost allocation report as a CSV file, with your usage and costs grouped according to your active tags. You can apply tags that represent business categories (such as cost centers, application names, or project environments) to organize your costs.
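For instance, you might tag your hub resource with a cost center key and then activate that key as a cost allocation tag in the Billing and Cost Management console; the ARN and tag value below are placeholders:

aws securityhub tag-resource \
    --resource-arn arn:aws:securityhub:us-east-1:123456789012:hub/default \
    --tags CostCenter=SecurityOperations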

For more information on commonly used tagging categories and effective tagging strategies, read about AWS Tagging Strategies.

5. Integrate and enable your existing security products (with 34 integrations today and more to come)

Numerous tools can help you understand the security and compliance posture of your AWS accounts, but these tools generate their own set of findings, often in different formats. Security Hub normalizes the findings.

With Security Hub, findings generated from integrated providers (both third-party services and AWS services) are ingested using a standard findings format, which eliminates the need for security teams to convert the data. You can currently integrate 34 findings providers to import and/or export findings with Security Hub. Some partner products, like PagerDuty, Splunk, and Slack, can receive findings from Security Hub, although they don’t generate findings.

If you want to add a third-party partner product to your AWS environment, you can choose the Purchase link from the Security Hub console’s Integrations page and navigate to AWS Marketplace. Once purchased, choose the Configure link to navigate to step-by-step instructions to install the product and configure its integration with Security Hub. Then choose Enable integration to create a product subscription in your account for that third-party provider (see Figure 2).

After you enable a subscription, a resource policy is automatically attached to it. The resource policy defines the permissions that Security Hub needs to accept and process the product’s findings. You can also enable the subscription via the API and CloudFormation.
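As a sketch, enabling a product subscription through the CLI looks like the following; the product ARN is a placeholder that you'd copy from the Integrations page:

aws securityhub enable-import-findings-for-product \
    --product-arn arn:aws:securityhub:us-east-1::product/example-partner/example-product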
 

Figure 2: Integrating partner findings provider with Security Hub

6. Build out customized remediation playbooks using Amazon CloudWatch Events, AWS Systems Manager Automation documents, and AWS Step Functions to automatically resolve findings that don’t require human intervention

Security Hub automatically sends all findings to Amazon CloudWatch Events. This integration helps you automate your response to threat incidents by allowing you to take specific actions using AWS Systems Manager Automation documents, OpsItems, and AWS Step Functions. Using these tools, you can create your own incident handling plan. This will allow your security team to focus on strengthening the security of your AWS environments rather than on remediating the current findings.
 

Figure 3: Creating a CloudWatch Events Rule for sending matched Security Hub findings to specific Targets
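For reference, a minimal CloudWatch Events event pattern that matches the findings Security Hub imports looks like the following; you would typically narrow it with additional detail fields before wiring it to a target like the rule shown in Figure 3:

{
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"]
}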

7. Create custom actions to send a copy of a Security Hub finding to a resource that is internal or external to your AWS account, enabling additional visibility and remediation options for the finding

Because of its integration with CloudWatch Events, you can use Security Hub to create custom actions that will send specific findings to ticketing, chat, email, or automated remediation systems. Custom actions can also be sent to your own AWS resources, such as AWS Systems Manager OpsCenter, AWS Lambda, or Amazon Kinesis, allowing you to do your own remediation or data capture related to the finding.

For an in-depth look at this architecture, plus specific examples of how to implement custom actions, see How to Integrate AWS Security Hub Custom Actions with PagerDuty and How to Enable Custom Actions in AWS Security Hub.

In addition, Security Hub gives you the option to choose a language-specific AWS SDK so that you can use custom actions to resolve findings programmatically. Below, I’ll demonstrate how you can implement this using AWS Lambda and the AWS SDK for Python (Boto3). In my example, I’ll remediate the finding generated by Security Hub for CIS check 2.4, “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” For this walk-through, I assume that you have the necessary AWS IAM permissions to work with Security Hub, CloudWatch Events, Lambda, and AWS CloudTrail.
 

Figure 4: Data flow supporting remediation of Security Hub findings using custom actions

As shown in Figure 4:

  1. When findings against CIS check 2.4 are generated in Security Hub, Security Hub will send them to CloudWatch Events using custom actions that I’ll describe below.
  2. CloudWatch Events will send the findings to a Lambda function that has been configured as the target.
  3. The Lambda function will utilize a Python script to check whether the finding has been generated against CIS check 2.4. If it has, the Lambda function will identify the affected CloudTrail trail and configure it with CloudWatch Logs to monitor the trail logs.

Prerequisites

  1. You must configure an IAM Role for AWS CloudTrail to assume so that it can deliver events to your CloudWatch Logs log group. For more information about how to do this, refer to the AWS CloudTrail documentation. I’ll refer to this role as the CloudTrail role.
  2. To deploy the Lambda function, you must configure an IAM Role for the Lambda function to assume. I’ll refer to this role as the Lambda execution role. The following sample policy includes the permissions that you’ll assign to it for this example. Please replace <CloudTrail_CloudWatchLogs_Role> with the CloudTrail role that you created in the previous step. Depending on your use case, you can restrict this IAM policy further to grant least privilege, which is a recommended IAM Best Practice.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "cloudtrail:UpdateTrail",
                "iam:GetRole"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::012345678910:role/<CloudTrail_CloudWatchLogs_Role>"
        }
    ]
}     

Solution deployment

  1. Create a custom action in AWS Security Hub and associate it with a CloudWatch Events rule that you configure for your Security Hub findings. Follow the instructions laid out in the Security Hub user guide for the exact steps to do this.
  2. Create a Lambda Function, which will complete the auto-remediation of the CIS 2.4 findings:
    1. Open the Lambda Console and select Create function.
    2. On the next page, choose Author from scratch.
    3. Under Basic information, enter a name for your function. For Runtime, select Python 3.7.
       
      Figure 5: Updating basic information to create the Lambda function

    4. Under Permissions, expand Choose or create an execution role.
    5. Under Execution role, select the drop-down menu and change the setting to Use an existing role.
    6. Under Existing role, select the Lambda execution role that you created earlier, then select Create function.
       
      Figure 6: Updating basic information and permissions to create the Lambda function

    7. Delete the default function code and paste the code I’ve provided below:
      
              import boto3
              cloudtrail_client = boto3.client('cloudtrail')
              cloudwatchlogs_client = boto3.client('logs')
              iam_client = boto3.client('iam')
              
              role_details = iam_client.get_role(RoleName='<CloudTrail_CloudWatchLogs_Role>')
              
              def lambda_handler(event, context):
                  # First of all, let's see if the JSON sent by CloudWatch Events has any Security Hub findings.
                  if 'detail' in event.keys() and 'findings' in event['detail'].keys() and len(event['detail']['findings']) > 0:
                      print("There are some findings. Let's check them!")
                      print("Number of findings: %i" % len(event['detail']['findings']))
              
                      # Then we need to filter out the findings. In this code snippet, we'll handle only findings related to CloudTrail trails for integration with CloudWatch Logs.
                      for finding in event['detail']['findings']:
                          if 'Title' in finding.keys():
                              if 'Ensure CloudTrail trails are integrated with CloudWatch Logs' in finding['Title']:
                                  print("There's a CloudTrail-related finding. I can handle it!")
              
                                  if 'Compliance' in finding.keys() and 'Status' in finding['Compliance'].keys():
                                      print("Compliance Status: %s" % finding['Compliance']['Status'])
              
                                      # We can skip compliant findings, and evaluate only the non-compliant ones.                        
                                      if finding['Compliance']['Status'] == 'PASSED':
                                          continue
              
                                      # For each non-compliant finding, we need to get specific pieces of information so as to create the correct log group and update the CloudTrail trail.                        
                                      for resource in finding['Resources']:
                                          resource_id = resource['Id']
                                          cloudtrail_name = resource['Details']['Other']['name']
                                          loggroup_name = 'CloudTrail/' + cloudtrail_name
                                          print("ResourceId for the finding is %s" % resource_id)
                                          print("LogGroup name: %s" % loggroup_name)
              
                                          # At this point, we can create the log group using the name extracted from the finding.
                                          try:
                                              response_logs = cloudwatchlogs_client.create_log_group(logGroupName=loggroup_name)
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
              
                                          # For updating the CloudTrail trail, we need to have the ARN of the log group. Let's retrieve it now.                            
                                          response_logsARN = cloudwatchlogs_client.describe_log_groups(logGroupNamePrefix = loggroup_name)
                                          print("LogGroup ARN: %s" % response_logsARN['logGroups'][0]['arn'])
                                          print("The role used by CloudTrail is: %s" % role_details['Role']['Arn'])
              
                                          # Finally, let's update the CloudTrail trail so that it sends logs to the new log group created.
                                          try:
                                              response_cloudtrail = cloudtrail_client.update_trail(
                                                  Name=cloudtrail_name,
                                                  CloudWatchLogsLogGroupArn = response_logsARN['logGroups'][0]['arn'],
                                                  CloudWatchLogsRoleArn = role_details['Role']['Arn']
                                              )
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
                              else:
                                  print("Title: %s" % finding['Title'])
                                  print("This type of finding cannot be handled by this function. Skipping it…")
                          else:
                              print("This finding doesn't have a title and so cannot be handled by this function. Skipping it…")
                  else:
                      print("There are no findings to remediate.")            
              

    8. After pasting the code, replace <CloudTrail_CloudWatchLogs_Role> with your CloudTrail role and select Save to save your Lambda function.
       
      Figure 7: Editing Lambda code to replace the correct CloudTrail role

  3. Go to your CloudWatch console and select Rules in the navigation pane on the left.
    1. From the list of CloudWatch rules that you see, select the rule that you created in Step 1 of this solution deployment.
    2. Then, select Actions on the top right of the page and choose Edit.
    3. On the Step 1: Create rule page, under Targets, choose Lambda function and select the Lambda function you created in Step 2.
    4. Select Configure details.
    5. On the Step 2: Configure rule details page, select Update rule.
       
      Figure 8: Adding your created Lambda function as target for the CloudWatch rule

  4. Configuration is now complete, and you can test your rule. Go to your AWS Security Hub console and select Compliance standards in the navigation pane.
    1. Next, select CIS AWS Foundations.
       
      Figure 9: Compliance standards page in the Security Hub console

    2. Search for the rule 2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs and select it.
       
      Figure 10: Locating CIS check 2.4 in the Security Hub console

    3. If you’ve left the default AWS Security Hub CIS checks enabled (along with AWS Config service in the same region), and if you have CloudTrail trails in that region which are not yet configured to deliver events to CloudWatch Logs, you should see a low severity finding with a Failed Compliance status.
    4. Select the failed finding by selecting the checkbox and choosing the Actions button.
    5. Finally, from the drop-down menu, select the custom action that you created in Step 1 to send the finding to CloudWatch Events. CloudWatch Events will send the finding to your Lambda function, which you configured as the target for the rule in Step 3. The Lambda function will automatically identify the affected CloudTrail trail and configure a CloudWatch Logs log group for you. The log group will have the same name as your trail, for identification purposes. You can modify the code further to suit your needs.

    Note: There may be a delay before the compliance status of the remediated resource changes. Once the CIS AWS Foundations Standard is enabled, Security Hub will run the checks within 2 hours. After that, the checks are automatically run once every 24 hours.

     

    Figure 11: Findings generated against CIS check 2.4 in the Security Hub console

8. Customize your insights using the default “managed insights” as templates and use them to prioritize resources and findings to act upon

A Security Hub “insight” is a collection of related findings to which one or more Security Hub filters have been applied. Insights can help you organize your findings and identify security risks that need immediate attention.

Security Hub offers several managed (default) insights. You can use these as templates for new insights, and modify them depending on your use case. You can save these modified queries as new custom insights to gain even greater visibility into your AWS accounts. Refer to the documentation for step-by-step instructions on how to create custom insights.
     

Figure 12: Creating a Security Hub custom insight
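You can also create insights programmatically. Here's a hedged CLI sketch; the insight name, filter, and grouping attribute are illustrative choices:

aws securityhub create-insight \
    --name "Critical findings grouped by resource" \
    --filters '{"SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}]}' \
    --group-by-attribute "ResourceId"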

9. Use the free trial to evaluate what your costs could be

Security Hub provides a 30-day free trial for all AWS accounts and regions. The trial is a good way to evaluate how much Security Hub will cost, on average, to monitor threats and compliance in your environments. You can view an estimate by navigating from the Security Hub console to Settings, then Usage (see Figure 13).
     

Figure 13: Estimating your Security Hub costs

Conclusion

AWS Security Hub gives you more visibility into the security and compliance status of your AWS environments. Using the Security Hub best practices discussed here, security teams can spend more time on incident remediation and recovery rather than incident detection and organization. Security Hub has undergone HIPAA, ISO, PCI, and SOC certification. To learn more about Security Hub, refer to the AWS Security Hub documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ketan Srivastava

Ketan is a Cloud Support Engineer at AWS. He enjoys the fact that, at AWS, there are always so many opportunities to build things better for our customers and learn from these opportunities. Outside of work, he plays MOBAs and travels to new places with his wife. He holds a Master of Science degree from Rochester Institute of Technology.

How to deploy CloudHSM to securely share your keys with your SaaS provider

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/security/how-to-deploy-cloudhsm-securely-share-keys-with-saas-provider/

If your organization is using software as a service (SaaS), your data is likely stored and protected by the SaaS provider. However, depending on the type of data that your organization stores and the compliance requirements that it must meet, you might need more control over how the encryption keys are stored, protected, and used. In this post, I’ll show you two options for deploying and managing your own CloudHSM cluster to secure your keys, while still allowing trusted third-party SaaS providers to securely access your HSM cluster in order to perform cryptographic operations. You can also use this architecture when you want to share your keys with another business unit or with an application that’s running in a separate AWS account.

AWS CloudHSM is one of several cryptography services provided by AWS to help you secure your data and keys in the AWS cloud. AWS CloudHSM provides single-tenant HSMs based on third-party FIPS 140-2 Level 3 validated hardware, under your control, in your Amazon Virtual Private Cloud (Amazon VPC). You can generate and use keys on your HSM using CloudHSM command line tools or standards-compliant C, Java, and OpenSSL SDKs.

A related, more widely used service is AWS Key Management Service (KMS). KMS is generally easier to use, cheaper to operate, and is natively integrated with most AWS services. However, there are some use cases for which you may choose to rely on CloudHSM to meet your security and compliance requirements.

Solution Overview

There are two ways you can set up your VPC and CloudHSM clusters to allow trusted third-party SaaS providers to use the HSM cluster for cryptographic operations. The first option is to use VPC peering to allow traffic to flow between the SaaS provider’s HSM client VPC and your CloudHSM VPC, and to use a custom application to perform cryptographic operations on the HSM.

The second option is to use KMS to manage the keys, specifying a custom key store to generate and store the keys. AWS KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage.

Decision Criteria: VPC Peering vs Custom Key Store

The right solution for you will depend on factors like your VPC configuration, security requirements, network setup, and the type of cryptographic operations you need. The following table provides a high-level summary of how these two options compare. Later in this post, I’ll go over both options in detail and explain the design considerations you need to be aware of before deploying the solution in your environment.

| Technical Considerations | VPC Peering | Custom Keystore |
| --- | --- | --- |
| Are you able to peer or connect your HSM VPC with your SaaS provider? | ✔ |   |
| Is your SaaS provider sensitive to costs from KMS usage in their AWS account? | ✔ |   |
| Do you need CloudHSM-specific cryptographic tasks like signing, HMAC, or random number generation? | ✔ |   |
| Does your SaaS provider need to encrypt your data directly with the Master Key? | ✔ |   |
| Does your application rely on a PKCS#11-compliant or JCE-compliant SDK? | ✔ |   |
| Does your SaaS provider need to use the keys in AWS services? |   | ✔ |
| Do you need to log all key usage activities when SaaS providers use your HSM keys? |   | ✔ |

Option 1: VPC Peering

 

Figure 1: Architecture diagram showing VPC peering between the SaaS provider’s HSM client VPC and the customer’s HSM VPC

Figure 1 shows how you can deploy a CloudHSM cluster in a dedicated HSM VPC and peer this HSM VPC with your service provider’s VPC to allow them to access the HSM cluster through the client/application. I recommend that you deploy the CloudHSM cluster in a separate HSM VPC to limit the scope of resources running in that VPC. Since VPC peering is not transitive, service providers will not have access to any resources in your application VPCs or any other VPCs that are peered with the HSM VPC.

It’s possible to leverage the HSM cluster for other purposes and applications, but you should be aware of the potential drawbacks before you do. This approach could make it harder for you to find non-overlapping CIDR ranges for use with your SaaS provider. It would also mean that your SaaS provider could accidentally overwrite HSM account credentials or lock out your HSMs, causing an availability issue for your other applications. For these reasons, I recommend that you dedicate a CloudHSM cluster for use with your SaaS providers and use small VPC and subnet sizes, like /27, so that you’re not wasting IP space and it’s easier to find non-overlapping IP addresses with your SaaS provider.

If you’re using VPC peering, your HSM VPC CIDR cannot overlap with your SaaS provider’s VPC. Deploying the HSM cluster in a separate VPC gives you flexibility in selecting a suitable CIDR range that is non-overlapping with the service provider since you don’t have to worry about your other applications. Also, since you’re only hosting the HSM Cluster in this VPC, you can choose a CIDR range that is relatively small.
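As an illustrative sketch, the peering handshake looks like the following; all IDs and the account number are placeholders, and you'd still need to add routes and security group rules for the CloudHSM ports (2223-2225):

# From your account: request peering between your HSM VPC and the SaaS provider's VPC
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0a1b2c3d \
    --peer-vpc-id vpc-4e5f6a7b \
    --peer-owner-id 123456789012

# From the SaaS provider's account: accept the request
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0123456789abcdef0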

Design considerations

Here are additional considerations to think about when deploying this solution in your environment:

  • VPC peering allows resources in either VPC to communicate with each other, as long as security groups, NACLs, and routing allow it. To improve security, place only resources that are meant to be shared in the HSM VPC, and secure communication at the port/protocol level by using security groups.
  • If you decide to revoke the SaaS provider’s access to your CloudHSM, you have two choices:
    • At the network layer, you can remove connectivity by deleting the VPC peering or by modifying the CloudHSM security groups to disallow the SaaS provider’s CIDR ranges.
    • Alternately, you can log in to the CloudHSM as Crypto Officer (CO) and change the password or delete the Crypto user that the SaaS provider is using.
  • If you’re deploying CloudHSM across multiple accounts or VPCs within your organization, you can also use AWS Transit Gateway to connect the CloudHSM VPC to your application VPCs. Transit Gateway is ideal when you have multiple application VPCs that need CloudHSM access, as it scales easily and you don’t have to worry about VPC peering limits or the number of peering connections to manage.
  • If you’re the SaaS provider, and you have multiple clients who might be interested in this solution, you must make sure that no customer’s IP space overlaps with yours, and that each customer’s HSM VPC doesn’t overlap with any of the others. One solution is to dedicate one VPC per customer, to keep the client/application dedicated to that customer, and to peer this VPC with your application VPC. This reduces the overlapping CIDR dependency among all your customers.

Option 2: Custom Key Store

As the AWS KMS documentation explains, KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage. When you use a CMK in a custom key store, the cryptographic operations are performed in the HSMs in the cluster. This feature combines the convenience and widespread integration of AWS KMS with the added control of an AWS CloudHSM cluster in your AWS account. This option allows you to keep your master key in the CloudHSM cluster but allows your SaaS provider to use your master key securely by using KMS.

Each custom key store is associated with an AWS CloudHSM cluster in your AWS account. When you connect the custom key store to its cluster, AWS KMS creates the network infrastructure to support the connection. Then it logs in to the cluster using the credentials of a dedicated crypto user in the cluster. All of this is set up automatically, with no need to peer VPCs or connect to your SaaS provider’s VPC.

You create and manage your custom key stores in AWS KMS, and you create and manage your HSM clusters in AWS CloudHSM. When you create CMKs in an AWS KMS custom key store, you view and manage the CMKs in AWS KMS. But you can also view and manage their key material in AWS CloudHSM, just as you would do for other keys in the cluster.
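At a high level, the setup flows like the following sketch; the cluster ID, key store ID, certificate file, and password are placeholders:

# Associate a custom key store with your CloudHSM cluster
aws kms create-custom-key-store \
    --custom-key-store-name ExampleKeyStore \
    --cloud-hsm-cluster-id cluster-1a23b4cdefg \
    --trust-anchor-certificate file://customerCA.crt \
    --key-store-password <kmsuser password>

# Connect the key store, then create a CMK whose key material lives in your cluster
aws kms connect-custom-key-store --custom-key-store-id cks-1234567890abcdef0
aws kms create-key \
    --origin AWS_CLOUDHSM \
    --custom-key-store-id cks-1234567890abcdef0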

The following diagram shows how some keys can be located in a CloudHSM cluster but be visible through AWS KMS. These are the keys that AWS KMS can use for crypto operations performed through KMS.
 

Figure 2: High level overview of KMS custom key store

While this option eliminates many of the networking components you need to set up for Option 1, it does limit the type of cryptographic operations that your SaaS provider can perform. Since the SaaS provider doesn’t have direct access to CloudHSM, the crypto operations are limited to the encrypt and decrypt operations supported by KMS, and your SaaS provider must use KMS APIs for all of their operations. This is easy if they’re using AWS services that already integrate with KMS, but if they’re performing operations within their application before storing the data in AWS storage services, this approach could be challenging, because KMS doesn’t support all the same types of cryptographic operations that CloudHSM supports.

Figure 3 illustrates the various components that make up a custom key store and shows how a CloudHSM cluster can connect to KMS to create a customer controlled key store.
 

Figure 3: A cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store

Design considerations

  • Note that when you use a custom key store, you create a kmsuser CU account in your AWS CloudHSM cluster and provide the kmsuser account credentials to AWS KMS.
  • This option requires your service provider to be able to use KMS as the key management option within their application. Because your SaaS provider cannot communicate directly with the CloudHSM cluster, they must instead use KMS APIs to encrypt the data. If your SaaS provider is encrypting within their application without using KMS, this option may not work for you.
  • When deploying a custom key store, you must not only control access to the CloudHSM cluster, you must also control access to AWS KMS.
  • Because the custom key store and KMS are located in your account, you must give the SaaS provider permission to use certain KMS keys. You can do this by enabling cross-account access; a sample key policy statement follows this list. For more information, please refer to the blog post “Share custom encryption keys more securely between accounts by using AWS Key Management Service.”
  • I recommend dedicating an AWS account to the CloudHSM cluster and custom key store, as this simplifies setup. For more information, please refer to Controlling Access to Your Custom Key Store.
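Here's a sketch of a key policy statement that grants a SaaS provider's account the ability to use a CMK; the account ID and action list are illustrative and should be scoped to your use case:

{
    "Sid": "AllowSaaSProviderUseOfTheCMK",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey"
    ],
    "Resource": "*"
}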

Network architecture that is not supported by CloudHSM

Figure 4: Diagram showing the network anti-pattern for deploying CloudHSM

Figure 4 shows various networking technologies, like AWS PrivateLink, Network Address Translation (NAT), and AWS Load Balancers, that cannot be used with CloudHSM when placed between the CloudHSM cluster and the client/application. All of these methods mask the real IPs of the HSM cluster nodes from the client, which breaks the communication between the CloudHSM client and the HSMs.

When the CloudHSM client successfully connects to the HSM cluster, it downloads a list of HSM IP addresses, which it stores and uses for subsequent connections. When one of the HSM nodes is unavailable, the client/application automatically tries the IP addresses of the other HSM nodes it knows about. When HSMs are added to or removed from the cluster, the client is automatically reconfigured. Because the client relies on a current list of IP addresses to transparently handle high availability and failover within the cluster, masking the real IP addresses of the HSM nodes breaks communication between the cluster and the client.

You can read more about how the CloudHSM client works in the AWS CloudHSM User Guide.

Summary

In this blog post, I’ve shown you two options for deploying CloudHSM to store your key material while allowing your SaaS provider to access and use those keys on your behalf. This allows you to remain in control of your encryption keys and use a SaaS solution without compromising security.

It’s important to understand the security requirements, network setup, and type of cryptographic operations for each approach, and to choose the option that aligns best with your goals. As a best practice, it’s also important to understand how to secure your CloudHSM and KMS deployment and to apply role-based access control with least privilege. Read more about AWS KMS Best Practices and CloudHSM Best Practices.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Vinod Madabushi

Vinod is an Enterprise Solutions Architect with AWS. He works with customers on building highly available, scalable, and secure applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

AWS achieves OSPAR outsourcing standard for Singapore financial industry

Post Syndicated from Brandon Lim original https://aws.amazon.com/blogs/security/aws-achieves-ospar-outsourcing-standard-for-singapore-financial-industry/

AWS has achieved the Outsourced Service Provider Audit Report (OSPAR) attestation for 66 services in the Asia Pacific (Singapore) Region. The OSPAR assessment is performed by an independent third-party auditor. AWS’s OSPAR demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines).

The ABS Guidelines are intended to assist financial institutions in understanding approaches to due diligence, vendor management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The ABS Guidelines are closely aligned with the Monetary Authority of Singapore’s Outsourcing Guidelines, and they’re one of the standards that the financial services industry in Singapore uses to assess the capability of their outsourced service providers (including cloud service providers).

AWS’s alignment with the ABS Guidelines demonstrates to customers AWS’s commitment to meeting the high expectations for cloud service providers set by the financial services industry in Singapore. Customers can leverage OSPAR to conduct their due diligence, minimizing the effort and costs required for compliance. AWS’s OSPAR report is now available in AWS Artifact.

You can find additional resources about regulatory requirements in the Singapore financial industry at the AWS Compliance Center. If you have questions about AWS’s OSPAR, or if you’d like to inquire about how to use AWS for your material workloads, please contact your AWS account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Brandon Lim

Brandon is the Head of Security Assurance for Financial Services, Asia-Pacific. Brandon leads AWS’s regulatory and security engagement efforts for the Financial Services industry across the Asia Pacific region. He is passionate about working with Financial Services Regulators in the region to drive innovation and cloud adoption for the financial industry.

Introducing the “Preparing for the California Consumer Privacy Act” whitepaper

Post Syndicated from Julia Soscia original https://aws.amazon.com/blogs/security/introducing-the-preparing-for-the-california-consumer-privacy-act-whitepaper/

AWS has published a whitepaper, Preparing for the California Consumer Privacy Act, to provide guidance on designing and updating your cloud architecture to follow the requirements of the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020.

The whitepaper is intended for engineers and solution builders, but it also serves as a guide for qualified security assessors (QSAs) and internal security assessors (ISAs) so that you can better understand the range of AWS products and services that are available for you to use.

The CCPA was enacted into law on June 28, 2018, and grants California consumers certain privacy rights. The CCPA grants consumers the right to request that a business disclose the categories and specific pieces of personal information collected about the consumer, the categories of sources from which that information is collected, the “business purposes” for collecting or selling the information, and the categories of third parties with whom the information is shared. This whitepaper addresses the three main subsections of the CCPA: data collection, data retrieval and deletion, and data awareness.

To read the text of the CCPA, please visit the website for California Legislative Information.

If you have questions or want to learn more, contact your account executive or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Julia Soscia

Julia is a Solutions Architect at Amazon Web Services based out of New York City. Her main focus is to help customers create well-architected environments on the AWS cloud platform. She is an experienced data analyst with a focus on Big Data and Analytics.

Anthony Pasquarielo

Anthony is a Solutions Architect at Amazon Web Services. He’s based in New York City. His main focus is providing customers technical guidance and consultation during their cloud journey. Anthony enjoys delighting customers by designing well-architected solutions that drive value and provide growth opportunity for their business.

Justin De Castri

Justin is a Manager of Solutions Architecture at Amazon Web Services based in New York City. His primary focus is helping customers build secure, scalable, and cost-optimized solutions that are aligned with their business objectives.

Spring 2019 PCI DSS report now available, 12 services added in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-pci-dss-report-now-available-12-services-added-in-scope/

At AWS Security, continuously raising the cloud security bar for our customers is central to all that we do. Part of that work is focused on our formal compliance certifications, which enable our customers to use the AWS cloud for highly sensitive and/or regulated workloads. We see our customers constantly developing creative and innovative solutions—and in order for them to continue to do so, we need to increase the availability of services within our certifications. I’m pleased to tell you that in the past year, we’ve increased our Payment Card Industry – Data Security Standard (PCI DSS) certification scope by 79%, from 62 services to 111 services, including 12 newly added services in our latest PCI report (listed below), and we were audited by our third-party auditor, Coalfire.

The PCI DSS report and certification cover the 111 services currently in scope that are used by our customers to architect a secure Cardholder Data Environment (CDE) to protect important workloads. The full list of PCI DSS certified AWS services is available on our Services in Scope by Compliance program page. The 12 newly added services for our Spring 2019 report are:

Our compliance reports, including this latest PCI report, are available on-demand through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, please visit the AWS Compliance Programs page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profile: Rustan Leino, Senior Principal Applied Scientist

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-rustan-leino-senior-principal-applied-scientist/

I recently sat down with Rustan from the Automated Reasoning Group (ARG) at AWS to learn more about the prestigious Computer Aided Verification (CAV) Award that he received, and to understand the work that led to the prize. CAV is a top international conference on formal verification of software and hardware. It brings together experts in this field to discuss groundbreaking research and applications of formal verification in both academia and industry. Rustan received this award as a result of his work developing program-verification technology. Rustan and his team have taken his research and applied it in unique ways to protect AWS core infrastructure on which customers run their most sensitive applications. He shared details about his journey in the formal verification space, the significance of the CAV award, and how he plans to continue scaling formal verification for cloud security at AWS.

Congratulations on your CAV Award! Can you tell us a little bit about the significance of the award and why you received it?

Thanks! I am thrilled to jointly receive this award with Jean-Christophe Filliâtre, who works at the CNRS Research Laboratory in France. The CAV Award recognizes fundamental contributions to program verification, that is, the field of mathematically proving the correctness of software and hardware. Jean-Christophe and I were recognized for building intermediate verification languages (IVLs), which are a central building block of modern program verifiers.

It’s like this: the world relies on software, and the world relies on that software to function correctly. Software is written by software engineers using some programming language. If the engineers want to check, with mathematical precision, that a piece of software always does what it is intended to do, then they use a program verifier for the programming language at hand. IVLs have accelerated the building of program verifiers for many languages. So, IVLs aid the construction of program verifiers which, in turn, improve software quality that, in turn, makes technology more reliable for all.

What is your role at AWS? How are you applying technologies you’ve been recognized by CAV for at AWS?

I am building and applying proof tools to ensure the correctness and security of various critical components of AWS. This lets us deliver a better and safer experience for our customers. Several tools that we apply are based on IVLs. Among them are the SideTrail verifier for timing-based attacks, the VCC verifier for concurrent systems code, and the verification-aware programming language Dafny, all of which are built on my IVL named Boogie.

What does an automated program verification tool do?

An automated program verifier is a tool that checks if a program behaves as intended. More precisely, the verifier tries to construct a correctness proof that shows that the code meets the given specification. Specifications include things like “data at rest on disk drives is always encrypted,” or “the event-handler always eventually returns control back to the caller,” or “the API method returns a properly formatted buffer encrypted under the current session key.” If the verifier detects a discrepancy (that is, a bug), the developer responds by fixing the code. Sometimes, the verifier can’t determine what the answer is. In this case, the developer can respond by helping the tool with additional information, so-called proof hints, until the tool is able to complete the correctness proof or find another discrepancy.

For example, picture a developer who is writing a program. The program is like a letter written in a word processor, but the letter is written in a language that the computer can understand. For cloud security, say the program manages a set of data keys and takes requests to encrypt data under those keys. The developer writes down the intention that each encryption request must use a different key. This is the specification: the what.

Next, the developer writes code that instructs the computer how to respond to a request. The code separates the keys into two lists. An encryption request takes a key from the “not used” list, encrypts the given data, and then places the key on the “used” list.

To see that the code in this example meets the specification, it is crucial to understand the roles of the two lists. A program verifier might not figure this out by itself and would then indicate the part of the code it can’t verify, much like a spell-checker underlines spelling and grammar mistakes in a letter you write. To help the program verifier along, the developer provides a proof hint that says that the keys on the “not used” list have never been returned. The verifier checks that the proof hint is correct and then, using this hint, is able to construct the proof that the code meets the specification.
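To make this concrete, here is a minimal Dafny-style sketch of the two-list key manager described above; it's my own illustration of the idea, not code from the interview, and the names are invented:

// A key manager that must never hand out the same key twice.
class KeyManager {
  var notUsed: set<int>   // keys that have never been returned
  var used: set<int>      // keys that have already been handed out

  // The proof hint from the example: the two lists never share a key.
  predicate Valid()
    reads this
  { notUsed !! used }

  method TakeKey() returns (k: int)
    requires Valid() && notUsed != {}
    modifies this
    ensures Valid() && k in used && k !in notUsed
  {
    k :| k in notUsed;          // pick any key that hasn't been used
    notUsed := notUsed - {k};   // remove it from the "not used" list
    used := used + {k};         // record it on the "used" list
  }
}

Once the Valid() invariant (the proof hint) is stated, the verifier discharges the ensures clause automatically.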

You’ve designed several verification tools in your career. Can you share how you’re using verification tools such as Dafny and Boogie to provide higher assurances for AWS infrastructure?

Dafny is a Java-like programming language that was designed with verification in mind. Whereas most programming languages only allow you to write code, Dafny allows you to write specifications and code at the same time. In addition, Dafny allows you to write proof hints (in fact, you can write entire proofs). Having specifications, code, and proofs in one language sets you up for an integrated verification experience. But this would remain an intellectual exercise without an automated program verifier. The Dafny language was designed alongside its automated program verifier. When you write a Dafny program, the verifier constantly runs in the background and points out mistakes as you go along, very much like the spell-checker underlines I alluded to. Internally, the Dafny verifier is based on the Boogie IVL.

At AWS, we’re currently using Dafny to write and prove a variety of security-critical libraries. For example: encryption libraries. Encryption is vital for keeping customer data safe, so it makes for a great place to focus energy on formal verification.

You spent time in scientific research roles before joining AWS. Has your experience at AWS caused you to see scientific challenges in a different way now?

I began my career in 1989 in the Microsoft Windows LAN Manager team. Based on my experiences helping network computers together, I became convinced that formally proving the correctness of programs was going to go from a “nice to have” to a “must have” in the future, because of the need for more security in a world where computers are so interconnected. At the time, the tools and techniques for proving programs correct were so rudimentary that the only safe harbor for this type of work was in esoteric research laboratories. Thus, that’s where I could be found. But these days, the tools are increasingly scalable and usable, so finally I made the jump back into development where I’m leading efforts to apply and operationalize this approach, and also to continue my research based on the problems that arise as we do so.

One of the challenges we had in the 1990s and 2000s was that few people knew how to use the tools, even if they did exist. Thus, while in research laboratories, an additional focus of mine has been on making tools that are so easy to use that they can be used in university education. Now, with dozens of universities using my tools and after several eye-opening successes with the Dafny language and verifier, I’m scaling these efforts up with development teams in AWS that can hire the students who are trained with Dafny.

I alluded to continuing research. There are still scientific challenges to make specifications more expressive and more concise, to design programming languages more streamlined for verification, and to make tools more automated, faster, and more predictable. But there’s an equally large challenge in influencing the software engineering process. The two are intimately linked, and cannot be teased apart. Only by changing the process can we hope for larger improvements in software engineering. Our application of formal verification at AWS is teaching us a lot about this challenge. We like to think we’re changing the software engineering world.

What are the next big challenges that we need to tackle in cloud security? How will automated reasoning play a role?

There is a lot of important software to verify. This excites me tremendously. As I see it, the only way we can scale is to distribute the verification effort beyond the verification community, and to get usable verification tools into the hands of software engineers. Tooling can help put the concerns of security engineers into everyday development. To meet this challenge, we need to provide appropriate training and we need to make tools as seamless as possible for engineers to use.

I hear your YouTube channel, Verification Corner, is loved by engineering students. What’s the next video you’ll be creating?

[Rustan laughs] Yes, Verification Corner has been a fun way for me to teach about verification, and I receive appreciation from people around the world who have learned something from these videos. The episodes tend to focus on learning concepts of program verification. These concepts are important to all software engineers, and Verification Corner shows the concepts in the context of small (and sometimes beautiful) programs. Beyond learning the concepts in isolation, it’s also important to see the concepts in use in larger programs, to help engineers apply the concepts. I want to devote some future Verification Corner episodes to showing verification “in the trenches”; that is, the application of verification in larger, real-life (and sometimes not so beautiful) programs for cloud security, as we’re continuing to do at AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

How to get specific security information about AWS services

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/how-to-get-specific-security-information-about-aws-services/

We’re excited to announce the launch of dedicated security chapters in the AWS documentation for over 40 services. Security is a key component of your decision to use the cloud. These chapters can help your organization get in-depth information about both the built-in and the configurable security of AWS services. This information goes beyond “how-to.” It can help developers—as well as Security, Risk Management, Compliance, and Product teams—assess a service prior to use, determine how to use a service securely, and get updated information as new features are released.

This initiative is a direct result of customer requests for easy-to-find, easy-to-consume security documentation. Our new chapters provide information about the security of the cloud and in the cloud, as outlined in the AWS Shared Responsibility Model, for each service. The chapters align with the Cloud Adoption Framework: Security Perspective and include information about the following topics, as applicable:

  • Data protection
  • Identity and access management
  • Logging and monitoring
  • Compliance validation
  • Resilience
  • Infrastructure security
  • Configuration and vulnerability analysis
  • Security best practices

You can find links to the security chapters on the AWS Security Documentation page, which will be updated as more security chapters become available. Here are links to the new Security chapters we’ve released so far:

You can give us your feedback by selecting the Feedback button in the lower right corner of any documentation page. We look forward to learning how you use this information within your organization and how we can continue to provide useful resources to you.

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Kristen Haught

Kristen is a Security and Compliance Business Development Manager focused on strategic initiatives that enable financial services customers to adopt Amazon Web Services for regulated workloads. She cares about sharing strategies that help customers adopt a culture of innovation, while also strengthening their security posture and minimizing risk in the cloud.

AWS Security Profile: John Backes, Senior Software Development Engineer

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-john-backes-senior-software-development-engineer/

AWS scientists and engineers believe in partnering closely with the academic and research community to drive innovation in a variety of areas of our business, including cloud security. One of the ways they do this is through participating in and sponsoring scientific conferences, where leaders in fields such as automated reasoning, artificial intelligence, and machine learning come together to discuss advancements in their field. The International Conference on Computer Aided Verification (CAV), is one such conference, sponsored and—this year—co-chaired by the AWS Automated Reasoning Group (ARG). CAV is dedicated to the advancement of the theory and practice of computer-aided formal analysis methods for hardware and software systems. This conference will take place next week, July 13-18, 2019 at The New School in New York City.

CAV covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation. CAV also publishes scientific papers from the research community that it considers vital to continue spurring advances in hardware and software verification. One of the papers accepted this year, Reachability Analysis for AWS-based Networks, is co-authored by John Backes of AWS. I sat down with him to talk about the unique network-based analysis service, Tiros, that’s described in the paper and how it’s helping to set new standards for cloud network security.

Tell me about yourself: what made you decide to become a software engineer in the automated reasoning space?

It sounds cliche, but I have wanted to work with computers since I was a child. I recently was looking through my old school work, and I found an assignment from the second grade where I wrote about “What I wanted to be when I grow up.” I had drawn a crude picture of someone working on a computer and wrote “I want to be a computer programmer.” At university, I took a class on discrete mathematics where I learned about mathematical induction for the first time; it seemed like magic to me. I struggled a bit to develop proofs for the homework assignments and tests in the course. So the idea of writing a program to perform induction for me automatically became very compelling.

I decided to go to graduate school to do research related to proving the correctness of digital circuits. After graduating, I built automated reasoning tools for proving the correctness of software that controls airplanes and helicopters. I joined AWS because I wanted to prove properties about systems that are used by almost everyone.

I understand that your research paper on Tiros was recently published by CAV. What does the research paper cover?

Many influential papers in the space of automated reasoning have been published in CAV over the past three decades. We are publishing a paper at CAV 2019 about three different types of automated reasoning tools we used in the development of Tiros. It discusses the formal reasoning tools and techniques we used, which of them were able to scale, and which were not. The paper gives readers a blueprint for how they could build their own automated reasoning services on AWS.

What is Tiros? How is it being used in Amazon Inspector?

Tiros answers reachability questions about Amazon Virtual Private Cloud (Amazon VPC) networks. It allows customers to answer questions like “Which of my EC2 instances are reachable from the internet?” and “Is it possible for this Elastic Network Interface (ENI) to send traffic to that ENI?” Amazon Inspector uses Tiros to power its recently launched Network Reachability Rules package. Customers can use this rules package to produce findings about how traffic originating from outside their accounts can reach their Amazon EC2 instances (for example, via an internet gateway, elastic load balancer, or virtual private gateway) and via which ports. Inspector also makes suggestions about how to remediate findings that a customer would like to eliminate. For example, if a customer has an EC2 instance that has port 22 (commonly associated with SSH) open to the internet, Amazon Inspector will suggest what security group needs to be changed to eliminate this finding.

Why are networks difficult to understand? How is Tiros helping to solve that problem?

As customers add more components and open them up to access from more addresses, the number of possible paths traffic can take through a network increases exponentially. It may be feasible to test all of the paths through a network with a dozen computers, but it would take longer than the heat death of the universe to test all possible paths of a network with hundreds of components (elastic load balancers, NAT gateways, network access control lists, EC2 instances, and so on). Tiros reasons about all possible network paths completely, using “symbolic methods,” where it does not send any packets but instead treats the network as a mathematical object. It does this by gathering information about how a VPC is configured using the describe APIs of relevant services. It takes this information and generates a set of logical constraints. It then proves properties about these sets of constraints using something called an SMT solver [Editor’s note: discussed below].

Tiros relies on the use of automated reasoning techniques and SMT solvers to provide customers with a better understanding of potential network vulnerabilities. Can you explain what these concepts are and how they’re being used in Tiros?

SMT stands for Satisfiability Modulo Theories. SMT solvers are general-purpose software tools that solve collections of mathematical constraints. The algorithms and heuristics that power these tools have been steadily improving over the past three decades, which means that if you can translate a problem into a form that can be solved by an SMT solver, you can take advantage of highly optimized algorithms that have been continuously improved over decades. There are tutorials online about how to use SMT solvers to solve all sorts of interesting constraint problems. Another AWS service called Zelkova uses SMT solvers to answer questions about IAM policies. Tiros uses an SMT solver called MonoSAT to encode reachability constraints about VPC networks. The figure below shows how we encode constraints about what types of packets are allowed to flow from a subnet to an ENI:
 
[Figure: the constraints Tiros generates for packet flow between a subnet and an ENI]

This diagram is from the CAV paper. It illustrates the constraints that Tiros generates to reason about packets moving between subnets and ENIs. Informally, these constraints say that a packet is allowed to flow from an ENI out to its subnet’s route table only if the source IP address of the packet is the same as the IP address of the ENI. Likewise, a packet can flow from a subnet to an ENI only if the destination IP address of the packet is the same as that of the ENI.

Tiros generates all sorts of constraints like this to represent the rules of routing in VPCs. If the SMT solver is able to find a solution that satisfies all of the constraints, then that solution corresponds to a valid path that a packet can take through the VPC from some source to some destination. Someone using Tiros can then inspect these paths to determine the source of a potential network misconfiguration.
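To make the encoding idea concrete, here is a small sketch of the subnet and ENI constraints described above, written against the open-source Z3 SMT solver rather than MonoSAT, which is what Tiros actually uses. The variable names and the ENI address are illustrative assumptions, not values from the paper.

from z3 import BitVec, BitVecVal, Bool, Implies, Solver, sat

# 32-bit symbolic source and destination addresses of a single packet.
src_ip = BitVec('src_ip', 32)
dst_ip = BitVec('dst_ip', 32)

# Hypothetical ENI with private IP 10.0.0.5 (0x0A000005).
eni_ip = BitVecVal(0x0A000005, 32)

# Boolean "hop" variables: does the packet traverse this edge?
eni_to_subnet = Bool('eni_to_subnet')
subnet_to_eni = Bool('subnet_to_eni')

s = Solver()
# A packet may flow from the ENI out to its subnet's route table only if
# the packet's source IP equals the ENI's IP.
s.add(Implies(eni_to_subnet, src_ip == eni_ip))
# A packet may flow from the subnet to the ENI only if the packet's
# destination IP equals the ENI's IP.
s.add(Implies(subnet_to_eni, dst_ip == eni_ip))

# Ask the solver: is there any packet that can leave the ENI?
s.add(eni_to_subnet)
if s.check() == sat:
    print('feasible packet, src_ip =', s.model()[src_ip])

A satisfying assignment here plays the role of a concrete witness packet; in Tiros, a chain of such satisfied hop constraints is exactly the "valid path" a user inspects.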

Is Tiros helping customers meet their compliance requirements? How?

Many customers need to meet compliance standards such as PCI, FedRAMP, and HIPAA. The requirements in these standards call for evidence of properly configured network controls. For example, Requirement 11 of the PCI DSS calls for regular penetration testing and network vulnerability scans. Customers can use Amazon Inspector to automatically schedule assessments on a regular cadence and generate evidence that helps them meet this requirement.
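As an illustration, the hedged boto3 sketch below starts an Inspector (Classic) assessment run on demand; in practice you might trigger something like this from a scheduled job. The template ARN and run name are placeholders.

import boto3

inspector = boto3.client('inspector')

# Start a run of an existing assessment template that includes the Network
# Reachability rules package. The template ARN below is a placeholder.
run_arn = inspector.start_assessment_run(
    assessmentTemplateArn='arn:aws:inspector:us-east-1:111122223333:target/0-example/template/0-example',
    assessmentRunName='weekly-network-reachability-check'
)['assessmentRunArn']
print('Started assessment run:', run_arn)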

What do you tell your friends and family about what you do?

I tell them that AWS is responsible for the security of the cloud, and AWS customers are responsible for their security in the cloud. AWS refers to this concept as the Shared Responsibility Model. I explain that I work on a technology called Tiros that automatically produces mathematical proofs to enable AWS customers to build secure applications in the cloud.

What’s next for Tiros? For automated reasoning at AWS?

AWS is constantly adding new networking features. For example, we recently announced support for AWS Direct Connect in AWS Transit Gateway. Tiros is continuously updated to reason about these new services and features, so customers see reachability results that reflect the latest VPC capabilities. Right now, we are focused on how Tiros can help customers with compliance. We plan to integrate Tiros results into other services to help produce evidence of compliance that customers can provide to auditors.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

How to migrate a digital signing workload to AWS CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-migrate-a-digital-signing-workload-to-aws-cloudhsm/

Is your on-premises hardware security module (HSM) at end of life? Does continued maintenance of your on-premises hardware take a lot of time and cost a lot of money? Do you want or need all of your workloads to be performed on AWS? By migrating these workloads to AWS CloudHSM, you receive automated backups, low-cost HSMs, managed maintenance, automatic recovery in the event of a hardware failure, integrated fault tolerance, and high availability. One such workload you might consider migrating is the secret key material used for digital signing operations.

Enterprise certificate authority (CA) or public key infrastructure (PKI) applications use the private portion of an asymmetric key pair, generated and stored in an HSM, to perform signing operations. Examples of such operations include creating digital certificates for web servers or IoT devices, signing files, and negotiating TLS sessions. Migrating this type of workload to AWS can save you time and money. If your HSM is at end of life and you need an alternative, you can migrate the digital signing workload to AWS CloudHSM in just a few steps.

This post will focus on a workload that allows you to create and use a digital certificate to digitally sign an arbitrary file. I’ll show you how to create a new asymmetric key pair and generate the corresponding certificate signing request (CSR) on AWS CloudHSM. This CSR, once signed by the appropriate issuing CA, allows your new key pair and the associated certificate to be trusted in the same way as the key pairs in your original HSM. You could then move traffic related to signing operations or issuing certificates to your AWS CloudHSM cluster.

Background

Before I walk you through the steps of migrating a certificate signing workload into CloudHSM, I’ll provide a little background information so you’ll know how CloudHSM, PKI, and CAs work together. Every certificate is associated with a key pair made up of a private (secret) key and a public key. The private key associated with a certificate needs to be kept confidential, so it typically resides on an HSM. The public portion of the key pair is not confidential, is included in the certificate, and can be shared with anyone who wants to verify a digital signature made with the corresponding private key.

In a PKI, a CA is the trusted entity that issues digital certificates on behalf of end-entities. At the top of the trust hierarchy is a root CA, which is implicitly trusted when it is established because it acts as the root of trust for the intermediate CAs and end-entity certificates issued underneath it. Intermediate CAs are trusted because their certificates are signed by the root CA. Intermediate CAs in turn sign end-entity certificates, which are used to authenticate the identities of various actors across the data transfer process. A common use case for end-entity certificates is web servers, so that connecting clients can verify the server’s identity. Generally, end-entity certificates are valid for 1-3 years, intermediate CA certificates for 5-10 years, and root CA certificates for 30 years or more.
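To make the sign-and-verify relationship concrete, here is a small, self-contained sketch using the Python cryptography library. It is purely illustrative: in the walkthrough below, the private key is generated on, and never leaves, the CloudHSM cluster.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair. In the CloudHSM workflow this happens on the HSM
# and the private key is never exportable; here it is in memory for clarity.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # shareable; embedded in a certificate

message = b'contents of the file being signed'
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can verify the signature. verify() raises
# InvalidSignature if the message or signature has been tampered with.
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print('signature verified')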

Beyond ensuring non-repudiation of objects signed by end-entity certificates (that is, proving the owner of the private key performed the signing operation), there is still the problem of trusting that the owner of the private key is the identity they claim to be. When evaluating trust in this way, there are generally two options: relying on public CAs or on private CAs. Public CAs widely distribute the public keys of their root certificates into popular client trust stores (for example, browsers and operating systems). This allows users to verify that the identity of the end-entity has been attested to by a publicly trusted CA, which helps when the signer and the verifier of a digital asset don’t know each other and haven’t shared cryptographic material in advance. Private CAs are those for which there are no widely distributed copies of their associated public keys. The verifier has to retrieve the public key from the private CA and explicitly trust the certificate without any third-party attestation of the signer’s identity. This is appropriate when signers and verifiers are in the same company or know each other. Examples of when to use a private CA include securing virtual private networks, data or file replication between internal servers, remote backups, file sharing, email, or other personal accounts.

Regardless of the certificate trust model you need, AWS CloudHSM can be used to create the initial key pair and CSR for both public and private CA requests. Note that AWS offers some alternatives for certificate management that may simplify your workloads without having to use AWS CloudHSM directly. AWS Certificate Manager (ACM) automatically creates key pairs and issues public or private certificates to identify resources within your organization. For use cases that need capabilities not yet supported by ACM, or in unusual situations in which a single-tenant HSM under your control is required for compliance reasons, you can use AWS CloudHSM directly for key generation and signing operations.

Organizations currently using an on-premises HSM to create the asymmetric keys used in digital certificates often use a vendor-proprietary mechanism to replicate key material across multiple HSMs for resiliency. However, this method prevents the key material from ever being transferred to an HSM offered by a different vendor. Consider it “vendor lock-in” by design. The private keys corresponding to the certificates you use for signing and authentication are locked inside that HSM. But if they are locked inside, how do you move to AWS CloudHSM? The answer is that you don’t have to rely on these inaccessible keys: you can create a new key pair and use it within AWS CloudHSM to begin issuing end-entity certificates.

Solution overview

I will walk through creating a new private key in AWS CloudHSM using the Windows client, and using Microsoft certreq to generate a corresponding CSR. You provide this CSR to your private or public CA to receive a signed certificate in return. This certificate and its public key then need to be propagated to wherever your signatures are verified. At the end of this post, I will show you how to verify your digital signatures using Microsoft SignTool. SignTool is provided by Microsoft to allow Windows users to digitally sign files, verify file signatures, and timestamp files.
 

Figure 1: Procedural diagram

As shown in the diagram above, the steps followed in this post are:

  1. Create a new RSA private key using KSP/CNG through the AWS CloudHSM Windows client.
  2. Using Microsoft certreq, create your CSR.
  3. Provide the CSR to your CA for signing.
  4. Use Microsoft SignTool to sign files in your environment.

Note: You may have to register this new certificate with any partners that do not automatically verify the entire certificate chain. This could include third-party applications, vendors, or outside entities that use your certificates to determine trust.

Prerequisites

In this walkthrough, I assume that you already have an AWS CloudHSM cluster set up and initialized with at least one HSM device, and an Amazon Elastic Compute Cloud (EC2) Windows-based instance with the AWS CloudHSM client, PowerShell, and Windows SDK with Microsoft SignTool installed. You must have a crypto user (CU) on the HSM to perform the steps in this post.

Deploying the solution

Step 1: Create a new private key using KSP/CNG through the AWS CloudHSM Windows client

On your Windows server where the AWS CloudHSM Windows client is installed, use a text editor to create a certificate request file named IISCertRequest.inf. For the purpose of this post, I have filled out an example file below.


[Version]
Signature = "$Windows NT$"

[NewRequest]
; Certificate subject; replace these values with your own
Subject = "CN=example.com,C=US,ST=Washington,L=Seattle,O=ExampleOrg,OU=WebServer"
HashAlgorithm = SHA256
KeyAlgorithm = RSA
KeyLength = 2048
; Directs certreq to generate the key pair on the CloudHSM cluster
ProviderName = "Cavium Key Storage Provider"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
; Store the key in the machine store rather than the user's store
MachineKeySet = True

Step 2: Using Microsoft certreq, create your CSR

On the same server, open PowerShell and, at the PowerShell prompt, create a CSR from the IISCertRequest.inf file by using the Windows certreq command. Here’s an example of the command. Remember to replace the file names in <angle brackets> with your own.


PS C:\>certreq -new <IISCertRequest.inf> <IISCertRequest.csr>
SDK Version: 2.03
CertReq: Request Created

If successful, you’ll see the “Request Created” message above, as well as the new file <IISCertRequest.csr> on your server. You’ll provide this CSR to your public CA of choice for certificate issuance; this needs to be completed manually, using your CA’s process for submitting certificate requests.

Step 3: Provide the CSR to your CA for signing

The CA that signed your existing end-entity certificates, whose keys were generated by your original HSM, is the same CA you use to sign the new certificates whose keys were generated by AWS CloudHSM. There are many CAs to choose from, such as DigiCert, Trustwave, and GoDaddy. Follow your CA’s steps for submitting your CSR, and you’ll receive your signed certificate in return.

Step 4: Use Microsoft SignTool to sign files in your environment

When you receive your signed certificate back from your chosen CA, save a copy locally on your Windows server. Then, move the certificate file to the Personal Certificate Store in Windows so it can be used by other applications, such as Microsoft SignTool. Here’s an example of the command. Be sure to replace the value in <angle brackets> with your actual certificate name.
PS C:\>certreq -accept <signedCertificate.cer>

Now, the certificate is ready for use, and I’ll show you how to use it to sign a file. First, you have to get the thumbprint of your certificate. To do this, open PowerShell as an Administrator (right-click the app and choose Run as Administrator). Type this command:
PS C:\>Get-ChildItem -path cert:\LocalMachine\My

If successful, you should see an output similar to this. Copy the thumbprint that is returned. You’ll need it when you perform the actual signing operation on a file.


Thumbprint                                  Subject
----------                                  -------
49DF7HDJT84723FDKCURLSXYRF9830568CXHSUB2    CN=WINDOWS-CA
VJFU57E6DI9DKMCHAKLDFJA8E73739Q04730QU7A    CN=www.example.com, OU=Certif….

To open the SignTool application, navigate to the app’s directory within PowerShell. By default, this is typically:
C:\Program Files (x86)\Windows Kits\<SDK version>\bin\<version number>\<CPU architecture>

For example, if you had downloaded the Microsoft Windows SDK 10 version, the application would be stored in:

C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64

When you’ve located the directory, sign your file by running the command below. Remember to replace the values in <angle brackets> with your own. The test.exe file in this example can be any valid executable file in your directory.
PS C:\>.\signtool.exe sign /v /fd sha256 /sha1 <thumbprint> /sm /as C:\Users\Administrator\Desktop\<test.exe>

You should see a message like this:


Done Adding Additional Store
Successfully signed C:\Users\Administrator\Desktop\<test.exe>

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

One last optional step: you can verify the signature on the file by using the command below. Again, replace the value in <angle brackets> with your own.
PS C:\>.\signtool.exe verify /v /pa C:\Users\Administrator\Desktop\<test.exe>

You’ve now successfully migrated your file signing workload to AWS CloudHSM. If your signing certificate was issued by a private CA rather than a publicly trusted CA, make sure to deploy a copy of the root CA certificate and any intermediate certificates from the private CA to any systems on which you want to verify the integrity of your signed files.

Conclusion

In this post, I walked you through creating a new RSA asymmetric key pair and using it to create a CSR. After supplying the CSR to your chosen CA and receiving a signed certificate in return, I showed you how to use Microsoft SignTool with AWS CloudHSM to sign files in your environment. You can now use AWS CloudHSM to sign code, documents, or other certificates in the same way you did with your original HSMs.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority, and she shows this by improving processes and documentation and by building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.