Tag Archives: Security, Identity & Compliance

Building an AWS Landing Zone from Scratch in Six Weeks

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/building-an-aws-landing-zone-from-scratch-in-six-weeks/

In an effort to deliver a simpler, smarter, and more unified experience on its website, the UK’s Ministry of Justice and its Lead Technical Architect, James Abley, created a bespoke AWS Landing Zone, a pre-defined template for an AWS account or infrastructure. And they did it in six weeks.

Supporting 33 agencies and public bodies, and making sure they all work together, the Ministry of Justice is at the heart of the United Kingdom’s justice system. Its task is to look after all parts of the justice system, including the courts, prisons, probation services, and legal aid, striving to bring the principles of justice to life for everyone in society.

In a This Is My Architecture video, shot at re:Invent 2018 in Las Vegas, James talks with AWS Solutions Architect Simon Treacy about the importance of delivering a consistent experience to his website’s customers, a mix of citizens and internal legal aid agency case workers.

Utilizing a number of AWS services, James walks us through the user experience and explains why he decided to put Amazon CloudFront and AWS WAF (Web Application Firewall) up front to improve the security posture of the ministry’s legacy applications and extend their lifespan. James also explains how he split traffic between two Availability Zones, using Elastic Load Balancing (ELB) to provide higher availability and resilience, which will help with zero-downtime deployments later on.

 

About the author

Annik Stahl is a Senior Program Manager at AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

 

AWS Security Profiles: Tracy Pierce, Senior Consultant, Security Specialty, Remote Consulting Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-tracy-pierce-senior-consultant-security-specialty-rcs/


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You’ve worn a lot of hats at AWS. What do you do in your current role, and how is it different from previous roles?

I joined AWS as a Customer Support Engineer. Currently, I’m a Senior Consultant, Security Specialty, for Remote Consulting Services, which is part of the AWS Professional Services (ProServe) team.

In my current role, I work with ProServe agents and Solution Architects who might be out with customers onsite and who need stuff built. “Stuff” could be automation, like AWS Lambda functions or AWS CloudFormation templates, or even security best practices documentation… you name it. When they need it built, they come to my team. Right now, I’m working on an AWS Lambda function to pull AWS CloudTrail logs so that you can see if anyone is making policy changes to any of your AWS resources—and if so, have it written to an Amazon Aurora database. You can then check to see if it matches the security requirements that you have set up. It’s fun! It’s new. I’m developing new skills along the way.

In my previous Support role, my work involved putting out fires, walking customers through initial setup, and showing them how to best use resources within their existing environment and architecture. My position as a Senior Consultant is a little different—I get to work with the customer from the beginning of a project rather than engaging much later in the process.

What’s your favorite part of your job?

Talking with customers! I love explaining how to use AWS services. A lot of people understand our individual services but don’t always understand how to use multiple services together. We launch so many features and services that it’s understandably hard to keep up. Getting to help someone understand, “Hey, this cool new service will do exactly what I want!” or showing them how it can be combined in a really cool way with this other new service—that’s the fun part.

What’s the most challenging part of your job?

Right now? Learning to code. I don’t have a programming background, so I’m learning Python on the fly with the help of some teammates. I’m a very graphic-oriented, visual learner, so writing lines of code is challenging. But I’m getting there.

What career advice would you offer to someone just starting out at AWS?

Find a thing that you’re passionate about, and go for it. When I first started, I was on the Support team in the Linux profile, but I loved figuring out permissions and firewall rules and encryption. I think AWS had about ten services at the time, and I kept pushing myself to learn as much as I could about AWS Identity and Access Management (IAM). I asked enough questions to turn myself into an expert in that field. So, my best advice is to find a passion and don’t let anything hold you back.

What inspires you about security? Why is it something you’re passionate about?

It’s a puzzle, and I love puzzles. We’re always trying to stay one step ahead, which means there’s always something new to learn. Every day, there are new developments. Working in Security means trying to figure out how this ever-growing set of puzzles and pieces can fit together—if one piece could potentially open a back door, how can you find a different piece that will close it? Figuring out how to solve these challenges, often while others in the security field are also working on them, is a lot of fun.

In your opinion, what’s the biggest challenge facing cloud security right now?

There aren’t enough people focusing on cybersecurity. We’re in an era where people are migrating from on-prem to cloud, and it requires a huge shift in mindset to go from working with on-prem hardware to systems that you can no longer physically put your hands on. People are used to putting in physical security restraints, like making sure door locks and badges are required for entry. When you move to the cloud, you have to start thinking not just about security group rules—like who’s allowed access to your data—but about all the other functions, features, and permissions that are a part of your cloud environment. How do you restrict those permissions? How do you restrict them for a certain team versus certain projects? How can you best separate job functions, projects, and teams in your organization? There’s so much more to cybersecurity than the stories of “hackers” you see on TV.

What’s the most common misperception you encounter about cloud security?

That it’s a one-and-done thing. I meet a lot of people who think, “Oh, I set it up” but who haven’t touched their environment in four years. The cloud is ever-changing, so your production environment and workloads are ever-changing. They’re going to grow; they’ll need to be audited in some fashion. It’s important to keep on top of that. You need to audit permissions, audit who’s accessing which systems, and make sure the systems are doing what they’re supposed to. You can’t just set it up and be finished.

How do you help educate customers about these types of misperceptions?

I go to AWS Pop-up Lofts periodically, plus conferences like re:Inforce and re:Invent, where I spend a lot of time helping people understand that security is a continuous thing. Writing blog posts also really helps, since it’s a way to show customers new ways of securing their environment using methods that they might not have considered. I can take edge cases that we might hear about from one or two customers, but which probably affect hundreds of other organizations, and reach out to them with some different setups.

You’re leading a re:Inforce builders session called “Automating password and secrets, and disaster recovery.” What’s a builders session?

Builders sessions are basically labs: There will be a very short introduction to the session, where you’re introduced to the concepts and services used in the lab. In this case, I’ll talk a little about how you can make sure your databases and resources are resilient and that you’ve set up disaster recovery scenarios.

After that, I walk around while people try out the services, hands-on, for themselves, and I see if anyone has questions. A lot of people learn better if they actually get a chance to play with things instead of just reading about them. If people run into issues, like, “Why does the code say this?” or “Why does it create this folder over here in a different region?” I can answer those questions in the moment.

How did you arrive at your topic?

It’s based on a blog post that I wrote, called “How to automate replication of secrets in AWS Secrets Manager across AWS Regions.” It was a highly requested feature from customers that were already dealing with RDS databases. I actually wrote two posts–the second post focused on Windows passwords, and it demonstrated how you can have a secure password for Windows without having to share an SSH key across multiple entities in an organization. These two posts gave me the idea for the builders session topic: I want to show customers that you can use Secrets Manager to store sensitive information without needing to have a human manually read it in plain text.

A lot of customers are used to an on-premises access model, where everything is physical and things are written in a manual—but then you have to worry about safeguarding the manual so that only the appropriate people can read it. With the approach I’m sharing, you can have two or three people out of your entire organization who are in charge of creating the security aspects, like password policy, creation, rotation, and function. And then all other users can log in: The system pulls the passwords for them, inputs the passwords into the application, and the users do not see them in plain text. And because users have to be authenticated to access resources like the password, this approach prevents people from outside your organization from going to a webpage and trying to pull that secret and log in. They’re not going to have permissions to access it. It’s one more way for customers to lock down their sensitive data.

What are you hoping that your audience will do differently as a result of this session?

I hope they’ll begin migrating their sensitive data—whether that’s the keys they’re using to encrypt their client-side databases, or their passwords for Windows—because their data is safer in the cloud. I want people to realize that they have all of these different options available, and to start figuring ways to utilize these solutions in their own environment.

I also hope that people will think about the processes that they have in their own workflow, even if those processes don’t extend to the greater organization and it’s something that only affects their job. For example, how can they make changes so that someone can’t just walk into their office on any given day and see their password? Those are the kinds of things I hope people will start thinking about.

Is there anything else you want people to know about your session?

Security is changing so much and so quickly that nobody is 100% caught up, so don’t be afraid to ask for help. It can feel intimidating to have to figure out new security methods, so I like to remind people that they shouldn’t be afraid to reach out or ask questions. That’s how we all learn.

You love otters. What’s so great about them?

I’m obsessed with them—and otters and security actually go together! When otters are with their family group, they’re very dedicated to keeping outsiders away and not letting anybody or anything get into their den, into their home, or into their family. There are large Amazon river otters that will actually take on caimans, as a family group, to make sure the caimans don’t get anywhere near the nest and attack the pups. Otters also try to work smarter, not harder, which I’ve found to be a good motto. If you can accomplish your goal through a small task, and it’s efficient, and it works, and it’s secure, then go for it. That’s what otters do.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority, and she shows this by improving processes and documentation and by building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls upon request.

Spring 2019 SOC 2 Type 1 Privacy report now available

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-soc-2-type-1-privacy-report-now-available/

At AWS, our customers’ security and privacy are of the highest importance, and we continue to provide transparency into our security and privacy posture. Following our first SOC 2 Type 1 Privacy report, released in December 2018, AWS is proud to announce the release of the Spring 2019 SOC 2 Type 1 Privacy report. The Spring 2019 SOC 2 Privacy report provides you with a third-party attestation of our systems and the suitability of the design of our privacy controls. The report also provides a detailed description of those controls, the same controls that AWS uses to address the GDPR requirements around data security and privacy.

This updated report is part of the SOC family of reports and has been updated to align with the new Association of International Certified Professional Accountants (AICPA) Trust Service Criteria. The Trust Service Criteria align with the Committee of Sponsoring Organizations of the Treadway Commission (COSO) 2013 framework, which has been designed to better address cybersecurity risks.

The highlights of the new Trust Service Criteria include:

  • A definition of principal service commitments and system requirements.
  • Restructuring and addition of supplemental criteria to better address cybersecurity risks.
  • New description criteria requiring the disclosure of system incidents.

The scope of the privacy report includes systems that AWS uses to collect personal information and all 104 services and locations in scope for the latest AWS SOC reports. You can download the new SOC 2 Type 1 Privacy report now through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Spring 2019 SOC reports now available with 104 services in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-soc-reports-now-available-with-104-services-in-scope/

We’re celebrating the addition of 31 new services in scope with our latest SOC report, pushing AWS past the century mark for the first time – with 104 total services in scope, to be exact! These services are now available under our System and Organizational Controls (SOC) 1, 2, and 3 audits, including the 31 new services added during this most recent audit cycle. These SOC reports are now available to you on demand in the AWS Management Console. The SOC 3 report can also be downloaded online as a PDF.

The SOC 2 report has been updated to align with the new Association of International Certified Professional Accountants (AICPA) Trust Service Criteria. The new Trust Service Criteria align with the Committee of Sponsoring Organizations of the Treadway Commission (COSO) 2013 framework and are designed to provide flexibility that better addresses cybersecurity risks. The new Trust Service Criteria provide customers with more information on how AWS mitigates cybersecurity risks. Updates related to the new Trust Service Criteria are as follows:

  • Restructuring and realignment of the Trust Service Criteria with the COSO 2013 Framework.
  • Restructuring and addition of supplemental criteria to better address cybersecurity risks.
  • Inclusion of the 17 COSO principles within the SOC 2 common criteria.
  • Additional points of focus added to all criteria, such as requirements to add additional description around service commitments and system requirements.

Here are the 31 services newly added to our SOC scope:

As always, my team strives to bring services into the scope of our compliance programs based on your architectural and regulatory needs. Please reach out to your AWS representatives to let us know what additional services you would like to see in scope across any of our compliance programs.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Create fine-grained session permissions using IAM managed policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/create-fine-grained-session-permissions-using-iam-managed-policies/

As a security best practice, AWS Identity and Access Management (IAM) recommends that you use temporary security credentials from AWS Security Token Service (STS) when you access your AWS resources. Temporary credentials are short-term credentials generated dynamically and provided to the user upon request. Today, one of the most widely used mechanisms for requesting temporary credentials in AWS is an IAM role. The advantage of using an IAM role is that multiple users in your organization can assume the same IAM role. By default, all users assuming the same role get the same permissions for their role session. To create distinctive role session permissions or to further restrict session permissions, users or systems can set a session policy when assuming a role. A session policy is an inline permissions policy which users pass in the session when they assume the role. You can pass the policy yourself, or you can configure your broker to insert the policy when your identities federate in to AWS (if you have an identity broker configured in your environment). This allows your administrators to reduce the number of roles they need to create, since multiple users can assume the same role yet have unique session permissions. If users don’t require all the permissions associated to the role to perform a specific action in a given session, your administrator can configure the identity broker to pass a session policy to reduce the scope of session permissions when users assume the role. This helps administrators set permissions for users to perform only those specific actions for that session.

With today’s launch, AWS now enables you to specify multiple IAM managed policies as session policies when users assume a role. This means you can use multiple IAM managed policies to create fine-grained session permissions for your user’s sessions. Additionally, you can centrally manage session permissions using IAM managed policies.
In this post, I review session policies and their current capabilities, introduce the concept of using IAM managed policies as session policies to control session permissions, and show you how to use managed policies to create fine-grained session permissions in AWS.

How do session policies work?

Before I walk through an example, I’ll review session policies.

A session policy is an inline policy that you can create on the fly and pass in the session during role assumption to further scope the permissions of the role session. The effective permissions of the session are the intersection of the role’s identity-based policies and the session policy. The maximum permissions that a session can have are the permissions that are allowed by the role’s identity-based policies. You can pass a single inline session policy programmatically by using the policy parameter with the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, and GetFederationToken API operations.

Next, I’ll provide an example with an inline session policy to demonstrate how you can restrict session permissions.

Example: Passing a session policy with AssumeRole API to restrict session permissions

Consider a scenario where security administrator John has administrative privileges when he assumes the role SecurityAdminAccess in the organization’s AWS account. When John assumes this role, he knows the specific actions he’ll perform using this role. John is cautious of the role permissions and follows the practice of restricting his own permissions by using a session policy when assuming the role. This way, John ensures that at any given point in time, his role session can only perform the specific action for which he assumed the SecurityAdminAccess role.

In my example, John only needs permissions to access an Amazon Simple Storage Service (S3) bucket called NewHireOrientation in the same account. He passes a session policy using the policy.json file below to reduce his session permissions when assuming the role SecurityAdminAccess.


{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "Statement1",
    "Effect": "Allow",
    "Action": ["s3:ListBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::NewHireOrientation", "arn:aws:s3:::NewHireOrientation/*"]
  }]
}

In this example, the action and resources elements of the policy statement allow access only to the NewHireOrientation bucket and all the objects inside this bucket.

Using the AWS Command Line Interface (AWS CLI), John can pass the session policy’s file path (that is, file://policy.json) while calling the AssumeRole API with the following commands:


aws sts assume-role \
--role-arn "arn:aws:iam::111122223333:role/SecurityAdminAccess" \
--role-session-name "s3-session" \
--policy file://policy.json

When John assumes the SecurityAdminAccess role using the above command, his effective session permissions are the intersection of the permissions on the role and the session policy. This means that although the SecurityAdminAccess role had administrative privileges, John’s resulting session permissions are s3:ListBucket and s3:GetObject on the NewHireOrientation bucket. This way, John can ensure he only has access to the NewHireOrientation bucket for this session.
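
If you’re scripting this in Python rather than the CLI, a minimal boto3 sketch of the same call might look like the following; the role ARN, session name, and policy document are the example values from above, and the variable names are illustrative only.


    import json
    import boto3

    # Inline session policy that scopes the session down to the NewHireOrientation bucket
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::NewHireOrientation",
                "arn:aws:s3:::NewHireOrientation/*"
            ]
        }]
    }

    sts = boto3.client("sts")

    # The effective permissions of the returned credentials are the intersection
    # of the role's identity-based policies and this session policy.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/SecurityAdminAccess",
        RoleSessionName="s3-session",
        Policy=json.dumps(session_policy)
    )
    temporary_credentials = response["Credentials"]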

Using IAM managed policies as session policies

You can now pass up to 10 IAM managed policies as session policies. This gives you the ability to further restrict session permissions. The managed policy you pass can be AWS managed or customer managed. To pass managed policies as session policies, you need to specify the Amazon Resource Name (ARN) of the IAM policies using the new policy-arns parameter in the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken API operations. You can use existing managed policies or create new policies in your account and pass them as session policies with any of the aforementioned APIs. The managed policies passed in the role session must be in the same account as that of the role. Additionally, you can pass an inline session policy and ARNs of managed policies in the same role session. To learn more about the sizing guidelines for session policies, please review the STS documentation.

Next, I’ll provide an example using IAM managed policies as session policies to help you understand how you can use multiple managed policies to create fine-grained session permissions.

Example: Passing IAM managed policies in a role session

Consider an example where Mary has a software development team in California (us-west-1) working on a project using Amazon Elastic Compute Cloud (Amazon EC2). This team needs permissions to spin up new EC2 instances to meet the project’s scalability requirements. Mary’s organization has a security policy that requires developers to create and manage AWS resources in their respective geographic locations only. This means a developer from California should have permissions to launch new EC2 instances only in California. Now, Mary’s organization has an identity and authentication system such as Active Directory, for which all employees already have identities created. Additionally, there is a custom identity broker application which verifies that employees are signed into the existing identity and authentication system. This broker application is configured to obtain temporary security credentials for the employees using the AssumeRole API. (To learn more about using an identity provider and identity broker with AWS, please see AWS Federated Authentication with Active Directory Federation Services.)

Mary creates a managed policy called DevCalifornia and adds a region restriction for California using the aws:RequestedRegion condition key. Following the best practice of granting least privilege, Mary lists out the specific actions the developers would need for spinning up EC2 instances:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "ec2:RunInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-west-1"
                }
            }
        }
    ]
}

The above policy grants specific permissions to launch EC2 instances. The condition element of the policy sets a restriction on the Region where these actions can be performed. The condition key aws:RequestedRegion ensures that these service-specific actions can only be performed in California.

For Mary’s team’s use case, instead of creating a new role, Mary uses an existing role in her account called EC2Admin, which has the AmazonEC2FullAccess AWS managed policy attached to it, granting full access to Amazon EC2. Next, Mary configures the identity broker in such a way that the developers from the team in California can assume the EC2Admin role but with reduced session permissions. The broker passes the DevCalifornia managed policy as a session policy to reduce the scope of the session permissions when a developer from Mary’s team assumes the role. This way, Mary can ensure the team remains compliant with her organization’s security policy.

If performed using the AWS CLI, the command would look like this:

aws sts assume-role --role-arn "arn:aws:iam::444455556666:role/EC2Admin" --role-session-name "teamCalifornia-session" --policy-arns arn="arn:aws:iam::444455556666:policy/DevCalifornia"

If you want to pass multiple managed policies as session policies, then the command would look like this:

aws sts assume-role --role-arn "arn:aws:iam::<accountID>:role/<RoleName>" --role-session-name "<example-session>" --policy-arns arn="arn:aws:iam::<accountID>:policy/<PolicyName1>" arn="arn:aws:iam::<accountID>:policy/<PolicyName2>"

In the above example, PolicyName1 and PolicyName2 can be AWS managed or customer managed policies. You can also use them in conjunction, where PolicyName1 is an AWS managed policy and PolicyName2 a customer managed policy.
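
For reference, here’s a rough boto3 equivalent of the multi-policy call, using the example account ID, role, and policy names from this walkthrough; the variable names are illustrative only.


    import boto3

    sts = boto3.client("sts")

    # Pass up to 10 managed policy ARNs; the effective session permissions are the
    # intersection of the role's identity-based policies and the session policies.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::444455556666:role/EC2Admin",
        RoleSessionName="teamCalifornia-session",
        PolicyArns=[
            {"arn": "arn:aws:iam::444455556666:policy/DevCalifornia"}
            # Additional AWS managed or customer managed policy ARNs can be listed here.
        ]
    )
    temporary_credentials = response["Credentials"]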

Conclusion

You can now use IAM managed policies as session policies in role sessions and federated sessions to create fine-grained session permissions. You can use this functionality today by creating IAM managed policies using your existing inline session policies and referencing their policy ARNs in your role sessions. You can also keep using your existing session policy and pass the ARNs of IAM managed policies using the new policy-arn parameter to further scope your session permissions.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Sulay Shah

Sulay is the product manager for the AWS Identity and Access Management (IAM) service. He strongly believes in the customer-first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

How to share encrypted AMIs across accounts to launch encrypted EC2 instances

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/how-to-share-encrypted-amis-across-accounts-to-launch-encrypted-ec2-instances/

Do you encrypt your Amazon Machine Images (AMIs) with AWS Key Management Service (AWS KMS) customer master keys (CMKs) for regulatory or compliance reasons? Do you launch instances with encrypted root volumes? Do you create a golden AMI and distribute it to other accounts in your organization for standardizing application-specific Amazon Elastic Compute Cloud (Amazon EC2) instance launches? If so, then we have good news for you!

We’re happy to announce that you can now share AMIs encrypted with customer-managed CMKs across accounts with a single API call. Additionally, you can launch an EC2 instance directly from an encrypted AMI that has been shared with you. Previously, this was possible for unencrypted AMIs only. Extending this capability to encrypted AMIs simplifies your AMI distribution process and reduces the snapshot storage cost associated with the older method of sharing encrypted AMIs that resulted in a copy in each of your accounts.

In this post, we demonstrate how you can share an encrypted AMI between accounts and launch an encrypted, Amazon Elastic Block Store (Amazon EBS) backed EC2 instance from the shared AMI.

Prerequisites for sharing AMIs encrypted with a customer-managed CMK

Before you can begin sharing the encrypted AMI and launching an instance from it, you’ll need to set up your AWS KMS key policy and AWS Identity and Access Management (IAM) policies.

For this walkthrough, you need two AWS accounts:

  1. A source account in which you build a custom AMI and encrypt the associated EBS snapshots.
  2. A target account in which you launch instances using the shared custom AMI with encrypted snapshots.

In this example, I’ll use fictitious account IDs 111111111111 and 999999999999 for the source account and the target account, respectively. In addition, you’ll need to create an AWS KMS customer master key (CMK) in the source account in the source region. For simplicity, I’ve created an AWS KMS CMK with the alias cmkSource under account 111111111111 in us-east-1. I’ll use this cmkSource to encrypt my AMI (ami-12345678) that I’ll share with account 999999999999. As you go through this post, be sure to change the account IDs, source account, AWS KMS CMK, and AMI ID to match your own.

Create the policy setting for the source account

First, an IAM user or role in the source account needs permission to share the AMI (an EC2 ModifyImageAttribute operation). The following JSON policy document shows an example of the permissions an IAM user or role policy needs to have. In order to share ami-12345678, you’ll need to create a policy like this:


    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2: ModifyImageAttribute",
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1::image/<12345678>"
            ]
        }
 ] 
}

Second, the target account needs permission to use cmkSource for re-encrypting the snapshots. The following steps walk you through how to add the target account’s ID to the cmkSource key policy.

  1. From the IAM console, select Encryption Keys in the left pane, and then select the source account’s CMK, cmkSource, as shown in Figure 1:
     
    Figure 1: Select "cmkSource"


  2. Look for the External Accounts subsection, type the target account ID (for example, 999999999999), and then select Add External Account.
     
    Figure 2: Enter the target account ID and then select "Add External Account"


Create a policy setting for the target account

Once you’ve configured the source account, you need to configure the target account. The IAM user or role in the target account needs to be able to perform the AWS KMS DescribeKey, CreateGrant, ReEncrypt* and Decrypt operations on cmkSource in order to launch an instance from a shared encrypted AMI. The following JSON policy document shows an example of these permissions:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:<111111111111>:key/<key-id of cmkSource>"
            ]                                                    
        }
    ]
}

Once you’ve completed the configuration steps in the source and target accounts, then you’re ready to share and launch encrypted AMIs. The actual sharing of the encrypted AMI is no different from sharing an unencrypted AMI. If you want to use the AWS Management Console, then follow the steps described in Sharing an AMI (Console). To use the AWS Command Line Interface (AWS CLI), follow the steps described in Sharing an AMI (AWS CLI).

Below is an example of what the CLI command to share the AMI would look like:


aws ec2 modify-image-attribute --image-id <ami-12345678> --launch-permission "Add=[{UserId=<999999999999>}]"
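
If you’d rather script the share from Python, a minimal boto3 sketch of the same operation could look like this; the AMI ID and target account ID are the example placeholders used throughout this post.


    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Grant launch permission on the encrypted AMI to the target account
    ec2.modify_image_attribute(
        ImageId="ami-12345678",
        LaunchPermission={"Add": [{"UserId": "999999999999"}]}
    )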

Launch an instance from the shared encrypted AMI

To launch an instance from an AMI that was shared with you, set the shared AMI’s ID in the ImageId parameter of the RunInstances API (or the image-id parameter of the run-instances CLI command). Optionally, to re-encrypt the volumes with a custom CMK in your account, you can specify the KmsKeyId in the block device mapping as follows:


    $> aws ec2 run-instances \
    --image-id <ami-12345678> \
    --count 1 \
    --instance-type m4.large \
    --region us-east-1 \
    --subnet-id subnet-aec2fc86 \
    --key-name 2016KeyPair \
    --security-group-ids sg-f7dbc78e \
    --block-device-mappings file://mapping.json

Where the mapping.json contains the following:


[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:us-east-1:<999999999999>:key/<abcd1234-a123-456a-a12b-a123b4cd56ef>"
        }
    }
]

While launching the instance from a shared encrypted AMI, you can specify a CMK of your choice. You may also choose cmkSource to encrypt volumes in your account. However, we recommend that you re-encrypt the volumes using a CMK in the target account. This protects you if the source CMK is compromised, or if the source account revokes permissions, which could cause you to lose access to any encrypted volumes you created using cmkSource.

Summary

In this blog post, we discussed how you can easily share encrypted AMIs across accounts to launch encrypted instances. AWS has extended the same capabilities to snapshots, allowing you to create encrypted EBS volumes from shared encrypted snapshots.

This feature is available through the AWS Management Console, AWS CLI, or AWS SDKs at no extra charge in all commercial AWS regions except China. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nishit Nagar

Nishit is a Senior Product Manager at AWS.

How to quickly launch encrypted EBS-backed EC2 instances from unencrypted AMIs

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/how-to-quickly-launch-encrypted-ebs-backed-ec2-instances-from-unencrypted-amis/

An Amazon Machine Image (AMI) provides the information that you need to launch an instance (a virtual server) in your AWS environment. There are a number of AMIs on the AWS Marketplace (such as Amazon Linux, Red Hat or Ubuntu) that you can use to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance. When you launch an instance from these AMIs, the resulting volumes are unencrypted. However, for regulatory purposes or internal compliance reasons, you might need to launch instances with encrypted root volumes. Previously, this required you to follow a multi-step process in which you maintained a separate, encrypted copy of the AMI in your account in order to launch instances with encrypted volumes. Now, you no longer need to maintain multiple AMI copies for encryption. To launch an encrypted Amazon Elastic Block Store (Amazon EBS) backed instance from an unencrypted AMI, you can directly specify the encryption properties in your existing workflow (such as with the RunInstances API or the Launch Instance Wizard). This simplifies the process of launching instances with encrypted volumes and reduces your associated AMI storage costs.

In this post, we demonstrate how you can start from an unencrypted AMI and launch an encrypted EBS-backed Amazon EC2 instance. We’ll show you how to do this from both the AWS Management Console, and using the RunInstances API with the AWS Command Line Interface (AWS CLI).

Launching an instance from the AWS Management Console

  1. Sign into the AWS Management Console and open the EC2 console.
  2. Select Launch instance, then follow the prompts of the launch wizard:
    1. In step 1 of the wizard, select the Amazon Machine Image (AMI) that you want to use.
    2. In step 2 of the wizard, select your instance type.
    3. In step 3, provide additional configuration details. For details about configuring your instances, see Launching an Instance.
    4. In step 4, specify your EBS volumes. The encryption properties of the volumes will be inherited from the AMI that you’ve chosen. If you’re using an unencrypted AMI, it will show up as “Not Encrypted.” From the dropdown, you can then select a customer master key (CMK) for encrypting the volume. You may select the same CMK for each volume that you want to create, or you may use a different CMK for each volume.
       
      Figure 1: Specifying your EBS volumes


  3. Select Review and then Launch. Your instance will launch with an encrypted Amazon EBS volume that uses the CMK you selected. To learn more about the launch wizard, see Launching an Instance with Launch Wizard.

Launching an instance from the RunInstances API

From the RunInstances API/CLI, you can provide the KmsKeyId for encrypting the volumes that will be created from the AMI by specifying encryption in the BlockDeviceMapping (BDM) object. If you don’t specify the KmsKeyId in the BDM but set the encryption flag to “true,” then the AWS managed CMK for Amazon EBS will be used to encrypt the volume.

For example, to launch an encrypted instance from an Amazon Linux AMI with an additional empty 100 GB of data volume (/dev/sdb), the API call would be as follows:


    $> aws ec2 run-instances \
    --image-id ami-009d6802948d06e52 \
    --count 1 \
    --instance-type m4.large \
    --region us-east-1 \
    --subnet-id subnet-aec2fc86 \
    --key-name 2016KeyPair \
    --security-group-ids sg-f7dbc78e \
    --block-device-mappings file://mapping.json

Where the mapping.json contains the following:


[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
                "Encrypted": true,
                "KmsKeyId": "arn:aws:kms:<us-east-1:012345678910>:key/<Example_Key_ID_12345>"
        }
    },

    {
        "DeviceName": "/dev/sdb",
        "Ebs": {
            "DeleteOnTermination": true,
            "VolumeSize": 100,
            "VolumeType": "gp2",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:<us-east-1:012345678910>:key/<Example_Key_ID_12345>"
        }
    }
]

You may specify different keys for different volumes. Providing a KmsKeyId without setting the encryption flag to “true” will result in an API error.
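
To illustrate the same launch from Python, here’s a rough boto3 sketch; the AMI ID, subnet, key pair, security group, and KMS key ARN are the example placeholders from the CLI call above and should be replaced with your own values.


    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch from an unencrypted AMI; the volumes created from it are encrypted
    # with the CMK specified in the block device mappings.
    response = ec2.run_instances(
        ImageId="ami-009d6802948d06e52",
        MinCount=1,
        MaxCount=1,
        InstanceType="m4.large",
        SubnetId="subnet-aec2fc86",
        KeyName="2016KeyPair",
        SecurityGroupIds=["sg-f7dbc78e"],
        BlockDeviceMappings=[
            {
                "DeviceName": "/dev/xvda",
                "Ebs": {
                    "Encrypted": True,
                    "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/Example_Key_ID_12345"
                }
            }
        ]
    )
    instance_id = response["Instances"][0]["InstanceId"]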

Conclusion

In this blog post, we demonstrated how you can quickly launch encrypted, EBS-backed instances from an unencrypted AMI in a few steps. You can also use the same process to launch EBS-backed encrypted volumes from unencrypted snapshots. This simplifies the process of launching instances with encrypted volumes and reduces your AMI storage costs.

This feature is available through the AWS Management Console, AWS CLI, or AWS SDKs at no extra charge in all commercial AWS regions except China. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nishit Nagar

Nishit is a Senior Product Manager at AWS.

Improve availability and latency of applications by using AWS Secrets Manager’s Python client-side caching library

Post Syndicated from Paavan Mistry original https://aws.amazon.com/blogs/security/improve-availability-and-latency-of-applications-by-using-aws-secret-managers-python-client-side-caching-library/

Note from May 10, 2019: We’ve updated a code sample for accuracy.


Today, AWS Secrets Manager introduced a client-side caching library for Python that improves the availability and latency of accessing and distributing credentials to your applications. It can also help you reduce the cost associated with retrieving secrets. In this post, I’ll walk you through the following topics:

  • An overview of the Secrets Manager client-side caching library for Python
  • How to use the Python client-side caching library to retrieve a secret

Here are the key benefits of client-side caching libraries:

  • Improved availability: You can cache secrets to reduce the impact of network availability issues such as increased response times and temporary loss of network connectivity.
  • Improved latency: Retrieving secrets from the local cache is faster than retrieving secrets by sending API requests to Secrets Manager within a Virtual Private Network (VPN) or over the Internet.
  • Reduced cost: Retrieving secrets from the cache can reduce the number of API requests made to and billed by Secrets Manager.
  • Automatic refresh of secrets: The library updates the cache by calling Secrets Manager periodically, ensuring your applications use the most current secret value. This ensures any regularly rotated secrets are automatically retrieved.
  • Implementation in just two steps: Add the Python library dependency to your application, and then provide the identifier of the secret that you want the library to use.

Using the Secrets Manager client-side caching library for Python

First, I’ll walk you through an example in which I retrieve a secret without using the Python cache. Then I’ll show you how to update your code to use the Python client-side caching library.

Retrieving a secret without using a cache

Using the AWS SDK for Python (Boto3), you can retrieve a secret from Secrets Manager using the API call flow, as shown below.

Figure 1: Diagram showing GetSecretValue API call without the Python cache


To understand the benefits of using a cache, I’m going to create a sample secret using the AWS Command Line Interface (AWS CLI):


aws secretsmanager create-secret --name python-cache-test --secret-string "cache-test"

The code below demonstrates a GetSecretValue API call to AWS Secrets Manager without using the cache feature. Each time the application makes a call, the AWS Secrets Manager GetSecretValue API will also be called. This increases the secret retrieval latency. Additionally, there is a minor cost associated with an API call made to the AWS Secrets Manager API endpoint.


    import boto3
    import base64
    from botocore.exceptions import ClientError
    
    def get_secret():
    
        secret_name = "python-cache-test"
        region_name = "us-west-2"
    
        # Create a Secrets Manager client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name
        )
    
        # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
        # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
        # We rethrow the exception by default.
    
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        except ClientError as e:
            if e.response['Error']['Code'] == 'DecryptionFailureException':
                # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
            else:
                # Rethrow all other errors by default.
                raise e
        else:
            # Decrypts secret using the associated KMS CMK.
            # Depending on whether the secret is a string or binary, one of these fields will be populated.
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
                print(secret)
            else:
                decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                
        # Your code goes here.
    
    get_secret()       

Using the Python client-side caching library to retrieve a secret

Using the Python cache feature, you can now use the cache library to reduce calls to the AWS Secrets Manager API, improving the availability and latency of your application. As shown in the diagram below, when you implement the Python cache, the call to retrieve the secret is routed to the local cache before reaching the AWS Secrets Manager API. If the secret exists in the cache, the application retrieves the secret from the client-side cache. If the secret does not exist in the client-side cache, the request is routed to the AWS Secrets Manager endpoint to retrieve the secret.

Figure 2: Diagram showing GetSecretValue API call using Python client-side cache


In the example below, I’ll implement a Python cache to retrieve the secret from a local cache, and hence avoid calling the AWS Secrets Manager API:


    import boto3
    import base64
    from aws_secretsmanager_caching import SecretCache, SecretCacheConfig

    from botocore.exceptions import ClientError

    def get_secret():

        secret_name = "python-cache-test"
        region_name = "us-west-2"

        # Create a Secrets Manager client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name
        )

        try:
            # Create a cache
            cache = SecretCache(SecretCacheConfig(), client)

            # Get secret string from the cache
            get_secret_value_response = cache.get_secret_string(secret_name)

        except ClientError as e:
            if e.response['Error']['Code'] == 'DecryptionFailureException':
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
        else:
            secret = get_secret_value_response
            print(secret)

        # Your code goes here.

    get_secret()

The cache allows advanced configuration using the SecretCacheConfig library. This library allows you to define cache configuration parameters to help meet your application security, performance, and cost requirements. The SDK enforces the configuration thresholds on maximum cache size, default secret version stage to request, and secret refresh interval between requests. It also allows configuration of various exception thresholds. Further detail is provided in the library documentation.

Based on the secret refresh interval defined in your cache configuration, the cache will check the version of the secret at the defined interval, using the DescribeSecret API to determine if a new version is available. If there is a newer version of the secret, the cache will update to the latest version from AWS Secrets Manager, using the GetSecretValue API. This ensures that an updated version of the secret is available in the cache.
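
As a rough illustration, the sketch below builds a cache with an explicit refresh interval. It assumes the secret_refresh_interval parameter of SecretCacheConfig (specified in seconds), so confirm the parameter name against the library documentation for the version you’re using, and it reuses the Secrets Manager client from the earlier examples.


    from aws_secretsmanager_caching import SecretCache, SecretCacheConfig

    # Check Secrets Manager for a newer secret version every 15 minutes instead of
    # the library default. (secret_refresh_interval is assumed here; verify it
    # against the version of the library you're using.)
    cache_config = SecretCacheConfig(secret_refresh_interval=900)
    cache = SecretCache(cache_config, client)

    secret = cache.get_secret_string("python-cache-test")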

Additionally, the Python client-side cache library allows developers to retrieve secrets from the cache directly, using the secret name through decorator functions. An example of using a decorator function is shown below:


    from aws_secretsmanager_caching.decorators import InjectKeywordedSecretString
 
    class TestClass:
        def __init__(self):
            pass
     
        @InjectKeywordedSecretString('python-cache-test', cache, arg1='secret_key1', arg2='secret_key2')
        def my_test_function(self, arg1, arg2):
            print("arg1: {}".format(arg1))
            print("arg2: {}".format(arg2))
     
    test = TestClass()
    test.my_test_function()    

To delete the secret created in this post, run the command below:


aws secretsmanager delete-secret --secret-id python-cache-test --force-delete-without-recovery

Summary

In this post, we’ve shown how you can improve availability, reduce latency, and reduce API call costs for your secrets by using the Secrets Manager client-side caching library for Python. To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secret Manager or refer to the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Paavan Mistry

Paavan is a Security Specialist Solutions Architect at AWS where he enjoys solving customers’ cloud security, risk, and compliance challenges. Outside of work, he enjoys reading about leadership, politics, law, and human rights.

How to BYOK (bring your own key) to AWS KMS for less than $15.00 a year using AWS CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-year-using-aws-cloudhsm/

Note: BYOK is helpful for certain use cases, but I recommend that you familiarize yourself with KMS best practices before you adopt this approach. You can review best practices in the AWS Key Management Services Best Practices (.pdf) whitepaper.

May 14, 2019: We’ve updated a sentence to clarify that this solution does not include instructions for creating an AWS CloudHSM cluster. For information about how to do this, please refer to our documentation.


Back in 2016, AWS Key Management Service (AWS KMS) announced the ability to bring your own keys (BYOK) for use with KMS-integrated AWS services and custom applications. This feature allows you more control over the creation, lifecycle, and durability of your keys. You decide the hardware or software used to generate the customer-managed customer master keys (CMKs), you determine when or if the keys will expire, and you get to upload your keys only when you need them and delete them when you’re done.

The documentation that walks you through how to import a key into AWS KMS provides an example that uses OpenSSL to create your master key. Some customers, however, don’t want to use OpenSSL. While it’s a valid method of creating the key material associated with a KMS CMK, security best practice is to perform key creation on a hardware security module (HSM) or a hardened key management system.

However, using an on-premises HSM to create and back up your imported keys can become expensive. You have to plan for factors like the cost of the device itself, its storage in a datacenter, electricity, maintenance of the device, and network costs, all of which can add up. An on-premises HSM device could run upwards of $10K annually even if used sparingly, in addition to the cost of purchasing the device in the first place. Even if you’re only using the HSM for key creation and backup and don’t need it on an ongoing basis, you might still need to keep it running to avoid complex re-initialization processes. This is where AWS CloudHSM comes in. CloudHSM offers HSMs that are under your control, in your virtual private cloud (VPC). You can spin up an HSM device, create your key material, export it, import it into AWS KMS for use, and then terminate the HSM (since CloudHSM saves your HSM state using secure backups). Because you’re only billed for the time your HSM instance is active, you can perform all of these steps for less than $15.00 a year!

Solution pricing

In this post, I’ll show you how to use an AWS CloudHSM cluster (which is a group that CloudHSM uses to keep all HSMs inside in sync with one another) with one HSM to create your key material. Only one HSM is needed, as you’ll use this HSM just for key generation and storage. Ongoing crypto operations that use the key will all be performed by KMS. AWS CloudHSM comes with an hourly fee that changes based on your region of choice; be sure to check the pricing page for updates prior to use. AWS KMS has a $1.00/month charge for all customer-managed CMKs, including those that you import yourself. In the following chart, I’ve calculated annual costs for some regions. These assume that you want to rotate your imported key annually and that you perform this operation once per year by running a single CloudHSM for one hour. I’ve included 12 monthly installments of $1.00/mo for your CMK. Depending on the speed of your application, additional costs may apply for KMS usage, which can be found on the AWS KMS pricing page.

Region         | CloudHSM pricing structure | KMS pricing structure | Annual total cost
US-EAST-1      | $1.60/hr                   | $1/mo                 | $13.60/yr
US-WEST-2      | $1.45/hr                   | $1/mo                 | $13.45/yr
EU-WEST-1      | $1.47/hr                   | $1/mo                 | $13.47/yr
US-GOV-EAST-1  | $2.08/hr                   | $1/mo                 | $14.08/yr
AP-SOUTHEAST-1 | $1.86/hr                   | $1/mo                 | $13.86/yr
EU-CENTRAL-1   | $1.92/hr                   | $1/mo                 | $13.92/yr
EU-NORTH-1     | $1.40/hr                   | $1/mo                 | $13.40/yr
AP-SOUTHEAST-2 | $1.99/hr                   | $1/mo                 | $13.99/yr
AP-NORTHEAST-1 | $1.81/hr                   | $1/mo                 | $13.81/yr
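
For example, the US-EAST-1 row works out to (1 hour × $1.60 per hour) + (12 months × $1.00 per month) = $1.60 + $12.00 = $13.60 per year; the other rows follow the same calculation with that Region’s hourly HSM rate.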

Solution overview

I’ll walk you through the process of creating a CMK in AWS KMS with no key material associated and then downloading the public key and import token that you’ll need in order to import the key material later on. I’ll also show you how to create, securely wrap, and export your symmetric key from AWS CloudHSM. Finally, I’ll show you how to import your key material into AWS KMS and then terminate your HSM to save on cost. The following diagram illustrates the steps covered in this post.
 

Figure 1: The process of creating a CMK in AWS KMS


  1. Create a customer master key (CMK) in AWS KMS that has no key material associated.
  2. Download the import wrapping key and import token from KMS.
  3. Import the wrapping key provided by KMS into the HSM.
  4. Create a 256-bit symmetric key on AWS CloudHSM.
  5. Use the imported wrapping key to wrap the symmetric key.
  6. Import the symmetric key into AWS KMS using the import token from step 2.
  7. Terminate your HSM, which triggers a backup. Delete or leave your cluster, depending on your needs.

Prerequisites

In this walkthrough, I assume that you already have an AWS CloudHSM cluster set up and initialized with at least one HSM device, and an Amazon Elastic Compute Cloud (EC2) instance running Amazon Linux OS with the AWS CloudHSM client installed. You must have a crypto user (CU) on the HSM to perform the key creation and export functions. You’ll also need an IAM user or role with permissions to both AWS KMS and AWS CloudHSM, and with credentials configured in your AWS Command Line Interface (AWS CLI). Make sure to store your CU information and IAM credentials (if a user is created for this activity) in a safe location. You can use AWS Secrets Manager for secure storage.

Deploying the solution

For my demonstration, I’ll be using the AWS CLI. However, if you prefer working with a graphical user interface, step #1 and the important portion of step #3 can be done in the AWS KMS Console. All other steps are via AWS CLI only.
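Before you begin, you can optionally confirm that the CLI is picking up the IAM credentials from the prerequisites by checking which identity it's calling AWS as:

$ aws sts get-caller-identity

If the returned account and ARN look right, you're ready to move on to step 1.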

Step 1: Create the CMK with no key material associated

Begin by creating a customer master key (CMK) in AWS KMS that has no key material associated. The CLI command to create the CMK is as follows:

$ aws kms create-key --origin EXTERNAL --region us-east-1

If successful, you’ll see an output on the CLI similar to below. The KeyState will be PendingImport and the Origin will be EXTERNAL.


{
    "KeyMetadata": {
        "AWSAccountId": "111122223333",
        "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
        "Arn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        "CreationDate": 1551119868.343,
        "Enabled": false,
        "Description": "",
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyState": "PendingImport",
        "Origin": "EXTERNAL",
        "KeyManager": "CUSTOMER"
    }
}
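At any point before the import, you can optionally confirm the key state with describe-key. This check isn't required for the walkthrough; the KeyState should read PendingImport until you complete step 6:

$ aws kms describe-key --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --region us-east-1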

Step 2: Download the public key and import token

With the CMK ID created, you now need to download the import wrapping key and the import token. Wrapping is a method of encrypting the key so that it doesn’t pass in plaintext over the network. You need both the wrapping key and the import token in order to import a key into AWS KMS. You’ll use the public key to encrypt your key material, which ensures that your key is not exposed as it’s imported. When AWS KMS receives the encrypted key material, it will use the corresponding private component of the import wrapping key to decrypt it. The public and private key pair used for this action will always be a 2048-bit RSA key that is unique to each import operation. The import token contains metadata to ensure the key material is imported to the correct CMK ID. Both the import token and import wrapping key are on a time limit, so if you don’t import your key material within 24 hours of downloading them, the import wrapping key and import token will expire and you’ll need to download a new set from KMS.

  1. (OPTIONAL) In my example command, I’ll use the wrapping algorithm RSAES_OAEP_SHA_256 to encrypt and securely import my key into AWS KMS. If you’d like to choose a different algorithm, you may use any of the following: RSAES_OAEP_SHA_256, RSAES_OAEP_SHA_1, or RSAES_PKCS1_V1_5.
  2. In the command below, replace the example key ID with the key ID you received in step 1. If you'd like, you can also modify the wrapping algorithm. Then run the command.

    $ aws kms get-parameters-for-import --key-id <1234abcd-12ab-34cd-56ef-1234567890ab> --wrapping-algorithm <RSAES_OAEP_SHA_256> --wrapping-key-spec RSA_2048 --region us-east-1

    If the command is successful, you’ll see an output similar to what’s below. Your output will contain the import token and import wrapping key. Copy this output to a safe location, as you’ll need it in the next step. Keep in mind, these items will expire after 24 hours.

    
        {
            "KeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab ",
            "ImportToken": "AQECAHjS7Vj0eN0qmSLqs8TfNtCGdw7OfGBfw+S8plTtCvzJBwAABrMwggavBgkqhkiG9w0BBwagggagMIIGnAIBADCCBpUGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMpK6U0uSCNsavw0OJAgEQgIIGZqZCSrJCf68dvMeP7Ah+AfPEgtKielFq9zG4aXAIm6Fx+AkH47Q8EDZgf16L16S6+BKc1xUOJZOd79g5UdETnWF6WPwAw4woDAqq1mYKSRtZpzkz9daiXsOXkqTlka5owac/r2iA2giqBFuIXfr2RPDB6lReyrhAQBYTwefTHnVqosKVGsv7/xxmuorn2iBwMfyqoj2gmykmoxrmWmCdK+jRdWKBHtriwplBTEHpUTdHmQnQzGVThfr611XFCQDZ+uk/wpVFY4jZ/Z/…"
            "PublicKey": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4KCiW36zofngmyVTOpMuFfHAXLWTfl2BAvaeq5puewoMt1zPwBP23qKdG10L7ChTVt4H9PCs1vzAWm2jGeWk5fO…",
            "ParametersValidTo": 1551207484.208
        }
        

  3. Because the output of the command is base64 encoded, you must base64 decode both components into binary data before use. To do this, use your favorite text editor to save the output into two separate files. Name one ImportToken.b64—it will have the output of the “ImportToken” section from above. Name the other PublicKey.b64—it will have the output of the “PublicKey” section from above. You will find example commands to save both below.
    
        echo 'AQECAHjS7Vj0eN0qmSLqs8TfNtCGdw7OfGBfw+S8plTtCvzJBwAABrMwggavBgkqhkiG9w0BBwagggagMIIGnAIBADCCBpUGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMpK6U0uSCNsavw0OJAgEQgIIGZqZCSrJCf68dvMeP7Ah+AfPEgtKielFq9zG4aXAIm6Fx+AkH47Q8EDZgf16L16S6+BKc1xUOJZOd79g5UdETnWF6WPwAw4woDAqq1mYKSRtZpzkz9daiXsOXkqTlka5owac/r2iA2giqBFuIXfr2RPDB6lReyrhAQBYTwefTHnVqosKVGsv7/xxmuorn2iBwMfyqoj2gmykmoxrmWmCdK+jRdWKBHtriwplBTEHpUTdHmQnQzGVThfr611XFCQDZ+uk/wpVFY4jZ/Z' > /ImportToken.b64
    
        echo 'MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4KCiW36zofngmyVTOpMuFfHAXLWTfl2BAvaeq5puewoMt1zPwBP23qKdG10L7ChTVt4H9PCs1vzAWm2jGeWk5fO' > /PublicKey.b64 
        

  4. On both saved files, you must then run the following commands, which will base64 decode them and save them in their binary form:
    
        $ openssl enc -d -base64 -A -in PublicKey.b64 -out PublicKey.bin
    
        $ openssl enc -d -base64 -A -in ImportToken.b64 -out ImportToken.bin
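Optionally, you can confirm that the public key decoded correctly before moving on. This is my own sanity check rather than a required step; because the downloaded public key is DER-encoded, OpenSSL should report a 2048-bit RSA public key:

$ openssl rsa -pubin -inform DER -in PublicKey.bin -text -noout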
        

Step 3: Import the import wrapping key provided by AWS KMS into your HSM

Now that you’ve created the CMK ID and prepared for the key material import, you’re going to move to AWS CloudHSM to create and export your symmetric encryption key. You’re going to import the import wrapping key provided by AWS KMS into your HSM. Before completing this step, be sure you’ve set up the prerequisites I listed at the start of this post.

  1. Log into your EC2 instance with the AWS CloudHSM client package installed, and launch the Key Management Utility. You will launch the utility using the command below:

    $ /opt/cloudhsm/bin/key_mgmt_util

  2. After launching the key_mgmt_util, you’ll need to log in as the CU user. Do so with the command below, being sure to replace <ExampleUser> and <Password123> for your own CU user and password.

    Command: loginHSM -u CU -p <Password123> -s <ExampleUser>

    You should see an output similar to the example below, letting you know that you’ve logged in correctly:

        
        Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS
        Cluster Error Status
        Node id 1 and err state 0x00000000 : HSM Return: SUCCESS
        

    Next, you’ll import the import wrapping key provided by KMS into the HSM, so that you can use it to wrap the symmetric key you’re going to create.

  3. Open the file PublicKey.b64 in your favorite text editor and add the line -----BEGIN PUBLIC KEY----- at the very top of the file and the line -----END PUBLIC KEY----- at the very end. (An example of how this should look is below.) Save this new file as PublicKey.pem.
       
        -----BEGIN PUBLIC KEY-----
        MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0UnLXHbP/jC/QykQQUroaB0xQYYVaZge//NrmIdYtYvaw32fcEUYjTHovWkYkmifUFVYkNaWKNRGuICo1oAY5cgowgxYcsBXZ4Pk2h3v43tIsxG63ZDLKDFju/dtGaa8+CwR8mR…
        -----END PUBLIC KEY-----
        

  4. Use the importPubKey command from the key_mgmt_util to import the key file. An example of the command is below. For the -l flag (label), I’ve named my example <wrapping-key> so that I’ll know what it’s for. You can name yours whatever you choose. Also, remember to replace <PublicKey.pem> with the actual filename of your public key.

    Command: importPubKey -l <wrapping-key> -f <PublicKey.pem>

    If successful, your output should look similar to the following example. Make note of the key handle, as this is the identifying ID for your wrapping key:

          
        Cfm3CreatePublicKey returned: 0x00 : HSM Return: SUCCESS
    
        Public Key Handle: 30 
        
        Cluster Error Status
        Node id 2 and err state 0x00000000 : HSM Return: SUCCESS
        

Step 4: Create a symmetric key on AWS CloudHSM

Next, you’ll create a symmetric key that you want to export from CloudHSM, so that you can import it into AWS KMS.

The first parameter you'll use for this action is -t with the value 31. This is the key-type flag, and 31 equals an AES key. KMS only accepts the AES key type. The second is -s with a value of 32. This is the key-size flag, and 32 equals a 32-byte key (256 bits). The last flag is the -l flag with the value BYOK-KMS. This flag is simply a label to name your key, so you may alter this value however you wish.

Here’s what my example looks like:

Command: genSymKey -t 31 -s 32 -l BYOK-KMS

If successful, your output should look similar to what’s below. Make note of the key handle, as this will be the identifying ID of the key you wish to import into AWS KMS.

  
Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

Symmetric Key Created. Key Handle: 20

Cluster Error Status
Node id 1 and err state 0x00000000 : HSM Return: SUCCESS

Step 5: Use the imported import wrapping key to wrap the symmetric key

Now that you've imported the import wrapping key into your HSM, I'll show you how to use it with the key_mgmt_util to wrap your symmetric key so that it can be exported from the HSM.

An example of the command is below. Here’s how to customize the parameters:

  • -k refers to the key handle of your symmetric key. Replace the variable that follows -k with your own key handle ID from step 4.
  • -w refers to the key handle of your wrapping key. Replace the variable with the public key handle you noted in step 3, item 4.
  • -out refers to the file name of your exported key. Replace the variable with a file name of your choice.
  • -m refers to the wrapping-mechanism. In my example, I’ve used RSA-OAEP, which has a value of 8.
  • -t refers to the hash-type. In my example, I’ve used SHA256, which has a value of 3.
  • -noheader omits the header that specifies CloudHSM-specific key attributes.

Command: wrapKey -k <20> -w <30> -out <KMS-BYOK-March2019.bin> -m 8 -t 3 -noheader

If successful, you should see an output similar to what’s below:

 
Cfm2WrapKey5 returned: 0x00 : HSM Return: SUCCESS

	Key Wrapped.

	Wrapped Key written to file "KMS-BYOK-March2019.bin length 256

	Cfm2WrapKey returned: 0x00 : HSM Return: SUCCESS

With that, you’ve completed the AWS CloudHSM steps. You now have a symmetric key, securely wrapped for transport into AWS KMS. You can log out of the Key Management Utility by entering the word exit.

Step 6: Import the wrapped symmetric key into AWS KMS using the key import function

You’re ready to import the wrapped symmetric key into AWS KMS using the key import function. Make the following updates to the sample command that I provide below:

  • Replace the key-id value and file names with your own.
  • Leave the expiration model parameter blank, or choose from
    KEY_MATERIAL_EXPIRES or KEY_MATERIAL_DOES_NOT_EXPIRE. If you leave the parameter blank, it will default to KEY_MATERIAL_EXPIRES. This expiration model requires you to set the --valid-to parameter, which can be any time of your choosing so long as it follows the format in the example. If you choose KEY_MATERIAL_DOES_NOT_EXPIRE, you may leave the --valid-to option out of the command. To enforce an expiration and rotation of your key material, best practice is to use the KEY_MATERIAL_EXPIRES option and a date of 1-2 years.

Here’s my sample command:

$ aws kms import-key-material --key-id <1234abcd-12ab-34cd-56ef-1234567890ab> --encrypted-key-material fileb://<KMS-BYOK-March2019.bin> --import-token fileb://<ImportToken.bin> --expiration-model KEY_MATERIAL_EXPIRES --valid-to 2020-03-01T12:00:00-08:00 --region us-east-1

You can test that the import was successful by using the key ID to encrypt a small (under 4KB) file. An example command is below:

$ aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext file://test.txt --region us-east-1

A successful call will output something similar to below:

 
{
    "KeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab ", 
    "CiphertextBlob": "AQICAHi4Pj/Jhk6aHMZNegOpYFnnuiEKAtjRWwD33TdDnKNY5gHbUVdrwptz…"
}
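If you'd like to verify the full round trip, you can also decrypt what you just encrypted. The commands below are one way to do this; they re-run the encrypt call, save the ciphertext as a binary file, and then decrypt it (the file names test.txt and test.enc are placeholders):

$ aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext file://test.txt --query CiphertextBlob --output text --region us-east-1 | base64 --decode > test.enc

$ aws kms decrypt --ciphertext-blob fileb://test.enc --query Plaintext --output text --region us-east-1 | base64 --decode

The decrypted output should match the contents of test.txt.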

Step 7: Terminate your HSM (which triggers a backup)

The last step in the process is to terminate your HSM (which triggers a backup). Since you've imported the key material into AWS KMS, you no longer need to run the HSM. Terminating the HSM will automatically trigger a backup, which takes an exact copy of the key material and the users on your HSM and stores it, encrypted, in an Amazon Simple Storage Service (Amazon S3) bucket. If you need to re-import your key into KMS, or if your company's security policy requires an annual rotation of your KMS master key(s), you can spin up an HSM from your CloudHSM backup and be back to where you started. You can re-import your existing key material or create key material for a new CMK; either way, you'll need to request a fresh import wrapping key and import token from KMS.

Deleting an HSM can be done with the command below. Replace <cluster-example> with your actual cluster ID and <hsm-example> with your HSM ID:

$ aws cloudhsmv2 delete-hsm --cluster-id <cluster-example> --hsm-id <hsm-example> --region us-east-1
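If you later need to restore an HSM from its backup, that's also just a couple of CLI calls. The commands below are a sketch; the cluster ID, backup ID, subnet ID, and Availability Zone are placeholders that you'll replace with your own values. If you kept the (now empty) cluster, adding an HSM back into it restores the keys and users from the most recent backup; if you deleted the cluster, you can create a new cluster from a backup instead:

$ aws cloudhsmv2 create-hsm --cluster-id <cluster-example> --availability-zone us-east-1a --region us-east-1

$ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium --source-backup-id <backup-example> --subnet-ids <subnet-example> --region us-east-1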

Following these steps, you will have successfully defined a KMS CMK, created your own key material on a CloudHSM device, and then imported it into AWS KMS for use. Depending on the region, all of this can be done for less than $15.00 a year.

Summary

In this post, I walked you through creating a CMK ID without encryption key material associated, creating and then exporting a symmetric key from AWS CloudHSM, and then importing it into AWS KMS for use with KMS-integrated AWS services. This process allows you to maintain ownership and management over your master keys, create them on FIPS 140-2 Level 3 validated hardware, and maintain a copy for disaster recovery scenarios. This saves not only the time and personnel required to maintain your own HSMs on-premises, but the cost of hardware, electricity, and housing of the device.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM or AWS KMS forums.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to migrate your EC2 Oracle Transparent Data Encryption (TDE) database encryption wallet to CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-migrate-your-ec2-oracle-transparent-data-encryption-tde-database-encryption-wallet-to-cloudhsm/

In this post, I’ll show you how to migrate an encryption wallet for an Oracle database installed on Amazon EC2 from using an outside HSM to using AWS CloudHSM. Transparent Data Encryption (TDE) for Oracle is a common use case for Hardware Security Module (HSM) devices like AWS CloudHSM. Oracle TDE uses what is called “envelope encryption.” Envelope encryption is when the encryption key used to encrypt the tables of your database is in turn encrypted by a master key that resides either in a software keystore or on a hardware keystore, like an HSM. This master key is non-exportable by design to protect the confidentiality and integrity of your database encryption. This gives you a more granular encryption scheme on your data.

An encryption wallet is an encrypted container used to store the TDE master key for your database. The encryption wallet needs to be opened manually after a database startup and prior to the TDE encrypted data being accessed, so the master key is available for data decryption. The process I talk about in this post can be used with any non-AWS hardware or software encryption wallet, or a hardware encryption wallet that utilizes AWS CloudHSM Classic. For my examples in this tutorial, I will be migrating from a CloudHSM Classic to a CloudHSM cluster. It is worth noting that Gemalto has announced the end-of-life for Luna 5 HSMs, which our CloudHSM Classic fleet uses.

Note: You cannot migrate from an Oracle instance in Amazon Relational Database Service (Amazon RDS) to AWS CloudHSM. You must install the Oracle database on an Amazon EC2 instance. Amazon RDS is not currently integrated with AWS CloudHSM.

When you move from one type of encryption wallet to another, new TDE master keys are created inside the new wallet. To ensure that you have access to backups that rely on your old HSM, consider leaving the old HSM running for your normal recovery window period. The steps I discuss will perform the decryption of your TDE keys and then re-encrypt them with the new TDE master key for you.

Once you’ve migrated your Oracle databases to use AWS CloudHSM as your encryption wallet, it’s also a good idea to set up cross-region replication for disaster recovery efforts. With copies of your database and encryption wallet in another region, you can be back in production quickly should a disaster occur. I’ll show you how to take advantage of this by setting up cross-region snapshots of your Oracle database Amazon Elastic Block Store (EBS) volumes and copying backups of your CloudHSM cluster between regions.

Solution overview

For this solution, you will modify the Oracle database’s encryption wallet to use AWS CloudHSM. This is completed in three steps, which will be detailed below. First, you will switch from the current encryption wallet, which is your original HSM device, to a software wallet. This is done by reverse migrating to a local wallet. Second, you’ll replace the PKCS#11 provider of your original HSM with the CloudHSM PKCS#11 software library. Third, you’ll switch the encryption wallet for your database to your CloudHSM cluster. Once this process is complete, your database will automatically re-encrypt all data keys using the new master key.

To complete the disaster recovery (DR) preparation portion of this post, you will perform two more steps. These consist of copying over snapshots of your EC2 volumes and your CloudHSM cluster backups to your DR region. The following diagram illustrates the steps covered in this post.
 

Figure 1: Steps to migrate your EC2 Oracle TDE database encryption wallet to CloudHSM

  1. Switch the current encryption wallet for the Oracle database TDE from your original HSM to a software wallet via a reverse migration process.
  2. Replace the PKCS#11 provider of your original HSM with the AWS CloudHSM PKCS#11 software library.
  3. Switch your encryption wallet to point to your AWS CloudHSM cluster.
  4. (OPTIONAL) Set up cross-region copies of the EC2 instance housing your Oracle database
  5. (OPTIONAL) Set up a cross-region copy of your recent CloudHSM cluster backup

Prerequisites

This process assumes that you already have the following set up or configured: an Oracle database installed on an Amazon EC2 instance, with TDE enabled and its encryption wallet currently backed by your original HSM, and an active AWS CloudHSM cluster with at least one HSM and the AWS CloudHSM client installed on that EC2 instance.

Deploy the solution

Now that you have the basic steps, I’ll go into more detail on each of them. I’ll show you the steps to migrate your encryption wallet to a software wallet using a reverse migration command.

Step 1: Switching the current encryption wallet for the Oracle database TDE from your original HSM to a software wallet via a reverse migration process.

To begin, you must configure the sqlnet.ora file for the reverse migration. In Oracle databases, the sqlnet.ora file is a plain-text configuration file that contains parameters such as encryption settings, connection routing, and naming methods that determine how the Oracle server and clients access the database over the network. You will want to create a backup so you can roll back in the event of any errors. You can make a copy with the below command. Make sure to replace </path/to/> with the actual path to your sqlnet.ora file location. The standard location for this file is $ORACLE_HOME/network/admin, but check your setup to ensure this is correct.

cp </path/to/>sqlnet.ora </path/to/>sqlnet.ora.backup

The software wallet must be created before you edit this file, and it should preferably be empty. Then, using your favorite text editor, open the sqlnet.ora file and set the below configuration. If an entry already exists, replace it with the below text.


ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=<path_to_keystore>)))

Make sure to replace the <path_to_keystore> with the directory location of your destination wallet. The destination wallet is the path you choose for the local software wallet. You will notice in Oracle the words “keystore” and “wallet” are interchangeable for this post. Next, you’ll configure the wallet for the reverse migration. For this, you will use the ADMINISTER KEY MANAGEMENT statement with the SET ENCRYPTION KEY and REVERSE MIGRATE clauses as shown in the example below.

By using the REVERSE MIGRATE USING clause in your statement, you ensure the existing TDE table keys and tablespace encryption keys are decrypted by the hardware wallet TDE master key and then re-encrypted with the software wallet TDE master key. You will need to log into the database instance as a user that has been granted the ADMINISTER KEY MANAGEMENT or SYSKM privileges to run this statement. An example of the login is below. Make sure to replace the <sec_admin> and <password> with your administrator user name and password for the database.


sqlplus c##<sec_admin> syskm
Enter password: <password> 
Connected.

Once you're connected, you'll run the SQL statement below. Make sure to replace <password> with your own existing wallet password and <username:password> with your own existing wallet user ID and password. You'll run this statement with the WITH BACKUP parameter, as it's always ideal to take a backup in case something goes wrong.

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY <password> REVERSE MIGRATE USING "<username:password>" WITH BACKUP;

If successful, you will see the text keystore altered. When complete, you do not need to restart your database or manually re-open the local wallet as the migration process loads this into memory for you.

With the migration complete, you’ll now move onto the next step of replacing the PKCS#11 provider of your original HSM with the CloudHSM PKCS#11 software library. This library is a PKCS#11 standard implementation that communicates with the HSMs in your cluster and is compliant with PKCS#11 version 2.40.

Step 2: Replacing the PKCS#11 provider of your original HSM with the AWS CloudHSM PKCS#11 software library.

You’ll begin by installing the software library with the below two commands.

wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL6/cloudhsm-client-pkcs11-latest.el6.x86_64.rpm

sudo yum install -y ./cloudhsm-client-pkcs11-latest.el6.x86_64.rpm

When installation completes, you will be able to find the CloudHSM PKCS#11 software library files in /opt/cloudhsm/lib, the default directory for AWS CloudHSM software library installs. To improve processing speed and throughput of the HSMs, I suggest installing a Redis cache as well. This cache stores key handles and attributes locally, so you may access them without making a call to the HSMs. As this step is optional and not required for this post, I will leave the link for installation instructions here. With the software library installed, make sure the CloudHSM client is running. You can start it with the command below.

sudo start cloudhsm-client

Step 3: Switching your encryption wallet to point to your AWS CloudHSM cluster.

Once you’ve verified the client is running, you’re going to create another backup of the sqlnet.ora file. It’s always a good idea to take backups before making any changes. The command would be similar to below, replacing </path/to/> with the actual path to your sqlnet.ora file.

cp </path/to/>sqlnet.ora </path/to/>sqlnet.ora.backup2

With this done, again open the sqlnet.ora file with your favorite text editor. You are going to edit the line encryption_wallet_location to resemble the below text.


ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=HSM))

Save the file and exit. You will need to create the directory where your Oracle database will expect to find the library file for the AWS CloudHSM PKCS#11 software library. You do this with the command below.

sudo mkdir -p /opt/oracle/extapi/64/hsm

With the directory created, you next copy over the CloudHSM PKCS#11 software library from the original installation directory to this one. It is important this new directory only contain the one library file. Should any files exist in the directory that are not directly related to the way you installed the CloudHSM PKCS#11 software library, remove them. The command to copy is below.

sudo cp /opt/cloudhsm/lib/libcloudhsm_pkcs11_standard.so /opt/oracle/extapi/64/hsm

Now, modify the ownership of the directory and everything inside. The Oracle user must have access to these library files to run correctly. The command to do this is below.

sudo chown -R oracle:dba /opt/oracle
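If you want to double-check the result before starting the database, you can list the directory; you should see only the single library file, owned by oracle:dba:

ls -l /opt/oracle/extapi/64/hsm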

With that done, you can start your Oracle database. This completes the migration of your encryption wallet and TDE keys from your original encryption wallet to a local wallet, and finally to CloudHSM as the new encryption wallet. Should you decide you wish to create new TDE master encryption keys on CloudHSM, you can follow the steps here to do so.

Steps 4 and 5 are optional, but they're helpful in the event you must restore your database to production quickly. For customers that leverage DR environments, we have two great blog posts here and here to walk you through each step of the cross-region replication process. The first uses a combination of AWS Step Functions and Amazon CloudWatch Events to copy your EBS snapshots to your DR region, and the second showcases how to copy your CloudHSM cluster backups to your DR region.
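If you simply want to kick off the copies manually while you evaluate those automated approaches, the two CLI calls below sketch the idea. They're illustrative only; the snapshot ID, backup ID, and regions are placeholders, and the posts linked above cover automating this end to end:

aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id <snap-example> --description "Oracle TDE DB volume copy" --region us-west-2

aws cloudhsmv2 copy-backup-to-region --destination-region us-west-2 --backup-id <backup-example> --region us-east-1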

Summary

In this post, I walked you through how to migrate your Oracle TDE database encryption wallet to point to CloudHSM for secure storage of your TDE master key. I showed you how to properly install the CloudHSM PKCS#11 software library and place it in the directory where Oracle can find and use it. This process can be used to migrate most on-premises encryption wallets to AWS CloudHSM to help secure your TDE keys and meet compliance requirements.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

AWS Security Profiles: Paul Hawkins, Security Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-paul-hawkins-security-solutions-architect/

Leading up to AWS Summit Sydney, we’re sharing our conversation with Paul Hawkins, who helped put together the summit’s “Secure” track, so you can learn more about him and some of the interesting work that he’s doing.


What does a day in the life of an AWS Security Solutions Architect look like?

That's an interesting question because it varies a lot. As a Security Solutions Architect, I cover Australia and New Zealand with Phil Rodrigues—we split a rather large continent between us. Our role is to help AWS account teams help AWS customers with security, risk, and compliance in the cloud. If a customer is coming to the cloud from an on-premises environment, their security team is probably facing a lot of changes. We help those teams understand what it's like to run security on AWS—how to think about the shared responsibility model, opportunities to be more secure, and ways to enable their businesses. My conversations with customers range from technical deep-dives to high-level questions about how to conceptualize security, governance, and risk in a cloud environment. A portion of my work also involves internal enablement. I can't scale to talk to every customer, so teaching other customer-facing AWS teams how to have conversations about security is an important part of my role.

How do you explain your job to non-tech friends?

I say that my job has two functions. First, I explain complex things to people who are not experts in that domain so that they can understand how to do them. Second, I help people be more secure in the cloud. Sometimes I then have to explain what “the cloud” is. How I explain my job to friends reminds me of how I explain the cloud to brand new customers—people usually figure out how it works by comparing it to their existing mental models.

What’s your favorite part of your job?

Showing people that moving to the cloud doesn’t have to be scary. In fact, it’s often an opportunity to be more secure. The cloud is a chance for security folks—who have traditionally been seen as a “Department of No” or a blocker to the rest of the organization—to become an enabling function. Fundamentally, every business is about customer trust. And how do you make sure you maintain the trust of your customers? You keep their data secure. I get to help the organizations I work with to get applications and capabilities out to their customers more swiftly—and also more securely. And that’s a really great thing.

Professionally, what’s your background? What prompted you to move into the field of cloud security at AWS?

I used to work in a bank as a Security Architect. I was involved in helping the business move a bunch of workloads into AWS. In Australia, we have a regulator called APRA (the Australian Prudential Regulatory Authority). If you’re an insurance or financial services company who is running a material workload (that is, a workload that has a significant impact on the banking or financial services industry) you have to submit information to this regulator about how you’re going to run the workload, how you’re going to operationalize it, what your security posture looks like, and so on. After reviewing how you’re managing risk, APRA will hopefully give you a “no objection” response. I worked on the first one of those submissions in Australia.

Working in a bank with a very traditional IT organization and getting to bring that project over the finish line was a great experience. But I also witnessed a shift in perspective from other teams about what “security” meant. I moved from interacting with devs who didn’t want to come talk to us because we were the security folks to a point, just before I left, where I was getting emails from people in the dev and engineering teams, telling me how they built controls because we empowered them with the idea that “security is everyone’s job.” I was getting emails saying, “I built this control, but I think we can use it in other places!” Having the dev community actively working with the security team to make things better for the entire organization was an amazing cultural change. I wanted to do more work like that.

Are there any unique challenges—security or otherwise—that customers in New Zealand or Australia face when moving to the cloud?

If you look at a lot of regulators in this region, like the APRA, or an Australian government standard like IRAP, or compare these regulators with programs like FedRAMP in the US, you’ll see that everything tends to roll up toward requirements in the style of (for example) the NIST Cybersecurity Framework. When it comes to security, the fundamentals don’t change much. You need to identify who has access to your environment, you need to protect your network, you need good logging visibility, you need to protect data using encryption, and you need to have a mechanism for responding instantly to changes. I do think Australia has some interesting challenges in terms of the geographical size of the country, and the distant spread of population between the east and west coasts. Otherwise, the challenges are similar to what customers face globally: understanding shared responsibility, understanding how to build securely on the cloud, and understanding the differences from what they’ve traditionally been doing.

What’s the most common misperception you encounter about cloud security?

People think it’s a lot harder than it is. Some people also have the tendency to focus on esoteric edge cases, when the most important thing to get right is the foundation. And the foundation is actually straightforward: You follow best practices, and you follow the Well-Architected Framework. AWS provides a lot of guidance on these topics.

I talk to a lot of security folks, architects, instant responders, and CISOs, and I end up saying something similar to everyone: As you begin your cloud journey, you’ve probably got a hundred things you’re worried about. That’s reasonable. As a security person, your job is to worry about what can happen. But you should focus on the top five things that you need to do right now, and the top five things that are going to require a bit of thought across your organization. Get those things done. And then chip away at the rest—you can’t solve everything all at once. It’s better to get the foundations in and start building while raising your organization’s security bar, rather than spin your wheels for months because you can’t map out every edge case.

During the course of your work, what cloud security trends have you noticed that you’re excited about?

I’m really pleased to see more and more organizations genuinely embrace automation. Keeping humans away from systems is a great way to drive consistency: consistency of environments means you can have consistency of response, which is important for security.

As humans, we aren’t great at doing the same thing repeatedly. We’ll do okay for a bit, but then we get distracted. Automated systems are really good at consistently doing the same things. If you’re starting at the very beginning of an environment, and you build your AWS accounts consistently, then your monitoring can also be consistent. You don’t have to build a complicated list of exceptions to the rules. And that means you can have automation covering how you build environments, how you build applications into environments, and how to detect and respond in environments. This frees up the people in your organization to focus on the interesting stuff. If people are focused on interesting challenges, they’re going to be more engaged, and they’re going to deliver great things. No one wants to just put square pegs in square holes every day at work. We want to be able to exercise our minds. Security automation enables that.

What does cloud security mean to you, personally?

I genuinely believe that I have an incredible opportunity to help as many customers as possible be more secure in the cloud. “Being more secure in the cloud” doesn’t just mean configuring AWS services in a way that’s sensible—it also means helping drive cultural change, and moving peoples’ perceptions of security away from, “Don’t go talk to those people because they’re going to yell at you” to security as an enabling function for the business. Security boils down to “keeping the information of humans protected.” Whether that’s banking information or photos on a photo-sharing site, the fundamental approach should be the same. At the end of the day, we’re building things that humans will use. As security people, we need to make it easier for our engineers to build securely, as well as for end users to be confident their data is protected—whatever that data is.

I get to help these organizations deliver their services in a way that’s safer and enables them to move faster. They can build new features without having to worry about enduring a six-month loop of security people saying, “No, you can’t do that.” Helping people understand what’s possible with the technology and helping people understand how to empower their teams through that technology is an incredibly important thing for all parts of an organization, and it’s deeply motivating to me.

Five years from now, what changes do you think we’ll see across the cloud security and compliance landscape?

I believe the ways people think about security in the cloud will continue to evolve. AWS is releasing more higher-function services like Amazon GuardDuty and AWS Security Hub that make it easier for customers to be more secure. I believe people will become more comfortable using these tools as they start to understand that we’re devoting a huge amount of effort to making these services provide useful, actionable information for customers, rather than just being another set of logs to look at. This will allow customers to focus on the aspects of their organization that deliver business value, while worrying less about the underlying composition of services.

At the moment, people approach cloud security by applying a traditional security mindset. It’s normal to come to the cloud from a physical environment, where you could touch and see the servers, and you could see blinking lights. This background can color the ways that people think about the cloud. In a few years’ time, as people become more comfortable with this new way of thinking about security, I think customers will start to come to us right out of the gate with questions like, “What specifics services do I need, and how do I configure them to make my environment better?”

You’ve been involved in putting together the Security track at the upcoming AWS summit in Sydney. What were you looking for as you selected session topics?

We have ten talks in the “Secure” track, and we’ve selected topics to address as many customer needs as possible. That means sessions for people who are just getting started and have questions like, “What foundational things can I turn on to ensure I start my cloud journey securely?” It also means sessions for people who are very adept at using cloud services and want to know things like, “How do I automate incidence response and forensics?” We’ve also talked to people who run organizations that don’t even have a security team—often small startups—who want to get their heads wrapped around cloud security. So, hopefully we have sessions that appeal to a range of customers.

We’re including existing AWS customers in nine out of the ten talks. These customers fall across the spectrum, from some of our largest enterprise customers to public sector, startups, mid-market, and financial services. We’ve tried to represent both breadth and depth of customer stories, because we think these stories are incredibly important. We had a few customers in the track last year, and we got a great response from the audience, who appreciated the chance to hear from peers, or people in similar industries, about how they improved their security posture on AWS.

What do you hope people gain from attending the “Secure” track?

Regardless of the specific sessions that people attend, I want them walk away saying, “Wow, I can do this in my organization right now.” I want people to see the art of the possible for cloud security. You can follow the prescriptive advice from various talks, go out, and do things for your organization. It shouldn’t be some distant, future goal, either. We offer prescriptive guidance for what you can do right now.

Say you’re in a session about secrets management. We might say, This is the problem we’re talking about, this is how to approach it using AWS Identity and Access Management (IAM) roles, and if you can’t use AWS IAM roles, here how to use AWS Secrets Manager. Next, here’s a customer to talk about how they think of secrets management in a multi-account environment. Next, here are a bunch of different use cases. Finally, here are the places you can go for further information, and here’s how you can get started today.” My hope is that the talk inspires people to go and build and be more secure immediately. I want people to leave the Summit and immediately start building.

We’re really proud of the track. We’ve got a range of customer perspectives and a range of topics that hopefully covers as much of the amazing breadth of cloud security as we can fit into ten talks.

Sometimes you make music and post it to Soundcloud. Who are your greatest musical influences?

Argh. There are so many. I went through a big Miles Davis phase, more from listening than in any way capable of being that good. I also draw inspiration from shouty English punk bands like the Buzzcocks, plus quite a lot of hip-hop. That includes both classic hip-hop like De La Soul or A Tribe Called Quest and more recent stuff like Run the Jewels. They’re an American band out of Atlanta who I listen to quite a lot at the moment. There are a lot of different groups I could point to, depending on mood. I’ve been posting my music to Soundcloud for ten years or so. Long enough that I should be better. But it’s a journey, and of course time is a limiting factor—AWS is a very busy place.

We understand that you’ve switched from playing cricket to baseball. What turned you into a baseball fan?

I moved from Sydney to Melbourne. In Sydney, the climate is okay for playing outdoor cricket in the winter. But in Melbourne, it’s not really a winter sport. I was looking for something to do, so I started playing winter baseball with a local team. The next summer, I played both cricket and baseball—they were on different days—but it became quite confusing because there are some fundamental differences. I ended up enjoying baseball more, and it took a bit less time. Cricket is definitely a full day event. Plus, one of the great things about baseball is that as a hitter you’re sort of expected to fail 60% of the time. But you get another go. If you’re out at cricket, that is it for your day. With baseball, you’re engaged for a lot longer during the game.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Paul Hawkins

Paul has more than 15 years of information security experience with a significant part of that working with financial services. He works across the range of AWS customers from start-ups to the largest enterprises to improve their security, risk, and compliance in the cloud.

Trimming AWS WAF logs with Amazon Kinesis Firehose transformations

Post Syndicated from Tino Tran original https://aws.amazon.com/blogs/security/trimming-aws-waf-logs-with-amazon-kinesis-firehose-transformations/

In an earlier post, Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight, published on March 28, 2019, the authors showed you how to stream WAF logs with Amazon Kinesis Firehose for visualization using QuickSight. This approach used no filtering of the logs so that you could visualize the full data set. However, you are often only interested in seeing specific events. Or you might be looking to minimize log size to save storage costs. In this post, I show you how to apply rules in Amazon Kinesis Firehose to trim down logs. You can then apply the same visualizations you used in the previous solution.

AWS WAF is a web application firewall that supports full logging of all the web requests it inspects. For each request, AWS WAF logs the raw HTTP/S headers along with information on which AWS WAF rules were triggered. Having complete logs is useful for compliance, auditing, forensics, and troubleshooting custom and Managed Rules for AWS WAF. However, for some use cases, you might not want to log all of the requests inspected by AWS WAF. For example, to reduce the volume of logs, you might only want to log the requests blocked by AWS WAF, or you might want to remove certain HTTP header or query string parameter values from your logs. In many cases, unblocked requests are often already stored in your CloudFront access logs or web server logs and, therefore, using AWS WAF logs can result in redundant data for these requests, while logging blocked traffic can help you to identify bad actors or root cause false positives.

In this post, I’ll show you how to create an Amazon Kinesis Data Firehose stream to filter out unneeded records, so that you only retain log records for requests that were blocked by AWS WAF. From here, the logs can be stored in Amazon S3 or directed to SIEM (Security information and event management) and log analysis tools.

To simplify things, I’ll provide you with a CloudFormation template that will create the resources highlighted in the diagram below:
 

Figure 1: Solution architecture

  1. A Kinesis Data Firehose delivery stream is used to receive log records from AWS WAF.
  2. An IAM role for the Kinesis Data Firehose delivery stream, with permissions needed to invoke Lambda and write to S3.
  3. A Lambda function used to filter out WAF records matching the default action before the records are written to S3.
  4. An IAM role for the Lambda function, with the permissions needed to create CloudWatch logs (for troubleshooting).
  5. An S3 bucket where the WAF logs will be stored.

Prerequisites and assumptions

  • In this post, I assume that the AWS WAF default action is configured to allow requests that don’t explicitly match a blocking WAF rule. So I’ll show you how to omit any records matching the WAF default action.
  • You need to already have an AWS WAF WebACL created. In this example, you'll use a WebACL generated from the AWS WAF OWASP 10 template. For more information on deploying AWS WAF to a CloudFront or ALB resource, see the Getting Started page.

Step 1: Create a Kinesis Data Firehose delivery stream for AWS WAF logs

In this step, you’ll use the following CloudFormation template to create a Kinesis Data Firehose delivery stream that writes logs to an S3 bucket. The template also creates a Lambda function that omits AWS WAF records matching the default action.

Here’s how to launch the template:

  1. Open CloudFormation in the AWS console.
  2. For WAF deployments on Amazon CloudFront, select region US-EAST-1. Otherwise, create the stack in the same region in which your AWS WAF Web ACL is deployed.
  3. Select the Create Stack button.
  4. In the CloudFormation wizard, select Specify an Amazon S3 template URL and copy and paste the following URL into the text box, then select Next:
    https://s3.amazonaws.com/awsiammedia/public/sample/TrimAWSWAFLogs/KinesisWAFDeliveryStream.yml
  5. On the options page, leave the default values and select Next.
  6. Specify the following and then select Next:
    1. Stack name: (for example, kinesis-waf-logging). Make sure to note your stack name, as you’ll need to provide it later in the walkthrough.
    2. Buffer size: This value specifies the size in MB for which Kinesis will buffer incoming records before processing.
    3. Buffer interval: This value specifies the interval in seconds for which Kinesis will buffer incoming records before processing.

    Note: Kinesis will trigger data delivery based on whichever buffer condition is satisfied first. This CloudFormation template sets the default buffer size to 3 MB and the buffer interval to 900 seconds, which match the maximum buffer size and interval allowed for the Lambda transformation. To learn more about Kinesis Data Firehose buffer conditions, read this documentation.

     

    Figure 2: Specify the stack name, buffer size, and buffer interval

  7. Select the check box for I acknowledge that AWS CloudFormation might create IAM resources and choose Create.
  8. Wait for the template to finish creating the resources. This will take a few minutes. On the CloudFormation dashboard, the status next to your stack should say CREATE_COMPLETE.
  9. From the AWS Management Console, open Amazon Kinesis and find the Data Firehose delivery stream on the dashboard. Note that the name of the stream will start with aws-waf-logs- and end with the name of your CloudFormation stack. This prefix is required in order to configure AWS WAF to write logs to the Kinesis stream.
  10. From the AWS Management Console, open AWS Lambda and view the Lambda function created from the CloudFormation template. The function name should start with the Stack name from the CloudFormation template. I included the function code generated from the CloudFormation template below so you can see what’s going on.

    Note: Through CloudFormation, the code is deployed without indentation. To format it for readability, I recommend using the code formatter built into Lambda under the edit tab. This code can easily be modified for custom record filtering or transformations.

    
        'use strict';
    
        exports.handler = (event, context, callback) => {
            /* Process the list of records and drop those containing Default_Action */
            const output = event.records.map((record) => {
                const entry = (new Buffer(record.data, 'base64')).toString('utf8');
                if (!entry.match(/Default_Action/g)){
                    return {
                        recordId: record.recordId,
                        result: 'Ok',
                        data: record.data,
                    };
                } else {
                    return {
                        recordId: record.recordId,
                        result: 'Dropped',
                        data: record.data,
                    };
                }
            });
        
            console.log(`Processing completed.  Successful records ${output.length}.`);
            callback(null, { records: output });
        };
        

You now have a Kinesis Data Firehose stream that AWS WAF can use for logging records.
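If you'd like to confirm the stream from the command line as well, you can describe it and check that its status is ACTIVE. The stream name below is an example; use the name you noted from the Kinesis dashboard:

$ aws firehose describe-delivery-stream --delivery-stream-name aws-waf-logs-kinesis-waf-logging --query 'DeliveryStreamDescription.DeliveryStreamStatus'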

Cost Considerations

This template sets the Kinesis transformation buffer size to 3MB and buffer interval to 900 seconds (the maximum values) in order to reduce the number of Lambda invocations used to process records. On average, an AWS WAF record is approximately 1-1.5KB. With a buffer size of 3MB, Kinesis will use 1 Lambda invocation per 2000-3000 records. Visit the AWS Lambda website to learn more about pricing.

Step 2: Configure AWS WAF Logging

Now that you have an active Amazon Kinesis Firehose delivery stream, you can configure your AWS WAF WebACL to turn on logging.

  1. From the AWS Management Console, open WAF & Shield.
  2. Select the WebACL for which you would like to enable logging.
  3. Select the Logging tab.
  4. Select the Enable Logging button.
  5. Next to Amazon Kinesis Data Firehose, select the stream that was created from the CloudFormation template in Step 1 (for example, aws-waf-logs-kinesis-waf-stream) and select Create.

Congratulations! Your AWS WAF WebACL is now configured to send records of requests inspected by AWS WAF to Kinesis Data Firehose. From there, records that match the default action will be dropped, and the remaining records will be stored in S3 in JSON format.
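If you prefer the command line over the console for this step, the call below is one way to attach the logging configuration to a CloudFront (global) web ACL; for a regional web ACL you would use the aws waf-regional commands instead. Both ARNs are placeholders:

$ aws waf put-logging-configuration --logging-configuration '{"ResourceArn":"<webacl-arn>","LogDestinationConfigs":["<firehose-delivery-stream-arn>"]}' --region us-east-1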

Below is a sample of the logs generated from this example. Notice that there are only blocked records in the logs.
 

Figure 3: Sample logs

Conclusion

In this blog, I’ve provided you with a CloudFormation template to generate a Kinesis Data Firehose stream that can be used to log requests blocked by AWS WAF, omitting requests matching the default action. By omitting the default action, I have reduced the number of log records that must be reviewed to identify bad actors, tune new WAF rules, and/or root cause false positives. For unblocked traffic, consider using CloudFront’s access logs with Amazon Athena or CloudWatch Logs Insights to query and analyze the data. To learn more about AWS WAF logs, read our developer guide for AWS WAF.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about or issues with AWS WAF, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tino Tran

Tino is a Senior Edge Specialized Solutions Architect based out of Florida. His main focus is to help companies deliver online content in a secure, reliable, and fast way using AWS Edge Services. He is an experienced technologist with a background in software engineering, content delivery networks, and security.

AWS Security Profiles: CJ Moses, Deputy CISO and VP of Security Engineering

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profiles-cj-moses-deputy-ciso-and-vp-of-security-engineering/

We recently sat down with CJ Moses, Deputy Chief Information Security Officer (CISO), to learn about his day-to-day as a cybersecurity executive. He also shared more about his passion for racecar driving and why AWS is partnering with the SRO GT World Challenge America series this year.


How long have you been with AWS, and what is your role?

I’ve been with AWS since December 2007. I came to AWS from the FBI, along with current AWS CISO Steve Schmidt and VP/Distinguished Engineer Eric Brandwine. Together, we started the east coast AWS office. I’m now the Deputy CISO and VP of Security Engineering at AWS.

What excites you most about your role?

I like that every day brings something new and different. In the security space, it’s a bit of a cat and mouse game. It’s our job to be very much ahead of the adversaries. Continuing to engineer and innovate to keep our customers’ data secure allows us to do that. I also love providing customers things that didn’t exist in the past. The very first initiative that Steve, Eric and I worked on when we came from the government sector was Amazon Virtual Private Cloud (Amazon VPC), as well as the underlying network that gives customers virtually isolated network environments. By the end of 2011, the entire Amazon.com web fleet was running on this VPC infrastructure. Creating this kind of scalable offering and allowing our customers to use it was, as we like to say, making history.

We continue to work on new features and services that make it easier to operate more securely in the cloud than in on-premises datacenters. Over the past several years, we’ve launched services like Amazon GuardDuty (threat detection), Amazon Macie (sensitive data classification), and most recently AWS Security Hub (comprehensive security and compliance status monitoring). I love working for a company that’s committed to paying attention to customer feedback—and then innovating to not only meet those needs, but exceed them.

What’s the most challenging part of your role?

Juggling all the different aspects of it. I have responsibility for a lot of things, from auditing the physical security of our data centers to the type of hypervisor we need to use when thinking about new services and features. In the past, it was a Xen hypervisor based on open source software, but today AWS has built hardware and software components to eliminate the previous virtualization overhead. The Nitro system, as we call it, has had performance, availability, and security engineered into it from the earliest design phases, which is the right way to do it. Being able to go that full breadth of physical to virtual is challenging, but these challenges are what energize and drive me.

You’re an avid racecar driver. Tell us a bit about why you got into racing. What are the similarities between racecar driving and your job?

I got into racecar driving years ago, by accident. I bought an Audi A4 and it came with a membership to the Quattro club and they sent me a flyer for a track day, which is essentially a “take your streetcar to the track” event. I was hooked and began spending more and more time at the track. I’ve found that racecar drivers are extremely type A and very competitive. And because Amazon is very much a driven, fast-moving company, I think that what you need to have in place to succeed at Amazon is similar to what you need to be great on the racetrack. I like to tell my AWS team that they should be tactically impatient, yet strategically patient, and that applies to motorsports equally. You can’t win the race if you wreck in the first turn, but you also can’t win if you never get off the starting line.

Another similarity is the need to be laser-focused on the task at hand. In both environments, you need to be able to clear your mind of distractions and think from a new perspective. At AWS, our customer obsession drives what we do, the services and offerings we create, and our company culture. When I get in a racecar, there’s no time to think about anything except what’s at hand. When I’m streaming down the straightaway doing 180 mph, I need to focus on when to hit the brakes or when to make the next turn. When I get out of that car, I can then re-focus and bring new perspective to work and life.

AWS is the official cloud and machine learning provider of the SRO GT World Challenge America series this year. What drove the decision to become a partner?

We co-sponsored executive summits with our partner, CrowdStrike, at the SRO Motorsports race venues last year and are doing the same thing this year. But this year, we thought that it made sense to increase brand awareness and gain access to the GT Paddock Club for our guests. Paddock access allows you to see the cars up close and talk to drivers. It’s like a backstage pass at a concert.

In the paddock, every single car and team has sponsors or is self-funded—it’s like a small business-to-business environment. During our involvement last year, we didn’t see those businesses connecting with each other. What we hope to do this year is elevate the cybersecurity learning experience. We’re bringing in cybersecurity executives from across a range of companies to start dialogues with one another, with us, and with the companies represented in the paddock. The program is designed to build meaningful relationships and cultivate a shared learning experience on cybersecurity and AWS services in a setting where we can provide a once-in-a-lifetime experience for our guests. The cybersecurity industry is driven by trust-based relationships and genuinely being there for our customers. I believe our partnership with the SRO GT World Challenge series will provide a platform that helps us reinforce this.

What’s the connection between racecars and cloud security? How are AWS Security services being used at the racetrack?

With racing, there are tremendous workloads, such as telemetry and data acquisition, that you can stream from a car—essentially, hundreds of channels of data. There are advanced processing requirements for computational fluid dynamics, for example, both of air dynamics around the outside of the car and of air intake into and exhaust out of the engine. All these workloads and all this data are proprietary to the racing teams: The last thing you want is a competing racing team getting that data. This issue is analogous to data protection concerns amongst today’s traditional businesses. Also similar to traditional businesses, many racing teams need to be able to share data with each other in specific ways, to meet specific needs. For example, some teams might have multiple cars and drivers. Each of those drivers will need varying levels of access to data. AWS enables them to share that data in isolation, ensuring teams share only what’s needed. This kind of isolation would be difficult to achieve with a traditional data center or in many other environments.

AWS is also being used in new ways by GT World Challenge to help racecar drivers and partners make more real-time, data-driven decisions. For the first time, drivers and other racing partners will be able to securely stream telemetry directly to the AWS cloud. This will help drivers better analyze their driving and which parts of the course they need to improve upon. Crew chiefs and race engineers will have the data along with the advanced analytics to help them make informed decisions, such as telling drivers when it’s the most strategic time to make a pit stop.

This data will also help enhance the fan experience later this year. Spectators will be privy to some of the data being streamed through AWS and used by drivers, giving them a more intimate understanding of the velocity at which decisions need to be made on the track. We hope fans will be excited by this innovation.

What do you hope AWS customers will gain from attending GT World Challenge races?

I think the primary value is the opportunity to build relationships with experts and executives in the cybersecurity space, while enjoying the racetrack experience. We want to continue operating at the speed of innovation for our customers, and being able to build trust with them face-to-face helps enable this. We also keenly value the opportunity for customers to provide us feedback to influence how we think and what we offer at AWS, and I believe these events will provide opportunities where these conversations can easily take place.

In addition, we’ll be teasing out information about AWS re:Inforce (our upcoming security conference in Boston this June) at the GT Paddock Club. This includes information about the content, what to expect at the conference, and key dates. For anyone who wants to learn more about this event, I encourage them to visit https://reinforce.awsevents.com/, where you can read about our different session tracks, Steve Schmidt’s keynote, and other educational experiences we have planned.

How are you preparing for the races you’ll be in this year?

I’ve never raced professionally before, so driving the Audi R8 LMS GT4 this season has been a new experience for me. I’ve got my second race coming up this weekend (April 12th-14th), at the Pirelli GT4 Challenge at Long Beach, where AWS is the title sponsor. To prepare at home, I have a racing simulator that uses AWS services, as well as three large monitors, a steering wheel, pedals, and a moving and vibrating seat that I train on. I take what I learn from this simulation to the track in the actual car any chance I get. As the late Jacob Levnon, a VP at AWS, was famous for saying, “There is no compression algorithm for experience.” I also do a lot of mental preparation by reading lap notes, watching videos of professional drivers, and working with my coaches. I’m grateful for the opportunity to be able to race this season and thank those who have helped me on this journey, both at AWS Security and on the racetrack.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profiles: Olivier Klein, Head of Emerging Technologies in the APAC region

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-olivier-klein-head-of-emerging-technologies-in-the-apac-region/

Leading up to AWS Summit Singapore, we’re sharing our conversation with keynote speaker Olivier Klein about his work with emerging technology and about the overlap between “emerging technology” and “cloud security.”


You’re the “Head of Emerging Technologies in the APAC region” on your team at AWS. What kind of work do you do?

I continuously explore new technologies. By “technologies”, I don’t only mean AWS services, but also technologies that exist in the wider market. My goal is to understand how these developments can help our customers chart the course of their own digital transformation. There’s a lot happening—including advances in AWS offerings. I’m seeing evolution in terms of core AWS compute, storage, and database services all the way up to higher-level services such as AI, machine learning services, deep learning, augmented or virtual reality, and even cryptographically verifiable distributed data stores (such as blockchain). My role involves taking in all these various facets of “emerging technology” and answering the question: how do these innovations help our customers solve problems or improve their businesses? Then I work to provide best practices around which type of technology is best at solving which particular type of challenge.

Given the rapid pace of technological development, how do you keep track of what’s happening in the space?

My approach is two-fold. First, there’s the element of exploring new technologies and trying to wrap my own head around them to see how they can be useful. But I’m guided by the Amazon way of approaching a challenge: that is, I work backward from the customer. I try to closely monitor the types of challenges our customers are facing. Based on what I’ve seen first-hand, plus the feedback I’ve received from the rest of my team and from Solutions Architects out in the field on what customers are struggling with, I figure out which technologies would help address those pain points.

I’m a technologist; I get excited by technology. But I don’t believe in using new technologies just for the sake of using them. The technology should solve for a particular business outcome. For example, two of my recent areas of focus are artificial intelligence and machine learning. A lot of companies are either already using some form of machine learning or AI, or looking into it. But it’s not something you should do just for the sake of doing it. You need to figure out the specific business outcomes you want to achieve, and then decide which kinds of technology can help. Maybe machine learning is part of it. In the space of computer vision and natural language processing, I’ve seen a lot of recent advancement that allows you to tackle new use cases and scenarios. But machine learning won’t always be the right kind of technology for you. So my primary focus is on helping customers make sense of what types of tech they should be using to address specific scenarios and solve for specific business outcomes.

What’s your favorite part of your job?

It’s really exciting to see how technology can come to life and solve interesting problems at scale across many different industries, and for customers of all sizes, from startups to medium-sized businesses to large enterprises. They all face interesting challenges, and it’s rewarding to be able to assist in that problem-solving process.

How would you describe the relationship between “emerging tech” and “cloud security”?

Security is a changing landscape. Earlier, I mentioned that machine learning and AI are an area of emerging tech that I focus on and that a lot of customers are getting on top of. I think similar trends are happening in the cloud security space. Traditionally, if you think of “security,” you probably think about physical boundaries, firewalls, and boxes that you need to protect. But when you move to the cloud, you have to rethink that model—the cloud offers all sorts of new capabilities. Take, for example, Amazon Macie, a service that allows you to use machine learning to understand data access patterns, to classify your data, to understand which data sets have personally identifiable information, and to potentially serve as a protection mechanism to ensure your data privacy.

More broadly, a cloud environment fundamentally changes what you’re able to do with security: Everything is programmable. Everything can be event-driven. Everything is code. An entire infrastructure can be put together as code. By this, I mean that you have the ability to detect and understand changes within your environment as they happen. You can have automated rules, automated account configurations, and machine learning algorithms that verify any kind of change. These systems can not only make your environment fully auditable, they can prevent changes as they’re happening, whether that’s a potential threat or an alteration to the environment that could carry security risk. Before this, securing your environment meant going through approvals, setting and configuring servers, routers and firewalls, and putting a lot of boundaries around them. That approach can work, but it doesn’t scale well, and it doesn’t always accommodate this new world where people want to experiment and be agile without compromising on security.

Security remains the most important consideration—but if you move to the cloud, you have a plethora of services that enable you to create a controlled environment where any activity can immediately be checked against your security posture. Ultimately, this allows security professionals to become enablers. They can help people build effectively and securely, instead of the more traditional model of, “Here’s a list of all the things you can’t do.”

What are some of the most common misperceptions that you encounter about cloud security?

The cloud takes away the heavy lifting traditionally associated with security, and I’ve found that for some people, this is a difficult mental shift. AWS removes the entire problem of the physical boundaries and protections that you need to put in place to secure your servers and your data centers, and instead allows you to focus on securing and building applications.

Physical environments tend to foster a more reactive way of thinking about security. For example, you can log everything, and if something goes wrong, you can go back and check the logs to see what happened—but because there are so many manual interactions involved, it’s probably not a fast process. You’re always a little behind. AWS enables you to be much more proactive. For example, you might use AWS CloudTrail, which logs any kind of activity in your account against the entire AWS platform, and you might combine CloudTrail with AWS Config, which allows you to look at any configurational change within your environment and track it over a period of time. Combined, these services allow you to say, “If any change within my environment matches X set of rules, I want to be notified. If the change is compliant with the rules that I’ve set up, great—carry on. If it isn’t, I want to immediately revoke or remove the change, or maybe revoke the permissions or the credentials of the individual responsible for the change.” And we can give you a bunch of predefined rule sets that are ready to be compliant with certain scenarios, or you can build your own. Compare this to a physical data center: If someone goes and cuts a cable, how do you look into that? How long does it take? On AWS, any change can immediately be verified against your rule sets. You can immediately know what happened and can immediately and automatically take action. That’s a fundamental game changer for security—the ability to react as it happens. This difference is something that I really try to emphasize for our customers.

In your experience, how does the cloud adoption landscape differ between APAC and other markets?

In the Asia Pacific market, we have a lot of new companies starting to pop up and build against the entire global ecosystem, and against an entire global platform from their regions, which I think is really exciting. It’s a very fast-moving market. One of the key benefits of using AWS products and services is the tremendous agility that you get. You have the ability to build things fairly easily, create platforms and services at very large scale across multiple geographies, build them up, tear them down, and experiment with them using a plethora of services. I think in 2018, on average, we had three to five new capabilities made available to any developer—to any builder out there—every single day. That’s a fundamental game changer in the way you build systems. It’s really exciting to see what our customers are doing with all the new services and features.

What are some of the challenges—security or otherwise—that you see customers frequently face as they move to the cloud?

I think it comes back to that same challenge of showing customers that they don’t need to simply take an existing model—whether their security infrastructure or anything else—and move it into a cloud environment without making any changes. You can do that, if you want. We provide you with many migration and integration services to do so effectively. But I really encourage customers to ask themselves, “How can I re-architect to optimize the benefits I’m getting from AWS for the specific use cases and applications I’m building?”

I believe that the true benefits of cloud computing come if you build in a way that’s either cloud-native or optimized for best practices. AWS allows you to build applications in a very agile, but also very lean manner. Look at concepts such as containerization, or even the idea of deploying applications that are completely serverless on top of AWS—you basically just deploy the code pieces for your application, we fully run and manage it, and you only pay for the execution time. Or look at storage or databases: traditionally, it might have made sense to put everything into a relational database. But if you want to build in a really agile, scalable, and cost-effective manner, that might not be the best option. And again, AWS provides you with so many choices: you can choose a database that allows you to look at relations, a database that allows you to run at hyper-growth scale across multiple geographies at the same time, a database for key value pair stores, graph-relation or timeseries focused databases, and so on. There are many different ways you can build on AWS to optimize for your particular use case. Initially it might be hard to wrap your head around this new way of building modern applications, but I believe that the benefits in terms of agility, cost effectiveness, and sheer possible hyperscale without headaches are worth it.

You’re one of the keynote speakers at the Singapore Summit. What are some themes you’ll be focusing on?

I’ll be speaking on the first day, which is what we call the Tech Fest. My keynote will primarily focus on the technology behind AWS products and services, and on how we build modern applications and modern data architectures. By that, I mean that we’ll take a look into what modern application architectures look like, how to effectively make use of data and how to build highly-scalable applications that are portable. In the Asia Pacific region, there’s a strong interest in mobile-first or web-first design. So how do we build effectively for those platforms? I’ll use my talk to look at some of the elements of distributed computing: how do you effectively build for a global, large-scale user base? How does distributed computing work? How do you use the appropriate AWS services and techniques to ensure that your last mile, even in remote areas, is done correctly?

I’ll also talk about the concept of data analytics and how to build effective data analysis on top of AWS to get meaningful insights, potentially in real-time. Beyond insights, we’ll have a look at using AI and machine learning to further create better customer experiences. Then I’ll wrap up with a look into robotics. We’ll have a variety of different interesting live demos across all of these topics.

What are you hoping that your audience will do differently as a result of your keynote?

One of the things I want people to take away is that, while there are numerous options for exactly how you build on AWS, there are some very common patterns for applying best practices. Don’t just build your applications and platforms in the old, traditional way of monolithic applications and physical blocks of services and firewalls. Instead, ask yourself, “How do I design modern-day application architectures? How do I make use of the information and data I’m collecting to build a better customer experience? How do I choose the best tools for my use case?” These are all things that we’ll talk about during the keynote. The workshops and bootcamps during the rest of the event are then designed to give you hands-on experience figuring out how to make use of various AWS services and techniques so that you can build in a cloud-native manner.

What else should visitors to Singapore take the time to do or see while they’re there?

I used to live in Singapore (although I currently live in Hong Kong). So if you’re visiting, one thing you should definitely check out is the hawker centers, which are the local food courts, and where you can try some great local delicacies. One of my personal favorites is a dish called bak kut teh. If you’re into an herbal soup experience, you should check it out. And if you’ve never been to Singapore, go to the Marina Bay, take a picture with the Merlion, which is the national symbol of Singapore, and enjoy the wonderful landscape and skyline.

You have an advanced certification with the Professional Association of Diving Instructors. Where is your favorite place to dive?

I live very close to some of the Southeast Asian seas, which have wonderful dive spots all over. It’s hard to pick a favorite. But one that stands out is a place called Sipadan. Sipadan was one of my most amazing dive experiences: I did one of those morning dives where you go out on the boat, the sun is just about to come up, you jump into the sea, and the entire marine world wakes up. It’s a natural marine park, so even if you don’t scuba dive and just snorkel, there’s probably no place you can go to see more fish, and sharks, and turtles.

If you’ve never tried scuba diving, I’d recommend it. Snorkeling is great, but scuba diving gives you a fundamentally different experience. It’s much more calming. While snorkeling, you hear your breathing as you swim around. But if you scuba dive, and you’ve got good control of your buoyancy, you can just hover in the water and quietly watch aquatic life pass around you. With quiet like that, marine life is less afraid and approaches you more easily.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Olivier Klein

Olivier is a hands-on technologist with more than 10 years of experience in the industry and has been working for AWS across APAC and Europe to help customers build resilient, scalable, secure and cost-effective applications and create innovative and data driven business models.

Provable security podcast: automated reasoning’s past, present, and future with Moshe Vardi

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/provable-security-podcast-automated-reasonings-past-present-and-future-with-moshe-vardi/

AWS just released the first podcast of a new miniseries called Provable Security: Conversations on Next Gen Security. We published a podcast on provable security last fall, and, due to high customer interest, we decided to bring you a regular peek into this AWS initiative. This series will explore the unique intersection between academia and industry in the cloud security space. Specifically, the miniseries will cover how the traditionally academic field of automated reasoning is being applied at AWS at scale to help provide higher assurances for our customers, regulators, and the broader cloud industry. We’ll talk to individuals whose minds helped shape the history of automated reasoning, as well as learn from engineers and scientists who are applying automated reasoning to help solve pressing security and privacy challenges in the cloud.

This first interview is with Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology. Moshe describes the history of logic, automated reasoning, and formal verification. He discusses modern day applications in software and how principles of automated reasoning underlie core aspects of computer science, such as databases. The podcast interview highlights Moshe’s standout contributions to the formal verification and automated reasoning space, as well as the number of awards he’s received for his work. You can learn more about Moshe Vardi here.

Byron Cook, Director of the AWS Automated Reasoning Group, interviews Moshe and will be featured throughout the miniseries. Byron is leading the provable security initiative at AWS, which is a collection of technologies that provide higher security assurance to customers by giving them a deeper understanding of their cloud architecture.

You can listen to or download the podcast above, or visit this link. We’ve also included links below to many of the technology and references Moshe discusses in his interview.

We hope you enjoy the podcast and this new miniseries! If you have feedback, let us know in the Comments section below.

Automated reasoning public figures:

Automated techniques and algorithms:

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

AWS Security releases IoT security whitepaper

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/aws-security-releases-iot-security-whitepaper/

We’ve published a whitepaper, Securing Internet of Things (IoT) with AWS, to help you understand and address data security as it relates to your IoT devices and the data generated by them. The whitepaper is intended for a broad audience interested in learning about AWS IoT security capabilities at a service-specific level, as well as for compliance, security, and public policy professionals.

IoT technologies connect devices and people in a multitude of ways and are used across industries. For example, IoT can help manage thermostats remotely across buildings in a city, efficiently control hundreds of wind turbines, or operate autonomous vehicles more safely. With all of the different types of devices and the data they transmit, security is a top concern.

The specific challenges that IoT technologies present have piqued the interest of governments worldwide, which are currently assessing what, if any, new regulatory requirements should take shape to keep pace with IoT innovation and the general problem of securing data. As a specific example, this whitepaper covers recent IoT-focused developments published by the National Institute of Standards and Technology (NIST) and the United Kingdom’s Code of Practice.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Momena Cheema

Momena is passionate about evangelizing the security and privacy capabilities of AWS services through the lens of global emerging technology and trends, such as Internet of Things, artificial intelligence, and machine learning through written content, workshops, talks, and educational campaigns. Her goal is to bring the security and privacy benefits of the cloud to customers across industries in both public and private sectors.

How to run AWS CloudHSM workloads on Docker containers

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-docker-containers/

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.

CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (KMS) and AWS Certificate Manager Private Certificate Authority (ACM PCA). KMS and ACM PCA are fully managed services that are easy to use and integrate. You’ll generally use AWS CloudHSM only if your workload needs a single-tenant HSM under your own control, or if you need cryptographic algorithms that aren’t available in the fully-managed alternatives.

CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), or Microsoft CryptoNG (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to all HSMs in your cluster. The CloudHSM client runs as a daemon, locally on the same Amazon Elastic Compute Cloud (EC2) instance or server as your applications.

The deployment process is straightforward if you’re running your application directly on your compute resource. However, if you want to deploy applications using the HSMs in containers, you’ll need to make some adjustments to the installation and execution of your application and the CloudHSM components it depends on. Docker containers don’t typically include access to an init process like systemd or upstart. This means that you can’t start the CloudHSM client service from within the container using the general instructions provided by CloudHSM. You also can’t run the CloudHSM client service remotely and connect to it from the containers, as the client daemon listens to your application using a local Unix Domain Socket. You cannot connect to this socket remotely from outside the EC2 instance network namespace.

This blog post discusses the workaround that you’ll need in order to configure your container and start the client daemon so that you can utilize CloudHSM-based applications with containers. Specifically, in this post, I’ll show you how to run the CloudHSM client daemon from within a Docker container without needing to start the service. This enables you to use Docker to develop, deploy and run applications using the CloudHSM software libraries, and it also gives you the ability to manage and orchestrate workloads using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Jenkins.

Solution overview

My solution shows you how to create a proof-of-concept sample Docker container that is configured to run the CloudHSM client daemon. When the daemon is up and running, it runs the AESGCMEncryptDecryptRunner Java class, available on the AWS CloudHSM Java JCE samples repo. This class uses CloudHSM to generate an AES key, then it uses the key to encrypt and decrypt randomly generated data.

Note: In my example, you must manually enter the crypto user (CU) credentials as environment variables when running the container. For any production workload, you’ll need to carefully consider how to provide, secure, and automate the handling and distribution of your HSM credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials for your application and security needs.
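For example, one option is to keep the CU credentials out of shell history and deployment scripts by storing them in AWS Secrets Manager and injecting them as environment variables at launch time. The sketch below is only a starting point, not a production pattern: it assumes a hypothetical secret named cloudhsm/cu that stores the user name and password as JSON, and it reuses the docker run command shown later in step 9.

```
# Sketch: pull CU credentials from AWS Secrets Manager instead of typing them
# on the command line. The secret name "cloudhsm/cu" and its JSON keys are
# hypothetical placeholders.
SECRET_JSON=$(aws secretsmanager get-secret-value \
    --secret-id cloudhsm/cu \
    --query SecretString --output text)

HSM_USER=$(echo "$SECRET_JSON" | jq -r .username)
HSM_PASSWORD=$(echo "$SECRET_JSON" | jq -r .password)

# jce_sample_client is the image built later in this walkthrough (step 8)
sudo docker run --env HSM_PARTITION=PARTITION_1 \
    --env HSM_USER="$HSM_USER" \
    --env HSM_PASSWORD="$HSM_PASSWORD" \
    jce_sample_client
```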

Figure 1: Architectural diagram

Figure 1: Architectural diagram

Prerequisites

To implement my solution, I recommend that you have basic knowledge of the following:

  • CloudHSM
  • Docker
  • Java

Here’s what you’ll need to follow along with my example:

  1. An active CloudHSM cluster with at least one active HSM. You can follow the Getting Started Guide to create and initialize a CloudHSM cluster. (Note that for any production cluster, you should have at least two active HSMs spread across Availability Zones.)
  2. An Amazon Linux 2 EC2 instance in the same Amazon Virtual Private Cloud in which you created your CloudHSM cluster. The EC2 instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control access to the HSMs. You can learn about attaching security groups to allow EC2 instances to connect to your HSMs in our online documentation.
  3. A CloudHSM crypto user (CU) account created on your HSM. You can create a CU by following these user guide steps.

Solution details

  1. On your Amazon Linux EC2 instance, install Docker:
    
            # sudo yum -y install docker
            

  2. Start the docker service:
    
            # sudo service docker start
            

  3. Create a new directory and step into it. In my example, I use a directory named “cloudhsm_container.” You’ll use the new directory to configure the Docker image.
    
            # mkdir cloudhsm_container
            # cd cloudhsm_container           
            

  4. Copy the CloudHSM cluster’s CA certificate (customerCA.crt) to the directory you just created. You can find the CA certificate on any working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. This certificate is created during initialization of the CloudHSM Cluster and is needed to connect to the CloudHSM cluster.
  5. In your new directory, create a new file with the name run_sample.sh that includes the contents below. The script starts the CloudHSM client daemon, waits until the daemon process is running and ready, and then runs the Java class that is used to generate an AES key to encrypt and decrypt your data.
    
            #! /bin/bash
    
            # start cloudhsm client
            echo -n "* Starting CloudHSM client ... "
            /opt/cloudhsm/bin/cloudhsm_client /opt/cloudhsm/etc/cloudhsm_client.cfg &> /tmp/cloudhsm_client_start.log &
            
            # wait for startup
            while true
            do
                if grep 'libevmulti_init: Ready !' /tmp/cloudhsm_client_start.log &> /dev/null
                then
                    echo "[OK]"
                    break
                fi
                sleep 0.5
            done
            echo -e "\n* CloudHSM client started successfully ... \n"
            
            # start application
            echo -e "\n* Running application ... \n"
            
            java -ea -Djava.library.path=/opt/cloudhsm/lib/ -jar target/assembly/aesgcm-runner.jar --method environment
            
            echo -e "\n* Application completed successfully ... \n"                      
            

  6. In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
    • The AWS CloudHSM client package.
    • The AWS CloudHSM Java JCE package.
    • OpenJDK 1.8. This is needed to compile and run the Java classes and JAR files.
    • Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
    • The AWS CloudHSM Java JCE samples that will be downloaded and built.
  7. Cut and paste the contents below into Dockerfile.

    Note: Make sure to replace the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command (a sample lookup command follows these steps).

    
            # Use the amazon linux image
            FROM amazonlinux:2
            
            # Install CloudHSM client
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-latest.el7.x86_64.rpm
            
            # Install CloudHSM Java library
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-jce-latest.el7.x86_64.rpm
            
            # Install Java, Maven, wget, unzip and ncurses-compat-libs
            RUN yum install -y java maven wget unzip ncurses-compat-libs
            
            # Create a work dir
            WORKDIR /app
            
            # Download sample code
            RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/master.zip
            
            # unzip sample code
            RUN unzip master.zip
            
            # Change to the create directory
            WORKDIR aws-cloudhsm-jce-examples-master
            
            # Build JAR files
            RUN mvn validate && mvn clean package
            
            # Set HSM IP as an environmental variable
            ENV HSM_IP <insert the IP address of an active CloudHSM instance here>
            
            # Configure cloudhms-client
            COPY customerCA.crt /opt/cloudhsm/etc/
            RUN /opt/cloudhsm/bin/configure -a $HSM_IP
            
            # Copy the run_sample.sh script
            COPY run_sample.sh .
            
            # Run the script
            CMD ["bash","run_sample.sh"]                        
            

  8. Now you’re ready to build the Docker image. Use the following command, with the name jce_sample_client. This command will let you use the Dockerfile you created in step 6 to create the image.
    
            # sudo docker build -t jce_sample_client .
            

  9. To run a Docker container from the Docker image you just created, use the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, refer to the steps in the CloudHSM user guide.)
    
            # sudo docker run --env HSM_PARTITION=PARTITION_1 \
            --env HSM_USER=<user> \
            --env HSM_PASSWORD=<password> \
            jce_sample_client
            

    If successful, the output should look like this:

    
            * Starting cloudhsm-client ... [OK]
            
            * cloudhsm-client started successfully ...
            
            * Running application ...
            
            ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors 
            to the console.
            70132FAC146BFA41697E164500000000
            Successful decryption
                SDK Version: 2.03
            
            * Application completed successfully ...          
            

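If you'd rather not copy the HSM IP for the Dockerfile's HSM_IP value (step 7) from the console, the following sketch shows one way to look it up with the AWS CLI. The --query path reflects the describe-clusters output shape at the time of writing, so verify it against your CLI version.

```
# List the ENI IP addresses of the HSMs in each of your CloudHSM clusters
aws cloudhsmv2 describe-clusters \
    --query 'Clusters[*].Hsms[*].EniIp' \
    --output text
```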
Conclusion

My solution provides an example of how to run CloudHSM workloads on Docker containers. You can use it as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to AWS CloudHSM without compromising on the flexibility that Docker provides for developing, deploying, and running applications. If you have comments about this post, submit them in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Mohamed AboElKheir

Mohamed AboElKheir joined AWS in September 2017 as a Security CSE (Cloud Support Engineer) based in Cape Town. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/enabling-serverless-security-analytics-using-aws-waf-full-logs/

Traditionally, analyzing data logs required you to extract, transform, and load your data before using a number of data warehouse and business intelligence tools to derive business intelligence from that data—on top of maintaining the servers that ran behind these tools.

This blog post will show you how to analyze AWS Web Application Firewall (AWS WAF) logs and quickly build multiple dashboards, without booting up any servers. With the new AWS WAF full logs feature, you can now log all traffic inspected by AWS WAF into Amazon Simple Storage Service (Amazon S3) buckets by configuring Amazon Kinesis Data Firehose. In this walkthrough, you’ll create an Amazon Kinesis Data Firehose delivery stream to which AWS WAF full logs can be sent, and you’ll enable AWS WAF logging for a specific web ACL. Then you’ll set up an AWS Glue crawler job and an Amazon Athena table. Finally, you’ll set up Amazon QuickSight dashboards to help you visualize your web application security. You can use these same steps to build additional visualizations to draw insights from AWS WAF rules and the web traffic traversing the AWS WAF layer. Security and operations teams can monitor these dashboards directly, without needing to depend on other teams to analyze the logs.

The following architecture diagram highlights the AWS services used in the solution:

Figure 1: Architecture diagram

Figure 1: Architecture diagram

AWS WAF is a web application firewall that lets you monitor HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, to Amazon CloudFront or to an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify—such as the IP addresses from which requests originate, or the values of query strings—API Gateway, CloudFront, or the Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked.

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

AWS Glue can be used to run serverless queries against your Amazon S3 data lake. AWS Glue can catalog your S3 data, making it available for querying with Amazon Athena and Amazon Redshift Spectrum. With crawlers, your metadata stays in sync with the underlying data (more details about crawlers later in this post). Amazon Athena and Amazon Redshift Spectrum can directly query your Amazon S3 data lake by using the AWS Glue Data Catalog. With AWS Glue, you access and analyze data through one unified interface without loading it into multiple data silos.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Amazon QuickSight is a business analytics service you can use to build visualizations, perform one-off analysis, and get business insights from your data. It can automatically discover AWS data sources and also works with your own data sources. Amazon QuickSight enables organizations to scale to hundreds of thousands of users and delivers responsive performance by using a robust in-memory engine called SPICE.

SPICE stands for Super-fast, Parallel, In-memory Calculation Engine. SPICE supports rich calculations to help you derive insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources.

Step one: Set up a new Amazon Kinesis Data Firehose delivery stream

  1. In the AWS Management Console, open the Amazon Kinesis Data Firehose service and choose the button to create a new stream.
    1. In the Delivery stream name field, enter a name for your new stream that starts with aws-waf-logs- as shown in the screenshot below. AWS WAF filters all streams starting with the keyword aws-waf-logs when it displays the delivery streams. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.

      Figure 2: Select the delivery stream name and source

      Figure 2: Select the delivery stream name and source

  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Record transformation, choose Disabled.
    Figure 3: Select "Disabled" for record transformations

    Figure 3: Select “Disabled” for record transformations

    1. You’ll have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, you’ll be reading the AWS WAF logs in JSON format, so for Record format conversion, choose Disabled.

      Figure 4: Choose "Disabled" to not convert the JSON object

      Figure 4: Choose “Disabled” to not convert the JSON object

  3. On the Select destination screen, for Destination, choose Amazon S3.
    Figure 5: Choose the destination

    Figure 5: Choose the destination

    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket since you’ll need the bucket name in a later step in this walkthrough.
    2. For Source record S3 backup, choose Disabled, because the destination in this walkthrough is an S3 bucket.

      Figure 6: Enter the S3 bucket name, and select "Disabled" for the Source record S3 backup

      Figure 6: Enter the S3 bucket name, and select “Disabled” for the source record S3 backup

  4. On the next screen, leave the default conditions for Buffer size, Buffer interval, S3 compression and S3 encryption as they are. However, we recommend that you set Error logging to Enabled initially, for troubleshooting purposes.
    1. For IAM role, select Create new or choose. This opens up a new window that will prompt you to create firehose_delivery_role, as shown in the following screenshot. Choose Allow in this window to accept the role creation. This grants the Kinesis Data Firehose service access to the S3 bucket.

      Figure 7: Select "Create new or choose" for IAM Role

      Figure 7: Select “Allow” to create the IAM role “firehose_delivery_role”

  5. On the last step of configuration, review all the options you’ve chosen, and then select Create delivery stream. This will cause the delivery stream to display as “Creating” under Status. In a couple of minutes, the status will change to “Active,” as shown in the below screenshot.

    Figure 8: Review the options you selected

    Figure 8: Review the options you selected
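If you prefer to script this setup, the console steps above can also be expressed with the AWS CLI. The sketch below is a minimal equivalent, assuming the S3 bucket and an IAM role equivalent to firehose_delivery_role already exist; the stream name, bucket name, role ARN, and account ID are placeholders.

```
# Create a Direct PUT delivery stream whose name starts with "aws-waf-logs-",
# delivering to an existing S3 bucket (placeholder names and ARNs).
aws firehose create-delivery-stream \
    --delivery-stream-name aws-waf-logs-example \
    --delivery-stream-type DirectPut \
    --extended-s3-destination-configuration \
        'RoleARN=arn:aws:iam::111122223333:role/firehose_delivery_role,BucketARN=arn:aws:s3:::example-waf-logs-bucket'
```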

Step two: Enable AWS WAF logging for a specific Web ACL

  1. From the AWS Management Console, open the AWS WAF service and choose Web ACLs. Open your Web ACL resource, which can either be deployed on a CloudFront distribution or on an Application Load Balancer.
    1. Choose the Web ACL for which you want to enable logging. (In the below screenshot, we’ve selected a Web ACL in the US East Region.)
    2. On the Logging tab, choose Enable Logging.

      Figure 9: Choose "Enable Logging"

      Figure 9: Choose “Enable Logging”

  2. The next page displays all the delivery streams that start with aws-waf-logs. Choose the Amazon Kinesis Data Firehose delivery stream that you created for AWS WAF logs at the start of this walkthrough. (In the screenshot below, our example stream name is “aws-waf-logs-us-east-1”.)
    1. You can also choose to redact certain fields that you wish to exclude from being captured in the logs. In this walkthrough, you don’t need to choose any fields to redact.
    2. Select Create.

      Figure 10: Choose your delivery stream, and select "Create"

      Figure 10: Choose your delivery stream, and select “Create”

After a couple of minutes, you’ll be able to inspect the S3 bucket that you defined in the Kinesis Data Firehose delivery stream. The log files are created in directories by year, month, day, and hour.
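You can also make the same association programmatically. For a global web ACL (one attached to a CloudFront distribution), a minimal sketch looks like the following; the web ACL ARN and delivery stream ARN are placeholders, and a regional web ACL would use the waf-regional commands instead.

```
# Enable AWS WAF (classic) logging for a global web ACL, sending full logs
# to the Kinesis Data Firehose delivery stream created in step one.
# Both ARNs below are placeholders.
aws waf put-logging-configuration \
    --logging-configuration '{
        "ResourceArn": "arn:aws:waf::111122223333:webacl/a1b2c3d4-example",
        "LogDestinationConfigs": [
            "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-example"
        ]
    }'
```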

Step three: Set up an AWS Glue crawler job and Amazon Athena table

The purpose of a crawler within your Data Catalog is to traverse your data stores (such as S3) and extract the metadata fields of the files. The output of the crawler consists of one or more metadata tables that are defined in your Data Catalog. When the crawler runs, the first classifier in your list to successfully recognize your data store is used to create a schema for your table. AWS Glue provides built-in classifiers to infer schemas from common files with formats that include JSON, CSV, and Apache Avro.

  1. In the AWS Management Console, open the AWS Glue service and choose Crawlers to set up a crawler job.
  2. Choose Add crawler to launch a wizard to set up the crawler job. For Crawler name, enter a relevant name. Then select Next.

    Figure 11: Enter "Crawler name," and select "Next"

    Figure 11: Enter “Crawler name,” and select “Next”

  3. For Choose a data store, select S3 and include the path of the S3 bucket that stores your AWS WAF logs, which you made note of in step 1.3. Then choose Next.

    Figure 12: Choose a data store

    Figure 12: Choose a data store

  4. When you’re given the option to add another data store, choose No.
  5. Then, choose Create an IAM role and enter a name. The role grants access to the S3 bucket for the AWS Glue service to access the log files.

    Figure 13: Choose "Create an IAM role," and enter a name

    Figure 13: Choose “Create an IAM role,” and enter a name

  6. Next, set the frequency to Run on demand. You can also schedule the crawler to run periodically to make sure any changes in the file structure are reflected in your data catalog.

    Figure 14: Set the "Frequency" to "Run on demand"

    Figure 14: Set the “Frequency” to “Run on demand”

  7. For output, choose the database in which the Athena table is to be created and add a prefix to identify your table name easily. Select Next.

    Figure 15: Choose the database, and enter a prefix

    Figure 15: Choose the database, and enter a prefix

  8. Review all the options you’ve selected for the crawler job and complete the wizard by selecting the Finish button.
  9. Now that the crawler job parameters are set up, on the left panel of the console, choose Crawlers to select your job and then choose Run crawler. The job creates an Amazon Athena table. The duration depends on the size of the log files.

    Figure 16: Choose "Run crawler" to create an Amazon Athena table

    Figure 16: Choose “Run crawler” to create an Amazon Athena table

  10. To see the Amazon Athena table created by the AWS Glue crawler job, from the AWS Management Console, open the Amazon Athena service. You can filter by your table name prefix.
      1. To view the data, choose Preview table. This displays the table data with certain fields showing data in JSON object structure.
    Figure 17: Choose "Preview table" to view the data

    Figure 17: Choose “Preview table” to view the data
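Before moving on to QuickSight, you can sanity-check the new table with a quick query in the Athena console. The sketch below counts blocked requests by source country; it assumes the sampledb.jsonwaflogs_useast1 table name used elsewhere in this post, so substitute the database and table name that your crawler actually created.

```
-- Count blocked requests by source country in the crawled AWS WAF logs
SELECT httprequest.country AS country,
       count(*) AS blocked_requests
FROM sampledb.jsonwaflogs_useast1
WHERE action = 'BLOCK'
GROUP BY httprequest.country
ORDER BY blocked_requests DESC
LIMIT 20;
```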

Step four: Create visualizations using Amazon QuickSight

  1. From the AWS Management Console, open Amazon QuickSight.
  2. In the Amazon QuickSight window, in the top left, choose New Analysis. Choose New Data set, and for the data source choose Athena. Enter an appropriate name for the data source and choose Create data source.

    Figure 18: Enter the "Data source name," and choose "Create data source"

    Figure 18: Enter the “Data source name,” and choose “Create data source”

  3. Next, choose Use custom SQL to extract all the fields in the JSON object using the following SQL query:
    
        ```
        with d as (select
        waf.timestamp,
            waf.formatversion,
            waf.webaclid,
            waf.terminatingruleid,
            waf.terminatingruletype,
            waf.action,
            waf.httpsourcename,
            waf.httpsourceid,
            waf.HTTPREQUEST.clientip as clientip,
            waf.HTTPREQUEST.country as country,
            waf.HTTPREQUEST.httpMethod as httpMethod,
            map_agg(f.name,f.value) as kv
        from sampledb.jsonwaflogs_useast1 waf,
        UNNEST(waf.httprequest.headers) as t(f)
        group by 1,2,3,4,5,6,7,8,9,10,11)
        select d.timestamp,
            d.formatversion,
            d.webaclid,
            d.terminatingruleid,
            d.terminatingruletype,
            d.action,
            d.httpsourcename,
            d.httpsourceid,
            d.clientip,
            d.country,
            d.httpMethod,
            d.kv['Host'] as host,
            d.kv['User-Agent'] as UA,
            d.kv['Accept'] as Acc,
            d.kv['Accept-Language'] as AccL,
            d.kv['Accept-Encoding'] as AccE,
            d.kv['Upgrade-Insecure-Requests'] as UIR,
            d.kv['Cookie'] as Cookie,
            d.kv['X-IMForwards'] as XIMF,
            d.kv['Referer'] as Referer
        from d;
        ```        
        

  4. To extract individual fields, copy the previous SQL query and paste it in the New custom SQL box, then choose Edit/Preview data.
    Figure 19: Paste the SQL query in "New custom SQL query"

    Figure 19: Paste the SQL query in “New custom SQL query”

    1. In the Edit/Preview data view, for Data source, choose SPICE, then choose Finish.

      Figure 20: Choose "Spice" and then "Finish"

      Figure 20: Choose “Spice” and then “Finish”

  5. Back in the Amazon Quicksight console, under the Fields section, select the drop-down menu and change the data type to Date.

    Figure 21: In the Amazon Quicksight console, change the data type to "Date"

    Figure 21: In the Amazon Quicksight console, change the data type to “Date”

  6. After you see the Date column appear, enter an appropriate name for the visualizations at the top of the page, then choose Save.

    Figure 22: Enter the name for the visualizations, and choose "Save"

    Figure 22: Enter the name for the visualizations, and choose “Save”

  7. You can now create various visualization dashboards with multiple visual types by using the drag-and-drop feature. You can drag and drop combinations of fields such as Action, Client IP, Country, Httpmethod, and User Agents. You can also add filters on Date to view dashboards for a specific timeline. Here are some sample screenshots:
    Figure 23: Visualization dashboard samples

    Figure 23a: Visualization dashboard samples

    Figure 23: Visualization dashboard samples

    Figure 23b: Visualization dashboard samples

    Figure 23: Visualization dashboard samples

    Figure 23c: Visualization dashboard samples

    Figure 23: Visualization dashboard samples

    Figure 23d: Visualization dashboard samples

Conclusion

You can stream AWS WAF full logs to Amazon S3 buckets by configuring Amazon Kinesis Data Firehose, and analyze the logs as they arrive. You can further enhance this solution by automating the streaming of data and using AWS Lambda for any data transformations based on your specific requirements. Using Amazon Athena and Amazon QuickSight makes it easy to analyze logs and build visualizations and dashboards for executive leadership teams. Using these solutions, you can go serverless and let AWS do the heavy lifting for you.

Author photo

Umesh Kumar Ramesh

Umesh is a Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects and topical workshops, and leads implementation projects for various AWS customers. He holds a Bachelor’s degree in Computer Science & Engineering from National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Author photo

Muralidhar Ramarao

Muralidhar is a Data Engineer with the Amazon Payment Products Machine Learning Team. He has a Bachelor’s degree in Industrial and Production Engineering from the National Institute of Engineering, Mysore, India. Outside of work, he loves to hike. You will find him with his camera or snapping pictures with his phone, and always looking for his next travel destination.

How to use service control policies to set permission guardrails across accounts in your AWS Organization

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-to-set-permission-guardrails-across-accounts-in-your-aws-organization/

AWS Organizations provides central governance and management for multiple accounts. Central security administrators use service control policies (SCPs) with AWS Organizations to establish controls that all IAM principals (users and roles) adhere to. Now, you can use SCPs to set permission guardrails with the fine-grained control supported in the AWS Identity and Access Management (IAM) policy language. This makes it easier for you to fine-tune policies to meet the precise requirements of your organization’s governance rules.

Now, using SCPs, you can specify Conditions, Resources, and NotAction to deny access across accounts in your organization or organizational unit. For example, you can use SCPs to restrict access to specific AWS Regions, or prevent your IAM principals from deleting common resources, such as an IAM role used for your central administrators. You can also define exceptions to your governance controls, restricting service actions for all IAM entities (users, roles, and root) in the account except a specific administrator role.

To implement permission guardrails using SCPs, you can use the new policy editor in the AWS Organizations console. This editor makes it easier to author SCPs by guiding you to add actions, resources, and conditions. In this post, I review SCPs, walk through the new capabilities, and show how to construct an example SCP you can use in your organization today.

Overview of Service Control Policy concepts

Before I walk through some examples, I’ll review a few features of SCPs and AWS Organizations.

SCPs offer central access controls for all IAM entities in your accounts. You can use them to enforce the permissions you want everyone in your business to follow. Using SCPs, you can give your developers more freedom to manage their own permissions because you know they can only operate within the boundaries you define.

You create and apply SCPs through AWS Organizations. When you create an organization, AWS Organizations automatically creates a root, which forms the parent container for all the accounts in your organization. Inside the root, you can group accounts in your organization into organizational units (OUs) to simplify management of these accounts. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form a hierarchical structure. You can attach SCPs to the organization root, OUs, and individual accounts. SCPs attached to the root and OUs apply to all OUs and accounts inside of them.
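
If you manage your organization programmatically, the same hierarchy is reflected in the AWS Organizations API. Here is a minimal sketch, using the AWS SDK for Python (Boto3), that looks up the root, creates an OU beneath it, and attaches an existing SCP to that OU; the OU name and policy ID are placeholders, not values from this post.

    import boto3

    # Minimal sketch: navigate the organization hierarchy and attach an SCP.
    # The OU name and the policy ID are placeholders.
    org = boto3.client("organizations")

    # Every organization has a single root, the parent container for all accounts.
    root_id = org.list_roots()["Roots"][0]["Id"]

    # Group accounts by creating an OU under the root (OUs can also nest inside OUs).
    ou = org.create_organizational_unit(ParentId=root_id, Name="Workloads")
    ou_id = ou["OrganizationalUnit"]["Id"]

    # SCPs can be attached to the root, to an OU, or to an individual account.
    # Attaching at the OU level applies the guardrail to every account inside it.
    org.attach_policy(PolicyId="p-examplepolicyid", TargetId=ou_id)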

SCPs use the AWS Identity and Access Management (IAM) policy language; however, they do not grant permissions. SCPs enable you to set permission guardrails by defining the maximum available permissions for IAM entities in an account. If an SCP denies an action for an account, none of the entities in the account can take that action, even if their IAM permissions allow them to do so. The guardrails set in SCPs apply to all IAM entities in the account, including all users, roles, and the account root user.

Policy Elements Available in SCPs

The table below summarizes the IAM policy language elements available in SCPs. You can read more about the different IAM policy elements in the IAM JSON Policy Reference.

The Supported Statement Effect column describes the effect type you can use with each policy element in SCPs.

Policy Element | Definition | Supported Statement Effect
Statement | Main element for a policy. Each policy can have multiple statements. | Allow, Deny
Sid | (Optional) Friendly name for the statement. | Allow, Deny
Effect | Define whether an SCP statement allows or denies actions in an account. | Allow, Deny
Action | List the AWS actions the SCP applies to. | Allow, Deny
NotAction (New) | (Optional) List the AWS actions exempt from the SCP. Used in place of the Action element. | Deny
Resource (New) | List the AWS resources the SCP applies to. | Deny
Condition (New) | (Optional) Specify conditions for when the statement is in effect. | Deny

Note: Some policy elements are only available in SCPs that deny actions.

You can use the new policy elements in new or existing SCPs in your organization. In the next section, I use the new elements to create a SCP using the AWS Organizations console.

Create an SCP in the AWS Organizations console

In this section, you’ll create an SCP that restricts IAM principals in accounts from making changes to a common administrative IAM role created in all accounts in your organization. Imagine your central security team uses these roles to audit and make changes to AWS settings. For the purposes of this example, you have a role in all your accounts named AdminRole that has the AdministratorAccess managed policy attached to it. Using an SCP, you can restrict all IAM entities in the account from modifying AdminRole or its associated permissions. This helps you ensure this role is always available to your central security team. Here are the steps to create and attach this SCP.

  1. Ensure you’ve enabled all features in AWS Organizations and SCPs through the AWS Organizations console.
  2. In the AWS Organizations console, select the Policies tab, and then select Create policy.

    Figure 1: Select “Create policy” on the “Policies” tab

  3. Give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: DenyChangesToAdminRole
    • Description: Prevents all IAM principals from making changes to AdminRole.

     

    Figure 2: Give the policy a name and description

  4. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.

    Figure 3: SCP editor tool

  5. Change the Statement ID to describe what the statement does. For this example, I reused the name of the policy, DenyChangesToAdminRole, because this policy has only one statement.

    Figure 4: Change the Statement ID

  6. Next, add the actions you want to restrict. Using the left panel, select the IAM service. You’ll see a list of actions. To learn about the details of each action, you can hover over the action with your mouse. For this example, we want to allow principals in the account to view the role, but restrict any actions that could modify or delete it. We use the new NotAction policy element to deny all actions except the view actions for the role. Select the following view actions from the list:
    • GetContextKeysForPrincipalPolicy
    • GetRole
    • GetRolePolicy
    • ListAttachedRolePolicies
    • ListInstanceProfilesForRole
    • ListRolePolicies
    • ListRoleTags
    • SimulatePrincipalPolicy
  7. Now position your cursor at the Action element and change it to NotAction. After you perform the steps above, your policy should look like the one below.

    Figure 5: An example policy

  8. Next, apply these controls to only the AdminRole role in your accounts. To do this, use the Resource policy element, which now allows you to provide specific resources.
      1. On the left, near the bottom, select the Add Resources button.
      2. In the prompt, select the IAM service from the dropdown menu.
      3. Select the role as the resource type, and then type “arn:aws:iam::*:role/AdminRole” in the resource ARN prompt.
      4. Select Add resource.

    Note: The AdminRole has a common name in all accounts, but the account IDs will be different for each individual role. To simplify the policy statement, use the * wildcard in place of the account ID to account for all roles with this name regardless of the account.

  9. Your policy should look like this:
    
    {    
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyChangesToAdminRole",
          "Effect": "Deny",
          "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags",
            "iam:SimulatePrincipalPolicy"
          ],
          "Resource": [
            "arn:aws:iam::*:role/AdminRole"
          ]
        }
      ]
    }
    

  10. Select the Save changes button to create your policy. You can see the new policy in the Policies tab.

    Figure 6: The new policy on the “Policies” tab

  11. Finally, attach the policy to the AWS account where you want to apply the permissions.
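
If you script your account setup, this last attachment step can also be done through the AWS Organizations API. Here is a minimal sketch using the AWS SDK for Python (Boto3) that looks up the policy by name and attaches it; the account ID is a placeholder.

    import boto3

    # Minimal sketch: find the SCP created above by name and attach it to a
    # member account. The account ID is a placeholder; TargetId can also be
    # an OU ID or the organization root ID.
    org = boto3.client("organizations")

    policies = org.list_policies(Filter="SERVICE_CONTROL_POLICY")["Policies"]
    policy_id = next(p["Id"] for p in policies if p["Name"] == "DenyChangesToAdminRole")

    org.attach_policy(PolicyId=policy_id, TargetId="111122223333")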

When you attach the SCP, it prevents changes to the role’s configuration. The central security team that uses the role might want to make changes later on, so you may want to allow the role itself to modify its own configuration. I’ll demonstrate how to do this in the next section.

Grant an exception to your SCP for an administrator role

In the previous section, you created a SCP that prevented all principals from modifying or deleting the AdminRole IAM role. Administrators from your central security team may need to make changes to this role in your organization, without lifting the protection of the SCP. In this next example, I build on the previous policy to show how to exclude the AdminRole from the SCP guardrail.

  1. In the AWS Organizations console, select the Policies tab, select the DenyChangesToAdminRole policy, and then select Policy editor.
  2. Select Add Condition. You’ll use the new Condition element of the policy, using the aws:PrincipalARN global condition key, to specify the role you want to exclude from the policy restrictions.
  3. The aws:PrincipalARN condition key returns the ARN of the principal making the request. You want to ignore the policy statement if the requesting principal is the AdminRole. Using the StringNotLike operator, assert that this SCP is in effect if the principal ARN is not the AdminRole. To do this, fill in the following values for your condition.
    1. Condition key: aws:PrincipalARN
    2. Qualifier: Default
    3. Operator: StringNotLike
    4. Value: arn:aws:iam::*:role/AdminRole
  4. Select Add condition. The following policy will appear in the edit window.
    
    {    
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyChangesToAdminRole",
          "Effect": "Deny",
          "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags"
          ],
          "Resource": [
            "arn:aws:iam::*:role/AdminRole"
          ],
          "Condition": {
            "StringNotLike": {
              "aws:PrincipalARN":"arn:aws:iam::*:role/AdminRole"
            }
          }
        }
      ]
    }
    
    

  5. After you validate the policy, select Save. If you already attached the policy in your organization, the changes will immediately take effect.

Now, the SCP denies all principals in the account from updating or deleting the AdminRole, except the AdminRole itself.
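
If you keep your SCPs in version control and apply them outside the console, you can push the same revision with the UpdatePolicy API. Here is a minimal sketch using the AWS SDK for Python (Boto3), assuming the policy document shown above is saved to a local file; the file name and policy ID are placeholders.

    import boto3

    # Minimal sketch: push the revised policy document (shown above) through the
    # API instead of the console editor. The file name and policy ID are placeholders.
    org = boto3.client("organizations")

    with open("deny-changes-to-admin-role.json") as f:
        org.update_policy(PolicyId="p-examplepolicyid", Content=f.read())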

Next steps

You can now use SCPs to restrict access to specific resources, or define conditions for when SCPs are in effect. You can use the new functionality in your existing SCPs today, or create new permission guardrails for your organization. I walked through one example in this blog post, and there are additional use cases for SCPs that you can explore in the documentation. Below are a few use cases we have heard from customers that you may want to look at.

  • Account may only operate in certain AWS regions (example)
  • Account may only deploy certain EC2 instance types (example)
  • Account requires MFA to be enabled before taking an action (example)

You can start applying SCPs using the AWS Organizations console, CLI, or API. See the Service Control Policies Documentation or the AWS Organizations Forums for more information about SCPs, how to use them in your organization, and additional examples.
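
As a concrete starting point for the first use case above, here is a hedged sketch, using the AWS SDK for Python (Boto3), that creates and attaches a Region guardrail SCP. The approved Regions, the deliberately short list of exempted global services, and the target OU ID are all assumptions to adapt to your own requirements; they are not values from this post.

    import json
    import boto3

    # Hedged sketch: deny all actions outside two approved Regions, while exempting
    # a few services that are not Region-scoped. The Regions, the exempted services,
    # and the OU ID are assumptions -- adjust them before using anything like this.
    org = boto3.client("organizations")

    region_guardrail = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "cloudfront:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }],
    }

    policy = org.create_policy(
        Name="DenyOutsideApprovedRegions",
        Description="Denies actions outside approved Regions, with exemptions for global services.",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(region_guardrail),
    )

    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-examplerootid-exampleouid",  # placeholder OU ID
    )

Because SCPs never grant permissions, a guardrail like this only narrows what the IAM policies in your member accounts can allow.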

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.