Tag Archives: Security, Identity & Compliance

Definitely not an AWS Security Profile: Corey Quinn, a “Cloud Economist” who doesn’t work here

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/definitely-not-an-aws-security-profile-corey-quinn-a-cloud-economist-who-doesnt-work-here/


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You don’t work at AWS, but you do have deep experience with AWS Services. Can you talk about how you developed that experience and the work that you do as a “Cloud Economist?”

I see those sarcastic scare-quotes!

I’ve been using AWS for about a decade in a variety of environments. It sounds facile, but it turns out that being kinda good at something starts with being abjectly awful at it first. Once you break things enough times, you start to learn how to wield them in more constructive ways.

I have a background in SRE-style work and finance. Blending those together into a made-up thing called “Cloud Economics” made sense and focused on a business problem that I can help solve. It starts with finding low-effort cost savings opportunities in customer accounts and quickly transitions into building out costing predictions, allocating spend—and (aligned with security!) building out workable models of cloud governance that don’t get in an engineer’s way.

This all required me to be both broad and deep across AWS’s offerings. Somewhere along the way, I became something of a go-to resource for the community. I don’t pretend to understand how it happened, but I’m incredibly grateful for the faith the broader community has placed in me.

You’re known for your snarky newsletter. When you meet AWS employees, how do they tend to react to you?

This may surprise you, but the most common answer by far is that they have no idea who I am.

It turns out AWS employs an awful lot of people, most of whom have better things to do than suffer my weekly snarky slings and arrows.

Among folks who do know who I am, the response has been nearly universal appreciation. It seems that the newsletter is received in the spirit in which I intend it—namely, that 90–95% of what AWS does is awesome. The gap between that and perfection offers boundless opportunities for constructive feedback—and also hilarity.

The funniest reaction I ever got was when someone at a Summit registration booth saw “Last Week in AWS” on my badge and assumed I was an employee serving out the end of his notice period.

“Senior RageQuit Engineer” at your service, I suppose.

You’ve been invited to present during the Leadership Session for the re:Inforce Foundation Track with Beetle. What have you got planned?

Ideally not leaving folks asking incredibly pointed questions about how the speaker selection process was mismanaged! If all goes well, I plan on being able to finish my talk without being dragged off the stage by AWS security!

I kid. But my theory of adult education revolves around needing to grab people’s attention before you can teach them something. For better or worse, my method for doing that has always been humor. While I’m cognizant that messaging to a large audience of security folks requires a delicate touch, I don’t subscribe to the idea that you can’t have fun with it as well.

In short: if nothing else, it’ll be entertaining!

What’s one thing that everyone should stop reading and go do RIGHT NOW to improve their security posture?

Easy. Log into the console of your organization’s master account and enable AWS CloudTrail for all regions and all accounts in your organization. Direct that trail to a locked-down S3 bucket in a completely separate, highly restricted account, and you’ve got a forensic log of all management operations across your estate.

Worst case, you’ll thank me later. Best case, you’ll never need it.
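
For anyone who would rather script that advice than click through the console, here’s a minimal boto3 sketch of the idea. The trail and bucket names are placeholders; the locked-down bucket (with a bucket policy that allows CloudTrail to write to it) is assumed to already exist in your separate logging account, and organization trails require trusted access between AWS Organizations and CloudTrail.

    import boto3

    # Run from the organization's master (management) account.
    cloudtrail = boto3.client("cloudtrail")

    trail = cloudtrail.create_trail(
        Name="org-wide-trail",                       # placeholder trail name
        S3BucketName="my-locked-down-audit-bucket",  # placeholder bucket in a separate, restricted account
        IsMultiRegionTrail=True,                     # capture all regions
        IsOrganizationTrail=True,                    # capture all accounts in the organization
        EnableLogFileValidation=True,                # detect tampering with delivered log files
    )

    # A trail doesn't record anything until logging is started.
    cloudtrail.start_logging(Name=trail["Name"])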

It’s important, so what’s another security thing everyone should do?

Log in to your AWS accounts right now and update your security contact so that it points to your ops folks. It’s not used for marketing; it’s a point of contact for important announcements.

If you’re like many rapid-growth startups, your account is probably pointing to your founder’s personal email address— which means critical account notices are getting lost among Amazon.com sock purchase receipts.

That is not what being “SOC-compliant” means.
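
As a footnote for readers who prefer automation: the security contact can also be set programmatically through the AWS Account Management API (an API newer than this post). Here’s a rough boto3 sketch; all contact details below are placeholders.

    import boto3

    account = boto3.client("account")

    # Point the account's security contact at a monitored ops alias,
    # not a founder's personal inbox. All values below are placeholders.
    account.put_alternate_contact(
        AlternateContactType="SECURITY",
        Name="Security Operations",
        Title="Ops Team",
        EmailAddress="aws-security@example.com",
        PhoneNumber="+1-555-0100",
    )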

From a security perspective, what recent AWS release are you most excited about?

It was largely unheralded, but I was thrilled to see AWS Systems Manager Parameter Store (it’s a great service, though the name could use some work) receive higher API rate limits; it went from 40 to 1,000 requests per second.

This is great for concurrent workloads and makes it likelier that people will manage secrets properly without having to roll their own.

Yes, I know that AWS Secrets Manager is designed around secrets, but KMS-encrypted parameters in Parameter Store also get the job done. If you keep pushing I’ll go back to using Amazon Route 53 TXT records as my secrets database… (Just kidding. Please don’t do this.)
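
For context, the pattern he’s describing looks roughly like the following boto3 sketch: a SecureString parameter is encrypted with KMS at write time and decrypted on read. The parameter name and value here are placeholders.

    import boto3

    ssm = boto3.client("ssm")

    # Store a secret as a KMS-encrypted SecureString parameter.
    ssm.put_parameter(
        Name="/prod/db/password",              # placeholder parameter name
        Value="correct-horse-battery-staple",  # placeholder secret value
        Type="SecureString",
        Overwrite=True,
    )

    # Read it back; WithDecryption tells Parameter Store to decrypt via KMS.
    password = ssm.get_parameter(
        Name="/prod/db/password",
        WithDecryption=True,
    )["Parameter"]["Value"]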

In your opinion, what’s the biggest challenge facing cloud security right now?

The same thing that’s always been the biggest challenge in security: getting people to care before a disaster happens.

We see the same thing in cloud economics. People start caring about monitoring and controlling cloud spend right after a lapse in diligence leaves them with an unpleasant surprise.

Thankfully, with an unexpectedly large bill, you have a number of options. But you don’t get a do-over with a data breach.

The time to care is now—particularly if you don’t think it’s a focus area for you. One thing that excites me about re:Inforce is that it gives an opportunity to reinforce that viewpoint.

Five years from now, what changes do you think we’ll see across the cloud security landscape?

I think we’re already seeing it now. With the advent of things like AWS Security Hub and AWS Control Tower (both currently in preview), security is moving up the stack.

Instead of having to keep track of implementing a bunch of seemingly unrelated tooling and rulesets, higher-level offerings are taking a lot of the error-prone guesswork out of maintaining an effective security posture.

Customers aren’t going to magically reprioritize security on their own. So it’s imperative that AWS continue to strive to meet them where they are.

What are the comparative advantages of being a cloud economist vs. a platypus keeper?

They’re more alike than you might expect. The cloud has sharp edges, but platypodes are venomous.

Of course, large bills are a given in either space.

You sometimes rename or reimagine AWS services. How should the Security Blog rebrand itself?

I think the Security Blog suffers from a common challenge in this space.

It talks about AWS’s security features, releases, and enhancements—that’s great! But who actually identifies as its target market?

Ideally, everyone should; security is everyone’s job, after all.

Unfortunately, no matter what user persona you envision, a majority of the content on the blog isn’t written for that user. This potentially makes it less likely that folks read the important posts that apply to their use cases, which, in turn, reinforces the false narrative that cloud security is both impossibly hard and should be someone else’s job entirely.

Ultimately, I’d like to see it split into different blogs that emphasize CISOs, engineers, and business tracks. It could possibly include an emergency “this is freaking important” feed.

And as to renaming it, here you go: you’d be doing a great disservice to your customers should you name it anything other than “AWS Klaxon.”

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Corey Quinn

Corey is the Cloud Economist at the Duckbill Group. Corey specializes in helping companies fix their AWS bills by making them smaller and less horrifying. He also hosts the AWS Morning Brief and Screaming in the Cloud podcasts and curates Last Week in AWS, a weekly newsletter summarizing the latest in AWS news, blogs, and tools, sprinkled with snark.

Singapore financial services: new resources for customer side of the shared responsibility model

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/singapore-financial-services-new-resources-for-customer-side-of-shared-responsibility-model/

Based on customer feedback, we’ve updated our AWS User Guide to Financial Services Regulations and Guidelines in Singapore whitepaper, as well as our AWS Monetary Authority of Singapore Technology Risk Management Guidelines (MAS TRM Guidelines) Workbook, which is available for download via AWS Artifact. Both resources now include considerations and best practices for the customer portion of the AWS Shared Responsibility Model.

The whitepaper provides considerations for financial institutions as they assess their responsibilities when using AWS services with regard to the MAS Outsourcing Guidelines, MAS TRM Guidelines, and Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide.

The MAS TRM Workbook provides best practices for the customer portion of the AWS Shared Responsibility Model—that is, guidance on how you can manage security in the AWS Cloud. The guidance and best practices are sourced from the AWS Well-Architected Framework.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success. For more information, see the AWS Well-Architected homepage.

The compliance controls provided by the workbook also continue to address the AWS side of the Shared Responsibility Model (security of the AWS Cloud).

View the updated whitepaper here, or download the updated AWS MAS TRM Guidelines Workbook via AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond, from Cx0 to <code>.

AWS Security Profiles: Fritz Kunstler, Principal Consultant, Global Financial Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-fritz-kunstler-principal-consultant-global-financial-services/


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been here for three years. My job is Security Transformation, which is a technical role in AWS Professional Services. It’s a fancy way of saying that I help customers build the confidence and technical capability to run their most sensitive workloads in the AWS Cloud. Much of my work lives at the intersection of DevOps and information security.

Broadly, how does the role of Consultant differ from positions like “Solutions Architect”?

Depth of engagement is one of the main differences. On many customer engagements, I’m involved for three months, or six months, or nine months. I have one customer now that I’ve been working with for more than a year. Consultants are also more integrated—I’m often embedded in the customer’s team, working side-by-side with their employees, which helps me learn about their culture and needs.

What’s your favorite part of your job?

There’s a lot I like about working at Amazon, but a couple of things stand out. First, the people I work with. Amazon culture—and the people who comprise that culture—are amazing. I’m constantly interacting with really smart people who are willing to go out of their way to make good things happen for customers. At companies I’ve worked for in the past, I’ve encountered individuals like this. But being surrounded by so many people who behave like this day in and day out is something special.

The customers that we have the privilege of working with at AWS also represent some very large brands. They serve many, many consumers all over the world. When I help these customers achieve their security and privacy goals, I’m doing something that has an impact on the world at large. I’ve worked in tech my entire career, in roles ranging from executive to coder, but I’ve never had a job that lets me make such a broad impact before. It’s really cool.

What does cloud security mean to you, personally?

I work in Global Financial Services, so my customers are the world’s biggest banks, investment firms, and independent software vendors. These are companies that we all rely on every day, and they put enormous effort into protecting their customers’ data and finances. As I work to support their efforts, I think about it in terms of my wife, kids, parents, siblings—really, my entire extended family. I’m working to protect us, to ensure that the online world we live in is a safer one.

In your opinion, what’s the biggest cloud security challenge facing the Financial Services industry right now?

How to transform the way they do security. It’s not only a technical challenge—it’s a human challenge. For FinServe customers to get the most value out of the cloud, a lot of people need to be willing to change their minds.

Highly regulated customers like financial services firms tend to have sophisticated security organizations already in place. They’ve been doing things effectively in a particular way for quite a while. It takes a lot of evidence to convince them to change their processes—and to convince them that those changes can drive increased value and performance while reducing risk. Security leaders tend to be a skeptical lot, and that has its place, but I think that we should strive to always be the most optimistic people in the room. The cloud lets people experiment with big ideas that may lead to big innovation, and security needs to enable that. If the security leader in the room is always saying no, then who’s going to say yes? That’s the essence of security transformation – developing capabilities that enable your organization to say yes.

What’s a trend you see currently happening in the Financial Services space that you’re excited about?

AWS has been working hard alongside some of our financial services customers for several years. Moving to the cloud is a big transition, and there’s been some FUD—some fear, uncertainty, and doubt—to work through, so not everyone has been able to adopt the cloud as quickly as they might’ve liked. But I feel we’re approaching an inflection point. I’m seeing increasing comfort, increasing awareness, and an increasingly trained workforce among my customers.

These changes, in conjunction with executive recognition that “the cloud” is not only worthwhile, but strategically significant to the business, may signal that we’re close to a breakthrough. These are firms that have the resources to make things happen when they’re ready. I’m optimistic that even the more conservative of our financial services customers will soon be taking advantage of AWS in a big way.

Five years from now, what changes do you think we’ll see across the Financial Services/Cloud Security landscape?

I think cloud adoption will continue to accelerate on the business side. I also expect to see the security orgs within these firms leverage the cloud more for their own workloads – in particular, to integrate AI and machine learning into security operations, and further left in the systems development lifecycle. Security teams still do a lot of manual work to analyze code, policies, logs, and so on. This is critical stuff, but it’s also very time consuming and much of it is ripe for automation. Skilled security practitioners are in high demand. They should be focused on high-value tasks that enable the business. Amazon GuardDuty is just one example of how security teams can use the cloud toward that end.

What’s one thing that people outside of Financial Services can learn from what’s happening in this industry?

As more and more Financial Services customers adopt AWS, I think that it becomes increasingly hard for leaders in other sectors to suggest that the cloud isn’t secure, reliable, or capable enough for any given use case. I love the quote from Capital One’s CIO about why they chose AWS.

You’re leading a re:Inforce session that focuses on “IAM strategy for financial services.” What are some of the unique considerations that the financial services industry faces when it comes to IAM?

Financial services firms and other highly regulated customers tend to invest much more into tools and processes to enforce least privilege and separation of duties, due to regulatory and compliance requirements. Traditional, centralized approaches to implementing those two principles don’t always work well in the cloud, where resources can be ephemeral. If your goal is to enable builders to experiment and fail fast, then it shouldn’t take weeks to get the approvals and access required for a proof-of-concept that can be built in two days.

AWS Identity and Access Management (IAM) capabilities have changed significantly in the past year. Those changes make it easier and safer than ever to do things like delegate administrative access to developers. But they aren’t the sort of high-profile announcement that you’d hear a keynote speaker talk about at re:Invent. So I think a lot of customers aren’t fully aware of them, or of what you can accomplish by combining them with automation and CI/CD techniques.

My talk will offer a strategy and examples for using those capabilities to provide the same level of security—if not a better level of security—without so many of the human reviews and approvals that often become bottlenecks.

What are you hoping that your audience will do differently as a result of attending your session?

I’d like them to investigate and holistically implement the handful of IAM capabilities that we’ll discuss during the session. I also hope that they’ll start working to delegate IAM responsibilities to developers and automate low-value human reviews of policy code. Finally, I think it’s critical to have CI/CD or other capabilities that enable rapid, reliable delivery of updates to IAM policies across many AWS accounts.

Can you talk about some of the recent enhancements to IAM that you’re excited about?

Permissions boundaries and IAM resource tagging are two features that are really powerful and that I don’t see widely used today. In some cases, customers may not even be aware of them. Another powerful and even more recent development is the introduction of conditional support to the service control policy mechanism provided by AWS Organizations.
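
To make the first of those features concrete, here’s a minimal, hypothetical boto3 sketch of the delegation pattern Fritz describes: a permissions boundary caps what a developer-created role can ever do, regardless of the permissions policies attached to it later. The policy contents and names below are illustrative only.

    import json
    import boto3

    iam = boto3.client("iam")

    # A hypothetical boundary that caps any role it's attached to at S3 and DynamoDB access.
    boundary = iam.create_policy(
        PolicyName="developer-permissions-boundary",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": ["s3:*", "dynamodb:*"], "Resource": "*"}],
        }),
    )

    # A role created with this boundary can never exceed it, even if a developer
    # later attaches broader permissions policies to the role.
    iam.create_role(
        RoleName="example-app-role",
        AssumeRolePolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }],
        }),
        PermissionsBoundary=boundary["Policy"]["Arn"],
    )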

You’re an avid photographer: What’s appealing to you about photography? What’s your favorite photo you’ve ever taken?

I’ve always struggled to express myself artistically. I take a very technical, analytical approach to life. I started programming computers when I was six. That’s how I think. Photography is sufficiently technical for me to wrap my brain around, which is how I got started. It took me a long time to begin to get comfortable with the creative aspects. But it fits well with my personality, while enabling expression that I’d never be able to find, say, as a painter.

I won’t claim to be an amazing photographer, but I’ve managed a few really good shots. The photo that comes to mind is one I captured in Bora Bora. There was a guy swimming through a picturesque, sheltered part of the ocean, where a reef stopped the big waves from coming in. This swimmer was towing a surfboard with his dog standing on it, and the sun was going down in the background. The colors were so vibrant it felt like a Disneyland attraction, and from a distance, you could just see a dog on a surfboard. Everything about that moment – where I was, how I was feeling, how surreal it all was, and the fact that I was on a honeymoon with my wife – made for a poignant photo.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Fritz Kunstler

Fritz is a Principal Consultant in AWS Professional Services, specializing in security. His first computer was a Commodore 64, which he learned to program in BASIC from the back of a magazine. Fritz has spent more than 20 years working in tech and has been an AWS customer since 2008. He is an avid photographer and is always one batch away from baking the perfect chocolate chip cookie.

How to securely provide database credentials to Lambda functions by using AWS Secrets Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/

As a solutions architect at AWS, I often assist customers in architecting and deploying business applications using APIs and microservices that rely on serverless services such as AWS Lambda and database services such as Amazon Relational Database Service (Amazon RDS). Customers can take advantage of these fully managed AWS services to unburden their teams from infrastructure operations and other undifferentiated heavy lifting, such as patching, software maintenance, and capacity planning.

In this blog post, I’ll show you how to use AWS Secrets Manager to secure your database credentials and send them to Lambda functions that will use them to connect to and query the backend database service Amazon RDS—without hardcoding the secrets in code or passing them through environment variables. This approach will help you secure last-mile secrets and protect your backend databases. Long-lived credentials need to be managed and regularly rotated to keep access to critical systems secure, so it’s a security best practice to periodically reset your passwords. Manually changing the passwords would be cumbersome, but AWS Secrets Manager helps by managing and rotating the RDS database passwords.

Solution overview

This is sample code: you’ll use an AWS CloudFormation template to deploy the following components to test the API endpoint from your browser:

  • An RDS MySQL database instance on a db.t2.micro instance
  • Two Lambda functions with necessary IAM roles and IAM policies, including access to AWS Secrets Manager:
    • LambdaRDSCFNInit: This Lambda function will execute immediately after the CloudFormation stack creation. It will create an “Employees” table in the database, where it will insert three sample records.
    • LambdaRDSTest: This function will query the Employees table and return the record count in an HTML string format.
  • A RESTful API with a “GET” method on Amazon API Gateway

Here’s the high-level setup of the AWS services that will be created from the CloudFormation stack deployment:
 

Figure 1: Solution architecture

  1. Clients call the RESTful API hosted on Amazon API Gateway
  2. API Gateway invokes the Lambda function
  3. The Lambda function retrieves the database secrets using the Secrets Manager API
  4. The Lambda function connects to the RDS database using database secrets from Secrets Manager and returns the query results
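
The heart of steps 3 and 4 comes down to a few lines of code. Here’s a simplified sketch (the full sample code appears later in this post; the secret name, host, and database name below are placeholders):

    import json
    import boto3
    import pymysql

    # Step 3: fetch the database credentials from Secrets Manager.
    secrets = boto3.client("secretsmanager")
    secret = json.loads(
        secrets.get_secret_value(SecretId="MyRDSInstanceRotationSecret")["SecretString"]
    )

    # Step 4: use those credentials to open the RDS MySQL connection.
    conn = pymysql.connect(
        host="my-rds-endpoint.example.com",  # placeholder host
        user=secret["username"],
        password=secret["password"],
        database="MyDatabase",               # placeholder database name
        connect_timeout=5,
    )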

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/AWS-SecretsManager-Lambda-RDS-blog.

Deploying the sample solution

Set up the sample deployment by selecting the Launch Stack button below. If you haven’t logged into your AWS account, follow the prompts to log in.

By default, the stack will be deployed in the us-east-1 region. If you want to deploy this stack in any other region, download the code from the above GitHub link, place the Lambda code zip file in a region-specific S3 bucket and make the necessary changes in the CloudFormation template to point to the right S3 bucket. (Please refer to the AWS CloudFormation User Guide for additional details on how to create stacks using the AWS CloudFormation console.)
 
Select this image to open a link that starts building the CloudFormation stack

Next, follow these steps to execute the stack:

  1. Leave the default location for the template and select Next.
     
    Figure 2: Keep the default location for the template

  2. On the Specify Details page, you’ll see the parameters pre-populated. These parameters include the name of the database and the database user name. Select Next on this screen.
     
    Figure 3: Parameters on the “Specify Details” page

  3. On the Options screen, select the Next button.
  4. On the Review screen, select both check boxes, then select the Create Change Set button:
     
    Figure 4: Select the check boxes and “Create Change Set”

  5. After the change set creation is completed, choose the Execute button to launch the stack.
  6. Stack creation will take between 10 and 15 minutes. After the stack is created successfully, select the Outputs tab of the stack, then select the link.
     
    Figure 5: Select the link on the “Outputs” tab

    This action will trigger the code in the Lambda function, which will query the “Employees” table in the MySQL database and will return the results count back to the API. You’ll see the following screen as output from the RESTful API endpoint:
     

    Figure 6: Output from the RESTful API endpoint

At this point, you’ve successfully deployed and tested the API endpoint with a backend Lambda function and RDS resources. The Lambda function is able to successfully query the MySQL RDS database and is able to return the results through the API endpoint.

What’s happening in the background?

The CloudFormation stack deployed a MySQL RDS database with a randomly generated password, created as a Secrets Manager secret resource. Once that secret resource has been created, the CloudFormation stack uses a dynamic reference to resolve the value of the password from Secrets Manager when it creates the RDS instance resource. Dynamic references provide a compact, powerful way for you to specify external values that are stored and managed in other AWS services, such as Secrets Manager. The dynamic reference guarantees that CloudFormation will not log or persist the resolved value, keeping the database password safe. The CloudFormation template also creates a Lambda function that automatically rotates the password for the MySQL RDS database every 30 days. Native credential rotation can improve security posture, as it eliminates the need to manually handle database passwords through the lifecycle process.

Below is the CloudFormation code that covers these details:


#This is a Secret resource with a randomly generated password in its SecretString JSON.
MyRDSInstanceRotationSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
        Description: 'This is my rds instance secret'
        GenerateSecretString:
            SecretStringTemplate: !Sub '{"username": "${RDSUserName}"}'
            GenerateStringKey: 'password'
            PasswordLength: 16
            ExcludeCharacters: '"@/\'
        Tags:
        -
            Key: AppName
            Value: MyApp

#This is a RDS instance resource. Its master username and password use dynamic references to resolve values from
#Secrets Manager. The dynamic reference guarantees that CloudFormation will not log or persist the resolved value.
#We use a ref to the Secret resource logical id in order to construct the dynamic reference, since the Secret name is being
#generated by CloudFormation.
MyDBInstance2:
    Type: AWS::RDS::DBInstance
    Properties:
        AllocatedStorage: 20
        DBInstanceClass: db.t2.micro
        DBName: !Ref RDSDBName
        Engine: mysql
        MasterUsername: !Ref RDSUserName
        MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref MyRDSInstanceRotationSecret, ':SecretString:password}}' ]]
        MultiAZ: False
        PubliclyAccessible: False
        StorageType: gp2
        DBSubnetGroupName: !Ref myDBSubnetGroup
        VPCSecurityGroups:
        - !Ref RDSSecurityGroup
        BackupRetentionPeriod: 0
        DBInstanceIdentifier: 'rotation-instance'

#This is a SecretTargetAttachment resource which updates the referenced Secret resource with properties about
#the referenced RDS instance.
SecretRDSInstanceAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
        SecretId: !Ref MyRDSInstanceRotationSecret
        TargetId: !Ref MyDBInstance2
        TargetType: AWS::RDS::DBInstance

#This is a RotationSchedule resource. It configures rotation of the password for the referenced secret using a rotation lambda.
#The first rotation happens at resource creation time, with subsequent rotations scheduled according to the rotation rules.
#We explicitly depend on the SecretTargetAttachment resource being created to ensure that the secret contains all the
#information necessary for rotation to succeed.
MySecretRotationSchedule:
    Type: AWS::SecretsManager::RotationSchedule
    DependsOn: SecretRDSInstanceAttachment
    Properties:
        SecretId: !Ref MyRDSInstanceRotationSecret
        RotationLambdaARN: !GetAtt MyRotationLambda.Arn
        RotationRules:
            AutomaticallyAfterDays: 30

#This is a lambda Function resource. We will use this lambda to rotate secrets.
#For details about rotation lambdas, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
#The example below assumes that the lambda code has been uploaded to an S3 bucket, and that it will rotate a mysql database password.
MyRotationLambda:
    Type: AWS::Serverless::Function
    Properties:
        Runtime: python2.7
        Role: !GetAtt MyLambdaExecutionRole.Arn
        Handler: mysql_secret_rotation.lambda_handler
        Description: 'This is a lambda to rotate MySql user passwd'
        FunctionName: 'cfn-rotation-lambda'
        CodeUri: 's3://devsecopsblog/code.zip'
        Environment:
            Variables:
                SECRETS_MANAGER_ENDPOINT: !Sub 'https://secretsmanager.${AWS::Region}.amazonaws.com'

Verifying the solution

To be certain that everything is set up properly, you can look at the Lambda code that’s querying the database table by following these steps:

  1. Go to the AWS Lambda service page
  2. From the list of Lambda functions, click on the function with the name scm2-LambdaRDSTest-…
  3. You can see the environment variables at the bottom of the Lambda Configuration details screen. Notice that there should be no database password supplied as part of these environment variables:
     
    Figure 7: Environment variables

    
        import sys
        import pymysql
        import boto3
        import botocore
        import json
        import random
        import time
        import os
        import base64  # needed for the SecretBinary code path below
        from botocore.exceptions import ClientError
        
        # rds settings
        rds_host = os.environ['RDS_HOST']
        name = os.environ['RDS_USERNAME']
        db_name = os.environ['RDS_DB_NAME']
        helperFunctionARN = os.environ['HELPER_FUNCTION_ARN']
        
        secret_name = os.environ['SECRET_NAME']
        my_session = boto3.session.Session()
        region_name = my_session.region_name
        conn = None
        
        # Get the service resource.
        lambdaClient = boto3.client('lambda')
        
        
        def invokeConnCountManager(incrementCounter):
            # return True
            response = lambdaClient.invoke(
                FunctionName=helperFunctionARN,
                InvocationType='RequestResponse',
                Payload='{"incrementCounter":' + str.lower(str(incrementCounter)) + ',"RDBMSName": "Prod_MySQL"}'
            )
            retVal = response['Payload']
            retVal1 = retVal.read()
            return retVal1
        
        
        def openConnection():
            print("In Open connection")
            global conn
            password = "None"
            # Create a Secrets Manager client
            session = boto3.session.Session()
            client = session.client(
                service_name='secretsmanager',
                region_name=region_name
            )
            
            # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
            # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
            # We rethrow the exception by default.
            
            try:
                get_secret_value_response = client.get_secret_value(
                    SecretId=secret_name
                )
                print(get_secret_value_response)
            except ClientError as e:
                print(e)
                if e.response['Error']['Code'] == 'DecryptionFailureException':
                    # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                    # An error occurred on the server side.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidParameterException':
                    # You provided an invalid value for a parameter.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidRequestException':
                    # You provided a parameter value that is not valid for the current state of the resource.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                    # We can't find the resource that you asked for.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
            else:
                # Decrypts secret using the associated KMS CMK.
                # Depending on whether the secret is a string or binary, one of these fields will be populated.
                if 'SecretString' in get_secret_value_response:
                    secret = get_secret_value_response['SecretString']
                    j = json.loads(secret)
                    password = j['password']
                else:
                    # SecretBinary values are base64-encoded; decode to bytes, then to a string
                    decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                    password = decoded_binary_secret.decode('utf-8')
            
            try:
                if(conn is None):
                    conn = pymysql.connect(
                        rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
                elif (not conn.open):
                    # print(conn.open)
                    conn = pymysql.connect(
                        rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
        
            except Exception as e:
                print (e)
                print("ERROR: Unexpected error: Could not connect to MySql instance.")
                raise e
        
        
        def lambda_handler(event, context):
            if invokeConnCountManager(True) == "false":
                print ("Not enough Connections available.")
                return False
        
            item_count = 0
            try:
                openConnection()
                # Introducing artificial random delay to mimic actual DB query time. Remove this code for actual use.
                time.sleep(random.randint(1, 3))
                with conn.cursor() as cur:
                    cur.execute("select * from Employees")
                    for row in cur:
                        item_count += 1
                        print(row)
                        # print(row)
            except Exception as e:
                # Error while opening connection or processing
                print(e)
            finally:
                print("Closing Connection")
                if(conn is not None and conn.open):
                    conn.close()
                invokeConnCountManager(False)
        
            content =  "Selected %d items from RDS MySQL table" % (item_count)
            response = {
                "statusCode": 200,
                "body": content,
                "headers": {
                    'Content-Type': 'text/html',
                }
            }
            return response        
        

In the AWS Secrets Manager console, you can also look at the new secret that was created by the CloudFormation execution by following these steps:

  1. Go to the AWS Secrets Manager service page with appropriate IAM permissions
  2. From the list of secrets, click on the latest secret with the name MyRDSInstanceRotationSecret-…
  3. You will see the secret details and rotation information on the screen, as shown in the following screenshot:
     
    Figure 8: Secret details and rotation information

Conclusion

In this post, I showed you how to manage database secrets using AWS Secrets Manager and how to leverage Secrets Manager’s API to retrieve the secrets into a Lambda execution environment to improve database security and protect sensitive data. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit Secrets Manager documentation.

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Secrets Manager Forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Ramesh Adabala

Ramesh is a Solutions Architect on the Southeast Enterprise Solution Architecture team at AWS.

AWS Security Profiles: Matthew Campagna, Sr. Principal Security Engineer, Cryptography

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-matthew-campagna-sr-principal-security-engineer-cryptography/


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for almost 6 years. I joined as a Principal Security Engineer, but my focus has always been cryptography. I’m a cryptographer. At the start of my Amazon career, I worked on designing our AWS Key Management Service (KMS). Since then, I’ve gotten involved in other projects—working alongside a group of volunteers in the AWS Cryptography Bar Raisers group.

Today, the Crypto Bar Raisers are a dedicated portion of my team that work with any AWS team who’s designed a novel application of cryptography. The underlying cryptographic mechanisms aren’t novel, but the engineer has figured out how to apply them in a non-standard way, often to solve a specific problem for a customer. We provide these AWS employees with a deep analysis of their applications to ensure that the applications meet our high cryptographic security bar.

How do you explain your job to non-tech friends?

I usually tell people that I’m a mathematician. Sometimes I’ll explain that I’m a cryptographer. If anyone wants detail beyond that, I say I design security protocols or application uses of cryptography.

What’s the most challenging part of your job?

I’m convinced the most challenging part of any job is managing email.

Apart from that, within AWS there’s lots of demand for making sure we’re doing security right. The people who want us to review their projects come to us via many channels. They might already be aware of the Crypto Bar Raisers, and they want our advice. Or, one of our internal AWS teams—often, one of the teams who perform security reviews of our services—will alert the project owner that they’ve deviated from the normal crypto engineering path, and the team will wind up working with us. Our requests tend to come from smart, enthusiastic engineers who are trying to deliver customer value as fast as possible. Our ability to attract smart, enthusiastic engineers has served us quite well as a company. Our engineering strength lies in our ability to rapidly design, develop, and deploy features for our customers.

The challenge of this approach is that it’s not the fastest way to achieve a secure system. That is, you might end up designing things before you can demonstrate that they’re secure. Cryptographers design in the opposite way: We consider “ability to demonstrate security” in advance, as a design consideration. This approach can seem unusual to a team that has already designed something—they’re eager to build the thing and get it out the door. There’s a healthy tension between the need to deliver the right level of security and the need to deliver solutions as quickly as possible. It can make our day-to-day work challenging, but the end result tends to be better for customers.

Amazon’s s2n implementation of the Transport Layer Security protocol was a pretty big deal when it was announced in 2015. Can you summarize why it was a big deal, and how you were involved?

It was a big deal, and it was a big decision for AWS to take ownership of the TLS libraries that we use. The decision was predicated on the belief that we could do a better job than other open source TLS packages by providing a smaller, simpler—and inherently more secure—version of TLS that would raise the security bar for us and for our customers.

To do this, the Automated Reasoning Group demonstrated the formal correctness of the code to meet the TLS specification. For the most part, my involvement in the initial release was limited to scenarios where the Amazon contributors did their own cryptographic implementations within TLS (that is, within the existing s2n library), which was essentially like any other Crypto Bar Raiser review for me.

Currently, my team and I are working on additional developments to s2n—we’re deploying something called “quantum-safe cryptography” into it.

You’re leading a session at re:Inforce that provides “an introduction to post-quantum cryptography.” How do you explain post-quantum cryptography to a beginner?

Post-quantum cryptography, or quantum-safe cryptography, refers to cryptographic techniques that remain secure even against the power of a large-scale quantum computer.

A quantum computer would be fundamentally different than the computers we use today. Today, we build cryptographic systems based on certain mathematical assumptions—that certain cryptographic ciphers cannot be cracked without an immense, almost impossible amount of computing power. In particular, a basic assumption that cryptographers build upon today is that the discrete log problem, or integer factorization, is hard. We take it for granted that this type of problem is fundamentally difficult to solve. It’s not a task that can be completed quickly or easily.

Well, it turns out that if you had the computing power of a large-scale quantum computer, those assumptions would be incorrect. If you could figure out how to build a quantum computer, it could unravel the security aspects of the TLS sessions we create today, which are built upon those assumptions.

The reason that we take this “if” so seriously is that, as a company, we have data that we know we want to keep secure. The probability of such a quantum computer coming into existence continues to rise. Eventually, the probability that a quantum computer exists during the lifetime of the sensitivity of the data we are protecting will rise above the risk threshold that we’re willing to accept.

It can take 10 to 15 years for the cryptographic community to study new algorithms well enough to have faith in the core assumptions about how they work. Additionally, it takes time to establish new standards and build high quality and certified implementations of these algorithms, so we’re investing now.

I research post-quantum cryptographic techniques, which means that I’m basically looking for quantum-safe techniques that can be designed to run on the classical computers that we use now. Identifying these techniques lets us implement quantum-safe security well in advance of a quantum computer. We’ll remain secure even if someone figures out how to create one.

We aren’t doing this alone. We’re working within the larger cryptographic community and participating in the NIST Post-Quantum Cryptography Standardization process.

What do you hope that people will do differently as a result of attending your re:Inforce session?

First, I hope people download and use s2n in any form. s2n is a nice, simple Transport Layer Security (TLS) implementation that reduces overall risk for people who are currently using TLS.

In addition, I’d encourage engineers to try the post-quantum version of s2n and see how their applications work with it. Post-quantum cryptographic schemes are different. They have a slightly different “shape,” or usage. They either take up more bandwidth, which will change your application’s latency and bandwidth use, or they require more computational power, which will affect battery life and latency.

It’s good to understand how this increase in bandwidth, latency, and power consumption will impact your application and your user experience. This lets you make proactive choices, like reducing the frequency of full TLS handshakes that your application has to complete, or whatever the equivalent would be for the security protocol that you’re currently using.

What implications do post-quantum s2n developments have for the field of cloud security as a whole?

My team is working in the public domain as much as possible. We want to raise the cryptography bar not just for AWS, but for everyone. In addition to the post-quantum extension to s2n that we’re writing, we’re writing specifications. This means that any interested party can inspect and analyze precisely how we’re doing things. If they want to understand nuances of TLS 1.2 or 1.3, they can look at those specifications, and see how these post-quantum extensions apply to those standards.

We hope that documenting our work in the public space, where others can build interoperable systems, will raise the bar for all cloud providers, so that everyone is building upon a more secure foundation.

What resources would you recommend to someone interested in learning more about s2n or post-quantum cryptography?

For s2n, we do a lot of our communication through Security Blog posts. There’s also the AWS GitHub repository, which houses our source code. It’s available to anyone who wants to look at it, use it, or become a contributor. Any issues that arise are captured in issue pages there.

For quantum-safe crypto, a fairly influential paper was released in 2015. It’s the European Telecommunications Standards Institute’s Quantum-Safe Whitepaper (PDF file). It provides a gentle introduction to quantum computing and the impact it has on information systems that we’re using today to secure our information. It sets forth all of the reasons we need to invest now. It helped spur a shift in thinking about post-quantum encryption, from “research project” to “business need.”

There are certainly resources that allow you to go a lot deeper. There’s a highly technical conference called PQ Crypto that’s geared toward cryptographers and focuses on post-quantum crypto. For resources ranging from executive to developer level, there’s a quantum-safe cryptography workshop organized every year by the Institute for Quantum Computing at the University of Waterloo (IQC) and the European Telecommunications Standards Institute (ETSI). AWS is partnering with ETSI/IQC to host the 2019 workshop in Seattle.

What’s one fact about cryptography that you think everyone—even laypeople—should be aware of?

People sometimes speak about cryptography like it’s a fact or a mathematical science. And it’s not, precisely. Cryptography doesn’t guarantee outcomes. It deals with probabilities based upon core assumptions. Cryptographic engineering requires you to understand what those assumptions are and closely monitor any challenges to them.

In the business world, if you want to keep something secret or confidential, you need to be able to express the probability that the cryptographic method fails to provide the desired security property. Understanding this probability is how businesses evaluate risk when they’re building out a new capability. Cryptography can enable new capabilities that might otherwise represent too high a risk. For instance, public-key cryptography and certificate authorities enabled the development of the Secure Socket Layer (SSL) protocol, and this unlocked e-Commerce, making it possible for companies to authenticate to end users, and for end users to engage in a confidential session to conduct business transactions with very little risk. So at the end of the day, I think of cryptography as essentially a tool to reduce the risk of creating new capabilities, especially for business.

Anything else?

Don’t think of cryptography as a guarantee. Think about it as a probability that’s tied to how often you use the cryptographic method.

You have confidentiality if you use the system based on an assumption that you can understand, like “this cryptographic primitive (or block cipher) is a pseudo-random permutation.” Then, if you encrypt 2^32 messages, the probability that all your data stays secure (confidential or authentic) is, let’s say, 2^-72. Those numbers are where people’s eyes may start to gloss over when they hear them, but most engineers can process that information if it’s written down. And people should be expecting that from their solutions.

Once you express it like that, I think it’s clear why we want to move to quantum-safe crypto. The probabilities we tolerate for cryptographic security are very small, typically smaller than 2^-32, on the order of one in four billion. We’re not willing to take much risk, and we don’t typically have to from our cryptographic constructions.

That’s especially true for a company like Amazon. We process billions of objects a day. Even if there’s a one in 2^32 chance that some information is going to spill over, we can’t tolerate such a probability.

Most of cryptography wasn’t built with the cloud in mind. We’re seeing that type of cryptography develop now—for example, cryptographic computing models where you encrypt the data before you store it in the cloud, and you maintain the ability to do some computation on its encrypted form, and the plaintext never exists within the cloud provider’s systems. We’re also seeing core crypto primitives, like the Advanced Encryption Standard, which wasn’t designed for the cloud, begin to show some age. The massive use cases and sheer volume of things that we’re encrypting require us to develop new techniques, like the derived-key mode of AES-GCM that we use in AWS KMS.

What does cloud security mean to you, personally?

I’ll give you a roundabout answer. Before I joined Amazon, I’d been working on quantum-safe cryptography, and I’d been thinking about how to securely distribute an alternative cryptographic solution to the community. I was focused on whether this could be done by tying distribution into a user’s identity provider.

Now, we all have a trust relationship with some entity. For example, you have a trust relationship between yourself and your mobile phone company that creates a private, encrypted tunnel between the phone and your local carrier. You have a similar relationship with your cable or internet provider—a private connection between the modem and the internet provider.

When I looked around and asked myself who’d make a good identity provider, I found a lot of entities with conflicting interests. I saw few companies positioned to really deliver on the promise of next-generation cryptographic solutions, but Amazon was one of them, and that’s why I came to Amazon.

I don’t think I will provide the ultimate identity provider to the world. Instead, I’ve stayed to focus on providing Amazon customers the security they need, and I’m thrilled to be here because of the sheer volume of great cryptographic engineering problems that I get to see on a regular basis. More and more people have their data in a cloud. I have data in the cloud. I’m very motivated to continue my work in an environment where the security and privacy of customer data is taken so seriously.

You live in the Seattle area: When friends from out of town visit, what hidden gem do you take them to?

When friends visit, I bring them to the Amazon Spheres, which are really neat, and the MoPOP museum. For younger people, children, I take them on the Seattle Underground Tour. It has a little bit of a Harry Potter-like feel. Otherwise, the great outdoors! We spend a lot of time outside, hiking or biking.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Matthew Campagna

Matthew is a Sr. Principal Engineer for Amazon Web Services’s Cryptography Group. He manages the design and review of cryptographic solutions across AWS. He is an affiliate of the Institute for Quantum Computing at the University of Waterloo, a member of the ETSI Security Algorithms Group Experts (SAGE), and ETSI TC CYBER’s Quantum Safe Cryptography group. Previously, Matthew led the Certicom Research group at BlackBerry, managing cryptographic research, standards, and IP, and participated in various standards organizations, including ANSI, ZigBee, SECG, ETSI’s SAGE, and the 3GPP-SA3 working group. He holds a Ph.D. in mathematics (group theory) from Wesleyan University and a bachelor’s degree in mathematics from Fordham University.

How to use AWS Secrets Manager client-side caching in .NET

Post Syndicated from Sepehr Samiei original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-client-side-caching-in-dotnet/

AWS Secrets Manager now has a client-side caching library for .NET that makes it easier to access secrets from .NET applications. This is in addition to client-side caching libraries for Java, JDBC, Python, and Go. These libraries help you improve availability, reduce latency, and reduce the cost of retrieving your secrets. The Secrets Manager cache library does this by serving secrets out of a local cache and eliminating frequent Secrets Manager API calls.

AWS Secrets Manager enables you to automatically rotate, manage, and retrieve secrets throughout their lifecycle. Users and applications can access secrets through a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. It offers secret rotation with built-in integration for AWS services such as Amazon Relational Database Service (Amazon RDS) and Amazon Redshift, and it’s also extensible to other types of secrets. Secrets Manager enables you to control access to secrets using fine-grained permissions, and all actions on secrets, including retrievals, are traceable and auditable through AWS CloudTrail.

AWS Secrets Manager client-side caching for .NET extends the benefits of AWS Secrets Manager to a wider set of use cases in .NET applications. These benefits are now available without your having to spend precious time and effort on developing your own caching solution.

In this post, I’ll discuss the following topics:

  • The benefits offered by Secrets Manager client-side caching library for .NET
  • How Secrets Manager client-side caching library for .NET works
  • How to use the Secrets Manager client-side caching library in .NET applications
  • How to extend the Secrets Manager client-side caching library with your own custom logic

The benefits offered by Secrets Manager client-side caching library for .NET

Client-side caching is beneficial in the following ways:

  • Availability: Network links sometimes suffer slowdowns or intermittent breaks. Client-side caching can significantly improve availability by eliminating a large number of API calls.
  • Latency: Retrieving secrets through API calls includes the network latency. Retrieving secrets from the local cache eliminates that latency and, therefore, improves performance.
  • Cost: Each API call to a Secrets Manager endpoint incurs a small charge. Using a local cache saves the costs associated with those API calls.

Using a client-side cache is a best practice; however, in the same way that you don’t want to reinvent the wheel every time you need one, crafting your own client-side caching solution is suboptimal. The Secrets Manager client-side caching library relieves you from writing your own client-side caching solution while still giving you its benefits. Furthermore, it includes best practices such as:

  • Automatically refreshing cached secrets: the library periodically updates secrets to ensure your application gets the most recent version of a secret. You can control and change refresh intervals using configuration properties.
  • Integration with your applications: To use this library, just add the dependency to your .NET project and provide the identifier to the secret you want to access in your code.

How Secrets Manager client-side caching library for .NET works

The library is implemented in .NET Standard. This means you can reuse the same library in projects of all flavors of .NET, including .NET Framework, .NET Core, and Xamarin.

Note: Because the AWS Secrets Manager client-side caching library depends on Microsoft.Extensions.Caching.Memory, make sure you add it to your project dependencies.

As an extension to Secrets Manager .NET SDK, the cache library provides you an alternative to direct invocation of Secrets Manager API methods. You invoke cache library methods, and if the value doesn’t exist in the cache, the cache library invokes Secrets Manager methods on your behalf.

The default refresh interval for the "current" version of a secret (the latest value stored in Secrets Manager for that secret) is 1 hour, because the latest version may change from time to time. The library allows you to configure this interval to be higher or lower, per your specific application requirements.

If you request a specific version of a secret by specifying both the secret ID and secret version parameters, the library sets the refresh interval to 48 hours by default. Because each version of a secret is immutable, there is no need to refresh it frequently.

You can also enable "last known good value" caching to provide some protection in cases of transient network issues or service outages. If this is enabled, the cache will keep track of the last known good secret value and, if an error occurs while refreshing a secret value from the service, the cache will return the last known good value. This feature is disabled by default, and can be enabled by setting the EnableLastKnownGoodValueCaching property of the SecretsManagerCacheOptions class to true. You can pass your instance of SecretsManagerCacheOptions to the SecretsManagerCache constructor.
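
As a rough sketch, and assuming the class and property names behave as described above (the exact constructor overload may differ in the version of the library you install), enabling this option might look like the following:


var secretsManager = new AmazonSecretsManagerClient();

// Keep returning the last known good value if a background refresh fails.
// The two-argument constructor shown here is an assumption; adjust it to
// match the overloads exposed by your installed library version.
var options = new SecretsManagerCacheOptions
{
    EnableLastKnownGoodValueCaching = true
};

var cache = new SecretsManagerCache(secretsManager, options);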

The cache library provides a thread-safe implementation for both cache checks and entry population. Therefore, simultaneous requests for a secret that is not available in the cache will result in a single API request to Secrets Manager.

How to use Secrets Manager client-side caching library in .NET applications

You can add Secrets Manager client-side caching library to your projects either directly or through dependency injection. The dependency package is also available through NuGet. In this example, I use NuGet to add the library to my project. Open the NuGet Package Manager console and browse for AWSSDK.SecretsManager.Caching. Select the library and install it.
 

Figure 1: Select the AWSSDK.SecretsManager.Caching library


Before using the cache, you need to have at least one secret stored in your account using AWS Secrets Manager. To create a test secret:

  1. Go to the AWS Console, and then select AWS Secrets Manager.
  2. Select Store a new secret.
  3. For secret type, select Other type of secret, and then add three key/value pairs as shown here:
    
        {
          "Domain": "<yourDomainName>",
          "UserName": "<yourUserName>",
          "Password": "<yourPassword>"
        }  
        

  4. Next, create a cache object, and then invoke its methods with appropriate parameters. Below is a code snippet that uses the AWS Secrets Manager client-side caching library to access the secret. Note that this snippet assumes you’ve added the Newtonsoft.Json library to your project:
    
        public class MyClass : IDisposable
        {
            private readonly IAmazonSecretsManager secretsManager;
            private readonly ISecretsManagerCache cache;

            public MyClass()
            {
                this.secretsManager = new AmazonSecretsManagerClient();
                this.cache = new SecretsManagerCache(this.secretsManager);
            }

            public void Dispose()
            {
                this.secretsManager.Dispose();
                this.cache.Dispose();
            }

            public async Task<NetworkCredential> GetNetworkCredential(string secretId)
            {
                var sec = await this.cache.GetSecretAsync(secretId);
                var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
                return new NetworkCredential(
                    domain: jo["Domain"].ToObject<string>(),
                    userName: jo["UserName"].ToObject<string>(),
                    password: jo["Password"].ToObject<string>());
            }
        }
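
For example, a console application might consume this class as shown in the sketch below. The secret name MyTestSecret is a placeholder for the secret you stored earlier, and the sketch assumes the System, System.Net, and System.Threading.Tasks namespaces are imported:


public class Program
{
    public static async Task Main()
    {
        using (var secrets = new MyClass())
        {
            // "MyTestSecret" is a placeholder; use the name or ARN of your own secret.
            NetworkCredential credential = await secrets.GetNetworkCredential("MyTestSecret");
            Console.WriteLine($"Retrieved credentials for user {credential.UserName} in domain {credential.Domain}");
        }
    }
}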
        

For ASP.NET projects, you can use the library with dependency injection. To do this, you first have to register Secrets Manager caching with the dependency injection service collection in the Startup class of your ASP.NET project:


public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSecretsManagerCaching();
    }
}

Then, you’ll be able to consume the cache using constructor injection in your classes.


public class MyClass : IDisposable
{
    private readonly ISecretsManagerCache cache;

    public MyClass(ISecretsManagerCache cache)
    {
        this.cache = cache;
    }

    public void Dispose()
    {
        // The injected cache is owned by the dependency injection container; nothing to dispose here.
    }

    public async Task<NetworkCredential> GetNetworkCredential(string secretId)
    {
        var sec = await this.cache.GetSecretAsync(secretId);
        var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
        return new NetworkCredential(
            domain: jo["Domain"].ToObject<string>(),
            userName: jo["UserName"].ToObject<string>(),
            password: jo["Password"].ToObject<string>());
    }
}

How to add in-memory encryption and other custom extensions

The Secrets Manager caching library is designed to be extendable with your own custom logic. One possibility is to extend its implementation to include in-memory encryption of cached secrets to add another layer of protection on your retrieved secrets. For this purpose, you have to manually implement two of the interfaces included in the library. The library includes SecretCacheEntry class, implementing the interface ISecretCacheEntry. This is the object that stores secrets in memory. You could create another class implementing the same ISecretCacheEntry interface to add in-memory encryption/decryption.


public class EncryptedSecretCacheEntry : ISecretCacheEntry
    {
        public EncryptedSecretCacheEntry(GetSecretValueResponse response, TimeSpan expiry)
        {
            this.VersionId = response.VersionId;
            this.LastRetreived = DateTime.UtcNow;
            this.Name = response.Name;
            this.Expires = this.LastRetreived.Add(expiry);

            if (response.SecretBinary != null && response.SecretBinary.Length > 0)
            {
                using (var ms = response.SecretBinary)
                {
                    this.SecretBinary = ms.ToArray();
                }
            }
            else
            {
                this.SecretString = response.SecretString; 
            }
        }
    
        private byte[] _EncryptedSecretString;
        public string SecretString
        {
            get { return MyCustomCipherService.DecryptString(_EncryptedSecretString); }
            set { _EncryptedSecretString = MyCustomCipherService.EncryptString(value); }
        }
    
        private byte[] _EncryptedSecretBinary;
        public byte[] SecretBinary
        {
            get { return MyCustomCipherService.Decrypt(_EncryptedSecretBinary); }
            set { _EncryptedSecretBinary = MyCustomCipherService.Encrypt(value); }
        }
    
        public string VersionId { get; private set; }

        public string Name { get; private set; }

        public string LocalId => $"{this.Name}:{this.VersionId}";

        public DateTime LastRetreived { get; private set; }

        public DateTime Expires { get; private set; }
    } 
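
The EncryptedSecretCacheEntry class above delegates the actual encryption to a MyCustomCipherService helper, which is not part of the library, so you need to supply it yourself. A minimal sketch of such a helper, holding an AES key only in process memory, might look like this (the class and method names come from the example above; the AES-based implementation is purely illustrative):


public static class MyCustomCipherService
{
    // Illustrative only: a process-local AES key and IV generated at startup,
    // so cached ciphertexts are meaningless outside this process.
    private static readonly System.Security.Cryptography.Aes ProcessKey = CreateKey();

    private static System.Security.Cryptography.Aes CreateKey()
    {
        var aes = System.Security.Cryptography.Aes.Create();
        aes.GenerateKey();
        aes.GenerateIV();
        return aes;
    }

    public static byte[] Encrypt(byte[] plaintext)
    {
        if (plaintext == null) return null;
        using (var encryptor = ProcessKey.CreateEncryptor())
        {
            return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        }
    }

    public static byte[] Decrypt(byte[] ciphertext)
    {
        if (ciphertext == null) return null;
        using (var decryptor = ProcessKey.CreateDecryptor())
        {
            return decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
        }
    }

    public static byte[] EncryptString(string plaintext)
    {
        return plaintext == null ? null : Encrypt(System.Text.Encoding.UTF8.GetBytes(plaintext));
    }

    public static string DecryptString(byte[] ciphertext)
    {
        return ciphertext == null ? null : System.Text.Encoding.UTF8.GetString(Decrypt(ciphertext));
    }
}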

The second step is to implement the ISecretCacheEntryFactory class:


public class EncryptedSecretCacheEntryFactory : ISecretCacheEntryFactory
    {
        public ISecretCacheEntry CreateEntry(GetSecretValueResponse response, TimeSpan expiry)
        {
            return new EncryptedSecretCacheEntry(response, expiry);
        }
    }

With these two classes in place, I can now modify the constructor of my SecretsUserClass to add my custom encryption logic to the Secrets Manager cache library:


public SecretsUserClass()
{
    this.secretsManager = new AmazonSecretsManagerClient();
    this.cache = new SecretsManagerCache(
        this.secretsManager,
        new EncryptedSecretCacheEntryFactory(),
        new SecretsManagerCacheOptions(),
        null);
}

You could even go further and fully customize the cache by implementing ISecretsManagerCache or implementing a child class that inherits functionality of SecretsManagerCache and adds new methods to it.

Conclusion

It’s critical for enterprises to protect secrets from unauthorized access and to adhere to various industry or legislative compliance requirements. Mitigating the risk of compromise often involves complex techniques, significant effort, and costs, such as applying encryption, managing vaults and HSMs, rotating secrets, auditing access, and so on. Because the level of effort is high, many developers tend to use the much simpler, but substantially riskier, alternative of hard-coding secrets in application code, or simply storing secrets in plain-text format. These practices are problematic from the security and compliance point of view, but they need to be understood as symptoms of the more fundamental problem of complexity in the systems enterprises have built. To address the problem of weak security and compliance practices, you have to address the problem of complexity. Complex systems can be simplified and made more secure when they are reusable, accessible, and automated, needing no human interaction.

In this post, I’ve shown how you can improve availability, reduce latency, and reduce the cost of using your secrets by using the Secrets Manager client-side caching library for .NET. I also showed how to extend it by implementing your own custom logic for more advanced use-cases, such as in-memory encryption of secrets.

To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secrets Manager or refer to the Secrets Manager documentation. See the AWS Region Table for the list of AWS Regions where Secrets Manager is available.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sepehr Samiei

Sepehr is currently a Senior Solutions Architect at AWS. He started his professional career as a .NET developer, which continued for more than 10 years. Early on, he quickly became a fan of cloud computing and loves to help customers utilize the power of Microsoft tech on AWS. His wife and daughter are the most precious parts of his life, and he and his wife expect to have a son soon!

New whitepaper available: Architecting for PCI DSS Segmentation and Scoping on AWS

Post Syndicated from Avik Mukherjee original https://aws.amazon.com/blogs/security/new-whitepaper-available-architecting-for-pci-dss-segmentation-and-scoping-on-aws/

AWS has published a whitepaper, Architecting for PCI DSS Scoping and Segmentation on AWS, to provide guidance on how to properly define the scope of your Payment Card Industry (PCI) Data Security Standard (DSS) workloads running on the AWS Cloud. The whitepaper looks at how to define segmentation boundaries between your in-scope and out-of-scope resources using cloud native AWS services.

The whitepaper is intended for engineers and solution builders, but it also serves as a guide for Qualified Security Assessors (QSAs) and internal security assessors (ISAs) to better understand the different segmentation controls available within AWS products and services, along with associated scoping considerations.

Compared to on-premises environments, software defined networking on AWS transforms the scoping process for applications by providing additional segmentation controls beyond network segmentation. Thoughtful design of your applications and selection of security-impacting services for implementing your required controls can reduce the number of systems and services in your cardholder data environment (CDE).

The whitepaper is based on the PCI Council’s Information Supplement: Guidance for PCI DSS Scoping and Network Segmentation.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Avik Mukherjee

Avik is a Security Architect with more than a decade of experience in IT governance, security, risk, and compliance. He’s a Qualified Security Assessor (QSA) for Payment Card Industry (PCI) Data Security Standard (DSS) and Point-to-Point-Encryption (P2PE) and has deep knowledge of security advisory and assessment work in various industries, including retail, financial, and technology. He’s part of the AWS professional services teams that work with clients to assist them in their journey to transform the security posture of their resources running on AWS. He loves spending time with his family and working on his culinary skills.

Simplify DNS management in a multi-account environment with Route 53 Resolver

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/

In a previous post, I showed you a solution to implement central DNS in a multi-account environment that simplified DNS management by reducing the number of servers and forwarders you needed when implementing cross-account and AWS-to-on-premises domain resolution. With the release of the Amazon Route 53 Resolver service, you now have access to a native conditional forwarder that will simplify hybrid DNS resolution even more.

In this post, I’ll show you a modernized solution to centralize DNS management in a multi-account environment by using Route 53 Resolver. This solution allows you to resolve domains across multiple accounts and between workloads running on AWS and on-premises without the need to run a domain controller in AWS.

Solution overview

My solution will show you how to solve three primary use-cases for domain resolution:

  • Resolving on-premises domains from workloads running in your VPCs.
  • Resolving private domains in your AWS environment from workloads running on-premises.
  • Resolving private domains between workloads running in different AWS accounts.

The following diagram shows the high-level architecture.
 

Figure 1: Solution architecture diagram


In this architecture:

  1. This is the Amazon-provided default DNS server for the central DNS VPC, which we’ll refer to as the DNS-VPC. This is the second IP address in the VPC CIDR range (as illustrated, this is 172.27.0.2). This default DNS server will be the primary domain resolver for all workloads running in participating AWS accounts.
  2. This shows the Route 53 Resolver endpoints. The inbound endpoint will receive queries forwarded from on-premises DNS servers and from workloads running in participating AWS accounts. The outbound endpoint will be used to forward domain queries from AWS to on-premises DNS.
  3. This shows conditional forwarding rules. For this architecture, we need two rules: one to forward domain queries for the onprem.private zone to the on-premises DNS server through the outbound endpoint, and a second rule to forward domain queries for awscloud.private to the resolver inbound endpoint in DNS-VPC.
  4. This indicates that these two forwarding rules are shared with all other AWS accounts through AWS Resource Access Manager and are associated with all VPCs in these accounts.
  5. This shows the private hosted zone created in each account with a unique subdomain of awscloud.private.
  6. This shows the on-premises DNS server with conditional forwarders configured to forward queries for the awscloud.private zone to the IP addresses of the Resolver inbound endpoint.

Note: This solution doesn’t require VPC-peering or connectivity between the source/destination VPCs and the DNS-VPC.

How it works

Now, I’m going to show how the domain resolution flow of this architecture works according to the three use-cases I’m focusing on.

First use case

 


Figure 2: Use case for resolving on-premises domains from workloads running in AWS

First, I’ll look at resolving on-premises domains from workloads running in AWS. If the server with private domain host1.acc1.awscloud.private attempts to resolve the address host1.onprem.private, here’s what happens:

  1. The DNS query will route to the default DNS server of the VPC that hosts host1.acc1.awscloud.private.
  2. Because the VPC is associated with the forwarding rules shared from the central DNS account, these rules will be evaluated by the default Amazon-provided DNS in the VPC.
  3. In this example, one of the rules indicates that queries for onprem.private should be forwarded to an on-premises DNS server, so this rule is applied to the query.
  4. The forwarding rule is associated with the Resolver outbound endpoint, so the query will be forwarded through this endpoint to an on-premises DNS server.

In this flow, the DNS query that was initiated in one of the participating accounts has been forwarded to the centralized DNS server which, in turn, forwarded this to the on-premises DNS.

Second use case

Next, here’s how on-premises workloads will be able to resolve private domains in your AWS environment:
 

Figure 3: Use case for how on-premises workloads will be able to resolve private domains in your AWS environment


In this case, the query for host1.acc1.awscloud.private is initiated from an on-premises host. Here’s what happens next:

  1. The domain query is forwarded to on-premises DNS server.
  2. The query is then forwarded to the Resolver inbound endpoint via a conditional forwarder rule on the on-premises DNS server.
  3. The query reaches the default DNS server for DNS-VPC.
  4. Because DNS-VPC is associated with the private hosted zone acc1.awscloud.private, the default DNS server will be able to resolve this domain.

In this case, the DNS query has been initiated on-premises and forwarded to centralized DNS on the AWS side through the inbound endpoint.

Third use case

Finally, you might need to resolve domains across multiple AWS accounts. Here’s how you could achieve this:
 

Figure 4: Use case for how to resolve domains across multiple AWS accounts


Let’s say that host1 in host1.acc1.awscloud.private attempts to resolve the domain host2.acc2.awscloud.private. Here’s what happens:

  1. The domain query is sent to the default DNS server for the VPC hosting source machine (host1).
  2. Because the VPC is associated with the shared forwarding rules, these rules will be evaluated.
  3. A rule indicates that queries for the awscloud.private zone should be forwarded to the IP addresses of the Resolver inbound endpoint in DNS-VPC, which will then use the Amazon-provided default DNS to resolve the query.
  4. Because DNS-VPC is associated with the acc2.awscloud.private hosted zone, the default DNS will use auto-defined rules to resolve this domain.

This use case explains the AWS-to-AWS case where the DNS query has been initiated on one participating account and forwarded to central DNS for resolution of domains in another AWS account. Now, I’ll look at what it takes to build this solution in your environment.

How to deploy the solution

I’ll show you how to configure this solution in four steps:

  1. Set up a centralized DNS account.
  2. Set up each participating account.
  3. Create private hosted zones and Route 53 associations.
  4. Configure on-premises DNS forwarders.

Step 1: Set up a centralized DNS account

In this step, you’ll set up resources in the centralized DNS account. Primarily, this includes the DNS-VPC, Resolver endpoints, and forwarding rules.

  1. Create a VPC to act as DNS-VPC according to your business scenario, either using the web console or from an AWS Quick Start. You can review common scenarios in the Amazon VPC user guide; one very common scenario is a VPC with public and private subnets.
  2. Create resolver endpoints. You need to create an outbound endpoint to forward DNS queries to on-premises DNS and an inbound endpoint to receive DNS queries forwarded from on-premises workloads and other AWS accounts.
  3. Create two forwarding rules. The first rule forwards DNS queries for the zone onprem.private to your on-premises DNS server IP addresses, and the second rule forwards DNS queries for the zone awscloud.private to the IP addresses of the resolver inbound endpoint (see the CLI sketch after the note below).
  4. After creating the rules, associate them with DNS-VPC that was created in step #1. This will allow the Route 53 Resolver to start forwarding domain queries accordingly.
  5. Finally, you need to share the two forwarding rules with all participating accounts. To do that, you’ll use AWS Resource Access Manager and you can share the rules with your entire AWS Organization or with specific accounts.

Note: To be able to forward domain queries to your on-premises DNS server, you need connectivity between your data center and DNS-VPC, which could be established either using site-to-site VPN or AWS Direct Connect.
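
The following AWS CLI sketch covers items 3 and 5 of this list; every ID, IP address, and ARN shown is a placeholder in the same style as the commands in the later steps:


# Forward onprem.private to the on-premises DNS servers through the outbound endpoint.
aws route53resolver create-resolver-rule \
    --creator-request-id <unique-request-id-1> \
    --name forward-onprem-private \
    --rule-type FORWARD \
    --domain-name onprem.private \
    --resolver-endpoint-id <outbound-endpoint-id> \
    --target-ips Ip=<onprem-dns-ip-1>,Port=53 Ip=<onprem-dns-ip-2>,Port=53

# Forward awscloud.private to the inbound endpoint IP addresses in DNS-VPC.
aws route53resolver create-resolver-rule \
    --creator-request-id <unique-request-id-2> \
    --name forward-awscloud-private \
    --rule-type FORWARD \
    --domain-name awscloud.private \
    --resolver-endpoint-id <outbound-endpoint-id> \
    --target-ips Ip=<inbound-endpoint-ip-1>,Port=53 Ip=<inbound-endpoint-ip-2>,Port=53

# Share both rules with participating accounts (or your AWS Organization) through AWS Resource Access Manager.
aws ram create-resource-share \
    --name central-dns-forwarding-rules \
    --resource-arns <onprem-rule-arn> <awscloud-rule-arn> \
    --principals <organization-arn-or-account-id>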

Step 2: Set up participating accounts

For each participating account, you need to configure your VPCs to use the shared forwarding rules, and you need to create a private hosted zone for each account.

  • Accept the shared rules from AWS Resource Access Manager. This step is not required if the rules were shared with your AWS Organization. Then, associate the forwarding rules with the VPCs that host your workloads in each account, as shown in the sketch below. Once associated, the resolver will start forwarding DNS queries according to the rules.
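
As a rough sketch, accepting a directly shared rule and associating it with a workload VPC might look like the following AWS CLI calls (all IDs and ARNs are placeholders):


# Accept the shared rules if they were shared with this account directly (not via the Organization).
aws ram accept-resource-share-invitation \
    --resource-share-invitation-arn <resource-share-invitation-arn>

# Associate a shared forwarding rule with a workload VPC in this account.
aws route53resolver associate-resolver-rule \
    --resolver-rule-id <shared-rule-id> \
    --vpc-id <workload-vpc-id>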

At this point, you should be able to resolve on-premises domains from workloads running in any VPC associated with the shared forwarding rules. To create private domains in AWS, you need to create Private Hosted Zones.

Step 3: Create private hosted zones

In this step, you need to create a private hosted zone in each account with a subdomain of awscloud.private. Use unique names for each private hosted zone to avoid domain conflicts in your environment (for example, acc1.awscloud.private or dev.awscloud.private).

  1. Create a private hosted zone in each participating account with a subdomain of awscloud.private and associate it with VPCs running in that account.
  2. Associate the private hosted zone with DNS-VPC. This allows the centralized DNS-VPC to resolve domains in the private hosted zone and act as a DNS resolver between AWS accounts.

Because the private hosted zone and DNS-VPC are in different accounts, you need to associate the private hosted zone with DNS-VPC. To do that, you need to create authorization from the account that owns the private hosted zone and accept this authorization from the account that owns DNS-VPC. You can do that using AWS CLI:

  1. In each participating account, create the authorization using the private hosted zone ID, the region, and the VPC ID that you want to associate (DNS-VPC).
    
        aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
    

  2. In the centralized DNS account, associate the DNS-VPC with the hosted zone in each participating account.
    
        aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>    
    

Step 4: Configure on-premises DNS forwarders

To be able to resolve subdomains within the awscloud.private domain from workloads running on-premises, you need to configure conditional forwarding rules that forward domain queries to the two IP addresses of the resolver inbound endpoint that were created in the central DNS account. Note that this requires connectivity between your data center and DNS-VPC, which could be established using either site-to-site VPN or AWS Direct Connect.
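
For example, if your on-premises DNS server runs BIND, the conditional forwarder could be a forward zone similar to this sketch (the IP addresses are placeholders for your two inbound endpoint IPs):


zone "awscloud.private" {
    type forward;
    forward only;
    forwarders { 172.27.0.10; 172.27.1.10; };
};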

Additional considerations and limitations

Thanks to the flexibility of Route 53 Resolver and conditional forwarding rules, you can control which queries to send to central DNS and which ones to resolve locally in the same account. This is particularly important when you plan to use some AWS services, such as AWS PrivateLink or Amazon Elastic File System (EFS), because domain names associated with these services need to be resolved locally in the account that owns them. In this section, I’ll describe two use-cases that require additional consideration.

  1. Interface VPC Endpoints (AWS PrivateLink)

    When you create an AWS PrivateLink interface endpoint, AWS generates endpoint-specific DNS hostnames that you can use to communicate with the service. For AWS services and AWS Marketplace partner services, you can optionally enable private DNS for the endpoint. This option associates a private hosted zone with your VPC. The hosted zone contains a record set for the default DNS name for the service (for example, ec2.us-east-1.amazonaws.com) that resolves to the private IP addresses of the endpoint network interfaces in your VPC. This enables you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames.

    If you use private DNS for your endpoint, you have to resolve DNS queries to the endpoint local to the account and use the default DNS provided by AWS. So, in this case, I recommend that you resolve domain queries in amazonaws.com locally and not forward these queries to central DNS.

  2. Mounting EFS with a DNS name

    You can mount an Amazon EFS file system on an Amazon EC2 instance using DNS names. The file system DNS name automatically resolves to the mount target’s IP address in the Availability Zone of the connecting Amazon EC2 instance. To be able to do that, the VPC must use the default DNS provided by Amazon to resolve EFS DNS names.

    If you plan to use EFS in your environment, I recommend that you resolve EFS DNS names locally and avoid sending these queries to central DNS, because clients in that case would not receive answers optimized for their Availability Zone, which might result in higher operation latencies and less durability.

Summary

In this post, I introduced a simplified solution to implement central DNS resolution in a multi-account and hybrid environment. This solution uses Amazon Route 53 Resolver, AWS Resource Access Manager, and native Route 53 capabilities, and it reduces complexity and operational effort by removing the need for custom DNS servers or forwarders in your AWS environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mahmoud Matouk

Mahmoud is part of our world-wide public sector Solutions Architects team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.

AWS and the CLOUD Act

Post Syndicated from Michael Punke original https://aws.amazon.com/blogs/security/aws-and-the-cloud-act/

While news of Brexit dominates headlines in the United Kingdom, another important event took place recently in London. U.S. Deputy Assistant Attorney General Richard W. Downing addressed the myths and realities of the Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”), in a speech at the Academy of European Law Conference. Following the speech, the U.S. Department of Justice (DOJ) published a whitepaper and FAQ clarifying the purpose and scope of the CLOUD Act and addressing many of the misunderstandings of this law. I strongly encourage people to read the speech, the DOJ’s whitepaper, and the FAQ to understand what the CLOUD Act actually does and does not do. Simply put, the CLOUD Act provides minor updates to a decades-old law that is strictly limited to helping law enforcement agencies fight and deter international criminal and terrorist activity. It does not, as some have suggested, give U.S. law enforcement agencies free access to data stored in the cloud.

We see the DOJ’s speech and guidance as a step in the right direction, but more needs to be done by governments around the world to educate cloud computing customers about important issues regarding access to data. This is why I want to take some time today to highlight a few of the key misunderstandings about the CLOUD Act in order to help customers understand that this law should not change how they use cloud services.

Law enforcement access to data over the last 30 years

In 1986, Congress enacted the Stored Communications Act (SCA), which addressed law enforcement access to electronic communications. Although the SCA was considered forward-looking at the time, courts have struggled over the years to apply it to technologies like internet applications and cloud computing that did not exist when the SCA was passed. One area of debate related to whether U.S. law enforcement agencies could obtain data located outside the United States. The CLOUD Act resolved this debate. It made clear that providers subject to U.S. law, such as an entity doing business in the United States (including foreign-based entities with U.S. subsidiaries) can be served with a warrant and court order under the SCA to provide data under their control, regardless of where it is stored.

To be clear, despite suggestions to the contrary, the CLOUD Act does not introduce a new concept. Governments across the globe have long had the ability to obtain evidence of crimes located outside of their jurisdiction. As the DOJ noted in its whitepaper, most countries require disclosure of data wherever it is stored, consistent with the Budapest Convention, which was the first international treaty aimed at improving cooperation and investigations in cyber and computer crimes. Indeed, French courts have long allowed police to obtain data outside of France so long as it is accessible from a computer in France. Most recently, in February 2019, the United Kingdom passed the Crime (Overseas Production Orders) Act, which allows U.K. law enforcement agencies to obtain stored electronic data from a company or person based outside of the United Kingdom.

This practice is consistent with a centuries old principle of international cooperation. Countries use a number of tools, ranging from domestic laws to international treaties, to seek potential evidence located beyond their borders and establish a tradition of cross-border cooperation. This serves as the foundation for what trusted and respected organizations like Europol do, and the CLOUD Act simply reflects what these other law enforcement agencies and other countries have been doing for many years.

Understanding the CLOUD Act

One of the most common misunderstandings about the CLOUD Act is that it is applicable to only U.S. companies. This is not true. The CLOUD Act applies to all electronic communication service or remote computing service providers that are subject to U.S. jurisdiction, including email providers, telecom companies, social media sites, and cloud providers, whether they are established in the United States or in another country. This means any foreign company with an office or subsidiary in the United States is subject to the CLOUD Act. As Mr. Downing said in his speech, U.S. courts have ruled that even non-U.S. websites that have been used by customers based in the United States have been subject to U.S. jurisdiction and therefore could be subject to the CLOUD Act.

Another common misunderstanding about the CLOUD Act is that it somehow provides the U.S. government with unfettered access to data held by cloud providers. This is simply false. The CLOUD Act does not grant law enforcement agencies free access to data stored in the cloud. Law enforcement can compel service providers to provide data only by meeting the rigorous legal standards for a warrant issued by a U.S. court. U.S. law sets a high bar for obtaining a warrant, requiring that an independent judge conclude that law enforcement has reasonable grounds to request the information, the information requested directly relates to a crime, and that the request is made clearly, accurately, and proportionally. This is the opposite of unfettered access.

When AWS receives a request for data located outside the United States, we have tools to challenge it and a long track record of doing so. In fact, our challenges typically begin well before we go to a court. Each request from law enforcement agencies is reviewed by a team of legal professionals. As part of that review, we assess whether the request would violate the laws of the United States or of the foreign country in which the data is located, or would violate the customer’s rights under the relevant laws. We rigorously enforce applicable legal standards to limit – or reject outright – any law enforcement request for data coming from any country, including the United States. We actively push back on law enforcement agencies to address concerns, which frequently results in them withdrawing their request.

In the event we cannot resolve a dispute, we do not hesitate to go to court. Amazon has a history of formally challenging government requests for customer information that we believe are overbroad or otherwise inappropriate. We will continue to resist requests, including those that conflict with local law such as GDPR in the European Union, to do everything we can to protect customer data. We will also continue to notify customers before disclosing content, and we provide advanced encryption and key management services that customers can use to protect their content further. We have industry leading encryption services that give our customers a range of options to encrypt data in-transit and at rest, and to manage encryption/decryption keys – because encrypted content is rendered useless without the applicable decryption keys.

The CLOUD Act did not change cloud providers’ ability to protect their customers

AWS is vigilant about its customers’ privacy and security. We are committed to providing all customers, including governmental agencies who trust us with their most sensitive content, with the most extensive set of security services and features to help ensure complete control of their data. The CLOUD Act did not alter or weaken this commitment. On the contrary, the CLOUD Act recognizes the right of cloud providers to challenge requests that conflict with another country’s laws or national interests and requires that governments respect local rules of law. Additionally, foreign governments concerned about the risk of government data disclosure may be entitled to sovereign immunity. The United States recognizes that under the principle of sovereign immunity foreign governments have effective legal means under U.S. law to prevent disclosure of their data.

Customers around the world can continue to use AWS in compliance with local laws

At AWS, we are constantly helping our customers and partners understand their position in relation to new compliance standards and laws. It is the only way we believe organizations can ensure that they are able to protect their end users. After you have read Mr. Downing’s speech and the documents from DOJ, you should visit our webpage dedicated to the CLOUD Act, which has FAQs, whitepapers, and other resources for customers and APN partners. On that webpage, you can learn the facts about the limited impact of the CLOUD Act and understand its application to AWS.

The reality is that cloud computing is positively impacting lives around the world in all kinds of ways. With AWS technologies, our customers are creating forward-thinking technologies that shape the ways we live and learn, whether through photo sharing and video streaming, increased access to financial services and e-commerce/trade, processing geospatial data for new discoveries, creating or promoting greater opportunities for education and skills development, or helping industries evolve with accessible AI/ML services. Our customers are also leveraging the cloud for good: working to prevent human trafficking, prevent violent crime, improve citizen services in cities, and to make medical breakthroughs. What would be incredibly disappointing would be for all of this to be slowed due to fundamental misunderstandings about the CLOUD Act. The information recently provided by DOJ regarding the CLOUD Act is a helpful step toward greater understanding of the facts, but we hope this post and related resources will bring insight and clarity to this debate.

Michael Punke is the Vice President of Global Public Policy at AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Join us at AWS re:Inforce for the Builders Fair!

Post Syndicated from Ram Ramani original https://aws.amazon.com/blogs/security/join-us-at-aws-reinforce-for-the-builders-fair/

AWS is launching its first conference dedicated to cloud security, AWS re:Inforce, which will take place June 25-26, 2019 at the Boston Convention and Exhibition Center.

At AWS, we encourage everyone to be a builder, to learn and be curious, and to use AWS products and services to explore the Art of the Possible. At re:Inforce, you’ll have an opportunity to see our “culture of building” at the AWS re:Inforce Builders Fair. The Builders Fair is a set of “science fair” projects from AWS employees that highlight different aspects of security built upon the AWS cloud. You’ll see how AWS services can be used to solve real-world security problems and you’ll get ideas that you can use in your own organization.

The Builders Fair features eight teams of AWS employees from Brazil, Chile, China, and the United States who were chosen out of nearly 100 submissions to our call for presenters. Every project was reviewed by a team of five judges and evaluated against a number of criteria, including the subject and the services used. We looked for submissions that were relevant to both the current and the future state of cloud security. The selected projects cover a number of areas, including data anonymization, chaos testing, detecting social engineering, voice services, and application protection. We’re really excited to share them with you.

Check out the re:Inforce website to get your conference tickets (which include access to the Builders Fair), or view Builders Fair sessions. Please stop by the Builders Fair, meet our team members, and consider the Art of the Possible when it comes to security backed by the power of the AWS cloud.

Want more AWS Security news? Follow us on Twitter.


Ram Ramani

Ram is a Solutions Architect on the Security and Compliance team at AWS.


Jeff Levine

Jeff is a Senior Solutions Architect on the Security and Compliance team at AWS.

How to decrypt ciphertexts in multiple regions with the AWS Encryption SDK in C

Post Syndicated from Liz Roth original https://aws.amazon.com/blogs/security/how-to-decrypt-ciphertexts-multiple-regions-aws-encryption-sdk-in-c/

You’ve told us that you want to encrypt data once with AWS Key Management Service (AWS KMS) and decrypt that data with customer master keys (CMKs) that you specify, often with CMKs in different AWS Regions. Doing this saves you compute resources and helps you to enable secure and efficient high-availability schemes.

The AWS Crypto Tools team has introduced the AWS Encryption SDK for C so you can achieve these goals. The new tool also adds more options for language and platform support and is fully interoperable with the implementations in Java and Python.

The AWS Encryption SDK is a client-side encryption library that helps make it easier for you to implement encryption best practices in your applications. You can use it with master keys from multiple sources, including AWS KMS CMKs. The AWS Encryption SDK doesn’t require AWS KMS or any other AWS service.

You can use AWS KMS APIs directly to encrypt data keys using multiple CMKs, but the AWS Encryption SDK provides tools to make working with multiple CMKs even easier, with everything you need stored in the Encryption SDK’s portable encrypted message format. The AWS Encryption SDK for C uses the concept of keyrings, which makes it easy to work with ciphertexts encrypted using multiple CMKs.

In this post, I will walk you through an example using the new AWS Encryption SDK for C. I’ll focus on some highlights from example code in the context of what an example application deployment might look like. You can find the complete example code in this GitHub repository. As always, we welcome your comments and your contributions.

Example scenario

To add some context around the example code, assume that you have a data processing application deployed both in US West (Oregon) us-west-2 and EU Central (Frankfurt) eu-central-1. For added durability, this example application creates and encrypts data in us-west-2 before it’s copied to the eu-central-1 Region. You have assurance that you could decrypt that data in us-west-2 if needed, but you want to mitigate the case where the decryption service in us-west-2 is unavailable. So how do you ensure you can decrypt your data in the eu-central-1 region when you need to?

In this example, your data processing application uses the AWS Encryption SDK and AWS KMS to generate a 256-bit data key to encrypt content locally in us-west-2. The AWS Encryption SDK for C deletes the plaintext data key after use, but an encrypted copy of that data key is included in the encrypted message that the AWS Encryption SDK returns. This prevents you from losing the encrypted copy of the data key, which would make your encrypted content unrecoverable. The data key is encrypted under the AWS KMS CMKs in each of the two regions in which you might want to decrypt the data in the future.

A best practice is to plan to decrypt data using in-region data keys and CMKs. This reduces latency and simplifies the permissions and auditing properties of the decryption operation. The latency impact from the cross-region API calls occurs only during the encryption operation.

In this scenario, the AWS KMS CMK key policy permissions look like this:

  • To encrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:GenerateDataKey permission on the us-west-2 CMK and kms:Encrypt permission on the eu-central-1 CMK. You can specify these permissions in a key policy or IAM policy. This will let the application create a data key in us-west-2 and encrypt the data key under CMKs in both AWS Regions.
  • To decrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:Decrypt permissions on the CMK in us-west-2 or the CMK in eu-central-1.
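
As a sketch of the encrypt-side permissions in the first bullet above, using the example key ARNs that appear later in this post (the statement IDs are illustrative), an IAM policy might look like this:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GenerateDataKeyInUsWest2",
      "Effect": "Allow",
      "Action": "kms:GenerateDataKey",
      "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    },
    {
      "Sid": "EncryptInEuCentral1",
      "Effect": "Allow",
      "Action": "kms:Encrypt",
      "Resource": "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321"
    }
  ]
}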

Encryption path

First, define variables for the Amazon Resource Names (ARNs) of your CMKs in us-west-2 and eu-central-1. In the Encryption SDK for C, to encrypt, you can identify a CMK by its CMK ARN or the Alias ARN that is mapped to the CMK ARN.


const char *KEY_ARN_US_WEST_2 = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

const char *KEY_ARN_EU_CENTRAL_1 = "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321";      
     

Now, use the CMK ARNs to create a keyring. In the Encryption SDK, a keyring is used to generate, encrypt, and decrypt data keys under multiple master keys. You’ll create a KMS keyring configured to use multiple CMKs.


struct aws_cryptosdk_keyring *kms_keyring = Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_US_WEST_2, { KEY_ARN_EU_CENTRAL_1 });

When the AWS Encryption SDK uses this keyring to encrypt data, it calls GenerateDataKey on the first CMK that you specify, and Encrypt on each of the remaining CMKs that you specify. The result is a plaintext data key generated in us-west-2, an encryption of the data key using the CMK in us-west-2, and an encryption of the data key using the CMK in eu-central-1.

The plaintext data key that AWS KMS generated in us-west-2 is protected under a TLS session using only cipher suites that support forward-secrecy. The process of sending that same plaintext data key to the AWS KMS endpoint in eu-central-1 for encryption is also protected under a similar TLS session.

The Encryption SDK uses the data key to encrypt your data, and it stores the encrypted data keys with your encrypted content. The result is an encrypted message that can be decrypted using the CMK in us-west-2 or the CMK in eu-central-1.

Now that you understand what’s going to happen after you create the keyring, I’ll return to the code sample. Next, you need to create an encrypt-mode session with your keyring. In the AWS Encryption SDK for C, you use a session to encrypt a single plaintext message or decrypt a single ciphertext message, regardless of its size. The session maintains the state of the message throughout its processing.


struct aws_cryptosdk_session *session = aws_cryptosdk_session_new_from_keyring(alloc, AWS_CRYPTOSDK_ENCRYPT, kms_keyring);

With the keyring and encrypt-mode session, the data processing application can ask the Encryption SDK to encrypt the data under the CMKs that you specified in two different AWS regions:


aws_cryptosdk_session_process(
    session,
    out_ciphertext,
    out_ciphertext_buf_sz,
    out_ciphertext_len,
    in_plaintext,
    in_plaintext_len,
    &in_plaintext_consumed);

The result is an encrypted message that contains the ciphertext and two encrypted copies of the same data key. One encrypted data key was encrypted by your CMK in us-west-2, and the other encrypted data key was encrypted by your CMK in eu-central-1.

Decryption path

In the AWS Encryption SDK for C, you use keyrings for both encrypting and decrypting. You can use the same keyring for both, or you can use different keyrings for each operation.

Why would you want to use a different keyring for decryption? At a high level, encrypt keyrings specify all CMKs that can decrypt the ciphertext. Decrypt keyrings constrain the CMKs the application is permitted to use.

Reusing a keyring for both encrypt and decrypt mode can simplify your AWS Encryption SDK client configuration, but splitting the keyring and using different AWS KMS clients provides more flexibility to meet your security and architecture goals. The option you choose depends in part on the constraints you want to place on the CMKs your application uses.

The Decrypt API in the AWS KMS service doesn’t permit you to specify a CMK as a request parameter. But the AWS Encryption SDK lets you specify one or many CMKs in a decryption keyring, or even discover which CMKs to try automatically. I’ll discuss each option in the next section.

Decryption path 1: Use a specific CMK

This keyring option configures the AWS Encryption SDK to use only a specified CMK in the specified AWS Region. This implies that your data processing application will need kms:Decrypt permissions on that specific CMK and your application will always call the same AWS KMS endpoints in the specified AWS Region. CloudTrail events from the Decrypt API will also only appear in the specified AWS Region.

You might use a specific CMK when the user or application that is decrypting the data has kms:Decrypt permission on only one of the CMKs that encrypted the data keys.

The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the caller has kms:Decrypt permission to use it.

In my example, I encrypted the data keys using CMKs in us-west-2 and eu-central-1, so I’ll start decrypting in eu-central-1 because I want to have a specific decrypt instantiation of the data processing application dedicated to eu-central-1. Assume the eu-central-1 data processing application has configured AWS IAM credentials for a principal with permission to call the Decrypt operation on the eu-central-1 CMK.

Configure a keyring that asks the AWS Encryption SDK to use the CMK in eu-central-1 to decrypt:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1)

The Encryption SDK reads the encrypted message, finds the encrypted data key that was encrypted using the CMK in eu-central-1, and uses this keyring to decrypt.
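
Putting this together, a decrypt path for the eu-central-1 instance of the application might look like the following sketch. It mirrors the encrypt-mode calls shown earlier; the buffer variables are placeholders that you would size and manage in your application:


/* Keyring restricted to the eu-central-1 CMK, and a decrypt-mode session built from it. */
struct aws_cryptosdk_keyring *decrypt_keyring =
    Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1);

struct aws_cryptosdk_session *decrypt_session =
    aws_cryptosdk_session_new_from_keyring(alloc, AWS_CRYPTOSDK_DECRYPT, decrypt_keyring);

/* Process the ciphertext produced earlier; the recovered plaintext is written to out_plaintext. */
aws_cryptosdk_session_process(
    decrypt_session,
    out_plaintext,
    out_plaintext_buf_sz,
    out_plaintext_len,
    in_ciphertext,
    in_ciphertext_len,
    &in_ciphertext_consumed);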

Decryption path 2: Use any of several CMKs

This keyring option configures the AWS Encryption SDK to try several specific CMKs during its decryption attempts, stopping as soon as it succeeds. You should configure the AWS IAM credentials used by your data processing application to have kms:Decrypt permissions on each of the specified regional CMKs.

Your application could end up calling multiple regional AWS KMS endpoints. CloudTrail events from the Decrypt API will appear in the AWS Region in which the decrypt operation succeeds, and in any of the other AWS Regions that the keyring attempts to use. The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the application has kms:Decrypt permission to use it.

You might define an encryption keyring that includes multiple CMKs so that users with different permissions can decrypt the same message. For example, you might include in your encryption keyring keys in multiple AWS regions.

Here’s an example keyring constructed with multiple CMKs:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1, { KEY_ARN_US_WEST_2 })

The AWS Encryption SDK reads each of the encrypted data keys stored in the encrypted message in the order that they appear. For each data key, the Encryption SDK searches the keyring for the matching CMK that encrypted it. If it finds that CMK, the AWS Encryption SDK calls AWS KMS in the AWS Region where the CMK exists to decrypt that data key, then uses that decrypted key to decrypt the message. If the decryption operation fails for any reason, the AWS Encryption SDK moves on to the next encrypted data key in the message and tries again.

The AWS Encryption SDK will try to decrypt the encrypted message in this way until either decryption succeeds, or the AWS Encryption SDK has attempted and failed to decrypt any of the encrypted data keys using the CMKs specified in the keyring.

If this keyring configuration looks familiar, it’s because it’s similar to the configuration you used on the encrypt path when you encrypted under multiple CMKs. The difference is this:

  • Encryption: The AWS Encryption SDK uses every CMK in the keyring to encrypt the data key, and adds all of the encrypted data keys to the encrypted message.
  • Decryption: The AWS Encryption SDK attempts to decrypt one of the encrypted data keys using only the CMKs in the keyring. It stops as soon as it succeeds.

Decryption path 3: Strategic ARNs reduction using the Discovery keyring

The previous decryption paths required you to keep track of the exact CMKs used during the encryption operation, which may suit your needs for security and event logging. But what if you want more flexibility? What if you want to change the CMKs that you use in encryption operations without updating the data processing application that decrypts your data? You can configure a keyring that doesn’t specify CMKs to use for decryption, but instead tries each CMK that encrypted a data key until decryption succeeds or all referenced CMKs fail. We call this configuration a KMS Discovery keyring.

A Discovery keyring is equivalent to a keyring that includes all of the same CMKs that were used to encrypt the data, but it’s simpler and less error-prone. You might use a KMS Discovery keyring if you have no preference among the CMKs that encrypted a data key, and don’t mind the latency tradeoffs of trying CMKs in remote AWS Regions, or trying CMKs that will fail a permissions check while searching for one that succeeds. You can think of the KMS Discovery keyring as a universal keyring that you can use and reuse in your applications in many AWS Regions.

When you use a KMS Discovery keyring, the AWS Encryption SDK reads each encrypted data key and discovers the ARN of the CMK used to encrypt it. The AWS Encryption SDK then uses the configured IAM credentials to call AWS KMS in that CMK’s AWS Region to decrypt the data key. The AWS Encryption SDK repeats that process until it has decrypted the data key or runs out of encrypted data keys to try.


Aws::Cryptosdk::KmsKeyring::Builder().BuildDiscovery();

While KMS Discovery keyrings are simpler, you run the risk of having your data processing application make a cross-region call to an AWS KMS endpoint that adds unwanted latency. In my example, you might not want the decrypting application running in us-west-2 to wait for the AWS Encryption SDK to call AWS KMS in eu-central-1. To use only the CMKs in a particular AWS Region to decrypt the data keys, create a KMS Regional Discovery keyring that specifies the AWS Region, but not the CMK ARNs. In my example, the following keyring allows the AWS Encryption SDK to use only CMKs in us-west-2.


Aws::Cryptosdk::KmsKeyring::Builder()
        .WithKmsClient(create_kms_client(Aws::Region::US_WEST_2)).BuildDiscovery();

Because this example KMS Regional Discovery keyring specifies a client for the us-west-2 AWS Region, not a CMK ARN, the AWS Encryption SDK will only try to decrypt any encrypted data key it finds that was encrypted using any CMK in us-west-2. If, for some reason, none of the encrypted data keys was encrypted using a CMK in us-west-2, or the application decrypting the data doesn’t have permission to use CMKs in us-west-2, the AWS Encryption SDK call to decrypt the message with this keyring fails and fails fast. This may provide you with more options for deterministic error handling.

Keep in mind that the KMS Regional Discovery keyring allows the AWS Encryption SDK to try the CMK for each encrypted data key in the specified AWS Region. However, AWS KMS never uses a CMK until it verifies that the caller has permission to perform the requested operation. If the application doesn’t have kms:Decrypt permission for any of the CMKs that were used to encrypt the data keys, decryption fails.

Summary

Encrypting KMS data keys using multiple CMKs provides a variety of options to decrypt ciphertexts to meet your security, auditing, and latency requirements. My examples show how encrypted messages can be decrypted by using AWS KMS CMKs in multiple AWS Regions. You can also use the Encryption SDK with master keys supplied by a custom key management infrastructure independent of AWS.

The AWS Encryption SDK’s portable and interoperable encrypted message format makes it easier to combine multiple encrypted data keys with your encrypted data to support the decryption access scheme you want. The AWS Encryption SDK for C brings these utilities to a new, broader set of platform and application environments to complement the existing Java and Python versions.

You can find the AWS Encryption SDK for C on GitHub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Crypto Tools forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Liz Roth

Liz is a Senior Software Development Engineer at Amazon Web Services. She has been at Amazon for more than 8 years and has more than 10 years of industry experience across a variety of areas, including security, networks, and operations.

AWS Security Profiles: Stephen Quigg, Principal Security Solutions Architect, Financial Services Industry

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-stephen-quigg-principal-security-solutions-architect-financial-services-industry/

AWS Security Profiles: Stephen Quigg, Principal Security Solutions Architect, Financial Services Industry

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do as a Principal Security Solutions Architect?

I’ve been with AWS for six years and four months. My job is to help customers get comfortable with the security measures that AWS takes on their behalf—measures which help customers build secure applications on top of our services. I help them understand both sides of the Shared Responsibility Model.

What’s your favorite part of your job?

Being thrust into a room full of security professionals who don’t have experience with AWS. They’re understandably wary of trying new things, so I’m happy when I see their attitudes and moods change. Once they begin to understand how I can help them, they start nodding their heads and uncrossing their arms.

What’s the most challenging part of your job?

When I present customers with a logical argument, some of them still won’t be reassured. I can ask “why not?” but they might not be able to explain. That’s a tough situation, because those are the people that my team needs to keep talking to. We need to keep trying to understand what drives their concerns about cloud security, and what needs to happen for them to make the shift. It’s not effective to tell people that they’re wrong. You have to understand their needs, explain things, and let them draw their own conclusions.

When I look at our customers’ information security teams, these teams are just people like me, who are often stuck in very difficult situations. They might not have the resources or the time to do all the things they want to do, and they probably have to interact with a lot of people who think of them as blockers, even though their job is to protect the company. AWS represents a wonderful opportunity for change. I want to show them a new style of service, and that we’re listening to them, and that we’re acting in their best interest.

In your opinion, what’s the biggest challenge facing cloud security in the financial services industry right now?

The biggest challenge for our customers is that they already have so much work to do. And when their businesses start transitioning to AWS, their security teams now have new work.

Their ability to do all of that work can be improved through the advanced automation that AWS offers. But we need to help our customers learn how to build automated solutions. We offer wonderful services, and we can help people stitch them together, but all of this represents a new “cloud security” operating model that customers often have to start up while still maintaining their existing operating model.

Five years from now, what changes do you think we’ll see across the financial services/cloud security landscape?

Many incidents that currently require a human response will be dealt with automatically. Beyond that, the ability to automatically analyze the evidence to figure out what went wrong, and what to do about it is where we’ll see big developments.

I think the other big change is that customers will start worrying less about high-touch processes like patching, because they’ll have automated the process of building new things. There will still be vulnerabilities—there will always be vulnerabilities—but these types of changes will be helpful because it will mean fewer people logged onto production servers. We’re moving toward a zero-touch environment where, barring extreme emergencies, everything can be done via automation.

It’s typically human error that causes security incidents. It’s not malicious, it’s just that people make mistakes. And the fewer the number of people involved, the fewer the number of mistakes that can happen, because you can automate code testing to make sure it’s working properly. It’s hard to have that level of assurance when we’re talking about a person’s fingers on a keyboard.

How does working in a highly regulated industry like financial services affect the work that you do as a solutions architect?

I have to be really diligent about listening. I can’t answer customer requests by painting with broad brushstrokes and saying “close enough.” Many AWS customers are regulated. They have strict requirements from their regulators that they have to meet. AWS can’t make assumptions that something is “good enough.” The customer has to make those decisions.

So it’s important to be very precise about what you tell customers, and it’s important to be very transparent, and to give them very accurate information, because they’ll need to be able to prove that they understand the AWS cloud when they talk to their regulators. It’s not good enough for AWS to talk on their behalf.

What does cloud security mean to you, personally?

I love technology and security. Since 1999, when I first started working full-time as a security professional, my singular point of focus has been information security. Back then, my job was to help advise people on security architecture and to do pen testing. And during those years, I watched a lot of people fail: The solutions they bought were the equivalent of trying to patch up a wound with sticking plasters. The solutions rarely worked.

What drives me today is that after six years at AWS, I still absolutely love it. I’m a part of one of the biggest improvements that has ever happened to information security. The ability of customers to automate tasks, plus the ability to pass management of really complex architecture to a cloud provider that is focused on security, a provider that “does security” as its core competency, its core business—that thrills me.

I see all these opportunities for our customers, and I want them all to succeed, because everyone benefits from a more secure world. I benefit from a life where I can do my banking on my phone, and where I can video call from anywhere. My family and my children will benefit from this world.

You’re co-hosting a session at re:Inforce that focuses on “When and how” to use AWS CloudHSM. When should you use CloudHSM, and what’s the alternative?

You should choose AWS CloudHSM when you have strict compliance requirements that require cryptographic operations to be performed within a FIPS 140-2 Level 3 validated device, which is what CloudHSM is. You should use it when you can’t use any other AWS cryptographic service.

CloudHSM is a really important service for a small set of customers that really need it. Many of our customers’ needs are actually more easily served by other AWS services. And those services often require less skill to set up, and less attention from the customer. CloudHSM, by its very nature, gives customers a lot of responsibility. If you’re able to pick other AWS cryptographic services, AWS can do much more of the work.

You’re also hosting a builders session. What can you tell us about that topic?

Many people who are new to the cloud come from a traditional security architecture background, so we need to teach them about architecture in the cloud. Automation is really, really important in the cloud, but to get to the point where you can automate everything, there’s a lot you need to design and build.

Many of our customers in financial services also care very deeply about how things connect to the internet, whether it’s traffic that’s coming to their Amazon Elastic Compute Cloud (EC2) instances or traffic from their EC2 instances that’s going back out to the internet.

So I’ve designed a session to help those people get comfortable with how to take traditional security architecture and rethink it for AWS. This moves them one step closer to the cloud. The session includes stepping stones between what you’re doing on-premises and how you can do it in the cloud. I want to help people get comfortable and realize that the shift isn’t as large as they thought it was. The session will cover topics like firewalls, inbound and outbound proxies, and web application firewalls: all things that are easily built on top of AWS services.

What do you hope that people will do differently as a result of attending your sessions?

The CloudHSM session is a deep dive into how customers can optimize their use of the service in terms of cost, performance, and resilience.

I’m hoping that the people who attend the session are either planning a deployment of CloudHSM or have existing deployments that they want to look at in terms of those three levers—cost, performance, and resilience. I want people to review their requirements and ask, “Am I using too many CloudHSMs? Have I provisioned too many?” There might be an opportunity to save money. Or, “Am I built resiliently enough? Do I have CloudHSMs deployed in multiple Availability Zones, or maybe in multiple Regions?” And finally, “Am I using the right AWS crypto service? Do I really need a CloudHSM, or should I be using AWS Key Management Service or AWS Certificate Manager instead?”

For the second session on modernizing security architecture, I’d like people to go home and start thinking about moving their data centers to AWS. I’d like to be the catalyst. Earlier, I mentioned some of the challenges customers can face when transitioning to security in the cloud—I hope my session will help attendees start thinking through the logistics of that transition.

Anything else about either session you want us to know about?

Yes. I am not a cryptographer, so someone from the AWS cryptography team is going to co-host the CloudHSM deep-dive. When I’m speaking to my customers, I always try to be very clear about this point. The one thing you should never wave your arms and be vague about is cryptography. I think cryptography on its own is fascinating, so I’m humbled by the fact that I will never be a cryptographer.

Have you been to Boston before? Is there anything that you’re looking forward to seeing or doing while you’re there?

I’ve never been to Boston, and I’m really looking forward to it. It seems like a beautiful city, and I’m excited about experiencing its atmosphere and its character as an historic European point of entry into America. Every American city has such a different character, so I’m looking forward to experiencing Boston. It’s also the city where one of my favorite Neal Stephenson books, Zodiac, is set!

You make music, with track titles like “Hardware Security Module – SHA-3.” How does your day job inform your music? (And vice versa?)

I love my electro music, and I love my techno. The origins of techno were in Detroit, where the sound was built based on the repetitive noise of machines. Now, I’m from near Glasgow, Scotland, and Glasgow was at the heart of the Industrial Revolution. It was called the “engine room of the Empire.” We had shipyards and production lines. So I think the people of Glasgow and the people of Detroit have a common tie in terms of the music that we like.

Working in technology fuels the same kind of inspiration for me. We’re in the middle of a new industrial revolution, and my work and my mood very much influence my music. I’m really happy working at AWS, and I get to work with absolutely awesome technology daily. This all bleeds into the music I make. (HSM and Detroit are my techno tracks, and HSM and Chicago are my acid house tracks.)

You live in Scotland. What’s one thing a visitor should see, eat, or do when visiting Scotland?

If I were from Edinburgh, I’d tell you to see Edinburgh Castle. But instead, I’d say that you must visit the West Coast. The West Coast is our islands and our mountains—places like Glencoe, if you can get there. Other countries have bigger mountains. Other countries have steeper cliffs. But there is a loneliness and a barrenness about Scotland that’s quite breathtaking, and that I’ve never found anywhere else in the world. The West Coast of Scotland is absolutely the place that you must experience if you visit the country. If you don’t, you won’t understand the whole “Scottish thing” that we have going on.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

author photo

Stephen Quigg

Squigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

Building an AWS Landing Zone from Scratch in Six Weeks

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/building-an-aws-landing-zone-from-scratch-in-six-weeks/

In an effort to deliver a simpler, smarter, and more unified experience on its website, the UK’s Ministry of Justice and its Lead Technical Architect, James Abley, created a bespoke AWS Landing Zone, a pre-defined template for an AWS account or infrastructure. And they did it in six weeks.

Supporting 33 agencies and public bodies, and making sure they all work together, the Ministry of Justice is at the heart of the United Kingdom’s justice system. Its task is to look after all parts of the justice system, including the courts, prisons, probation services, and legal aid, striving to bring the principles of justice to life for everyone in society.

In a This Is My Architecture video, shot at re:Invent 2018 in Las Vegas, James talks with AWS Solutions Architect Simon Treacy about the importance of delivering a consistent experience to his website’s customers, a mix of citizens and internal legal aid agency caseworkers.

Utilizing a number of AWS services, James walks us through the user experience and explains why he decided to put Amazon CloudFront and AWS WAF (Web Application Firewall) up front to improve the security posture of the ministry’s legacy applications and extend their lifespan. James also explains how he split traffic between two Availability Zones, using Elastic Load Balancing (ELB) to provide higher availability and resilience, which will help with zero-downtime deployment later on.

 

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

 

AWS Security Profiles: Tracy Pierce, Senior Consultant, Security Specialty, Remote Consulting Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-tracy-pierce-senior-consultant-security-specialty-rcs/

AWS Security Profiles: Tracy Pierce, Senior Consultant, Security Specialty, Remote Consulting Services

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You’ve worn a lot of hats at AWS. What do you do in your current role, and how is it different from previous roles?

I joined AWS as a Customer Support Engineer. Currently, I’m a Senior Consultant, Security Specialty, for Remote Consulting Services, which is part of the AWS Professional Services (ProServe) team.

In my current role, I work with ProServe agents and Solution Architects who might be out with customers onsite and who need stuff built. “Stuff” could be automation, like AWS Lambda functions or AWS CloudFormation templates, or even security best practices documentation… you name it. When they need it built, they come to my team. Right now, I’m working on an AWS Lambda function to pull AWS CloudTrail logs so that you can see if anyone is making policy changes to any of your AWS resources—and if so, have it written to an Amazon Aurora database. You can then check to see if it matches the security requirements that you have set up. It’s fun! It’s new. I’m developing new skills along the way.

In my previous Support role, my work involved putting out fires, walking customers through initial setup, and showing them how to best use resources within their existing environment and architecture. My position as a Senior Consultant is a little different—I get to work with the customer from the beginning of a project rather than engaging much later in the process.

What’s your favorite part of your job?

Talking with customers! I love explaining how to use AWS services. A lot of people understand our individual services but don’t always understand how to use multiple services together. We launch so many features and services that it’s understandably hard to keep up. Getting to help someone understand, “Hey, this cool new service will do exactly what I want!” or showing them how it can be combined in a really cool way with this other new service—that’s the fun part.

What’s the most challenging part of your job?

Right now? Learning to code. I don’t have a programming background, so I’m learning Python on the fly with the help of some teammates. I’m a very graphic-oriented, visual learner, so writing lines of code is challenging. But I’m getting there.

What career advice would you offer to someone just starting out at AWS?

Find a thing that you’re passionate about, and go for it. When I first started, I was on the Support team in the Linux profile, but I loved figuring out permissions and firewall rules and encryption. I think AWS had about ten services at the time, and I kept pushing myself to learn as much as I could about AWS Identity and Access Management (IAM). I asked enough questions to turn myself into an expert in that field. So, my best advice is to find a passion and don’t let anything hold you back.

What inspires you about security? Why is it something you’re passionate about?

It’s a puzzle, and I love puzzles. We’re always trying to stay one step ahead, which means there’s always something new to learn. Every day, there are new developments. Working in Security means trying to figure out how this ever-growing set of puzzles and pieces can fit together—if one piece could potentially open a back door, how can you find a different piece that will close it? Figuring out how to solve these challenges, often while others in the security field are also working on them, is a lot of fun.

In your opinion, what’s the biggest challenge facing cloud security right now?

There aren’t enough people focusing on cybersecurity. We’re in an era where people are migrating from on-prem to cloud, and it requires a huge shift in mindset to go from working with on-prem hardware to systems that you can no longer physically put your hands on. People are used to putting in physical security restraints, like making sure door locks and badges are required for entry. When you move to the cloud, you have to start thinking not just about security group rules—like who’s allowed access to your data—but about all the other functions, features, and permissions that are a part of your cloud environment. How do you restrict those permissions? How do you restrict them for a certain team versus certain projects? How can you best separate job functions, projects, and teams in your organization? There’s so much more to cybersecurity than the stories of “hackers” you see on TV.

What’s the most common misperception you encounter about cloud security?

That it’s a one-and-done thing. I meet a lot of people who think, “Oh, I set it up” but who haven’t touched their environment in four years. The cloud is ever-changing, so your production environment and workloads are ever-changing. They’re going to grow; they’ll need to be audited in some fashion. It’s important to keep on top of that. You need to audit permissions, audit who’s accessing which systems, and make sure the systems are doing what they’re supposed to. You can’t just set it up and be finished.

How do you help educate customers about these types of misperceptions?

I go to AWS Pop-up Lofts periodically, plus conferences like re:Inforce and re:Invent, where I spend a lot of time helping people understand that security is a continuous thing. Writing blog posts also really helps, since it’s a way to show customers new ways of securing their environment using methods that they might not have considered. I can take edge cases that we might hear about from one or two customers, but which probably affect hundreds of other organizations, and reach out to them with some different setups.

You’re leading a re:Inforce builders session called “Automating password and secrets, and disaster recovery.” What’s a builders session?

Builders sessions are basically labs: There will be a very short introduction to the session, where you’re introduced to the concepts and services used in the lab. In this case, I’ll talk a little about how you can make sure your databases and resources are resilient and that you’ve set up disaster recovery scenarios.

After that, I walk around while people try out the services, hands-on, for themselves, and I see if anyone has questions. A lot of people learn better if they actually get a chance to play with things instead of just read about them. If people run into issues, like, “Why does the code say this for example?” or “Why does it create this folder over here in a different region?” I can answer those questions in the moment.

How did you arrive at your topic?

It’s based on a blog post that I wrote, called “How to automate replication of secrets in AWS Secrets Manager across AWS Regions.” It was a highly requested feature from customers that were already dealing with RDS databases. I actually wrote two posts–the second post focused on Windows passwords, and it demonstrated how you can have a secure password for Windows without having to share an SSH key across multiple entities in an organization. These two posts gave me the idea for the builders session topic: I want to show customers that you can use Secrets Manager to store sensitive information without needing to have a human manually read it in plain text.

A lot of customers are used to an on-premises access model, where everything is physical and things are written in a manual—but then you have to worry about safeguarding the manual so that only the appropriate people can read it. With the approach I’m sharing, you can have two or three people out of your entire organization who are in charge of creating the security aspects, like password policy, creation, rotation, and function. And then all other users can log in: The system pulls the passwords for them, inputs the passwords into the application, and the users do not see them in plain text. And because users have to be authenticated to access resources like the password, this approach prevents people from outside your organization from going to a webpage and trying to pull that secret and log in. They’re not going to have permissions to access it. It’s one more way for customers to lock down their sensitive data.

What are you hoping that your audience will do differently as a result of this session?

I hope they’ll begin migrating their sensitive data—whether that’s the keys they’re using to encrypt their client-side databases, or their passwords for Windows—because their data is safer in the cloud. I want people to realize that they have all of these different options available, and to start figuring ways to utilize these solutions in their own environment.

I also hope that people will think about the processes that they have in their own workflow, even if those processes don’t extend to the greater organization and it’s something that only affects their job. For example, how can they make changes so that someone can’t just walk into their office on any given day and see their password? Those are the kinds of things I hope people will start thinking about.

Is there anything else you want people to know about your session?

Security is changing so much and so quickly that nobody is 100% caught up, so don’t be afraid to ask for help. It can feel intimidating to have to figure out new security methods, so I like to remind people that they shouldn’t be afraid to reach out or ask questions. That’s how we all learn.

You love otters. What’s so great about them?

I’m obsessed with them—and otters and security actually go together! When otters are with their family group, they’re very dedicated to keeping outsiders away and not letting anybody or anything get into their den, into their home, or into their family. There are large Amazon river otters that will actually take on caimans, as a family group, to make sure the caimans don’t get anywhere near the nest and attack the pups. Otters also try to work smarter, not harder, which I’ve found to be a good motto. If you can accomplish your goal through a small task, and it’s efficient, and it works, and it’s secure, then go for it. That’s what otters do.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Spring 2019 SOC 2 Type 1 Privacy report now available

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-soc-2-type-1-privacy-report-now-available/

At AWS, our customers’ security and privacy are of the highest importance, and we continue to provide transparency into our security and privacy posture. Following our first SOC 2 Type 1 Privacy report released in December 2018, AWS is proud to announce the release of the Spring 2019 SOC 2 Type 1 Privacy report. The Spring 2019 SOC 2 Privacy report provides you with a third-party attestation of our systems and the suitability of the design of our privacy controls. The report also provides a detailed description of those controls, the same controls that AWS uses to address the GDPR requirements around data security and privacy.

This updated report is a part of the SOC family of reports, and has been updated to align with the new Association of International Certified Professional Accountants (AICPA) Trust Service Criteria. The Trust Service Criteria align with the Committee of Sponsoring Organizations of the Treadway Commission (COSO) 2013 framework which has been designed to better address cybersecurity risks.

The highlights of the new Trust Service Criteria include:

  • A definition of principal service commitments and system requirements.
  • Restructuring and addition of supplemental criteria to better address cybersecurity risks.
  • New description criteria requiring the disclosure of system incidents.

The scope of the privacy report includes systems that AWS uses to collect personal information and all 104 services and locations in scope for the latest AWS SOC reports. You can download the new SOC 2 Type 1 Privacy report now through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Spring 2019 SOC reports now available with 104 services in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-soc-reports-now-available-with-104-services-in-scope/

We’re celebrating the addition of 31 new services in scope with our latest SOC report, pushing AWS past the century mark for the first time – with 104 total services in scope, to be exact! These services are now available under our System and Organizational Controls (SOC) 1, 2, and 3 audits, including the 31 new services added during this most recent audit cycle. These SOC reports are now available to you on demand in the AWS Management Console. The SOC 3 report can also be downloaded online as a PDF.

The SOC 2 report has been updated to align with the new Association of International Certified Professional Accountants (AICPA) Trust Service Criteria. The new Trust Service Criteria align with the Committee of Sponsoring Organizations of the Treadway Commission (COSO) 2013 framework and are designed to provide flexibility that better addresses cybersecurity risks. The new Trust Service Criteria provide customers with more information on how AWS mitigates cybersecurity risks. Updates related to the new Trust Service Criteria are as follows:

  • Restructuring and realignment of the Trust Service Criteria with the COSO 2013 Framework.
  • Restructuring and addition of supplemental criteria to better address cybersecurity risks.
  • Inclusion of the 17 COSO principles within the SOC 2 common criteria.
  • Additional points of focus added to all criteria, such as requirements to add additional description around service commitments and system requirements.

Here are the 31 services newly added to our SOC scope:

As always, my team strives to bring services into the scope of our compliance programs based on your architectural and regulatory needs. Please reach out to your AWS representatives to let us know what additional services you would like to see in scope across any of our compliance programs.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Create fine-grained session permissions using IAM managed policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/create-fine-grained-session-permissions-using-iam-managed-policies/

As a security best practice, AWS Identity and Access Management (IAM) recommends that you use temporary security credentials from AWS Security Token Service (STS) when you access your AWS resources. Temporary credentials are short-term credentials generated dynamically and provided to the user upon request. Today, one of the most widely used mechanisms for requesting temporary credentials in AWS is an IAM role. The advantage of using an IAM role is that multiple users in your organization can assume the same IAM role. By default, all users assuming the same role get the same permissions for their role session.

To create distinctive role session permissions or to further restrict session permissions, users or systems can set a session policy when assuming a role. A session policy is an inline permissions policy which users pass in the session when they assume the role. You can pass the policy yourself, or you can configure your broker to insert the policy when your identities federate into AWS (if you have an identity broker configured in your environment). This allows your administrators to reduce the number of roles they need to create, since multiple users can assume the same role yet have unique session permissions. If users don’t require all the permissions associated with the role to perform a specific action in a given session, your administrator can configure the identity broker to pass a session policy to reduce the scope of session permissions when users assume the role. This helps administrators set permissions for users to perform only those specific actions for that session.

With today’s launch, AWS now enables you to specify multiple IAM managed policies as session policies when users assume a role. This means you can use multiple IAM managed policies to create fine-grained session permissions for your users’ sessions. Additionally, you can centrally manage session permissions using IAM managed policies.

In this post, I review session policies and their current capabilities, introduce the concept of using IAM managed policies as session policies to control session permissions, and show you how to use managed policies to create fine-grained session permissions in AWS.

How do session policies work?

Before I walk through an example, I’ll review session policies.

A session policy is an inline policy that you can create on the fly and pass in the session during role assumption to further scope the permissions of the role session. The effective permissions of the session are the intersection of the role’s identity-based policies and the session policy. The maximum permissions that a session can have are the permissions that are allowed by the role’s identity-based policies. You can pass a single inline session policy programmatically by using the policy parameter with the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, and GetFederationToken API operations.

Next, I’ll provide an example with an inline session policy to demonstrate how you can restrict session permissions.

Example: Passing a session policy with AssumeRole API to restrict session permissions

Consider a scenario where security administrator John has administrative privileges when he assumes the role SecurityAdminAccess in the organization’s AWS account. When John assumes this role, he knows the specific actions he’ll perform using this role. John is cautious of the role permissions and follows the practice of restricting his own permissions by using a session policy when assuming the role. This way, John ensures that at any given point in time, his role session can only perform the specific action for which he assumed the SecurityAdminAccess role.

In my example, John only needs permissions to access an Amazon Simple Storage Service (S3) bucket called NewHireOrientation in the same account. He passes a session policy using the policy.json file below to reduce his session permissions when assuming the role SecurityAdminAccess.


{
"Version":"2012-10-17",
"Statement":[{
    "Sid":"Statement1",
    "Effect":"Allow",
    "Action":["s3:GetBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::NewHireOrientation", "arn:aws:s3:::NewHireOrientation/*"]
    }]
}  

In this example, the action and resources elements of the policy statement allow access only to the NewHireOrientation bucket and all the objects inside this bucket.

Using the AWS Command Line Interface (AWS CLI), John can pass the session policy’s file path (that is, file://policy.json) while calling the AssumeRole API with the following commands:


aws sts assume-role \
--role-arn "arn:aws:iam::111122223333:role/SecurityAdminAccess" \
--role-session-name "s3-session" \
--policy file://policy.json

When John assumes the SecurityAdminAccess role using the above command, his effective session permissions are the intersection of the permissions on the role and the session policy. This means that although the SecurityAdminAccess role had administrative privileges, John’s resulting session permissions are s3:ListBucket and s3:GetObject on the NewHireOrientation bucket. This way, John can ensure he only has access to the NewHireOrientation bucket for this session.
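
For reference, here’s a minimal boto3 sketch of the same role assumption. It reuses the fictitious account ID and the policy.json contents from this example, and it’s a sketch rather than a drop-in implementation:


import json

import boto3

sts = boto3.client("sts")

# Inline session policy (same statement as policy.json above).
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Statement1",
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [
            "arn:aws:s3:::NewHireOrientation",
            "arn:aws:s3:::NewHireOrientation/*"
        ]
    }]
}

response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/SecurityAdminAccess",
    RoleSessionName="s3-session",
    Policy=json.dumps(session_policy)
)

# The temporary credentials are scoped to the intersection of the role's
# permissions and the session policy.
credentials = response["Credentials"]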

Using IAM managed policies as session policies

You can now pass up to 10 IAM managed policies as session policies. This gives you the ability to further restrict session permissions. The managed policy you pass can be AWS managed or customer managed. To pass managed policies as session policies, you need to specify the Amazon Resource Name (ARN) of the IAM policies using the new policy-arns parameter in the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken API operations. You can use existing managed policies or create new policies in your account and pass them as session policies with any of the aforementioned APIs. The managed policies passed in the role session must be in the same account as that of the role. Additionally, you can pass an inline session policy and ARNs of managed policies in the same role session. To learn more about the sizing guidelines for session policies, please review the STS documentation.

Next, I’ll provide an example using IAM managed policies as session policies to help you understand how you can use multiple managed policies to create fine-grained session permissions.

Example: Passing IAM managed policies in a role session

Consider an example where Mary has a software development team in California (us-west-1) working on a project using Amazon Elastic Compute Cloud (EC2). This team needs permissions to spin up new EC2 instances to meet the project’s scalability requirements. Mary’s organization has a security policy that requires developers to create and manage AWS resources in their respective geographic locations only. This means a developer from California should have permissions to launch new EC2 instances only in California. Now, Mary’s organization has an identity and authentication system such as Active Directory, for which all employees already have identities created. Additionally, there is a custom identity broker application which verifies that employees are signed into the existing identity and authentication system. This broker application is configured to obtain temporary security credentials for the employees using the AssumeRole API. (To learn more about using an identity provider and identity broker with AWS, please see AWS Federated Authentication with Active Directory Federation Services.)

Mary creates a managed policy called DevCalifornia and adds a region restriction for California using the aws:RequestedRegion condition key. Following the best practice of granting least privilege, Mary lists out the specific actions the developers would need for spinning up EC2 instances:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "ec2:RunInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-west-1"
                }
            }
        }
    ]
}

The above policy grants specific permissions to launch EC2 instances. The condition element of the policy sets a restriction on the Region where these actions can be performed. The condition key aws:RequestedRegion ensures that these service-specific actions can only be performed in California.
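
If you prefer to script this setup, here’s a hedged boto3 sketch of creating the DevCalifornia managed policy. The policy document is trimmed to two actions to keep the sketch short; in practice you would pass the full document shown above:


import json

import boto3

# Trimmed policy document; use the full DevCalifornia document shown above.
dev_california_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances", "ec2:RunInstances"],
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:RequestedRegion": "us-west-1"}}
    }]
}

iam = boto3.client("iam")

response = iam.create_policy(
    PolicyName="DevCalifornia",
    PolicyDocument=json.dumps(dev_california_policy)
)

# The returned ARN is what the identity broker passes as a session policy.
print(response["Policy"]["Arn"])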

For Mary’s team’s use case, instead of creating a new role, Mary uses an existing role in her account called EC2Admin, which has the AmazonEC2FullAccess AWS managed policy attached to it, granting full access to Amazon EC2. Next, Mary configures the identity broker in such a way that the developers from the team in California can assume the EC2Admin role but with reduced session permissions. The broker passes the DevCalifornia managed policy as a session policy to reduce the scope of the session permissions when a developer from Mary’s team assumes the role. This way, Mary can ensure the team remains compliant with her organization’s security policy.

If performed using the AWS CLI, the command would look like this:

aws sts assume-role --role-arn "arn:aws:iam::444455556666:role/EC2Admin" --role-session-name "teamCalifornia-session" --policy-arns arn="arn:aws:iam::444455556666:policy/DevCalifornia"

If you want to pass multiple managed policies as session policies, then the command would look like this:

aws sts assume-role --role-arn "arn:aws:iam::<accountID>:role/<RoleName>" --role-session-name "<example-session>" --policy-arns arn="arn:aws:iam::<accountID>:policy/<PolicyName1>" arn="arn:aws:iam::<accountID>:policy/<PolicyName2>"

In the above example, PolicyName1 and PolicyName2 can be AWS managed or customer managed policies. You can also use them in conjunction, where PolicyName1 is an AWS managed policy and PolicyName2 a customer managed policy.
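
For completeness, here’s a hedged boto3 sketch of the same call. The account ID, role name, and policy name are the placeholders from the example commands above:


import boto3

sts = boto3.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::444455556666:role/EC2Admin",
    RoleSessionName="teamCalifornia-session",
    # Up to 10 managed policy ARNs can be passed as session policies.
    PolicyArns=[
        {"arn": "arn:aws:iam::444455556666:policy/DevCalifornia"}
    ]
)

credentials = response["Credentials"]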

Conclusion

You can now use IAM managed policies as session policies in role sessions and federated sessions to create fine-grained session permissions. You can use this functionality today by creating IAM managed policies using your existing inline session policies and referencing their policy ARNs in your role sessions. You can also keep using your existing session policy and pass the ARNs of IAM managed policies using the new policy-arn parameter to further scope your session permissions.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Sulay Shah

Sulay is the product manager for the Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

How to share encrypted AMIs across accounts to launch encrypted EC2 instances

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/how-to-share-encrypted-amis-across-accounts-to-launch-encrypted-ec2-instances/

Do you encrypt your Amazon Machine Images (AMIs) with AWS Key Management Service (AWS KMS) customer master keys (CMKs) for regulatory or compliance reasons? Do you launch instances with encrypted root volumes? Do you create a golden AMI and distribute it to other accounts in your organization for standardizing application-specific Amazon Elastic Compute Cloud (Amazon EC2) instance launches? If so, then we have good news for you!

We’re happy to announce that you can now share AMIs encrypted with customer-managed CMKs across accounts with a single API call. Additionally, you can launch an EC2 instance directly from an encrypted AMI that has been shared with you. Previously, this was possible for unencrypted AMIs only. Extending this capability to encrypted AMIs simplifies your AMI distribution process and reduces the snapshot storage cost associated with the older method of sharing encrypted AMIs that resulted in a copy in each of your accounts.

In this post, we demonstrate how you can share an encrypted AMI between accounts and launch an encrypted, Amazon Elastic Block Store (Amazon EBS) backed EC2 instance from the shared AMI.

Prerequisites for sharing AMIs encrypted with a customer-managed CMK

Before you can begin sharing the encrypted AMI and launching an instance from it, you’ll need to set up your AWS KMS key policy and AWS Identity and Access Management (IAM) policies.

For this walkthrough, you need two AWS accounts:

  1. A source account in which you build a custom AMI and encrypt the associated EBS snapshots.
  2. A target account in which you launch instances using the shared custom AMI with encrypted snapshots.

In this example, I’ll use fictitious account IDs 111111111111 and 999999999999 for the source account and the target account, respectively. In addition, you’ll need to create an AWS KMS customer master key (CMK) in the source account in the source region. For simplicity, I’ve created an AWS KMS CMK with the alias cmkSource under account 111111111111 in us-east-1. I’ll use this cmkSource to encrypt my AMI (ami-12345678) that I’ll share with account 999999999999. As you go through this post, be sure to change the account IDs, source account, AWS KMS CMK, and AMI ID to match your own.
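
If you’d rather script this setup than use the console, here’s a hedged boto3 sketch of creating a CMK and the cmkSource alias in the source account; the description text is illustrative:


import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer-managed CMK and give it the alias used in this example.
key = kms.create_key(Description="CMK for encrypting the shared AMI example")
kms.create_alias(
    AliasName="alias/cmkSource",
    TargetKeyId=key["KeyMetadata"]["KeyId"]
)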

Create the policy setting for the source account

First, an IAM user or role in the source account needs permission to share the AMI (an EC2 ModifyImageAttribute operation). The following JSON policy document shows an example of the permissions an IAM user or role policy needs to have. In order to share ami-12345678, you’ll need to create a policy like this:


    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2: ModifyImageAttribute",
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1::image/<12345678>"
            ]
        }
 ] 
}

Second, the target account needs permission to use cmkSource for re-encrypting the snapshots. The following steps walk you through how to add the target account’s ID to the cmkSource key policy.

  1. From the IAM console, select Encryption Keys in the left pane, and then select the source account’s CMK, cmkSource, as shown in Figure 1:
     
    Figure 1: Select "cmkSource"

  2. Look for the External Accounts subsection, type the target account ID (for example, 999999999999), and then select Add External Account.
     
    Figure 2: Enter the target account ID and then select "Add External Account"

Create a policy setting for the target account

Once you’ve configured the source account, you need to configure the target account. The IAM user or role in the target account needs to be able to perform the AWS KMS DescribeKey, CreateGrant, ReEncrypt* and Decrypt operations on cmkSource in order to launch an instance from a shared encrypted AMI. The following JSON policy document shows an example of these permissions:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:<111111111111>:key/<key-id of cmkSource>"
            ]                                                    
        }
    ]
}

Once you’ve completed the configuration steps in the source and target accounts, then you’re ready to share and launch encrypted AMIs. The actual sharing of the encrypted AMI is no different from sharing an unencrypted AMI. If you want to use the AWS Management Console, then follow the steps described in Sharing an AMI (Console). To use the AWS Command Line Interface (AWS CLI), follow the steps described in Sharing an AMI (AWS CLI).

Below is an example of what the CLI command to share the AMI would look like:


aws ec2 modify-image-attribute --image-id <ami-12345678> --launch-permission "Add=[{UserId=<999999999999>}]"
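
If you’re sharing the AMI programmatically instead, a boto3 sketch of the equivalent call (reusing the example AMI ID and target account ID) looks like this:


import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Grant launch permission on the encrypted AMI to the target account.
ec2.modify_image_attribute(
    ImageId="ami-12345678",
    LaunchPermission={"Add": [{"UserId": "999999999999"}]}
)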

Launch an instance from the shared encrypted AMI

To launch an instance from an AMI that was shared with you, set the AMI ID of the shared AMI in the image-id parameter of the RunInstances API/CLI. Optionally, to re-encrypt the volumes with a custom CMK in your account, you can specify the KmsKeyId in the Block Device Mapping as follows:


    $> aws ec2 run-instances \
    --image-id <ami-12345678> \
    --count 1 \
    --instance-type m4.large \
    --region us-east-1 \
    --subnet-id subnet-aec2fc86 \
    --key-name 2016KeyPair \
    --security-group-ids sg-f7dbc78e \
    --block-device-mappings file://mapping.json

Where the mapping.json contains the following:


[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
                "Encrypted": true,
                "KmsKeyId": "arn:aws:kms:us-east-1:<999999999999>:key/<abcd1234-a123-456a-a12b-a123b4cd56ef>"
        }
    }
]

While launching the instance from a shared encrypted AMI, you can specify a CMK of your choice. You may also choose cmkSource to encrypt volumes in your account. However, we recommend that you re-encrypt the volumes using a CMK in the target account. This protects you if the source CMK is compromised, or if the source account revokes permissions, which could cause you to lose access to any encrypted volumes you created using cmkSource.

Summary

In this blog post, we discussed how you can easily share encrypted AMIs across accounts to launch encrypted instances. AWS has extended the same capabilities to snapshots, allowing you to create encrypted EBS volumes from shared encrypted snapshots.

This feature is available through the AWS Management Console, AWS CLI, or AWS SDKs at no extra charge in all commercial AWS regions except China. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nishit Nagar

Nishit is a Senior Product Manager at AWS.

How to quickly launch encrypted EBS-backed EC2 instances from unencrypted AMIs

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/how-to-quickly-launch-encrypted-ebs-backed-ec2-instances-from-unencrypted-amis/

An Amazon Machine Image (AMI) provides the information that you need to launch an instance (a virtual server) in your AWS environment. There are a number of AMIs on the AWS Marketplace (such as Amazon Linux, Red Hat or Ubuntu) that you can use to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance. When you launch an instance from these AMIs, the resulting volumes are unencrypted. However, for regulatory purposes or internal compliance reasons, you might need to launch instances with encrypted root volumes. Previously, this required you to follow a multi-step process in which you maintained a separate, encrypted copy of the AMI in your account in order to launch instances with encrypted volumes. Now, you no longer need to maintain multiple AMI copies for encryption. To launch an encrypted Amazon Elastic Block Store (Amazon EBS) backed instance from an unencrypted AMI, you can directly specify the encryption properties in your existing workflow (such as with the RunInstances API or the Launch Instance Wizard). This simplifies the process of launching instances with encrypted volumes and reduces your associated AMI storage costs.

In this post, we demonstrate how you can start from an unencrypted AMI and launch an encrypted EBS-backed Amazon EC2 instance. We’ll show you how to do this from both the AWS Management Console, and using the RunInstances API with the AWS Command Line Interface (AWS CLI).

Launching an instance from the AWS Management Console

  1. Sign into the AWS Management Console and open the EC2 console.
  2. Select Launch instance, then follow the prompts of the launch wizard:
    1. In step 1 of the wizard, select the Amazon Machine Image (AMI) that you want to use.
    2. In step 2 of the wizard, select your instance type.
    3. In step 3, provide additional configuration details. For details about configuring your instances, see Launching an Instance.
    4. In step 4, specify your EBS volumes. The encryption properties of the volumes will be inherited from the AMI that you’ve chosen. If you’re using an unencrypted AMI, it will show up as “Not Encrypted.” From the dropdown, you can then select a customer master key (CMK) for encrypting the volume. You may select the same CMK for each volume that you want to create, or you may use a different CMK for each volume.
       
      Figure 1: Specifying your EBS volumes

  3. Select Review and then Launch. Your instance will launch with an encrypted Amazon EBS volume that uses the CMK you selected. To learn more about the launch wizard, see Launching an Instance with Launch Wizard.

Launching an instance from the RunInstances API

From the RunInstances API/CLI, you can provide the kmsKeyID for encrypting the volumes that will be created from the AMI by specifying encryption in the BlockDeviceMapping (BDM) object. If you don’t specify the kmsKeyID in the BDM but set the encryption flag to “true,” then the AWS managed CMK for Amazon EBS will be used to encrypt the volume.

For example, to launch an encrypted instance from an Amazon Linux AMI with an additional empty 100 GB of data volume (/dev/sdb), the API call would be as follows:


    $> aws ec2 run-instances \
    --image-id ami-009d6802948d06e52 \
    --count 1 \
    --instance-type m4.large \
    --region us-east-1 \
    --subnet-id subnet-aec2fc86 \
    --key-name 2016KeyPair \
    --security-group-ids sg-f7dbc78e \
    --block-device-mappings file://mapping.json

Where the mapping.json contains the following:


[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
                "Encrypted": true,
                "KmsKeyId": "arn:aws:kms:<us-east-1:012345678910>:key/<Example_Key_ID_12345>"
        }
    },

    {
        "DeviceName": "/dev/sdb",
        "Ebs": {
            "DeleteOnTermination": true,
            "VolumeSize": 100,
            "VolumeType": "gp2",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:<us-east-1:012345678910>:key/<Example_Key_ID_12345>"
        }
    }
]

You may specify different keys for different volumes. Providing a kmsKeyID without the encryption flag will result in an API error.
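
If you use the AWS SDK rather than the CLI, here’s a hedged boto3 sketch of the same RunInstances call; the AMI ID, subnet, key pair, security group, and CMK ARN are the placeholder values from the example above:


import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-009d6802948d06e52",
    MinCount=1,
    MaxCount=1,
    InstanceType="m4.large",
    SubnetId="subnet-aec2fc86",
    KeyName="2016KeyPair",
    SecurityGroupIds=["sg-f7dbc78e"],
    BlockDeviceMappings=[
        {
            # Root volume, encrypted under the specified CMK.
            "DeviceName": "/dev/xvda",
            "Ebs": {
                "Encrypted": True,
                "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/Example_Key_ID_12345"
            }
        },
        {
            # Additional empty 100 GB data volume, encrypted with the same CMK.
            "DeviceName": "/dev/sdb",
            "Ebs": {
                "DeleteOnTermination": True,
                "VolumeSize": 100,
                "VolumeType": "gp2",
                "Encrypted": True,
                "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/Example_Key_ID_12345"
            }
        }
    ]
)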

Conclusion

In this blog post, we demonstrated how you can quickly launch encrypted, EBS-backed instances from an unencrypted AMI in a few steps. You can also use the same process to launch EBS-backed encrypted volumes from unencrypted snapshots. This simplifies the process of launching instances with encrypted volumes and reduces your AMI storage costs.

This feature is available through the AWS Management Console, AWS CLI, or AWS SDKs at no extra charge in all commercial AWS regions except China. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nishit Nagar

Nishit is a Senior Product Manager at AWS.

Improve availability and latency of applications by using AWS Secrets Manager’s Python client-side caching library

Post Syndicated from Paavan Mistry original https://aws.amazon.com/blogs/security/improve-availability-and-latency-of-applications-by-using-aws-secret-managers-python-client-side-caching-library/

Note from May 10, 2019: We’ve updated a code sample for accuracy.


Today, AWS Secrets Manager introduced a client-side caching library for Python that improves the availability and latency of accessing and distributing credentials to your applications. It can also help you reduce the cost associated with retrieving secrets. In this post, I’ll walk you through the following topics:

  • An overview of the Secrets Manager client-side caching library for Python
  • How to use the Python client-side caching library to retrieve a secret

Here are the key benefits of client-side caching libraries:

  • Improved availability: You can cache secrets to reduce the impact of network availability issues such as increased response times and temporary loss of network connectivity.
  • Improved latency: Retrieving secrets from the local cache is faster than retrieving secrets by sending API requests to Secrets Manager within a Virtual Private Cloud (VPC) or over the Internet.
  • Reduced cost: Retrieving secrets from the cache can reduce the number of API requests made to and billed by Secrets Manager.
  • Automatic refresh of secrets: The library updates the cache by calling Secrets Manager periodically, so your applications use the most current secret value and automatically pick up regularly rotated secrets.
  • Implementation in just two steps: Add the Python library dependency to your application, and then provide the identifier of the secret that you want the library to use (see the minimal install sketch after this list).
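
For the first step, the dependency can typically be added with pip (this assumes the library is published on PyPI as aws-secretsmanager-caching; check the library’s documentation for the authoritative install instructions):


pip install aws-secretsmanager-caching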

Using the Secrets Manager client-side caching library for Python

First, I’ll walk you through an example in which I retrieve a secret without using the Python cache. Then I’ll show you how to update your code to use the Python client-side caching library.

Retrieving a secret without using a cache

Using the AWS SDK for Python (Boto3), you can retrieve a secret from Secrets Manager using the API call flow, as shown below.

Figure 1: Diagram showing GetSecretValue API call without the Python cache

To understand the benefits of using a cache, I’m going to create a sample secret using the AWS Command Line Interface (AWS CLI):


aws secretsmanager create-secret --name python-cache-test --secret-string "cache-test"

The code below demonstrates a GetSecretValue API call to AWS Secrets Manager without using the cache feature. Each time the application needs the secret, it sends a GetSecretValue API request to Secrets Manager, which increases secret retrieval latency and incurs a small cost for each request made to the AWS Secrets Manager endpoint.


    import boto3
    import base64
    from botocore.exceptions import ClientError
    
    def get_secret():
    
        secret_name = "python-cache-test"
        region_name = "us-west-2"
    
        # Create a Secrets Manager client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name
        )
    
        # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
        # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
        # We rethrow the exception by default.
    
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        except ClientError as e:
            if e.response['Error']['Code'] == 'DecryptionFailureException':
                # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
            # Rethrow any other client error by default, as noted above.
            raise e
        else:
            # Decrypts secret using the associated KMS CMK.
            # Depending on whether the secret is a string or binary, one of these fields will be populated.
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
                print(secret)
            else:
                decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                
        # Your code goes here.
    
    get_secret()       

Using the Python client-side caching library to retrieve a secret

With the Python cache feature, you can use the cache library to reduce calls to the AWS Secrets Manager API, improving the availability and latency of your application. As shown in the diagram below, when you implement the Python cache, the call to retrieve the secret is routed to the local cache before reaching the AWS Secrets Manager API. If the secret exists in the cache, the application retrieves it from the client-side cache. If it does not, the request is routed to the AWS Secrets Manager endpoint to retrieve the secret.

Figure 2: Diagram showing GetSecretValue API call using Python client-side cache

In the example below, I’ll implement a Python cache so the secret is retrieved from the local cache when possible, avoiding repeated calls to the AWS Secrets Manager API:


    import boto3
    from aws_secretsmanager_caching import SecretCache, SecretCacheConfig

    from botocore.exceptions import ClientError

    def get_secret():

        secret_name = "python-cache-test"
        region_name = "us-west-2"

        # Create a Secrets Manager client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name
        )

        try:
            # Create a cache
            cache = SecretCache(SecretCacheConfig(), client)

            # Get the secret string from the cache
            get_secret_value_response = cache.get_secret_string(secret_name)

        except ClientError as e:
            if e.response['Error']['Code'] == 'DecryptionFailureException':
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
        else:
            secret = get_secret_value_response
            print(secret)
        # Your code goes here.

    get_secret()    

The cache allows advanced configuration using the SecretCacheConfig library. This library lets you define cache configuration parameters to help meet your application’s security, performance, and cost requirements. The SDK enforces the configured thresholds on maximum cache size, default secret version stage to request, and secret refresh interval between requests, and it also allows configuration of various exception thresholds. Further detail is provided in the library documentation.
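
As a minimal sketch of such a configuration (the parameter names max_cache_size, default_version_stage, and secret_refresh_interval are assumptions based on the options described above; confirm the exact names and defaults in the library documentation), you might tune the cache like this:


    import boto3
    from aws_secretsmanager_caching import SecretCache, SecretCacheConfig

    client = boto3.client('secretsmanager', region_name='us-west-2')

    # Assumed parameter names: cache at most 512 secrets, request the AWSCURRENT
    # version stage, and check for a new secret version every 15 minutes.
    config = SecretCacheConfig(
        max_cache_size=512,
        default_version_stage='AWSCURRENT',
        secret_refresh_interval=900
    )
    cache = SecretCache(config, client)

    print(cache.get_secret_string('python-cache-test'))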

Based on the secret refresh interval defined in your cache configuration, the cache will check the version of the secret at the defined interval, using the DescribeSecret API to determine if a new version is available. If there is a newer version of the secret, the cache will update to the latest version from AWS Secrets Manager, using the GetSecretValue API. This ensures that an updated version of the secret is available in the cache.

Additionally, the Python client-side cache library allows developers to retrieve secrets directly from the cache by secret name, using decorator functions. An example of using a decorator function is shown below:


    import boto3
    from aws_secretsmanager_caching import SecretCache, SecretCacheConfig
    from aws_secretsmanager_caching.decorators import InjectKeywordedSecretString

    # Create the cache that the decorator will read from. The decorator assumes the
    # secret value is a JSON string containing the keys 'secret_key1' and 'secret_key2'.
    client = boto3.client('secretsmanager', region_name='us-west-2')
    cache = SecretCache(SecretCacheConfig(), client)

    class TestClass:
        def __init__(self):
            pass

        @InjectKeywordedSecretString('python-cache-test', cache, arg1='secret_key1', arg2='secret_key2')
        def my_test_function(self, arg1, arg2):
            print("arg1: {}".format(arg1))
            print("arg2: {}".format(arg2))

    test = TestClass()
    test.my_test_function()    

To delete the secret created in this post, run the command below:


aws secretsmanager delete-secret --secret-id python-cache-test --force-delete-without-recovery

Summary

In this post, we’ve showed how you can improve availability, reduce latency, and reduce API call cost for your secrets by using the Secrets Manager client-side caching library for Python. To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secret Manager or refer to the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Paavan Mistry

Paavan is a Security Specialist Solutions Architect at AWS where he enjoys solving customers’ cloud security, risk, and compliance challenges. Outside of work, he enjoys reading about leadership, politics, law, and human rights.