Tag Archives: Security, Identity & Compliance

New SOC 2 Report Available: Privacy

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-soc-2-report-available-privacy/

Maintaining your trust is an ongoing commitment of ours, and your voice drives our growing portfolio of compliance reports, attestations, and certifications. As a result of your feedback and deep interest in privacy and data security, we are happy to announce the publication of our new SOC 2 Type I Privacy report.

Keeping you informed of the privacy and data security policies, practices, and technologies we’ve put in place is important to us. The SOC 2 Type I Privacy report is complementary to that effort. The SOC 2 Privacy Trust Principle, developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls related to how personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives. The AWS SOC 2 Type I Privacy report provides you with a third-party attestation of our systems and the suitability of the design of our privacy controls, as stated in our Privacy Notice.

The scope of the privacy report includes systems AWS uses to collect personal information and all 72 services and locations in scope for the latest AWS SOC reports. You can download the new SOC 2 Type I Privacy report now through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profile (and re:Invent 2018 wrap-up): Eric Docktor, VP of AWS Cryptography

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profile-and-reinvent-2018-wrap-up-eric-docktor-vp-of-aws-cryptography/

We sat down with Eric Docktor to learn more about his 19-year career at Amazon, what’s new with cryptography, and to get his take on this year’s re:Invent conference. (Need a re:Invent recap? Check out this post by AWS CISO Steve Schmidt.)


How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over nineteen years, but I joined AWS in April 2015. I’m the VP of AWS Cryptography, and I lead a set of teams that develops services related to encryption and cryptography. We own three services and a tool kit: AWS Key Management Service (AWS KMS), AWS CloudHSM, AWS Certificate Manager, plus the AWS Encryption SDK that we produce for our customers.

Our mission is to help people get encryption right. Encryption algorithms themselves are open source, and generally pretty well understood. But just implementing encryption isn’t enough to meet security standards. For instance, it’s great to encrypt data before you write it to disk, but where are you going to store the encryption key? In the real world, developers join and leave teams all the time, and new applications will need access to your data—so how do you make a key available to those who really need it, without worrying about someone walking away with it?

We build tools that help our customers navigate this process, whether we’re helping them secure the encryption keys that they use in the algorithms or the certificates that they use in asymmetric cryptography.

What did AWS Cryptography launch at re:Invent?

We’re really excited about the launch of KMS custom key store. We’ve received very positive feedback about how KMS makes it easy for people to control access to encryption keys. KMS lets you set up IAM policies that give developers or applications the ability to use a key to encrypt or decrypt, and you can also write policies which specify that a particular application—like an Amazon EMR job running in a given account—is allowed to use the encryption key to decrypt data. This makes it really easy to encrypt data without worrying about writing massive decrypt jobs if you want to perform analytics later.
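As a minimal illustration of that model, here is a hedged AWS CLI sketch of encrypting and decrypting with a KMS key; the key alias and file names are placeholders, and access is governed by the IAM and key policies described above:

# Encrypt a small file (KMS encrypts up to 4 KB directly; larger data
# typically uses envelope encryption via data keys)
aws kms encrypt \
    --key-id alias/my-app-key \
    --plaintext fileb://secret.txt \
    --query CiphertextBlob --output text | base64 --decode > secret.enc

# Decrypt it again; this succeeds only if policy grants you kms:Decrypt
aws kms decrypt \
    --ciphertext-blob fileb://secret.enc \
    --query Plaintext --output text | base64 --decode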

But, some customers have told us that for regulatory or compliance reasons, they need encryption keys stored in single-tenant hardware security modules (HSMs) that they manage. This is where the new KMS custom key store feature comes in. Custom key store combines the ease of using KMS with the ability to run your own CloudHSM cluster to store your keys. You can create a CloudHSM cluster and link it to KMS. After setting that up, any time you want to generate a new master key, you can choose to have it generated and stored in your CloudHSM cluster instead of using a KMS multi-tenant HSM. The keys are stored in an HSM under your control, and they never leave that HSM. You can reference the key by its Amazon Resource Name (ARN), which allows it to be shared with users and applications, but KMS will handle the integration with your CloudHSM cluster so that all crypto operations stay in your single-tenant HSM.
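To make that flow concrete, here is a hedged CLI sketch; the cluster ID, key store IDs, certificate file, and password are placeholders, and the CloudHSM cluster must already be initialized and active:

# Register an existing CloudHSM cluster as a KMS custom key store
aws kms create-custom-key-store \
    --custom-key-store-name ExampleKeyStore \
    --cloud-hsm-cluster-id cluster-1a23b4cdefg \
    --trust-anchor-certificate file://customerCA.crt \
    --key-store-password kmsPswd

# Connect the key store, then create a master key whose key material
# is generated and stored only in your CloudHSM cluster
aws kms connect-custom-key-store --custom-key-store-id cks-1234567890abcdef0
aws kms create-key \
    --origin AWS_CLOUDHSM \
    --custom-key-store-id cks-1234567890abcdef0 \
    --description "Master key backed by my CloudHSM cluster"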

You can read our blog post about custom key store for more details.

If both AWS KMS and AWS CloudHSM allow customers to store encryption keys, what’s the difference between the services?

Well, at a high level, sure, both services offer customers strong security when it comes to storing encryption keys in FIPS 140-2 validated hardware security modules. But there are some important differences, so we offer both services to allow customers to select the right tool for their workloads.

AWS KMS is a multi-tenant, managed service that allows you to use and manage encryption keys. It is integrated with over 50 AWS services, so you can use familiar APIs and IAM policies to manage your encryption keys, and you can allow them to be used in applications and by members of your organization. AWS CloudHSM provides a dedicated, FIPS 140-2 Level 3 HSM under your exclusive control, directly in your Amazon Virtual Private Cloud (VPC). You control the HSM, but it’s up to you to build the availability and durability you get out of the box with KMS. You also have to manage permissions for users and applications.

Other than helping customers store encryption keys, what else does the AWS Cryptography team do?

You can use CloudHSM for all sorts of cryptographic operations, not just key management. But we definitely do more than KMS and CloudHSM!

AWS Certificate Manager (ACM) is another offering from the cryptography team that’s popular with customers, who use it to generate and renew TLS certificates. Once you’ve got your certificate and you’ve told us where you want it deployed, we take care of renewing it and binding the new certificate for you. Earlier this year, we extended ACM to support private certificates as well, with the launch of ACM Private Certificate Authority.
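For example, requesting a public certificate is a single call; this is a sketch with a placeholder domain, after which ACM walks you through DNS validation:

# Request a public TLS certificate with DNS validation
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS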

We also helped the AWS IoT team launch support for cryptographically signing software updates sent to IoT devices. For IoT devices, and for software installation in general, it’s a best practice to only accept software updates from known publishers, and to validate that the new software has been correctly signed by the publisher before installing. We think all IoT devices should require software updates to be signed, so we’ve made this really easy for AWS IoT customers to implement.

What’s the most challenging part of your job?

We’ve built a suite of tools to help customers manage encryption, and we’re thrilled to see so many customers using services like AWS KMS to secure their data. But when I sit down with customers, especially large customers looking seriously at moving from on-premises systems to AWS, I often learn that they have years and years of investment into their on-prem security systems. Migrating to the cloud isn’t easy. It forces them to think differently about their security models. Helping customers think this through and map a strategy can be challenging, but it leads to innovation—for our customers, and for us. For instance, the idea for KMS custom key store actually came out of a conversation with a customer!

What’s your favorite part of your job?

Ironically, I think it’s the same thing! Working with customers on how they can securely migrate and manage their data in AWS can be challenging, but it’s really rewarding once the customer starts building momentum. One of my favorite moments of my AWS career was when Goldman Sachs went on stage at re:Invent last year and talked about how they use KMS to secure their data.

Five years from now, what changes do you think we’ll see within the field of encryption?

The cryptography community is in the early stages of developing a new cryptographic algorithm that will underpin encryption for data moving across the internet. The current standard is RSA, and it’s widely used. That little padlock you see in your web browser telling you that your connection is secure uses the RSA algorithm to set up an encrypted connection between the website and your browser. But, like all good things, RSA’s time may be coming to an end—the quantum computer could be its undoing. It’s not yet certain that quantum computers will ever achieve the scale and performance necessary for practical applications, but if one did, it could be used to attack the RSA algorithm. So cryptographers are preparing for this. Last year, the National Institute of Standards and Technology (NIST) put out a call for algorithms that might be able to replace RSA, and got 68 responses. NIST is working through those ideas now and will likely select a smaller number of algorithms for further study. AWS participated in two of those submissions and we’re keeping a close eye on NIST’s process. New cryptographic algorithms take years of testing and vetting before they make it into any standards, but we want to be ready, and we want to be on the forefront. Internally, we’re already considering what it would look like to make this change. We believe it’s our job to look around corners and prepare for changes like this, so our customers don’t have to.

What’s the most common misconception you encounter about encryption?

Encryption technology itself is decades-old and fairly well understood. That’s both the beauty and the curse of encryption standards: By the time anything becomes a standard, there are years and years of research and proof points into the stability and the security of the algorithm. But just because you have a really good encryption algorithm that takes an encryption key and a piece of data you want to secure and spits out an impenetrable cipher text, it doesn’t mean that you’re done. What did you do with the encryption key? Did you check it into source code? Did you write it on a piece of paper and leave it in the conference room? It’s these practices around the encryption that can be difficult to navigate.

Security-conscious customers know they need to encrypt sensitive data before writing it to disk. But, if you want your application to run smoothly, sometimes you need that data in clear text. Maybe you need the data in a cache. But who has access to the cache? And what logging might have accidentally leaked that information while the application was running and interacting with the cache?

Or take TLS certificates. Each TLS certificate has a public piece—the certificate—and a private piece—a private key. If an adversary got ahold of the private key, they could use it to impersonate your website or your API. So, how do you secure that key after you’ve procured the certificate?

It’s practices like these that some customers still struggle with. You have to think about all the places your sensitive data moves, and about real-world realities, like the fact that the data has to be unencrypted somewhere. That’s where AWS can help with the tooling.

Which re:Invent session videos would you recommend for someone interested in learning more about encryption?

Ken Beer’s encryption talk is a very popular session that I recommend to people year after year. If you want to learn more about KMS custom key store, you should also check out the video from the LaunchPad event, where we talked with Box about how they’re using custom key store.

People do a lot of networking during re:Invent. Any tips for maintaining those connections after everyone’s gone home?

Some of the people that I meet at re:Invent I get to see again every year. With these customers, I tend to stay in touch through email, and through Executive Briefing Center sessions. That contact is important since it lets us bounce ideas off each other and we use that feedback to refine AWS offerings. One conference I went to also created a Slack channel for attendees—and all the attendees are still on it. It’s quiet most of the time, but people have a way to re-engage with each other and ask a question, and it’ll be just like we’re all together again.

If you had to pick any other job, what would you want to do with your life?

If I could do anything, I’d be a backcountry ski guide. Now, I’m not a good enough skier to actually have this job! But I like being outside, in the mountains. If there was a way to make a living out of that, I would!

Author

Eric Docktor

Eric joined Amazon in 1999 and has worked in a variety of Amazon’s businesses, including being part of the teams that launched Amazon Marketplace, Amazon Prime, the first Kindle, and Fire Phone. Eric has also worked in Supply Chain planning systems and in Ordering. Since 2015, Eric has led the AWS Cryptography team that builds tools to make it easy for AWS customers to encrypt their data in AWS. Prior to Amazon, Eric was a journalist and worked for newspapers including the Oakland Tribune and the Doylestown (PA) Intelligencer.

New podcast: VP of Security answers your compliance and data privacy questions

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/new-podcast-vp-of-security-answers-your-compliance-and-data-privacy-questions/

Does AWS comply with X program? How about GDPR? What about after Brexit? And what happens with machine learning data?

In the latest AWS Security & Compliance Podcast, we sit down with VP of Security Chad Woolf, who answers your compliance and data privacy questions, including one of the most frequently asked questions from customers around the world: How many compliance programs does AWS have, attest to, and audit against?

Chad also shares what it was like to work at AWS in the early days. When he joined, AWS was housed on just a handful of floors, in a single building. Over the course of nearly nine years with the company, he has witnessed tremendous growth of the business and industry.

Listen to the podcast to hear about company history and get answers to your tough questions. If you have a compliance or data privacy question, you can submit it through our Contact Us form.

Want more AWS news? Follow us on Twitter.

Creating an opportunistic IPSec mesh between EC2 instances

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/creating-an-opportunistic-ipsec-mesh-between-ec2-instances/

IPSec (IP Security) is a protocol for in-transit data protection between hosts. Configuring site-to-site IPSec between multiple hosts can be an error-prone and labor-intensive task. If you need to protect N EC2 instances, you need a full mesh of N*(N-1) IPSec tunnels (for example, 10 instances already require 90 tunnels). You must manually propagate every IP change to all instances, distribute credentials and configuration changes, and integrate monitoring and metrics into the operation. The effort required to keep the full-mesh parameters in sync is enormous.

Full mesh IPSec, known as any-to-any, builds an underlying network layer that protects application communication. Common use cases are:

  • You’re migrating legacy applications to AWS, and they don’t support encryption. Examples of protocols without encryption are File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) or Lightweight Directory Access Protocol (LDAP).
  • You’re offloading protection to IPSec to take advantage of fast Linux kernel encryption and automated certificate management, the use case we focus on in this solution.
  • You want to segregate duties between your application development and infrastructure security teams.
  • You want to protect container or application communication that leaves an EC2 instance.

In this post, I’ll show you how to build an opportunistic IPSec mesh that sets up dynamic IPSec tunnels between your Amazon Elastic Compute Cloud (EC2) instances. The solution is based on Libreswan, an open-source project implementing opportunistic IPSec encryption (IKEv2 and IPSec) on a large scale.

Solution benefits and deliverable

The solution delivers the following benefits (versus manual site-to-site IPSec setup):

  • Automatic configuration of opportunistic IPSec upon EC2 launch.
  • Generation of instance certificates and weekly re-enrollment.
  • IPSec monitoring metrics in Amazon CloudWatch for each EC2 instance.
  • Alarms for failures via CloudWatch and Amazon Simple Notification Service (Amazon SNS).
  • An initial generation of a CA root key if needed, including IAM Policies and two customer master keys (CMKs) that will protect the CA key and instance key.

Out of scope

This solution does not deliver IPSec protection between EC2 instances and on-premises hosts, or between EC2 instances and managed AWS components, like Elastic Load Balancing, Amazon Relational Database Service, or Amazon Kinesis. Your EC2 instances must have general IP connectivity to each other, and your NACLs and security groups must allow that traffic. This solution does not deliver extra connectivity the way VPC peering or Transit VPC can.

Prerequisites

You’ll need the following resources to deploy the solution:

  • A trusted Unix/Linux/macOS machine with the AWS SDK for Python and OpenSSL
  • AWS admin rights in your AWS account (including API access)
  • AWS Systems Manager on EC2
  • RedHat Linux, Amazon Linux 2, or CentOS installed on the EC2 instances you want to configure
  • Internet access on the EC2 instances for downloading Linux packages and reaching the AWS Systems Manager endpoint
  • The AWS services used by the solution, which are AWS Lambda, AWS Key Management Service (AWS KMS), AWS Identity and Access Management (IAM), AWS Systems Manager, Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), and Amazon SNS

Solution and performance costs

The solution does not incur charges beyond standard AWS service usage, since it uses well-established open-source software. The AWS services involved are as follows:

  • AWS Lambda is used to issue the certificates. Per EC2 instance per week, I estimate two 30-second Lambda invocations with 256 MB of allocated memory. For 100 EC2 instances, the cost will be several cents per week. See AWS Lambda Pricing for details.
  • Certificates have no charge, since they’re issued by the Lambda function.
  • CloudWatch Events and Amazon S3 Storage usage are within the free tier policy.
  • AWS Systems Manager has no additional charge.
  • Amazon EC2 is a standard AWS service on which you deploy your workload. There are no charges for IPSec encryption.
  • The EC2 CPU performance decrease due to encryption is negligible, since we use the hardware encryption support of the Linux kernel. The IKE negotiation done by the OS may add minimal CPU overhead, depending on the number of EC2 instances involved.

Installation (one-time setup)

To get started, on a trusted Unix/Linux/MacOS machine that has admin access to your AWS account and AWS SDK for Python already installed, complete the following steps:

  1. Download the installation package from https://github.com/aws-quickstart/quickstart-ec2-ipsec-mesh.
  2. Edit the following files in the package to match your network setup:
    • config/private should contain all networks with mandatory IPSec protection, such as EC2s that should only be communicated with via IPSec. All of these hosts must have IPSec installed.
    • config/clear should contain any networks that do not need IPSec protection. For example, these might include Route 53 (DNS), Elastic Load Balancing, or Amazon Relational Database Service (Amazon RDS).
    • config/clear-or-private should contain networks with optional IPSec protection. These networks will start clear and attempt to add IPSec.
    • config/private-or-clear should also contain networks with optional IPSec protection. However, these networks will start with IPSec and fail back to clear.
  3. Execute ./aws_setup.py and carefully set and verify the parameters. Use -h to view help. If you don’t provide customized options, default values will be generated. The parameters are:
    • Region to install the solution (default: your AWS Command Line Interface region)
    • Buckets for configuration, sources, published host certificates and CA storage. (Default: random values that follow the pattern ipsec-{hostcerts|cacrypto|sources}-{stackname} will be generated.) If the buckets do not exist, they will be automatically created.
    • Reuse of an existing CA? (default: no)
    • Leave encrypted backup copy of the CA key? The password will be printed to stdout (default: no)
    • CloudFormation stack name (default: ipsec-{random string}).
    • Restrict provisioning to certain VPC (default: any)

     
    Here is an example output:

    
                ./aws_setup.py  -r ca-central-1 -p ipsec-host-v -c ipsec-crypto-v -s ipsec-source-v
                Provisioning IPsec-Mesh version 0.1
                
                Use --help for more options
                
                Arguments:
                ----------------------------
                Region:                       ca-central-1
                Vpc ID:                       any
                Hostcerts bucket:             ipsec-host-v
                CA crypto bucket:             ipsec-crypto-v
                Conf and sources bucket:      ipsec-source-v
                CA use existing:              no
                Leave CA key in local folder: no
                AWS stackname:                ipsec-efxqqfwy
                ---------------------------- 
                Do you want to proceed ? [yes|no]: yes
                The bucket ipsec-source-v already exists
                File config/clear uploaded in bucket ipsec-source-v
                File config/private uploaded in bucket ipsec-source-v
                File config/clear-or-private uploaded in bucket ipsec-source-v
                File config/private-or-clear uploaded in bucket ipsec-source-v
                File config/oe-cert.conf uploaded in bucket ipsec-source-v
                File sources/enroll_cert_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/generate_certifcate_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/ipsec_setup_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/cron.txt uploaded in bucket ipsec-source-v
                File sources/cronIPSecStats.sh uploaded in bucket ipsec-source-v
                File sources/ipsecSetup.yaml uploaded in bucket ipsec-source-v
                File sources/setup_ipsec.sh uploaded in bucket ipsec-source-v
                File README.md uploaded in bucket ipsec-source-v
                File aws_setup.py uploaded in bucket ipsec-source-v
                The bucket ipsec-host-v already exists
                Stack ipsec-efxqqfwy creation started. Waiting to finish (ca 3-5 min)
                Created CA CMK key arn:aws:kms:ca-central-1:123456789012:key/abcdefgh-1234-1234-1234-abcdefgh123456
                Certificate generation lambda arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy
                Generating RSA private key, 4096 bit long modulus
                .............................++
                .................................................................................................................................................................................................................................................................................................................................................................................++
                e is 65537 (0x10001)
                Certificate and key generated. Subject CN=ipsec.ca-central-1 Valid 10 years
                The bucket ipsec-crypto-v already exists
                Encrypted CA key uploaded in bucket ipsec-crypto-v
                CA cert uploaded in bucket ipsec-crypto-v
                CA cert and key remove from local folder
                Lambda function arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy updated
                Resource policy for CA CMK hardened - removed action kms:encrypt
                
                done :-)
            

Launching the EC2 Instance

Now that you’ve installed the solution, you can start launching EC2 instances. From the EC2 Launch Wizard, execute the following steps. The instructions assume that you’re using RedHat, Amazon Linux 2, or CentOS.

Note: Steps or details that I don’t explicitly mention can be set to default (or according to your needs).

  1. Select the IAM Role already configured by the solution with the pattern Ec2IPsec-{stackname}.
     
    Figure 1: Select the IAM Role

  2. (You can skip this step if you are using Amazon Linux 2.) Under Advanced Details, select User data as text and activate the AWS Systems Manager Agent (SSM Agent) by providing the following string (for 64-bit RedHat and CentOS only):
    
        #!/bin/bash
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl start amazon-ssm-agent
        

     

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

  3. Set the tag name to IPSec with the value todo. This is the identifier that triggers the installation and management of IPsec on the instance.
     
    Figure 3: Set the tag name to “IPSec” with the value “todo”

  4. On the Configuration page for the security group, allow ESP (protocol 50) and IKE (UDP 500) for your network, such as 172.31.0.0/16. You need to enter these values as shown in the following screen (a CLI alternative is sketched after figure 5):
     
    Figure 4: Enter values on the “Configuration” page

After 1-2 minutes, the value of the IPSec instance tag will change to enabled, meaning the instance is successfully set up.
 

Figure 5: Look for the “enabled” value for the IPSec key
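
If you launch instances programmatically rather than through the console, the tag and security group setup can also be done with the AWS CLI. This is a minimal sketch; the instance ID, security group ID, and CIDR are placeholders:

        # Tag the instance so the solution installs and manages IPSec on it
        aws ec2 create-tags \
            --resources i-0123456789abcdef0 \
            --tags Key=IPSec,Value=todo

        # Allow IKE (UDP 500) and ESP (IP protocol 50) from your network
        aws ec2 authorize-security-group-ingress \
            --group-id sg-0123456789abcdef0 \
            --protocol udp --port 500 --cidr 172.31.0.0/16
        aws ec2 authorize-security-group-ingress \
            --group-id sg-0123456789abcdef0 \
            --protocol 50 --cidr 172.31.0.0/16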

So what’s happening in the background?

 

Figure 6: Architectural diagram

As illustrated in the solution architecture diagram, the following steps are executed automatically in the background by the solution:

  1. An EC2 launch triggers a CloudWatch event, which launches an IPSecSetup Lambda function.
  2. The IPSecSetup Lambda function checks whether the EC2 instance has the tag IPSec:todo. If the tag is present, the Lambda function issues a certificate by calling the GenerateCertificate Lambda function.
  3. The GenerateCertificate Lambda function downloads the encrypted CA certificate and key.
  4. The GenerateCertificate Lambda function decrypts the CA key with a customer master key (CMK).
  5. The GenerateCertificate Lambda function issues a host certificate to the EC2 instance. It encrypts the host certificate and key with a KMS generated random secret in PKCS12 structure. The secret is envelope-encrypted with a dedicated CMK.
  6. The GenerateCertificate Lambda function publishes the issued certificates to your dedicated bucket for documentation.
  7. The IPSecSetup Lambda function then runs the installation on the instance via SSM.
  8. The installation downloads the configuration and installs python, aws-sdk, libreswan, and curl if needed.
  9. The EC2 instance decrypts the host key with the dedicated CMK and installs it in the IPSec database.
  10. A weekly scheduled event triggers reenrollment of the certificates via the Reenrollcertificates Lambda function.
  11. The Reenrollcertificates Lambda function triggers the IPSecSetup Lambda (call event type: execution). The IPSecSetup Lambda will renew the certificate only, leaving the rest of the configuration untouched.

Testing the connection on the EC2 instance

You can log in to the instance and ping one of the hosts in your network. This will trigger the IPSec connection, and you should see successful replies.


        $ ping 172.31.1.26
        
        PING 172.31.1.26 (172.31.1.26) 56(84) bytes of data.
        64 bytes from 172.31.1.26: icmp_seq=2 ttl=255 time=0.722 ms
        64 bytes from 172.31.1.26: icmp_seq=3 ttl=255 time=0.483 ms
        

To see a list of IPSec tunnels you can execute the following:


        sudo ipsec whack --trafficstatus
        

Here is an example of the execution:
 

Figure 7: Example execution

Changing your configuration or installing it on already running instances

All configuration exists in the source bucket (default: ipsec-source prefix), in files following the libreswan standard. If you need to change the configuration, follow these instructions:

  1. Review and update the following files:
    1. oe-cert.conf, which is the configuration for libreswan
    2. clear, private, private-or-clear, and clear-or-private, which should contain your network ranges.
  2. Change the tag for the IPSec instance to IPSec:todo.
  3. Stop and Start the instance (don’t restart). This will retrigger the setup of the instance.
     
    Figure 8: Stop and start the instance

    1. As an alternative to step 3, if you prefer not to stop and start the instance, you can invoke the IPSecSetup Lambda function via Test Event with a test JSON event in the following format:
      
                      { "detail" :  
                          { "instance-id": "YOUR_INSTANCE_ID" }
                      }
              

      A sample of test event creation in the Lambda Design window is shown below; a CLI alternative is sketched after figure 9:
       

      Figure 9: Sample test event creation
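
      If you prefer the command line to the Lambda console, you can invoke the function directly. This is a minimal sketch; it assumes the setup function follows the IPSecSetup-{stackname} naming seen in the stack resources, and the instance ID is a placeholder:

        # Invoke the IPSecSetup Lambda with a synthetic event
        # (with AWS CLI v2, also pass: --cli-binary-format raw-in-base64-out)
        aws lambda invoke \
            --function-name IPSecSetup-ipsec-efxqqfwy \
            --payload '{"detail": {"instance-id": "i-0123456789abcdef0"}}' \
            response.json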

Monitoring and alarms

The solution delivers and takes care of IPSec/IKE Metrics and SNS Alarms in the case of errors. To monitor your IPSec environment, you can use Amazon CloudWatch. You can see metrics for active IPSec sessions, IKE/ESP errors, and connection shunts.
 

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

There are two SNS topics and alarms configured for IPSec setup failure or certificate reenrollment failure. You will see an alarm and an SNS message. It’s very important that your administrator subscribes to notifications so that you can react quickly. If you receive an alarm, please use the information in the “Troubleshooting” section of this post, below.
 

Figure 11: Alarms

Troubleshooting

Below, I’ve listed some common errors and how to troubleshoot them:
 

The IPSec Tag doesn’t change to IPSec:enabled upon EC2 launch.

  1. Wait 2 minutes after the EC2 instance launches, so that it becomes reachable for AWS SSM.
  2. Check that the EC2 Instance has the right role assigned for the SSM Agent. The role is provisioned by the solution named Ec2IPsec-{stackname}.
  3. Check that the SSM Agent is reachable via a NAT gateway, an Internet gateway, or a private SSM endpoint.
  4. For CentOS and RedHat, check that you’ve installed the SSM Agent. See “Launching the EC2 instance.”
  5. Check the output of the SSM Agent command execution in the EC2 service.
  6. Check the IPSecSetup Lambda logs in CloudWatch for details.

The IPSec connection is lost after a few hours and can only be established from one host (in one direction).

  1. Check that your Security Groups allow ESP Protocol and UDP 500. Security Groups are stateful. They may only allow a single direction for IPSec establishment.
  2. Check that your network ACL allows UDP 500 and ESP Protocol.

The SNS Alarm on IPSec reenrollment is triggered, but everything seems to work fine.

  1. Certificates are valid for 30 days and rotated every week. If the rotation fails, you have three weeks to fix the problem.
  2. Check that the EC2 instances are reachable over AWS SSM. If reachable, trigger the certificate rotation Lambda again.
  3. See the IPSecSetup Lambda logs in CloudWatch for details.

DNS Route 53, RDS, and other managed services are not reachable.

  1. DNS (Route 53), RDS, and other managed services do not support IPSec. You need to exclude them from encryption by listing them in the config/clear list. For more details, see step 2 of Installation (one-time setup) in this post.

Here are some additional general IPSec commands for troubleshooting:

You can stop IPSec by executing the following Unix command:


        sudo ipsec stop 
        

If you want to stop IPSec on all instances, you can execute this command via AWS Systems Manager on all instances with the tag IPSec:enabled. Stopping encryption means all traffic will be sent unencrypted.
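
As a sketch of that fleet-wide stop, using the standard AWS-RunShellScript document and targeting instances by tag:

        # Stop IPSec on every instance tagged IPSec:enabled
        aws ssm send-command \
            --document-name "AWS-RunShellScript" \
            --targets "Key=tag:IPSec,Values=enabled" \
            --parameters 'commands=["sudo ipsec stop"]'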

If you want fail-open behavior, meaning that on IKE (IPSec) failure the data is sent unencrypted, configure your network in config/private-or-clear, as described in step 2 of Installation (one-time setup).

Debugging IPSec issues can be done using Libreswan commands. For example:


        sudo ipsec status

        sudo ipsec whack --debug

        sudo ipsec barf

Security

The CA key is encrypted using Advanced Encryption Standard (AES) 256 CBC with a 128-byte secret and stored in a bucket with server-side encryption (SSE). The secret is envelope-encrypted with a CMK, following the AWS KMS pattern. Only the certificate-issuing Lambda function can decrypt the secret, as enforced by the KMS resource policy. The encrypted secret for the CA key is set in an encrypted environment variable of the certificate-issuing Lambda function.

The IPSec host private key is generated by the certificate-issuing Lambda function. The private key and certificate are encrypted with AES 256 CBC (PKCS12) and protected with a 128-byte secret generated by KMS. The secret is envelope-encrypted with a user CMK. Only the EC2 instances with attached IPSec IAM policy can decrypt the secret and private key.

The issuing of the certificate is a fully synchronous call: one request and one corresponding response, without any polling or similar sync/callbacks. The host private key is not stored in a database or an S3 bucket.

The issued certificates are valid for 30 days and are stored for auditing purposes in a certificates bucket without a private key.

Alternate subject names and multiple interfaces or secondary IPs

The certificate subject name and subject alternative name (SAN) attribute contain the private DNS name of the EC2 instance and all private IPs assigned to the instance (interfaces, primary, and secondary IPs).

The provided default libreswan configuration covers a single interface. You can adjust the configuration according to libreswan documentation for multiple interfaces, for example, to cover Amazon Elastic Container Service for Kubernetes (Amazon EKS).

Conclusion

With the solution in this blog post, you can automate the process of building an IPSec encryption layer for your EC2 instances to protect your workloads. You don’t need to worry about configuring certificates, monitoring, and alerting. The solution uses a combination of AWS KMS, IAM, AWS Lambda, CloudWatch, and the Libreswan implementation. If you need Libreswan support, use the project’s mailing list or GitHub. The AWS forums can give you more information on KMS and IAM. If you require a special enterprise enhancement, contact AWS Professional Services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is a senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU Darmstadt and an M.S. in electrical engineering from Bochum University in Germany.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than ever before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call (a sketch follows this list); once enabled, Security Hub will begin aggregating and prioritizing findings.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust-level security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified of when registration opens.
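
As referenced in the Security Hub item above, here is a minimal sketch of that single API call, assuming the AWS CLI with credentials for the target account:

# Enable AWS Security Hub for the current account and region
aws securityhub enable-security-hub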

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and the cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Automate analyzing your permissions using IAM access advisor APIs

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/automate-analyzing-permissions-using-iam-access-advisor/

As an administrator that grants access to AWS, you might want to enable your developers to get started with AWS quickly by granting them broad access. However, as your developers gain experience and your applications stabilize, you want to limit permissions to only what they need. To do this, access advisor will determine the permissions your developers have used by analyzing the last timestamp when an IAM entity (for example, a user, role, or group) accessed an AWS service. This information helps you audit service access, remove unnecessary permissions, and set appropriate permissions across different environments. For example, you can grant broad access to services in development accounts and then reduce permissions for access to specific services in production accounts. Finally, as you manage more IAM entities and AWS accounts, you need a way to scale these processes through automation. To help you achieve this automation, you can now use IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.

In this post, I first provide the details of the access advisor APIs. Next, I walk through an example to demonstrate how you can use the AWS CLI to create a report of the last-accessed timestamps for the services used by the roles in your account. In this post, I assume that you’re familiar with access advisor and how to Remove Unnecessary Permissions in Your IAM Policies by Using Service Last Accessed Data from the IAM console. Before I share an example, I’ll describe the new IAM access advisor APIs:

  • generate-service-last-accessed-details: Generates the service last accessed data for an IAM resource (user, role, group, or policy). You need to call this API first to start a job that generates the service last accessed data for the IAM resource. This API returns a JobId that you will use for the other APIs, such as get-service-last-accessed-details, to determine the status of the job completion.
  • get-service-last-accessed-details: Use this to retrieve the service last accessed data for an IAM resource based on the JobID you pass in.
  • get-service-last-accessed-details-with-entities: Use this to retrieve the service last accessed data for a specific AWS service. The API provides you with a list of all the IAM entities who have access to the service and includes the last accessed date for each IAM entity.
  • list-policies-granting-service-access: Use this to retrieve all the IAM policies that grant permissions to the services accessed for an IAM entity. This helps you identify the policies you need to modify to remove any unused permissions.

Now that you understand the different IAM access advisor APIs, I’ll walk through an example to demonstrate how to use them to set permissions based on service last accessed information.

Example use case: Setting permissions for IAM roles

Assume Arnav Desai is a security administrator for Example Corp. He works with several development teams and monitors their access across multiple accounts. To get his development teams up and running quickly, he initially created multiple roles with broad permissions that are based on job function in the development accounts. Now, his developers are ready to deploy workloads to production accounts. The developers need access to configure AWS; however, Arnav wants to grant them access only to what they need. To determine these permissions, he uses the access advisor APIs to automate a process that helps him understand the services developers accessed in the last six months. Using this information, he authors policies to grant access to specific services in production. I’ll now show you an example of how to achieve this in one account using AWS CLI commands.

First, Arnav uses the list-roles command to list the IAM roles in his development account. For this example, there are two roles in his development account: DBAdminRole and NetworkAdminRole.

For each role, he uses the generate-service-last-accessed-details command to generate the service last accessed data for the role. Here’s an example of the command that he uses:


aws iam generate-service-last-accessed-details --arn arn:aws:iam::123456789012:role/DBAdminRole

The command above provides Arnav with a JobId for each role, signaling that the job has started generating the service last accessed details. Arnav waits for the job to complete successfully before retrieving the access advisor information. In the meantime, he can call the get-service-last-accessed-details command to view the JobStatus of the job. Once the jobs for both roles are COMPLETED, Arnav can view the service last accessed report for both roles, as shown below.
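
A sketch of that retrieval call (the job ID shown is a placeholder for the value returned by the generate call):

aws iam get-service-last-accessed-details --job-id <JobId-from-generate-call>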

DBAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-01T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon DynamoDB",
            "ServiceNamespace": "dynamodb",
            "TotalAuthenticatedEntities": 1
        },
        {
            "LastAuthenticated": "2018-08-25T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon S3",
            "ServiceNamespace": "s3",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the DBAdminRole doesn’t access other services.

NetworkAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-21T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ NetworkAdminRole",
            "ServiceName": "Amazon EC2",
            "ServiceNamespace": "ec2",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the NetworkAdminRole doesn’t access other services.

Based on the output above, you can see that the two roles in development accessed Amazon DynamoDB, Amazon S3, and Amazon EC2 in the last six months. Using this information, Arnav can author a policy granting access to only these specific services for the production accounts. He can also use the list-policies-granting-service-access command to identify which existing policies grant the permissions he wants to trim, as sketched below.
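
A minimal sketch of that call for DBAdminRole (the service namespaces are taken from the report above):

aws iam list-policies-granting-service-access \
    --arn arn:aws:iam::123456789012:role/DBAdminRole \
    --service-namespaces dynamodb s3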

Conclusion

In this post, I reviewed the IAM access advisor APIs and showed how you can use them to determine service last accessed information programmatically. You can use this information to audit access, remove unused permissions, or grant appropriate permissions across your accounts.

If you have comments about retrieving Access Advisor service last accessed information programmatically, submit them in the Comments section below. If you have issues using access advisor commands, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Learn about New AWS re:Invent Launches – December AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-new-aws-reinvent-launches-december-aws-online-tech-talks/

Join us in the next couple of weeks to learn about some of the new service and feature launches from re:Invent 2018. Learn about features and benefits, watch live demos, and ask questions! We’ll have AWS experts online to answer any questions you may have. Register today!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute

December 19, 2018 | 01:00 PM – 02:00 PM PT – Developing Deep Learning Models for Computer Vision with Amazon EC2 P3 Instances – Learn about the different steps required to build, train, and deploy a machine learning model for computer vision.

Containers

December 11, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS App Mesh – Learn about using AWS App Mesh to monitor and control microservices on AWS.

Data Lakes & Analytics

December 10, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS Lake Formation – Build a Secure Data Lake in Days – AWS Lake Formation (coming soon) will make it easy to set up a secure data lake in days. With AWS Lake Formation, you will be able to ingest, catalog, clean, transform, and secure your data, and make it available for analysis and machine learning.

December 12, 2018 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Managed Streaming for Kafka (MSK) – Learn about features and benefits, use cases and how to get started with Amazon MSK.

Databases

December 10, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon RDS on VMware – Learn how Amazon RDS on VMware can be used to automate on-premises database administration, enable hybrid cloud backups and read scaling for on-premises databases, and simplify database migration to AWS.

December 13, 2018 | 09:00 AM – 10:00 AM PT – Serverless Databases with Amazon Aurora and Amazon DynamoDB – Learn about the new serverless features and benefits in Amazon Aurora and DynamoDB, use cases and how to get started.

Enterprise & Hybrid

December 19, 2018 | 11:00 AM – 12:00 PM PT – How to Use “Minimum Viable Refactoring” to Achieve Post-Migration Operational Excellence – Learn how to improve the security and compliance of your applications in two weeks with “minimum viable refactoring”.

IoT

December 17, 2018 | 11:00 AM – 12:00 PM PT – Introduction to New AWS IoT Services – Dive deep into the AWS IoT service announcements from re:Invent 2018, including AWS IoT Things Graph, AWS IoT Events, and AWS IoT SiteWise.

Machine Learning

December 10, 2018 | 09:00 AM – 10:00 AM PT – Introducing Amazon SageMaker Ground Truth – Learn how to build highly accurate training datasets with machine learning and reduce data labeling costs by up to 70%.

December 11, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS DeepRacer – AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and a global racing league.

December 12, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Forecast and Amazon Personalize – Learn about the key features and benefits of these managed ML services, common use cases, and how you can get started.

December 13, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Textract: Now in Preview – Learn how Amazon Textract, now in preview, enables companies to easily extract text and data from virtually any document.

Networking

December 17, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS Transit Gateway – Learn how AWS Transit Gateway significantly simplifies management and reduces operational costs with a hub and spoke architecture.

Robotics

December 18, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS RoboMaker, a New Cloud Robotics Service – Learn about AWS RoboMaker, a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale.

Security, Identity & Compliance

December 17, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS Security Hub – Learn about AWS Security Hub, and how it gives you a comprehensive view of high-priority security alerts and your compliance status across AWS accounts.

Serverless

December 11, 2018 | 11:00 AM – 12:00 PM PT – What’s New with Serverless at AWS – In this tech talk, we’ll catch you up on our ever-growing collection of natively supported languages, console updates, and re:Invent launches.

December 13, 2018 | 11:00 AM – 12:00 PM PT – Building Real Time Applications using WebSocket APIs Supported by Amazon API Gateway – Learn how to build, deploy and manage APIs with API Gateway.

Storage

December 12, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Windows File Server – Learn about Amazon FSx for Windows File Server, a new fully managed native Windows file system that makes it easy to move Windows-based applications that require file storage to AWS.

December 14, 2018 | 01:00 PM – 02:00 PM PT – What’s New with AWS Storage – A Recap of re:Invent 2018 Announcements – Learn about the key AWS storage announcements that occurred prior to and at re:Invent 2018. With 15+ new service, feature, and device launches in object, file, block, and data transfer storage services, you will be able to start designing the foundation of your cloud IT environment for any application and easily migrate data to AWS.

December 18, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Lustre – Learn about Amazon FSx for Lustre, a fully managed file system for compute-intensive workloads. Process files from S3 or data stores, with throughput up to hundreds of GBps and sub-millisecond latencies.

December 18, 2018 | 01:00 PM – 02:00 PM PT – Introduction to New AWS Services for Data Transfer – Learn about new AWS data transfer services, and which might best fit your requirements for data migration or ongoing hybrid workloads.

2018 ISO certificates are here, with a 70% increase in in-scope services

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/2018-iso-certificates-are-here-with-a-70-increase-of-in-scope-services/

In just the last year, we’ve increased the number of ISO services in scope by 70%. That makes 114 services in total that have been validated against ISO 9001, 27001, 27017, and 27018.

The following services are new to our ISO program:

  • Amazon AppStream 2.0
  • Amazon Athena
  • Amazon Chime
  • Amazon CloudWatch Events
  • Amazon CloudWatch
  • Amazon Comprehend
  • Amazon Elastic Container Service for Kubernetes
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx*
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Pinpoint
  • Amazon Polly
  • Amazon Rekognition
  • Amazon Transcribe
  • Amazon Translate
  • AWS Amplify*
  • AWS AppSync
  • AWS Artifact
  • AWS Certificate Manager
  • AWS CodeStar
  • AWS DataSync*
  • AWS Device Farm
  • AWS Elemental MediaConnect*
  • AWS Elemental MediaConvert
  • AWS Elemental MediaLive
  • AWS Firewall Manager
  • AWS Global Accelerator*
  • AWS Glue
  • AWS IoT Greengrass
  • AWS IoT 1-Click
  • AWS IoT Analytics
  • AWS License Manager*
  • AWS OpsWorks CM [includes Chef Automate, Puppet Enterprise]
  • AWS Organizations
  • AWS RoboMaker*
  • AWS Secrets Manager
  • AWS Server Migration Service
  • AWS Serverless Application Repository
  • AWS Service Catalog
  • AWS Single Sign-On
  • AWS Transfer for SFTP*
  • AWS Trusted Advisor
  • Amazon Route 53 Resolver*

*New Service

The latest certificates for ISO 9001, 27001, 27017, and 27018 are now available, giving you insight into our information security management system from third-party auditors. They contain the full list of AWS locations in scope and reference the ISO Certified webpage, which includes all services in scope. For convenience, you can also download the certificates in the console via AWS Artifact.

We’re clearly accelerating the pace at which we add services in scope, but our ultimate goal is to eliminate your wait for compliant services altogether. To that end, 9 of the 51 services added in this latest audit cycle launched as generally available with these certifications at re:Invent 2018.

Want more AWS Security news? Follow us on Twitter.

New PCI DSS report now available, 31 services added to scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-pci-dss-report-now-available-31-services-added-to-scope/

In just the last 6 months, we’ve increased the number of Payment Card Industry Data Security Standard (PCI DSS) certified services by 50%. We were evaluated by third-party auditors from Coalfire and the latest report is now available on AWS Artifact.

I would like to especially call out the six new services (marked with asterisks) that just launched as generally available at re:Invent with PCI certification. We’re increasing the rate at which we add existing services in scope, and we’re also launching new services PCI certified, enabling you to use them for regulated workloads sooner. The goal is for all of our services to have compliance certifications so you never have to wait to verify their security and compliance posture. Additional work to that end is already underway, and we’ll be updating you about our progress at every significant milestone.

With the addition of the following 31 services, you can now select from a total of 93 PCI-compliant services. To see the full list, go to our Services in Scope by Compliance Program page.

  • Amazon Athena
  • Amazon Comprehend
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx*
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Rekognition
  • Amazon Transcribe
  • Amazon Translate
  • AWS AppSync
  • AWS Certificate Manager (ACM)
  • AWS DataSync*
  • AWS Elemental MediaConnect*
  • AWS Global Accelerator*
  • AWS Glue
  • AWS Greengrass
  • AWS IoT Core (includes Device Management)
  • AWS OpsWorks for Chef Automate (includes Puppet Enterprise)
  • AWS RoboMaker*
  • AWS Secrets Manager
  • AWS Serverless Application Repository
  • AWS Server Migration Service (SMS)
  • AWS Step Functions
  • AWS Transfer for SFTP*
  • VM Import/Export

*New Service

If you want to know more about our compliance programs or provide feedback, please contact us. Your feedback helps us prioritize our decisions and innovate our programs.

Want more AWS Security news? Follow us on Twitter.

Scaling a governance, risk, and compliance program for the cloud, emerging technologies, and innovation

Post Syndicated from Michael South original https://aws.amazon.com/blogs/security/scaling-a-governance-risk-and-compliance-program-for-the-cloud/

Governance, risk, and compliance (GRC) programs are sometimes looked upon as the bureaucracy getting in the way of exciting cybersecurity work. But a good GRC program establishes the foundation for meeting security and compliance objectives. It is the proactive approach to cybersecurity that, if done well, minimizes reactive incident response.

Of the three components of cybersecurity—people, processes, and technology—technology is often viewed as the “easy button” because, in relative terms, it’s simpler than drafting a policy with the right balance of flexibility and specificity or managing countless organizational principles and human behavior. Still, as much as we promote technology and automation at AWS, we also understand that automating a bad process with the latest technology doesn’t make the process or outcome better. Cybersecurity must incorporate all three aspects with a programmatic approach that scales. To reach that goal, an effective GRC program is essential because it ensures a holistic view has been taken while tackling the daunting mission of cybersecurity.

Although governance, risk, and compliance are oftentimes viewed as separate functions, there is a symbiotic relationship between them. Governance establishes the strategy and guardrails for meeting specific requirements that align and support the business. Risk management connects specific controls to governance and assessed risks, and provides business leaders with the information they need to prioritize resources and make risk-informed decisions. Compliance is the adherence and monitoring of controls to specific governance requirements. It is also the “ante” to play the game in certain industries and, with continuous monitoring, closes the feedback loop regarding effective governance. Security architecture, engineering, and operations are built upon the GRC foundation.

Without a GRC program, people tend to solely focus on technology and stove-pipe processes. For example, say a security operations employee is faced with four events to research and mitigate. In the absence of a GRC program, the staffer would have no context about the business risk or compliance impact of the events, which could lead them to prioritize the least important issue.

Figure: GRC relationship model, showing the symbiotic relationship between governance, risk, and compliance

The breadth and depth of a GRC program varies with each organization. Regardless of its simplicity or complexity, there are opportunities to transform or scale that program for the adoption of cloud services, emerging technologies, and other future innovations.

Below is a checklist of best practices to help you on your journey. The key takeaways of the checklist are: base governance on objectives and capabilities, include risk context in decision-making, and automate monitoring and response.

Governance

Identify compliance requirements

__ Identify required compliance frameworks (such as HIPAA or PCI) and contract/agreement obligations.

__ Identify restrictions/limitations to cloud or emerging technologies.

__ Identify required or chosen standards to implement (for example, NIST, ISO, COBIT, CSA, or CIS).

Conduct program assessment

__ Conduct a program assessment based on industry processes such as the NIST Cyber Security Framework (CSF) or ISO/IEC TR 27103:2018 to understand the capability and maturity of your current profile.

__ Determine desired end-state capability and maturity, also known as target profile.

__ Document and prioritize gaps (people, process, and technologies) for resource allocation.

__ Build a Cloud Center of Excellence (CCoE) team.

__ Draft and publish a cloud strategy that includes procurement, DevSecOps, management, and security.

__ Define and assign functions, roles, and responsibilities.

Update and publish policies, processes, procedures

__ Update policies based on objectives and desired capabilities that align to your business.

__ Update processes for modern organization and management techniques such as DevSecOps and Agile, specifying how to upgrade old technologies.

__ Update procedures to integrate cloud services and other emerging technologies.

__ Establish technical governance standards that will be used to select controls and monitor compliance.

Risk management

Conduct a risk assessment*

__ Conduct or update an organizational risk assessment (for example: market, financial, reputation, legal).

__ Conduct or update a risk assessment for each business line (such as mission, market, products/services, financial).

__ Conduct or update a risk assessment for each asset type.

* The use of pre-established threat models can simplify the risk assessment process, both initial and updates.

Draft risk plans

__ Implement plans to mitigate, avoid, transfer, or accept risk at each tier, business line, and asset (for example, a business continuity plan, a continuity of operations plan, a systems security plan).

__ Implement plans for specific risk areas (such as supply chain risk, insider threat).

Authorize systems

__ Use NIST Risk Management Framework (RMF) or other process to authorize and track systems.

__ Use NIST Special Publication 800-53, ISO/IEC 27002:2013, or another control set to select, implement, and assess controls based on risk.

__ Implement continuous monitoring of controls and risk, employing automation to the greatest extent possible.

Incorporate risk information into decisions

__ Link system risk to business and organizational risk.

__ Automate the translation of continuous system risk monitoring and status into business and organizational risk.

__ Incorporate “What’s the risk?” (financial, cyber, legal, reputation) into leadership decision-making.

Compliance

Monitor compliance with policy, standards, and security controls

__ Automate technical control monitoring and reporting (advanced maturity will lead to AI/ML); see the sketch after this list.

__ Implement manual monitoring of non-technical controls (for example, periodic review of visitor logs).

__ Link compliance monitoring with security information and event management (SIEM) and other tools.
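
As a concrete illustration of the first item above (automated technical control monitoring), here is a minimal Python (boto3) sketch, assuming AWS Config is already recording in the account. It walks every Config rule and prints those that are not reporting COMPLIANT; in practice you would route this output into your SIEM or reporting pipeline rather than printing it.

    import boto3

    config = boto3.client("config")

    # Page through every Config rule and flag those not reporting COMPLIANT.
    kwargs = {}
    while True:
        page = config.describe_compliance_by_config_rule(**kwargs)
        for rule in page["ComplianceByConfigRules"]:
            state = rule.get("Compliance", {}).get("ComplianceType", "INSUFFICIENT_DATA")
            if state != "COMPLIANT":
                print(f"{rule['ConfigRuleName']}: {state}")
        if "NextToken" not in page:
            break
        kwargs = {"NextToken": page["NextToken"]}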

Continually self-assess

__ Automate application security testing and vulnerability scans.

__ Conduct periodic self-assessments, ranging from sampling individual controls to reviewing entire functional areas, along with penetration tests.

__ Be overly critical of assumptions, perspectives, and artifacts.

Respond to events and changes to risk

__ Integrate security operations with the compliance team for response management.

__ Establish standard operating procedures to respond to unintentional changes in controls.

__ Mitigate impact and reset affected control(s); automate as much as possible.

Communicate events and changes to risk

__ Establish a reporting tree and thresholds for each type of incident.

__ Include general counsel in reporting.

__ Ensure applicable regulatory authorities are notified when required.

__ Automate where appropriate.

Want more AWS Security news? Follow us on Twitter.

Michael South

Michael joined AWS in 2017 as the Americas Regional Leader for public sector security and compliance business development. He supports customers who want to achieve business objectives and improve their security and compliance in the cloud. His customers span across the public sector, including: federal governments, militaries, state/provincial governments, academic institutions, and non-profits from North to South America. Prior to AWS, Michael was the Deputy Chief Information Security Officer for the city of Washington, DC and the U.S. Navy’s Chief Information Officer for Japan.

Are KMS custom key stores right for you?

Post Syndicated from Richard Moulds original https://aws.amazon.com/blogs/security/are-kms-custom-key-stores-right-for-you/

You can use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. The KMS custom key store integrates KMS with AWS CloudHSM to help satisfy compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs) while providing the AWS service integrations of KMS. However, the additional control comes with increased cost and potential impact on performance and availability. This post will help you decide if this feature is the best approach for you.

KMS is a fully managed service that generates encryption keys and helps you manage their use across more than 45 AWS services. It also supports the AWS Encryption SDK and other client-side encryption tools, and you can integrate it into your own applications. KMS is designed to meet the requirements of the vast majority of AWS customers. However, there are situations where customers need to manage their keys in single-tenant HSMs that they exclusively control. Previously, KMS did not meet these requirements since it offered only the ability to store keys in shared HSMs that are managed by KMS.

AWS CloudHSM is a service that’s primarily intended to support customer-managed applications that are specifically designed to use HSMs. It provides direct control over HSM resources, but the service isn’t, by itself, widely integrated with other AWS managed services. Before custom key store, this meant that if you required direct control of your HSMs but still wanted to use and store regulated data in AWS managed services, you had to choose between changing those requirements, not using a given AWS service, or building your own solution. KMS custom key store gives you another option.

How does a custom key store work?

With custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys rather than the default KMS key store. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Your KMS customer master keys (CMKs) never leave the CloudHSM instances, and all KMS operations that use those keys are only performed in your HSMs. In all other respects, the master keys stored in your custom key store are used in a way that is consistent with other KMS CMKs.

This diagram illustrates the primary components of the service and shows how a cluster of two CloudHSM instances is connected to KMS to create a customer-controlled key store.

Figure 1: A cluster of two CloudHSM instances is connected to KMS to create a customer-controlled key store

Because you control your CloudHSM cluster, you can take direct action to manage certain aspects of the lifecycle of your keys, independently of KMS. Specifically, you can verify that KMS correctly created keys in your HSMs and you can delete key material and restore keys from backup at any time. You can also choose to connect and disconnect the CloudHSM cluster from KMS, effectively isolating your keys from KMS. However, with more control comes more responsibility. It’s important that you understand the availability and durability impact of using this feature, and I discuss the issues in the next section.

Decision criteria

KMS customers who plan to use a custom key store tell us they expect to use the feature selectively, deciding on a key-by-key basis where to store them. To help you decide if and how you might use the new feature, here are some important issues to consider.

Here are some reasons you might want to store a key in a custom key store:

  • You have keys that are required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
  • You have keys that are explicitly required to be stored in an HSM validated at FIPS 140-2 level 3 overall (the HSMs used in the default KMS key store are validated to level 2 overall, with level 3 in several categories, including physical security).
  • You have keys that are required to be auditable independently of KMS.

And here are some considerations that might influence your decision to use a custom key store:

  • Cost — Each custom key store requires that your CloudHSM cluster contains at least two HSMs. CloudHSM charges vary by region, but you should expect costs of at least $1,000 per month, per HSM, if each device is permanently provisioned. This cost occurs regardless of whether you make any requests of the KMS API directly or indirectly through an AWS service.
  • Performance — The number of HSMs determines the rate at which keys can be used. It’s important that you understand the intended usage patterns for your keys and ensure that you have provisioned your HSM resources appropriately.
  • Availability — The number of HSMs and the use of availability zones (AZs) impacts the availability of your cluster and, therefore, your keys. You must understand and assess the risk that configuration errors could result in a custom key store being disconnected, or key material being deleted and unrecoverable.
  • Operations — By using the custom key store feature, you will perform certain tasks that are normally handled by KMS. You will need to set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks, so you should have the appropriate resources and organizational controls in place to perform them.

Getting Started

Here’s a basic rundown of the steps that you’ll take to create your first key in a custom key store within a given region; a minimal code sketch follows the list.

  1. Create your CloudHSM cluster, initialize it, and add HSMs to the cluster. If you already have a CloudHSM cluster, you can use it as a custom key store in addition to your existing applications.
  2. Create a CloudHSM user so that KMS can access your cluster to create and use keys.
  3. Create a custom key store entry in KMS, give it a name, define which CloudHSM cluster you want it to use, and give KMS the credentials to access your cluster.
  4. Instruct KMS to make a connection to your cluster and log in.
  5. Create a CMK in KMS in the usual way except now select CloudHSM as the source of your key material. You’ll define administrators, users, and policies for the key as you would for any other CMK.
  6. Use the key via the existing KMS APIs, AWS CLI, or the AWS Encryption SDK. Requests to use the key don’t need to be context-aware of whether the key is stored in a custom key store or the default KMS key store.
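
If you prefer to script these steps, here is a minimal Python (boto3) sketch of steps 3 through 6, assuming you have already completed steps 1 and 2 (an initialized cluster and a CloudHSM user for KMS). The key store name, cluster ID, certificate file, and password shown are placeholders for your own values.

    import boto3

    kms = boto3.client("kms")

    # Step 3: register your CloudHSM cluster as a custom key store.
    store = kms.create_custom_key_store(
        CustomKeyStoreName="example-key-store",           # placeholder name
        CloudHsmClusterId="cluster-1a23b4cdefg",          # placeholder cluster ID
        TrustAnchorCertificate=open("customerCA.crt").read(),
        KeyStorePassword="kmsuser-password",              # placeholder kmsuser password
    )
    store_id = store["CustomKeyStoreId"]

    # Step 4: connect KMS to the cluster. Connection is asynchronous; poll
    # DescribeCustomKeyStores until ConnectionState is CONNECTED before
    # creating keys.
    kms.connect_custom_key_store(CustomKeyStoreId=store_id)

    # Step 5: create a CMK whose key material is generated in your HSMs.
    key = kms.create_key(
        Origin="AWS_CLOUDHSM",
        CustomKeyStoreId=store_id,
        Description="CMK backed by my CloudHSM cluster",
    )

    # Step 6: use the key exactly as you would any other CMK.
    ciphertext = kms.encrypt(
        KeyId=key["KeyMetadata"]["KeyId"],
        Plaintext=b"example plaintext",
    )["CiphertextBlob"]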

Summary

Some customers need specific controls in place before they can use KMS to manage encryption keys in AWS. The new KMS custom key store feature is intended to satisfy that requirement. You can now apply the controls provided by CloudHSM to keys managed in KMS, without changing access control policies or service integration.

However, by using the new feature, you take responsibility for certain operational aspects that would otherwise be handled by KMS. It’s important that you have the appropriate controls in place and understand the performance and availability requirements of each key that you create in a custom key store.

If you’ve been prevented from migrating sensitive data to AWS because of specific key management requirements that are currently not met by KMS, consider using the new KMS custom key store feature.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Author

Richard Moulds

Richard is a Principal Product Manager at AWS. He’s a member of the KMS team and is primarily focused on helping to define the product roadmap and satisfy customer requirements. Richard has more than 15 years experience in helping customers build encryption and key management systems to protect their data. His attraction to cryptography stems from the challenge of taking such a complex subject and translating it into simple solutions that customers should be able to take for granted, on a grand scale. When he’s not thinking ahead he’s focused on the past, restoring classic cars, the more rust the better.

Announcing the First AWS Security Conference: AWS re:Inforce 2019

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/announcing-the-first-aws-security-conference-aws-reinforce-2019/

On the eve of re:Invent 2018, I’m pleased to announce that AWS is launching our first conference dedicated to cloud security: AWS re:Inforce. The event will offer a deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools. Security is the top priority at AWS, and AWS re:Inforce is emblematic of our commitment to giving customers direct access to the latest security research and trends from subject matter experts, along with the opportunity to participate in hands-on exercises with our services.

The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099.

Over the course of this two-day conference we will offer multiple content tracks designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, risk and compliance officers, and more. Our technical track will offer detailed tactical education to take your security posture from reactive to proactive. We’ll also be offering a business enablement track tailored to assisting with your strategic migration decisions. You’ll find content delivered in a number of formats to meet a diversity of learning preferences, including keynotes, breakout sessions, Q&As, hands-on workshops, simulations, training and certification, as well as our interactive Security Jam. We anticipate 100+ sessions ranging in levels of ability from beginner to expert.

AWS re:Inforce will also feature our AWS Security Competency Partners, each of whom has demonstrated success in building products and solutions on AWS to support customers in multiple domains. With hundreds of industry-leading products, these partners will give you an opportunity to learn how to enhance the security of both on-premises and cloud environments.

Finally, you’ll find sessions built around the Security Pillar of Well Architected and the Security Perspective of our Cloud Adoption Framework (CAF). These will include Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, Security Services, and Incident Response. Our automated reasoning and cryptography researchers and scientists will also be available, as well as our partners in the academic community, to discuss Provable Security and additional emerging security trends.

If you’d like to sign up to be notified of when registration opens, please visit:

https://reinforce.awsevents.com

Additional information and registration details will be shared in early 2019. We look forward to seeing you all there!

– Steve Schmidt, Chief Information Security Officer
Follow Steve on Twitter.

AWS Security Profiles: Quint Van Deman, Principal Business Development Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-quint-van-deman-principal-business-development-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I joined AWS in August 2014. I spent my first two and a half years in the Professional Services group, where I ran around the world to help some of our largest customers sort through their security and identity implementations. For the last two years, I’ve parlayed that experience into my current role of Business Development Manager for the Identity and Directory Services group. I help the product development team build services and features that address the needs I’ve seen in our customer base. We’re working on a next generation of features that we think will radically simplify the way customers implement and manage identities and permissions within the cloud environment. The other key element of my job is to find and disseminate the most innovative solutions I’m seeing today across the broadest possible set of AWS customers to help them be more successful faster.

How do you explain your job to non-tech friends?

I keep one foot in the AWS service team organizations, where they build features, and one foot in day-to-day customer engagement to understand the real-world experiences of people using AWS. I learn about the similarities and differences between how these two groups operate, and then I help service teams understand these similarities and differences, as well.

You’re a “bar raiser” for the Security Blog. What does that role entail?

The notion of being a bar raiser has a lot of different facets at Amazon. The general concept is that, as we go about certain activities — whether hiring new employees or preparing blog posts — we send things past an outside party with no team biases. As a bar raiser for the Security Blog, I don’t have a lot of incentive to get posts out because of a deadline. My role is to make sure that nothing is published until it successfully addresses a customer need. At Amazon, we put the best customer experience first. As a bar raiser, I work to hold that line, even though it might not be the fastest approach, or the path of least resistance.

What’s the most challenging part of your job?

Ruthless prioritization. One of our leadership principles at Amazon is frugality. Sometimes, that means staying in cheap hotel rooms, but more often it means frugality of resources. In my case, I’ve been given the awesome charter to serve as the Business Development Manager for our suite of Identity and Directory Services. I’m something of a one-man army the world over. But that means a lot of things come past my desk, and I have to prioritize ruthlessly to ensure I’m focusing on the things that will be most impactful for our customers.

What’s your favorite part of your job?

A lot of our customers are doing an awesome job being bar raisers themselves. They’re pushing the envelope in terms of identity-focused solutions in their own AWS environments. One fulfilling part of my work is getting to collaborate with those customers who are on the leading edge: Their AWS field teams will get ahold of me, and then I get to do two really fun things. First, I get to dive in and help these customers succeed at whatever they’re trying to do. Second, I get to learn from them. I get to examine the really amazing ideas they’ve come up with and see if we might be able to generalize their solutions and roll them out to the delight of many more AWS customers that might not have teams mature enough to build them on their own. While my title is Business Development Manager, I’m a technologist through and through. Getting to dive into these thorny technical situations and see them resolve into really great solutions is extremely rewarding.

How did you choose your particular topics for re:Invent 2018?

Over the last year, I’ve talked with lots of customers and AWS field teams. My Mastering Identity at Every Layer of the Cake session was born out of the fact that I noticed a lot of folks doing a lot of work to get identity for AWS right, but other layers of identity that are just as important weren’t getting as much attention. I made it my mission to provide a more holistic understanding of what identity in the cloud means to these customers, and over time I developed ways of articulating the topic which really seemed to resonate. My session is about sharing this understanding more broadly. It’s a 400-level talk, since I want to really dive deep with my audience. I have five embedded demos, all of which are going to show how to combine multiple features, sprinkle in a bit of code, and apply them to near universally applicable customer use cases.

Why use the metaphor of a layer cake?

I’ve found that analogies and metaphors are very effective ways of grounding someone’s mental imagery when you’re trying to relay a complex topic. Last year, my metaphor was bridges. This year, I decided to go with cake: It’s actually very descriptive of the way that our customers need to think about Identity in AWS since there are multiple layers. (Also, who doesn’t like cake? It’s delicious.)

What are you hoping that your audience will take away from the session?

Customers are spending a lot of time getting identity right at the AWS layer. And that’s a ground-level, must-do task. I’m going to put a few new patterns in the audience’s hands to do this more effectively. But as a whole, we aren’t consistently putting as much effort into the infrastructure and application layers. That’s what I’m really hoping to expose people to. We have a wealth of features that really raise the bar in terms of cloud security and identity — from how users authenticate to operating systems or databases, to how they authenticate to the applications and APIs that they put on AWS. I want to expose these capabilities to folks and paint a vivid image for them of the really powerful things that they can do today that they couldn’t have done before.

What do you want your audience to do differently after attending your session?

During the session, I’ll be taking a handful of features that are really interesting in their own right, and combining them in a way that I hope will absolutely delight my audience. For example, I’ll show how you can take AWS CloudFormation macros and AWS Identity and Access Management, layer a little bit of customization on top, and come up with something far more magical than either of the two individually. It’s an advanced use case that, with very little effort, can disproportionately improve your security posture while letting your organization move faster. That’s just one example though, and the session is going to be loaded with them, including a grand finale. I’ve already started the work to open source a lot of what I’m going to show, but even where I can’t open source, I want to paint a very clear, prescriptive blueprint for how to get there. My goal is that my audience goes back to work on Monday and, within a couple of hours, they’ve measurably moved the security bar for their organization.

Any tips for first-time conference attendees?

Be deliberate about going outside of your comfort zone. If you’re not working in Security, come to one of our sessions. If you do work in Security, go to some other tracks, like Dev-Ops or Analytics, to get that cross-pollination of ideas. One of the most amazing things about AWS is how it helps dramatically lower the barrier to entry for unfamiliar technology domains and tools. A developer can ship more secure code faster by being invested in security, and a security expert can disproportionally scale their impact by applying the tools of developers or data scientists. Re:Invent is an amazing place to start exploring that diversity, and if you do, I suspect you’ll find ways to immediately make yourself better at your day job.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

Complexity versus human understanding have always been at odds. I see initiatives across AWS that have all kinds of awesome innovation and computer science behind them. In the coming years, I think these will mature to the point that they will be able to offload much of the natural complexity that comes with securing large scale environments with extremely fine grain permissions. Folks will be able to provide very simple statements or rules of how they want their environment to be, and we should be able to manage the complexity for them, and present them with a nice, clean picture they can easily understand.

What does cloud security mean to you, personally?

I see possibilities today that were herculean tasks before. For example, the process to make sure APIs can properly authenticate and authorize each other used to be an extremely elaborate process at scale. It became such an impossible mess that only the largest of organizations with the best skills, the best technology, and the best automation were really able to achieve it. Everyone else just had to punt or put a band-aid on the problem. But in the world of the cloud, all it takes is attaching an AWS IAM role on one side, and a fairly small resource-based policy to an Amazon API Gateway API on the other. Examples like this show how we’re making security that would once have been extremely difficult for most customers to afford or implement simple to configure, get right, and deploy ubiquitously, and that’s really powerful. It’s what keeps me passionate about my work.

If you had to pick any other job, what would you want to do with your life?

I’ve got all kinds of whacky hobbies. I kiteboard, I surf, work on massive renovation projects at home, hike and camp in the backcountry, and fly small airplanes. It’s an overwhelming set of hobbies that didn’t align with my professional aptitude. But if the world were my oyster and I had to do something else, I would want to combine those hobbies into one single career that’s never before been seen.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Quint Van Deman

Quint is the global business development manager for AWS Identity and Directory services. In this role, he leads the incubation, scaling, and evolution of new and existing identity-based services, as well as field enablement and strategic customer advisement for the same. Before joining the BD team, Quint was an early member of the AWS Professional Services team, where he was a Senior Consultant leading cloud transformation teams at several prominent enterprise customers, and a company-wide subject matter expert on IAM and Identity federation.

AWS Security Profiles: Henrik Johansson, Principal, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-henrik-johansson-principal-office-of-the-ciso/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

As a Principal for the Office of the CISO, I not only get to spend time directly with our customers and their executives and operational teams, but I also get to work with our own service teams and other parts of our organization. Additionally, a big part of this role involves spending time with the industry as a whole, in both small and large settings, and trying to raise the bar of the overall industry together with a number of other teams within AWS.

How do you explain your job to non-tech friends?

Whether or not someone understands what the cloud is, I try to focus on the core part of the role: I help people and organizations understand AWS Security and what it means to operate securely on the cloud. And I focus on helping the industry achieve these same goals.

What’s your favorite part of your job?

Helping customers and their executive leadership to understand the benefits of cloud security and how they can improve the overall security posture by using cloud features. Getting to show them how we can help drive road maps and new features and functions that they can use to secure their workloads (based on their valuable feedback) is very rewarding.

Tell us about the open source communities you support. Why are they important to AWS?

The open source community is important to me for a couple of reasons. First, it helps enable and inspire innovation by inviting the community at large to expand on the various use cases our services provide. I also really appreciate how customers enable other customers by not only sharing their own innovations but also inviting others to contribute and further improve their solutions. I have a couple of open source repositories that I maintain, where I put various security automation tools that I’ve built to show various innovative ways that customers can use our services to strengthen their security posture. Even if you don’t use open source in your company, you can still look at the vast number of projects out there, both from customers and from AWS, and learn from them.

What does cloud security mean to you, personally?

For me, it represents the possibility of creating efficient, secure solutions. I’ve been working in various security roles for almost twenty-five years, and the ability we have to protect data and our infrastructure has never been stronger. We have an incredible opportunity to solve challenges that would have been insurmountable before, and this leads to one thing: trust. It allows us to earn trust from customers, trust from users, and trust from the industry. It also enables our customers to earn trust from their users.

In your opinion, what’s the biggest challenge facing cloud security right now?

The opportunities far outweigh the challenges, honestly. The different methods that customers and users have to gain visibility into what they’re actually running is mind-blowing. That visibility is a combination of knowing what you have, knowing what you run, and knowing all the ins and outs of it. I still hear people talking about that server in the corner under someone’s desk that no one else knows about. That simply doesn’t exist in the cloud, where everything is an API call away. If anything, the challenge lies in finding people who want to continue driving the innovation and solving the hard cases with all the technology that’s at our fingertips.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

One shift we’re already seeing is that compliance is becoming a natural part of the security and innovation conversation. Previously, “compliance” meant that maybe you had a specific workload that needed to be PCI-compliant, or you were under HIPAA requirements. Nowadays, compliance is a more natural part of what we do. Privacy is everywhere. It has to be everywhere, based on requirements like GDPR, but we’re seeing a lot of these “have to be” requirements turn into “want to be” requirements — we’re not distinguishing between the users that are required to be protected and the “regular” users. More and more, we’re seeing that privacy is always going to have a seat at the table, which is something we’ve always wanted.

At re:Invent 2018, you’re presenting two sessions together with Andrew Krug. How did you choose your topics?

They’re a combination of what I’m passionate about and what I see our customers need. This is the third year I’ve presented my Five New Security Automations Using AWS Security Services & Open Source session. Previously, I’ve also built boot camps and talks around secure automation, DevSecOps, and container security. But we have a big need for open source security talks that demonstrate how people can actually use open source to integrate with our services — not just as a standalone piece, but actually using open source as inspiration for what they can build on their own. That’s not to say that AWS services aren’t extremely important. They’re the driving force here. But the open source piece allows people to adapt solutions to their specific needs, further driving the use cases together with the various AWS security services.

What are you hoping that your audience will take away from your sessions?

I want my audience to walk away feeling that they learned something new, and that they can build something that they didn’t know how to before. They don’t have to take and use the specific open source tools we put out there, but I want them to see our examples as a way to learn how our services work. It doesn’t matter if you just download a sample script or if you run a full project, or a full framework, but it’s important to learn what’s possible with services beyond what you see in the console or in the documentation.

Any tips for first-time conference attendees?

Plan ahead, but be open to ad-hoc changes. And most importantly, wear sneakers or comfortable walking shoes. Your feet will appreciate it.

If you had to pick any other job, what would you want to do with your life?

If I picked another role at Amazon, it would definitely be a position around innovation, thinking big, and building stuff. Even if it was a job somewhere else, I’d still want it to involve building, whether woodshop projects or a robot. Innovation and building are my passions.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Henrik Johansson

Henrik is a Principal in the Office of the CISO at AWS Security. With over 22 years of experience in IT with a focus on security and compliance, he focuses on establishing and driving CISO-level relationships as a trusted cloud security advisor who has a passionate focus on developing services and features for security and compliance at scale.

AWS Security Profiles: Sam Koppes, Senior Product Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sam-koppes-senior-product-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for a year, and I’m a Senior Product Manager for the AWS CloudTrail team. I’m responsible for product roadmap decisions, customer outreach, and for planning our engineering work.

How do you explain your job to non-tech friends?

I work on a technical product, and for any tech product, responsibility is split in half: We have product managers and engineering managers. Product managers are responsible for what the product does. They’re responsible for figuring out how it behaves, what needs it addresses, and why customers would want it. Engineering managers are responsible for figuring out how to make it. When you look to build a product, there’s always the how and the what. I’m responsible for the what.

What are you currently working on that you’re excited about?

The scale challenges that we’re facing today are extremely interesting. We’re allowing customers to build things at an absolutely unheard-of scale, and bringing security into that mix is a challenge. But it’s also one of the great opportunities for AWS — we can bring a lot of value to customers by making security as turnkey as possible so that it just comes with the additional scale and additional service areas. I want people to sleep easy at night knowing that we’ve got their backs.

What’s your favorite part of your job?

When I deliver a product, I love sending out the What’s New announcement. During our launch calls, I love collecting social media feedback to measure the impact of our products. But really, the best part is the post-launch investigation that we do, which allows us to understand whether we hit the mark or not. My team usually does a really good job of making sure that we deliver the kinds of features that our customers need, so seeing the impact we’ve had is very gratifying. It’s a privilege to get to hear about the ways we’re changing people’s lives with the new features we’re building.

How did you choose your particular topic for re:Invent this year?

My session is called Augmenting Security Posture and Improving Operational Health with AWS CloudTrail. As a service, CloudTrail has been around a while. But I’ve found that customers face knowledge gaps in terms of what to do with it. There are a lot of people out there with an impressive depth of experience, but they sometimes lack an additional breadth that would be helpful. We also have a number of new customers who want more guidance. So I’m using the session to do a reboot: I’ll start from the beginning and go through what the service is and all the things it does for you, and then I’ll highlight some of the benefits of CloudTrail that might be a little less obvious. I built the session based on discussions with customers, who frequently tell me they start using the service — and only belatedly realize that they can do much more with it beyond, say, using it as a compliance tool. When you start using CloudTrail, you start amassing a huge pile of information that can be quite valuable. So I’ll spend some time showing customers how they can use this information to enhance their security posture, to increase their operational health, and to simplify their operational troubleshooting.

What are you hoping that your audience will take away from it?

I want people to walk away with two fistfuls of ideas for cool things they can do with CloudTrail. There are some new features we’re going to talk about, so even if you’re a power user, my hope is that you’ll return to work with three or four features you have a burning desire to try out.

What does cloud security mean to you, personally?

I’m very aware of the magnitude of the threats that exist today. It’s an evolving landscape. We have a lot of powerful tools and really smart people who are fighting this battle, but we have to think of it as an ongoing war. To me, the promise you should get from any provider is that of a safe haven — an eye in the storm, if you will — where you have relative calm in the midst of the chaos going on in the industry. Problems will constantly evolve. New penetration techniques will appear. But if we’re really delivering on our promise of security, our customers should feel good about the fact that they have a secure place that allows them to go about their business without spending much mental capacity worrying about it all. People should absolutely remain vigilant and focused, but they don’t have to spend all of their time and energy trying to stay abreast of what’s going on in the security landscape.

What’s the most common misperception you encounter about cloud security and compliance?

Many people think that security is a magic wand: You wave it, and it leads to a binary state of secure or not secure. And that’s just not true. A better way to think of security is as a chain that’s only as strong as its weakest link. You might find yourself in a situation where lots of people have worked very hard to build a very secure environment — but then one person comes in and builds on top of it without thinking about security, and the whole thing blows wide open. All it takes is one little hole somewhere. People need to understand that everyone has to participate in security.

In your opinion, what’s the biggest challenge that people face as they move to the cloud?

At AWS, we follow this thing called the Shared Responsibility Model: AWS is responsible for securing everything from the virtualization layer down, and customers are responsible for building secure applications. One of the biggest challenges that people face lies in understanding what it means to be secure while doing application development. Companies like AWS have invested hugely in understanding different attack vectors and learning how to lock down our systems when it comes to the foundational service we offer. But when customers build on a platform that is fundamentally very secure, we still need to make sure that we’re educating them about the kinds of things that they need to do, or not do, to ensure that they stay within this secure footprint.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a tremendous amount of growth in the application of machine learning and artificial intelligence. Historically, we’ve approached security in a very binary way: rules-based security systems in which things are either okay or not okay. And we’ve built complex systems that define “okay” based on a number of criteria. But we’ve always lacked the ability to apply a pseudo-human level of intelligence to threat detection and remediation, and today, we’re seeing that start to change. I think we’re in the early stages of a world where machine learning and artificial intelligence become a foundational, indispensable part of an effective security perimeter. Right now, we’re in a world where we can build strong defenses against known threats, and we can build effective hedging strategies to intercept things we consider risky. Beyond that, we have no real way of dynamically detecting and adapting to threat vectors as they evolve — but that’s what we’ll start to see as machine learning and artificial intelligence enter the picture.

If you had to pick any other job, what would you want to do with your life?

I have a heavy engineering background, so I could see myself becoming a very vocal and customer-obsessed engineering manager. For a more drastic career change, I’d write novels—an ability that I’ve been developing in my free time.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Sam Koppes

Sam is a Senior Product Manager at Amazon Web Services. He currently works on AWS CloudTrail and has worked on AWS CloudFormation, as well. He has extensive experience in both the product management and engineering disciplines, and is passionate about making complex technical offerings easy to understand for customers.

AWS Security Profiles: Alana Lan, Software Development Engineer; Shane Xu, Technical Program Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-alana-lan-software-development-engineer-shane-xu-technical-program-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

Alana: I’m a software development engineer, and I’ve been here for a year and a half. I’m on the Security Assessment and Automation team. My team’s main purpose is to develop tools that help internal teams save time. For example, we build tools to help people find resources for external customers, like information control frameworks. We also build services that aggregate data about AWS resources that other teams can use to identify critical resources.

Shane: I started around the same time as Alana — we’re part of the same team. I’m a Technical Program Manager, and my role is to perform deep-dives into different security domains to investigate the effectiveness of our controls and then propose ways to automate the monitoring and mitigation of those controls. I like to explain my role using a metaphor: If AWS Security is the guardian of the AWS Cloud, then the role of the Security Assurance team is to make sure the guardians have the right superpowers. And the goal of my team is to ensure those superpowers are automated and always monitored so that they’re always available when needed.

How do you explain your job to non-tech friends?

Alana: I tell people that there are many AWS services, and many teams working to make those services available globally. My work is to make the jobs of those teams easier with tools and resources that reduce manual effort and allow them to serve customers better.

Shane: I normally tell people that my role is related to security automation. Those two words tend to make sense to people. If they want more detail, I explain that my role is to automate the compliance managers out of the repetitive aspects of their jobs. Compliance managers cut tickets to request different kinds of evidence to show to auditors. My role is to automate this so that compliance managers don’t need to go through a long, manual process and so they can focus on more important tasks.

What are you currently working on that you’re excited about?

Alana: We’re working on a service that aggregates data about Amazon and AWS resources to provide ways to find relationships between these resources. We’re also experimenting with Amazon Neptune (a graph database) plus some new features of other services to help our teams help customers. Sometimes, SDEs seek us out for help with specific needs, and we try to encourage that: We want to emphasize how important security is. I like getting to work on a team that grapples with abstract concepts like “security” and “compliance.”

Shane: I’m working on an initiative to reduce the manual effort required for data center audits. We’re a cloud company, which means we have data centers all over the world, and they are critical infrastructure for AWS services and customer data. For compliance purposes, we need to do physical audits of all of those sites, and a typical approach would be to fly out to dozens of locations each year to examine the security and environmental controls we have in place. I’m working on a project that’s less manual and resource-heavy.

You’re involved with this year’s Security Jam at re:Invent. What’s a Security Jam?

Shane: The Security Jam is basically a hackathon. It’s an all-day event from 8 AM to 4 PM that includes a dozen challenges (one of which Alana and I are hosting). The doors open at 7 AM at the MGM Studio Ballroom, and you can sign up as a group, or we’ll randomly pair you as needed. Your team works through as many of the challenges as possible, with the goal of getting the high score. The challenges are intended to provide hands-on experience with how to use AWS services and configure them to make sure your environment is secure. The Jams are structured to accommodate AWS users of all levels.

What’s your Security Jam challenge about?

Shane: Last year, our challenge focused on ensuring an environment was secure and compliant. This year, we’re taking it one step further by focusing on continuous monitoring. It’s a challenge that’s relevant whether you’re a small company or a large enterprise: You can’t realistically have one person sitting in front of a dashboard 24/7. You need to find a way to continuously monitor your resources so that at any time, when a new resource becomes available and older ones are deprecated, you have an up-to-date snapshot of your compliance environment. For the Security Jam challenge, I provide a proof of concept that lets participants use AWS Config to configure some out-of-the-box rules (or develop new rules) to provide continuous monitoring of their environment. We’ve also added an API around this for people like compliance managers, who might not have a technical background but need to be able to easily get a report if they need it.

Alana: Customers have reported that AWS Config is very useful, so we built the challenge to expose more people to the service. It will give participants a foundation that they can use in the future to protect their data or services. It’s a starting point.
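
A minimal sketch of the kind of on-demand compliance report Shane describes, assuming the boto3 SDK, credentials with read access to AWS Config, and rules already enabled in the account (this is an illustration, not the Jam’s actual API):

```python
import boto3

config = boto3.client("config")

# Summarize pass/fail status for each AWS Config rule in the account --
# the kind of report a compliance manager might pull on demand.
response = config.describe_compliance_by_config_rule()
for rule in response["ComplianceByConfigRules"]:
    print(rule["ConfigRuleName"], rule["Compliance"]["ComplianceType"])
# For accounts with many rules, follow the response's NextToken to page
# through the full list.
```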

What knowledge or experience do you hope participants will gain by completing your challenge?

Alana: I want people to understand that AWS services are not difficult to use. For example, there are many open source AWS Lambda functions that can help protect your data with a few button clicks. Don’t be afraid to get started.

Shane: People sometimes think compliance is scary. I want the hands-on nature of the challenge to show people that we provide tools that will make your life, and your customers’ lives, easier. I also want people to learn ways of avoiding compliance fatigue. Automation makes it easier for you to focus on more innovative work. It’s the future of compliance.

In your opinion, what’s the biggest challenge facing cloud security and compliance right now?

Shane: The scope for compliance is getting larger and larger, and there will always be new revelations and new types of threats. Developing scalable solutions to help achieve compliance is an ongoing challenge, and one we can’t just throw human power at. That’s why automation is so important. The other challenge is that some people see compliance as a burden, when we want it to be an enabler. I want people to understand that it’s not just a regulation or a security best practice. Compliance is a way to enable growth.

Alana: If I worked on another team, I think it would have taken me several years to figure out how my daily job impacted the security and compliance of AWS as a whole. It’s hard to connect the coding of an individual project back to AWS Security. We’re encouraged to take trainings, and we know that it’s important to protect your data, but people don’t always understand why, exactly. It’s hard for individual contributors to get a sense of the big picture.

If you had to pick any other job, what would you want to do with your life?

Alana: If I wasn’t an SDE, I’d want to be a Data Scientist. I think it would be interesting to analyze data and figure out the trends.

Shane: I would really like to be involved in AI. There are so many unknowns right now, in terms of how to ensure AI that’s secure and ethical. I’d also like to be a teacher, or a university professor. When I was working on my Master’s degree, it was really difficult to get some practical skills, such as how to have a productive one-on-one with my manager, or what career paths are available in a security-related field. I like the idea of being able to use my industry experience to help other students.

What career advice do you have for someone just joining AWS?

Shane: There’s a lot of opportunity at AWS. During my first six months here, I was cautious: Because of my previous consulting background, I felt like I had to have a legit case to talk with leadership and take up their time. It’s certainly important that I value their time, but in general I’ve found people in senior positions to be very willing to engage with me. My advice is to not be afraid to reach out, grow your network, and learn new things.

Alana: I’d echo what Shane said. There are a lot of possibilities at AWS, so don’t be afraid to try something new.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Alana Lan

Alana is a Software Development Engineer at AWS. She’s responsible for building tools and services to help with the operations of AWS security and compliance controls. Currently, she is obsessed with exploring AWS Services.

Author

Shane Xu

Shane is a Technical Program Manager for Security Assessment and Automation at AWS. Shane brings together people, technology, and processes to invent and simplify security and compliance automation solutions. He’s a passionate learner and curious explorer at work and in life.

AWS Security Profiles: Matt Bretan, Principal Manager, AWS Professional Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-matt-bretan-principal-manager-aws-professional-services/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS Professional Services for nearly five years. I run two teams: our Security Assurance and Advisory Practice team, and our Security Experience team. The Security Assurance and Advisory Practice team is responsible for working with our customers’ executive leadership to help them plan their security risk and compliance strategy when they move to AWS. Executives need to understand how to organize their teams and what tools and mechanisms they need in order to meet expected regulatory or policy-based controls. We help with that. It’s a relatively new team that we started up in early 2018.

The Security Experience team is responsible for our Jam platform, which is changing the way we help customers learn about AWS services and partners. Previously, when we went to a customer, we gave slide presentations about how to be secure on AWS and how to migrate to the cloud. At the end of the presentation, people could usually repeat definitions back at us, but when we put them in front of a keyboard and monitor, they were uncertain about what to do. So, we built out the Jam platform, which allows customers to get hands-on experiences across a wide variety of AWS services, plus some partner products as well. It’s a highly gamified way to learn.

What’s the most challenging part of your job?

How to scale our offerings. A lot of what we do is work one-on-one with our customers. Part of my job is to figure out how to impact more customers. We don’t just want to work with the largest companies in the world; we want to help all companies be more secure. So, I’m constantly asking myself how to create tools and offerings that are scalable enough to impact everyone, and that everyone can benefit from.

What are you currently working on that you’re excited about?

The Jam platform. It allows us to change the way that customers experience AWS, and the way that they learn about moving to the cloud. It’s a different way to think about learning — gamifying the cloud adoption process helps people actually experience the technology. It’s not just definitions on a slide deck anymore. People get to see the capabilities of AWS in action, and they’ll have that Jam experience as a foundation once they start building their own infrastructure.

What can people expect from your teams at re:Invent this year?

The Jam Lounge will be in the Tundra Lounge within the Partner Expo Center at the Venetian. You’ll be able to register for the Jam Lounge there, and from Monday night through Thursday night, you can take part in a number of challenges — everything from security to migration to data analytics. We’ll be showcasing five partner solutions as well. The cool thing about the Jam Lounge is that it’s a completely virtual event. Once you register for the event in the Partner Expo Center, you can take part in the challenges from anywhere at re:Invent. This means that you can gain hands-on experience with AWS and our partner solutions in between the other amazing sessions and activities that go on during re:Invent.

The Security Jam takes place on Thursday, and it’s purely security-focused. We’ll have 13 different challenges: 10 built specifically around AWS services and 3 from partners, and they’ll highlight different cloud security scenarios that people might encounter on a day-to-day basis. You’ll get to go into AWS accounts that we provision for you, identify what’s wrong, and then fix the accounts to get them into a known good state.

We’re also hosting the Executive Security Simulation as part of the executive track. That one is a tabletop exercise to help attendees experience and think about security from a high level. We simulate the first two years in a company’s life as they adopt the cloud — including some of the decisions they have to make in this process — so that people can think through security adoption from a lens that’s less about technical implementation and more about high-level strategy.

You mentioned that the Security Jam is an example of gamified learning. Can you talk more about what that means?

People love the hands-on application of learning: Rather than reading definitions, you get to use the technology and experience it. And that’s what gamification does: It gives you the actual infrastructure with an actual problem, and you get to go in and fix it. Also, it plays well to people’s competitive side. We set participants up in teams, and you have to work together to solve problems and win. There’s a leaderboard and scoring with points and clues. Anyone can participate, get what they need out of it, have fun doing it, and feel successful at the end of the day. This is the third year we’ve run a Jam at re:Invent, and we’re excited to have everyone try brand-new challenges and learn about new services and ways to do things on AWS.

Any tips for first-time conference attendees?

This conference is a marathon and not a sprint! There are so many great sessions and activities that go on during the week, so spend a little bit of time now reviewing the agenda and figuring out what is most important for you to attend. Prioritize those items, and then make sure to leave some time for surprise announcements! For the Jam sessions, you actually get to interact with AWS and our partner solutions, so bring your laptop. But also, come with an open mind. I think the big thing here is that re:Invent is a learning event. But for our events, at the end, there are prizes!

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

I think a lot of the changes will be around the requirements themselves. Today, many of the requirements in the compliance space center around specific technologies, rather than around the risk itself. Often, these programs are also primarily written around a traditional data center model where someone deploys an application onto a server and then doesn’t touch it for years. I think as compliance programs mature, we’ll shift to more of a risk-based process that puts the overall security and protection of customers first while taking into account how technology is constantly changing.

What does cloud security mean to you, personally?

I use technology: I stream videos, I do online banking, I buy things online, and I have an IoT-connected house. So, for me, cloud security is a way to protect my own interests and the interests of my family. I’m using these companies — often customers of ours — on a day-to-day basis. So the more I can do to ensure that they’re being secure with their implementations, the more secure I’ll be in the long run — and the more secure all consumers will be. The more I can do to proactively make it difficult for malicious parties to do harm, the safer and better all of our lives will be.

If you had to pick any other job, what would you want to do with your life?

My passion is building things. If I were to switch careers, I think I’d want to build physical structures, like houses or buildings. I believe there is a strong similarity between the work I do now around helping design security controls and the work that architects do when they design buildings. There are risks around building physical structures. You have to deal with things like lateral loads and entrance and exit controls. Technology involves a different kind of load, but in both cases, you have to go through a process of preparing for it and understanding it. I find that similarity fascinating.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Matt Bretan

Matt travels the world helping customers move their most sensitive workloads onto AWS while trying to find the best airline snack. He won’t stop until he has figured out how to help everyone. When not working with customers, he is at home with his beautiful wife and three wonderful kids.

How federal agencies can leverage AWS to extend CDM programs and CIO Metric Reporting

Post Syndicated from Darren House original https://aws.amazon.com/blogs/security/how-federal-agencies-can-leverage-aws-to-extend-cdm-programs-and-cio-metric-reporting/

Continuous Diagnostics and Mitigation (CDM), a U.S. Department of Homeland Security cybersecurity program, is gaining new visibility as part of the federal government’s overall focus on securing its information and networks. How an agency performs against CDM will soon be regularly reported in the updated Federal Information Technology Acquisition Reform Act (FITARA) scorecard. That’s in addition to updates in the President’s Management Agenda. Because of this additional scrutiny, there are many questions about how cloud service providers can enable the CDM journey.

This blog post will explain how you can implement a CDM program—or extend an existing one—within your AWS environment, and how you can use AWS capabilities to provide real-time CDM compliance and FISMA reporting.

When it comes to compliance for departments and agencies, the AWS Shared Responsibility Model describes how AWS is responsible for security of the cloud and the customer is responsible for security in the cloud. The Shared Responsibility Model is segmented into three categories: (1) infrastructure services (see Figure 1 below), (2) container services, and (3) abstracted services, each having a different level of controls customers can inherit to minimize their effort in managing and maintaining compliance and audit requirements. For example, when designing your workloads in AWS that fall under infrastructure services in the Shared Responsibility Model, AWS helps relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. This also relates to the IT controls you can inherit through AWS compliance programs. Before their journey to the AWS Cloud, a customer may have been responsible for the entire control set in their compliance and auditing program. With AWS, you can inherit controls from AWS compliance programs, allowing you to focus on workloads and data you put in the cloud.

Figure 1: AWS Infrastructure Services

For example, if you deploy infrastructure services such as Amazon Virtual Private Cloud (Amazon VPC) networking and Amazon Elastic Compute Cloud (Amazon EC2) instances, you can base your CDM compliance controls on the AWS controls you inherit for network infrastructure, virtualization, and physical security. You would be responsible for managing things like change management for Amazon EC2 AMIs, the operating system and its patching, AWS Config management, AWS Identity and Access Management (IAM), and encryption of data at rest and in transit.

If you deploy container services such as Amazon Relational Database Service (Amazon RDS) or Amazon EMR that build on top of infrastructure services, AWS manages the underlying infrastructure virtualization, physical controls, and the OS and associated IT controls, like patching and change management. You inherit the IT security controls from AWS compliance programs and can use them as artifacts in your compliance program. For example, you can download from AWS Artifact a SOC report covering our 62 SOC-compliant services. You are still responsible for the controls of what you put in the cloud.

Another example is if you deploy abstracted services such as Amazon Simple Storage Service (S3) or Amazon DynamoDB. AWS is responsible for the IT controls applicable to the services we provide. You are responsible for managing your data, classifying your assets, using IAM to apply permissions to individual resources at the platform level, or managing permissions based on user identity or user responsibility at the IAM user/group level.

For agencies struggling to comply with FISMA requirements and thus CDM reporting, leveraging abstracted and container services means you now have the ability to inherit more controls from AWS, reducing the resource drain and cost of FISMA compliance.

So, what does this all mean for your CDM program on AWS? It means that AWS can help agencies realize CDM’s vision of near-real time FISMA reporting for infrastructure, container, and abstracted services. The following paragraphs explain how you can leverage existing AWS Services and solutions that support CDM.

1.0 Identify

AWS can meet CDM Asset Management (AM) requirements 1.1 – 1.3 by using AWS Config to track AWS resource configurations, the resource groups tagging API to manage FISMA classifications of your cloud infrastructure, and automation to tag Amazon EC2 resources.
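
As a hedged sketch of the tagging piece, the snippet below applies a FISMA classification tag through the resource groups tagging API with boto3; the account ID, instance ARN, and tag values are hypothetical placeholders.

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Stamp a FISMA classification on a resource so downstream reporting
# (for example, AWS Config queries) can group assets by classification.
response = tagging.tag_resources(
    ResourceARNList=[
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example",
    ],
    Tags={"fisma-classification": "Moderate"},
)
print(response.get("FailedResourcesMap", {}))  # empty dict means success
```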

For AM 1.4, AWS Config identifies all cloud assets in your AWS account, and that inventory can be stored in a DynamoDB table in the required reporting format. AWS Config rules can enforce and report on compliance, ensuring all Amazon Elastic Block Store (EBS) volumes (block-level storage) and Amazon S3 buckets (object storage) are encrypted.
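
For the encryption checks, here is a minimal boto3 sketch that enables two AWS managed Config rules; the rule names are arbitrary, while the source identifiers are the documented managed rules for EBS volume encryption and S3 default encryption.

```python
import boto3

config = boto3.client("config")

# Enable AWS managed rules that flag unencrypted EBS volumes and
# S3 buckets without default server-side encryption.
managed_rules = [
    ("ebs-volumes-encrypted", "ENCRYPTED_VOLUMES"),
    ("s3-default-encryption", "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"),
]
for name, identifier in managed_rules:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
    )
```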

2.0 Protect

For CIO metrics 2.1, 2.2, and 2.14, Amazon Inspector assesses instances against Common Vulnerabilities and Exposures (CVEs) and scores findings using the National Vulnerability Database (NVD) Common Vulnerability Scoring System (CVSS). Using Amazon Inspector, customers can set up continuous golden AMI vulnerability assessments, then take the Inspector findings and store them in a CIO metrics DynamoDB table for reporting.
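
One hedged way to wire that up with boto3 is sketched below; the assessment run ARN and table name are placeholders, and only a few finding fields are copied.

```python
import boto3

inspector = boto3.client("inspector")
table = boto3.resource("dynamodb").Table("cio-metrics")  # hypothetical table

# Copy findings from a completed assessment run into a reporting table.
# describe_findings accepts at most 10 ARNs per call, so batch accordingly.
run_arn = "arn:aws:inspector:us-east-1:111122223333:target/0-example/template/0-example/run/0-example"
finding_arns = inspector.list_findings(assessmentRunArns=[run_arn])["findingArns"]

for i in range(0, len(finding_arns), 10):
    batch = inspector.describe_findings(findingArns=finding_arns[i:i + 10])
    for finding in batch["findings"]:
        table.put_item(Item={
            "FindingArn": finding["arn"],
            "Severity": finding["severity"],
            "Title": finding["title"],
        })
```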

For metrics 2.3 – 2.7, AWS provides federation with on-premises Active Directory (AD) services, allowing customers to map IAM roles to AD groups, report on IAM privileged system access, and identify which IAM roles have access to particular services.

To provide reporting for metric 2.10.1, AWS offers FIPS 140-2 validated endpoints for US East/West regions and AWS GovCloud (US).

For metric 2.9, AWS Config provides relationship information for all resources, which means you can use AWS Config relationship data to report on CIO metrics focused on the segmentation of resources. AWS Config can identify things like the network interfaces, security groups (firewalls), subnets, and VPCs related to an instance.
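
A short boto3 sketch of reading those relationships; the instance ID is a placeholder.

```python
import boto3

config = boto3.client("config")

# Pull the latest configuration item for an instance and list the
# resources AWS Config relates to it (ENIs, security groups, subnet, VPC).
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0abcd1234example",  # hypothetical instance ID
    limit=1,
)
for rel in history["configurationItems"][0]["relationships"]:
    print(rel["relationshipName"], rel["resourceType"], rel.get("resourceId"))
```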

3.0 Detect

For detection metrics 3.3, 3.6, 3.8, 3.9, 3.11, and 3.12, AWS WAF (a web application firewall) and Amazon GuardDuty have native automation through Amazon CloudWatch, AWS Step Functions (orchestration), and AWS Lambda (serverless) to provide adaptive computing capabilities such as automating the application of AWS WAF rules, recovering from incidents, mitigating OWASP top 10 threats, and automating the import of third-party threat intelligence feeds.
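
To make that automation concrete, here is a hedged sketch of a Lambda handler; it assumes a CloudWatch Events rule that forwards GuardDuty network-connection findings to the function, plus a WAF IP set (the ID below is a placeholder) already referenced by a web ACL rule. Error handling and deduplication are omitted.

```python
import boto3

waf = boto3.client("waf")
IP_SET_ID = "example-ip-set-id"  # hypothetical IP set, created ahead of time

def handler(event, context):
    """Block the remote IP from a GuardDuty network-connection finding
    by inserting it into a WAF IP set referenced by a web ACL rule."""
    remote_ip = (event["detail"]["service"]["action"]
                 ["networkConnectionAction"]["remoteIpDetails"]["ipAddressV4"])
    waf.update_ip_set(
        IPSetId=IP_SET_ID,
        ChangeToken=waf.get_change_token()["ChangeToken"],
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": f"{remote_ip}/32"},
        }],
    )
```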

4.0 Respond

In the Respond phase, the focus is on you to develop policies and procedures for responding to cyber incidents. AWS provides capabilities to automate those policies and procedures in a policy-driven manner to improve detection times, shorten response times, and reduce the attack surface. FISMA defines "incident" as "an occurrence that (A) actually or imminently jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information or an information system; or (B) constitutes a violation or imminent threat of violation of law, security policies, security procedures, or acceptable use policies."

Enabling GuardDuty in your accounts and writing finding results to a reporting database complies with CIO "Respond" metrics 4.1 and 4.2. Requirement 4.3 mandates that you "automatically disable the system or relevant asset upon the detection of a given security violation or vulnerability," per NIST SP 800-53r4 IR-4(2). You can comply by enabling automation to remediate affected instances.
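
A minimal sketch of that kind of remediation, with hypothetical instance and security group IDs: the function isolates an instance on an empty "quarantine" security group (no inbound or outbound rules) and then stops it so it can be examined later.

```python
import boto3

ec2 = boto3.client("ec2")

def quarantine(instance_id, quarantine_sg_id):
    """Disable a compromised instance: cut its network access by swapping
    it onto an isolated security group, then stop it for forensics."""
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg_id])
    ec2.stop_instances(InstanceIds=[instance_id])

# Hypothetical IDs for illustration
quarantine("i-0abcd1234example", "sg-0123quarantine")
```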

5.0 Recover

Responding and recovering are closely related processes. Enabling automation to remediate instances also complies with 5.1 and 5.2, which focus on recovering from incidents. While it is your responsibility to develop an Information System Contingency Plan (ISCP), AWS can help you translate that plan into automated, machine-readable policy through AWS CloudFormation, and you can use AWS Config to report on the number of HVA systems for which an ISCP has been developed (5.1). Both 5.1 and 5.2 measure the "mean time for the organization to restore operations following the containment of a system intrusion or compromise." AWS multi-region architectures, inter-region VPC peering, S3 cross-region replication, AWS Landing Zones, and the global AWS network (including services like AWS Direct Connect gateway) let you build an architecture (5.1.1) in which each HVA system has an alternate processing site identified and provisioned, enabling recovery in minutes.
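
As one hedged illustration of that architecture, the snippet below enables S3 cross-region replication from a primary bucket to a recovery bucket in another region. Bucket names and the replication role are placeholders; versioning must already be enabled on both buckets, and the role needs the documented replication permissions.

```python
import boto3

s3 = boto3.client("s3")

# Replicate every object written to the primary bucket into a recovery
# bucket in another region, supporting an alternate processing site.
s3.put_bucket_replication(
    Bucket="hva-primary-bucket",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-all",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::hva-recovery-bucket"},
        }],
    },
)
```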

Conclusion

There are many ways to report and visualize the data for all the solutions identified. A simple way is to write sensor data to a DynamoDB table. You can enhance that basic reporting with advanced analytics and visualization by integrating the DynamoDB data with Amazon Athena. You can also log all your services directly to an Amazon S3 bucket, use Amazon QuickSight for dashboards, and create custom analyses from those dashboards. You could also enrich your CIO metric dashboard data with other normalized datasets.
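
As one hedged example of the Athena integration, the snippet below starts a query against a hypothetical table defined over the metrics data in S3; the database, table, and output location are all placeholders.

```python
import boto3

athena = boto3.client("athena")

# Aggregate finding counts by severity from a table built over the
# CIO metrics data. Results land in the S3 output location.
run = athena.start_query_execution(
    QueryString="SELECT severity, COUNT(*) AS total "
                "FROM cio_metrics GROUP BY severity",
    QueryExecutionContext={"Database": "cdm_reporting"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(run["QueryExecutionId"])  # poll get_query_execution for status
```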

The end result is that you can build a new CDM program or extend an existing one using AWS. We are relentlessly focused on innovating for you and providing a comprehensive platform of secure IT services to meet your unique needs. Federal customers required to comply with CDM can use these innovations to incorporate services like AWS Config and AWS Systems Manager to provide asset and software management for AWS and on-premises systems to answer the question, "What is on the network?"

For real-time compliance and reporting, you can leverage services like GuardDuty to analyze AWS CloudTrail, DNS, and VPC flow logs in your account so you really know "who is on your network" and "what is happening on the network." VPC, security groups, Amazon CloudFront, and AWS WAF allow you to protect the boundary of your environment, while services like AWS Key Management Service (KMS), AWS CloudHSM, AWS Certificate Manager (ACM), and HTTPS endpoints enable you to "protect the data on the network."

The services discussed in this blog post provide you with the ability to design a cloud architecture that can help you move from ad hoc to optimized in your CDM reporting. If you want more data, send us feedback, and please let us know how we can help with your CDM journey! If you have feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Darren House

Darren is a Senior Solutions Architect at AWS who is passionate about designing business- and mission-aligned cloud solutions that enable government customers to achieve objectives and improve desired outcomes. Outside work, Darren enjoys hiking, camping, motorcycles, and snowboarding.

AWS Security Profiles: Phil Rodrigues, Principal Security Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-phil-rodrigues-principal-security-solutions-architect/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’m a Principal Security Solutions Architect based in Sydney, Australia. I look after both Australia and New Zealand. I just had my two-year anniversary with AWS. As I tell new hires, the first few months at AWS are a blur; after 6-12 months you start to get some ideas, and then after a year or two you start to really own and implement your decisions. That’s the phase I’m in now. I’m working to figure out new ways to help customers with cloud security.

What are you currently working on that you’re excited about?

In Australia, AWS has a mature set of financial services customers who are leading the way in terms of how large, regulated institutions can consume cloud services at scale. Many Aussie banks started this process as soon as we opened the region six years ago. They’re over the first hump, in terms of understanding what’s appropriate to put into the cloud, how they should be controlling it, and how to get regulatory support for it. Now they’re looking to pick up steam and do this at scale. I’m excited to be a part of that process.

What’s the most challenging part of your job?

Among our customers’ senior leadership there’s still a difference of opinion on whether or not the public cloud is the right place to be running, say, critical banking workloads. Based on anecdotal evidence, I think we’re at a tipping point leading to broad adoption of public cloud for the industry’s most critical workloads. It’s challenging to figure out the right messaging that will resonate with the boards of large, multi-national banks to help them understand that the technology control benefits of the cloud are far superior when it comes to security.

What’s your favorite part of your job?

We had a private customer security event in Australia recently, and I realized something: We now have the chance to do things that security professionals have always wanted to do. That is, we can automatically apply the most secure configurations at scale, ubiquitously across all workloads, and we can build environments that are quick to respond to security problems and that can automatically fix those problems. For people in the security industry, that’s always been the dream, and it’s a dream that some of our customers are now able to realize. I love getting to hear from customers how AWS helped make that happen.

How did you choose your particular topic for re:Invent this year?

Myles Hosford and I are presenting a session called Top Cloud Security Myths – Dispelled! It’s a very practical session. We’ve talked with hundreds of customers about security over the past two years, and we’ve noticed the types of questions that they ask tend to follow a pattern that’s largely dependent on where they are in their cloud journey. Our talk covers these questions — from the simple to the complex. We want the talk to be accessible for people who are new to cloud security, but still interesting for people who have more experience. We hope we’ll be able to guide everyone through the journey, starting with basics like, “Why is AWS more secure than my data center?”, up through more advanced questions, like “How does AWS protect and prevent administrative access to the customer environment?”

What are you hoping that your audience will take away from it?

There are only a few 200-level talks on the Security track. Our session is for people who don’t have a high level of expertise in cloud security — people who aren’t planning to go to the 300- and 400-level builder talks — but who still have some important, foundational questions about how secure the cloud is and what AWS does to keep it secure. We’re hoping that someone who has questions about cloud security can come to the session and, in less than an hour, get a number of the answers that they need in order to make them more comfortable about migrating their most important workloads to the cloud.

Any tips for first-time conference attendees?

You’ll never see it all, so don’t exhaust yourself by trying to crisscross the entire length of the Strip. Focus on the sessions that will be the most beneficial to you, stay close to the people that you’d like to share the experience with, and enjoy it. This isn’t a scientific measure, but I estimate that last year I saw maybe 1% of re:Invent — so I tried to make it the best 1% that I could. You can catch up on new service announcements and talks later, via video.

What’s the most common misperception you encounter about cloud security?

One common misperception stems from the fact that cloud is a broad term. On one side of the spectrum, you have global hyperscale providers, but on the opposite end, you have small operations with what I’d call "a SaaS platform and a dream" who might sell business ideas to individual parts of a larger organization. The organization might want to process important information on the SaaS platform, but the provider doesn’t always have the experience to put the correct controls into place. Now, AWS does an awesome job of keeping the cloud itself secure, and we give customers a lot of options to create secure workloads, but many times, if an organization asks the SaaS provider if they’re secure, the SaaS provider says, "Of course we’re secure. We use AWS." They’ll give out AWS audit reports that show what AWS does to keep the cloud secure, but that’s not the full story. The software providers operating on top of AWS also play a role in keeping their customers’ data secure, and not all of these providers are following the same mature, rigorous processes that we follow — for example, undergoing external third-party audits. It’s important for AWS to be secure, but it’s also important for the ecosystem of partners building on top of us to be secure.

In your opinion, what’s the biggest challenge facing cloud security right now?

The number of complex choices that customers must make when deciding which of our services to use and how to configure them. We offer great guidance through best practices, Well-Architected reviews, and a number of other mechanisms that guide the industry, but our overall model is still that of providing building blocks that customers must assemble themselves. We hope customers are making great decisions regarding security configurations while they’re building, and we provide a number of tools to help them do this — as do a number of third-parties. But staying secure in the cloud still requires a lot of choices.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

I’m not losing much sleep over quantum computing and its impact on cryptography. I think that’s a while away. For me, the near future is more likely to feature developments like broad adoption of automated assurance. We’ll move away from paper-based, once-a-year audits to determine organizations’ technology risk, and toward taking advantage of persistent automation, near-instant visibility, and being able to react to things that happen in real time. I also think we’ll see a requirement for large organizations who want to move important workloads to the cloud to use security automation. Regulators and the external audit community have started to realize that automated security is possible, and they’ll push to require it. We’re already seeing a handful of examples in Australia, where regulators who understand the cloud are asking to see evidence of AWS best practices being applied. Some customers are also asking third-party auditors not to bring in a spreadsheet but rather to query the state of their security controls via an API in real time or through a dashboard. I think these trends will continue. The future will be very automated, and much more secure.

What does cloud security mean to you, personally?

My customer base in Australia includes banks, governments, healthcare, energy, telco, and utility organizations. For me, this drives home the realization that the cloud is the critical digital infrastructure of the future. I have a young family who will be using these services for a long time. They rely on the cloud either as the infrastructure underneath another service they’re consuming — including services as important as transportation and education — or else they access the cloud directly themselves. How we keep this infrastructure safe and secure, and how we keep people’s information private but available, affects my family.

Professionally, I’ve been interested in security since before it was a big business, and it’s rewarding to see stuff that we toiled on in the corner of a university lab two decades ago gaining attention and becoming best practice. At the same time, I think everyone who works in security thrives on the challenge that it’s not simple, it’s certainly not “done” yet, and there’s always someone on the other side trying to make it harder. What drives me is both that professional sense of competition, and the personal realization that getting it right impacts me and my family.

What’s the one thing a visitor should do on a trip to Sydney?

Australia is a fascinating place, and visitors tend to be struck by how physically beautiful it is. I agree; I think Sydney is one of the most beautiful cities in the world. My advice is to take a walk, whether along the Opera House, at Sydney Harbor, up through the botanical gardens, or along the beaches. Or take a ferry across to the Manly beachfront community to walk down the promenade. It’s easy to see the physical beauty of Sydney when you visit — just take a walk.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Phil Rodrigues

Phil Rodrigues is a Principal Security Solutions Architect for AWS based in Sydney, Australia. He works with AWS’s largest customers to improve their security, risk, and compliance in the cloud. Phil is a frequent speaker at AWS and cloud events across Australia. Prior to AWS, he worked for over 17 years in Information Security in the US, Europe, and Asia-Pacific.

AWS Security Profiles: Ken Beer, General Manager, AWS Key Management Service

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-ken-beer-general-manager-aws-key-management-service/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been here a little over six years. I’m the General Manager of AWS Key Management Service (AWS KMS).

How do you explain your job to non-tech friends?

For any kind of product development, you have the builders and the sellers. I manage all of the builders of the Key Management Service.

What are you currently working on that you’re excited about?

The work that gets me excited isn’t always about new features. There’s also essential work going on to keep AWS KMS up and running, which enables more and more people to use it whenever they want. We have to maintain a high level of availability, low latency, and a good customer experience 24 hours a day. Ensuring a good customer experience and providing operational excellence is a big source of adrenaline for service teams at AWS — you get to determine in real time when things aren’t going well, and then try to address the issue before customers notice.

In addition, we’re always looking for new features to add to enable customers to run new workloads in AWS. At a high level, my teams are responsible for ensuring that customers can easily encrypt all their data, whether it resides in an AWS service or not. To date, that’s primarily been an opt-in exercise for customers. Depending on the service, they might choose to have it encrypted — and, more specifically, they might choose to have it encrypted under keys that they have more control over. In a classic model of encryption, your data is encrypted at the storage layer — think of BitLocker on your laptop — and that’s it. But if you don’t understand how encryption works, then you don’t appreciate the fact that "encrypted by default" doesn’t necessarily provide the security you think it does if the person or application that has access to your encrypted data can also cause your keys to be used whenever it wants to decrypt your data. AWS KMS was invented to help provide that separation: Customers can control who has access to their keys and how AWS services or their own applications make use of those keys, in an easy, reliable, and low-cost way.

What’s the most challenging part of your job?

It depends on the project that’s in front of me. Sometimes it’s finding the right people. That’s always a challenge for any manager at AWS, considering that we’re still in a growth phase. In my case, finding people who meet the engineering bar for being great computer scientists is often not enough — I’ve got to find people who appreciate security and have a strong ethos for maintaining the confidentiality of customer data. That makes it tougher to find people who will be a good fit on my teams.

Outside of hiring good people, the biggest challenge is to minimize risk as we constantly improve the service. We’re trying to improve the feature set and the customer experience of our service APIs, which means that we’re always pushing new software — and every deployment introduces risk.

What’s your favorite part of your job?

Working with very smart, committed, passionate people.

How did you choose your particular topic for re:Invent this year?

For the past four years, I’ve given a re:Invent talk that offered an overview of how to encrypt things at AWS. When I started giving this talk, not every service supported encryption, AWS KMS was relatively new, and we were adding a lot of new features. This year, I was worried the presentation wouldn’t include enough new material, so I decided to broaden the scope. The new session is Data Protection: Encryption, Availability, Resiliency, and Durability, which I’ll be co-presenting with Peter O’Donnell, one of our solution architects. This session focuses on how to approach data security and how to think about access control of data in the cloud holistically, where encryption is just a part of the solution. When we talk to our customers directly, we often hear that they’re struggling to figure out which of their well-established on-premises security controls they should take with them to the cloud — and what new things they should be doing once they get there. We’re using the session to give people a sense of what it means to own logical access control, and of all the ways they can control access to AWS resources and their data within those resources.

Encryption is another access control mechanism that can provide strong confidentiality if used correctly. If a customer delegates to an AWS managed service to encrypt data at rest, the actual encipherment of data is going to happen on a server that they can’t touch. It’s going to use code that they don’t own. All of the encryption occurs in a black box, so to speak. AWS KMS gives customers the confidence to say, "In order to get access to this piece of data, not only does someone have to have permission to the encrypted data itself in storage, that person also has to have permission to use the right decryption key." Customers now have two independent access control mechanisms, as a belt-and-suspenders approach for stronger data security.
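
To make that belt-and-suspenders idea concrete, here is a hedged sketch of writing an S3 object under a customer managed KMS key with boto3; the bucket name, object key, and KMS key ARN are placeholders. Reading the object back later requires both s3:GetObject on the object and kms:Decrypt on the key, the two independent controls described above.

```python
import boto3

s3 = boto3.client("s3")
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"  # placeholder

# Write an object encrypted under a customer managed KMS key. A reader
# needs s3:GetObject on the object AND kms:Decrypt on this key.
s3.put_object(
    Bucket="example-bucket",
    Key="customer-data/record.json",
    Body=b'{"example": true}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KMS_KEY_ARN,
)
```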

What are you hoping that your audience will take away from it?

I want people to think about the classification of their data and which access control mechanisms they should apply to it. In many cases, a given AWS service won’t let you apply a specific classification to an individual piece of data. It’ll be applied to a collection or a container of data — for example, a database. I want people to think about how they’re going to define the containers and resources that hold their data, how they want to organize them, and how they’re going to manage access to create, modify, and delete them. People should focus on the data itself as opposed to the physical connections and physical topology of the network since with most AWS services they can’t control that topology or network security — AWS does it all for them.

Any tips for first-time conference attendees?

Wear comfortable shoes. Because so many different hotels are involved, getting from point A to point B often requires walking at a brisk pace. We hope a renewed investment in shuttle buses will help make transitions easier.

What’s the most common misperception you encounter about encryption in the cloud?

I encounter a lot of people who think, “If I use my cloud provider’s encryption services, then they must have access to my data.” That’s the most common misperception. AWS services that integrate with AWS KMS are designed so that AWS does not have access to your data unless you explicitly give us that access. This can be a hard concept to grasp for some, but we put a lot of effort into the secure design of our encryption features and we hold ourselves accountable to that design with all the compliance schemes we follow. After that, I see a lot of people under the impression that there’s a huge performance penalty for using encryption. This is often based on experiences from years ago: At the time, a lot of CPU cycles were spent on encryption, which meant they weren’t available for interesting things like database searches or vending web pages. Using the latest hardware in the cloud, that’s mostly changed. While there’s a non-zero cost to doing encryption (it’s math and physics, after all), AWS can hide a lot of that and absorb the overhead on behalf of customers. Especially when customers are trying to do full-disk encryptions for workloads running on Amazon Elastic Compute Cloud (Amazon EC2) with Amazon Elastic Block Store (Amazon EBS), we actually perform the encryption on dedicated hardware that’s not exposed to the customer’s memory space or compute. We’re minimizing the perceived latency of encryption, and we all but erase the performance cost in terms of CPU cycles for customers.

What are some of the blockers that customers face when it comes to using cryptography?

There are some customers who’ve heard encryption is a good idea — but every time they’ve looked at it, they’ve decided that it’s too hard or too expensive. A lot of times, that’s because they’ve brought in a consultant or vendor who’s influenced them to think that it would be expensive, not just from a licensing standpoint but also in terms of having people on staff who understand how to do it right. We’d like to convince those customers that they can take advantage of encryption, and that it’s incredibly easy in AWS. We make sure it’s done right, and in a way that doesn’t introduce new risks for their data.

There are other customers, like banks and governments, who have been doing encryption for years. They don’t realize that we’ve made encryption better, faster, and cheaper. AWS has hundreds of people tasked with making sure encryption works properly for all of the millions of AWS customers. Most companies don’t have hundreds of people on staff who care about encryption and key management the way we do. These companies should absolutely perform due diligence and force us to prove that our security controls are in place and they do what we claim they do. We’ve found the customers that have done this diligence understand that we’re providing a consistent way to enforce the use of encryption across all of their workloads. We’re also on the cutting edge of trying to protect them against tomorrow’s encryption-related problems, such as quantum-safe cryptography.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a couple of changes. The first is that we’ll see more customers use encryption by default, making encryption a critical part of their operational security. It won’t just be used in regulated industries or by very large companies.

The second change is more fundamental, and has to do with a perceived threat to some of today’s cryptography: There’s some evidence that quantum computing will become affordable and usable at some point in time — although it’s unclear if that time is 5 or 50 years away. But when it comes, it will make certain types of cryptography very weak, including the kind we use for data in transit security protocols like HTTPS and TLS. The industry is currently working on what’s called quantum-safe or post-quantum cryptography, in which you use different algorithms and different key sizes to provide the same level of security that we have today, even in the face of an adversary that has a quantum computer and can capture your communications. As encryption algorithms and protocols evolve to address this potential future risk, we’ll see a shift in the way our devices connect to each other. Our phones, our laptops, and our servers will adopt this new technology to ensure privacy in our communications.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Ken Beer

Ken is the General Manager of the AWS Key Management Service. Ken has worked in identity and access management, encryption, and key management for over 6 years at AWS. Before joining AWS, Ken was in charge of the network security business at Trend Micro. Before Trend Micro, he was at Tumbleweed Communications. Ken has spoken on a variety of security topics at events such as the RSA Conference, the DoD PKI User’s Forum, and AWS re:Invent.