Tag Archives: security

AWS Security releases IoT security whitepaper

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/aws-security-releases-iot-security-whitepaper/

We’ve published a whitepaper, Securing Internet of Things (IoT) with AWS, to help you understand and address data security as it relates to your IoT devices and the data generated by them. The whitepaper is intended for a broad audience interested in learning about AWS IoT security capabilities at a service-specific level, as well as for compliance, security, and public policy professionals.

IoT technologies connect devices and people in a multitude of ways and are used across industries. For example, IoT can help manage thermostats remotely across buildings in a city, efficiently control hundreds of wind turbines, or operate autonomous vehicles more safely. With all of the different types of devices and the data they transmit, security is a top concern.

The specific challenges that IoT technologies present have piqued the interest of governments worldwide, which are currently assessing what new regulatory requirements, if any, should take shape to keep pace with IoT innovation and the general problem of securing data. As a specific example, this whitepaper covers recent IoT-focused developments published by the National Institute of Standards and Technology (NIST) and the United Kingdom’s Code of Practice.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Momena Cheema

Momena is passionate about evangelizing the security and privacy capabilities of AWS services through the lens of global emerging technologies and trends, such as the Internet of Things, artificial intelligence, and machine learning, via written content, workshops, talks, and educational campaigns. Her goal is to bring the security and privacy benefits of the cloud to customers across industries in both the public and private sectors.

Door Pi Plus — door security system for the elderly

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/door-pi-plus-door-security-system-elderly/

13-year-old Freddie from Monmouthshire has gained national attention for his incredible award-winning invention Door Pi Plus.


Door security system

Freddie spent more than twelve months building a door security system for the elderly, inspired by the desire to help his great-aunt feel more secure at home.

The invention keeps the door locked until the camera recognises the face of a family member, at which point the lock can be opened. Freddie used a Raspberry Pi to enable facial recognition technology in his impressive project.

“I’ve been building this project on and off for a year now,” says Freddie. “I started coding at my primary school Code Club, but now I mainly code at home.”

Coolest Projects UK

Freddie took part in this year’s Coolest Projects UK, entering the Hardware category of the world-leading showcase for young innovators who make stuff with technology.

Mark Feltham on Twitter

The amazing Freddie explaining his security system for dementia sufferers at #coolestprojects @Raspberry_Pi facial recognition, PIR and RFID hooked up to lock through relays, coded in #python. He’s 13… #blownaway

Martin O’Hanlon of the Raspberry Pi Foundation, and a judge at Coolest Projects UK, commented: “I was blown away by the Door Pi Plus. The motivation to create something which would help others was clear, but the technical aspects of the project also really stood out, integrating lots of different technologies and making skills.

“The project used multiple Raspberry Pis to control an RFID reader, electronic door lock mechanism, cameras, motion sensors, and audio playback. The whole system sent messages to Freddie to ensure that his great-aunt would be safe and that she could get help if she needed it.”

Freddie won his Coolest Projects category to much acclaim, and went on to win the award for Junior Engineer of the Year at the Big Bang Fair and the Siemens Digital Skills Award!

Inspired by his experience making, he is now encouraging other young people to learn to code and start to make their own creations.

“Coding is cool because you can invent cool things to help you and other people around you. I do think more kids should code, because lots of the jobs in the future are probably going to involve coding.”

Coolest Projects International

Freddie will participate in Coolest Projects International next, for which he won a special bursary as part of his award for winning the UK event’s Hardware category.

Not one to shy away from a challenge, Freddie decided to build a new project for the event! It’s called Safe Kids, and it’s a speed camera and ANPR system, to be installed outside primary schools.

He will be showcasing his new creation at Coolest Projects International in the RDS, Dublin on 5 May, alongside hundreds of young coders from around the globe.

Want to share your creation with the world too?

Then register your project idea for Coolest Projects International before the 14 April deadline, and get building for the event.

Participants of all ages and skill levels, and projects using all types of technology and hardware are encouraged!

The post Door Pi Plus — door security system for the elderly appeared first on Raspberry Pi.

Guidelines for protecting your AWS account while using programmatic access

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

One of the most important things you can do as a customer to ensure the security of your resources is to maintain careful control over who has access to them. This is especially true if any of your AWS users have programmatic access. Programmatic access allows you to invoke actions on your AWS resources either through an application that you write or through a third-party tool. You use an access key ID and a secret access key to sign your requests for authorization to AWS. Programmatic access can be quite powerful, so implementing best practices to protect access key IDs and secret access keys is important in order to prevent accidental or malicious account activity. In this post, I’ll highlight some general guidelines to help you protect your account, as well as some of the options you have when you need to provide programmatic access to your AWS resources.

Protect your root account

Your AWS root account—the account that’s created when you initially sign up with AWS—has unrestricted access to all your AWS resources. There’s no way to limit permissions on a root account. For this reason, AWS always recommends that you do not generate access keys for your root account. This would give your users the power to do things like close the entire account—an ability that they probably don’t need. Instead, you should create individual AWS Identity and Access Management (IAM) users, then grant each user permissions based on the principle of least privilege: Grant them only the permissions required to perform their assigned tasks. To more easily manage the permissions of multiple IAM users, you should assign users with the same permissions to an IAM group.

Your root account should always be protected by Multi-Factor Authentication (MFA). This additional layer of security helps protect against unauthorized logins to your account by requiring two factors: something you know (a password) and something you have (for example, an MFA device). AWS supports virtual and hardware MFA devices, U2F security keys, and SMS text message-based MFA.

Decide how to grant access to your AWS account

To allow users access to the AWS Management Console and AWS Command Line Interface (AWS CLI), you have two options. The first is to create identities and allow users to log in using a username and password managed by the IAM service. The second is to use federation to allow your users to use their existing corporate credentials to log in to the AWS console and CLI.

Each approach has its use cases. Federation is generally better for enterprises that have an existing central directory or that expect to need more than the current limit of 5,000 IAM users.

Note: Access to all AWS accounts is managed by AWS IAM. Regardless of the approach you choose, make sure to familiarize yourself with and follow IAM best practices.

Decide when to use access keys

Applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

However, if the resources that need programmatic access are running inside AWS, the best practice is to use IAM roles instead. An IAM role is a defined set of permissions—it’s not associated with a specific user or group. Instead, any trusted entity can assume the role to perform a specific business task.

By utilizing roles, you can grant a resource access without hardcoding an access key ID and secret access key into the configuration file. For example, you can grant an Amazon Elastic Compute Cloud (EC2) instance access to an Amazon Simple Storage Service (Amazon S3) bucket by attaching a role with a policy that defines this access to the EC2 instance. This approach improves your security, as IAM will dynamically manage the credentials for you with temporary credentials that are rotated automatically.

Grant least privileges to service accounts

If you decided to create service accounts (that is, accounts used for programmatic access by applications running outside of the AWS environment) and generate access keys for them, you should create a dedicated service account for each use case. This will allow you to restrict the associated policy to only the permissions needed for the particular use case, limiting the blast radius if the credentials are compromised. For example, if a monitoring tool and a release management tool both require access to your AWS environment, create two separate service accounts with two separate policies that define the minimum set of permissions for each tool.

In addition to this, it’s also a best practice to add conditions to the policy that further restrict access—such as restricting access to only the source IP address range of your clients.

Below is an example policy that represents least privilege. It grants the needed permission (PutObject) on a specific resource (an S3 bucket named “examplebucket”) while adding a further condition: the client must come from a specified source IP range.

{
    "Version": "2012-10-17",
    "Id": "S3PolicyRestrictPut",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": ""}
            }
        }
    ]
}
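
If you create a dedicated service account per use case, it can help to generate such policies programmatically rather than hand-editing JSON each time. Here's a minimal sketch in Python; the bucket name and IP range passed in below are placeholders, not values from this post:

```python
import json

def make_put_only_policy(bucket: str, source_cidr: str) -> str:
    """Build a least-privilege S3 policy: PutObject on one bucket,
    restricted to a single source IP range via a Condition block."""
    policy = {
        "Version": "2012-10-17",
        "Id": "S3PolicyRestrictPut",
        "Statement": [
            {
                "Sid": "IPAllow",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {"IpAddress": {"aws:SourceIp": source_cidr}},
            }
        ],
    }
    return json.dumps(policy, indent=4)

# 203.0.113.0/24 is a documentation-only range, used here as a placeholder.
print(make_put_only_policy("examplebucket", "203.0.113.0/24"))
```

Keeping the generator in code makes it easy to review that every service account gets exactly one action, one resource, and one source restriction.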

Use temporary credentials from AWS STS

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for use in your code, CLI, or third-party tools. It allows you to assume an IAM role with which you have a trusted relationship and then generate temporary, time-limited credentials based on the permissions associated with the role. These credentials can only be used during the validity period, which reduces your risk.

There are two ways to generate temporary credentials. You can generate them from the CLI, which is helpful when you need credentials for testing from your local machine or from an on-premises or third-party tool. You can also generate them from code using one of the AWS SDKs. This approach is helpful if you need credentials in your application, or if you have multiple user types that require different permission levels.

Create temporary credentials using the CLI

If you have access to the AWS CLI, you can use it to generate temporary credentials with limited permissions to use in your local testing or with third-party tools. To be able to use this approach, here’s what you need:

  • Access to the AWS CLI through your primary user account or through federation. To learn how to configure CLI access using your IAM credentials, follow this link. If you use federation, you still can use the CLI by following the instructions in this blog post.
  • An IAM role that represents the permissions needed for your test client. In the example below, I use “s3-read”. This role should have a policy attached that grants the least privileges needed for the use case.
  • A trusted relationship between the service role (“s3-read”) and your user account, to allow you to assume the service role and generate temporary credentials. Visit this link for the steps to create this trust relationship.

The example command below will generate a temporary access key ID and secret access key that are valid for 15 minutes, based on permissions associated with the role named “s3-read”. You can replace the values below with your own account number, service role, and duration, then use the secret access key and access key ID in your local clients.

aws sts assume-role --role-arn <arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read> --role-session-name <s3-access> --duration-seconds <900>

Here are my results from running the command:

{
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAIEGLQIIQUSJ2I5XRM:s3-access",
        "Arn": "arn:aws:sts::AWS-ACCOUNT-NUMBER:assumed-role/s3-read/s3-access"
    },
    "Credentials": {
        "SessionToken": "FQoGZXIvYXdzENr//////////<<REST-OF-TOKEN>>",
        "Expiration": "2018-11-02T16:46:23Z",
        "AccessKeyId": "ASIAXQZXUENECYQBAAQG"
    }
}
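
Once you have the response, you can turn the temporary credentials into environment variables that the CLI and SDKs pick up automatically. A minimal sketch (the field names match the STS assume-role response; the sample values below are placeholders, not real credentials):

```python
import json

def credentials_to_env(response_json: str) -> str:
    """Turn an `aws sts assume-role` JSON response into shell export
    lines that local tools and third-party clients can source."""
    creds = json.loads(response_json)["Credentials"]
    return "\n".join([
        f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}",
        f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}",
        f"export AWS_SESSION_TOKEN={creds['SessionToken']}",
    ])

# Placeholder values only; a real response also includes Expiration.
sample = json.dumps({"Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "EXAMPLE-SECRET",
    "SessionToken": "EXAMPLE-TOKEN",
}})
print(credentials_to_env(sample))
```

Because the credentials expire at the end of the session duration, any exports produced this way go stale on their own, which is the point of using STS in the first place.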

Create temporary credentials from your code

If you have an application that already uses the AWS SDK, you can use AWS STS to generate temporary credentials right from the code instead of hard-coding credentials into your configurations. This approach is recommended if you have client-side code that requires credentials, or if you have multiple types of users (for example, admins, power-users, and regular users) since it allows you to avoid hardcoding multiple sets of credentials for each user type.

For more information about using temporary credentials from the AWS SDK, visit this link.

Utilize Access Advisor

The IAM console provides information about when an AWS service was last accessed by different principals. This information is called service last accessed data.

Using this tool, you can view when an IAM user, group, role, or policy last attempted to access services to which they have permissions. Based on this information, you can decide if certain permissions need to be revoked or restricted further.

Make this tool part of your periodic security check. Use it to evaluate the permissions of all your IAM entities and to revoke unused permissions until they’re needed. You can also automate the process of periodic permissions evaluation using Access Advisor APIs. If you want to learn how, this blog post is a good starting point.
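
The evaluation Access Advisor supports amounts to a staleness check: permissions that haven't been exercised in some window are candidates for revocation. Here's a sketch of that logic; the record shape below is a simplified stand-in, not the real Access Advisor API response:

```python
from datetime import datetime, timedelta, timezone

def stale_services(records, days=90, now=None):
    """Flag services a principal is allowed to use but has not touched
    within `days`. `records` maps service name -> last-accessed datetime
    (None if never used)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return sorted(
        svc for svc, last in records.items()
        if last is None or last < cutoff
    )

# Hypothetical last-accessed data for one IAM role.
now = datetime(2019, 1, 1, tzinfo=timezone.utc)
records = {
    "s3": datetime(2018, 12, 20, tzinfo=timezone.utc),   # recently used
    "ec2": datetime(2018, 6, 1, tzinfo=timezone.utc),    # stale
    "dynamodb": None,                                    # never used
}
print(stale_services(records, days=90, now=now))  # ['dynamodb', 'ec2']
```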

Other tools for credentials management

While least privilege access and temporary credentials are important, it’s equally important that your users are managing their credentials properly—from rotation to storage. Below is a set of services and features that can help to securely store, retrieve, and rotate credentials.

AWS Systems Manager Parameter Store

AWS Systems Manager offers a capability called Parameter Store that provides secure, centralized storage for configuration parameters and secrets across your AWS account. You can store plain text or encrypted data like configuration parameters, credentials, and license keys. Once stored, you can configure granular access to specify who can obtain these parameters in your application, adding another layer of security to protect your data.

Parameter Store is a good choice for use cases in which you need hierarchical storage for configuration data management across your account. For example, you can store database access credentials (username and password) in Parameter Store, encrypt them with an encryption key managed by AWS Key Management Service, and grant EC2 instances running your application permissions to read and decrypt those credentials.
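
The hierarchical lookup Parameter Store provides (its GetParametersByPath operation returns everything under a path prefix) can be illustrated with an in-memory stand-in; the paths and values below are hypothetical, and real code would call the service through an SDK:

```python
def get_by_path(store, path):
    """Prefix lookup mimicking Parameter Store's GetParametersByPath:
    return all parameters under a hierarchy such as /prod/db/.
    `store` is an in-memory dict standing in for the real service."""
    prefix = path.rstrip("/") + "/"
    return {k: v for k, v in store.items() if k.startswith(prefix)}

store = {
    "/prod/db/username": "app_user",
    "/prod/db/password": "(SecureString, decrypted via a KMS key)",
    "/dev/db/username": "dev_user",
}
print(get_by_path(store, "/prod/db"))
```

Structuring names this way lets you grant an instance role read access to just one subtree (for example, only /prod/db/) rather than to every parameter in the account.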

For more information on using AWS Systems Manager Parameter Store, visit this link.

AWS Secrets Manager

AWS Secrets Manager is a service that allows you to centrally manage the lifecycle of secrets used in your organization, including rotation, audits, and access control. By enabling you to rotate secrets automatically, Secrets Manager can help you meet your security and compliance requirements. Secrets Manager also offers built-in integration for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS and can be extended to other services.

For more information about using AWS Secrets Manager to store and retrieve secrets, visit this link.

Amazon Cognito

Amazon Cognito lets you add user registration, sign-in, and access management features to your web and mobile applications.

Cognito can be used as an Identity Provider (IdP), where it stores and maintains users and credentials securely for your applications, or it can be integrated with OpenID Connect, SAML, and other popular web identity providers like Amazon.com.

Using Amazon Cognito, you can generate temporary access credentials for your clients to access AWS services, eliminating the need to store long-term credentials in client applications.

To learn more about using Amazon Cognito as an IdP, visit our developer guide to Amazon Cognito User Pools. If you’re interested in information about using Amazon Cognito with a third party IdP, review our guide to Amazon Cognito Identity Pools (Federated Identities).

AWS Trusted Advisor

AWS Trusted Advisor is a service that provides a real-time review of your AWS account and offers guidance on how to optimize your resources to reduce cost, increase performance, expand reliability, and improve security.

The Security section of AWS Trusted Advisor should be reviewed on a regular basis to evaluate the health of your AWS account. Currently, there are multiple security-specific checks, ranging from IAM access keys that haven’t been rotated to insecure security groups. Trusted Advisor is a tool to help you more easily perform a daily or weekly review of your AWS account.
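
A check similar to Trusted Advisor's key-rotation review can be sketched locally. In practice the creation dates would come from IAM's key listing; the key IDs and dates below are placeholders:

```python
from datetime import datetime, timedelta, timezone

def keys_needing_rotation(keys, max_age_days=90, now=None):
    """Return the key IDs older than max_age_days. Each entry in `keys`
    is a (key_id, create_date) pair."""
    now = now or datetime.now(timezone.utc)
    limit = timedelta(days=max_age_days)
    return [kid for kid, created in keys if now - created > limit]

now = datetime(2019, 1, 1, tzinfo=timezone.utc)
keys = [
    ("AKIAEXAMPLEOLD", datetime(2018, 1, 1, tzinfo=timezone.utc)),
    ("AKIAEXAMPLENEW", datetime(2018, 12, 15, tzinfo=timezone.utc)),
]
print(keys_needing_rotation(keys, now=now))  # ['AKIAEXAMPLEOLD']
```

Running a check like this on a schedule turns key rotation from a periodic manual review into an automated alert.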


Git Secrets

git-secrets, available from the AWS Labs GitHub account, helps you avoid committing passwords and other sensitive credentials to a git repository. It scans commits, commit messages, and --no-ff merges to prevent your users from inadvertently adding secrets to your repositories.
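
The core idea behind this kind of scanning can be sketched as a pattern match for access key IDs. git-secrets ships broader and more robust patterns; this is a minimal stand-in:

```python
import re

# Long-term AWS access key IDs start with "AKIA" followed by 16
# uppercase alphanumerics; the lookarounds avoid partial matches.
AKID_PATTERN = re.compile(r"(?<![A-Z0-9])AKIA[0-9A-Z]{16}(?![A-Z0-9])")

def find_leaked_keys(text: str) -> list:
    """Return any strings in `text` that look like AWS access key IDs."""
    return AKID_PATTERN.findall(text)

# AKIAIOSFODNN7EXAMPLE is AWS's documented placeholder key ID.
commit = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops'
print(find_leaked_keys(commit))  # ['AKIAIOSFODNN7EXAMPLE']
```

Wired into a pre-commit hook, a scan like this rejects the commit before the secret ever reaches the repository.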


In this blog post, I’ve introduced some options to replace long-term credentials in your applications with temporary access credentials that can be generated using various tools and services on the AWS platform. Using temporary credentials can reduce the risk of falling victim to a compromised environment, further protecting your business.

I also discussed the concept of least privilege and provided some helpful services and procedures to maintain and audit the permissions given to various identities in your environment.

If you have questions or feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mahmoud Matouk

Mahmoud is part of our worldwide public sector Solutions Architecture team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.


Joe Chapman

Joe is a Solutions Architect with Amazon Web Services. He primarily serves AWS EdTech customers, providing architectural guidance and best practice recommendations for new and existing workloads. Outside of work, he enjoys spending time with his wife and dog, and finding new adventures while traveling the world.

Add a layer of security for AWS SSO user portal sign-in with context-aware email-based verification

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/add-a-layer-of-security-for-aws-sso-user-portal-sign-in-with-context-aware-email-based-verification/

If you’re an IT administrator of a growing workforce, your users will require access to a growing number of business applications and AWS accounts. You can use AWS Single Sign-On (AWS SSO) to create and manage users centrally and grant access to AWS accounts and business applications, such as Salesforce, Box, and Slack. When you use AWS SSO, your users sign in to a central portal to access all of their AWS accounts and applications. Today, we launched email-based verification that provides an additional layer of security for users signing in to the AWS SSO user portal. AWS SSO supports a one-time passcode (OTP) sent to users’ email that they then use as a verification code during sign-in. When enabled, AWS SSO prompts users for their user name and password and then to enter a verification code that was sent to their email address. They need all three pieces of information to be able to sign in to the AWS SSO user portal.

You can enable email-based verification in context-aware or always-on mode. We recommend you enable email-based verification in context-aware mode for users created using the default AWS SSO directory. In this mode, users sign in easily with their username and password for most sign-ins, but must provide additional verification when their sign-in context changes, such as when signing in from a new device or an unknown location. Alternatively, if your company requires users to complete verification for every sign-in, you can use always-on mode.

In this post, I demonstrate how to enable verification in context-aware mode for users in your SSO directory using the AWS SSO console. I then demonstrate how to sign into the AWS SSO user portal using email-based verification.

Enable email-based verification in context-aware mode for users in your SSO directory

Before you enable email-based verification, you must ensure that all your users can access their email to retrieve their verification code. If your users require the AWS SSO user portal to access their email, do not enable email-based verification. For example, if you use AWS SSO to access Office 365, then your users may not be able to access their AWS SSO user portal when you enable email-based verification.

Follow these steps to enable email-based verification for users in your SSO directory:

  1. Sign in to the AWS SSO console. In the left navigation pane, select Settings, and then select Configure under the Two-step verification settings.
  2. Select Context-aware under Verification mode, and Email-based verification under Verification method, and then select Save changes.
    Figure 1: Select the verification mode and the verification method

  3. Before you choose to confirm the changes in the Enable email-based verification window, make sure that all your users can access their email to retrieve the verification code required to sign in to the AWS SSO user portal without signing in using AWS SSO. To confirm your choice, type CONFIRM (case-sensitive) in the text-entry field, and then select Confirm.
    Figure 2: The "Enable email-based verification" window

You’ll see that you successfully enabled email-based verification in context-aware mode for all users in your AWS SSO directory.

Figure 3: Verification of the settings

Next, I demonstrate how your users sign in to the AWS SSO user portal with email-based verification in addition to their username and password.

Sign in to the AWS SSO user portal with email-based verification

With email-based verification enabled in context-aware mode, users use the verification code sent to their email when there is a change in their sign-in context. Here’s how that works:

  1. Navigate to your AWS SSO user portal.
  2. Enter your email address and password, and then select Sign in.
    Figure 4: The "Single Sign-On" window

  3. If AWS detects a change in your sign-in context, you’ll receive an email with a 6-digit verification code that you will enter in the next step.
    Figure 5: Example verification email

  4. Enter the code in the Verification code box, and then select Sign in. If you haven’t received your verification code, select Resend email with a code to receive a new code, and be sure to check your spam folder. You can select This is a trusted device to mark your device as trusted so you don’t need to enter a verification code unless your sign-in context changes again, such as signing in from a new browser or an unknown location.
    Figure 6: Enter the verification code

The user can now access AWS accounts and business applications that the administrator has configured for them.


In this post, I shared the benefits of using email-based verification in context-aware mode. I demonstrated how you can enable email-based verification for your users through the SSO console. I also showed you how to sign into the AWS SSO user portal with email-based verification. You can also enable email-based verification for SSO users from your connected AD directory by following the process outlined above.

If you have comments, please submit them in the Comments section below. If you have issues enabling email-based verification for your users, start a thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity, and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity, and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. Get started with AWS Security Hub with just a few clicks in the Management Console; once enabled, Security Hub will begin aggregating and prioritizing findings. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, AWS IoT Greengrass now supports enhanced security through hardware-root-of-trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity, and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified when registration opens.
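The KMS custom key store item above maps to a small amount of code. The sketch below is illustrative rather than a complete setup: the key store ID is a placeholder assumption, the boto3 call appears only in a comment because it requires an active, connected AWS CloudHSM cluster, and the helper function simply builds the `create_key` parameters that route key material to a custom key store instead of the default KMS HSM fleet.

```python
# Sketch: creating a KMS key whose key material is generated in your own
# CloudHSM cluster via a custom key store. The ID below is a placeholder;
# a real setup needs a CloudHSM cluster and a connected custom key store.

def build_create_key_request(custom_key_store_id):
    """Parameters for kms.create_key() that place key material in a
    custom key store instead of the default KMS HSM fleet."""
    return {
        "Origin": "AWS_CLOUDHSM",            # generate key material in CloudHSM
        "CustomKeyStoreId": custom_key_store_id,
        "Description": "Key backed by a dedicated CloudHSM cluster",
    }

# With a connected key store, the call would look like:
#   import boto3
#   kms = boto3.client("kms")
#   response = kms.create_key(**build_create_key_request("cks-1234567890abcdef0"))
#   key_id = response["KeyMetadata"]["KeyId"]

params = build_create_key_request("cks-1234567890abcdef0")
print(params["Origin"])  # AWS_CLOUDHSM
```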

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission-critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, AWS WAF, Amazon S3, AWS CloudTrail, and AWS Config, to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.
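The end-to-end automated remediation workflow mentioned in point 2 can be sketched as a small Lambda-style handler. This is a hedged sketch, not a production remediation: the event shape is a simplified stand-in for what an AWS Config rule or S3 ACL lookup would provide, and the actual remediation call appears only as a comment because it requires boto3 and live credentials.

```python
import json

# Grantee URI that S3 ACLs use for the "everyone" group
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_readable(acl_grants):
    """True if any ACL grant opens the bucket to all users."""
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS_URI
        for grant in acl_grants
    )

def handler(event, context=None):
    """Lambda-style handler: inspect a (simplified) compliance event and
    decide whether to remediate. The real remediation would call
    s3.put_bucket_acl(Bucket=bucket, ACL="private") via boto3."""
    bucket = event["bucket"]
    if is_publicly_readable(event["grants"]):
        # boto3.client("s3").put_bucket_acl(Bucket=bucket, ACL="private")
        return {"bucket": bucket, "action": "reset-to-private"}
    return {"bucket": bucket, "action": "none"}

# Example: a bucket with a public READ grant gets flagged for remediation
event = {
    "bucket": "example-bucket",
    "grants": [{"Grantee": {"URI": ALL_USERS_URI}, "Permission": "READ"}],
}
print(json.dumps(handler(event)))
```

In a full workflow, AWS Config detects the drift, Lambda applies the fix, and CloudTrail records both the violation and the remediation for later audit.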

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Announcing the First AWS Security Conference: AWS re:Inforce 2019

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/announcing-the-first-aws-security-conference-aws-reinforce-2019/

On the eve of re:Invent 2018, I’m pleased to announce that AWS is launching our first conference dedicated to cloud security: AWS re:Inforce. The event will offer a deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools. Security is the top priority at AWS, and AWS re:Inforce is emblematic of our commitment to giving customers direct access to the latest security research and trends from subject matter experts, along with the opportunity to participate in hands-on exercises with our services.

The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099.

Over the course of this two-day conference we will offer multiple content tracks designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, risk and compliance officers, and more. Our technical track will offer detailed tactical education to take your security posture from reactive to proactive. We’ll also be offering a business enablement track tailored to assisting with your strategic migration decisions. You’ll find content delivered in a number of formats to meet a diversity of learning preferences, including keynotes, breakout sessions, Q&As, hands-on workshops, simulations, training and certification, as well as our interactive Security Jam. We anticipate 100+ sessions ranging in level from beginner to expert.

AWS re:Inforce will also feature our AWS Security Competency Partners, each of whom has demonstrated success in building products and solutions on AWS to support customers in multiple domains. With hundreds of industry-leading products, these partners will give you an opportunity to learn how to enhance the security of both on-premises and cloud environments.

Finally, you’ll find sessions built around the Security Pillar of the AWS Well-Architected Framework and the Security Perspective of our Cloud Adoption Framework (CAF). These will include Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, Security Services, and Incident Response. Our automated reasoning and cryptography researchers and scientists will also be available, as will our partners in the academic community, who will discuss Provable Security and additional emerging security trends.

If you’d like to sign up to be notified of when registration opens, please visit:


Additional information and registration details will be shared in early 2019; we look forward to seeing you all there!

– Steve Schmidt, Chief Information Security Officer
Follow Steve on Twitter.

AWS Security Profiles: Quint Van Deman, Principal Business Development Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-quint-van-deman-principal-business-development-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I joined AWS in August 2014. I spent my first two and a half years in the Professional Services group, where I ran around the world to help some of our largest customers sort through their security and identity implementations. For the last two years, I’ve parlayed that experience into my current role of Business Development Manager for the Identity and Directory Services group. I help the product development team build services and features that address the needs I’ve seen in our customer base. We’re working on the next generation of features that we think will radically simplify the way customers implement and manage identities and permissions within the cloud environment. The other key element of my job is to find and disseminate the most innovative solutions I’m seeing today across the broadest possible set of AWS customers to help them be more successful faster.

How do you explain your job to non-tech friends?

I keep one foot in the AWS service team organizations, where they build features, and one foot in day-to-day customer engagement to understand the real-world experiences of people using AWS. I learn about the similarities and differences between how these two groups operate, and then I help service teams understand these similarities and differences, as well.

You’re a “bar raiser” for the Security Blog. What does that role entail?

The notion of being a bar raiser has a lot of different facets at Amazon. The general concept is that, as we go about certain activities — whether hiring new employees or preparing blog posts — we send things past an outside party with no team biases. As a bar raiser for the Security Blog, I don’t have a lot of incentive to get posts out because of a deadline. My role is to make sure that nothing is published until it successfully addresses a customer need. At Amazon, we put the best customer experience first. As a bar raiser, I work to hold that line, even though it might not be the fastest approach, or the path of least resistance.

What’s the most challenging part of your job?

Ruthless prioritization. One of our leadership principles at Amazon is frugality. Sometimes, that means staying in cheap hotel rooms, but more often it means frugality of resources. In my case, I’ve been given the awesome charter to serve as the Business Development Manager for our suite of Identity and Directory Services. I’m something of a one-man army the world over. But that means a lot of things come past my desk, and I have to prioritize ruthlessly to ensure I’m focusing on the things that will be most impactful for our customers.

What’s your favorite part of your job?

A lot of our customers are doing an awesome job being bar raisers themselves. They’re pushing the envelope in terms of identity-focused solutions in their own AWS environments. One fulfilling part of my work is getting to collaborate with those customers who are on the leading edge: Their AWS field teams will get ahold of me, and then I get to do two really fun things. First, I get to dive in and help these customers succeed at whatever they’re trying to do. Second, I get to learn from them. I get to examine the really amazing ideas they’ve come up with and see if we might be able to generalize their solutions and roll them out to the delight of many more AWS customers that might not have teams mature enough to build them on their own. While my title is Business Development Manager, I’m a technologist through and through. Getting to dive into these thorny technical situations and see them resolve into really great solutions is extremely rewarding.

How did you choose your particular topics for re:Invent 2018?

Over the last year, I’ve talked with lots of customers and AWS field teams. My Mastering Identity at Every Layer of the Cake session was born out of the fact that I noticed a lot of folks doing a lot of work to get identity for AWS right, but other layers of identity that are just as important weren’t getting as much attention. I made it my mission to provide a more holistic understanding of what identity in the cloud means to these customers, and over time I developed ways of articulating the topic which really seemed to resonate. My session is about sharing this understanding more broadly. It’s a 400-level talk, since I want to really dive deep with my audience. I have five embedded demos, all of which are going to show how to combine multiple features, sprinkle in a bit of code, and apply them to near universally applicable customer use cases.

Why use the metaphor of a layer cake?

I’ve found that analogies and metaphors are very effective ways of grounding someone’s mental imagery when you’re trying to relay a complex topic. Last year, my metaphor was bridges. This year, I decided to go with cake: It’s actually very descriptive of the way that our customers need to think about Identity in AWS since there are multiple layers. (Also, who doesn’t like cake? It’s delicious.)

What are you hoping that your audience will take away from the session?

Customers are spending a lot of time getting identity right at the AWS layer. And that’s a ground-level, must-do task. I’m going to put a few new patterns in the audience’s hands to do this more effectively. But as a whole, we aren’t consistently putting as much effort into the infrastructure and application layers. That’s what I’m really hoping to expose people to. We have a wealth of features that really raise the bar in terms of cloud security and identity — from how users authenticate to operating systems or databases, to how they authenticate to the applications and APIs that they put on AWS. I want to expose these capabilities to folks and paint a vivid image for them of the really powerful things that they can do today that they couldn’t have done before.

What do you want your audience to do differently after attending your session?

During the session, I’ll be taking a handful of features that are really interesting in their own right, and combining them in a way that I hope will absolutely delight my audience. For example, I’ll show how you can take AWS CloudFormation macros and AWS Identity and Access Management, layer a little bit of customization on top, and come up with something far more magical than either of the two individually. It’s an advanced use case that, with very little effort, can disproportionately improve your security posture while letting your organization move faster. That’s just one example though, and the session is going to be loaded with them, including a grand finale. I’ve already started the work to open source a lot of what I’m going to show, but even where I can’t open source, I want to paint a very clear, prescriptive blueprint for how to get there. My goal is that my audience goes back to work on Monday and, within a couple of hours, they’ve measurably moved the security bar for their organization.

Any tips for first-time conference attendees?

Be deliberate about going outside of your comfort zone. If you’re not working in Security, come to one of our sessions. If you do work in Security, go to some other tracks, like DevOps or Analytics, to get that cross-pollination of ideas. One of the most amazing things about AWS is how it helps dramatically lower the barrier to entry for unfamiliar technology domains and tools. A developer can ship more secure code faster by being invested in security, and a security expert can disproportionately scale their impact by applying the tools of developers or data scientists. re:Invent is an amazing place to start exploring that diversity, and if you do, I suspect you’ll find ways to immediately make yourself better at your day job.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

Complexity and human understanding have always been at odds. I see initiatives across AWS that have all kinds of awesome innovation and computer science behind them. In the coming years, I think these will mature to the point that they will be able to offload much of the natural complexity that comes with securing large-scale environments with extremely fine-grained permissions. Folks will be able to provide very simple statements or rules for how they want their environment to be, and we should be able to manage the complexity for them, and present them with a nice, clean picture they can easily understand.

What does cloud security mean to you, personally?

I see possibilities today that were herculean tasks before. For example, making sure APIs can properly authenticate and authorize each other used to be an extremely elaborate process at scale. It became such an impossible mess that only the largest of organizations with the best skills, the best technology, and the best automation were really able to achieve it. Everyone else just had to punt or put a band-aid on the problem. But in the world of the cloud, all it takes is attaching an AWS IAM role on one side, and a fairly small resource-based policy to an Amazon API Gateway API on the other. Examples like this show how security that would once have been extremely difficult for most customers to afford or implement is becoming simple to configure, get right, and deploy ubiquitously, and that’s really powerful. It’s what keeps me passionate about my work.
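The IAM-role-plus-resource-policy pattern described above can be illustrated with a resource-based policy attached to an API Gateway API. This is a hedged sketch: the account ID, role name, region, and API ID below are placeholders, not values from this post. The policy simply allows callers assuming the named role to invoke any method on the API; everyone else is denied by default.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/caller-service-role"
      },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/*"
    }
  ]
}
```

On the calling side, the service just runs with the `caller-service-role` IAM role attached and signs its requests; no shared secrets or custom authentication code are needed.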

If you had to pick any other job, what would you want to do with your life?

I’ve got all kinds of wacky hobbies. I kiteboard, I surf, I work on massive renovation projects at home, I hike and camp in the backcountry, and I fly small airplanes. It’s an overwhelming set of hobbies that doesn’t align with my professional aptitude. But if the world were my oyster and I had to do something else, I would want to combine those hobbies into one single career that’s never before been seen.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.


Quint Van Deman

Quint is the global business development manager for AWS Identity and Directory services. In this role, he leads the incubation, scaling, and evolution of new and existing identity-based services, as well as field enablement and strategic customer advisement for the same. Before joining the BD team, Quint was an early member of the AWS Professional Services team, where he was a Senior Consultant leading cloud transformation teams at several prominent enterprise customers, and a company-wide subject matter expert on IAM and Identity federation.

AWS Security Profiles: Henrik Johansson, Principal, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-henrik-johansson-principal-office-of-the-ciso/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

As a Principal for the Office of the CISO, I not only get to spend time directly with our customers and their executives and operational teams, but I also get to work with our own service teams and other parts of our organization. Additionally, a big part of this role involves spending time with the industry as a whole, in both small and large settings, and trying to raise the bar for the overall industry together with a number of other teams within AWS.

How do you explain your job to non-tech friends?

Whether or not someone understands what the cloud is, I try to focus on the core part of the role: I help people and organizations understand AWS Security and what it means to operate securely on the cloud. And I focus on helping the industry achieve these same goals.

What’s your favorite part of your job?

Helping customers and their executive leadership understand the benefits of cloud security and how they can improve their overall security posture by using cloud features. Getting to show them how we can help drive roadmaps and new features and functions that they can use to secure their workloads (based on their valuable feedback) is very rewarding.

Tell us about the open source communities you support. Why are they important to AWS?

The open source community is important to me for a couple of reasons. First, it helps enable and inspire innovation by inviting the community at large to expand on the various use cases our services provide. I also really appreciate how customers enable other customers by not only sharing their own innovations but also inviting others to contribute and further improve their solutions. I have a couple of open source repositories that I maintain, where I put various security automation tools that I’ve built to show various innovative ways that customers can use our services to strengthen their security posture. Even if you don’t use open source in your company, you can still look at the vast number of projects out there, both from customers and from AWS, and learn from them.

What does cloud security mean to you, personally?

For me, it represents the possibility of creating efficient, secure solutions. I’ve been working in various security roles for almost twenty-five years, and the ability we have to protect data and our infrastructure has never been stronger. We have an incredible opportunity to solve challenges that would have been insurmountable before, and this leads to one thing: trust. It allows us to earn trust from customers, trust from users, and trust from the industry. It also enables our customers to earn trust from their users.

In your opinion, what’s the biggest challenge facing cloud security right now?

The opportunities far outweigh the challenges, honestly. The different methods that customers and users have to gain visibility into what they’re actually running is mind-blowing. That visibility is a combination of knowing what you have, knowing what you run, and knowing all the ins and outs of it. I still hear people talking about that server in the corner under someone’s desk that no one else knows about. That simply doesn’t exist in the cloud, where everything is an API call away. If anything, the challenge lies in finding people who want to continue driving the innovation and solving the hard cases with all the technology that’s at our fingertips.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

One shift we’re already seeing is that compliance is becoming a natural part of the security and innovation conversation. Previously, “compliance” meant that maybe you had a specific workload that needed to be PCI-compliant, or you were under HIPAA requirements. Nowadays, compliance is a more natural part of what we do. Privacy is everywhere. It has to be everywhere, based on requirements like GDPR, but we’re seeing a lot of these “have to be” requirements turn into “want to be” requirements — we’re not distinguishing between the users that are required to be protected and the “regular” users. More and more, we’re seeing that privacy is always going to have a seat at the table, which is something we’ve always wanted.

At re:Invent 2018, you’re presenting two sessions together with Andrew Krug. How did you choose your topics?

They’re a combination of what I’m passionate about and what I see our customers need. This is the third year I’ve presented my Five New Security Automations Using AWS Security Services & Open Source session. Previously, I’ve also built boot camps and talks around secure automation, DevSecOps, and container security. But we have a big need for open source security talks that demonstrate how people can actually use open source to integrate with our services — not just as a standalone piece, but actually using open source as inspiration for what they can build on their own. That’s not to say that AWS services aren’t extremely important. They’re the driving force here. But the open source piece allows people to adapt solutions to their specific needs, further driving the use cases together with the various AWS security services.

What are you hoping that your audience will take away from your sessions?

I want my audience to walk away feeling that they learned something new, and that they can build something that they didn’t know how to before. They don’t have to take and use the specific open source tools we put out there, but I want them to see our examples as a way to learn how our services work. It doesn’t matter if you just download a sample script or if you run a full project, or a full framework, but it’s important to learn what’s possible with services beyond what you see in the console or in the documentation.

Any tips for first-time conference attendees?

Plan ahead, but be open to ad-hoc changes. And most importantly, wear sneakers or comfortable walking shoes. Your feet will appreciate it.

If you had to pick any other job, what would you want to do with your life?

If I picked another role at Amazon, it would definitely be a position around innovation, thinking big, and building stuff. Even if it was a job somewhere else, I’d still want it to involve building, whether woodshop projects or a robot. Innovation and building are my passions.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.


Henrik Johansson

Henrik is a Principal in the Office of the CISO at AWS Security. With over 22 years of experience in IT with a focus on security and compliance, he focuses on establishing and driving CISO-level relationships as a trusted cloud security advisor, with a passion for developing services and features for security and compliance at scale.

AWS Security Profiles: Sam Koppes, Senior Product Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sam-koppes-senior-product-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for a year, and I’m a Senior Product Manager for the AWS CloudTrail team. I’m responsible for product roadmap decisions, customer outreach, and for planning our engineering work.

How do you explain your job to non-tech friends?

I work on a technical product, and for any tech product, responsibility is split in half: We have product managers and engineering managers. Product managers are responsible for what the product does. They’re responsible for figuring out how it behaves, what needs it addresses, and why customers would want it. Engineering managers are responsible for figuring out how to make it. When you look to build a product, there’s always the how and the what. I’m responsible for the what.

What are you currently working on that you’re excited about?

The scale challenges that we’re facing today are extremely interesting. We’re allowing customers to build things at an absolutely unheard-of scale, and bringing security into that mix is a challenge. But it’s also one of the great opportunities for AWS — we can bring a lot of value to customers by making security as turnkey as possible so that it just comes with the additional scale and additional service areas. I want people to sleep easy at night knowing that we’ve got their backs.

What’s your favorite part of your job?

When I deliver a product, I love sending out the What’s New announcement. During our launch calls, I love collecting social media feedback to measure the impact of our products. But really, the best part is the post-launch investigation that we do, which allows us to understand whether we hit the mark or not. My team usually does a really good job of making sure that we deliver the kinds of features that our customers need, so seeing the impact we’ve had is very gratifying. It’s a privilege to get to hear about the ways we’re changing people’s lives with the new features we’re building.

How did you choose your particular topic for re:Invent this year?

My session is called Augmenting Security Posture and Improving Operational Health with AWS CloudTrail. As a service, CloudTrail has been around a while. But I’ve found that customers face knowledge gaps in terms of what to do with it. There are a lot of people out there with an impressive depth of experience, but they sometimes lack an additional breadth that would be helpful. We also have a number of new customers who want more guidance. So I’m using the session to do a reboot: I’ll start from the beginning and go through what the service is and all the things it does for you, and then I’ll highlight some of the benefits of CloudTrail that might be a little less obvious. I built the session based on discussions with customers, who frequently tell me they start using the service — and only belatedly realize that they can do much more with it beyond, say, using it as a compliance tool. When you start using CloudTrail, you start amassing a huge pile of information that can be quite valuable. So I’ll spend some time showing customers how they can use this information to enhance their security posture, to increase their operational health, and to simplify their operational troubleshooting.

What are you hoping that your audience will take away from it?

I want people to walk away with two fistfuls of ideas for cool things they can do with CloudTrail. There are some new features we’re going to talk about, so even if you’re a power user, my hope is that you’ll return to work with three or four features you have a burning desire to try out.

What does cloud security mean to you, personally?

I’m very aware of the magnitude of the threats that exist today. It’s an evolving landscape. We have a lot of powerful tools and really smart people who are fighting this battle, but we have to think of it as an ongoing war. To me, the promise you should get from any provider is that of a safe haven — an eye in the storm, if you will — where you have relative calm in the midst of the chaos going on in the industry. Problems will constantly evolve. New penetration techniques will appear. But if we’re really delivering on our promise of security, our customers should feel good about the fact that they have a secure place that allows them to go about their business without spending much mental capacity worrying about it all. People should absolutely remain vigilant and focused, but they don’t have to spend all of their time and energy trying to stay abreast of what’s going on in the security landscape.

What’s the most common misperception you encounter about cloud security and compliance?

Many people think that security is a magic wand: You wave it, and it leads to a binary state of secure or not secure. And that’s just not true. A better way to think of security is as a chain that’s only as strong as its weakest link. You might find yourself in a situation where lots of people have worked very hard to build a very secure environment — but then one person comes in and builds on top of it without thinking about security, and the whole thing blows wide open. All it takes is one little hole somewhere. People need to understand that everyone has to participate in security.

In your opinion, what’s the biggest challenge that people face as they move to the cloud?

At AWS, we follow this thing called the Shared Responsibility Model: AWS is responsible for securing everything from the virtualization layer down, and customers are responsible for building secure applications. One of the biggest challenges that people face lies in understanding what it means to be secure while doing application development. Companies like AWS have invested hugely in understanding different attack vectors and learning how to lock down our systems when it comes to the foundational service we offer. But when customers build on a platform that is fundamentally very secure, we still need to make sure that we’re educating them about the kinds of things that they need to do, or not do, to ensure that they stay within this secure footprint.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a tremendous amount of growth in the application of machine learning and artificial intelligence. Historically, we’ve approached security in a very binary way: rules-based security systems in which things are either okay or not okay. And we’ve built complex systems that define “okay” based on a number of criteria. But we’ve always lacked the ability to apply a pseudo-human level of intelligence to threat detection and remediation, and today, we’re seeing that start to change. I think we’re in the early stages of a world where machine learning and artificial intelligence become a foundational, indispensable part of an effective security perimeter. Right now, we’re in a world where we can build strong defenses against known threats, and we can build effective hedging strategies to intercept things we consider risky. Beyond that, we have no real way of dynamically detecting and adapting to threat vectors as they evolve — but that’s what we’ll start to see as machine learning and artificial intelligence enter the picture.

If you had to pick any other job, what would you want to do with your life?

I have a heavy engineering background, so I could see myself becoming a very vocal and customer-obsessed engineering manager. For a more drastic career change, I’d write novels—an ability that I’ve been developing in my free time.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.


Sam Koppes

Sam is a Senior Product Manager at Amazon Web Services. He currently works on AWS CloudTrail and has worked on AWS CloudFormation, as well. He has extensive experience in both the product management and engineering disciplines, and is passionate about making complex technical offerings easy to understand for customers.

AWS Security Profiles: Alana Lan, Software Development Engineer; Shane Xu, Technical Program Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-alana-lan-software-development-engineer-shane-xu-technical-program-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

Alana: I’m a software development engineer, and I’ve been here for a year and a half. I’m on the Security Assessment and Automation team. My team’s main purpose is to develop tools that help internal teams save time. For example, we build tools to help people find resources for external customers, like information control frameworks. We also build services that aggregate data about AWS resources that other teams can use to identify critical resources.

Shane: I started around the same time as Alana — we’re part of the same team. I’m a Technical Program Manager, and my role is to perform deep-dives into different security domains to investigate the effectiveness of our controls and then propose ways to automate the monitoring and mitigation of those controls. I like to explain my role using a metaphor: If AWS Security is the guardian of the AWS Cloud, then the role of the Security Assurance team is to make sure the guardians have the right superpowers. And the goal of my team is to ensure those superpowers are automated and always monitored so that they’re always available when needed.

How do you explain your job to non-tech friends?

Alana: I tell people that there are many AWS services, and many teams working to make those services available globally. My work is to make the jobs of those teams easier with tools and resources that reduce manual effort and allow them to serve customers better.

Shane: I normally tell people that my role is related to security automation. Those two words tend to make sense to people. If they want more detail, I explain that my role is to automate the compliance managers out of the repetitive aspects of their jobs. Compliance managers cut tickets to request different kinds of evidence to show to auditors. My role is to automate this so that compliance managers don’t need to go through a long, manual process and so they can focus on more important tasks.

What are you currently working on that you’re excited about?

Alana: We’re working on a service that aggregates data about Amazon and AWS resources to provide ways to find relationships between these resources. We’re also experimenting with Amazon Neptune (a graph database) plus some new features of other services to help our teams help customers. Sometimes, SDEs seek us out for help with specific needs, and we try to encourage that: We want to emphasize how important security is. I like getting to work on a team that grapples with abstract concepts like “security” and “compliance.”

Shane: I’m working on an initiative to reduce the manual effort required for data center audits. We’re a cloud company, which means we have data centers all over the world, and they’re critical infrastructure for AWS services and customer data. For compliance purposes, we need to perform physical audits of all of those sites, and a typical approach would be to fly out to dozens of locations each year to examine the security and environmental controls we have in place. I’m working on a project that’s less manual and resource-heavy.

You’re involved with this year’s Security Jam at re:Invent. What’s a Security Jam?

Shane: The Security Jam is basically a hackathon. It’s an all-day event from 8 AM to 4 PM that includes a dozen challenges (one of which Alana and I are hosting). The doors open at 7 AM at the MGM Studio Ballroom, and you can sign up as a group, or we’ll randomly pair you as needed. Your team works through as many of the challenges as possible, with the goal of getting the high score. The challenges are intended to provide hands-on experience with how to use AWS services and configure them to make sure your environment is secure. The Jams are structured to accommodate AWS users of all levels.

What’s your Security Jam challenge about?

Shane: Last year, our challenge focused on ensuring an environment was secure and compliant. This year, we’re taking it one step further by focusing on continuous monitoring. It’s a challenge that’s relevant whether you’re a small company or a large enterprise: You can’t realistically have one person sitting in front of a dashboard 24/7. You need to find a way to continuously monitor your resources so that whenever a new resource becomes available or an older one is deprecated, you have an up-to-date snapshot of your compliance environment. For the Security Jam challenge, I provide a proof of concept that lets participants use AWS Config to configure some out-of-the-box rules (or develop new rules) to provide continuous monitoring of their environment. We’ve also added an API around this for people like compliance managers, who might not have a technical background but need to be able to easily get a report if they need it.

Alana: Customers have reported that AWS Config is very useful, so we built the challenge to expose more people to the service. It will give participants a foundation that they can use in the future to protect their data or services. It’s a starting point.
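A custom rule of the kind Shane describes can be sketched as a Lambda function that AWS Config invokes whenever a resource changes. This is a minimal sketch rather than the actual Jam challenge: the event shape follows the documented custom-rule format, but the specific check (default encryption on S3 buckets) is an illustrative assumption.

```python
import json

def evaluate_compliance(configuration_item):
    """Pure evaluation logic, kept separate from the handler so it can be
    tested locally: COMPLIANT if the bucket's configuration shows a
    server-side encryption setting, NON_COMPLIANT otherwise."""
    if configuration_item.get("resourceType") != "AWS::S3::Bucket":
        return "NOT_APPLICABLE"
    supplementary = configuration_item.get("supplementaryConfiguration", {})
    if supplementary.get("ServerSideEncryptionConfiguration"):
        return "COMPLIANT"
    return "NON_COMPLIANT"

def lambda_handler(event, context):
    # AWS Config delivers the changed resource as a JSON string.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    compliance = evaluate_compliance(item)

    # Report the result back to AWS Config (requires boto3 at runtime).
    import boto3
    boto3.client("config").put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```

Attached to a Config rule, a handler like this runs on every configuration change, which is what turns a one-time audit into continuous monitoring.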

What knowledge or experience do you hope participants will gain by completing your challenge?

Alana: I want people to understand that AWS services are not difficult to use. For example, there are many open source AWS Lambda functions that can help protect your data with a few button clicks. Don’t be afraid to get started.

Shane: People sometimes think compliance is scary. I want the hands-on nature of the challenge to show people that we provide tools that will make your life, and your customers’ lives, easier. I also want people to learn ways of avoiding compliance fatigue. Automation makes it easier for you to focus on more innovative work. It’s the future of compliance.

In your opinion, what’s the biggest challenge facing cloud security and compliance right now?

Shane: The scope for compliance is getting larger and larger, and there will always be new revelations and new types of threats. Developing scalable solutions to help achieve compliance is an ongoing challenge, and one we can’t just throw human power at. That’s why automation is so important. The other challenge is that some people see compliance as a burden, when we want it to be an enabler. I want people to understand that it’s not just a regulation or a security best practice. Compliance is a way to enable growth.

Alana: If I worked on another team, I think it would have taken me several years to figure out how my daily job impacted the security and compliance of AWS as a whole. It’s hard to connect the coding of an individual project back to AWS Security. We’re encouraged to take trainings, and we know that it’s important to protect your data, but people don’t always understand why, exactly. It’s hard for individual contributors to get a sense of the big picture.

If you had to pick any other job, what would you want to do with your life?

Alana: If I wasn’t an SDE, I’d want to be a Data Scientist. I think it would be interesting to analyze data and figure out the trends.

Shane: I would really like to be involved in AI. There are so many unknowns right now, in terms of how to ensure AI is secure and ethical. I’d also like to be a teacher or a university professor. When I was working on my Master’s degree, it was really difficult to pick up practical skills, such as how to have a productive one-on-one with my manager, or what career paths are available in a security-related field. I like the idea of being able to use my industry experience to help other students.

What career advice do you have for someone just joining AWS?

Shane: There’s a lot of opportunity at AWS. During my first six months here, I was cautious: Because of my previous consulting background, I felt like I had to have a legit case to talk with leadership and take up their time. It’s certainly important that I value their time, but in general I’ve found people in senior positions to be very willing to engage with me. My advice is to not be afraid to reach out, grow your network, and learn new things.

Alana: I’d echo what Shane said. There are a lot of possibilities at AWS, so don’t be afraid to try something new.



Alana Lan

Alana is a Software Development Engineer at AWS. She’s responsible for building tools and services to help with the operations of AWS security and compliance controls. Currently, she is obsessed with exploring AWS Services.


Shane Xu

Shane is a Technical Program Manager for Security Assessment and Automation at AWS. Shane brings together people, technology, and processes to invent and simplify security and compliance automation solutions. He’s a passionate learner and curious explorer at work and in life.

AWS Security Profiles: Matt Bretan, Principal Manager, AWS Professional Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-matt-bretan-principal-manager-aws-professional-services/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS Professional Services for nearly five years. I run two teams: our Security Assurance and Advisory Practice team, and our Security Experience team. The Security Assurance and Advisory Practice team is responsible for working with our customers’ executive leadership to help them plan their security risk and compliance strategy when they move to AWS. Executives need to understand how to organize their teams and what tools and mechanisms they need in order to meet expected regulatory or policy-based controls. We help with that. It’s a relatively new team that we started up in early 2018.

The Security Experience team is responsible for our Jam platform, which is changing the way we help customers learn about AWS services and partners. Previously, when we went to a customer, we gave slide presentations about how to be secure on AWS and how to migrate to the cloud. At the end of the presentation, people could usually repeat definitions back at us, but when we put them in front of a keyboard and monitor, they were uncertain about what to do. So, we built out the Jam platform, which allows customers to get hands-on experiences across a wide variety of AWS services, plus some partner products as well. It’s a highly gamified way to learn.

What’s the most challenging part of your job?

How to scale our offerings. A lot of what we do is to work one-on-one with our customers. Part of my job is to figure out how to impact more customers. We don’t just want to work with the largest companies of the world, but rather we want to help all companies be more secure. So, I’m constantly asking myself how to create tools and offerings that are scalable enough to impact everyone, and that everyone can benefit from.

What are you currently working on that you’re excited about?

The Jam platform. It allows us to change the way that customers experience AWS, and the way that they learn about moving to the cloud. It’s a different way to think about learning — gamifying the cloud adoption process helps people actually experience the technology. It’s not just definitions on a slide deck anymore. People get to see the capabilities of AWS in action, and they’ll have that Jam experience as a foundation once they start building their own infrastructure.

What can people expect from your teams at re:Invent this year?

The Jam Lounge will be in the Tundra Lounge within the Partner Expo Center at the Venetian. You’ll be able to register for the Jam Lounge there, and from Monday night through Thursday night, you can take part in a number of challenges — everything from security to migration to data analytics. We’ll be showcasing five partner solutions as well. The cool thing about the Jam Lounge is that it’s a completely virtual event. Once you register for the event in the Partner Expo Center, you can take part in the challenges from anywhere at re:Invent. This means that you can gain hands-on experience with AWS and our partner solutions in between the other amazing sessions and activities that go on during re:Invent.

The Security Jam takes place on Thursday, and it’s purely security-focused. We’ll have 13 different challenges: 10 specifically around AWS services and three from partners. They’ll highlight different cloud security scenarios that people might encounter on a day-to-day basis. You’ll get to go into AWS accounts that we provision for you, identify what’s wrong, and then fix those accounts to get them into a known good state.

We’re also hosting the Executive Security Simulation as part of the executive track. That one is a tabletop exercise to help attendees experience and think about security from a high level. We simulate the first two years in a company’s life as they adopt the cloud — including some of the decisions they have to make in this process — so that people can think through security adoption from a lens that’s less about technical implementation and more about high-level strategy.

You mentioned that the Security Jam is an example of gamified learning. Can you talk more about what that means?

People love the hands-on application of learning: Rather than reading definitions, you get to use the technology and experience it. And that’s what gamification does: It gives you the actual infrastructure with an actual problem, and you get to go in and fix it. Also, it plays well to people’s competitive side. We set participants up in teams, and you have to work together to solve problems and win. There’s a leaderboard and scoring with points and clues. Anyone can participate, get what they need out of it, have fun doing it, and feel successful at the end of the day. This is the third year we’ve run a Jam at re:Invent, and we’re excited to have everyone try brand-new challenges and learn about new services and ways to do things on AWS.

Any tips for first-time conference attendees?

This conference is a marathon and not a sprint! There are so many great sessions and activities that go on during the week, so spend a little time now reviewing the agenda and figuring out what’s most important for you to attend. Prioritize those items, and then leave some time for surprise announcements! For the Jam sessions, you actually get to interact with AWS and our partner solutions, so bring your laptop. But also, come with an open mind. I think the big thing here is that re:Invent is a learning event. But for our events, at the end, there are prizes!

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

I think a lot of the changes will be around the requirements themselves. Today, many of the requirements in the compliance space center around specific technologies, rather than around the risk itself. Often, these programs are also primarily written around a traditional data center model where someone deploys an application onto a server and then doesn’t touch it for years. I think as compliance programs mature, we’ll shift to more of a risk-based process that puts the overall security and protection of customers first while taking into account how technology is constantly changing.

What does cloud security mean to you, personally?

I use technology: I stream videos, I do online banking, I buy things online, and I have an IoT-connected house. So, for me, cloud security is a way to protect my own interests and the interests of my family. I’m using these companies — often customers of ours — on a day-to-day basis. So the more I can do to ensure that they’re being secure with their implementations, the more secure I’ll be in the long run — and the more secure all consumers will be. The more I can do to proactively make it difficult for malicious parties to do harm, the safer and better all of our lives will be.

If you had to pick any other job, what would you want to do with your life?

My passion is building things. If I were to switch careers, I think I’d want to build physical structures, like houses or buildings. I believe there is a strong similarity between the work I do now around helping design security controls and the work that architects do when they design buildings. There are risks around building physical structures. You have to deal with things like lateral loads and entrance and exit controls. Technology involves a different kind of load, but in both cases, you have to go through a process of preparing for it and understanding it. I find that similarity fascinating.



Matt Bretan

Matt travels the world helping customers move their most sensitive workloads onto AWS while trying to find the best airline snack. He won’t stop until he has figured out how to help everyone. When not working with customers, he is at home with his beautiful wife and three wonderful kids.

AWS Security Profiles: Phil Rodrigues, Principal Security Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-phil-rodrigues-principal-security-solutions-architect/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’m a Principal Security Solutions Architect based in Sydney, Australia. I look after both Australia and New Zealand. I just had my two-year anniversary with AWS. As I tell new hires, the first few months at AWS are a blur; after 6-12 months you start to get some ideas, and then after a year or two you start to really own and implement your decisions. That’s the phase I’m in now. I’m working to figure out new ways to help customers with cloud security.

What are you currently working on that you’re excited about?

In Australia, AWS has a mature set of financial services customers who are leading the way in terms of how large, regulated institutions can consume cloud services at scale. Many Aussie banks started this process as soon as we opened the region six years ago. They’re over the first hump, in terms of understanding what’s appropriate to put into the cloud, how they should be controlling it, and how to get regulatory support for it. Now they’re looking to pick up steam and do this at scale. I’m excited to be a part of that process.

What’s the most challenging part of your job?

Among our customers’ senior leadership, there’s still a difference of opinion on whether the public cloud is the right place to be running, say, critical banking workloads. Based on anecdotal evidence, I think we’re at a tipping point leading to broad adoption of public cloud for the industry’s most critical workloads. It’s challenging to figure out the right messaging that will resonate with the boards of large, multi-national banks to help them understand that the technology control benefits of the cloud are far superior when it comes to security.

What’s your favorite part of your job?

We had a private customer security event in Australia recently, and I realized that we now have the chance to do things that security professionals have always wanted to do. That is, we can automatically apply the most secure configurations at scale, ubiquitously across all workloads, and we can build environments that are quick to respond to security problems and that can automatically fix those problems. For people in the security industry, that’s always been the dream, and it’s a dream that some of our customers are now able to realize. I love getting to hear from customers how AWS helped make that happen.

How did you choose your particular topic for re:Invent this year?

Myles Hosford and I are presenting a session called Top Cloud Security Myths – Dispelled! It’s a very practical session. We’ve talked with hundreds of customers about security over the past two years, and we’ve noticed the types of questions that they ask tend to follow a pattern that’s largely dependent on where they are in their cloud journey. Our talk covers these questions — from the simple to the complex. We want the talk to be accessible for people who are new to cloud security, but still interesting for people who have more experience. We hope we’ll be able to guide everyone through the journey, starting with basics like, “Why is AWS more secure than my data center?”, up through more advanced questions, like “How does AWS protect and prevent administrative access to the customer environment?”

What are you hoping that your audience will take away from it?

There are only a few 200-level talks on the Security track. Our session is for people who don’t have a high level of expertise in cloud security — people who aren’t planning to go to the 300- and 400-level builder talks — but who still have some important, foundational questions about how secure the cloud is and what AWS does to keep it secure. We’re hoping that someone who has questions about cloud security can come to the session and, in less than an hour, get a number of the answers that they need in order to make them more comfortable about migrating their most important workloads to the cloud.

Any tips for first-time conference attendees?

You’ll never see it all, so don’t exhaust yourself by trying to crisscross the entire length of the Strip. Focus on the sessions that will be the most beneficial to you, stay close to the people that you’d like to share the experience with, and enjoy it. This isn’t a scientific measure, but I estimate that last year I saw maybe 1% of re:Invent — so I tried to make it the best 1% that I could. You can catch up on new service announcements and talks later, via video.

What’s the most common misperception you encounter about cloud security?

One common misperception stems from the fact that cloud is a broad term. On one side of the spectrum, you have global hyperscale providers, but on the opposite end, you have small operations with what I’d call “a SaaS platform and a dream” who might sell business ideas to individual parts of a larger organization. The organization might want to process important information on the SaaS platform, but the provider doesn’t always have the experience to put the correct controls into place. Now, AWS does an awesome job of keeping the cloud itself secure, and we give customers a lot of options to create secure workloads, but many times, if an organization asks the SaaS provider if they’re secure, the SaaS provider says, “Of course we’re secure. We use AWS.” They’ll give out AWS audit reports that show what AWS does to keep the cloud secure, but that’s not the full story. The software providers operating on top of AWS also play a role in keeping their customers’ data secure, and not all of these providers are following the same mature, rigorous processes that we follow — for example, undergoing external third-party audits. It’s important for AWS to be secure, but it’s also important for the ecosystem of partners building on top of us to be secure.

In your opinion, what’s the biggest challenge facing cloud security right now?

The number of complex choices that customers must make when deciding which of our services to use and how to configure them. We offer great guidance through best practices, Well-Architected reviews, and a number of other mechanisms that guide the industry, but our overall model is still that of providing building blocks that customers must assemble themselves. We hope customers are making great decisions regarding security configurations while they’re building, and we provide a number of tools to help them do this — as do a number of third-parties. But staying secure in the cloud still requires a lot of choices.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

I’m not losing much sleep over quantum computing and its impact on cryptography. I think that’s a while away. For me, the near future is more likely to feature developments like broad adoption of automated assurance. We’ll move away from paper-based, once-a-year audits to determine organizations’ technology risk, and toward taking advantage of persistent automation, near-instant visibility, and being able to react to things that happen in real-time. I also think we’ll see a requirement for large organizations who want to move important workloads to the cloud to use security automation. Regulators and the external audit community have started to realize that automated security is possible, and then they’ll push to require it. We’re already seeing a handful of examples in Australia, where regulators who understand the cloud are asking to see evidence of AWS best practices being applied. Some customers are also asking third-party auditors not to bring in a spreadsheet but rather to query the state of their security controls via an API in real-time or through a dashboard. I think these trends will continue. The future will be very automated, and much more secure.

What does cloud security mean to you, personally?

My customer base in Australia includes banks, governments, healthcare, energy, telco, and utilities. For me, this drives home the realization that the cloud is the critical digital infrastructure of the future. I have a young family who will be using these services for a long time. They rely on the cloud either as the infrastructure underneath another service they’re consuming — including services as important as transportation and education — or else they access the cloud directly themselves. How we keep this infrastructure safe and secure, and how we keep people’s information private but available, affects my family.

Professionally, I’ve been interested in security since before it was a big business, and it’s rewarding to see stuff that we toiled on in the corner of a university lab two decades ago gaining attention and becoming best practice. At the same time, I think everyone who works in security thrives on the challenge that it’s not simple, it’s certainly not “done” yet, and there’s always someone on the other side trying to make it harder. What drives me is both that professional sense of competition, and the personal realization that getting it right impacts me and my family.

What’s the one thing a visitor should do on a trip to Sydney?

Australia is a fascinating place, and visitors tend to be struck by how physically beautiful it is. I agree; I think Sydney is one of the most beautiful cities in the world. My advice is to take a walk, whether along the Opera House, at Sydney Harbor, up through the botanical gardens, or along the beaches. Or take a ferry across to the Manly beachfront community to walk down the promenade. It’s easy to see the physical beauty of Sydney when you visit — just take a walk.



Phil Rodrigues

Phil Rodrigues is a Principal Security Solutions Architect for AWS based in Sydney, Australia. He works with AWS’s largest customers to improve their security, risk, and compliance in the cloud. Phil is a frequent speaker at AWS and cloud events across Australia. Prior to AWS, he worked for over 17 years in Information Security in the US, Europe, and Asia-Pacific.

AWS Security Profiles: Ken Beer, General Manager, AWS Key Management Service

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-ken-beer-general-manager-aws-key-management-service/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been here a little over six years. I’m the General Manager of AWS Key Management Service (AWS KMS).

How do you explain your job to non-tech friends?

For any kind of product development, you have the builders and the sellers. I manage all of the builders of the Key Management Service.

What are you currently working on that you’re excited about?

The work that gets me excited isn't always about new features. There's also essential work going on to keep AWS KMS up and running, which enables more and more people to use it whenever they want. We have to maintain a high level of availability, low latency, and a good customer experience 24 hours a day. Ensuring a good customer experience and operational excellence is a big source of adrenaline for service teams at AWS — you get to determine in real time when things aren't going well, and then try to address the issue before customers notice.

In addition, we’re always looking for new features to add to enable customers to run new workloads in AWS. At a high level, my teams are responsible for ensuring that customers can easily encrypt all their data, whether it resides in an AWS service or not. To date, that’s primarily been an opt-in exercise for customers. Depending on the service, they might choose to have it encrypted — and, more specifically, they might choose to have it encrypted under keys that they have more control over. In a classic model of encryption, your data is encrypted at storage — think of BitLocker on your laptop — and that’s it. But if you don’t understand how encryption works, then you don’t appreciate the fact that encrypted by default doesn’t necessarily provide the security you think it does if the person or application that has access to your encrypted data can also cause your keys to be used whenever it wants to decrypt your data. AWS KMS was invented to help provide that separation: Customers can control who has access to their keys and enable how AWS services or their own applications make use of those keys in an easy, reliable, and low cost way.

What’s the most challenging part of your job?

It depends on the project that’s in front of me. Sometimes it’s finding the right people. That’s always a challenge for any manager at AWS, considering that we’re still in a growth phase. In my case, finding people who meet the engineering bar for being great computer scientists is often not enough — I’ve got to find people who appreciate security and have a strong ethos for maintaining the confidentiality of customer data. That makes it tougher to find people who will be a good fit on my teams.

Outside of hiring good people, the biggest challenge is to minimize risk as we constantly improve the service. We’re trying to improve the feature set and the customer experience of our service APIs, which means that we’re always pushing new software — and every deployment introduces risk.

What’s your favorite part of your job?

Working with very smart, committed, passionate people.

How did you choose your particular topic for re:Invent this year?

For the past four years, I've given a re:Invent talk that offered an overview of how to encrypt things at AWS. When I started giving this talk, not every service supported encryption, AWS KMS was relatively new, and we were adding a lot of new features. This year, I was worried the presentation wouldn't include enough new material, so I decided to broaden the scope. The new session is Data Protection: Encryption, Availability, Resiliency, and Durability, which I'll be co-presenting with Peter O'Donnell, one of our solutions architects. This session focuses on how to approach data security holistically and how to think about access control of data in the cloud, where encryption is just one part of the solution.

When we talk to our customers directly, we often hear that they're struggling to figure out which of their well-established on-premises security controls they should take with them to the cloud — and what new things they should be doing once they get there. We're using the session to give people a sense of what it means to own logical access control, and of all the ways they can control access to AWS resources and their data within those resources.

Encryption is another access control mechanism that can provide strong confidentiality if used correctly. If a customer delegates to an AWS managed service to encrypt data at rest, the actual encipherment of the data happens on a server that they can't touch, using code that they don't own. All of the encryption occurs in a black box, so to speak. AWS KMS gives customers the confidence to say, "In order to get access to this piece of data, not only does someone have to have permission to the encrypted data itself in storage, that person also has to have permission to use the right decryption key." Customers now have two independent access control mechanisms, as a belt-and-suspenders approach for stronger data security.
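The two independent mechanisms Ken mentions can be sketched as IAM permissions: to read an S3 object encrypted with a KMS key, a caller needs both the object permission and permission to use the decryption key. This is an illustrative fragment only — the bucket name, account ID, and key ID are placeholders, and in practice the key's own key policy must also allow the access:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOfEncryptedObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    },
    {
      "Sid": "AllowUseOfDecryptionKey",
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
    }
  ]
}
```

Omitting or denying either statement blocks access to the plaintext, even if the other is granted — that is the belt-and-suspenders property.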

What are you hoping that your audience will take away from it?

I want people to think about the classification of their data and which access control mechanisms they should apply to it. In many cases, a given AWS service won't let you apply a specific classification to an individual piece of data. It'll be applied to a collection or a container of data — for example, a database. I want people to think about how they're going to define the containers and resources that hold their data, how they want to organize them, and how they're going to manage access to create, modify, and delete them. People should focus on the data itself rather than the physical connections and physical topology of the network, since with most AWS services they can't control that topology or network security — AWS does it all for them.

Any tips for first-time conference attendees?

Wear comfortable shoes. Because so many different hotels are involved, getting from point A to point B often requires walking at a brisk pace. We hope a renewed investment in shuttle buses will help make transitions easier.

What’s the most common misperception you encounter about encryption in the cloud?

I encounter a lot of people who think, "If I use my cloud provider's encryption services, then they must have access to my data." That's the most common misperception. AWS services that integrate with AWS KMS are designed so that AWS does not have access to your data unless you explicitly give us that access. This can be a hard concept for some to grasp, but we put a lot of effort into the secure design of our encryption features, and we hold ourselves accountable to that design with all the compliance schemes we follow.

After that, I see a lot of people under the impression that there's a huge performance penalty for using encryption. This is often based on experiences from years ago: At the time, a lot of CPU cycles were spent on encryption, which meant they weren't available for interesting things like database searches or vending web pages. Using the latest hardware in the cloud, that's mostly changed. While there's a non-zero cost to doing encryption (it's math and physics, after all), AWS can hide a lot of that and absorb the overhead on behalf of customers. Especially when customers are doing full-disk encryption for workloads running on Amazon Elastic Compute Cloud (Amazon EC2) with Amazon Elastic Block Store (Amazon EBS), we actually perform the encryption on dedicated hardware that's not exposed to the customer's memory space or compute. We're minimizing the perceived latency of encryption, and we all but erase the performance cost in terms of CPU cycles for customers.

What are some of the blockers that customers face when it comes to using cryptography?

There are some customers who’ve heard encryption is a good idea — but every time they’ve looked at it, they’ve decided that it’s too hard or too expensive. A lot of times, that’s because they’ve brought in a consultant or vendor who’s influenced them to think that it would be expensive, not just from a licensing standpoint but also in terms of having people on staff who understand how to do it right. We’d like to convince those customers that they can take advantage of encryption, and that it’s incredibly easy in AWS. We make sure it’s done right, and in a way that doesn’t introduce new risks for their data.

There are other customers, like banks and governments, who have been doing encryption for years. They don't realize that we've made encryption better, faster, and cheaper. AWS has hundreds of people tasked with making sure encryption works properly for all of the millions of AWS customers. Most companies don't have hundreds of people on staff who care about encryption and key management the way we do. These companies should absolutely perform due diligence and force us to prove that our security controls are in place and do what we claim they do. We've found that the customers who have done this diligence understand that we're providing a consistent way to enforce the use of encryption across all of their workloads. We're also on the cutting edge of protecting them against tomorrow's encryption-related problems, for example by working on quantum-safe cryptography.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a couple of changes. The first is that we’ll see more customers use encryption by default, making encryption a critical part of their operational security. It won’t just be used in regulated industries or by very large companies.

The second change is more fundamental, and has to do with a perceived threat to some of today’s cryptography: There’s some evidence that quantum computing will become affordable and usable at some point in time — although it’s unclear if that time is 5 or 50 years away. But when it comes, it will make certain types of cryptography very weak, including the kind we use for data in transit security protocols like HTTPS and TLS. The industry is currently working on what’s called quantum-safe or post-quantum cryptography, in which you use different algorithms and different key sizes to provide the same level of security that we have today, even in the face of an adversary that has a quantum computer and can capture your communications. As encryption algorithms and protocols evolve to address this potential future risk, we’ll see a shift in the way our devices connect to each other. Our phones, our laptops, and our servers will adopt this new technology to ensure privacy in our communications.



Ken Beer

Ken is the General Manager of the AWS Key Management Service. Ken has worked in identity and access management, encryption, and key management for over 6 years at AWS. Before joining AWS, Ken was in charge of the network security business at Trend Micro. Before Trend Micro, he was at Tumbleweed Communications. Ken has spoken on a variety of security topics at events such as the RSA Conference, the DoD PKI User’s Forum, and AWS re:Invent.

AWS Security Profiles: Nihar Bihani, Senior Manager; Jeff Lyon, Systems Development Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-nihar-bihani-senior-manager-jeff-lyon-systems-development-manager/



How long have you been at AWS, and what do you do in your current role?

Jeff: I’ve been with AWS for four years. I started as a Product Manager before transitioning into my current role as a Systems Development Manager where I lead the AWS DDoS Response Team. The AWS DDoS Response Team is the group that defends the Amazon infrastructure against denial-of-service attacks, in addition to protecting many of our customers against the impact of those attacks on their own applications.

Nihar: I’ve been with AWS for nearly 10 years. I started as an intern. I’m now a Senior Manager for two customer-facing services. The first is AWS WAF. The other is AWS Firewall Manager. I’m responsible for managing the team that builds those services.

How do you explain your job to non-tech friends?

Jeff: We help AWS defend against outside attacks — external threats that might otherwise cause problems for people.

Nihar: I usually tell people that my job is to make sure the applications that are running on AWS stay secure. My team writes the software that helps keep these sites safe and secure.

What are you currently working on that you’re excited about?

Jeff: I’m excited about some things that are happening behind the scenes. When people hear about the DDoS Response Team, I think the picture that comes to mind is engineers answering tickets and working on individual problems. We do a bit of that, but we’re mostly focused on building automation to solve these problems at scale. What we’re trying to do is to remove the undifferentiated heavy lifting from something that used to be really complicated and difficult for developers to solve, allowing them to focus more on the applications running on our platform.

Nihar: Lots of things! Security is an area that customers take very seriously—and it’s also an area that we take very seriously. My team is working on initiatives in three broad areas. First, we’re going to make our existing services scale more, perform better, and be more available. Second, we’re investing in adding new features for both AWS WAF and AWS Firewall Manager — something that our customers tend to get very excited about because they can use those features right away to help make their applications more secure. The third major project is geographic expansion. We’re working on expanding the AWS WAF presence across more AWS regions.

What’s the most challenging part of your job?

Jeff: Solving problems at scale. If you think about the many different problems in distributed systems, solving them individually tends to be relatively easy. But when you think about them on a large scale, and then think about the number of points of presence within AWS regions that we have, and even the size of some of our customers’ applications, it becomes quite a different story. Being able to think through those problems and figure out how to implement solutions on a much larger scale is a unique challenge.

Nihar: The most challenging part of my job is delivering everything that our customers need fast enough. It's not because we don't want to. We do. And we want to build solutions that are of high quality. But we have limited resources, and there's a finite number of things we can do with them. It's really helpful when customers help us prioritize against their needs, since that allows us to iterate as quickly as we can while knowing that what we're delivering will have the most impact for customers.

How did you choose your particular topic for re:Invent this year?

Jeff: Our session is about orchestrating perimeter security. Perimeter security is the concept of taking threats and mitigating them far away from the application itself. The session focuses on how to build a layer of defense that people can use to defend against things like external threats, application vulnerabilities, bad bots, and DDoS attacks. Our customers are interested in this topic, and we field a lot of questions like, "What are the best practices? What architectures should I consider?" So the goal of the session is to help people protect their AWS resources so that they can spend more time building their applications and less time worrying about security threats.

The "orchestration" component comes into play for large organizations, which need to answer the question, "How do you do that and manage it at scale?" For a large organization with a lot of applications, you have to ensure that if you build out a security policy, any given change will take effect across the entire application. You need a centralized way of doing that. So we'll also talk about the capabilities that AWS offers via AWS Firewall Manager, which allows customers to centrally orchestrate AWS WAF security policies. We'll discuss ways you can lock down your VPC network access control list, plus other strategies that a centralized security team can use to make sure that there's a ubiquitous protection layer for the entire application.

Nihar: I also want to emphasize that this approach allows customers to achieve a strong security posture for their applications without the need to re-architect any of the applications or any of the infrastructure that's already running on AWS. We want to dispel the idea that customers will have to do a ton of work. You won't, and yet you'll be able to improve the availability of your applications and benefit from being compliant with many regulatory requirements. Perimeter security is like building a wall around a castle you've already built. You don't have to renovate the castle. You can build the wall, and maybe post security guards along it, or put cameras on it: you haven't changed your castle at all, but it's so much more secure.

What are you hoping that your audience will take away from your session? What should they do differently as a result of it?

Jeff: I hope our customers will realize that there are lots of ways to architect and build things on AWS. And one of those ways is by using the AWS edge network as a tool to mitigate threats. We want them to understand the differentiating capabilities that we provide with that edge network and to be able to up-level their security when they get back to the office.

Nihar: I want people to understand that making their applications more secure doesn’t take a lot of effort. There are tools available, and we’ll show them how to use those tools in their own service architectures. Some customers might not be aware of all the threats they should be protecting their applications from, so the session is also about educating our customers on potential threats and how to mitigate against those threats. Jeff and I live in this world, so we’re very aware.

Does your session require existing knowledge about the topic?

Jeff: There’s a lot of a value in this session for developers at different experience levels and across different applications, but it’ll be especially useful for application developers who’ve built on AWS and who’ve gone through our security best practices — but are looking for opportunities to do more.

Nihar: You don’t need to have an extensive background in security because we’ll cover some of the current threat landscape, in addition to covering some of the ways that you can defend against these threats.

What are the biggest misconceptions that people have about perimeter security?

Jeff: People sometimes think that the on-premises capabilities they've built for themselves are going to be lost when they move to the cloud. One of the things we do in our session is demonstrate how our customers actually retain all those capabilities. We've just made them easier to consume and understand.

Nihar: People also sometimes think that perimeter security isn't beneficial, or that it's too hard, or too expensive. To the first point, I'd say that there are a lot of "bad actors" out there, and consumers have high standards for availability and security when they use any application. As for difficulty and expense, these are exactly the things we have in mind — we're doing our best to ensure that it's a simple experience that's affordable for everyone.

Can you tell us about some of the innovations AWS has made in perimeter security?

Jeff: My favorite is the way we’ve leveraged the AWS global infrastructure to be able to detect and mitigate threats at the point of ingress. If you think about distributed denial-of-service attacks, historically, the network of any given company might have multiple points of presence. But these individual points of presence might not all be prepared to handle a DDoS attack, and so you’d have to shunt the traffic off to much larger locations called “scrubbing centers” and then pull it back to the point of presence in order to serve your customers. That approach can be costly, it can be difficult to build at scale, and it can add a performance penalty—but it was historically the industry standard. One of the things we’ve created at AWS is a way to do this such that every point of presence in every AWS region has a system right there at the point of ingress that will inspect the traffic, decide if it’s valid to be passed to the customer’s application, and pass it without a noticeable performance penalty. That’s difficult to accomplish at scale.

Nihar: AWS WAF offers a flexible rule language with full API access, so many of our customers have built automations with it. For instance, customers see traffic coming to their applications, evaluate their logs using some of the data processing tools AWS WAF has, and then immediately turn around and programmatically create a new WAF rule and submit it to AWS WAF — within minutes, AWS WAF starts blocking that bad traffic. All of this can be automated, and that's powerful. In addition to customers writing their own rules, we offer Managed Rules that are written, curated, and managed by AWS Marketplace sellers and can be easily deployed in front of your web applications.
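The log-driven loop Nihar describes can be sketched in miniature. This is a hypothetical illustration — the log format and threshold are assumptions, and in practice the flagged addresses would be pushed back to AWS WAF through its API (for example, as an IP set referenced by a blocking rule):

```python
from collections import Counter

# Hypothetical sketch of the "evaluate logs, then block" loop.
# The log line format and threshold are assumptions; the flagged IPs
# would then be submitted to AWS WAF rather than just printed.

BLOCK_THRESHOLD = 1000  # requests per evaluation window; tune for your traffic

def ips_to_block(log_lines):
    """Count requests per client IP and flag heavy hitters for blocking."""
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return sorted(ip for ip, n in counts.items() if n >= BLOCK_THRESHOLD)

logs = ["203.0.113.9 GET /login"] * 1500 + ["198.51.100.7 GET /"] * 10
print(ips_to_block(logs))  # ['203.0.113.9']
```

A real automation would run this on a schedule (or on log delivery) and feed the result into a WAF IP-based blocking rule, which is the pattern Nihar describes customers building with the API.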

AWS Firewall Manager is integrated with AWS Organizations and AWS Config with the goal of providing a consistent, reliable security posture for customers that have potentially hundreds or thousands of applications running on AWS. These customers often find it beneficial to use AWS Firewall Manager to programmatically protect all of their applications in a simple way rather than having to do a lot of undifferentiated heavy lifting by building Lambda functions and working with AWS Config and doing a lot of scripting. All that is doable, but AWS Firewall Manager simplifies the experience.

What does cloud security mean to you, personally?

Jeff: Cloud security to me means two different things, both related to the Shared Responsibility Model. There is security in the cloud and security of the cloud. Security of the cloud is AWS’s responsibility, and security in the cloud is our customer’s responsibility. Our engineers are responsible for building security into AWS services, so that when customers move to the cloud, some aspects of security are taken care of automatically. But there are other aspects that our customers remain responsible for. To me, cloud security means that we will take care of all the things we’re able to take care of for our customers. And for the things we can’t take care of — the things that our customers remain responsible for and will have to manage themselves — we’re going to at least make them easier to think about, easier to configure, and easier to manage at scale.

Nihar: Security is our highest priority. If we’re not secure, we don’t have a business. So in one word, cloud security for me is trust. Our customers have a high bar because their customers, their consumers, demand a very high security posture. And as Jeff said, security is certainly a shared responsibility. But for the pieces for which we’re responsible, we have set a very high bar for ourselves so we continue to earn customer trust.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

Jeff: If you’re developing on AWS, you don’t have to worry about a lot of foundational things, like building a data center, figuring out where the power comes from, or managing the infrastructure. Security is the next frontier, where we can abstract and make it easier for our customers. I think that over the next several years, customers will see things get easier to manage and easier to think about. People won’t have to worry as much about the engineering behind the security. They’ll be able to express intent, which will be translated into security.

Nihar: I think we’re going to continue to add more learning and intelligence to our security services over the next several years, so we can be more proactive when it comes to the security and compliance of our customers’ applications. In practical terms, I think this means that we’ll innovate by building solutions that are really simple to use, targeted to each specific application, evolve with that application, yet work at AWS scale.

If you had to pick any other job, what would you want to do with your life?

Jeff: My dream job growing up was to be a police officer. I went through school and college thinking I'd pursue that dream and actually joined the Navy as a Master at Arms, which is a police officer in the Navy. I did that for nine years and was also an auxiliary Sheriff's Deputy for two years. So I got a lot of law enforcement experience, which has actually benefited my career. Really, law enforcement is all about problem solving. So coming to AWS, I was able to bring a lot of those skills with me.

Nihar: I like building things. It just resonates with me. Here at Amazon, we like building new things, launching them, and then going back to square one to do it all over again. I’m organized and meticulous, so I like to have the end goal in mind and then build up to that. If I weren’t in software engineering, I’d like to do something involving construction: You start with a vision and a flat piece of land — and how you get from there to the end goal of a finished building is a fascinating process to me.



Jeff Lyon

Jeff leads technical operations for AWS Perimeter Protection, where he manages engineering and response teams who defend AWS against Distributed Denial of Service (DDoS) attacks and other external threats. His teams' responsibilities include the defense of the AWS network, the defense of AWS services, and responding to attacks on behalf of the many AWS customers who rely on services like AWS Shield, AWS WAF, and AWS Firewall Manager. Prior to joining AWS, Jeff founded a startup focused on providing DDoS mitigation capabilities to large, distributed networks.


Nihar Bihani

Nihar leads the teams that built the AWS WAF and AWS Firewall Manager services. He joined Amazon in 2009 and has spent time on the AWS Marketing and Amazon CloudFront teams, most recently leading Product Management for CloudFront. Nihar was also an intern with AWS in 2008. Prior to Amazon, Nihar worked at a start-up for a few years. Nihar holds a BS in Computer Science and an MBA in Marketing and Finance.

AWS Security Profiles: Chad Woolf, VP of AWS Security

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-chad-woolf-vp-of-aws-security/



How long have you been at AWS, and what do you do in your current role?

I’ve been at AWS for over eight years now, and I work in security assurance. The essence of my work is to help customers move critical and regulated workloads to the cloud. We own and manage security process, tech, and functions that customers can’t individually validate themselves. My job, and my team’s job, is to make those functions transparent to our customers, allowing them to rely on our processes, procedures, and controls. We work toward this goal by facilitating extensive independent audits and making those reports available. We also engage with regulators and customers to help them understand how the cloud works, what things they’ll have to do differently here, and what new opportunities are available to them in terms of better ways to govern their IT and protect and secure their data.

How do you explain your job to non-tech friends?

Sometimes I simplify by telling people, “I do information security at Amazon,” or “I do data protection and privacy at Amazon.” Mentioning the word “privacy” usually hits the limit of many people’s interest and they stop asking questions. To my kids or other family I usually say something like, “I work to keep Amazon safe for everybody.”

What are you currently working on that you’re excited about?

The world of traditional security assurance is complex and broad, so it's full of interesting challenges. While working on that, we're also looking ahead at augmenting traditional security assurance and quality assurance models with newer, more effective models. A traditional approach might involve auditors doing sample testing and evaluating the narrative of how systems work. But this approach isn't always technically deep, and sometimes it doesn't provide full, comprehensive insight into the environment, or into the presence of threats and vulnerabilities in the environment. From the outset of this program, we've worked to take these traditional models and modify the approach so that it provides true assurance for our customers.

In addition, recently we’ve kicked off something I’m really excited about — the work our Automated Reasoning Group (ARG) is doing around developing mathematical proofs of certain aspects of a system. For example, a mathematical proof might be used to prove that there’s no instance of a weak key being used anywhere in the entire system. That’s a much higher bar than just having a “reasonable assurance” of no weak keys, which is the objective that auditors traditionally use. Auditors can’t evaluate all the code and they can’t evaluate all of the instances where keys are being used. With automated reasoning, if we’re able to tell them, “this proof can examine the entire system for a certain value,“ it’s a much higher bar than even today’s advanced control measures, such as automated controls, preventive controls, or detective controls. It’s a proof. We (and our auditors) are really excited about this possibility, because systems are becoming so immense and so complex that it’s hard for us humans to wrap our minds around around the complexity — so we’re using math to do it for us.

What’s the most challenging part of your work?

Most of the challenges I deal with stem from complexity. Each of the new services we release — including all of the things being launched at re:Invent this year — introduces a new, sometimes complex function into our environment and into the environments of the customers who use it. It’s becoming more and more challenging to effectively govern these disparate services, and for people to be certain that they’re applying the right standards across all of them. We have some services to deal with this, and I think we’ll see AWS release more governance-like features to help deal with this challenge more comprehensively in the future.

Another major challenge is that many governments and regulators hold an understanding of the cloud that hasn't kept pace with the cloud's incredibly rapid evolution. Years ago, the cloud was defined in fairly simple terms — infrastructure, platform, and software as a service. Many people still understand it in those dated categorizations. But it's getting much more complex the more we offer and the bigger this space gets.

What’s the most common misperception you encounter about cloud security and compliance?

The misperception I encounter the most is that the cloud is unfit for regulated data and workloads. Regulators and auditors — many of whom haven't operated an IT infrastructure — often have only a high-level understanding of the cloud, many times learned through colleagues, high-level reports, and media coverage. They hear things and may not have a way to technically validate whether those things are true. Years ago, it was a pretty common misunderstanding that accessing your data securely using the internet was the same as "all of your data is openly available on the internet," which of course isn't the case. I've had many personal interactions where someone said they absolutely could not have certain data stored in the cloud, because then the whole world would be able to see it. But this basic misperception is pretty much debunked at this stage. Now we spend a lot of time clearing up the misperception that regulated and audited data can't be moved to the cloud. The reality is that because of the comprehensive control you have, regulated and audited data is actually better suited for the cloud. My team and many other teams at AWS work to help regulators, auditors, security teams, and their leadership reach the right technical depth and understanding to give them the confidence to move these kinds of workloads to AWS.

You’re hosting two sessions for re:Invent 2018. How did you choose your particular topics?

I’m co-presenting a session with Byron Cook, the director of ARG, on Automating Compliance Certification with Automated Mathematical Proof. This session stems from what I mentioned before, the trend that traditional assurance methods are becoming less effective as complexity grows. We’ll be talking about new assurance models. But the session isn’t just us saying, “Here’s what we did! Good luck! Go hire your own PhDs to figure this out.” We’re going to give customers the chance to experiment with automated reasoning in their own cloud environments. It’s a chalk talk, so it’ll be a smaller audience, which will let us go quite in-depth with some of our examples. The CEO of one of our assessors will also be there and will talk about what these changes mean for his firm.

I’m also hosting “peer problem-solving roundtable” at the Executive Summit that will focus on staying ahead of privacy regulation. GDPR, which went into effect in May 2018, made a lot of customers push to reach that date in a compliance state, but many didn’t and are still working on it. It’s a big challenge to sustain the effort around GDPR privacy and data protection. It’s not even like you can reach that state and then say, “Okay, we’re done.” It requires ongoing effort. Additionally, all kinds of laws are starting to be enacted all over the world that either match GDPR’s stringency or exceed it. So the session will be a workshop on how to deal with these challenges, and how companies can sustain their efforts and create frameworks that can handle additional regulation that might be enacted down the road.

What are you hoping that your audience will take away from your sessions?

For the automated reasoning session, I want people to leave with ideas about how they can tinker with automated reasoning and proofs of compliance in their own environments. This approach requires experimentation, so I want to empower people to just go ahead and start tinkering.

For the GDPR session, I want people to leave with some good ideas for how to proactively think about compliance — and with some specific actions they can take to move their companies’ privacy programs into a better state. The exact direction of our conversation will depend on the audience, since it’s an interactive workshop, but I’m hopeful that people will walk away with good ideas.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think that security and compliance will follow a trajectory similar to computing in the mid-2000s. Ten to 15 years ago, we all had PCs that required us to install software, which was all over the place in terms of quality — sometimes it worked on your laptop and sometimes it didn't. We went from that to mobile devices, where an entire installation is self-contained within a single app. There might be some limits on what you can do, in terms of exchanging data with other apps and systems, but everything you need as a user is contained within that app. It's a kit, rather than a bunch of building blocks. You launch it, set some configurations, and then forget about it. I think more of that is going to happen. The compliance scene is becoming exponentially more complex as we move forward with more services, more IT, and with multiple, diverse environments. We'll need ways of securing it all in a simple way. IT providers will need to offer more app-like experiences, in which we think of the user and what they need to do rather than just providing a bunch of building blocks.

What does cloud security mean to you, personally?

As a consumer, I care about security a lot. When I use an app that’s on the cloud, or access contacts or photos that are stored in the cloud, I’m concerned about it. I make sure that I use encryption when I can. I have random passwords that I don’t reuse. I follow the best practices that security professionals all know and use. But I’m always shocked by how many people don’t really think about these things, or don’t understand the risks involved with not securing your account or encrypting your data, or in using services that clearly don’t follow best practices. For me personally, cloud security is an essential consideration before I actually use or buy anything.

If you had to pick any other job, what would you do with your life?

I’d move into IT transformation. Moving from one IT environment to another involves a lot of organizational change management, from people and process to technology and projects. It’s super complex, and hardly anyone is truly excellent at it. So that’s what I’d get into. I find the complexity there fascinating. Organizational IT transformation takes all the complexity of tech, and then adds to it with the complexity of people, processes, and culture.

As a personal passion, I’d do search and rescue for people who’ve gotten into trouble hiking or biking or rock climbing. It’s a complex, real-world challenge with life-or-death stakes. If I could use my motorcycles to help achieve that, it would be better. It might help justify further motorcycle purchases and help my wife understand the wisdom in this.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.


Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS cloud, compliance with complex privacy regulations such as GDPR and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Masters of Information Systems Management and a Bachelors of Accounting from Brigham Young University, Utah.

How to manage security governance using DevOps methodologies

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/how-to-manage-security-governance-using-devops-methodologies/

I’ve conducted more security audits and reviews than I can comfortably count, and I’ve found that these reviews can be surprisingly open to interpretation (as much as they try not to be). Many companies use spreadsheets to explain and limit business risks, with an annual review to confirm the continued suitability of their controls. However, multiple business stakeholders often influence the master security control set, which can result in challenges like security control definitions being repeated with different wording, or being inconsistently scoped. Reviewing these spreadsheets is not especially fun for anyone.

I believe it’s possible for businesses to not only define their security controls in a less ambiguous way, but also to automate security audits, allowing for more rapid innovation. The approach I’ll demonstrate in this post isn’t a silver bullet, but it’s a method by which you can control some of that inevitable shift in threat evaluations resulting from changes in business and technical operations, such as vulnerability announcements, feature updates, or new requirements.

My solution comes in two parts and borrows some foundational methodologies from DevOps culture. If you’re not familiar with DevOps, you can read more about it here. AWS defines DevOps as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.” Sounds pretty good, right?

User story definitions of security controls

The process of developing security controls should start with a threat modeling exercise. There are some great tools out there that can help you develop very rich threat models for your solutions. I have a personal preference for using STRIDE with my customers, as it’s very widely accepted and has a low barrier to use, but you might also try PASTA, DREAD, VAST, Trike, or OCTAVE. All of these tools result in a risk register being published. A risk register is a prioritized list of risks to your business (or some component of your business or solution), their likelihood of being realized, and their impact if they were to be realized. Combining these factors results in a risk score. You’ll walk through the various mechanisms for those risks to be addressed based on their likelihood or impact. The mechanisms can be directive, preventative, detective or responsive controls; collectively, these are your security controls. (If you want to learn more about the difference between control types, check out the whitepaper AWS Cloud Adoption Framework: Security Perspective.)

Figure 1: The AWS Cloud Adoption Framework Security Perspective
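The risk-scoring step described above can be sketched in a few lines. The 1–5 scales and the example risks here are hypothetical, chosen only to illustrate how likelihood and impact combine into a prioritized register:

```python
# Minimal risk-register sketch: score = likelihood x impact.
# The 1-5 scales and the example risks are hypothetical.
risks = [
    {"risk": "Unhardened AMI started in a VPC", "likelihood": 4, "impact": 5},
    {"risk": "Credentials hard-coded in configuration", "likelihood": 3, "impact": 4},
    {"risk": "Excessive IAM permissions on a role", "likelihood": 2, "impact": 3},
]
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest score first: the top entries are the first candidates for controls.
register = sorted(risks, key=lambda r: r["score"], reverse=True)
```

The top entries of the register are the risks you'd address first with directive, preventative, detective, or responsive controls.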

Security controls should be carefully worded to avoid ambiguity. Each control typically takes the form of a single statement that requires some action or configuration. The action or configuration results in the documented risk either being mitigated in full or else leaving some residual risk, which can be managed further with other security controls as required by your business’s risk tolerance.

However, the end result can feel like a children’s game of “Telephone” in that an implemented control doesn’t always relate closely to the originally envisioned threat. Consider the following security control definition:

  • Only approved AMIs are allowed to be used.

On the surface, this looks like an easy preventative control to implement, but it immediately raises multiple questions, including:

  • Who approves AMIs, and how are they approved?
  • How can users get AMIs to use?
  • What constitutes “use”? Starting? Connecting to?

This is where DevOps comes in. Many DevOps practices use the notion of a “user story” to help define the requirements for solutions. A user story is simply a syntax for defining a requirement. In other words: As a <user>, I want to <requirement>, so that <outcome>. If you use the same approach to define your security controls, you’ll notice that a lot, if not all, of the ambiguity fades:

As a Security Operations Manager,
I want only images tagged as being hardened by the Security Operations Team to be permitted to start in a VPC
So that I can be assured that the solution is not vulnerable to common attack vectors.

Boom! Now the engineer trying to implement the security control has a better understanding of the intention behind the control, and thus a better idea of how to implement it, and test it.

Documenting these controls for your security stakeholders (legal, governance, CISO, and so on) in an accessible, agile project management tool rather than in a spreadsheet is also a good idea. While spreadsheets are a very common method of documentation, a project management tool makes it easier for you to update your controls, ensuring that they keep pace with your company’s innovations. There are many agile project management suites that can assist you here. I’ve used Jira by Atlassian with most of my customers, but there are a few other tools that achieve similar outcomes: Agilean, Wrike, Trello, and Asana, to name a few.

Continuous integration and evaluation of security controls

Once you’ve written your security control as a user story, you can borrow from DevOps again, and write some acceptance criteria. This is done through a process that’s very similar to creating a threat model in the first place. You’ll create a scenario and then define actions for actors plus expected outcomes. The syntax used is that we start by defining the scenario we’re testing, and then use “Given that <conditions of the test> When <test action> Then <expected outcome>.” For example:

Scenario: User is starting an instance in a VPC

“Given that I am logged in to the AWS Console
and I have permissions to start an instance in a VPC
When I try to start an instance
and the AMI is not tagged as hardened
Then it is denied” [Preventative]

“Given that I am logged into the AWS console
and I have permission to start an instance in the VPC
When I try to start an instance 
and the AMI is not tagged as hardened
Then an email is sent to the Security Operations Team.” [Responsive]

“Given that I am logged into the AWS Console
and I have permission to start an instance in the VPC
When I try to start an instance that is tagged as hardened
Then the instance starts.” [Allowed]

After you write multiple action statements and scenarios supporting the user story (both positive and negative), you can write them up as a runbook, an AWS Config rule, or a combination of both as required.

The second example acceptance criteria above would need to be written as a runbook, as it’s a responsive control. You wouldn’t want to generate a stream of emails to your security operations manager to validate that it’s working.

The other two examples relate to preventative controls, so they could be written as AWS Config rules, using a call to the IAM SimulateCustomPolicy API. An AWS Config rule allows your entire account to be continuously evaluated for compliance, essentially checking your control adherence on a sub-15-minute basis rather than through a yearly audit.
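As an illustration, a custom AWS Config rule is backed by a Lambda function that receives the changed configuration item and reports a compliance verdict. The sketch below (Python, though any Lambda-supported language works) evaluates the "hardened AMI" control; the `hardened` tag key and the exact evaluation logic are assumptions for this example, not a prescribed implementation:

```python
import json

HARDENED_TAG = "hardened"  # assumed tag key; adjust to your tagging standard


def evaluate_compliance(image_tags):
    """COMPLIANT only when the instance's AMI carries the hardened tag."""
    return "COMPLIANT" if HARDENED_TAG in image_tags else "NON_COMPLIANT"


def lambda_handler(event, context):
    # AWS Config invokes the rule with the changed configuration item.
    item = json.loads(event["invokingEvent"])["configurationItem"]

    import boto3  # AWS SDK; available in the Lambda runtime
    ec2 = boto3.client("ec2")
    ami_id = item["configuration"]["imageId"]
    images = ec2.describe_images(ImageIds=[ami_id])["Images"]
    tags = {t["Key"]: t["Value"] for img in images for t in img.get("Tags", [])}

    # Report the verdict back to AWS Config.
    boto3.client("config").put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": evaluate_compliance(tags),
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```

Non-compliant instances then surface on the AWS Config dashboard, which is the continuous-evaluation record described above.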

Committing those runbooks and AWS Config rules to a central code repository keeps the controls agile. For runbooks, you may want to adopt a lightweight markup format, such as Markdown, that you can check in like code. The defined controls can then sit in a CI/CD pipeline, allowing your security controls to be as agile as your pace of innovation.

Figure 2: A standard DevOps pipeline

There are numerous benefits to this approach:

  • You get immediate feedback on compliance with your security controls, and thus on your business's security posture.
  • Unlike traditional annual security compliance audits, you have a record showing not only that you're compliant now, but that you've been compliant all year. Publishing this evidence to support audit processes requires negligible effort on your part.
  • You may not have to take weeks out of your schedule to audit your security controls. Instead, you can check your AWS Config dashboard and run some simple procedural runbooks.
  • Your developers are now empowered to get early feedback on any solutions they’re designing.
  • Changes to your threat model can quickly radiate down to the applicable security controls and acceptance tests, again making security teams enablers of innovation rather than blockers.

One word of caution: you will inevitably have exceptions to your security controls. It's tempting to hardcode these exceptions, or to write configuration files that allow for exclusions to rules. However, this approach can create hidden complexity in your controls. If a resource is identified as non-compliant, it may be better to leave it that way and document it as an exception to be reviewed periodically. Remember to keep a clean separation between your risk evidence and your risk management processes here. Exception lists in code are difficult to maintain and ultimately mean that your AWS Config dashboard can show a distorted evaluation of your resources' compliance. I advise against codified exceptions in most cases. In fact, if you find yourself preparing to write out exceptions in code, consider that maybe your user story needs rewriting. And the cycle begins again!

Closing notes

As cloud computing becomes the new normal, agility and innovation are crucial behaviors for long-term success. Adopting the use of user stories and acceptance criteria to mature your security governance process empowers your business to plan for acceleration. I’ve used the DevOps approach with several customers in the finance sector and have seen a shift in the perception of how security governance affects teams. DevOps has the ability to turn security teams into enablers of business innovation.

If you want help finding practical ways to build DevOps into your business, please reach out to the AWS Security, Risk and Compliance Professional Services team. For information about AWS Config pricing, check out the pricing details page. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


Jonathan Jenkyn

Since graduating from the University of East Anglia in 1998, Jonathan has been involved in IT Security at many levels, from the implementation of cryptographic primitives to managing enterprise security governance. He joined AWS Professional Services as a Senior Security Consultant in 2017 and supports CloudHSM, DevSecOps, Blockchain and GDPR initiatives, as well as the People with Disabilities affinity group. Outside of work, he enjoys running, volunteering for the BHF, and spending time with his wife and 5 children.

AWS Security Profiles: Sam Elmalak, Enterprise Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sam-elmalak-enterprise-solutions-architect/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for three and a half years. I’m an Enterprise Solutions Architect, which means that I help enterprise customers think through their cloud strategy. I work with customers on everything from business goals and how to align those goals with their technology strategy to helping individual developers create well-architected cloud solutions. I also have an area of focus around security by helping a broader set of customers with their cloud journey and security practices.

How do you explain your job to non-tech friends?

I help my customers figure out how to use AWS and the cloud in a way that delivers business value.

What are you currently working on that you’re excited about?

From a project perspective, the AWS Landing Zone initiative (which also happens to be my 2018 re:Invent topic) is the most exciting. For the last two to three years, we’ve been providing guidance to help customers decide how to build environments in a way that incorporates best practices. But the AWS Landing Zone has a team that’s building out a solution that makes it easier for customers to implement those best practices. We’re no longer just telling customers, “Here’s how you should do it.” Instead, we’re providing a real implementation. It’s a prescriptive approach that customers can set up in just a few hours. This can help customers accelerate their cloud journey and reduce the work that goes into setting up governance. And the solution can be used by any company — including enterprises, educational institutions, small businesses, and startups.

What’s the most challenging part of your job?

I need to strike a balance between different initiatives, which means being able to focus on the right priorities for the moment. I don’t always get it right, but my hope is that I can always help customers achieve their goals. Another challenge is the sheer number of launches and releases—it can be difficult to stay on top of everything that’s being released while maintaining expert-level knowledge about it all. But that’s just a side effect of how quickly AWS innovates.

What’s your favorite part of your job?

The people I work with. I get to interact with so many smart, talented achievers and builders, and they’re always so humble and willing to help. Being around people like that is an amazing experience. Also, I get to learn nonstop. There are a lot of challenging problems to figure out, but there are also so many opportunities for growth. The job ends up being whatever you make of it.

In your opinion, what’s the biggest challenge facing cloud security right now?

Often, security organizations take the approach of saying “No.” They block things instead of making things happen by partnering with their business and development teams. I think the biggest challenge is trying to change that mindset. Skillset is also a challenge: Sometimes, people need to learn how to “do” security in the cloud in a way that keeps pace with their development team, and that can require additional skills. I believe training your entire organization to develop automation and approach problems and processes in an automated manner will help remove these barriers.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

I think we’ll see more automation, more tooling, more partners, and more products — all of which will make it simpler for customers to adopt the cloud and operate there in an efficient, secure manner. As customers adopting the cloud mature, I also think the job of the security practitioner will change slightly — the work will become a matter of how to use all the available tooling and other resources in the most efficient manner. I suspect that artificial intelligence and machine learning, predictive analytics, and anomaly detection will start to play a more prominent role, allowing customers to do more and be more secure. I also think customers will be starting to think more of security in terms of users and devices rather than perimeter security.

How did you choose your session topics for re:Invent 2018?

This is my third year holding sessions on establishing a Landing Zone. Back in 2016, I had a few customers ask me how to set up their AWS environment. I spent quite a bit of time researching but couldn't find a solid, well-rounded answer, so I took it upon myself to figure out what that guidance should include. I spoke with a number of more experienced people at AWS, and then proposed a re:Invent session around it. At the time, I thought it would sound boring and no one would want to attend. But after the session, feedback from customers was overwhelmingly positive, and I realized that people were hungry for this kind of foundational AWS info. We put a team together to develop more guidance for our customers. The AWS Landing Zone initiative leverages that guidance by implementing best practices built by a talented team whose vision is to make our customers' lives easier and more secure. Since then, re:Invent sessions on Landing Zone have expanded. We're up to at least 18 sessions, workshops, and chalk talks this year, and we've even added a tag (awslandingzone) so they're all searchable in the session catalog and customers can find them. In my presentations at re:Invent, a customer will talk through what their journey looked like and how the AWS Landing Zone solution has helped them.

What are you hoping that your audience will take away from these sessions?

I want customers to start thinking differently about a few areas. One is how to enable their organizations to innovate, build and release services/products more quickly. To do that, central teams need to think of the rest of their organization as their customers, then think of ways to onboard those customers faster by means of automated, self-service processes. Their idea of an application or a team also needs to be smaller than the traditional definition of an entire business unit. I actually want customers to think smaller — and more agile. I want them to think, “What if I have to accommodate thousands of different projects, and I want them all in different accounts and isolated workspaces, sitting under this Landing Zone umbrella?”

Thinking about that type of design and approach from the beginning will help customers start, innovate, and move forward while avoiding the pitfalls of trying to fit everything into a single AWS account. It’s a cultural mindshift. I want them to start thinking in terms of the people and the groups within their organizations. I want them to think about how to enable those groups and get them to move forward and to spend less time focused on how to control everything that those groups do. I want people to think of the balance between governance/security and control.

Any tips for first-time conference attendees?

Plan to do a lot of walking and have comfortable shoes. If you’ve signed up for sessions, get there early and remember that there are at least five venues this year — it’s important to factor in travel time. Other than that, I’d say visit the partner expo, meet other customers, and learn from each other. And ask us questions; we’ll do everything we can to help. Most importantly, enjoy it and learn!

If you had to pick any other job, what would you want to do with your life?

My current role comes down to helping empower people, which I love, so I’d look for a way to replicate that feeling elsewhere by helping people realize their talents and potential.

As a backup plan, I’d downsize, go live somewhere cheap and enjoy life, nature, music and tango…

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.


Sam Elmalak

Sam is an Enterprise Solutions Architect at AWS and a member of the AWS security community. In addition to helping customers solve their technical issues, he helps customers navigate organizational complexity and address cultural challenges. Sam is passionate about enabling teams to apply technology to address business challenges and unmet needs. He’s largely an optimist and a believer in people’s abilities to thrive and achieve amazing things.

Use AWS Secrets Manager client-side caching libraries to improve the availability and latency of using your secrets

Post Syndicated from Lanre Ogunmola original https://aws.amazon.com/blogs/security/use-aws-secrets-manager-client-side-caching-libraries-to-improve-the-availability-and-latency-of-using-your-secrets/

At AWS, we offer features that make it easier for you to follow the AWS Identity and Access Management (IAM) best practice of using short-term credentials. For example, you can use an IAM role that rotates and distributes short-term AWS credentials to your applications automatically. Similarly, you can configure AWS Secrets Manager to rotate a database credential daily, turning a typical long-term credential into a short-term credential that is rotated automatically. Today, AWS Secrets Manager introduced a client-side caching library for Java and a client-side caching library for Java Database Connectivity (JDBC) drivers that make it easier to distribute these credentials to your applications. Client-side caching can help you improve the availability and latency of using your secrets. It can also help you reduce the cost associated with retrieving secrets. In this post, we'll walk you through the following topics:

  • Benefits of the Secrets Manager client-side caching libraries
  • Overview of the Secrets Manager client-side caching library for JDBC
  • Using the client-side caching library for JDBC to connect your application to a database

Benefits of the Secrets Manager client-side caching libraries

The key benefits of the client-side caching libraries are:

  • Improved availability: You can cache secrets to reduce the impact of network availability issues, such as increased response times and temporary loss of network connectivity.
  • Improved latency: Retrieving secrets from the cache is faster than retrieving secrets by sending API requests to Secrets Manager within a Virtual Private Network (VPN) or over the Internet.
  • Reduced cost: Retrieving secrets from the cache can reduce the number of API requests made to and billed by Secrets Manager.
  • Automatic distribution of secrets: The library updates the cache periodically, ensuring your applications use the most up-to-date secret value, which you may have configured to rotate regularly.
  • Update your applications to use client-side caching in two steps: Add the library dependency to your application and then provide the identifier of the secret that you want the library to use.

Overview of the Secrets Manager client-side caching library for JDBC

Java applications use JDBC drivers to interact with databases and connection pooling tools, such as c3p0, to manage connections to databases. The client-side caching library for JDBC operates by retrieving secrets from Secrets Manager and providing them to the JDBC driver transparently, eliminating the need to hard-code the database user name and password in the connection pooling tool. To see how the client-side caching library works, review the diagram below.

Figure 1: Diagram showing how the client-side caching library works

When an application attempts to connect to a database (step 1), the client-side caching library calls the GetSecretValue API (step 2) to retrieve the secret (step 3) required to establish this connection. Next, the library provides the secret to the JDBC driver transparently to connect the application to the database (steps 4 and 5). The library also caches the secret. If the application attempts to connect to the database again (step 6), the library retrieves the secret from the cache and calls the JDBC driver to connect to the database (steps 7 and 8).

The library refreshes the cache every hour. The library also handles stale credentials in the cache automatically. For example, after a secret is rotated, an application’s attempt to create new connections using the cached credentials will result in authentication failure. When this happens, the library will catch these authentication failures, refresh the cache, and retry the database connection automatically.
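The refresh-and-retry behavior described above can be illustrated with a small sketch. The actual library is Java and also schedules background refreshes; this Python analogue (all names are illustrative, not the library's API) shows only the TTL check and the single retry after an authentication failure:

```python
import time


class SecretCache:
    """Illustrative analogue of the caching library's behavior (not its real API)."""

    TTL_SECONDS = 3600  # the library refreshes the cache roughly hourly

    def __init__(self, fetch_secret):
        self._fetch = fetch_secret  # e.g. a call to Secrets Manager GetSecretValue
        self._value = None
        self._fetched_at = 0.0

    def get(self, force_refresh=False):
        expired = time.monotonic() - self._fetched_at > self.TTL_SECONDS
        if force_refresh or self._value is None or expired:
            self._value = self._fetch()
            self._fetched_at = time.monotonic()
        return self._value


def connect_with_retry(cache, connect):
    """On an auth failure (e.g. after rotation), refresh the cache once and retry."""
    try:
        return connect(cache.get())
    except PermissionError:  # stands in for a JDBC authentication failure
        return connect(cache.get(force_refresh=True))
```

After a rotation, the first connection attempt with the stale cached secret fails, the cache is refreshed, and the retry succeeds with the new value, which is exactly the failure-handling the library performs for you.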

Use the client-side caching library for JDBC to connect your application to a database

Now that you’re familiar with the benefits and functions of client-side caching, we’ll show you how to use the client-side caching library for JDBC to connect your application to a database. These instructions assume your application is built in Java 8 or higher, uses the open-source c3po JDBC connection pooling library to manage connections between the application and the database, and uses the open-source tool Maven for building and managing the application. To get started, follow these steps.

  1. Navigate to the Secrets Manager console and store the user name and password for a MySQL database user. We'll use the placeholder CachingLibraryDemo to denote this secret, and the placeholder ARN-CachingLibraryDemo to denote the ARN of this secret. Remember to replace these with the name and ARN of your secret. Note: For step-by-step instructions on storing a secret, read the post on How to use AWS Secrets Manager to rotate credentials for all Amazon RDS database types.
  2. Next, update your application to consume the client-side caching library jar from the Sonatype Maven repository. To make this change, add the following profile to the ~/.m2/settings.xml file.

  3. Update your Maven build file to include the Java cache and JDBC driver dependencies. This ensures your application will include the relevant libraries at run time. To make this change, add the following dependency to the pom.xml file.
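The dependency block might look like the following sketch. The coordinates and version shown are assumptions for illustration; check the caching library's README or Maven Central for the current values:

```xml
<!-- Hypothetical coordinates and version; verify against the library's README -->
<dependencies>
  <dependency>
    <groupId>com.amazonaws.secretsmanager</groupId>
    <artifactId>aws-secretsmanager-jdbc</artifactId>
    <version>1.0.0</version>
  </dependency>
</dependencies>
```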

  4. For this post, we assume your application uses c3p0 to manage connections to the database. Configuring c3p0 requires providing the database user name and password as parameters. Here’s what the typical c3p0 configuration looks like:
    # c3p0.properties
    c3p0.user=<database user name>
    c3p0.password=<database password>
    c3p0.jdbcUrl=jdbc:mysql://my-sample-mysql-instance.rds.amazonaws.com:3306

    Now, update the c3p0 configuration to retrieve this information from the client-side cache by replacing the user name with the ARN of the secret and adding the prefix jdbc-secretsmanager to the JDBC URL. (You can provide the name of the secret instead of the ARN.) Because the library retrieves the password from Secrets Manager, you can remove the c3p0.password entry.

    # c3p0.properties
    c3p0.user=ARN-CachingLibraryDemo
    c3p0.jdbcUrl=jdbc-secretsmanager:mysql://my-sample-mysql-instance.rds.amazonaws.com:3306

Note: In our code snippet, the JDBC URL points to our database. Update the string my-sample-mysql-instance.rds.amazonaws.com:3306 to point to your database.

You’ve successfully updated your application to use the client-side caching library for JDBC.


In this post, we’ve shown how you can improve availability, reduce latency, and reduce the cost of using your secrets by using the Secrets Manager client-side caching library for JDBC. To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secrets Manager or refer to the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.


Lanre Ogunmola

Lanre is a Cloud Support Engineer at AWS. He enjoys the culture at Amazon because it aligns with his dedication to lifelong learning. Outside of work, he loves watching soccer. He holds an MS in Cyber Security from the University of Nebraska, and CISA, CISM, and AWS Security Specialist certifications.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

AWS Security Profiles: Adrian Cockcroft, VP of Cloud Architecture Strategy

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-adrian-cockcroft-vp-of-cloud-architecture-strategy/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for two years, based out of the Palo Alto office in California. I tell people that I have three jobs. One is similar to the kind of thing that Werner Vogels does: I present keynotes at AWS summits. I’ve done fourteen keynotes so far, the biggest in New York last year and Tokyo this year. This gives me a calendar that takes me around the world, where I also spend a lot of time visiting customers, meeting with sales teams, gathering input, and talking to people about their architectural challenges, cloud migration challenges, and organizational challenges. I specialize in the architecture of highly available, multi-region, redundant use cases. That’s the second job. The third job is that I’ve recruited and now manage the team that looks after open source engagement from AWS (and to some extent from Amazon as a whole, as we support a few projects that are broader than AWS itself). We hired a bunch of senior, principal-level technologists who are open source specialists in different areas, and one of the most well-known things that has come out of this is AWS joining the Cloud Native Computing Foundation. I’m one of two board members representing AWS. My team has also created an open source web page that describes the work that AWS is doing in open source. We also have an open source blog.

What are you currently working on that you’re excited about?

My current focus is on resilience, particularly as it pertains to financial services. The problem that many financial services companies face is that their current infrastructure consists of data centers full of mainframes. But mainframe experts are retiring, and there aren’t very many millennial mainframe developers and operations people around. The talent pool is disappearing. So people at these institutions are beginning to ask themselves, “We use these mainframes to move trillions of dollars around. How do we run something like that on the cloud securely, and with extreme resilience?” These aren’t rhetorical questions. Financial institutions need to comply with government audits and standards and compliance rules. In fact, there’s a designation for these organizations — Strategically Important Financial Institutions (SIFI) — which means that they’re regulated in a very special way due to events like 9/11 and the 2008 market crash, events that can introduce systemic risk across the industry. AWS has the Well-Architected Guide to describe our current availability architecture, and we are deeply involved with some of these customers to upgrade it for SIFI workloads. The team is working across the sales organization, solutions architecture, and the service teams. We’re currently focused on the availability side of the question, but the security piece is also important: We’ll need the right options, from key management to private endpoints, to make it all viable. It’s a really interesting project, and one I’m deeply involved with.

How did you choose your particular topics for re:Invent this year?

I have one talk in the container track on chaos engineering, which I’m co-presenting with an engineer from one of our partners, Gremlin. Ana Medina is going to do a live demo of trying to break some container orchestration, and I’m going to do the setup, which is how we see chaos engineering playing out. Chaos engineering is a hot topic with a lot of customers. The high-level way of thinking about it is that most large customers have a failover strategy for their backup data centers. But most of them don’t test it very often: Testing is a big pain in the neck, it’s not reliable when you need it, and it’s expensive. However, if you’re failing over between two cloud regions, your APIs are the same, your capabilities are the same, and a lot of the things that make testing hard involve the drift between data centers. AWS just doesn’t have those problems. We’re managing all that out for you. This results in a highly automatable, productized, safe way to do failovers, which means you can test a lot more frequently. Instead of having one annual test, you can run them every quarter, or every month, or every week. And you’re doing low-level, fine-grain testing against individual instances and services. The upshot is that you end up with a much more resilient system, rather than something that once a year you come along and say, “I’m going to see if I can get it through the audit.” There are analogs to that in the security space as well: We’re moving from annual audits of your security architecture to continuous security where you’ve got tamperproof logs of configuration so you can prove that your system has never been in an insecure state, for example, rather than inspecting it every now and again and asking everybody if they’re processing tickets properly.

My second session is about trends in digital transformation. As I meet with customers around the world, I often hear them say, “We’re different than everyone else; we have all of these unique challenges.” And when they start to list their challenges, the list sounds exactly like the lists from twenty other companies. So eventually, I put all these challenges into a presentation that says, “Here are the four things that are blocking you from your technology transition.” This isn’t about adopting any particular set of AWS products. It’s really about the step before that: If you can’t absorb technological change, if you can’t do a cloud migration, if you can’t be agile, then you can’t keep up with the rest of the industry. What’s driving this digital transformation is the connectedness of customers and devices. Pretend you’re a manufacturing company that makes door locks. Traditionally, you’d put them in boxes, ship them off, and hope to never see them again — if products come back, it means they didn’t work. Now pretend you’re manufacturing a connected door lock — if you don’t hear from your door locks every five minutes, it’s a problem. It means your product is either broken, or the customer has stopped using it. Either way, the connected version requires you to continually monitor and understand how people are actually using it—and this shift applies to a huge number of industries. So I’ll be talking about how to navigate the various organizational and cultural blockers that exist within many companies.

What’s the most common problem you see customers running into when it comes to cloud security and compliance?

Over and over again, I see people doing data center security that’s largely enforced by network architecture. They have these complex sets of networks with firewalls, and they think if you’re in this box here, and we have a firewall around you, you’re safe. This segmentation model in data centers is largely based on network structure. Then, when customers start to move to the cloud, their security teams say, “We don’t care what you’re doing in the cloud as long as it follows this structure that we use in the data center.” This means you need to go off and build incredibly complex structures to resemble data center structures, all in order to get sign-off from the security teams. But once these systems are running, you’ll quickly find they’re much too complex — and completely the wrong architecture for cloud and cloud security. But it’s almost like you have to go through this step. It would be nice if we could convince security teams to buy into cloud best practices from the start and to use larger, flatter networks with other mechanisms for segmentation.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

Five or ten years ago, the cloud was a subset of the functionality of the data center. We’ve now flipped this: It’s hard to build a data center that’s even a pale imitation of a subset of an AWS account. We just have so much scalable functionality. I think that five years out, it will be difficult to even pass an audit in a data center. People are going to say, “You’re running that in a data center? I can’t guarantee anything about your configuration!” And you’re going to struggle to keep your data center from being overrun by hackers because you can’t control what’s going on. You’ll eventually hit the point where you can’t know enough about the data center to secure it. So you’ll move to the cloud, where, with the proper hygiene, you’ll be able to know everything. You can log everything that’s ever happened in a tamperproof log, and that ability allows you to make strong assertions.

I also think we’re starting to get governments around the world to support banking in the cloud. We’re still in the early stages, since this also requires teaching auditors how to understand what a banking audit looks like in the cloud: The goals are the same, but the implementation of patterns is different. We’re also seeing people using AWS Managed Services to create a PCI-compliant configuration from scratch via an API call, within a few hours. And then the auditor comes in, says, “You didn’t mess anything up. You’re done!” and walks away. I think these highly audited systems will start to be built in an extremely automated, repeatable way.

What does cloud security mean to you, personally?

I bought a house last year and have been installing all these IoT things, like door locks, lights, blinds, and yard sprinklers. These are all cloud services. I think we’re getting to a point where your personal security is tied up into the cloud. The security of all those items, which used to be physical security, is moving toward a cloud-based security model that’s going to touch people more and more as it all rolls out.

The AWS Security team is hiring! Want to find out more? Check out our career page.



Adrian Cockcroft

Adrian Cockcroft has had a long career working at the leading edge of technology, and is fascinated by what happens next. In his role at AWS, Cockcroft is focused on the needs of cloud native and “all-in” customers, and leads the AWS open source community development team.

AWS Security Profiles: Misty Haddox, AWS Customer Audit Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-misty-haddox-aws-customer-audit-manager/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for about four years. I joined the Compliance team in 2013, where I built processes and established the groundwork for our external global audit programs and built our first AWS controls framework. After that, I left AWS for a year to join a software company, where I worked with some cool folks and was able to educate and help determine their strategy for all things compliance. The opportunity gave me great insight into who I am and reaffirmed my passion for being a builder and delivering! So I came back to AWS and joined the Professional Services team within Security, Risk, and Compliance, working directly with customers who are at varying stages of their AWS cloud journey. I’ve actually just started a new role on the Security Assurance team, where I’ll be managing customer audits and am looking forward to continuing my AWS journey.

What’s the most challenging part of your job?

It’s sometimes challenging to convince customers that they need to get all their teams involved in security and compliance. I’ll be supporting customer EBCs (Executive Briefing Centers) at re:Invent, with my topic focused on “compliance in the cloud,” but the attendees joining the meetings from the customer side are IT specialists and chief technology officers; I don’t see anyone from the compliance teams involved. It’s really hard to get customers to avoid operating in siloed environments. There are always going to be upstream and downstream impacts when decisions are made without a full understanding of your security and compliance landscape. We have this DevSecOps model at AWS, in which developers, security, and operations teams all work together on initiatives, and when we encourage customers to take a similar approach, we often get a response like, “That sounds great, but how does it really work?” But it does work — it’s what allows AWS to innovate so quickly. It’s so important for teams to talk to each other and work together to build integrated solutions.

What’s your favorite part of your work?

I have an innate ability to find anything wrong with something. It’s a unique skillset. I used to get frustrated with it, because it made me feel like a canary in a coal mine — but there’s actually value in this ability. It gives me the opportunity to dive into things and fix them before they become bigger issues, which I enjoy very much. I like fixing things. And I like having the ability to “look around corners” and understand what needs to be established in order to support or develop new programs, or to help existing programs scale.

What changes have you seen across the cloud security and compliance landscape over the course of your career?

I’ve worked in this field for 20 years, and compliance isn’t seen as a blocker or a bad word any more. People are starting to see it as a business enabler, which is really refreshing. Security in the nineties was IT-focused and very hands-on: You had a tangible thing you could touch, and policies drove the ways in which you hardened your posture. But now, it’s much more about interpretation and establishing your environment based on whatever processing is occurring within it. There’s no single right answer. If you practice security by design, and you understand your environment and your boundaries, and you build controls to support that, then that drives security, and you’re going to be compliant. This approach enables you: you get the freedom to be more innovative in the cloud security space.

What’s the most common misperception you encounter about cloud security/compliance?

I sometimes work with customers who think that they’ll inherit all the compliance certifications that AWS provides. People assume that, because AWS has these, they don’t need to worry about anything. But that’s not the case. The controls you need to establish in your particular environment are going to be unique, based on how you build, what kind of data you have, and how you want to use it — compliance isn’t one-size-fits-all.

You’re co-presenting two different sessions for re:Invent 2018. How did you choose your topics?

The sessions are How Enterprises Are Modernizing Their Security, Risk Management, & Compliance Strategy, which I’m co-presenting with David McDermitt and Balaji Palanisamy, and Confidently Execute Your Cloud Audit: Expert Advice, which I’m co-presenting with Kristen Haught and Devendra Awasthi (from Deloitte).

Both are topics I’m super passionate about. At AWS, we talk a lot about the Shared Responsibility Model. But as we’ve deployed more services further up the stack, the lines of demarcation around responsibility have changed, and a lot of customers are uncomfortable determining what they’re responsible for. I’m using re:Invent as a chance to dive into that shared responsibility model with customers. It’s already the crux of every conversation we have with any customer at AWS, but we don’t tell them exactly what to do. Customers will ask what their controls should be, without understanding that it doesn’t start like that. The first step is to architect your environment and understand how it’s being engineered — because, depending on how you put the pieces together, the responsibility changes. So I’m using my sessions as a chance to really dive into the shared responsibility model with customers.

What are you hoping that your audience will take away from your sessions?

For the How Enterprises Are Modernizing Their Security, Risk Management, & Compliance Strategy session, I hope that customers walk away understanding that all teams need to be involved in the security and compliance conversation. It’s important not to operate in a silo.

For the Confidently Execute Your Cloud Audit: Expert Advice session, I want people to walk away understanding how to dive into control responsibility, and how to apply that knowledge once they’re back in their work environment, so they can look at their SOC report, if they issue one, or maybe determine if they even need one, and have a methodology that they can apply.

If you had to pick any other job, what would you want to do with your life?

I would love to be a crime scene investigator. I’m very fascinated by true crime. I think it’s the challenge of putting the pieces of the puzzle together. I’m also fascinated by people, and by the underlying sociology and psychology.



Misty Haddox

Misty is a passionate builder who’s learning to not take herself too seriously. She believes in the AWS mission and that we should raise the bar in all we do. She strives to look at any opportunity or experience, no matter what it is, as a way to learn and grow!