
Privacy video: Innovating securely

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/privacy-video-innovating-securely/

I’m pleased to share a video of a conversation about privacy I had with my colleague Laura Dawson, the North American Lead at the AWS Institute. Privacy is becoming more of a strategic issue for our customers, similar to how security is today. We discussed how, while the two topics are similar in some ways, they also have important differences. We also talked about the importance of building a strong privacy program, and how AWS helps customers safeguard privacy while still taking advantage of digital modernization opportunities.

The differences between security and privacy aren’t fully understood in some industries. Security principles are better known in the industry – security involves considering the confidentiality, integrity, and availability of information. It’s about keeping unauthorized parties away from your data, and about making sure access to your systems and data is appropriate. Privacy, by contrast, is about control of data through its entire lifecycle, specifically personally identifiable information (PII). That includes the collection, use, transmission, and deletion of that data. Managing the privacy of PII resembles security in its access-control aspect, but privacy is also about making sure you always have granular control over what is happening to that PII, from creation and collection through to deletion.

Unlike security, which is now commonly recognized as a core business function, privacy practices and principles are still in the early stages of being widely accepted. This is why AWS advocates for organizations to follow the principles of Privacy by Design, to ensure that privacy processes and controls are baked into everything you do.

I also discussed with Laura some of the privacy trends I see happening in the tech industry right now, such as homomorphic encryption, anonymization, and PII discovery tools. The privacy challenges organizations face today, however, aren’t just technology challenges; they’re also business challenges, of how to get value from the data you control, in a way that meets privacy best practices and accounts for your customers’ interests.

For more about these and other privacy topics, check out the video of my conversation with Laura. To learn more about privacy at AWS, check out the Data Privacy Center and Data Protection at AWS.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS cloud, and he leads the AWS trade and product compliance team.

Hardening the security of your AWS Elastic Beanstalk Application the Well-Architected way

Post Syndicated from Laurens Brinker original https://aws.amazon.com/blogs/security/hardening-the-security-of-your-aws-elastic-beanstalk-application-the-well-architected-way/

Launching an application in AWS Elastic Beanstalk is straightforward. You define a name for your application, select the platform you want to run it on (for example, Ruby), and upload the source code. The default Elastic Beanstalk configuration is intended to be a starting point which prioritizes simplicity and ease of setup. This allows you to quickly deploy a web application on the AWS Cloud. For increased security of production applications, we recommend additional steps you can take to complement the default configuration.

In this post we will describe our recommendations, which are aligned with the AWS Well-Architected Framework, to help you harden the security posture of your Elastic Beanstalk applications. The Well-Architected Framework provides best practices to help you build secure, high-performing, resilient, and efficient infrastructure for your applications and workloads. Focusing on the Security pillar of the framework, we will walk you through additional configurations for increased network protection and protection of data at rest and in transit.

Introduction to Elastic Beanstalk

Elastic Beanstalk is an orchestration service that provisions and operates infrastructure in the AWS Cloud. You can use Elastic Beanstalk to deploy and manage applications in the cloud. Elastic Beanstalk supports many programming languages and frameworks, such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. Elastic Beanstalk can help you decrease overhead by handling tasks such as resource provisioning, load balancing, auto scaling, and health monitoring. You only need to upload the application code. Elastic Beanstalk automatically integrates with other AWS services such as Amazon CloudWatch for logging and monitoring.
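While this post walks through the console, the same resources can be created programmatically. Below is a minimal boto3 sketch (not part of the original walkthrough); the application name, environment name, Region, and solution stack name are hypothetical placeholders.

```python
# A minimal sketch of creating an Elastic Beanstalk application and a
# load-balanced environment with boto3. Names and the solution stack are
# hypothetical; call list_available_solution_stacks() for valid stacks.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

eb.create_application(ApplicationName="my-ruby-app")

eb.create_environment(
    ApplicationName="my-ruby-app",
    EnvironmentName="my-ruby-env",
    # Placeholder; pick a current Ruby stack from list_available_solution_stacks()
    SolutionStackName="64bit Amazon Linux 2 v3.4.8 running Ruby 2.7",
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "LoadBalanced",  # load balancer + Auto Scaling group
        },
    ],
)
```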

Target scenario for this post

This post shows you how to achieve the following things:

  • Launch a highly available Ruby application on Elastic Beanstalk.
  • Attach a MySQL database to the application using Amazon RDS.
  • Protect your sensitive data.
  • Align your application’s security configuration to the Security pillar of the Well-Architected Framework.

Figure 1: Target architecture for the two-tier web application deployed using Elastic Beanstalk

Figure 1 depicts the target architecture, which is a two-tier web application. Clients resolve the website’s domain name using the Domain Name System (DNS) service Amazon Route 53. An Application Load Balancer (ALB) is used to direct traffic to and from the Amazon EC2 instances running the web servers. The EC2 instances are deployed in an Auto Scaling group in private subnets. To ensure that clients can always access the application, the infrastructure is set up so that it can automatically deal with system failures and scale up when there’s an increase in demand. This is done by placing the EC2 instances in the Auto Scaling group across two Availability Zones for high availability. There is also an RDS MySQL database deployed in a private subnet, which is replicated to a standby instance in another Availability Zone for disaster recovery. Logs and metrics are sent to CloudWatch, and Amazon Simple Storage Service (Amazon S3) is used to store logs and source code. Finally, a Network Address Translation (NAT) gateway and an internet gateway manage inbound and outbound traffic to the subnets.

The following sections focus on the four main security configurations numbered in Figure 1:

  1. Deploying the EC2 and RDS instances from the web and database layer in private subnets.
  2. Encrypting the logging and source code S3 bucket.
  3. Encrypting the RDS instance and its standby replica.
  4. Encrypting data in transit by using the HTTPS protocol.

Strengthening your Elastic Beanstalk application based on the Security pillar of the Well-Architected Framework

To harden the security of your Elastic Beanstalk application, you can build on top of the default setup to incorporate the following security best practices:

  1. Protect networks – In the default Elastic Beanstalk setup, the EC2 instances are deployed together with an Application Load Balancer (ALB) in a public subnet. In most cases, EC2 instances do not need to be directly accessible from the internet and therefore should be placed in private subnets. The ALB should be left in the public subnet to provide a single entry point for inbound traffic from external clients and forward this traffic to the instances over a private network. If these instances need to make a direct outbound connection to the internet, for example to call third-party APIs, we recommend creating a Network Address Translation (NAT) gateway in a public subnet, and adding a route from the private subnet where your instances are running to the NAT gateway. Your instances can then send requests to the internet and receive corresponding responses through the NAT gateway, but the instances themselves will not be directly accessible from the internet. For more options on interactively accessing instances, see AWS Systems Manager.
  2. Protect data at rest – We recommend encrypting data at rest. Elastic Beanstalk does not encrypt data stored in Amazon S3 buckets by default, so you should modify the default setup to encrypt the bucket. Similarly, when you set up an RDS database directly through Elastic Beanstalk, you don’t have the option to encrypt the database, so you need to set up your database independently and enable encryption.
  3. Protect data in transit – Web traffic sent between your clients and the ALB over the internet should use HTTPS rather than HTTP. The HTTPS protocol creates an encrypted connection through TLS (Transport Layer Security) between client and server before sending any web traffic. The default setup in Elastic Beanstalk uses HTTP, so the choice to use HTTPS, and how to enable it, sits with the user. Setting up HTTPS can be done with SSL/TLS server certificates (X.509 certificates), which you manage inside AWS using AWS Certificate Manager or through an external provider. The ALB supports TLS termination, which means that it takes care of the encryption and decryption of the traffic communicated with clients, and then forwards the traffic to the instances over the AWS private network.

Implementing the recommended best practices for your application

To implement the best practices from the section above, you will take the following steps to launch your application, protect networks and to protect data at rest and in transit:

  1. Create your own VPC with public and private subnets.
  2. Create a highly-available Elastic Beanstalk application.
  3. Modify the configuration to deploy instances in private subnets.
  4. Encrypt the log and source code bucket.
  5. Launch an encrypted RDS instance.
  6. Set up encryption in transit by using the HTTPS protocol.

Create your VPC with public and private subnets

  1. In the AWS Management Console, go to VPC, and select Launch VPC wizard.
  2. Select the VPC with Public and Private Subnets option on the left-hand side, as shown in Figure 2.

Figure 2: Launch VPC wizard

  3. Click Select.
  4. Adjust the VPC specifications as needed. Specify a CIDR range and a name for the VPC. For the private and public subnets, you additionally need to specify the subnet CIDR ranges, as well as which Availability Zone they should be created in. For instances in the private subnet to access the internet, the setup creates a NAT gateway that resides in the public subnet, which requires you to specify an Elastic IP ID. If you don’t have an Elastic IP yet, in the VPC console go to Elastic IP addresses, choose Allocate Elastic IP address, and then choose Allocate. Use the Allocation ID in the VPC wizard.
  5. Select Create VPC.
  6. Because the target architecture is highly available, another set of public and private subnets needs to be created in a different Availability Zone from the subnets you configured in step 4. To do this, go to the Subnets section in the VPC console. Choose Create subnet, select the VPC you just created, and add a new subnet, making sure to assign it to a different Availability Zone. Choose Add new subnet to add a second subnet on the same configuration page. When done, choose Create subnet.
  7. By default, the subnets use the main route table, which treats them as private subnets. To make one of the newly created subnets public, it needs to be associated with the route table that has a route to the internet gateway. Go to the Route Tables section in the VPC console and find the route table associated with your newly created VPC that has the route to the internet gateway; this should be the route table with one explicit subnet association. Click the route table’s ID and verify that there’s a route to a target with the igw- prefix. Then, under the Subnet associations tab, edit the explicit subnet associations to include the newly created subnet.

After this is done, you should have two public and two private subnets across two Availability Zones for your new VPC.
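If you prefer to script this network setup, the following boto3 sketch builds the same layout for a single Availability Zone; repeat the subnet and NAT gateway steps for the second zone. The Region, Availability Zone, and CIDR ranges are illustrative assumptions.

```python
# Condensed boto3 sketch of the VPC layout above: one public and one private
# subnet in a single AZ, with an internet gateway and a NAT gateway.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24",
                           AvailabilityZone="eu-west-1a")["Subnet"]["SubnetId"]
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                            AvailabilityZone="eu-west-1a")["Subnet"]["SubnetId"]

# Internet gateway plus a public route table makes the first subnet public
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc_id)
public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=public_rt, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw)
ec2.associate_route_table(RouteTableId=public_rt, SubnetId=public)

# A NAT gateway in the public subnet gives private instances outbound access
eip = ec2.allocate_address(Domain="vpc")["AllocationId"]
nat = ec2.create_nat_gateway(SubnetId=public,
                             AllocationId=eip)["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat])

private_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=private_rt, DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat)
ec2.associate_route_table(RouteTableId=private_rt, SubnetId=private)
```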

Create a highly available Elastic Beanstalk application

The following steps will show you how to create a highly available Elastic Beanstalk application.

  1. In the AWS Management Console, choose Elastic Beanstalk, and then, in the Get Started section, select Create Application.
  2. Provide a name for the application and define the platform it should run on. In our example, the platform is Ruby.
  3. Provide the source code for your web application or use the sample code provided in the Elastic Beanstalk setup console.
    • To use the sample code, select Sample Application.
    • To upload your own source code, in the Source code origin section, for Version label, enter the name of your application code, and then for Source code origin, choose Local file, select Choose File, and navigate to the file that you want to use, as shown in Figure 3.

Figure 3: Source code origin section of the Elastic Beanstalk console

  4. Select Configure more options.
  5. Depending on your application’s needs, you can select a configuration preset that includes recommended values for several configurations. Select High Availability to include a load balancer and auto scaling for multiple Availability Zones.

Deploy your instances in private subnets

In this step, you will set up Elastic Beanstalk to deploy the Application Load Balancer in public subnets, to provide a point of access for inbound traffic from the internet, and to deploy the EC2 instances in private subnets.

While still in the Configure more options settings:

  1. In the Network section, select Edit, and then, from the dropdown list, select the VPC that you just created.
  2. To deploy your instances in private subnets, in the Load balancer settings section, for Load balancer subnets, check the box next to each public subnet, and in the Instance settings section, for Instance subnets, check the box next to each private subnet, as shown in Figure 4.

Figure 4: Elastic Beanstalk subnet settings for Load Balancer and instances

  3. Select Save.
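The same subnet placement can also be expressed in code through Elastic Beanstalk’s aws:ec2:vpc configuration namespace. A minimal boto3 sketch, with a hypothetical environment name and resource IDs:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-ruby-env",  # hypothetical environment name
    OptionSettings=[
        {"Namespace": "aws:ec2:vpc", "OptionName": "VPCId",
         "Value": "vpc-0123456789abcdef0"},
        # EC2 instances go in the private subnets...
        {"Namespace": "aws:ec2:vpc", "OptionName": "Subnets",
         "Value": "subnet-priv-az1,subnet-priv-az2"},
        # ...while the ALB stays in the public subnets
        {"Namespace": "aws:ec2:vpc", "OptionName": "ELBSubnets",
         "Value": "subnet-pub-az1,subnet-pub-az2"},
        # Don't give instances public IP addresses
        {"Namespace": "aws:ec2:vpc", "OptionName": "AssociatePublicIpAddress",
         "Value": "false"},
    ],
)
```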

Encrypt the log and source code bucket and block public access

After Elastic Beanstalk has created the application, you can encrypt the S3 bucket.

  1. Open the S3 console and choose the bucket that was created automatically as part of the Elastic Beanstalk setup. The bucket name will have the following structure: elasticbeanstalk-region-account-id.
  2. To encrypt the bucket, choose Properties, and then, for Default Encryption, select Edit, and for Server-side encryption, select Enable.
  3. For Encryption key type, you can use an S3-managed encryption key by selecting Amazon S3 key (SSE-S3). If you want more control over the keys used for encryption, select AWS Key Management Service key (SSE-KMS), which is an encryption key protected by AWS Key Management Service (KMS). Here, you can specify to use an AWS managed key or one of your own Customer managed keys from KMS. For more information on SSE-KMS, visit Protecting Data Using Server-Side Encryption with KMS keys Stored in AWS Key Management Service (SSE-KMS).
  4. Select Save changes.
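If you prefer to apply the default encryption setting from code, a minimal boto3 sketch follows; the bucket name is a hypothetical placeholder that follows the elasticbeanstalk-region-account-id pattern.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name following the elasticbeanstalk-region-account-id pattern
bucket = "elasticbeanstalk-eu-west-1-111122223333"

# SSE-S3 shown here; for SSE-KMS, use "aws:kms" and add a KMSMasterKeyID
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```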

Even though the bucket that was created by Elastic Beanstalk is non-public by default, we recommend enabling S3 Block Public Access at the account level, or at least at the bucket level, to prevent this setting from being tampered with or accidentally changed in the future.

  1. In your S3 console, click on Block Public Access settings for this account.
  2. If Block all public access is not yet enabled, click on Edit, check the box next to Block all public access and click Save.
  3. Apart from that, you can also block public access at the bucket level. For this, click on the respective bucket, open the Permissions section and edit Block public access (bucket settings) similarly to how you did in step 2.
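As a scripted alternative to the console steps above, the following boto3 sketch applies Block Public Access at both the account level (through the S3 Control API) and the bucket level; the account ID and bucket name are hypothetical.

```python
import boto3

block_all = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Account level, via the S3 Control API (account ID is hypothetical)
boto3.client("s3control").put_public_access_block(
    AccountId="111122223333",
    PublicAccessBlockConfiguration=block_all,
)

# Bucket level (bucket name is hypothetical)
boto3.client("s3").put_public_access_block(
    Bucket="elasticbeanstalk-eu-west-1-111122223333",
    PublicAccessBlockConfiguration=block_all,
)
```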

Launch an encrypted RDS instance

Elastic Beanstalk allows you to set up and run RDS instances in your Elastic Beanstalk environment. Until recently, the database was tied to the lifecycle of the Elastic Beanstalk environment, and we recommended limiting its use to development and testing environments only. For example, if you previously launched an RDS instance using Elastic Beanstalk, and the Elastic Beanstalk environment was terminated, the RDS instance would also be deleted. As of October 6, 2021, Elastic Beanstalk supports Database Decoupling, so that the database will persist when the environment is deleted.

However, Elastic Beanstalk currently does not allow you to set up encryption for your database. For this reason, this post shows you how to set up your Elastic Beanstalk application with a decoupled database, by creating the database directly in the RDS service, separate from your Elastic Beanstalk application. RDS allows you to encrypt your database.

Decoupling your database and setting it up directly through the RDS service in the AWS console will require additional steps for integration with your Elastic Beanstalk application, which this post will walk you through.

Note: If you are using the Elastic Beanstalk service to create your RDS instance, you can select one of three options:

  • The first option, enabled by selecting the Create snapshot retention option in the database settings in the Elastic Beanstalk console, makes sure that Elastic Beanstalk creates a snapshot of your database prior to termination. You can restore an existing snapshot of your database through the Elastic Beanstalk console. Bear in mind that there will be downtime of your database between snapshot creation and snapshot restore.
  • The second option, Retain, creates a decoupled database, which persists if the Elastic Beanstalk environment has been terminated.
  • The third option, Delete, removes the database upon termination.

In this step, you will create an encrypted RDS database, allow access to the database from your application’s instances only, and add the required environment variables to your application so you can use your database in the application.

  1. On the RDS service page in the console, select Create database.
  2. For the database creation method, select Standard create.
  3. For Engine options, choose MySQL and select the latest version.
  4. For Templates, choose either the Dev/Test or Production template, according to your use case.
  5. In the Settings section, provide a name to use as the database identifier and set a username and password.
  6. Select the appropriate DB instance class that meets your processing power and memory requirements.
  7. For Storage, select your storage type and allocate storage.
  8. If you need Multi-AZ deployment, in the Availability & durability section, choose Create a standby instance.
  9. In the Connectivity section, select the VPC that you created in the Create your VPC with public and private subnets section earlier in this blog post, and verify that Public access has been set to No. For VPC security group, choose Create new and provide a name to identify the group later on.
  10. In the Additional configuration section, under Encryption, choose Enable Encryption. You can choose the default AWS KMS key if you’re happy with AWS managing the keys, or provide a custom key if you want more control. Bear in mind that the encryption option cannot be changed after the database has been created.
  11. Leave the defaults for the remaining settings and select Create database.
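For reference, an equivalent scripted creation might look like the boto3 sketch below; the identifiers, instance class, and subnet IDs are hypothetical. Note that StorageEncrypted can only be set when the instance is created.

```python
import boto3

rds = boto3.client("rds")

# A DB subnet group tells RDS which (private) subnets it may place the
# instance in; the name and subnet IDs are hypothetical.
rds.create_db_subnet_group(
    DBSubnetGroupName="beanstalk-db-subnets",
    DBSubnetGroupDescription="Private subnets for the application database",
    SubnetIds=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
)

rds.create_db_instance(
    DBInstanceIdentifier="beanstalk-db",
    Engine="mysql",
    DBInstanceClass="db.t3.small",
    MasterUsername="admin",
    MasterUserPassword="replace-with-a-strong-secret",  # don't hardcode in practice
    AllocatedStorage=20,
    DBSubnetGroupName="beanstalk-db-subnets",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # the new security group
    PubliclyAccessible=False,  # keep the database private
    MultiAZ=True,              # create a standby instance in another AZ
    StorageEncrypted=True,     # optionally add KmsKeyId for a customer managed key
)
```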

After you set up the RDS database and your new Elastic Beanstalk application, you can add the database to your application.

  1. In the RDS console, go to your newly created RDS database and scroll down to Security group rules.
  2. Select the security group that has the CIDR/IP – Inbound type.
  3. Under Inbound rules, select the rule that is listed, and then select Edit inbound rules.
  4. Under the Source column, make sure Custom is selected, and in the search-box next to it, select the security group associated with your Elastic Beanstalk Auto Scaling group.

Important: As a security best practice, you should allow traffic to your RDS database from your instances only. Therefore, make sure the security group allows traffic only from the Auto Scaling group’s security group, and that it has no additional entries.
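A boto3 sketch of such a rule, with hypothetical security group IDs, might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL (port 3306) into the database security group only from the
# Auto Scaling group's security group; both IDs are hypothetical.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # database security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0fedcba9876543210"}  # Auto Scaling group's SG
            ],
        }
    ],
)
```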

  5. To add the RDS details to the Elastic Beanstalk environment properties, go to your application’s environment in the Elastic Beanstalk service and navigate to Configuration > Software > Edit > Environment Properties. Add RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME and RDS_PASSWORD as properties and provide the values that you used to set up the database.
  6. Restart the application by going back to your Elastic Beanstalk environment, and then under Environment actions, choose Restart app server(s).

After the server has restarted, you can access the RDS database in your web application by using the environment properties you set in the console, just as you would if you attached the database directly through the Elastic Beanstalk setup. For more information on using environment properties, visit Environment properties and other software settings.

The new database is now separate from your application and it is encrypted to provide data protection at rest.

Important: The environment properties, including the database username and password, are visible and stored in plain text in the Environment Properties in Elastic Beanstalk.

Depending on your security requirements, you can choose to use AWS Secrets Manager to protect your database credentials, which you can then fetch programmatically in your Elastic Beanstalk instance or through Elastic Beanstalk’s custom environment configuration files (.ebextensions). To learn more about using Secrets Manager to protect and rotate database credentials, see Rotate Amazon RDS database credentials automatically with AWS Secrets Manager. However, this will require additional configuration for Elastic Beanstalk and is beyond the scope of this post.
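As an illustration of that first approach, here is a minimal sketch of fetching credentials at runtime with boto3; the secret name and its JSON keys are assumptions, and the instance profile role would need secretsmanager:GetSecretValue permission on the secret.

```python
import json

import boto3

# Fetch database credentials at runtime instead of storing them as
# Elastic Beanstalk environment properties.
secrets = boto3.client("secretsmanager")
response = secrets.get_secret_value(SecretId="prod/beanstalk/mysql")
credentials = json.loads(response["SecretString"])

db_username = credentials["username"]
db_password = credentials["password"]
```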

A second possibility is to use IAM database authentication, which allows you to use your Elastic Beanstalk EC2 IAM role to connect to your database. This method uses short-lived authentication tokens rather than a static database password. To set this up, you need to enable IAM database authentication, create an IAM policy to allow IAM database access, and create a database account for IAM authentication using the AWSAuthenticationPlugin (for MySQL). Authentication tokens are valid for 15 minutes; if your web instances create a new database connection, or reconnect, after a token has expired, the token must be refreshed, or the connection will be rejected.

For an implementation guide, check out How do I allow users to authenticate to an Amazon RDS MySQL DB instance using their IAM credentials. For Ruby applications, you can get the authentication token in your application by leveraging the auth_token_generator method in the Ruby aws-sdk.
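For readers working in Python rather than Ruby, the equivalent boto3 call is generate_db_auth_token; the endpoint and database user name below are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Generates a signed token, valid for 15 minutes, to use in place of a
# password when connecting as an IAM-enabled database user.
token = rds.generate_db_auth_token(
    DBHostname="beanstalk-db.abcdefg12345.eu-west-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="iam_db_user",
)
```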

Set up encryption in transit using the HTTPS protocol

In the Elastic Beanstalk architecture, you can encrypt data in transit at three connection points: from your clients to the load balancer, from the load balancer to the EC2 instances, and from the EC2 instances to the RDS database.

Securing the connection from clients to the ALB

You can use a custom domain name to use HTTPS for your Elastic Beanstalk environment so that your clients can connect securely to your environment. If you don’t have a domain name, you can assign a self-signed server certificate to your ALB to use HTTPS for development and testing purposes.

To secure the connection to your ALB, add an HTTPS listener for the inbound traffic port (typically 443) and attach a TLS/SSL server certificate (X.509 certificate). To generate certificates, use AWS Certificate Manager or a third-party provider such as Let’s Encrypt. For a walkthrough on how to set up an HTTPS listener through the console or through .ebextensions configuration files, see Configuring your Elastic Beanstalk environment’s load balancer to terminate HTTPS.
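Beyond the console and .ebextensions flows covered in that walkthrough, the certificate request and listener creation can also be sketched with boto3; the domain name and the load balancer and target group ARNs are placeholders, and DNS validation must complete before the certificate can be attached.

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request a public certificate; DNS validation must complete (for example,
# via a CNAME record in Route 53) before the certificate is issued.
cert_arn = acm.request_certificate(
    DomainName="www.example.com",  # hypothetical domain
    ValidationMethod="DNS",
)["CertificateArn"]

# Attach an HTTPS listener to the ALB; both ARNs are hypothetical.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
                    "loadbalancer/app/my-alb/0123456789abcdef",
    Protocol="HTTPS",
    Port=443,
    SslPolicy="ELBSecurityPolicy-2016-08",
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
                          "targetgroup/my-targets/0123456789abcdef",
    }],
)
```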

Securing the connection from the ALB to the EC2 instances

While securing the connection between clients and the ALB is enough for most applications, in some cases complete end-to-end encryption may be required; for example, to comply with (external) regulations. To secure the connection from your ALB to your application running on an EC2 instance, you must use the .ebextensions configuration files to modify the software running on the instance. You then need to allow the HTTPS traffic to pass through from the ALB to your EC2 instance by allowing inbound traffic on port 443 on the instance’s security group. For a Ruby-specific example, see Terminating HTTPS on EC2 instances running Ruby.

For a complete end-to-end encryption walkthrough, see How can I configure HTTPS for my Elastic Beanstalk environment?

Securing the RDS connection

To securely connect from your application to your RDS database, you can use SSL or TLS to encrypt the connection. You will need to download an RDS root certificate and require your application to use this certificate when connecting to the RDS instance, to verify the RDS server certificate. For more information on how to download and use the root certificate to set up a secure RDS connection, see the Using SSL with a MySQL DB instance documentation page.
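As a brief illustration of such a connection, here is a sketch using the PyMySQL library (assuming a Python application rather than the Ruby one used in this post); the host, credentials, and certificate bundle path are assumptions.

```python
import pymysql

# See the RDS documentation for the current root CA bundle to download.
connection = pymysql.connect(
    host="beanstalk-db.abcdefg12345.eu-west-1.rds.amazonaws.com",
    user="admin",
    password="fetched-from-secrets-manager",
    database="myapp",
    ssl={"ca": "/opt/rds-ca-bundle.pem"},  # verify the RDS server certificate
)
```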

This post has shown you how to align your application with some of the security best practices of the Well-Architected Framework. After completing these steps, your architecture includes four key modifications to improve security:

  1. The EC2 and RDS instances are deployed in a private subnet.
  2. The logging and source code S3 bucket is encrypted.
  3. An encrypted RDS instance is attached.
  4. Data in transit is encrypted by using the HTTPS protocol.

Conclusion

In this post, we’ve covered the additional configuration you should be aware of to harden the security posture of your Elastic Beanstalk applications, aligning to the Security pillar of the Well-Architected Framework. The final setup you created uses a VPC and private subnets to allow internet access only to resources that require it, and provides encryption at rest and in transit using AWS Cloud Security services and secure protocols. The Well-Architected Framework describes additional concepts, design principles, and architectural best practices for designing and running workloads in the cloud. To learn more, see AWS Well-Architected.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Laurens Brinker

Laurens Brinker is an Associate Solutions Architect based in London who is part of the Security Community at AWS. Laurens joined AWS as part of the TechU Graduate program in 2020 and now helps customers running their workloads securely in the AWS Cloud. Outside of work, Laurens enjoys cycling, a casual game of chess, and building small web applications.

Katja Philipp

Katja Philipp is an Associate Solutions Architect based in Germany. With a background in M.Sc. Information Systems, she joined AWS in September 2020 with the TechU Graduate program. She enables her customers in the Power & Utilities vertical with best practices around their cloud journey. Katja is passionate about sustainability and how technology can be leveraged to solve current challenges for a better future.

Laura Verghote

Laura Verghote is an Associate Technical Trainer based in London, UK. With a background in electrical engineering, she joined AWS in September 2020 with the TechU Graduate program. She delivers a variety of technical trainings to AWS customers across EMEA.

Kimessha Paupamah

Kimessha Paupamah is a ProServe Consultant based in South Africa. With a background in Computer Science, she joined AWS in September 2020 with the TechU Graduate program. She accelerates customer business outcomes through guidance on how to architect, design, develop and implement the AWS platform. Kimessha is passionate about enabling customers to build innovative solutions in the cloud.

Benjamin Richer

Benjamin Richer is an Associate Solutions Architect based in Paris. With a background in Network & Telecom, he joined AWS in 2020 through the TechU Graduate Program. Currently working in the Digital Native Business segment, he helps growing startups optimize their workloads in the Cloud.

AWS attained MTCS Level 3 certification under the new SS584:2020 standard

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/aws-attained-mtcs-level-3-certification-under-the-new-ss5842020-standard/

We’re excited to announce the completion of the Multi-Tier Cloud Security (MTCS) Level 3 certification under the new SS584:2020 standard in November 2021 for three Amazon Web Services (AWS) Regions: Singapore, Korea, and the United States (excluding AWS GovCloud (US) Regions). The new standard, released in October 2020, includes more stringent controls for greater assurance compared to the prior version, SS584:2015, and a new CSP Self-Disclosure Form to provide to cloud service customers (CSCs) for transparency. With the MTCS Level 3 certification, customers can be assured that AWS security processes meet the stringent security controls set forth by the new MTCS SS584:2020 standard for hosting their sensitive workloads.

AWS was the first cloud service provider (CSP) to attain the MTCS Level 3 certification for Singapore, in 2014, and is now one of the first few CSPs certified under the new SS584:2020 Level 3 standard. The services in scope have increased from 130 to 145, about a 10% increase since the last audit (September 2020).

The following services are newly added as in scope:

  1. Amazon Augmented AI (Amazon A2I)
  2. Amazon CloudWatch SDK Metrics for Enterprise Support
  3. Amazon Detective
  4. Amazon FinSpace
  5. Amazon Kendra
  6. Amazon Keyspaces (for Apache Cassandra)
  7. Amazon Timestream
  8. AWS App Mesh
  9. AWS Audit Manager
  10. AWS Cloud Map
  11. AWS Device Farm
  12. AWS Glue DataBrew
  13. AWS Ground Station
  14. AWS Personal Health Dashboard

MTCS was the world’s first cloud security standard to specify a management system for cloud security that covers multiple tiers, and it can be applied by CSPs to meet differing cloud user needs for data sensitivity and business criticality. An intent of MTCS is for certified CSPs to be able to better specify the levels of security they can offer their users. AWS achieved this through third-party certification and fulfillment of the self-disclosure requirement for CSPs that covers service-oriented information normally captured in service level agreements. The MTCS framework establishes that the different levels of security help local businesses to pick the right CSP, and use of MTCS is mandated by the Singapore government as a requirement for public sector agencies and regulated organizations.

MTCS has three levels of security, Level 1 being the base and Level 3 the most stringent:

  • Level 1 was designed for non–business critical data and systems with basic security controls, to counter certain risks and threats targeting low-impact information systems (for example, a website that hosts public information).
  • Level 2 addresses the needs of organizations that run their business-critical data and systems in public or third-party cloud systems (for example, confidential business data and email).
  • Level 3 was designed for regulated organizations with specific and more stringent security requirements. Industry-specific regulations can be applied in addition to the baseline controls, to help supplement and address security risks and threats in high-impact information systems (for example, highly confidential business data, financial records, and medical records).

Benefits of MTCS Level 3 certification

AWS’s certification enables Singapore customers in regulated industries with the strictest security requirements to securely host applications and systems with highly sensitive information, ranging from confidential business data to financial and medical records, in a Level-3-compliant MTCS environment. With the scope extended beyond Singapore to AWS Regions in Korea and the United States, the certification gives Singapore government agencies an alternative for using AWS services that haven’t yet launched locally, and also supports resiliency and disaster recovery use cases.

Financial Services Industry (FSI) customers in Korea are able to accelerate cloud adoption with MTCS controls that cover relevant regulations (the Financial Security Institute’s Guideline on Use of Cloud Computing Services in the Financial Industry, and the Regulation on Supervision on Electronic Financial Transactions (RSEFT)).

With increasing cloud adoption across different industries, MTCS certification has the potential to provide assurance to customers globally. Please reach out to your AWS representative if you have any services or Regions you would like to see in scope for the next MTCS audit.

You can now download the latest MTCS certificates and the MTCS Self-Disclosure Form in AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the APJ-Lead Strategist supporting the compliance programs for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

AWS Security Profiles: Jenny Brinkley, Director, AWS Security

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profiles-jenny-brinkley-director-aws-security/

In the week leading up to AWS re:Invent 2021, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


How long have you been at AWS, and what do you do in your current role?

I’ve been at AWS for 5½ years. I get to focus on the future of security and compliance. It gives me a lot of space to experiment and try new things, which is how I like to operate.

How did you get started in AWS Security?

I joined AWS through a startup acquisition, and I actually didn’t think I was going to go with the acquisition. I thought AWS would be way too big and move way too slow. I love being in environments where I get to move fast and be entrepreneurial. I started on the product side. I was able to learn what it takes to build and ship products at the scale of AWS – which is on another level and mind-blowing.

Then, like others at AWS, I was able to reinvent myself, find different passions, and experiment with new things. One of those areas for me was compliance. I started to get perspective on how that space was being defined by regulatory activity for the cloud, and it started opening my mind in different ways.

I started thinking, how do you make compliance easier for customers? How do you work with regulated entities to understand how to audit, and to understand the function of how the cloud operates? From there, my career has been about changing how to think about product, about how to make security easier. Layering in this compliance aspect, too, means I get to play in all these different worlds, work with internal and external customers, and work to simplify security, while also understanding where and how compliance fits in, without slowing down innovation.

How do you explain your job to non-tech friends?

I explain my work as removing the fear around security. You go see images of people in hoodies, with darkened faces, and binary code running behind them, and my job is to break that perception and walk in the light – yes, that’s my nod to Olivia Pope in Scandal. I love the idea of that gladiator mentality. You’re going in and solving the big problems, but you’re also creating more visibility and transparency around how security operates. And you’re doing this without making anyone afraid that they’re being watched or monitored, and without holding back innovation. My job is to provide that transparency and clarity, and give people prescriptive guidance on how to operate securely on AWS.

What are you currently working on that you’re excited about?

So much! That’s what I really love about my job – I get to play in a lot of spaces, and the context switching is something that really fuels me. One of the top projects I’m working on is something we just released in response to an ask from the White House, which I feel really privileged to work on. We released a new Cybersecurity Awareness training which is now available to everyone in the world. You can access this training right now, and you can share it with your grandparents or implement it in your corporation or small business. We were able to take a training product we built for all Amazon employees–and then externalize it. The size and the scope is something I’m really excited about. Making security easier for everybody is a big mission for us.

Another big area is up-skilling. You hear a lot about security jobs being the future, so we’re building everything from apprenticeships to new learning paths for anyone interested in security. We’re thinking about how we can build quick learning modules for people to listen to on the go. That’s something I get really excited about in this job – creating opportunities for people to understand that security jobs and opportunities are vast. If you’re curious and want to learn new things, AWS is endless.

You’re presenting at re:Invent this year – can you give readers a sneak peek at what you’re covering?

I am partnering with Eric Brandwine, AWS VP/Distinguished Engineer, for a session called Introverts and extroverts collide: Build an inclusive workforce (SEC204). Eric and I are night and day in terms of how we work. In our talk, we’ll touch on some of the challenges we had when we first started working together, and how we found value in our different approaches.

We’ll be discussing how he solves problems with technology and how I solve problems regarding people, and thinking about how that empathetic layer resonates between the two perspectives. Not every problem needs technology, and not every problem needs a people-focused solution. But, humans are behind any of those aspects of impact.

We’ll give prescriptive guidance to customers on how they should think about their security culture as it relates to people and as it relates to technology. We’ll talk about how those two worlds can blend together in a way that empowers an entire organization to prioritize security, and that they shouldn’t be afraid of it. We want to help bridge the gaps between the technologists and the empathetic individuals who think about how the technology lands in use cases across a business.

From your perspective, what’s the most important thing leaders can do to create an inclusive work environment?

Listening. Sitting back, getting the feedback, being vulnerable, asking the questions. So much of what we need to do now is practice that listening skill, really understand the motivations of our teams, and then try to create these safe working environments where people feel comfortable sharing their perspectives. It’s not that you’re going to act on everything everyone’s talking about, but at least you get diverse perspectives and points of view to help create an inclusive work environment that makes everyone want to show up, support each other, and do the best work possible.

What’s your favorite Leadership Principle at Amazon and why?

I have two. One is Learn and Be Curious because that is how I like to operate. I think, “what if…” or “why can’t we…”. Then Think Big pairs with “why can’t we…” The culture within AWS really supports that. On a daily basis, we can flip the script on how we think about our jobs and how we position the business.

If you’re entrepreneurial and like to create, this place is like a magic playground. Some people look at my job and they’re so confused with all the different things I get to do – but it goes back to that context switching. I believe that Learn and Be Curious and Think Big fit in that realm for me–I feel like I can be anything, I can do anything. I also had parents who told me as a kid that I could do anything and be anything, so I think that’s just who I am. Those two leadership principles help me to produce and do my best work.

What’s the thing you’re most proud of in your career?

That’s hard. It’s a couple of things. I’ve had a lot of incredible opportunities. One of which was being involved in a startup. We raised the money quickly, we worked with incredible customers, we solved really challenging business issues. The fact that I was able to bring that here to AWS, in a way that now hundreds of thousands of people get to see the kind of work we’re able to produce, is pretty cool.

But honestly, working with some of our new hires who are just getting into the workforce–especially with our diverse candidates–I’m at a place in my career where I want to create opportunities for others. I’m working to create safe spaces for people to operate and do their best work and really break down barriers for people who might not otherwise get those opportunities. That’s what I’m most excited about for the future, and also the most proud about–giving people opportunities to work in careers they never thought were available to them. I love that, and I get to do it daily.

If you had to pick any other job, what would you want to do?

Sports agent. I think I’d be so good at it. I would love to go work with young athletes, especially with the new NCAA ruling that college athletes can get paid for the use of their likeness. I would love to help them develop really interesting business plans.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Jenny Brinkley

Jenny leads efforts for AWS Security to understand where compliance and security are headed. In her role as a Director, she helps teams understand how to consider security when building their services and deliverables. Prior to joining AWS, Jenny co-founded a security start-up, harvest.ai, that was acquired by AWS in April 2016.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

2021 PCI 3DS report now available

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/2021-pci-3ds-report-now-available/

We are excited to announce that Amazon Web Services (AWS) has released the latest 2021 PCI 3-D Secure (3DS) attestation to support our customers implementing EMV® 3-D Secure services on AWS. Although AWS doesn’t directly perform the functions of 3DS Server (3DSS), 3DS Directory Server (DS), or 3DS Access Control Server (ACS), AWS customers can host their 3DS environments on AWS, using services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS) and Amazon Virtual Private Cloud (Amazon VPC).

The new AWS PCI 3DS attestation of compliance means customers can now attain their own PCI 3DS compliance for services running on AWS. All AWS Regions in scope for PCI DSS are included in the 3DS attestation. AWS was assessed by Coalfire, an independent Qualified Security Assessor (QSA). AWS compliance reports, including this latest PCI 3DS attestation, are available on demand through AWS Artifact. The 3DS package available in AWS Artifact includes the 3DS Attestation of Compliance (AOC) and a Shared Responsibility Guide.

To learn more about our PCI program and other compliance and security programs, please visit AWS Compliance Programs.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

AWS Security Profiles: Merritt Baer, Principal in OCISO

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profiles-merritt-baer-principal-in-ociso/

In the week leading up to AWS re:Invent 2021, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


How long have you been at Amazon Web Services (AWS), and what do you do in your current role?

I’m a Principal in the Office of the Chief Information Security Officer (OCISO), and I’ve been at AWS about four years. In the past, I’ve worked in all three branches of the U.S. Government, doing security on behalf of the American people.

My current role involves both internal- and external-facing security.

I love having C-level conversations around hard but simple questions about how to prioritize the team’s resources and attention. A lot of my conversations revolve around organizational change, and how to motivate the move to the cloud from a security perspective. Within that, there’s a technical “how”—we might talk about the move to an intelligent multi-account governance structure using AWS Organizations, or the use of appropriate security controls, including remediations like AWS Config Rules and Amazon EventBridge. We might also talk about the ability to do forensics, which in the cloud looks like logging and monitoring with AWS CloudTrail, Amazon CloudWatch, Amazon GuardDuty, and others aggregated in AWS Security Hub.

I also handle strategic initiatives for our security shop, from operational considerations like how we share threat intelligence internally, to the ways we can better streamline our policy and contract vehicles, to the ways that we can incorporate customer feedback into our products and services. The work I do for AWS’ security gives me the empathy and credibility to talk with our customers—after all, we’re a security organization, running on AWS.

What drew you to security?

(Sidebar: it’s a little bit of who I am— I mean, doesn’t everyone rely on polaroid photos? just kidding— kind of :))
 
Merritt Baer polaroid photo

I always wanted to matter.

I was in school post-9/11, and security was an imperative. Meanwhile, I was in Mark Zuckerberg’s undergrad class at Harvard. A lot of the technologies that feel so intimate and foundational—cloud, AI/ML, IoT, and the use of mobile apps, for example—were just gaining traction back then. I loved both emerging tech and security, and I was convinced that they needed to speak to and with one another. I wanted our approach to include considerations around how our systems impact vulnerable people and communities. I became an expert in child pornography law, which continues to be an important area of security definition.

I am someone who wonders what we’re all doing here, and I got into security because I wanted to help change the world. In the words of Poet Laureate Joy Harjo, “There is no world like the one surfacing.”

How do you explain your job to non-tech friends?

I often frame my work relative to what they do, or where we are when we’re chatting. Today, nearly everyone interacts with cloud infrastructure in our everyday lives. If I’m talking to a person who works in finance, I might point to AWS’ role providing IT infrastructure to the global financial system; if we’re walking through a pharmacy I might describe how research and development cycles have accelerated because of high-performance computing (HPC) on AWS.

What are you currently working on that you’re excited about?

Right now, I’m helping customer executives who’ve had a tumultuous (different, not necessarily all bad) couple of years. I help them adjust to a new reality in their employee behavior and access needs, like the move to fully remote work. I listen to their challenges in the ability to democratize security knowledge through their organizations, including embedding security in dev teams. And I help them restructure their consumption of AWS, which has been changing in light of the events of the last two years.

On a strategic level, I have a lot going on … here’s a good sampling: I’ve been championing new work based on customers asking our experts to be more proactive by “snapshotting” metadata about their resources and evaluating that metadata against our well-architected security framework. I work closely with our Trust and Safety team on new projects that both increase automation for high volume issues but also provide more “high touch” and prioritized responses to trusted reporters. I’m also building the business case for security service teams to make their capabilities even more broadly available by extended free tiers and timelines. I’m providing expertise to our private equity folks on a framework for evaluating the maturity of security capabilities of target acquisitions. Finally, I’ve helped lead our efforts to add tighter security controls when AWS teams provide prototyping and co-development work. I live in Miami, Florida, USA, and I also work on building out the local tech ecosystem here!

I’m also working on some of the ways we can address ransomware. During our interview process, Amazon requests that folks do an hour-long presentation on a topic of your choice. I did mine on ransomware in the cloud, and when I came on board I pointed to that area of need for security solutions. Now we have a ransomware working group I help lead, with efforts underway to help out customers doing both education and architectural guidance, as well as curated solutions with industries and partners, including healthcare.

You’re presenting at AWS re:Invent this year—can you give readers a sneak peek at what you’re covering?

One talk is on cloud-native approaches to ransomware defense, encouraging folks to think innovatively as they mature their IT infrastructure. And a second talk highlights partner solutions that can help meet customers where they are, and improve their anti-ransomware posture using vendors—from MSSPs and systems integrators, to endpoint security, DNS filtering, and custom backup solutions.

What are you hoping the audience will take away from the sessions?

These days, security doesn’t just take the form of security services (like GuardDuty and AWS WAF), but will also manifest in the ways you design a cloud-aware architecture. For example, our managed database service Aurora can be cloned; that clone might act as a canary when you see data drift (a canary is a security concept for testing your expectations). You can use this to get back to a known good state.

Security is a bottom line proposition. What I mean by that is:

  1. It’s a business criticality to avoid a bad day
  2. Embracing mature security will enable your entity’s development innovation
  3. The security of your products is a meaningful part of what you deliver on to your customers.

From your perspective, what’s the most important thing to know about ransomware?

Ransomware is a big headline-maker right now, but it’s not new. Most ransomware attacks are not based on zero days; they’re knowable but opportunistic. So, without victim-blaming, I mean to equip us with the confidence to confront the security issue. There’s no need to be ransomed.

I try not to get wrapped around particular issues, and instead emphasize building the foundation right. So sure, we can call it ransomware defense, but we can also point to these security maturity measures as best practices in general.

I think it’s fair to say that you’re passionate about women in tech and in security specifically. You recently presented at the Day of Shecurity conference and the Women in Business Summit, and did an Instagram takeover for Women in CyberSecurity (WiCyS). Why do you feel passionately about this?

I see security as an inherently creative field. As security professionals, we’re capable of freeing the business to get stuff done, and to get it done securely. That sounds simple, and it’s hard!

Any time you’re working in a creative field, you rely on human ingenuity and pragmatism to ensure you’re doing it imaginatively instead of simply accepting old realities. When we want to be creative, we need more of the stuff life is made of: human experience. We know that people who move through the world with different identities and experiences think differently. They approach problems differently. They code differently.

So, I think having women in security is important, both for the women who choose to work in security, and for the security field as a whole.

What advice would you give a woman just starting out in the security industry?

No one is born with a brain full of security knowledge. Technology is human-made and imperfect, and we all had to learn it at some point. Start somewhere. No one is going to tap you on the shoulder and invite you to your life 🙂

Operationally, I recommend:

  • Curate your “elevator pitch” about who you are and what you’re looking for, and be explicit when asking folks for a career conversation or a referral (you can find me on Twitter @MerrittBaer, feel free to send a note).
  • Don’t accept a first job offer—ask for more.
  • Beware of false choices. For example, sometimes there’s a job that’s not in the description—consider writing your own value proposition and pitching it to the organization. This is a field that’s developing all the time, and you may be seeing a need they hadn’t yet solidified.

What’s your favorite Leadership Principle at Amazon and why?

I think Bias for Action takes precedence for me— there’s a business decision here to move fast. We know that comes with some costs and risks, but we’ve made that calculated decision to pursue high velocity.

I have a law degree, and I see the Leadership Principles sort of like the Bill of Rights: they are frequently in tension and sometimes even at odds with one another (for example, Bias for Action and Are Right, A Lot might demand different modes). That is what makes them timeless—yet even more contingent on our interpretation—as we derive value from them. As a security person, I want us to pursue the good, and also to transcend the particular fears of the day.

If you had to pick any other industry, what would you want to do?

Probably public health. I think if I wasn’t doing security, I would want to do something else landscape-level.

Even before I had a daughter, but certainly now that I have a one-year-old, I would calculate the ROI of my life’s existence and my investment in my working life.

That being said, there are days I just need to come home to some unconditional love from my rescue pug, Peanut Butter.
 
Peanut Butter the dog

 

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Merritt Baer

Merritt is a Principal in the Office of the CISO. She can be found on Twitter at @merrittbaer and looks forward to meeting you at re:Invent, or in your next executive conversation.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Fall 2021 SOC reports now available with 141 services in scope

Post Syndicated from Ninad Naik original https://aws.amazon.com/blogs/security/fall-2021-soc-reports-now-available-with-141-services-in-scope/

At Amazon Web Services (AWS), we're committed to providing our customers with continued assurance over the security, availability, and confidentiality of the AWS control environment. We're proud to deliver the System and Organization Controls (SOC) 1, 2, and 3 reports to enable our AWS customers to maintain confidence in AWS services.

For the Fall 2021 SOC reports, covering April 1, 2021, to September 30, 2021, we are excited to announce eight new services in scope, bringing the total to 141 services in scope. You can see the full list on Services in Scope by Compliance Program. The associated infrastructure supporting our in-scope products and services has been updated to reflect new Regions, edge locations, Wavelength Zones, and Local Zones.

Here are the eight new services in scope for Fall 2021 SOC reports:

The Fall 2021 SOC reports are now available through AWS Artifact in the AWS Management Console. The SOC 3 report can also be downloaded here as a PDF.

AWS strives to bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. If there are additional AWS services you would like to see added to the scope of our SOC reports (or other compliance programs), reach out to your AWS representatives.

As always, we value your feedback and questions. Feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to-content, news, and feature announcements? Follow us on Twitter.

 

Author

Ninad Naik

Ninad is a Security Assurance Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Ninad holds a Master's degree in Information Systems from Syracuse University, NY, and a Bachelor's of Engineering degree in Information Technology from Mumbai University, India. Ninad has 11 years of experience in security assurance and holds ITIL, CISA, CGEIT, and CISM certifications.

Author

Lu Yu

Lu is a Compliance Program Manager at Amazon Web Services. She leads multiple security and privacy initiatives within AWS. Lu holds a Master's degree in Accounting and dual Bachelor's degrees in Accounting and Management Information Systems from the University of Minnesota, Twin Cities. Lu holds AWS Cloud Practitioner and CPA certifications and has 8 years of experience in security assurance.

Author

Nimesh Ravasa

Nimesh is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Nimesh has 14 years of experience in information security and holds CISSP, CISA, PMP, CSX, AWS Solutions Architect – Associate, and AWS Security Specialty certifications.

Fall 2021 SOC 2 Type I Privacy report now available

Post Syndicated from Ninad Naik original https://aws.amazon.com/blogs/security/fall-2021-soc-2-type-i-privacy-report-now-available/

 Your privacy considerations are at the core of our compliance work, and at Amazon Web Services (AWS), we are focused on the protection of your content while using AWS services. Our Fall 2021 SOC 2 Type I Privacy report is now available, demonstrating the privacy compliance commitments we made to you.

The Fall 2021 SOC 2 Type I Privacy report provides you with a third-party attestation of our system and the suitability of the design of our privacy controls. The SOC 2 Privacy Trust Service Criteria (TSC), developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls relating to how personal information is collected, used, retained, disclosed, and disposed of to meet AWS's objectives. You can find additional information related to privacy commitments supporting our SOC 2 Type I report in the AWS Customer Agreement documentation.

The scope of the privacy report includes information about how we handle the content that you upload to AWS and how it is protected in all of the services and locations that are in scope for the latest AWS SOC reports. You can find our SOC 2 Type I Privacy report through Artifact in the AWS Management Console.

As always, we value your feedback and questions. Feel free to reach out to the compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to-content, news, and feature announcements? Follow us on Twitter.

Author

Ninad Naik

Ninad is a Security Assurance Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Ninad holds a Master's degree in Information Systems from Syracuse University, NY, and a Bachelor's of Engineering degree in Information Technology from Mumbai University, India. Ninad has 11 years of experience in security assurance and holds ITIL, CISA, CGEIT, and CISM certifications.

Author

Lu Yu

Lu is a Compliance Program Manager at Amazon Web Services. She leads multiple security and privacy initiatives within AWS. Lu holds a Master's degree in Accounting and dual Bachelor's degrees in Accounting and Management Information Systems from the University of Minnesota, Twin Cities. Lu holds AWS Cloud Practitioner and CPA certifications and has 8 years of experience in security assurance.

Author

Nimesh Ravasa

Nimesh is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Nimesh has 14 years of experience in information security and holds CISSP, CISA, PMP, CSX, AWS Solutions Architect – Associate, and AWS Security Specialty certifications.

AWS achieves GSMA Security Certification for Europe (Paris) Region

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-gsma-security-certification-for-europe-paris-region/

We continue to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that our Europe (Paris) Region is now certified by the GSM Association (GSMA) under its Security Accreditation Scheme Subscription Management (SAS-SM) with scope Data Center Operations and Management (DCOM). This is an addition to our US East (Ohio) Region, which received certification in September 2021. This alignment with GSMA requirements demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers. AWS customers who provide embedded Universal Integrated Circuit Card (eUICC) for mobile devices can run their remote provisioning applications with confidence in the AWS Cloud in the GSMA-certified Regions.

As of this writing, 72 services offered in the Europe (Paris) Region and 128 services offered in the US East (Ohio) Region are in scope of this certification. For up-to-date information, including when additional services are added, see the AWS Services in Scope by Compliance Program and choose GSMA.

AWS was evaluated by independent third-party auditors chosen by GSMA. The Certificate of Compliance that shows that AWS achieved GSMA compliance status is available on the GSMA Website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page. Or if you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

Author

Karthik Amrutesh

Karthik is a senior manager, security assurance at AWS based in New York, U.S. His team is responsible for audits, attestations, certifications, and assessments across the European Union. Karthik has previously worked in risk management, security assurance, and technology audits for the past 18 years.

The Five Ws episode 2: Data Classification whitepaper

Post Syndicated from Jana Kay original https://aws.amazon.com/blogs/security/the-five-ws-episode-2-data-classification-whitepaper/

AWS whitepapers are a great way to expand your knowledge of the cloud. Authored by Amazon Web Services (AWS) and the AWS community, they provide in-depth content that often addresses specific customer situations.

We’re featuring some of our whitepapers in a new video series, The Five Ws. These short videos outline the who, what, when, where, and why of each whitepaper so you can decide whether to dig into it further.

The second whitepaper we're featuring is Data Classification: Secure Cloud Adoption. This paper provides insight into data classification categories for organizations to consider when moving data to the cloud—and how implementing a data classification program can simplify cloud adoption and management. It outlines a process to build a data classification program, shares examples of data and the corresponding category the data may fall into, and outlines practices and models currently implemented by global first movers and early adopters. The paper also includes data classification and privacy considerations. Note: It's important to use internationally recognized standards and frameworks when developing your own data classification rules. For more details on the Five Ws of Data Classification: Secure Cloud Adoption, check out the video.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jana Kay

Since 2018, Jana Kay has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security table top exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

Three ways to improve your cybersecurity awareness program

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/three-ways-to-improve-your-cybersecurity-awareness-program/

Raising the bar on cybersecurity starts with education. That's why we announced in August that Amazon is making its internal Cybersecurity Awareness Training Program available to businesses and individuals for free starting this month. This is the same annual training we provide our employees to help them better understand and anticipate potential cybersecurity risks. The training program will include a getting started guide to help you implement a cybersecurity awareness training program at your organization. It's aligned with NIST SP 800-53 Rev. 4, ISO 27001, K-ISMS, RSEFT, IRAP, OSPAR, and MCTS.

I also want to share a few key learnings for how to implement effective cybersecurity training programs that might be helpful as you develop your own training program:

  1. Be sure to articulate personal value. As humans, we have an evolved sense of physical risk that has developed over thousands of years. Our bodies respond when we sense danger, heightening our senses and getting us ready to run or fight. We have a far less developed sense of cybersecurity risk. Your vision doesn’t sharpen when you assign the wrong permissions to a resource, for example. It can be hard to describe the impact of cybersecurity, but if you keep the message personal, it engages parts of the brain that are tied to deep emotional triggers in memory. When we describe how learning a behavior—like discerning when an email might be phishing—can protect your family, your child’s college fund, or your retirement fund, it becomes more apparent why cybersecurity matters.
  2. Be inclusive. Humans are best at learning when they share a lived experience with their educators so they can make authentic connections to their daily lives. That's why inclusion in cybersecurity training is a must. But that only happens by investing in a cybersecurity awareness team that includes people with different backgrounds, so they can provide insight into different approaches that will resonate with diverse populations. People from different cultures, backgrounds, and age cohorts can provide insight into culturally specific attack patterns as well as how to train for them. For example, for social engineering in hierarchical cultures, bad actors often spoof authority figures, and for individualistic cultures, they play to the target's knowledge and importance, and give compliments. And don't forget to make everything you do accessible for people with varying disability experiences, because everyone deserves the same high-quality training experience. The more you connect with people, the more they internalize your message and provide valuable feedback. Diversity and inclusion breed better cybersecurity.
  3. Weave it into workflows. Training takes investment. You have to make time for it in your day. We all understand that as part of a workforce we have to do it, but in addition to compliance training, you should be providing just-in-time reminders and challenges to complete. Try working with tooling teams to display messaging when critical tasks are being completed. Make training short and concise—3 minutes at most—so that people can make time for it in their day.

Cybersecurity training isn’t just a once-per-year exercise. Find ways to weave it into the daily lives of your workforce, and you’ll be helping them protect not only your company, but themselves and their loved ones as well.

Get started by going to learnsecurity.amazon.com and take the Cybersecurity Awareness training.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

New AWS workbook for New Zealand financial services customers

Post Syndicated from Julian Busic original https://aws.amazon.com/blogs/security/new-aws-workbook-for-new-zealand-financial-services-customers/

We are pleased to announce a new AWS workbook designed to help New Zealand financial services customers align with the Reserve Bank of New Zealand (RBNZ) Guidance on Cyber Resilience.

The RBNZ Guidance on Cyber Resilience sets out the RBNZ expectations for its regulated entities regarding cyber resilience, and aims to raise awareness and promote the cyber resilience of the financial sector, especially at board and senior management level. The guidance applies to all entities regulated by the RBNZ, including registered banks, licensed non-bank deposit takers, licensed insurers, and designated financial market infrastructures.

While the RBNZ describes its guidance as “a set of recommendations rather than requirements” which are not legally enforceable, it also states that it expects regulated entities to “proactively consider how their current approach to cyber risk management lines up with the recommendations in [the] guidance and look for [opportunities] for improvement as early as possible.”

Security and compliance is a shared responsibility between AWS and the customer. This differentiation of responsibility is commonly referred to as the AWS Shared Responsibility Model, in which AWS is responsible for security of the cloud, and the customer is responsible for their security in the cloud. The new AWS Reserve Bank of New Zealand Guidance on Cyber Resilience (RBNZ-GCR) Workbook helps customers align with the RBNZ Guidance on Cyber Resilience by providing control mappings for the following:

  • Security in the cloud by mapping RBNZ Guidance on Cyber Resilience practices to the five pillars of the AWS Well-Architected Framework.
  • Security of the cloud by mapping RBNZ Guidance on Cyber Resilience practices to control statements from the AWS Compliance Program.

The downloadable AWS RBNZ-GCR Workbook contains two embedded formats:

  • Microsoft Excel – Coverage includes AWS responsibility control statements and Well-Architected Framework best practices.
  • Dynamic HTML – Coverage is the same as in the Microsoft Excel format, with the added feature that the Well-Architected Framework best practices are mapped to AWS Config managed rules and Amazon GuardDuty findings, where available or applicable.

The AWS RBNZ-GCR Workbook is available for download in AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Julian Busic

Julian is a Security Solutions Architect with a focus on regulatory engagement. He works with our customers, their regulators, and AWS teams to help customers raise the bar on secure cloud adoption and usage. Julian has over 15 years of experience working in risk and technology across the financial services industry in Australia and New Zealand.

Introducing the Security at the Edge: Core Principles whitepaper

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/introducing-the-security-at-the-edge-core-principles-whitepaper/

Amazon Web Services (AWS) recently released the Security at the Edge: Core Principles whitepaper. Today’s business leaders know that it’s critical to ensure that both the security of their environments and the security present in traditional cloud networks are extended to workloads at the edge. The whitepaper provides security executives the foundations for implementing a defense in depth strategy for security at the edge by addressing three areas of edge security:

  • AWS services at AWS edge locations
  • How those services and others can be used to implement the best practices outlined in the design principles of the AWS Well-Architected Framework Security Pillar
  • Additional AWS edge services, which customers can use to help secure their edge environments or expand operations into new, previously unsupported environments

Together, these elements offer core principles for designing a security strategy at the edge, and demonstrate how AWS services can provide a secure environment extending from the core cloud to the edge of the AWS network and out to customer edge devices and endpoints. You can find more information in the Security at the Edge: Core Principles whitepaper.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Author

Jana Kay

Since 2018, Jana has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security table top exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

Introducing the Ransomware Risk Management on AWS Whitepaper

Post Syndicated from Temi Adebambo original https://aws.amazon.com/blogs/security/introducing-the-ransomware-risk-management-on-aws-whitepaper/

AWS recently released the Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper. This whitepaper aligns the National Institute of Standards and Technology (NIST) recommendations for security controls related to ransomware risk management with workloads built on AWS, and maps the technical capabilities to AWS services and implementation guidance. While this whitepaper is primarily focused on managing the risks associated with ransomware, the security controls and AWS services outlined are consistent with general security best practices.

The National Cybersecurity Center of Excellence (NCCoE) at NIST has published Practice Guides (NIST 1800-11, 1800-25, and 1800-26) to demonstrate how organizations can develop and implement security controls to combat the data integrity challenges posed by ransomware and other destructive events. Each of the Practice Guides includes a detailed set of goals that are designed to help organizations establish the ability to identify, protect, detect, respond, and recover from ransomware events.

The Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper helps AWS customers confidently meet the goals of the Practice Guides in the following categories:

Identify and protect

  • Identify systems, users, data, applications, and entities on the network.
  • Identify vulnerabilities in enterprise components and clients.
  • Create a baseline for the integrity and activity of enterprise systems in preparation for an unexpected event.
  • Create backups of enterprise data in advance of an unexpected event.
  • Protect these backups and other potentially important data against alteration.
  • Manage enterprise health by assessing machine posture.

Detect and respond

  • Detect malicious and suspicious activity generated on the network by users, or from applications that could indicate a data integrity event.
  • Mitigate and contain the effects of events that can cause a loss of data integrity.
  • Monitor the integrity of the enterprise for detection of events and after-the-fact analysis.
  • Use logging and reporting features to speed response time for data integrity events.
  • Analyze data integrity events for the scope of their impact on the network, enterprise devices, and enterprise data.
  • Analyze data integrity events to inform and improve the enterprise’s defenses against future attacks.

Recover

  • Restore data to its last known good configuration.
  • Identify the correct backup version (one that is free of malicious code and data) for data restoration.
  • Identify altered data, as well as the date and time of alteration.
  • Determine the identity/identities of those who altered data.

To achieve the above goals, the Practice Guides outline a set of technical capabilities that should be established, and provide a mapping between the generic application term and the security controls that the capability provides.

AWS services can be mapped to these technical capabilities as outlined in the Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper. AWS offers a comprehensive set of services that customers can implement to establish the necessary technical capabilities to manage the risks associated with ransomware. By following the mapping in the whitepaper, AWS customers can identify which services, features, and functionality can help their organization identify, protect against, detect, respond to, and recover from ransomware events. If you'd like additional information about cloud security at AWS, please contact us.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Temi Adebambo

Temi is the Senior Manager for the America’s Security and Network Solutions Architect team. His team is focused on working with customers on cloud migration and modernization, cybersecurity strategy, architecture best practices, and innovation in the cloud. Before AWS, he spent over 14 years as a consultant, advising CISOs and security leaders.

AWS achieves FedRAMP P-ATO for 18 additional services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Alexis Robinson original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-18-additional-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

We’re pleased to announce that 18 additional AWS services have achieved Provisional Authority to Operate (P-ATO) by the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB). The following are the 18 additional services with FedRAMP authorization for the US federal government, and organizations with regulated workloads:

  • Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily.
  • Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning to extract health data from medical text; no machine learning experience is required.
  • Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service that gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises.
  • Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service.
  • Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud that lets you easily create and publish interactive BI dashboards that include Machine Learning-powered insights.
  • Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application.
  • Amazon Textract is a machine learning service that automatically extracts text, handwriting, and other data from scanned documents that goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
  • AWS Backup enables you to centralize and automate data protection across AWS services.
  • AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.
  • AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
  • AWS Ground Station is a fully managed service that lets you control satellite communications, process data, and scale your operations without having to worry about building or managing your own ground station infrastructure.
  • AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise. AWS OpsWorks for Chef Automate provides a fully managed Chef Automate server and suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and node statuses. AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.
  • AWS Personal Health Dashboard provides alerts and guidance for AWS events that might affect your environment, and provides proactive and transparent notifications about your specific AWS environment.
  • AWS Resource Groups grants you the ability to organize your AWS resources, and manage and automate tasks on large numbers of resources at one time.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • AWS Storage Gateway is a set of hybrid cloud storage services that gives you on-premises access to virtually unlimited cloud storage.
  • AWS Systems Manager provides a unified user interface so you can track and resolve operational issues across your AWS applications and resources from a central place.
  • AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture.

These services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by Region

Each of the 18 services listed above is authorized at FedRAMP Moderate in the AWS US East/West Regions, at FedRAMP High in the AWS GovCloud (US) Regions, or at both levels; see the FedRAMP Marketplace for each service's authorization level by Region.
AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. Today, AWS offers 100 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 90 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

To learn what other public sector customers are doing on AWS, see our Customer Success Stories page. For up-to-date information when new services are added, see our AWS Services in Scope by Compliance Program page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alexis Robinson

Alexis is the Head of the U.S. Government Security & Compliance Program for AWS. For over 10 years, she has served federal government clients, advising on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment, including cloud services applicable to the AWS East/West and AWS GovCloud (US) Regions.

137 AWS services achieve HITRUST certification

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/137-aws-services-achieve-hitrust-certification/

We’re excited to announce that 137 Amazon Web Services (AWS) services are certified for the Health Information Trust Alliance (HITRUST) Common Security Framework (CSF) for the 2021 cycle.

The full list of AWS services that were audited by a third-party auditor and certified under HITRUST CSF is available on our Services in Scope by Compliance Program page. You can view and download our HITRUST CSF certification on demand through AWS Artifact.

AWS HITRUST CSF certification is available for customer inheritance

You don’t have to assess inherited controls for your HITRUST validated assessment, because AWS already has! You can deploy business solutions into AWS and inherit our HITRUST CSF certification, provided that you use only in-scope services and apply the controls detailed on the HITRUST website that you are responsible for implementing.

With the HITRUST certification, you, as an AWS customer, can tailor your security control baselines to a variety of factors—including, but not limited to, regulatory requirements and organization type. The HITRUST CSF is widely adopted by leading organizations in a variety of industries as part of their approach to security and privacy. Visit the HITRUST website for more information.

As always, we value your feedback and questions and are committed to helping you achieve and maintain the highest standard of security and compliance. Feel free to contact the team through AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya

Sonali is a Security Assurance Manager at AWS. She leads the global HITRUST assurance program within AWS. Sonali considers herself a perpetual student of information security and holds multiple certifications, such as CISSP, PCIP, CCSK, CEH, CISA, ISO 27001 Lead Auditor, ISO 22301 Lead Auditor, C-GDPR Practitioner, and ITIL.

AWS achieves GSMA security certification for US East (Ohio) Region

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-gsma-security-certification-for-us-east-ohio-region/

We continue to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that our US East (Ohio) Region (us-east-2) is now certified by the GSM Association (GSMA) under its Security Accreditation Scheme Subscription Management (SAS-SM) with scope Data Center Operations and Management (DCOM). This alignment with GSMA requirements demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers. AWS customers who provide embedded Universal Integrated Circuit Card (eUICC) for mobile devices can run their remote provisioning applications with confidence in the AWS Cloud in the GSMA-certified US East (Ohio) Region.

As of this writing, 128 services offered in the US East (Ohio) Region are in scope of this certification. For up-to-date information, including when additional services are added, see the AWS Services in Scope by Compliance Program and choose GSMA.

AWS was evaluated by independent third-party auditors chosen by GSMA. The Certificate of Compliance illustrating the AWS GSMA compliance status is available on the GSMA website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a Security Audit Program Manager at AWS, based in New York. She leads various security audit programs across Europe. She previously worked in security assurance and technology risk management in the financial industry for 10 years.

Author

Karthik Amrutesh

Karthik is a Senior Manager, Security Assurance at AWS, based in New York. He leads a team responsible for audits, attestations, and certifications across the European Union. Karthik has previously worked in risk management, security assurance, and technology audits for over 18 years.

New Standard Contractual Clauses now part of the AWS GDPR Data Processing Addendum for customers

Post Syndicated from Stéphane Ducable original https://aws.amazon.com/blogs/security/new-standard-contractual-clauses-now-part-of-the-aws-gdpr-data-processing-addendum-for-customers/

Today, we’re happy to announce an update to our online AWS GDPR Data Processing Addendum (AWS GDPR DPA) and our online Service Terms to include the new Standard Contractual Clauses (SCCs) that the European Commission (EC) adopted in June 2021. The EC-approved SCCs give our customers the ability to comply with the General Data Protection Regulation (GDPR) when they transfer personal data subject to GDPR to countries outside the European Economic Area (EEA) that haven’t received an EC adequacy decision (third countries). The new SCCs are now better adapted to how our customers operate their applications or run their workloads in the cloud, because they cover different transfer scenarios, and also provide enhanced safeguards for data transfers.

Achieving compliance with GDPR is critical for hundreds of thousands of AWS customers and AWS. The new SCCs allow all AWS customers that are controllers or processors under GDPR to continue to transfer personal data from their AWS accounts in compliance with GDPR. As part of the online Service Terms, the new SCCs will apply automatically whenever an AWS customer uses AWS services to transfer customer data to third countries.

The updated AWS GDPR DPA incorporating the new SCCs supplements our announcement in February 2021 of strengthened commitments to protect customer data, such as challenging law enforcement requests that conflict with EU law. We have also published the blog post How AWS is helping EU customers navigate the new normal for data protection, and the whitepaper Navigating Compliance with EU Data Transfer Requirements to help AWS customers conduct their data transfer assessments and comply with GDPR, the Schrems II ruling, and the recommendations issued by the European Data Protection Board. AWS is constantly working to ensure that our customers can enjoy the benefits of AWS everywhere they operate, and we welcome the new SCCs because they enable our customers to continue using AWS services in compliance with GDPR. If you have questions or need more information, visit our EU Data Protection page and our GDPR Center.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Stéphane Ducable

Stéphane is Vice President of Public Policy – EMEA at AWS. He is focused on increasing awareness of the benefits of adopting cloud computing technology across the EMEA region.

Protect your remote workforce by using a managed DNS firewall and network firewall

Post Syndicated from Patrick Duffy original https://aws.amazon.com/blogs/security/protect-your-remote-workforce-by-using-a-managed-dns-firewall-and-network-firewall/

More of our customers are adopting flexible work-from-home and remote work strategies that use virtual desktop solutions, such as Amazon WorkSpaces and Amazon AppStream 2.0, to deliver their user applications. Securing these workloads benefits from a layered approach, and this post focuses on protecting your users at the network level. Customers can now apply these security measures by using Route 53 Resolver DNS Firewall and AWS Network Firewall, two managed services that provide layered protection for the customer’s virtual private cloud (VPC). This blog post provides recommendations for how you can build network protection for your remote workforce by using DNS Firewall and Network Firewall.

Overview

DNS Firewall helps you block DNS queries that are made for known malicious domains, while allowing DNS queries to trusted domains. DNS Firewall has a simple deployment model that makes it straightforward for you to start protecting your VPCs by using managed domain lists, as well as custom domain lists. With DNS Firewall, you can filter and regulate outbound DNS requests. The service inspects DNS requests that are handled by Route 53 Resolver and applies actions that you define to allow or block requests.

DNS Firewall consists of domain lists and rule groups. Domain lists include custom domain lists that you create and AWS managed domain lists. Rule groups are associated with VPCs and control the response for domain lists that you choose. You can configure rule groups at scale by using AWS Firewall Manager. Rule groups process in priority order and stop processing after a rule is matched.

Network Firewall helps you protect your VPCs by securing workloads at the network layer. Network Firewall is an automatically scaling, highly available service that simplifies deployment and management for network administrators. With Network Firewall, you can inspect inbound traffic, outbound traffic, traffic between VPCs, and traffic between VPCs and AWS Direct Connect or AWS VPN connections. You can deploy stateless rules to allow or deny traffic based on the protocol, source and destination ports, and source and destination IP addresses. Additionally, you can deploy stateful rules that allow or block traffic based on domain lists, standard rule groups, or Suricata compatible intrusion prevention system (IPS) rules.

To configure Network Firewall, you need to create Network Firewall rule groups, a Network Firewall policy, and finally, a network firewall. Rule groups consist of stateless and stateful rule groups. For both types of rule groups, you need to estimate the capacity when you create the rule group. See the Network Firewall Developer Guide to learn how to estimate the capacity that is needed for the stateless and stateful rule engines.

This post shows you how to configure DNS Firewall and Network Firewall to protect your workload. You will learn how to create rules that prevent DNS queries to unapproved DNS servers, and that block resources by protocol, domain, and IP address. For the purposes of this post, we’ll show you how to protect a workload consisting of two Microsoft Active Directory domain controllers, an application server running QuickBooks, and Amazon WorkSpaces to deliver the QuickBooks application to end users, as shown in Figure 1.
 

Figure 1: An example architecture that includes domain controllers and QuickBooks hosted on EC2 and Amazon WorkSpaces for user virtual desktops

Configure DNS Firewall

DNS Firewall domain lists currently include two managed lists to block malware and botnet command-and-control networks, and you can also bring your own list. Your list can include any domain names that you have found to be malicious and any domains that you don’t want your workloads connecting to.

To configure DNS Firewall domain lists (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under DNS Firewall, choose Domain lists.
  3. Choose Add domain list to configure a customer-owned domain list.
  4. In the domain list builder dialog box, do the following:
    1. Under Domain list name, enter a name.
    2. In the second dialog box, enter the list of domains you want to allow or block.
    3. Choose Add domain list.

When you create a domain list, you can enter a list of domains you want to block or allow. You also have the option to upload your domains by using a bulk upload. You can use wildcards when you add domains for DNS Firewall. Figure 2 shows an example of a custom domain list that matches the root domain and any subdomain of box.com, dropbox.com, and sharefile.com, to prevent users from using these file sharing platforms.
 

Figure 2: Domains added to a customer-owned domain list
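If you prefer to manage DNS Firewall programmatically, you can create the same kind of domain list through the Route 53 Resolver API. The following boto3 sketch is illustrative rather than part of the original walkthrough; the list name and CreatorRequestId are placeholder values you would choose yourself, and the domains mirror the file sharing example above.

import boto3

route53resolver = boto3.client("route53resolver")

# Create an empty customer-owned domain list (name is a placeholder).
domain_list = route53resolver.create_firewall_domain_list(
    CreatorRequestId="file-sharing-domain-list-2021",
    Name="blocked-file-sharing-domains",
)["FirewallDomainList"]

# Add the root domains and wildcard entries for their subdomains.
route53resolver.update_firewall_domains(
    FirewallDomainListId=domain_list["Id"],
    Operation="ADD",
    Domains=[
        "box.com", "*.box.com",
        "dropbox.com", "*.dropbox.com",
        "sharefile.com", "*.sharefile.com",
    ],
)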

To configure DNS Firewall rule groups (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under DNS Firewall, choose Rule group.
  3. Choose Create rule group to apply actions to domain lists.
  4. Enter a rule group name and optional description.
  5. Choose Add rule to add a managed or customer-owned domain list, and do the following.
    1. Enter a rule name and optional description.
    2. Choose Add my own domain list or Add AWS managed domain list.
    3. Select the desired domain list.
    4. Choose an action, and then choose Next.
  6. (Optional) Change the rule priority.
  7. (Optional) Add tags.
  8. Choose Create rule group.

When you create your rule group, you attach rules and set an action and priority for the rule. You can set rule actions to Allow, Block, or Alert. When you set the action to Block, you can return the following responses:

  • NODATA – Returns no response.
  • NXDOMAIN – Returns an unknown domain response.
  • OVERRIDE – Returns a custom CNAME response.

Figure 3 shows rules attached to the DNS firewall.
 

Figure 3: DNS Firewall rules
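You can also create the rule group and its block rule through the API. This boto3 sketch makes the same assumptions as the domain list example above; the domain list ID shown is a placeholder for the ID returned when you created your list.

import boto3

route53resolver = boto3.client("route53resolver")

domain_list_id = "rslvr-fdl-EXAMPLE11111"  # placeholder domain list ID

# Create the rule group that will hold the rule (name is a placeholder).
rule_group = route53resolver.create_firewall_rule_group(
    CreatorRequestId="dns-firewall-rule-group-2021",
    Name="block-file-sharing",
)["FirewallRuleGroup"]

# Block queries that match the domain list and return NXDOMAIN.
route53resolver.create_firewall_rule(
    CreatorRequestId="dns-firewall-rule-2021",
    FirewallRuleGroupId=rule_group["Id"],
    FirewallDomainListId=domain_list_id,
    Priority=100,  # rules process in priority order within the group
    Action="BLOCK",
    BlockResponse="NXDOMAIN",
    Name="block-file-sharing-domains",
)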

To associate your rule group to a VPC (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under DNS Firewall, choose Rule group.
  3. Select the desired rule group.
  4. Choose Associated VPCs, and then choose Associate VPC.
  5. Select one or more VPCs, and then choose Associate.

The rule group will filter your DNS requests to Route 53 Resolver. Set your DNS servers' forwarders to use your Route 53 Resolver.
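The association step can likewise be scripted. In this boto3 sketch, the rule group ID and VPC ID are placeholders for your own resource identifiers.

import boto3

route53resolver = boto3.client("route53resolver")

route53resolver.associate_firewall_rule_group(
    CreatorRequestId="dns-firewall-association-2021",
    FirewallRuleGroupId="rslvr-frg-EXAMPLE11111",  # placeholder rule group ID
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    Priority=101,  # association priority; lower values evaluate first
    Name="workspaces-vpc-association",
)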

To configure logging for your firewall’s activity, navigate to the Route 53 console and select your VPC under the Resolver section. You can configure multiple logging options, if required. You can choose to log to Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), or Amazon Kinesis Data Firehose. Select the VPC that you want to log queries for and add any tags that you require.
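As a rough illustration, the following boto3 sketch sends Resolver query logs to a CloudWatch Logs group; the log group ARN, account ID, and VPC ID are placeholders, and an Amazon S3 bucket or Kinesis Data Firehose ARN can be used as the destination instead.

import boto3

route53resolver = boto3.client("route53resolver")

log_config = route53resolver.create_resolver_query_log_config(
    Name="dns-firewall-query-logs",
    DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:dns-queries",
    CreatorRequestId="dns-query-logging-2021",
)["ResolverQueryLogConfig"]

# Associate the logging configuration with the VPC whose queries you
# want to log.
route53resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=log_config["Id"],
    ResourceId="vpc-0123456789abcdef0",  # placeholder VPC ID
)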

Configure Network Firewall

In this section, you’ll learn how to create Network Firewall rule groups, a firewall policy, and a network firewall.

Configure rule groups

Stateless rule groups are straightforward evaluations of a source and destination IP address, protocol, and port. It’s important to note that stateless rules don’t perform any deep inspection of network traffic.

Stateless rules have three options:

  • Pass – Pass the packet without further inspection.
  • Drop – Drop the packet.
  • Forward – Forward the packet to stateful rule groups.

Stateless rules inspect each packet in isolation in the order of priority and stop processing when a rule has been matched. This example doesn’t use a stateless rule, and simply uses the default firewall action to forward all traffic to stateful rule groups.

Stateful rule groups support deep packet inspection, traffic logging, and more complex rules. Stateful rule groups evaluate traffic based on standard rules, domain rules, or Suricata rules. Depending on the type of rule that you use, you can pass, drop, or generate alerts on the traffic that is inspected.

To create a rule group (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under AWS Network Firewall, choose Network Firewall rule groups.
  3. Choose Create Network Firewall rule group.
  4. Choose Stateful rule group or Stateless rule group.
  5. Enter the desired settings.
  6. Choose Create stateful rule group.

The example in Figure 4 uses standard rules to block outbound and inbound Server Message Block (SMB), Secure Shell (SSH), Network Time Protocol (NTP), DNS, and Kerberos traffic, which are common protocols used in our example workload. Network Firewall doesn’t inspect traffic between subnets within the same VPC or over VPC peering, so these rules won’t block local traffic. You can add rules with the Pass action to allow traffic to and from trusted networks.
 

Figure 4: Standard rules created to block unauthorized SMB, SSH, NTP, DNS, and Kerberos traffic
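A comparable stateful rule group can be created with the Network Firewall API. This boto3 sketch is a simplified version of the rules in Figure 4: the group name and capacity are placeholder choices, and in practice you would also add Pass rules for your trusted networks as described above.

import boto3

network_firewall = boto3.client("network-firewall")

def drop_rule(protocol, sid):
    # Build a stateful 5-tuple rule that drops the given protocol in
    # any direction, for any source and destination.
    return {
        "Action": "DROP",
        "Header": {
            "Protocol": protocol,
            "Source": "ANY", "SourcePort": "ANY",
            "Direction": "ANY",
            "Destination": "ANY", "DestinationPort": "ANY",
        },
        "RuleOptions": [{"Keyword": "sid", "Settings": [str(sid)]}],
    }

protocols = ["SMB", "SSH", "NTP", "DNS", "KRB5"]

network_firewall.create_rule_group(
    RuleGroupName="block-restricted-protocols",  # placeholder name
    Type="STATEFUL",
    Capacity=100,  # estimate per the Network Firewall Developer Guide
    RuleGroup={
        "RulesSource": {
            "StatefulRules": [
                drop_rule(protocol, sid)
                for sid, protocol in enumerate(protocols, start=1)
            ]
        }
    },
)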

Blocking outbound DNS requests is a common strategy to verify that DNS traffic resolves only from local resolvers, such as your DNS server or the Route 53 Resolver. You can also use these rules to prevent inbound traffic to your VPC-hosted resources, as an additional layer of security beyond security groups. If a security group erroneously allows SMB access to a file server from external sources, Network Firewall will drop this traffic based on these rules.

Even though the DNS Firewall policy described in this blog post will block DNS queries for unauthorized sharing platforms, some users might attempt to bypass this block by modifying the HOSTS file on their Amazon WorkSpace. To counter this risk, you can add a domain rule to your firewall policy to block the box.com, dropbox.com, and sharefile.com domains, as shown in Figure 5.
 

Figure 5: A domain list rule to block box.com, dropbox.com, and sharefile.com
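The domain rule in Figure 5 can also be expressed through the API as a stateful rule group with a domain list source. In this boto3 sketch, the group name and capacity are placeholders; a leading dot in a target matches the domain and all of its subdomains.

import boto3

network_firewall = boto3.client("network-firewall")

network_firewall.create_rule_group(
    RuleGroupName="deny-file-sharing-domains",  # placeholder name
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                # Match HTTP Host headers and TLS SNI values so the block
                # holds even if DNS filtering is bypassed via a HOSTS file.
                "Targets": [".box.com", ".dropbox.com", ".sharefile.com"],
                "TargetTypes": ["HTTP_HOST", "TLS_SNI"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
)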

Configure firewall policy

You can use firewall policies to attach stateless and stateful rule groups to a single policy that is used by one or more network firewalls. Attach your rule groups to this policy and set your preferred default stateless actions. The default stateless actions will apply to any packets that don’t match a stateless rule group within the policy. You can choose separate actions for full packets and fragmented packets, depending on your needs, as shown in Figure 6.
 

Figure 6: Stateful rule groups attached to a firewall policy

You can choose to forward the traffic to be processed by any stateful rule groups that you have attached to your firewall policy. To bypass any stateful rule groups, you can select the Pass option.

To create a firewall policy (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under AWS Network Firewall, choose Firewall policies.
  3. Choose Create firewall policy.
  4. Enter a name and description for the policy.
  5. Choose Add rule groups.
    1. Select the stateless default actions you want to use.
    2. For any stateless or stateful rule groups, choose Add rule groups to add any rule groups that you want to use.
  6. (Optional) Add tags.
  7. Choose Create firewall policy.
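The same policy can be defined programmatically. In this boto3 sketch, the policy name and rule group ARNs are placeholders referring to the stateful rule groups created earlier; the default actions forward all full and fragmented packets to the stateful engine, matching the default-forward approach described above.

import boto3

network_firewall = boto3.client("network-firewall")

# Placeholder ARNs for the stateful rule groups created earlier.
stateful_rule_group_arns = [
    "arn:aws:network-firewall:us-east-1:111122223333:stateful-rulegroup/block-restricted-protocols",
    "arn:aws:network-firewall:us-east-1:111122223333:stateful-rulegroup/deny-file-sharing-domains",
]

network_firewall.create_firewall_policy(
    FirewallPolicyName="remote-workforce-policy",  # placeholder name
    FirewallPolicy={
        # Forward full and fragmented packets to the stateful engine.
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": arn} for arn in stateful_rule_group_arns
        ],
    },
)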

Configure a network firewall

Configuring the network firewall requires you to attach the firewall to a VPC and select at least one subnet.

To create a network firewall (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under AWS Network Firewall, choose Firewalls.
  3. Choose Create firewall.
  4. Under Firewall details, do the following:
    1. Enter a name for the firewall.
    2. Select the VPC.
    3. Select one or more Availability Zones and subnets, as needed.
  5. Under Associated firewall policy, do the following:
    1. Choose Associate an existing firewall policy.
    2. Select the firewall policy.
  6. (Optional) Add tags.
  7. Choose Create firewall.

Two subnets in separate Availability Zones are used for the network firewall example shown in Figure 7, to provide high availability.
 

Figure 7: A network firewall configuration that includes multiple subnets
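Creating the firewall itself can also be scripted. In this boto3 sketch, the firewall name, policy ARN, VPC ID, and subnet IDs are placeholders; one subnet is mapped per Availability Zone for high availability, as in Figure 7.

import boto3

network_firewall = boto3.client("network-firewall")

network_firewall.create_firewall(
    FirewallName="remote-workforce-firewall",  # placeholder name
    FirewallPolicyArn=(
        "arn:aws:network-firewall:us-east-1:111122223333:"
        "firewall-policy/remote-workforce-policy"
    ),
    VpcId="vpc-0123456789abcdef0",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaaaaaaaaaaaaaaa"},  # AZ 1 firewall subnet
        {"SubnetId": "subnet-0bbbbbbbbbbbbbbbb"},  # AZ 2 firewall subnet
    ],
)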

After the firewall is in the ready state, you’ll be able to see the endpoint IDs of the firewall endpoints, as shown in Figure 8. The endpoint IDs are needed when you update VPC route tables.
 

Figure 8: Firewall endpoint IDs

You can configure alert logs, flow logs, or both to be sent to Amazon S3, CloudWatch log groups, or Kinesis Data Firehose. Administrators configure alert logging to build proactive alerting and flow logging to use in troubleshooting and analysis.

Finalize the setup

After the firewall is created and ready, the last step to complete setup is to update the VPC route tables so that traffic flows through the new network firewall endpoints. Update each public subnet's route table to direct traffic to the firewall endpoint in the same Availability Zone, and update the internet gateway's route table to direct traffic bound for the public subnets to the firewall endpoint in the matching Availability Zone. These routes are shown in Figure 9.
 

Figure 9: Network diagram of the firewall solution
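If you script this step, you can look up each endpoint ID and add the routes with the EC2 API. In this boto3 sketch, the firewall name and route table IDs are placeholders, and only the public subnet route tables are shown; the internet gateway's route table needs equivalent routes with the public subnet CIDRs as destinations.

import boto3

ec2 = boto3.client("ec2")
network_firewall = boto3.client("network-firewall")

# Look up the firewall endpoint ID in each Availability Zone.
status = network_firewall.describe_firewall(
    FirewallName="remote-workforce-firewall"  # placeholder name
)["FirewallStatus"]

endpoint_by_az = {
    az: state["Attachment"]["EndpointId"]
    for az, state in status["SyncStates"].items()
}

# Placeholder public subnet route table IDs, keyed by Availability Zone.
public_route_tables = {
    "us-east-1a": "rtb-0aaaaaaaaaaaaaaaa",
    "us-east-1b": "rtb-0bbbbbbbbbbbbbbbb",
}

# Send each public subnet's internet-bound traffic through the firewall
# endpoint in the same Availability Zone.
for az, rtb_id in public_route_tables.items():
    ec2.create_route(
        RouteTableId=rtb_id,
        DestinationCidrBlock="0.0.0.0/0",
        VpcEndpointId=endpoint_by_az[az],
    )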

In this example architecture, Amazon WorkSpaces users are able to connect directly between private subnet 1 and private subnet 2 to access local resources. Security groups and Windows authentication control access from WorkSpaces to EC2-hosted workloads such as Active Directory, file servers, and SQL applications. For example, Microsoft Active Directory domain controllers are added to a security group that allows inbound ports 53, 389, and 445, as shown in Figure 10.
 

Figure 10: Domain controller security group inbound rules

Traffic from WorkSpaces will first resolve DNS requests by using the Active Directory domain controller. The domain controller uses the local Route 53 Resolver as a DNS forwarder, which DNS Firewall protects. Network traffic then flows from the private subnet to the NAT gateway, through the network firewall to the internet gateway. Response traffic flows back from the internet gateway to the network firewall, then to the NAT gateway, and finally to the user WorkSpace. This workflow is shown in Figure 11.
 

Figure 11: Traffic flow for allowed traffic

If a user attempts to connect to blocked internet resources, such as box.com, a botnet, or a malware domain, this will result in an NXDOMAIN response from DNS Firewall, and the connection will not proceed any further. This blocked traffic flow is shown in Figure 12.
  

Figure 12: Traffic flow when blocked by DNS Firewall

If a user attempts to initiate a DNS request to a public DNS server or attempts to access a public file server, the connection will be dropped by Network Firewall. The traffic will flow as expected from the user WorkSpace to the NAT gateway, and from the NAT gateway to the network firewall, which inspects the traffic. The network firewall then drops the traffic when it matches a rule with the drop or block action, as shown in Figure 13. This configuration helps to ensure that your private resources use only approved DNS servers and internet resources. Network Firewall blocks unapproved domains and restricted protocols by using the standard rules described earlier.
 

Figure 13: Traffic flow when blocked by Network Firewall

Take extra care to associate a route table with your internet gateway to route private subnet traffic to your firewall endpoints; otherwise, response traffic won’t make it back to your private subnets. Traffic will route from the private subnet up through the NAT gateway in its Availability Zone. The NAT gateway will pass the traffic to the network firewall endpoint in the same Availability Zone, which will process the rules and send allowed traffic to the internet gateway for the VPC. By using this method, you can block outbound network traffic with criteria that are more advanced than what is allowed by network ACLs.

Conclusion

Amazon Route 53 Resolver DNS Firewall and AWS Network Firewall help you protect your VPC workloads by inspecting network traffic and applying deep packet inspection rules to block unwanted traffic. This post focused on implementing Network Firewall in a virtual desktop workload that spans multiple Availability Zones. You’ve seen how to deploy a network firewall and update your VPC route tables. This solution can help increase the security of your workloads in AWS. If you have multiple VPCs to protect, consider enforcing your policies at scale by using AWS Firewall Manager, as outlined in this blog post.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Network Firewall forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Patrick Duffy

Patrick is a Solutions Architect in the Small Medium Business (SMB) segment at AWS. He is passionate about raising awareness and increasing security of AWS workloads. Outside work, he loves to travel and try new cuisines and enjoys a match in Magic Arena or Overwatch.