Tag Archives: Security, Identity & Compliance

AWS Security Profiles: Colm MacCárthaigh, Senior Principal Engineer

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profiles-colm-maccarthaigh-senior-principal-engineer/

AWS Security Profile: Colm MacCárthaigh
In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


How long have you been at AWS and what do you do in your current role?

I joined in 2008 to help build Amazon CloudFront, our content delivery network. These days, I work on Amazon Elastic Compute Cloud (Amazon EC2) and cryptography, focusing on products like AWS Nitro Enclaves and our network encryption.

What’s your favorite part of your job?

Working with smart and awesome people who I get to learn a lot from.

How did you get started in Security?

Around 2000, I became a system administrator for a multiuser university shell service called RedBrick. RedBrick is an old-school Unix terminal service run by students, for students. Thousands of curious people had access to log in, which made it a very interesting security challenge. We had to keep everything extremely up-to-date and deal with all sorts of nuisances and abuse. I learned how to find and report new kernel vulnerabilities, deal with denial-of-service attacks, and manage campaigns like getting everyone to move to the encrypted SSH protocol rather than Telnet (which was more common at the time). We tried educating users, but in the end I built a client with a one-click SSH to RedBrick button and that did the trick.

How do you explain what you do to non-technical friends or family?

“I work on the internet” is probably the most common, or these days I can say, “I work on the cloud.” Most of my friends and family are non-technical; we hang out and play music, and catch up and socialize. I try to avoid talking about work.

What are you currently working on that you’re excited about?

Nitro Enclaves is going to make it cheaper and easier for customers to isolate sensitive data. That’s a big deal. Anything we can do that is going to improve the security of people’s data is a big deal. We’re all tired and weary of hearing about “yet another data breach.” Not everyone has the depth of expertise and experience that Amazon has. When we can take the lessons we’ve learned, and the techniques we’ve applied, for securing businesses like Amazon.com and then give those lessons and techniques to customers in an easy-to-consume form—that excites me.

You’re presenting at re:Invent this year—can you give readers a sneak peek of what you’re covering?

I’ll be talking about Nitro Enclaves, but also presenting some more insights into how we build at AWS. We recently launched the Amazon Builders’ Library, which is an ongoing series of articles and deep dives into lessons we’ve learned from building Amazon.com, Alexa, AWS, and other large services. I’m going to cover what simplicity means for us, and also talk about things we do that most customers would never need to do themselves, so that should be fun.

What are you hoping that your audience will do differently after your session?

I’ll be happy if people pick up a few tips and tricks and get a sense of how we break down problems in a customer-obsessed way.

What is your favorite Leadership Principle at Amazon and why?

My favorite leadership principle is Ownership. I love that we’re empowered (and expected) to be owners at Amazon. Part of that is not having to seek a lot of permission, which helps with moving quickly, and part of that is a feeling of team pride that comes from a job well done.

What’s the best career advice you’ve ever received?

Be fully committed or get out of the way, but don’t do anything in between.

If you could go back, what would you tell yourself at the beginning of your career?

I’ve caught enough lucky breaks that I feel like I’ve done really well in my career, definitely wildly beyond what I could have dreamed of when I was a teenager, so I wouldn’t want to change anything. Who knows how things would go then! If I could go back in time, I’d give some hints and help to amazingly talented people I know who got stung by bad luck.

What are you most proud of in your career?

Becoming a Project Management Committee (PMC) member for the Apache httpd webserver was a huge milestone for me. I got to contribute to and maintain Apache, and was trusted to be release manager. That was all volunteer work, but it started everything for me.

I hear you play Irish music. What instruments do you play?

Yes, I play and sing Irish traditional music. Mainly guitar, but also piano, Irish whistle, banjo, cittern, and bouzouki. Those last instruments are double-stringed and used mainly for accompaniment. I’ve played in stage shows and bands, and I get to record and tour often enough, when we’re not on lockdown. It’s very hard to beat how fun it is to play music with other people; there’s something very special about it. Now that I live in the U.S., it also connects me to Ireland, where I grew up, and it gives me an opportunity to sing in Irish, the language I spoke at home and at school growing up.


Author

Colm MacCárthaigh

Colm joined AWS in 2008 to work on high-scale systems and security. Today, he works on AWS IAM and network cryptography. Colm is also an active open source and open standards contributor. He’s a long-time author and project maintainer for the Apache httpd webserver, and a contributor to the Linux kernel and IETF standards. Colm grew up in Ireland, and still plays and sings Irish music.

Zero Trust architectures: An AWS perspective

Post Syndicated from Mark Ryland original https://aws.amazon.com/blogs/security/zero-trust-architectures-an-aws-perspective/

Our mission at Amazon Web Services (AWS) is to innovate on behalf of our customers so they have less and less work to do when building, deploying, and rapidly iterating on secure systems. From a security perspective, our customers seek answers to the ongoing question: “What are the optimal patterns to ensure the right level of confidentiality, integrity, and availability of my systems and data while increasing speed and agility?” Increasingly, customers are asking specifically about how security architectural patterns that fall under the banner of Zero Trust architecture or Zero Trust networking might help answer this question.

Given the surge in interest in technology that uses the Zero Trust label, as well as the variety of concepts and models that come under the Zero Trust umbrella, we’d like to provide our perspective. We’ll share our definition and guiding principles for Zero Trust, and then explore the larger subdomains that have emerged under that banner. We’ll also talk about how AWS has woven these principles into the fabric of the AWS cloud since its earliest days, as well as into many recent developments. Finally, we’ll review how AWS can help you on your own Zero Trust journey, focusing on the underlying security objectives that matter most to our customers. Technological approaches rise and fall, but underlying security objectives tend to be relatively stable over time. (A good summary of some of those can be found in the Design Principles of the AWS Well-Architected Framework.)

Definition and guiding principles for Zero Trust

Let’s start out with a general definition. Zero Trust is a conceptual model and an associated set of mechanisms that focus on providing security controls around digital assets that do not solely or fundamentally depend on traditional network controls or network perimeters. The zero in Zero Trust fundamentally refers to diminishing—possibly to zero!—the trust historically created by an actor’s location within a traditional network, whether we think of the actor as a person or a software component. In a Zero Trust world, network-centric trust models are augmented or replaced by other techniques—which we can describe generally as identity-centric controls—to provide equal or better security mechanisms than we had in place previously. Better security mechanisms should be understood broadly to include attributes such as greater usability and flexibility, even if the overall security posture remains the same. Let’s consider more details and possible approaches along the two dimensions.

One dimension is the network. Do we achieve Zero Trust by allowing all network packets to flow between all hosts or endpoints, but implement all security controls above the network layer? Or do we break our systems down into smaller logical components and implement much tighter network segments or packet-level controls—so-called micro-segments or micro-perimeters? Do we add some kind of gateway or proxy technology that enforces a new kind of trust boundary? Do we still use VPN technology for network isolation but make it more dynamic and hidden from the user experience, so that users don’t even notice that network boundaries are being created and torn down as needed? Or some combination of these techniques?

The other dimension is identity and access management. Are we talking about human actors with their PCs, tablets, and phones trying to access web applications? Or are we talking about machine-to-machine, software-to-software communication, where all requests are authenticated and authorized using other kinds of techniques? Or perhaps we’re thinking of some combination of the two. For example, certain security-relevant properties or attributes of the user’s situation—strength of authentication, device type, ownership, posture assessment, health, network location, and others—are propagated to and through the software systems with which the user is interacting, and alter their access dynamically.

Thus, as we start to look more closely at Zero Trust, we can immediately see the possibility of confusion—because many different topics and concepts are implicated—but also a clear indication of opportunities to build better, more flexible, and more secure software systems. What are some of the principles that can help guide us through both the confusion and the opportunities?

Our first guiding principle for Zero Trust is that while the conceptual model decreases reliance on network location, the role of network controls and perimeters remains important to the overall security architecture. In other words, the best security doesn’t come from making a binary choice between identity-centric and network-centric tools, but rather by using both effectively in combination with each other. Identity-centric controls, such as the AWS SigV4 request signing process, which is used to interact with AWS API endpoints, uniquely authenticate and authorize each and every signed API request, and provide very fine-grained access controls. However, network-centric tools such as Amazon Virtual Private Cloud (Amazon VPC), security groups, AWS PrivateLink, and VPC endpoints are straightforward to understand and use, filter unnecessary noise out of the system, and provide excellent guardrails within which identity-centric controls can operate. Ideally, these two kinds of controls should not only coexist, they should be aware of and augment one another. For example, VPC endpoints provide the ability to attach a policy that allows you to write and enforce identity-centric rules at a logical network boundary—in that case, the private network exit from your Amazon VPC on the way to a nearby AWS service endpoint.
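To make that last point concrete, here is a hypothetical sketch of a VPC endpoint policy (the bucket name is illustrative, not from this post) that allows only object reads from a single S3 bucket through the endpoint:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowGetObjectFromExampleBucketOnly",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-bucket/*"
        }
      ]
    }

A request through the endpoint that falls outside this statement is denied at that logical network boundary, even if the caller’s IAM credentials would otherwise permit it.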

Our second guiding principle for Zero Trust is that it can mean different things in different contexts. Arguably one of the key reasons for the ambiguity surrounding Zero Trust is that the term encompasses many different use cases which share only the fundamental technical concept of diminishing the security relevance of a network location or boundary. Yet those use cases differ substantially in what they’re trying to achieve for the organization. As we noted above, common examples of Zero Trust goals range from ensuring workforce agility and mobility—using browsers and mobile apps and the internet to access business systems and applications—to the creation of carefully segmented micro-service architectures inside of new cloud-based applications. By focusing on a specific problem that we’re trying to solve, and approaching it with fresh eyes and new tools, we can avoid getting mired in low-value discussions around whether a new approach to a security challenge is really—or to what degree it is—an application of the Zero Trust concept.

Our third guiding principle is that Zero Trust concepts must be applied in accordance with the organizational value of the system and data being protected. Over time, the application of the Zero Trust conceptual model and associated mechanisms will continue to improve defense in depth, and continue to make security controls we already have work better through the increased visibility and software-defined nature of the cloud. Applied well, the tenets of Zero Trust can significantly raise the security bar, especially for critical workloads. However, if applied in strict orthodoxy, Zero Trust methods can limit the incorporation of more traditional technologies into upgraded or new systems, and stifle innovation by overly taxing organizations where the benefits aren’t commensurate with the effort. For many business systems, network controls and network perimeters will continue to be important and usually adequate controls for a long time, perhaps forever. We believe it’s best to think of Zero Trust concepts as additive to existing security controls and concepts, rather than as replacements.

Examples of Zero Trust principles and capabilities at work today within the AWS cloud

The most prominent example of Zero Trust in AWS is how millions of customers typically interact with AWS every day using the AWS Management Console or securely calling AWS APIs over a diverse set of public and private networks. Whether called via the console, the AWS Command Line Interface (AWS CLI), or software written to the AWS APIs, ultimately all of these methods of interaction reach a set of web services with endpoints that are reachable from the internet. There is absolutely nothing about the security of the AWS API infrastructure that depends on network reachability. Each one of these signed API requests is authenticated and authorized every single time at rates of millions upon millions of requests per second globally. Our customers do so confidently, knowing that the cryptographic strength of the underlying Transport Layer Security (TLS) protocol—augmented by the AWS Signature v4 signing process—properly secures these requests without any regard to the trustworthiness of the underlying network. Interestingly, the use of cloud-based APIs is rarely—if ever—mentioned in Zero Trust discussions. Perhaps this is because AWS led the way with this approach to securing APIs from the start, such that it is now assumed to be a basic part of every cloud security story.
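As a minimal sketch of what this signing process looks like in code, the following example uses botocore, the library underneath the AWS CLI and boto3 (the endpoint and Region here are illustrative):

    import botocore.session
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    session = botocore.session.get_session()
    credentials = session.get_credentials()

    # Sign a GET request to the S3 endpoint in us-east-1
    request = AWSRequest(method="GET", url="https://s3.us-east-1.amazonaws.com/")
    SigV4Auth(credentials, "s3", "us-east-1").add_auth(request)

    # The Authorization header now carries the signature. IAM authenticates
    # and authorizes each request like this one individually, regardless of
    # the network it traverses.
    print(request.headers["Authorization"])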

Similarly, but perhaps not as well understood, when individual AWS services need to call each other to operate and deliver their service capabilities, they rely on the same mechanisms that you use as a customer. You can see this in action in the form of service-linked roles. For example, when AWS Auto Scaling determines that it needs to call the Amazon Elastic Compute Cloud (Amazon EC2) API to create or terminate an EC2 instance in your account, the AWS Auto Scaling service assumes the service-linked role you’ve provided in your account, receives the resulting AWS short-term credentials, and uses these credentials to sign requests using the SigV4 process to the appropriate EC2 APIs. On the receiving end, AWS Identity and Access Management (IAM) authenticates and authorizes the incoming calls for EC2. In other words, even though they’re both AWS services, AWS Auto Scaling and EC2 have no inherent trust, network or otherwise, of one another and use strong identity-centric controls as the basis of the security model between the two services as they operate on your behalf. You, the customer, have full visibility into both the privileges that you’re granting to one service, as well as an AWS CloudTrail record of the use of those privileges.
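The same pattern is available to you as a customer. The following is a rough sketch (the role ARN and session name are hypothetical) of assuming a role and using the resulting short-term credentials to make signed, individually authorized calls:

    import boto3

    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/example-service-role",
        RoleSessionName="example-session",
    )
    credentials = response["Credentials"]

    # Every call made with these short-term credentials is signed with
    # SigV4 and authenticated and authorized by IAM on the receiving end.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )
    print(len(ec2.describe_instances()["Reservations"]))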

Other great examples of Zero Trust capabilities in the AWS portfolio can be found in the IoT Service. When we launched AWS IoT Core we made a strategic decision—against the prevailing industry norms at the time—to always require TLS network encryption and modern client authentication, including certificate-based mutual TLS, when connecting IoT devices to service endpoints. We subsequently added TLS support to FreeRTOS, enabling modern, secure communication to an entire class of small CPU and small memory devices that were previously assumed to not be capable of it. With AWS IoT Greengrass, we pioneered a way of working with existing no-security devices using a remote gateway that relied on local network presence but also was able to run AWS Lambda functions to validate security and provide a secure proxy to the cloud. These examples highlight where adherence to AWS security standards brought key foundational components of Zero Trust to a technology domain where vast amounts of unauthenticated, unencrypted network messaging over the open internet was previously the norm.

How AWS can help you on your Zero Trust journey

To help you on your own Zero Trust journey, there are a number of AWS cloud-specific identity and networking capabilities that provide core Zero Trust building blocks as standard features. AWS services provide this functionality via simple API calls, without you needing to build, maintain, or operate any infrastructure or additional software components. To help best frame the conversation, we’ll consider these capabilities against the backdrop of three distinct use cases:

  1. Authorizing specific flows between components to eliminate unneeded lateral network mobility.
  2. Enabling friction-free access to internal applications for your workforce.
  3. Securing digital transformation projects such as IoT.

Our first use case focuses mainly on machine-to-machine communications—authorizing specific flows between components to help eliminate lateral network mobility risk. Otherwise put, if two components don’t need to talk to one another across the network, they shouldn’t be able to, even if these systems happen to exist within the same network or network segment. This greatly reduces the overall surface area of the connected systems and eliminates unneeded pathways, particularly those that lead to sensitive data. Within this use case, our discussion should begin with security groups, which have been a part of Amazon EC2 since its earliest days. Security groups provide highly dynamic, software-defined network micro-perimeters for both north-south and east-west traffic. Security group assignments occur automatically as resources come and go, and rules in one security group can reference one another by ID, either within the same Amazon VPC or across larger peered networks in the same or different regions. These properties allow security groups to act as a kind of identity system in which group membership becomes a relevant property for determining whether or not to permit particular network flows. This helps enable you to author extremely granular rules without the associated operational burden of keeping them up-to-date as membership in a group ebbs and flows. Similarly, PrivateLink provides an extremely useful building block in the general space of micro-perimeters and micro-segmentation. Using PrivateLink, a load-balanced endpoint can be exposed as a narrow, one-way gateway between two VPCs, with tight identity-based controls determining who can access the gateway and where incoming packets can land. Initiating network connections in the other direction isn’t allowed at all, and the VPCs don’t even need to have routes between one another. Thousands of customers use PrivateLink today as a fundamental building block of a secure micro-services architecture, as well as secure and private access to PaaS and SaaS services from their suppliers.
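As a brief illustration of the group-membership idea (the group IDs and port below are hypothetical), a rule in one security group can reference another security group by ID rather than by IP range:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow members of the app-tier group to reach members of the
    # database-tier group on port 5432. Group membership, not an IP
    # address, is what grants access.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0bbb1111222233334",  # database tier
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-0aaa1111222233334"}],  # app tier
        }],
    )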

Going back to our discussion about AWS APIs, the AWS SigV4 signature process for authenticating and authorizing API requests is no longer just for AWS services. You can achieve the same kind of hardened interface approach using the Amazon API Gateway service, which allows software interfaces to be securely available on the open internet. API Gateway provides distributed denial of service (DDoS) protection, rate limiting, and AWS IAM support as one of several authorization options. When you choose AWS IAM authorization, you author standard IAM policies that define who can call your API and where they can call it from, using the full expressiveness of the IAM policy language. Callers sign their requests using their AWS credentials, typically delivered in the form of IAM roles attached to compute resources, and IAM uniquely authenticates and authorizes every single call to your API according to those policies. With one step, your API is protected behind the massively scaled, super performant, globally available IAM service that protects AWS APIs—with nothing for you to manage or maintain. Calls from the API Gateway front-end to your back-end implementation are secured by mutual TLS, so you’re assured that only API Gateway is able to invoke the back-end implementation. With this strong identity-centric control in place, you have two choices. You can safely place your back-end implementation on the public network, or add the VPC integration model such that the API Gateway call to your back-end implementation running inside of your VPC is protected by an identity-centric control (mutual TLS) and a network-centric control (private connectivity from API Gateway to your code). The security achieved by these feature combinations, arguably only possible in the cloud, makes discussions of east-west concerns seem underwhelming and rooted in constraints of the past.
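As one hypothetical sketch of such a policy (the account ID, API ID, stage, and route are placeholders), the following allows a caller to invoke exactly one method of one API:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "execute-api:Invoke",
          "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/GET/orders"
        }
      ]
    }

Attached to an IAM role on your compute resources, a policy like this lets IAM authenticate and authorize each call to that single route, with nothing else exposed.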

Our second use case, enabling friction-free access to internal applications for your workforce, is all about improving workforce mobility without compromising security. Traditionally these applications have existed behind a strong VPN front door. However, VPNs can be expensive to scale and aren’t necessarily compatible with the full array of mobile devices that the modern workforce demands. The objective in this case is to make the locks on the individual applications so good that you can eliminate the VPN-based front door. To achieve this, our customers have told us that they want a range of technical solutions to choose from according to their industry, risk tolerance, developer maturity, and other factors. At one end of the spectrum, we have many customers who prefer to use desktop-as-a-service (Amazon WorkSpaces) or application-as-a-service (Amazon AppStream 2.0) models to provide a powerful and flexible pixel proxy approach to Zero Trust. Traditional security controls are applied to those intermediary virtual devices, and then any user with a PC, tablet, or HTML5 client can reach those virtualized desktops or applications over the internet—or behind additional network controls and perimeters, if they so desire—to provide a rich, desktop-like experience without having to worry about the security of the final device in the hands of the user. Similarly, customers have asked for a better way to access their enterprise applications securely from mobile phones without deploying mobile device management or other such often cumbersome and expensive technologies. To meet that requirement, we launched Amazon WorkLink, providing a secure proxy service that renders complex web applications in the AWS cloud. Amazon WorkLink streams only pixels—and a very minimal amount of JavaScript for interactivity—to mobile phones. No sensitive enterprise data is ever stored or cached on the mobile device.

At the other end of the spectrum, we have customers who want to connect their internal web applications directly to the internet. For these customers, the combination of AWS Shield, AWS WAF, and Application Load Balancer with OpenID Connect (OIDC) authentication provides a fully managed identity-aware network protection stack. Shield provides managed DDoS protection services that provide always-on detection and automatic inline mitigations that minimize application downtime and latency. AWS WAF is a web application firewall that lets you monitor and protect web requests before they reach your infrastructure using your desired combination of rule groups provided by AWS, the AWS Marketplace, or your own custom ones. By enabling authentication in Application Load Balancer—beyond the normal load balancing capabilities—you can directly integrate with your existing identity provider (IdP) to offload the work of authenticating users, and to leverage the existing capabilities within your IdP—such as strong authentication, device posture assessment, conditional access, and policy enforcement. Using this combination, your internal custom applications quickly become just as flexible as SaaS applications, allowing your workforce to enjoy the same work-anywhere flexibility as SaaS while unifying your application portfolio under a common security model powered by modern identity standards.
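The following is a hedged sketch of enabling OIDC authentication on an Application Load Balancer listener with the Elastic Load Balancing API; the ARNs, endpoints, and client details are placeholders for your own IdP configuration:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Require OIDC authentication before forwarding traffic to the
    # application's target group.
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example-alb/0123456789abcdef/0123456789abcdef",
        DefaultActions=[
            {
                "Type": "authenticate-oidc",
                "Order": 1,
                "AuthenticateOidcConfig": {
                    "Issuer": "https://idp.example.com",
                    "AuthorizationEndpoint": "https://idp.example.com/authorize",
                    "TokenEndpoint": "https://idp.example.com/token",
                    "UserInfoEndpoint": "https://idp.example.com/userinfo",
                    "ClientId": "example-client-id",
                    "ClientSecret": "example-client-secret",
                },
            },
            {
                "Type": "forward",
                "Order": 2,
                "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-tg/0123456789abcdef",
            },
        ],
    )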

Our third use case—securing digital transformation projects such as IoT—is markedly different from the first two. Consider a connected vehicle, relaying a critical stream of instrumentation over mobile networks and the internet into a cloud based analytics environment for processing and insights. These workloads have always existed entirely outside the traditional enterprise network, and require a security model that accounts for that situation. The family of AWS IoT services provides scalable solutions for issuing unique device identities to every device in your fleet, and then using those identities and their associated access control policies to securely control how they communicate and interact with the cloud. The security of these devices can be easily monitored and maintained with AWS IoT Device Defender, over-the-air software updates, and even entire operating system upgrades—now built in to FreeRTOS—to keep devices safe and secure over time. Moving forward, as more and more IT workloads move closer to the edge to minimize latency and improve user experiences, the prevalence of this use case will continue to expand, even if it isn’t applicable to your business today.

It’s still Day 1

We hope this post has helped communicate our vision for Zero Trust, and highlighted how we believe that our underlying security principles and advancing capabilities represent a bar-raising security model both for the AWS cloud and for the environments that our customers build on top of our services.

At Amazon we obsess over customers and their needs, so our job is never done. We have lots more capabilities we want to build, and lots more guidance still to offer. We look forward to your feedback and to continuing the journey together—reflecting the words and core vision of our founder, Jeff Bezos: “It’s still Day 1.”


Author

Mark Ryland

Mark is the director of the Office of the CISO for AWS. He has over 29 years of experience in the technology industry and has served in leadership roles in cybersecurity, software engineering, distributed systems, technology standardization and public policy. Previously, he served as the Director of Solution Architecture and Professional Services for the AWS World Public Sector team.

Author

Quint Van Deman

Quint is a Principal Specialist for AWS Identity. In this role, he leads the go-to-market creation and execution for AWS Identity services, field enablement, and strategic customer advisement, and is a company wide subject matter expert on identity, access management, and federation. Before joining the Specialist team, Quint was an early member of the AWS Professional Services team, where he led AWS teams directing several of AWS’ most prominent enterprise customers along their journey to the cloud. Prior to joining AWS, Quint held enterprise architect style roles within a number of mid size organizations and consulting firms, mostly specializing in large scale open source infrastructure.

Automatically update security groups for Amazon CloudFront IP ranges using AWS Lambda

Post Syndicated from Yeshwanth Kottu original https://aws.amazon.com/blogs/security/automatically-update-security-groups-for-amazon-cloudfront-ip-ranges-using-aws-lambda/

Amazon CloudFront is a content delivery network that can help you increase the performance of your web applications and significantly lower the latency of delivering content to your customers. For CloudFront to access an origin (the source of the content behind CloudFront), the origin has to be publicly available and reachable. Anyone with the origin domain name or IP address could request content directly and bypass CloudFront. In this blog post, I describe an automated solution that uses security groups to permit only CloudFront to access the origin.

Amazon Simple Storage Service (Amazon S3) origins provide a feature called Origin Access Identity, which blocks public access to selected buckets, making them accessible only through CloudFront. When you use CloudFront to secure your web applications, it’s important to ensure that only CloudFront can access your origin (such as Amazon Elastic Compute Cloud (Amazon EC2) or Application Load Balancer (ALB)) and that any direct access to the origin is restricted. This blog post shows you how to create an AWS Lambda function to automatically update Amazon Virtual Private Cloud (Amazon VPC) security groups with CloudFront service IP ranges to permit only CloudFront to access the origin.

AWS publishes the IP ranges in JSON format for CloudFront and other AWS services. If your origin is an Elastic Load Balancer or an Amazon EC2 instance, you can use VPC security groups to allow only CloudFront IP ranges to access your applications. The IP ranges in the list are separated by service and Region, and you must specify only the IP ranges that correspond to CloudFront.
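As a minimal sketch of that filtering step (relying only on the documented structure of ip-ranges.json), extracting the CloudFront ranges in Python looks like this:

    import json
    import urllib.request

    url = "https://ip-ranges.amazonaws.com/ip-ranges.json"
    with urllib.request.urlopen(url) as response:
        ip_ranges = json.load(response)

    # Keep only the IPv4 prefixes published for the CloudFront service
    cloudfront_ranges = [
        prefix["ip_prefix"]
        for prefix in ip_ranges["prefixes"]
        if prefix["service"] == "CLOUDFRONT"
    ]
    print(f"Found {len(cloudfront_ranges)} CloudFront IP ranges")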

The IP ranges that AWS publishes change frequently, and without an automated solution, you would need to retrieve this document often to know the current IP ranges for CloudFront. Frequent polling is inefficient because there is no notice of when the IP ranges change, and if these IP ranges aren’t updated immediately, your clients might see 504 errors when they access CloudFront. Additionally, because there are numerous IP ranges for each service, updating them manually isn’t efficient either. That means you would need infrastructure to support the task, and you would end up with another host to manage—complete with the typical patching, deployment, and monitoring. As you can see, a small task could quickly become more complicated than the problem you intended to solve.

An Amazon Simple Notification Service (Amazon SNS) message is sent to a topic whenever the AWS IP ranges change, enabling you to build an event-driven, serverless solution that updates the IP ranges for your security groups as needed, using a Lambda function that is triggered in response to the SNS notification.

Here are the steps we are going to take to implement the solution:

  1. Create your resources
    1. Create an IAM policy and execution role for the Lambda function
    2. Create your Lambda function
  2. Test your Lambda function
  3. Configure your Lambda function’s trigger

Create your resources

The first thing you need to do is create a Lambda function execution role and policy. The Lambda function uses the execution role to access or create AWS resources. This Lambda function is triggered by an SNS notification whenever there’s a change in the IP ranges document. Based on the number of IP ranges present for CloudFront and the number of ports (for example, 80 and 443) that you want to allow on the origin, this Lambda function creates the required security groups. These security groups will allow only traffic from CloudFront to your ELB load balancers or EC2 instances.

Create an IAM policy and execution role for the Lambda function

When you create a Lambda function, it’s important to understand and properly define the security context for the Lambda function. Using AWS Identity and Access Management (IAM), you can create the Lambda execution role that determines the AWS service calls that the function is authorized to complete. (Learn more about the Lambda permissions model.)

To create the IAM policy for your role

  1. Log in to the IAM console with the user account that you will use to manage the Lambda function. This account must have administrator permissions.
  2. In the navigation pane, choose Policies.
  3. In the content pane, choose Create policy.
  4. Choose the JSON tab and copy the text from the following JSON policy document. Paste this text into the JSON text box.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "CloudWatchPermissions",
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        },
        {
          "Sid": "EC2Permissions",
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeSecurityGroups",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
            "ec2:CreateSecurityGroup",
            "ec2:DescribeVpcs",
    		"ec2:CreateTags",
            "ec2:ModifyNetworkInterfaceAttribute",
            "ec2:DescribeNetworkInterfaces"
            
          ],
          "Resource": "*"
        }
      ]
    }
    

  5. When you’re finished, choose Review policy.
  6. On the Review page, enter a name for the policy (e.g. LambdaExecRolePolicy-UpdateSecurityGroupsForCloudFront). Review the policy Summary to see the permissions granted by your policy, and then choose Create policy to save your work.

To understand what this policy allows, let’s look closely at both statements in the policy. The first statement allows the Lambda function to create and write to CloudWatch Logs, which is vital for debugging and monitoring our function. The second statement allows the function to get information about existing security groups, get existing VPC information, create security groups, and authorize and revoke ingress permissions. It’s an important best practice that your IAM policies be as granular as possible, to support the principle of least privilege.

Now that you’ve created your policy, you can create the Lambda execution role that will use the policy.

To create the Lambda execution role

  1. In the navigation pane of the IAM console, choose Roles, and then choose Create role.
  2. For Select type of trusted entity, choose AWS service.
  3. Choose the service that you want to allow to assume this role. In this case, choose Lambda.
  4. Choose Next: Permissions.
  5. Search for the policy name that you created earlier and select the check box next to the policy.
  6. Choose Next: Tags.
  7. (Optional) Add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see Tagging IAM Users and Roles.
  8. Choose Next: Review.
  9. For Role name (e.g. LambdaExecRole-UpdateSecurityGroupsForCloudFront), enter a name for your role.
  10. (Optional) For Role description, enter a description for the new role.
  11. Review the role, and then choose Create role.

Create your Lambda function

Now, create your Lambda function and configure the role that you created earlier as the execution role for this function.

To create the Lambda function

  1. Go to the Lambda console in the N. Virginia (us-east-1) Region and choose Create function. On the next page, choose Author from scratch. (I’ll be providing the code for your Lambda function, but for other functions, the Use a blueprint option can be a great way to get started.)
  2. Give your Lambda function a name (e.g. UpdateSecurityGroupsForCloudFront) and description, and select Python 3.8 from the Runtime menu.
  3. For Choose or create an execution role, select Use an existing role, and then select the execution role you created earlier.
  4. After confirming that your settings are correct, choose Create function.
  5. Paste the Lambda function code from here.
  6. Select Save.

Additionally, in the Basic Settings of the Lambda function, increase the timeout to 10 seconds.

To set the timeout value in the Lambda console

  1. In the Lambda console, choose the function you just created.
  2. Under Basic settings, choose Edit.
  3. For Timeout, select 10s.
  4. Choose Save.

By default, the Lambda function has these settings:

  • The Lambda function is configured to create security groups in the default VPC.
  • CloudFront IP ranges are updated as inbound rules on port 80.
  • The created security groups are tagged with the name prefix AUTOUPDATE.
  • Debug logging is turned off.
  • The service for which IP ranges are extracted is set to CloudFront.
  • The SDK client in the Lambda function is set to us-east-1 (N. Virginia).

If you want to customize these settings, set the following environment variables for the Lambda function. For more details, see Using AWS Lambda environment variables.

  • To create security groups in a specific VPC:
    Key: VPC_ID
    Value: vpc-id
  • To create security group rules for a different port or multiple ports (the solution supports a total of two ports; one can be used for HTTP and another for HTTPS):
    Key: PORTS
    Value: portnumber or portnumber,portnumber
  • To customize the prefix name tag of your security groups:
    Key: PREFIX_NAME
    Value: custom-name
  • To enable debug logging to CloudWatch:
    Key: DEBUG
    Value: true
  • To extract IP ranges for a service other than CloudFront:
    Key: SERVICE
    Value: servicename
  • To configure the Region for the SDK client used in the Lambda function (if the CloudFront origin is in a Region other than N. Virginia, the security groups must be created in that Region):
    Key: REGION
    Value: regionname
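As a short sketch of how a Lambda function might read these settings (the defaults shown here are assumptions for illustration, not necessarily those used by the actual solution code):

    import os

    # Read configuration from the Lambda environment variables
    ports = [int(p) for p in os.environ.get("PORTS", "80").split(",")]
    service = os.environ.get("SERVICE", "CLOUDFRONT")
    prefix_name = os.environ.get("PREFIX_NAME", "AUTOUPDATE")
    debug = os.environ.get("DEBUG", "false").lower() == "true"
    region = os.environ.get("REGION", "us-east-1")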

To set environment variables in the Lambda console

  1. In the Lambda console, choose the function you created.
  2. Under Environment variables, choose Edit.
  3. Choose Add environment variable.
  4. Enter a key and value.
  5. Choose Save.

Test your Lambda function

Now that you’ve created your function, it’s time to test it and initialize your security group.

To create your test event for the Lambda function

  1. In the Lambda console, on the Functions page, choose your function. In the drop-down menu next to Actions, choose Configure test events.
  2. Enter an Event Name (e.g. TriggerSNS).
  3. Paste the following sample event, which represents an SNS notification, and then select Create.
    {
        "Records": [
            {
                "EventVersion": "1.0",
                "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
                "EventSource": "aws:sns",
                "Sns": {
                    "SignatureVersion": "1",
                    "Timestamp": "1970-01-01T00:00:00.000Z",
                    "Signature": "EXAMPLE",
                    "SigningCertUrl": "EXAMPLE",
                    "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
                    "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"7fd59f5c7f5cf643036cbd4443ad3e4b\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
                    "Type": "Notification",
                    "UnsubscribeUrl": "EXAMPLE",
                    "TopicArn": "arn:aws:sns:EXAMPLE",
                    "Subject": "TestInvoke"
                }
            }
        ]
    }
    

  4. After you’ve added the test event, select Save and then select Test. Your Lambda function is then invoked, and you should see log output at the bottom of the console in the Execution Result section, similar to the following.
    Updating from https://ip-ranges.amazonaws.com/ip-ranges.json
    MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b: Exception
    Traceback (most recent call last):
      File "/var/task/lambda_function.py", line 29, in lambda_handler
        ip_ranges = json.loads(get_ip_groups_json(message['url'], message['md5']))
      File "/var/task/lambda_function.py", line 50, in get_ip_groups_json
        raise Exception('MD5 Missmatch: got ' + hash + ' expected ' + expected_hash)
    Exception: MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b
    

  5. Edit the sample event again, and this time change the md5 value in the sample event to the first MD5 hash shown in the log output. In this example, you would update the md5 value with the hash from the error message, 2e967e943cf98ae998efeec05d4f351c. The Lambda code runs successfully only when the MD5 hash of the IP ranges document matches the hash received from the event trigger, so after you modify the hash value, the test event matches the hash of the IP ranges document. (A sketch of this hash check follows these steps.)
  6. Select Save and test. This invokes your Lambda function.
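For reference, the hash check that produced the error above works along these lines; this is a sketch consistent with the traceback, not the verbatim solution code:

    import hashlib

    def verify_md5(content: bytes, expected_hash: str) -> None:
        # The security groups are updated only when the downloaded
        # document hashes to the value announced in the SNS message.
        got = hashlib.md5(content).hexdigest()
        if got != expected_hash:
            raise Exception("MD5 Mismatch: got " + got +
                            " expected " + expected_hash)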

After the function is invoked a second time with the updated md5 hash, the Lambda function should run without errors. You should be able to see the new security groups, with the CloudFront IP ranges in their rules, in the EC2 console, as shown in Figure 1.

Figure 1: EC2 console showing the security groups created

In its initial successful run, the function creates the total number of security groups required to cover all of the CloudFront IP ranges for the specified ports. The function creates security groups based on the maximum number of rules that can be added to an individual security group. You can identify the new security groups in the EC2 console by the name AUTOUPDATE_random if you used the default configuration, or by a custom name if you provided a PREFIX_NAME.

You can now attach these security groups to your Elastic Load Balancer or EC2 instances. If your log output differs from what is described here, the output should help you identify the issue.

Configure your Lambda function’s trigger

After you’ve validated that your function is executing properly, it’s time to connect it to the SNS topic for IP changes. To do this, use the AWS Command Line Interface (CLI). Enter the following command, making sure to replace Lambda ARN with the Amazon Resource Name (ARN) of your Lambda function. You can find this ARN at the top right when viewing the configuration of your Lambda function.

aws sns subscribe --topic-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged" --region us-east-1 --protocol lambda --notification-endpoint "Lambda ARN"

You should receive the ARN of your Lambda function’s SNS subscription.

Now add a permission that allows the Lambda function to be invoked by the SNS topic. The following command also adds the Lambda trigger.

aws lambda add-permission --function-name "Lambda ARN" --statement-id lambda-sns-trigger --region us-east-1 --action lambda:InvokeFunction --principal sns.amazonaws.com --source-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"

When AWS changes any of the IP ranges in the document, an SNS notification is sent and your Lambda function is triggered. This Lambda function verifies the modified ranges in the document and efficiently updates the IP ranges on the existing security groups. Additionally, the function dynamically scales and creates additional security groups if the number of IP ranges for CloudFront increases in the future. Any newly created security groups are automatically attached to the network interfaces where the previous security groups are attached, in order to avoid service interruption.
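A rough sketch of that attachment step using the EC2 API (the interface and group IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    eni_id = "eni-0abc1234567890def"
    new_group_id = "sg-0new123456789abcd"

    # Add the newly created security group to the network interface
    # while preserving the groups that are already attached.
    interface = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[eni_id]
    )["NetworkInterfaces"][0]
    current_groups = [g["GroupId"] for g in interface["Groups"]]
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni_id,
        Groups=current_groups + [new_group_id],
    )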

Summary

By following this blog post, you created a Lambda function that creates security groups and updates their rules dynamically whenever AWS publishes new IP ranges for its services. This solution has several advantages:

  • The solution isn’t designed as a periodic poll, so it only runs when it needs to.
  • It’s automatic, so you don’t need to update security groups manually, which lowers the operational cost.
  • It’s simple, because you have no extra infrastructure to maintain; the solution is completely serverless.
  • It’s cost effective, because the Lambda function runs only when triggered by the AmazonIpSpaceChanged SNS topic and runs for only a few seconds, so this solution costs only pennies to operate.

If you have questions about this post, start a new thread on the Amazon CloudFront forum. If you have any other use cases for using Lambda functions to dynamically update security groups, or even other networking configurations such as VPC route tables or ACLs, we’d love to hear about them!


Author

Yeshwanth Kottu

Yeshwanth is a Systems Development Engineer at AWS in Cupertino, CA. With a focus on CloudFront and Lambda@Edge, he enjoys helping customers tackle challenges through cloud-scale architectures. Yeshwanth has an MS in Computer Systems Networking and Telecommunications from Northeastern University. Outside of work, he enjoys travelling, visiting national parks, and playing cricket.

AWS Security Profile: Phillip Miller, Principal Security Advisor

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-phillip-miller-principal-security-advisor/

AWS Security Profile: Phillip Miller
In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


How long have you been at AWS and what do you do in your current role?

I’ve been at AWS since September 2019. I help executives and leaders of builder teams find ways to answer key questions, such as “Is my organization well-protected in the cloud?” and “Are our security investments the best ones to enable scale and optimize outcomes?” Through one-on-one discussions, facilitating workshops, and building automation into compliance programs, I help people envision a secure future that doesn’t limit the outcomes for the business.

What’s your favorite part of your job?

Teaching. Advising includes sharing knowledge and best practices, and finding solutions to customer problems—but I have not performed my role adequately if they have not had an opportunity to learn. It is a tremendous privilege to have leaders invite me to participate in their cloud security journey, and I’m grateful that I am able to help them accomplish key business objectives.

How did you get started in Security?

I can’t really remember a time when I wasn’t working in security, but back in the early 1990s, security was not a distinct function. Through the early 2000s, roles I had in various companies placed different emphasis on infrastructure or solution delivery, but security always seemed to be “my thing” to emphasize, often because of my legal background. Now, it seems it has come full circle; everyone recognizes security as “job zero,” and the companies that get this and fully integrate security into all roles are best placed to manage their risk.

How do you explain what you do to non-technical friends or family?

My wife gets the credit for this: “He does difficult things with complex computer systems for large companies that somehow helps them reduce the chance of a data breach.”

What are you currently working on that you’re excited about?

I’ve been helping several companies to create “security frameworks” that can be used to help meet multiple compliance requirements, but also ensure they are satisfying the promise to their customers around privacy and cybersecurity. These frameworks lean in to the benefits of cloud computing, and start with building alignment between CISO, CIO, and CTO so that the business objectives and the security needs do not find themselves in conflict.

You’re presenting at re:Invent this year—can you give readers a sneak peek of what you’re covering?

Compliance is frustrating for many builders; it can be seen as confusing and full of requirements that don’t make sense for modern applications. Executives are increasingly seeking validation that the cloud is reducing cybersecurity risk. My presentation shares six mechanisms for builder teams to use their skills to create gap-closing solutions.

What are you hoping that your audience will do differently after your session?

Take at least one of the six mechanisms that can be used to enhance the relationship between builder teams and compliance groups and try it out.

From your perspective, what’s the biggest thing happening in security right now?

Awareness from a consumer perspective around how companies use data, and the importance for companies to find ways to responsibly use and secure that information.

What is your favorite Leadership Principle at Amazon and why?

Frugality. I enjoy constraints, and how they help sharpen the mind and force us to think critically, less about what we need today and more about what the future will be. 2020 has brought this home for a lot of families, who are having to accomplish more with less, with the home serving as an office for two people, a schoolhouse, and a gym. When we model frugality at work, it might just help us find ways to make society a better place, too.

What’s the best career advice you’ve ever received?

Always share the bad news as quickly as possible, with clarity, data, and your plan of action. Ensure that the information is flowing properly to everyone with a legitimate need to know, even if it may be uncomfortable to share it.

If you could go back, what would you tell yourself at the beginning of your career?

Always trust your instincts. I began my career building software for microbiology and DNA fingerprinting, but then I chose to read jurisprudence rather than pursue a degree relating to transputers and the space industry. I think my instincts were right, but who knows—the alternative reality would probably have been pretty amazing!

What are you most proud of in your career?

I have had so many opportunities to mentor people at all stages in their information security careers. Watching others develop their skills, and helping them unlock potential to reduce risks to their organizations makes my day.

I hear you have an organic farm that you work on in your spare time. How did you get into farming?

Yes, we began farming commercially about a decade ago, mostly out of a desire to explore ways that organic meats could be raised ethically and without excessive markup. In 2021, we’ll be examining ways to turn our success into a teaching farm that also includes opportunities for people to explore woodlands, natural habitats, and cultivated land in one location. It is also a deliberately low-tech respite from the world of cybersecurity!


Author

Phillip Miller

As a Principal Security Advisor in the Security Advisory and Assurance team, Phillip helps companies mature their approach to security and compliance in the cloud. With nearly three decades of experience across financial services, healthcare, manufacturing, and retail, Phillip understands the challenges builders face securing sensitive workloads. Phillip most recently served as the Chief Information Security Officer at Brooks Brothers.

How to deploy the AWS Solution for Security Hub Automated Response and Remediation

Post Syndicated from Ramesh Venkataraman original https://aws.amazon.com/blogs/security/how-to-deploy-the-aws-solution-for-security-hub-automated-response-and-remediation/

In this blog post I show you how to deploy the Amazon Web Services (AWS) Solution for Security Hub Automated Response and Remediation. The first installment of this series was about how to create playbooks using Amazon CloudWatch Events, AWS Lambda functions, and AWS Security Hub custom actions that you can run manually based on triggers from Security Hub in a specific account. That solution requires an analyst to directly trigger an action using Security Hub custom actions and doesn’t work for customers who want to set up fully automated remediation based on findings across one or more accounts from their Security Hub master account.

The solution described in this post automates the cross-account response and remediation lifecycle from executing the remediation action to resolving the findings in Security Hub and notifying users of the remediation via Amazon Simple Notification Service (Amazon SNS). You can also deploy these automated playbooks as custom actions in Security Hub, which allows analysts to run them on-demand against specific findings. You can deploy these remediations as custom actions or as fully automated remediations.

Currently, the solution includes 10 playbooks aligned to the controls in the Center for Internet Security (CIS) AWS Foundations Benchmark standard in Security Hub, but playbooks for other standards such as AWS Foundational Security Best Practices (FSBP) will be added in the future.

Solution overview

Figure 1 shows the flow of events in the solution described in the following text.

Figure 1: Flow of events

Detect

Security Hub gives you a comprehensive view of your security alerts and security posture across your AWS accounts and automatically detects deviations from defined security standards and best practices.

Security Hub also collects findings from various AWS services and supported third-party partner products to consolidate security detection data across your accounts.

Ingest

All of the findings from Security Hub are automatically sent to CloudWatch Events and Amazon EventBridge, and you can set up CloudWatch Events and EventBridge rules to be invoked on specific findings. You can also send findings to CloudWatch Events and EventBridge on demand via Security Hub custom actions.

Remediate

The CloudWatch Events and EventBridge rules can have AWS Lambda functions, AWS Systems Manager automation documents, or AWS Step Functions workflows as the targets of the rules. This solution uses automation documents and Lambda functions as response and remediation playbooks. Using cross-account AWS Identity and Access Management (IAM) roles, the playbook performs the tasks to remediate the findings using the AWS API when a rule is invoked.
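As an illustrative sketch of that wiring (the rule name, event pattern, and function ARN are hypothetical, not the exact resources this solution deploys), an EventBridge rule can target a remediation Lambda function like this:

    import json
    import boto3

    events = boto3.client("events")

    # Invoke a remediation function whenever Security Hub imports a
    # finding whose compliance status is FAILED.
    events.put_rule(
        Name="example-securityhub-remediation",
        EventPattern=json.dumps({
            "source": ["aws.securityhub"],
            "detail-type": ["Security Hub Findings - Imported"],
            "detail": {"findings": {"Compliance": {"Status": ["FAILED"]}}},
        }),
    )
    events.put_targets(
        Rule="example-securityhub-remediation",
        Targets=[{
            "Id": "remediation-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:example-remediation",
        }],
    )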

Log

The playbook logs the results to the Amazon CloudWatch log group for the solution, sends a notification to an Amazon Simple Notification Service (Amazon SNS) topic, and updates the Security Hub finding. An audit trail of actions taken is maintained in the finding notes. The finding is updated as RESOLVED after the remediation is run. The security finding notes are updated to reflect the remediation performed.
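A sketch of that final update using the Security Hub API (the finding identifiers are placeholders):

    import boto3

    securityhub = boto3.client("securityhub")

    # Record an audit note and mark the finding as resolved after the
    # remediation has run.
    securityhub.batch_update_findings(
        FindingIdentifiers=[{
            "Id": "arn:aws:securityhub:us-east-1:111122223333:subscription/example/finding-id",
            "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
        }],
        Note={
            "Text": "Remediated by automated playbook",
            "UpdatedBy": "remediation-playbook",
        },
        Workflow={"Status": "RESOLVED"},
    )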

Here are the steps to deploy the solution from this GitHub project.

  • In the Security Hub master account, you deploy the AWS CloudFormation template, which creates an AWS Service Catalog product along with some other resources. You can find the full set of deployed resources in the Resources section of the deployed AWS CloudFormation stack. The solution uses AWS Service Catalog to make the remediations available as a product that can be deployed after you grant users the required permissions to launch the product.
  • Add an IAM role that has administrator access to the AWS Service Catalog portfolio.
  • Deploy the CIS playbook from the AWS Service Catalog product list using the IAM role you added in the previous step.
  • Deploy the AWS Security Hub Automated Response and Remediation template in the master account in addition to the member accounts. This template establishes AssumeRole permissions to allow the playbook Lambda functions to perform remediations. Use AWS CloudFormation StackSets in the master account for a centralized deployment approach across the master account and multiple member accounts.

Deployment steps for automated response and remediation

This section reviews the steps to implement the solution, including screenshots of the solution launched from an AWS account.

Launch AWS CloudFormation stack on the master account

As part of this AWS CloudFormation stack deployment, you create custom actions to configure Security Hub to send findings to CloudWatch Events. Lambda functions are used to provide remediation in response to actions sent to CloudWatch Events.

Note: In this solution, you create custom actions for the CIS standards. There will be more custom actions added for other security standards in the future.

To launch the AWS CloudFormation stack

  1. Deploy the AWS CloudFormation template in the Security Hub master account. In the AWS Management Console, select CloudFormation, choose Create stack, and then enter the Amazon S3 URL of the template.
  2. Select Next to move to the Specify stack details tab, and then enter a Stack name as shown in Figure 2. In this example, I named the stack SO0111-SHARR, but you can use any name you want.
     
    Figure 2: Creating a CloudFormation stack

  3. Launching the stack creates 21 new resources by using AWS CloudFormation, as shown in Figure 3.
     
    Figure 3: Resources launched with AWS CloudFormation

  4. An Amazon SNS topic is automatically created by the AWS CloudFormation stack.
  5. When you create a subscription, you’re prompted to enter an endpoint for receiving email notifications from Amazon SNS, as shown in Figure 4. You must then confirm the subscription from the email address you entered before notifications are delivered.
     
    Figure 4: Subscribing to Amazon SNS topic

Enable Security Hub

You should already have enabled Security Hub and AWS Config services on your master account and the associated member accounts. If you haven’t, you can refer to the documentation for setting up Security Hub on your master and member accounts. Figure 5 shows an AWS account that doesn’t have Security Hub enabled.
 

Figure 5: Enabling Security Hub for first time

AWS Service Catalog product deployment

In this section, you use AWS Service Catalog to deploy the playbook as a Service Catalog product.

To use the AWS Service Catalog for product deployment

  1. In the same master account, add roles that have administrator access and can deploy AWS Service Catalog products. To do this, from Services in the AWS Management Console, choose AWS Service Catalog. In AWS Service Catalog, select Administration, and then navigate to Portfolio details and select Groups, roles, and users as shown in Figure 6.
     
    Figure 6: AWS Service Catalog product

  2. After adding the role, you can see the products available to that role. Switch roles in the console to assume the role that you granted access to the product from AWS Service Catalog. Then select the three dots near the product name, and select Launch product, as shown in Figure 7.
     
    Figure 7: Launch the product

  3. While launching the product, you can use the parameters to either enable or disable fully automated remediation. Even if you don’t enable fully automated remediation, you can still invoke a remediation action in the Security Hub console by using a custom action. By default, automated remediation is disabled, as highlighted in Figure 8.
     
    Figure 8: Enable or disable automated remediation

  4. After launching the product, it can take from 3 to 5 minutes to deploy. When the product is deployed, it creates a new CloudFormation stack with a status of CREATE_COMPLETE as part of the provisioned product in the AWS CloudFormation console.

AssumeRole Lambda functions

Deploy the template that establishes AssumeRole permissions to allow the playbook Lambda functions to perform remediations. You must deploy this template in the master account in addition to any member accounts. Choose CloudFormation and create a new stack. In Specify stack details, go to Parameters and specify the Master account number as shown in Figure 9.
 

Figure 9: Deploy AssumeRole Lambda function

Test the automated remediation

Now that you’ve completed the steps to deploy the solution, you can test it to be sure that it works as expected.

To test the automated remediation

  1. Verify that there are 10 actions listed on the Custom actions tab in the Security Hub master account. To do so, open the Security Hub console in the master account, select Settings, and then choose Custom actions. You should see 10 actions, as shown in Figure 10.
     
    Figure 10: Custom actions deployed

  2. Make sure you have member accounts available for testing the solution. If not, you can add member accounts to the master account as described in Adding and inviting member accounts.
  3. For testing purposes, you can use CIS control 1.5, which requires that the IAM password policy include at least one uppercase letter. Check the existing settings by navigating to IAM, and then to Account Settings. Under Password policy, you should see that no password policy is set, as shown in Figure 11.
     
    Figure 11: Password policy not set

  4. To check the security settings, go to the Security Hub console and select Security standards. Choose CIS AWS Foundations Benchmark v1.2.0, and then select CIS 1.5 from the list to see the findings. The Status shows as Failed, which means that the password policy requiring at least one uppercase letter hasn’t been applied to either the master or the member account, as shown in Figure 12.
     
    Figure 12: CIS 1.5 finding

  5. From the Actions dropdown at the top right of the Findings section from the previous step, select CIS 1.5 – 1.11. You should see a notification with the heading Successfully sent findings to Amazon CloudWatch Events, as shown in Figure 13.
     
    Figure 13: Sending findings to CloudWatch Events

  6. Return to Findings by selecting Security standards and then choosing CIS AWS Foundations Benchmark v1.2.0. Select CIS 1.5 to review Findings and verify that the Workflow status of CIS 1.5 is RESOLVED, as shown in Figure 14.
     
    Figure 14: Resolved findings

  7. After the remediation runs, you can verify that the password policy is set on the master and the member accounts. Navigate to IAM, and then to Account Settings. Under Password policy, you should see that the account now uses a password policy, as shown in Figure 15. (You can also verify this from the command line, as shown after this procedure.)
     
    Figure 15: Password policy set

  8. To check the CloudWatch logs for the Lambda function, open the Lambda console, choose the Lambda function, and then select View logs in CloudWatch. You can see the details of the function run, including the password policy being updated on both the master account and the member account, as shown in Figure 16.
     
    Figure 16: Lambda function log
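As a quick command-line check for step 7, you can query the effective password policy in each account; this standard IAM call returns true once the remediation has applied the policy:

aws iam get-account-password-policy \
    --query 'PasswordPolicy.RequireUppercaseCharacters'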

Conclusion

In this post, you deployed the AWS Solution for Security Hub Automated Response and Remediation using Lambda and CloudWatch Events rules to remediate non-compliant CIS-related controls. With this solution, you can ensure that users in member accounts stay compliant with the CIS AWS Foundations Benchmark by automatically invoking guardrails whenever services move out of compliance. New or updated playbooks will be added to the existing AWS Service Catalog portfolio as they’re developed. You can choose when to take advantage of these new or updated playbooks.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ramesh Venkataraman

Ramesh is a Solutions Architect who enjoys working with customers to solve their technical challenges using AWS services. Outside of work, Ramesh enjoys following Stack Overflow questions and answering them in any way he can.

Set up centralized monitoring for DDoS events and auto-remediate noncompliant resources

Post Syndicated from Fola Bolodeoku original https://aws.amazon.com/blogs/security/set-up-centralized-monitoring-for-ddos-events-and-auto-remediate-noncompliant-resources/

When you build applications on Amazon Web Services (AWS), it’s a common security practice to isolate production resources from non-production resources by logically grouping them into functional units or organizational units. There are many benefits to this approach, such as making it easier to implement the principle of least privilege, or reducing the scope of adversely impactful activities that may occur in non-production environments. After building these applications, setting up monitoring for resource compliance and security risks, such as distributed denial of service (DDoS) attacks across your AWS accounts, is just as important. The recommended best practice to perform this type of monitoring involves using AWS Shield Advanced with AWS Firewall Manager, and integrating these with AWS Security Hub.

In this blog post, I show you how to set up centralized monitoring for Shield Advanced–protected resources across multiple AWS accounts by using Firewall Manager and Security Hub. This enables you to easily manage resources that are out of compliance from your security policy and to view DDoS events that are detected across multiple accounts in a single view.

Shield Advanced is a managed application security service that provides DDoS protection for your workloads against infrastructure layer (Layer 3–4) attacks, as well as application layer (Layer 7) attacks, by using AWS WAF. Firewall Manager is a security management service that enables you to centrally configure and manage firewall rules across your accounts and applications in an organization in AWS. Security Hub consumes, analyzes, and aggregates the security findings produced by your applications running on AWS. Security Hub integrates with Firewall Manager without requiring any action on your part.

I’m going to cover two different scenarios that show you how to use Firewall Manager for:

  1. Centralized visibility into Shield Advanced DDoS events
  2. Automatic remediation of noncompliant resources

Scenario 1: Centralized visibility of DDoS detected events

This scenario represents a fully native and automated integration, where Shield Advanced DDoSDetected events (which indicate whether a DDoS event is underway for a particular Amazon Resource Name (ARN)) are made visible as security findings in Security Hub, through Firewall Manager.

Solution overview

Figure 1 shows the solution architecture for scenario 1.
 

Figure 1: Scenario 1 – Shield Advanced DDoS detected events visible in Security Hub

The diagram illustrates a customer using AWS Organizations to isolate their production resources into the Production Organizational Unit (OU), with further separation into multiple accounts for each of the mission-critical applications. The resources in Account 1 are protected by Shield Advanced. The Security OU was created to centralize security functions across all AWS accounts and OUs, so that Security Operations Center (SOC) engineers and other security staff can monitor production resources without needing direct access to the production environment. The Security OU is home to the designated administrator account for Firewall Manager and the Security Hub dashboard.

Scenario 1 implementation

You will set up Security Hub in an account that has the prerequisite services configured, as explained below. Before you proceed, see the architecture requirements in the next section. Once Security Hub is enabled for your organization, you can simulate a DDoS event in strict accordance with the AWS DDoS Simulation Testing Policy, or use one of the AWS DDoS Test Partners.

Architecture requirements

In order to implement these steps, you must have the following:

Once you have all these requirements completed, you can move on to enable Security Hub.

Enable Security Hub

Note: If you plan to protect resources with Shield Advanced across multiple accounts and in multiple Regions, we recommend that you use the AWS Security Hub Multiaccount Scripts from AWS Labs. Security Hub needs to be enabled in all the Regions and all the accounts where you have Shield protected resources. For global resources, like Amazon CloudFront, you should enable Security Hub in the us-east-1 Region.
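If you’re scripting the setup rather than using the multiaccount scripts, enabling Security Hub in a given account and Region is a single call; a minimal sketch:

aws securityhub enable-security-hub \
    --enable-default-standards \
    --region us-east-1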

To enable Security Hub

  1. In the AWS Security Hub console, switch to the account you want to use as the designated Security Hub administrator account.
  2. Select the security standard or standards that are applicable to your application’s use-case, and choose Enable Security Hub.
     
    Figure 2: Enabling Security Hub

  3. From the designated Security Hub administrator account, go to the Settings – Account tab, and add accounts by sending invites to all the accounts you want added as member accounts. An invited account becomes a member account once the owner of the invited account accepts the invite and Security Hub is enabled in that account. You can also upload a comma-separated list of the accounts you want to send invites to.
     
    Figure 3: Designating a Security Hub administrator account by adding member accounts

View detected events in Shield and Security Hub

When Shield Advanced detects signs of DDoS traffic that is destined for a protected resource, the Events tab in the Shield console displays information about the event detected and provides a status on the mitigation that has been performed. Following is an example of how this looks in the Shield console.
 

Figure 4: Scenario 1 – The Events tab on the Shield console showing a Shield event in progress

If you’re managing multiple accounts, switching between those accounts to view the Shield console and keep track of DDoS incidents can be cumbersome. It’s easier to get visibility across multiple accounts and Regions by using the Amazon CloudWatch metrics that Shield Advanced reports for Shield events, either through a custom CloudWatch dashboard or by consuming these metrics in a third-party tool. For example, the DDoSDetected CloudWatch metric has a binary value, where a value of 1 indicates that an event that might be a DDoS has been detected. This metric is automatically updated by Shield when the DDoS event starts and ends. With the Security Hub integration, you only need permissions to access the Security Hub dashboard in order to monitor all events on production resources. Following is an example of what you see in the Security Hub console.
 

Figure 5: Scenario 1 – Shield Advanced DDoS alarm showing in Security Hub
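You could also alarm on the DDoSDetected metric per protected resource. The following CloudWatch alarm is a hedged sketch; the protected resource and SNS topic ARNs are placeholders:

aws cloudwatch put-metric-alarm \
    --alarm-name ddos-detected-prod-alb \
    --namespace AWS/DDoSProtection \
    --metric-name DDoSDetected \
    --dimensions Name=ResourceArn,Value=<protected-resource-arn> \
    --statistic Maximum --period 60 --evaluation-periods 1 \
    --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:ddos-alerts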

Configure Shield event notification in Firewall Manager

In order to increase your visibility into possible Shield events across your accounts, you must configure Firewall Manager to monitor your protected resources by using Amazon Simple Notification Service (Amazon SNS). With this configuration, Firewall Manager sends notifications of possible attacks to an Amazon SNS topic that you configure in each Region where you have protected resources.
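You can also set the notification channel programmatically from the Firewall Manager administrator account; in this sketch, the topic and role ARNs are placeholders, and the role must allow Firewall Manager to publish to the topic:

aws fms put-notification-channel \
    --sns-topic-arn arn:aws:sns:us-east-1:111122223333:fms-notifications \
    --sns-role-name arn:aws:iam::111122223333:role/fms-sns-publish-role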

To configure SNS topics in Firewall Manager

  1. In the Firewall Manager console, go to the Settings page.
  2. Under Amazon SNS Topic Configuration, select a Region.
  3. Choose Configure SNS Topic.
     
    Figure 6: The Firewall Manager Settings page for configuring SNS topics

  4. Select an existing topic or create a new topic, and then choose Configure SNS Topic.
     
    Figure 7: Configure an SNS topic in a Region

Scenario 2: Automatic remediation of noncompliant resources

The second scenario is an example in which a new production resource is created, and Security Hub has full visibility of the compliance state of the resource.

Solution overview

Figure 8 shows the solution architecture for scenario 2.
 

Figure 8: Scenario 2 – Visibility of Shield Advanced noncompliant resources in Security Hub

Firewall Manager identifies that the resource is out of compliance with the defined policy for Shield Advanced and posts a finding to Security Hub, notifying your operations team that a manual action is required to bring the resource into compliance. If configured, Firewall Manager can automatically bring the resource into compliance by creating it as a Shield Advanced–protected resource, and then update Security Hub when the resource is in a compliant state.

Scenario 2 implementation

The following steps describe how to use Firewall Manager to enforce Shield Advanced protection compliance of an application that is deployed to a member account within AWS Organizations. This implementation assumes that you set up Security Hub as described for scenario 1.

Create a Firewall Manager security policy for Shield Advanced protected resources

In this step, you create a Shield Advanced security policy that will be enforced by Firewall Manager. For the purposes of this walkthrough, you’ll choose to automatically remediate noncompliant resources and apply the policy to Application Load Balancer (ALB) resources.

To create the Shield Advanced policy

  1. Open the Firewall Manager console in the designated Firewall Manager administrator account.
  2. In the left navigation pane, choose Security policies, and then choose Create a security policy.
  3. Select AWS Shield Advanced as the policy type, and select the Region where your protected resources are. Choose Next.

    Note: You will need to create a security policy for each Region where you have regional resources, such as Elastic Load Balancers and Elastic IP addresses, and a security policy for global resources such as CloudFront distributions.

    Figure 9: Select the policy type and Region

  4. On the Describe policy page, for Policy name, enter a name for your policy.
  5. For Policy action, you have the option to configure automatic remediation of noncompliant resources or to only send alerts when resources are noncompliant. You can change this setting after the policy has been created. For the purposes of this blog post, I’m selecting Auto remediate any noncompliant resources. Select your option, and then choose Next.

    Important: It’s a best practice to first identify and review noncompliant resources before you enable automatic remediation.

  6. On the Define policy scope page, define the scope of the policy by choosing which AWS accounts, resource type, or resource tags the policy should be applied to. For the purposes of this blog post, I’m selecting to manage Application Load Balancer (ALB) resources across all accounts in my organization, with no preference for resource tags. When you’re finished defining the policy scope, choose Next.
     
    Figure 10: Define the policy scope

  7. Review and create the policy. Once you’ve reviewed and created the policy in the Firewall Manager designated administrator account, the policy will be pushed to all the Firewall Manager member accounts for enforcement. The new policy could take up to 5 minutes to appear in the console. Figure 11 shows a successful security policy propagation across accounts.
     
    Figure 11: View security policies in an account

Test the Firewall Manager and Security Hub integration

You’ve now defined a policy that covers only ALB resources, so the best way to test this configuration is to create an ALB in one of the Firewall Manager member accounts. The policy causes resources within its scope to be added as Shield Advanced protected resources.
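If you want to create the test ALB from the command line, a minimal sketch follows; the subnet IDs are placeholders and must be in two different Availability Zones:

aws elbv2 create-load-balancer \
    --name fms-test-alb \
    --type application \
    --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210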

To test the policy

  1. Switch to the Security Hub administrator account and open the Security Hub console in the same Region where you created the ALB. On the Findings page, set the Title filter to Resource lacks Shield Advanced protection and set the Product name filter to Firewall Manager.
     
    Figure 12: Security Hub findings filter

    You should see a new security finding flagging the ALB as a noncompliant resource, according to the Shield Advanced policy defined in Firewall Manager. This confirms that Security Hub and Firewall Manager have been enabled correctly.
     

    Figure 13: Security Hub with a noncompliant resource finding

  2. With the automatic remediation feature enabled, you should see the “Updated at” time reflect exactly when the automatic remediation actions were completed. The completion of the automatic remediation actions can take up to 5 minutes to be reflected in Security Hub.
     
    Figure 14: Security Hub with an auto-remediated compliance finding

  3. Go back to the account where you created the ALB, open the Shield console, and navigate to the Protected resources page, where you should see the ALB listed as a protected resource.
     
    Figure 15: Shield console in the member account shows that the new ALB is a protected resource

    Confirming that the ALB has been added automatically as a Shield Advanced–protected resource means that you have successfully configured the Firewall Manager and Security Hub integration.

(Optional): Send a custom action to a third-party provider

You can send all regional Security Hub findings to a ticketing system, Slack, AWS Chatbot, a Security Information and Event Management (SIEM) tool, a Security Orchestration, Automation, and Response (SOAR) tool, incident management tools, or custom remediation playbooks by using Security Hub custom actions.
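Creating such a custom action is a single Security Hub call in the administrator account; the name, description, and ID below are illustrative only. Findings sent through the action can then be matched by a CloudWatch Events or EventBridge rule on the action’s ARN.

aws securityhub create-action-target \
    --name "Send to SIEM" \
    --description "Forward selected findings to the SIEM pipeline" \
    --id SendToSIEM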

Conclusion

In this blog post, I showed you how to set up a Firewall Manager security policy for Shield Advanced so that you can monitor your applications for DDoS events, and their compliance with DDoS protection policies in your multi-account environment, from the Security Hub findings console. In line with best practices for account governance, organizations should have a centralized security account that performs monitoring for multiple accounts. Security Hub and Firewall Manager provide a centralized solution to help you achieve your compliance and monitoring goals for DDoS protection.

If you’re interested in exploring how Shield Advanced and AWS WAF help to improve the security posture of your application, have a look at the following resources:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Fola Bolodeoku

Fola is a Security Engineer on the AWS Threat Research Team, where he focuses on helping customers improve their application security posture against DDoS and other application threats. When he is not working, he enjoys spending time exploring the natural beauty of the Western Cape.

AWS and the New Zealand notifiable privacy breach scheme

Post Syndicated from Adam Star original https://aws.amazon.com/blogs/security/aws-and-the-new-zealand-notifiable-privacy-breach-scheme/

The updated New Zealand Privacy Act 2020 (Privacy Act) will come into force on December 1, 2020. Importantly, it establishes a new notifiable privacy breach scheme (NZ scheme). The NZ scheme gives affected individuals the opportunity to take steps to protect their personal information following a privacy breach that has caused, or is likely to cause, serious harm. It also reinforces entities’ accountability for the personal information they hold.

We’re happy to announce that Amazon Web Services (AWS) now offers two types of New Zealand Notifiable Data Breach (NZNDB) addenda to customers who are subject to the Privacy Act and are using AWS to store and process personal information covered by the NZ scheme. The NZNDB addenda address customers’ need for notification if a security event affects their data.

We’ve made both types of NZNDB addenda available online as click-through agreements in AWS Artifact, which is our customer-facing audit and compliance portal that can be accessed from the AWS Management Console. In AWS Artifact, you can review and activate the relevant NZNDB addendum for those AWS accounts you use to store and process personal information covered by the NZ scheme.

The first type, the Account NZNDB Addendum, applies only to the specific individual account that accepts the Account NZNDB Addendum. The Account NZNDB Addendum must be separately accepted for each AWS account that you need to cover.

The second type, the AWS Organizations NZNDB Addendum, once accepted by a management account in AWS Organizations, applies to the management account and all member accounts in that organization. If you don’t need or want to take advantage of the AWS Organizations NZNDB Addendum, you can still accept the Account NZNDB Addendum for individual accounts.

As with all AWS Artifact features, there is no additional cost to use AWS Artifact to review, accept, and manage either the individual Account NZNDB Addendum or AWS Organizations NZNDB Addendum. To learn more about AWS Artifact, including how to view, download, and accept the NZNDB addenda, visit the AWS Artifact FAQ page.

We welcome the arrival of the NZ scheme, and hope it helps New Zealand entities to improve their security capabilities.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Adam Star

Adam joined Amazon in 2012 and is a Program Manager on the Security Obligations and Contracts team. He enjoys designing practical solutions to help customers meet a range of global compliance requirements including GDPR, HIPAA, and the European Banking Authority’s Guidelines on Outsourcing Arrangements. Adam lives in Seattle with his wife and daughter. Originally from New York, he’s constantly searching for “real” bagels and pizza. He’s an active member of the Washington State Bar Association and American Homebrewers Association, finding the latter much more successful when attempting to make friends in social situations.

AWS Network Firewall – New Managed Firewall Service in VPC

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-network-firewall-new-managed-firewall-service-in-vpc/

Our customers want a highly available, scalable firewall service to protect their virtual networks in the cloud. Security is the number one priority of AWS, and we have provided various firewall capabilities on AWS that address specific security needs, like Security Groups to protect Amazon Elastic Compute Cloud (EC2) instances, Network ACLs to protect Amazon Virtual Private Cloud (VPC) subnets, AWS Web Application Firewall (WAF) to protect web applications running on Amazon CloudFront, Application Load Balancer (ALB), or Amazon API Gateway, and AWS Shield to protect against Distributed Denial of Service (DDoS) attacks.

We heard that customers want an easier way to scale network security across all the resources in their workloads, regardless of which AWS services they use. They also want customized protections to secure their unique workloads, or to comply with government mandates or commercial regulations. These customers need the ability to do things like URL filtering on outbound flows, pattern matching on packet data beyond IP/port/protocol, and alerting on specific vulnerabilities for protocols beyond HTTP/S.

Today, I am happy to announce AWS Network Firewall, a highly available, managed network firewall service for your virtual private cloud (VPC). It enables you to easily deploy and manage stateful inspection, intrusion prevention and detection, and web filtering to protect your virtual networks on AWS. Network Firewall automatically scales with your traffic, ensuring high availability with no additional customer investment in security infrastructure.

With AWS Network Firewall, you can implement customized rules to prevent your VPCs from accessing unauthorized domains, to block thousands of known-bad IP addresses, or to identify malicious activity using signature-based detection. AWS Network Firewall makes firewall activity visible in real time via CloudWatch metrics and offers increased visibility of network traffic by sending logs to Amazon S3, CloudWatch Logs, and Amazon Kinesis Data Firehose. Network Firewall is integrated with AWS Firewall Manager, giving customers who use AWS Organizations a single place to enable and monitor firewall activity across all their VPCs and AWS accounts. Network Firewall is interoperable with your existing security ecosystem, including AWS partners such as CrowdStrike, Palo Alto Networks, and Splunk. You can also import existing rules from community-maintained Suricata rulesets.

Concepts of Network Firewall
AWS Network Firewall runs stateless and stateful traffic inspection rules engines. The engines use rules and other settings that you configure inside a firewall policy.

You use a firewall on a per-Availability Zone basis in your VPC. For each Availability Zone, you choose a subnet to host the firewall endpoint that filters your traffic. The firewall endpoint in an Availability Zone can protect all of the subnets inside the zone except for the one where it’s located.

You can manage AWS Network Firewall with the following central components.

  • Firewall – A firewall connects the VPC that you want to protect to the protection behavior that’s defined in a firewall policy. For each Availability Zone where you want protection, you provide Network Firewall with a public subnet that’s dedicated to the firewall endpoint. To use the firewall, you update the VPC route tables to send incoming and outgoing traffic through the firewall endpoints.
  • Firewall policy – A firewall policy defines the behavior of the firewall in a collection of stateless and stateful rule groups and other settings. You can associate each firewall with only one firewall policy, but you can use a firewall policy for more than one firewall.
  • Rule group – A rule group is a collection of stateless or stateful rules that define how to inspect and handle network traffic. Rule configuration includes 5-tuple and domain name filtering. You can also provide stateful rules by using the Suricata open source rule specification.

AWS Network Firewall – Getting Started
You can use the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs to create and manage firewalls in AWS Network Firewall. In the navigation pane of the VPC console, expand AWS Network Firewall, and then choose Create firewall in the Firewalls menu.

To create a new firewall, enter the name that you want to use to identify this firewall and select your VPC from the dropdown. For each Availability Zone (AZ) where you want to use AWS Network Firewall, create a public subnet for the firewall endpoint. This subnet must have at least one available IP address and a non-zero capacity. Keep these firewall subnets reserved for use by Network Firewall.

For Associated firewall policy, select Create and associate an empty firewall policy and choose Create firewall.

Your new firewall is listed on the Firewalls page. The firewall has an empty firewall policy. In the next step, you’ll specify the firewall behavior in the policy. Select your newly created firewall policy in the Firewall policies menu.

You can create or add new stateless or stateful rule groups, which are zero or more collections of firewall rules with priority settings that define their processing order within the policy. The stateless default action defines how Network Firewall handles a packet that doesn’t match any of the stateless rule groups.

For stateless default action, the firewall policy allows you to specify different default settings for full packets and for packet fragments. The action options are the same as for the stateless rules that you use in the firewall policy’s stateless rule groups.

You are required to specify one of the following options:

  • Allow – Discontinue all inspection of the packet and permit it to go to its intended destination.
  • Drop – Discontinue all inspection of the packet and block it from going to its intended destination.
  • Forward to stateful rule groups – Discontinue stateless inspection of the packet and forward it to the stateful rule engine for inspection.

Additionally, you can optionally specify a named custom action to apply. For this action, Network Firewall sends a CloudWatch metric with a dimension named CustomAction whose value you specify. After you define a named custom action, you can use it by name in the same context where you defined it. You can reuse a custom action setting among the rules in a rule group, and you can reuse a custom action setting between the two default stateless custom action settings for a firewall policy.

After you’ve defined your firewall policy, you can insert the firewall into your VPC traffic flow by updating the VPC route tables to include the firewall.
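Inserting the firewall into the traffic flow means pointing routes at the firewall endpoint. As a hedged example, the following route sends all traffic from a protected subnet’s route table through a firewall endpoint; both IDs are placeholders:

aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --vpc-endpoint-id vpce-0123456789abcdef0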

How to set up Rule Groups
You can create new stateless or stateful rule groups in the Network Firewall rule groups menu by choosing Create rule group. If you select Stateful rule group, you can select one of three options:

  • 5-tuple format – specify source IP, source port, destination IP, destination port, and protocol, and the action to take for matching traffic.
  • Domain list – specify a list of domain names and the action to take for traffic that tries to access one of the domains.
  • Suricata compatible IPS rules – provide advanced firewall rules by using the Suricata rule syntax.

Network Firewall supports the standard stateless “5-tuple” rule specification for network traffic inspection, with a priority number that indicates the processing order of the stateless rule within the rule group.

Similarly, a stateful 5-tuple rule has match settings that specify what the Network Firewall stateful rules engine looks for in a packet; a packet must satisfy all match settings to be a match.
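The following JSON is a minimal sketch of a stateful 5-tuple rule definition, modeled on the Network Firewall API; the addresses, ports, and sid value are placeholders:

{
  "RulesSource": {
    "StatefulRules": [
      {
        "Action": "DROP",
        "Header": {
          "Protocol": "TCP",
          "Source": "ANY",
          "SourcePort": "ANY",
          "Direction": "FORWARD",
          "Destination": "10.0.0.0/16",
          "DestinationPort": "23"
        },
        "RuleOptions": [
          { "Keyword": "sid", "Settings": ["1000001"] }
        ]
      }
    ]
  }
}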

A rule group with domain names has the following match settings: Domain name, a list of strings specifying the domain names that you want to match, and Traffic direction, the direction of traffic flow to inspect. The following JSON shows an example rule definition for a domain name rule group.

{
  "RulesSource": {
    "RulesSourceList": {
      "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
      "Targets": [
        "test.example.com",
        "test2.example.com"
      ],
      "GeneratedRulesType": "DENYLIST"
    }
  }
}

A stateful rule group with Suricata compatible IPS rules has all settings defined within the Suricata compatible specification. For example, the following rule detects SSH protocol anomalies. For information about Suricata, see the Suricata website.

alert tcp any any -> any 22 (msg:"ALERT TCP port 22 but not SSH"; app-layer-protocol:!ssh; sid:2271009; rev:1;)

You can monitor Network Firewall using CloudWatch, which collects raw data and processes it into readable, near real-time metrics, and AWS CloudTrail, a service that provides a record of API calls to AWS Network Firewall by a user, role, or an AWS service. CloudTrail captures all API calls for Network Firewall as events. To learn more about logging and monitoring, see the documentation.

Network Firewall Partners
At this launch, Network Firewall integrates with a collection of AWS partners. They provided us with lots of helpful feedback. Here are some of the blog posts that they wrote in order to share their experiences (I am updating this article with links as they are published).

Available Now
AWS Network Firewall is now available in US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions. Take a look at the product page, price, and the documentation to learn more. Give this a try, and please send us feedback either through your usual AWS Support contacts or the AWS forum for Amazon VPC.

Learn all the details about AWS Network Firewall and get started with the new feature today.

Channy;

Centrally manage AWS WAF (API v2) and AWS Managed Rules at scale with Firewall Manager

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/centrally-manage-aws-waf-api-v2-and-aws-managed-rules-at-scale-with-firewall-manager/

Since AWS Firewall Manager was introduced in 2018, it has evolved with many more features and today also supports the newest version of AWS WAF, as well as the latest AWS WAF APIs (AWS WAFV2), and AWS Managed Rules for AWS WAF. (Note that the original AWS WAF APIs are still available and supported under the name AWS WAF Classic. Firewall Manager already supported AWS WAF Classic and continues to do so.) In this blog, we walk you through the steps of setting up Firewall Manager policies for AWS WAF and highlight some of the options available. We also walk through how to use the recently launched feature that enables centralized logging for your AWS WAF policies. In addition, we share an AWS CloudFormation template that you can use to set up Firewall Manager policies, AWS WAF rule groups, and the related AWS WAF rules (both custom and managed rules). Before we get into the content of the blog, here’s some background information you should know.

AWS WAF rules define how to inspect web requests and what to do when a web request matches the inspection criteria. Firewall Manager simplifies the administration of AWS WAF rules at scale across multiple accounts.

Background

Web ACL capacity units

The web ACL capacity unit (WCU) is a new concept that we introduced to AWS WAF. WCU is a measurement that is used to calculate and control the operating resources that are needed to run the rules in your web access control list (web ACL).
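Before you deploy a rule group, you can ask AWS WAF how many WCUs a given set of rules will consume; a hedged example that reads the rules from a local JSON file:

aws wafv2 check-capacity \
    --scope REGIONAL \
    --rules file://rules.json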

AWS Managed Rules

AWS Managed Rules (AMRs) are a set of AWS WAF rules curated and maintained by the AWS Threat Research Team. With just a few clicks, these rules can help protect your web applications from new and emerging risks, so you don’t need to spend time researching and writing your own rules. The core rule set covers some of the common security risks described in the OWASP Top 10 publication. AMRs also include IP reputation lists based on Amazon threat intelligence that can help reduce your exposure to bot traffic or exploitation attempts. You can add multiple AMRs to your web ACL or write hundreds of your own rules. Additional rule sets from AWS Marketplace sellers like Cyber Security Cloud and Fortinet are also available to use. If you subscribe to rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.

Rule groups

A rule group is a group of AWS WAF rules. In the new AWS WAF, a rule group is defined under AWS WAF, and you can add rule groups as a reusable set of rules under a web ACL. With the addition of AMRs, customers can select from AWS Managed Rule groups in addition to Partner Managed and Custom Configured rule groups. So, you now have the option to create Firewall Manager policies by using either AWS WAF Classic rule groups or new AWS WAF rule groups.

Firewall Manager policy

A Firewall Manager policy is an entity that contains a set of rule groups that can be associated to and applied to your resources. In this policy, you also specify the resource types you want to protect in specific or all accounts. Based on the policy, Firewall Manager creates a Firewall Manager web ACL in the corresponding accounts. In addition, individual application teams can add more rules or rule groups to this web ACL.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites:

  • AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. If you’re new to Organizations, use these steps to enable AWS Organizations in your account. For more information, see Creating an organization and Enabling all features in your organization.
  • A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the administrator for Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization. On the web console of the AWS account that you want to designate as the Firewall Manager administrator, go to the Firewall Manager service, and on the Settings tab, choose Set administrator account to set that account as the administrator, as shown in figure 1.
     
    Figure 1: Firewall Manager administrator account

  • AWS Config: You must enable AWS Config for all the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all the accounts in your organization, you can use the Enable AWS Config template on the StackSets Sample Templates page. For more information, see Getting Started with AWS Config.

Walkthrough: Set up Firewall Manager policies

In the following steps, we walk you through creating and applying a Firewall Manager policy across AWS accounts, explaining the implications of the options you can choose. This policy uses AWS WAF rules that include AMRs such as the core rule set, the Amazon IP reputation list, and the SQL database rule set.

To create and apply Firewall Manager policies

  1. In the AWS Management Console, navigate to the Firewall Manager service, choose Security Policies, and then choose the appropriate Region.
  2. Choose the Create Policy button. Under Policy type, you can see four options to choose from, as shown in figure 2. For this walkthrough, choose AWS WAF, which is the most recent version of AWS WAF, and then choose Next.
     
    Figure 2: Choosing the policy type

  3. You can now see options to add two sets of rule groups, first rule groups and last rule groups, as shown in figure 3.
    • First rule groups: When the web ACL inspects a web request, these are the rule groups that are prioritized to be evaluated at the very beginning. Note that these can be either custom-built rules or managed AWS WAF rules offered by AWS or other sellers.
    • Last rule groups: If the web request makes it through the first rule set and the AWS WAF rules defined for this web ACL (which will be in the format FMManagedWebACLV2PartialRuleName-policyXXXX), then it is evaluated against this last set of rule groups before the action defined in this rule set is applied.
    • Default web ACL action: This option specifies whether you want the web request evaluated by the rule to be allowed or blocked.
    Figure 3: Updating Rule groups

  4. See the AWS Managed Rules rule groups list to understand the rules that are defined in the managed rule set. You can choose to override the default action and instead apply the count setting to monitor the traffic for the rule group. This helps you monitor for false positives before deciding to allow or block certain web requests.
     
    You can apply this count mode to specific rules of the rule set by selecting the rule, choosing the Edit button, as shown in figure 4, and then setting the toggle for individual rules.
     
    Figure 4: Edit rule group to override the default action set

    Figure 5 shows the Override rules action setting toggled on and off for various rules. The AWS Core Rule Set contains rules to protect against commonly occurring vulnerabilities described in OWASP publications. The two rules, SizeRestrictions_QUERYSTRING and SizeRestrictions_BODY, are set to monitoring mode, which helps verify that the URI query string length and request body size are within the standard boundary for web applications.
     

    Figure 5: Example of setting a managed rule to override the rules action

  5. On the same page, under Policy action, you can also set the policy action to either identify and monitor the resources that don’t comply with the policy rules, without auto-remediation, or to auto-remediate any of the non-compliant resources. This option is shown in figure 6.
     
    Auto-remediation: Here is where you can get creative in using Firewall Manager policies based on various requirements. For example, you can create policies for specific departments by using AWS tags, or for all applications deployed in specific accounts, where you want to enforce certain AWS WAF rules to meet requirements to protect all the resources. Notice in figure 6 that you can choose to auto-remediate. This means that these AWS WAF rules are not only applied to all the resource types that you select to protect, but also that if someone tampers with this Firewall Manager web ACL by deleting the rule group, Firewall Manager creates the rule group again within a few minutes. In the background, Firewall Manager has a tight integration with AWS Config to monitor all the resources across the other accounts in that parent organization. Similarly, in the same account, you could create another policy for a different department or team where you don’t want to enforce the AWS WAF rules, but instead let the application teams in those departments create their own web ACLs to use.
     
    Figure 6: Setting the default web ACL and auto-remediate actions

    In figure 6, you see an option to replace the existing web ACLs. You may have created custom AWS WAF rules and applied those to the resources by using a custom web ACL. With this option, you can either choose not to alter such existing resources that are already protected by another custom web ACL (and not by Firewall Manager–created web ACLs), or to disassociate the resources and have them protected by the new web ACLs created by this policy.

  6. In this last step, under Policy scope, you decide whether to apply the policy to specific types of AWS resources, to all accounts in the organization or just some of them, and whether to filter based on tags. This option is shown in figure 7. In this example, the policy is restricted to two accounts and two corresponding OUs, with tags limiting it to DeptName: PCI.
     
    Figure 7: Defining policy scope for specific accounts

Once these changes are applied, then in the background, with AWS Config enabled in the child accounts that are included in the policy, web ACLs are created in the format “FMManagedWebACLV2<rulegroupname>XXXXXXXXXXXXX”. All resources that meet the conditions called out in the policy are automatically protected.

Validation: You can now log in to one of the child accounts to verify whether the resources are created. In our example, all the Application Load Balancers with the tag DeptName: PCI will be associated with this web ACL. You can also add more AWS WAF rules to this web ACL.
 

Figure 8: Reviewing and managing web ACLs in participating child accounts
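A quick way to confirm from a child account is to list the web ACLs in the Region and look for the Firewall Manager–managed name, for example:

aws wafv2 list-web-acls --scope REGIONAL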

Firewall Manager logging capability

As part of the AWS WAF capability, you want to make sure that logging is enabled by using the recently announced feature for centralized logging of your AWS WAF policies. With this logging feature, you get detailed information about traffic within your organization.

The steps are similar to enabling logging for AWS WAF and consist of two steps. In the first step, Amazon Kinesis Data Firehose is used to capture logs from your policy’s web ACLs and deliver them to a configured storage destination through the HTTPS endpoint. In the second step, you enable logging in a Firewall Manager policy and select the Firehose stream you created in the first step.

Step 1: Set up a new Kinesis Data Firehose delivery stream

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Elasticsearch Service (ES), Splunk, and Amazon Redshift. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

To set up the delivery stream

  1. In the AWS Management Console, using your Firewall Manager administrator account, open the Amazon Kinesis Data Firehose service and choose the button to create a new delivery stream.
    1. For Delivery stream name, enter a name for your new stream that starts with aws-waf-logs- as shown in figure 9. AWS WAF filters all streams starting with the keyword aws-waf-logs when it displays the delivery streams. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.
    3. We recommend that you also enable server-side encryption. You can choose to use either AWS-owned keys or customer-managed keys. In this example, we’ve chosen AWS owned Customer master keys (CMKs). If you have your own CMKs, you can select the second option, customer-managed keys, and pick the corresponding key from the dropdown list.
       
      Figure 9: Setting the source for the Kinesis Data Firehose delivery stream

  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Data transformation, choose Disabled. You also have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, for Record format conversion, choose Disabled. Figure 10 shows these options.
     
    Figure 10: Setting data transformation and format conversion

  3. On the Select destination screen, for Destination, choose Amazon S3, as shown in figure 11.
    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket, since you’ll need the bucket name in a later step in this walkthrough.
    2. (Optional) Set the S3 prefix and error bucket for logs.
       
      Figure 11: Selecting the destination

    3. On the Configure Settings screen, shown in figure 12, choose other S3 options, such as encryption and compression, and creation of an IAM role for granting access to the Firehose delivery stream.
      1. You can leave the default S3 buffer values. We also recommend enabling compression and encryption.
      2. Either choose the option to create a new IAM role, or choose an existing role if you’ve already created one.
      Figure 12: Setting S3 options and IAM role

  4. In the next screen, review all the options you selected and choose the Create Delivery Stream button to create a Kinesis Data Firehose delivery stream.
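If you prefer to create the delivery stream programmatically, the following sketch shows the general shape; the stream name must start with aws-waf-logs-, and the role and bucket ARNs are placeholders:

aws firehose create-delivery-stream \
    --delivery-stream-name aws-waf-logs-fms-central \
    --delivery-stream-type DirectPut \
    --s3-destination-configuration RoleARN=arn:aws:iam::111122223333:role/firehose-waf-logs-role,BucketARN=arn:aws:s3:::example-waf-log-bucket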

Step 2: Enable logging for an AWS WAF policy

In this step, you configure Firewall Manager policy to direct the log ingestion to the Kinesis Data Firehose delivery stream that you created in the previous step.

To enable logging for an AWS WAF policy

  1. On the AWS Management Console, search for AWS Firewall Manager and in the navigation pane, choose Security Policies.
  2. Choose the AWS WAF policy that you want to enable logging for, and on the Policy details tab, in the Policy rules section, choose Edit. For Logging configuration status, choose Enabled.
  3. Choose the Kinesis Data Firehose stream that you created for your logging. You must choose a stream whose name begins with “aws-waf-logs-”.
     
    Figure 13: Enable Firewall Manager logs

  4. Review your settings, and then choose Save to save your changes to the policy.

    Note: Firewall Manager supports this option for the latest version of AWS WAF, but not for AWS WAF Classic.

With these two steps, you now have logging enabled for your Firewall Manager policies.

Deploy Firewall Manager policy with a CloudFormation template

In this section, we provide you with an example CloudFormation template that deploys a Firewall Manager policy with a rule group that consists of both an AWS Managed rule set and a custom AWS WAF rule. As a part of the custom AWS WAF rule, this template creates an IP set in which you specify the list of IP addresses from which you want to block traffic. It also creates a rule with an AND statement that blocks cross-site scripting requests and requests that originate from the IP addresses that you specified. You will also notice that this rule is applied to specific accounts that are entered in the parameters as comma-delimited values. Out of the many available AWS Managed Rules, we used two rule groups: AWSManagedRulesCommonRuleSet and AWSManagedRulesSQLiRuleSet. For the former, we set the override action to count, which means the requests won’t be blocked but will be counted for further investigation. The latter rule group contains rules to block request patterns associated with exploitation of SQL databases, such as SQL injection attacks. This can help prevent remote injection of unauthorized queries.

As a best practice, before using a rule group in production, test it in a non-production environment, with the action override set to count. Evaluate the rule group using Amazon CloudWatch metrics combined with AWS WAF sampled requests or AWS WAF logs. When you’re satisfied that the rule group does what you want, remove the override on the group.

To deploy the CloudFormation template

  1. Copy the following template code and save it in a file named deploy-firewall-manager-policy.yaml.
    Description: Create IPSet, RuleGroup and Firewall Manager Policy. Firewall Policy contains two AWS managed rule groups and one custom rule group.
    Parameters:
      BlockIpAddressCIDR:
        Type: CommaDelimitedList
        Description: "Enter IP Address range by using CIDR notation separated by comma to block incoming traffic originating from them. For eg: To specify the IPV4 address 192.0.2.44 type 192.0.2.44/32 or 10.0.2.0/24"
      AWSAccountIds:
        Type: CommaDelimitedList
        Description: "Enter AWS Account IDs separated by comma in which you want to apply Firewall Manager policy"
    
    Resources:
      WAFIPSetFMS:
          Type: 'AWS::WAFv2::IPSet'
          Properties:
            Description: Block ranges of IP addresses using this IP Set
            Name: WAFIPSetFMS
            Scope: REGIONAL
            IPAddressVersion: IPV4	
            Addresses: !Ref BlockIpAddressCIDR
      RuleGroupXssFMS:
        Type: 'AWS::WAFv2::RuleGroup'
        DependsOn: WAFIPSetFMS
        Properties: 
          Capacity: 500
          Description: AWS WAF Rule Group to block web requests that contain cross site scripting injection attacks and originate from specific IP ranges.
          Name: RuleGroupXssFMS
          Scope: REGIONAL
          VisibilityConfig:
              SampledRequestsEnabled: true
              CloudWatchMetricsEnabled: true
              MetricName: RuleGroupXssFMS
          Rules: 
            - Name: xssException
              Priority: 0
              Action:
                Block: {}
              VisibilityConfig:
                  SampledRequestsEnabled: true
                  CloudWatchMetricsEnabled: true
                  MetricName: xssException
              Statement:
                AndStatement:
                  Statements:
                  - XssMatchStatement:
                      FieldToMatch:
                        Body: {}
                      TextTransformations:
                      - Type: HTML_ENTITY_DECODE
                        Priority: 0
                      - Type: LOWERCASE
                        Priority: 1
                  - IPSetReferenceStatement:
                      Arn: !GetAtt WAFIPSetFMS.Arn
      PolicyWAFv2:
        Type: AWS::FMS::Policy
        Properties:
          ExcludeResourceTags: false
          PolicyName: WAF-Policy
          IncludeMap: 
            ACCOUNT: !Ref AWSAccountIds
          RemediationEnabled: true
          ResourceType: AWS::ElasticLoadBalancingV2::LoadBalancer 
          SecurityServicePolicyData: 
            Type: WAFV2
            ManagedServiceData: !Sub '{"type":"WAFV2", 
                                      "preProcessRuleGroups":[{ 
                                      "ruleGroupType":"RuleGroup",
                                      "ruleGroupArn":"${RuleGroupXssFMS.Arn}",
                                      "overrideAction":{"type":"NONE"}},{
                                      "managedRuleGroupIdentifier":{
                                      "managedRuleGroupName":"AWSManagedRulesCommonRuleSet", 
                                      "vendorName":"AWS"},
                                      "overrideAction":{"type":"COUNT"}, 
                                      "excludeRules":[],"ruleGroupType":"ManagedRuleGroup"},{
                                      "managedRuleGroupIdentifier":{
                                      "managedRuleGroupName":"AWSManagedRulesSQLiRuleSet", 
                                      "vendorName":"AWS"},
                                      "overrideAction":{"type":"NONE"}, 
                                      "excludeRules":[],"ruleGroupType":"ManagedRuleGroup"}],
                                      "postProcessRuleGroups":[],
                                      "defaultAction":{"type":"BLOCK"}}'
    
    

  2. Run the following AWS CLI command to deploy the stack. If you haven’t configured the AWS CLI, refer to this quickstart. (If you’d rather deploy programmatically, see the boto3 sketch after this step.)
    aws cloudformation create-stack --stack-name firewall-manager-policy-stack \
          --template-body file://deploy-firewall-manager-policy.yaml \
          --parameters \
          ParameterKey=BlockIpAddressCIDR,ParameterValue=<Enter-BlockIpAddressCIDR> \
          ParameterKey=AWSAccountIds,ParameterValue=<Enter-AWSAccountIds>
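
If you’d rather deploy the template programmatically, here’s a minimal boto3 sketch of the same deployment; the Region and the parameter values are placeholders:

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open("deploy-firewall-manager-policy.yaml") as f:
        template_body = f.read()

    # Parameter values are placeholders; substitute your own CIDR ranges
    # and account IDs
    cfn.create_stack(
        StackName="firewall-manager-policy-stack",
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "BlockIpAddressCIDR", "ParameterValue": "192.0.2.44/32"},
            {"ParameterKey": "AWSAccountIds", "ParameterValue": "111122223333"},
        ],
    )

    # Block until stack creation completes
    cfn.get_waiter("stack_create_complete").wait(
        StackName="firewall-manager-policy-stack"
    )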
    

Pricing

As of this writing, the price per AWS WAF protection policy per Region is $100.00. To get an overall idea of pricing, we recommend that you review the AWS Firewall Manager pricing guide, which covers a few scenarios.

AWS WAF charges based on the number of web access control lists (web ACLs) that you create, the number of rules that you add per web ACL, and the number of web requests that you receive. There are no upfront commitments. Learn more about AWS WAF pricing.

There is no additional charge for using AWS Managed Rules for AWS WAF. When you subscribe to a Managed Rule Group provided by an AWS Marketplace seller, you will be charged additional fees based on the price set by the seller.

AWS Shield Advanced customers can use Firewall Manager to apply AWS Shield Advanced and AWS WAF protections across their entire organization at no additional cost.

Conclusion

This blog post describes how you can create Firewall Manager policies with the new version of AWS WAF rules by using the console or AWS CloudFormation. You can also create these rules by using the AWS Command Line Interface (AWS CLI), or programmatically with the SDK and other similar scripting tools. Using AWS WAF and Firewall Manager together, you can create a deployment strategy to safeguard all of your accounts centrally at the organization level, and you can choose to automatically remediate the AWS WAF rules if anything changes after deployment.

For further reading, see the AWS Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Senior Cloud Infrastructure Architect with AWS who delivers proof-of-concept projects and topical workshops, and leads implementation projects. He holds a bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, he enjoys watching documentaries, biking, practicing meditation and discussing spirituality.

Author

Mahek Pavagadhi

Mahek is a Cloud Infrastructure Architect at AWS in San Francisco, CA. She has a master’s degree in Software Engineering with a major in Cloud Computing. She is passionate about cloud services and building solutions with it. Outside of work, she is an avid traveler who loves to explore local cafeterias.

120 AWS services achieve HITRUST certification

Post Syndicated from Hadis Ali original https://aws.amazon.com/blogs/security/120-aws-services-achieve-hitrust-certification/

We’re excited to announce that 120 Amazon Web Services (AWS) services are certified for the HITRUST Common Security Framework (CSF) for the 2020 cycle.

The full list of AWS services that were audited by a third-party assessor and certified under HITRUST CSF is available on our Services in Scope by Compliance Program page. You can view and download our HITRUST CSF certification from AWS Artifact.

AWS HITRUST CSF certification is available for customer inheritance

You don’t have to assess the inherited controls, because AWS already has! You can deploy environments onto AWS and inherit our HITRUST CSF certification provided that you use only in-scope services and apply the controls detailed on the HITRUST website that you are responsible for implementing.

The HITRUST certification allows you, as an AWS customer, to tailor your security control baselines to a variety of factors including, but not limited to, regulatory requirements and organization type. The HITRUST CSF is widely adopted by leading organizations in a variety of industries in their approach to security and privacy. Visit the HITRUST website for more information.

As always, we value your feedback and questions and are committed to helping you achieve and maintain the highest standard of security and compliance. Feel free to contact the team through AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Hadis Ali

Hadis is a Security and Privacy Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS Security Assurance. Hadis holds Bachelor’s degrees in Accounting and Information Systems from the University of Washington.

Investigate VPC flow with Amazon Detective

Post Syndicated from Ross Warren original https://aws.amazon.com/blogs/security/investigate-vpc-flow-with-amazon-detective/

Many Amazon Web Services (AWS) customers need enhanced insight into IP network flow. Traditionally, cost, the complexity of collection, and the time required for analysis have led to incomplete investigations of network flows. Having good telemetry is paramount, and VPC Flow Logs are a very important part of a robust centralized logging architecture. The information that VPC Flow Logs provide is frequently used by security analysts to determine the scope of security issues, to validate that network access rules are working as expected, and to help analysts investigate issues and diagnose network behaviors. Flow logs capture information about the IP traffic going to and from EC2 interfaces in a VPC. Each record describes aspects of the traffic flow, such as where it originated, where it was sent, what network ports were used, and how many bytes were sent.

Amazon Detective now enables you to interactively examine the details of the virtual private cloud (VPC) network flows of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Detective automatically collects VPC flow logs from your monitored accounts, aggregates them by EC2 instance, and presents visual summaries and analytics about these network flows. Detective doesn’t require VPC Flow Logs to be configured and doesn’t impact existing flow log collection.

In this blog post, I describe how to use the new VPC flow feature in Detective to investigate an UnauthorizedAccess:EC2/TorClient finding from Amazon GuardDuty. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty documentation states that this alert can indicate unauthorized access to your AWS resources with the intent of hiding the unauthorized user’s true identity. I’ll demonstrate how to use Amazon Detective to investigate an instance that was flagged by Amazon GuardDuty, to determine whether it has been compromised.

Starting the investigation in GuardDuty

In my GuardDuty console, I’m going to select the UnauthorizedAccess:EC2/TorClient finding shown in Figure 1, choose the Actions menu, and select Investigate.
 

Figure 1: Investigating from the GuardDuty console

This opens a new browser tab and launches the Amazon Detective console, where I’m presented with the profile page for this finding, shown in Figure 2. You must have Detective enabled to pivot between a GuardDuty finding and Detective. Detective provides profile pages for supported GuardDuty findings and AWS resources (for example, IP address, EC2 instance, user, and role) that include information and data visualizations that summarize observed behaviors and give guidance for interpreting them. Profiles help analysts to determine whether the finding is of genuine concern or a false positive. For resources, profiles provide supporting details for an investigation into a finding or for a general hunt for suspicious activity.
 

Figure 2: Finding profile panel
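
If you prefer to locate the same finding programmatically before pivoting to Detective, a minimal boto3 sketch like the following would work; the detector ID and Region are placeholders:

    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")

    # Placeholder detector ID; look yours up with guardduty.list_detectors()
    detector_id = "12abc34d567e8fa901bc2d34e56789f0"

    # Search for findings of the type we're investigating
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={
            "Criterion": {"type": {"Eq": ["UnauthorizedAccess:EC2/TorClient"]}}
        },
    )["FindingIds"]

    # Retrieve the full finding details, including the affected instance ID
    if finding_ids:
        findings = guardduty.get_findings(
            DetectorId=detector_id, FindingIds=finding_ids
        )["Findings"]
        for finding in findings:
            print(finding["Id"],
                  finding["Resource"]["InstanceDetails"]["InstanceId"])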

In this case, the profile page for this GuardDuty UnauthorizedAccess:EC2/TorClient finding provides contextual and behavioral data about the EC2 instance on which GuardDuty has noted the issue. As I dive into this finding, I’ll be asking questions that help assess whether the instance was in fact accessed unintentionally, such as: What IP port or network service was in use at that time? Were any large data transfers involved? Was the traffic allowed by my security groups? Profile pages in Detective organize content that helps security analysts investigate GuardDuty findings, examine unexpected network behavior, and identify other AWS resources that might be affected by a potential security issue.

I begin scrolling down the page and notice the Findings associated with EC2 instance i-9999999999999999 panel. Detective displays related findings to provide analysts with additional evidence and context about potentially related issues. The finding I’m investigating is listed there, as well as an Unusual Behaviors/VM/Behavior:EC2-NetworkPortUnusual finding. GuardDuty builds a baseline of your network traffic and generates findings when traffic falls outside the calculated norm. While we might not investigate every instance of anomalous traffic, having these alerts correlated by Detective provides context for validating the issue. Keeping this in mind as I scroll down, at the bottom of this profile page I find the Overall VPC flow volume panel. If you choose the Info link next to the panel title, you can see helpful tips that describe how to use the visualizations and provide ideas for questions to ask within your investigation. These info links are available throughout Detective. Check them out!

Investigating VPC flow in Detective

In this investigation, I’m very curious about the two large spikes in inbound traffic that I see in the Overall VPC flow volume panel, which seem to be visually associated with some unusual outbound traffic spikes. It’s most likely that these outbound spikes are related to the Unusual Behaviors/VM/Behavior:EC2-NetworkPortUnusual finding I mentioned earlier. To start the investigation, I choose the display details for scope time button, shown circled at the bottom of Figure 2. This expands the VPC Flow Details, shown in Figure 3.
 

Figure 3: Our first look at VPC Flow Details

We can now see that each entry displays the volume of inbound traffic, the volume of outbound traffic, and whether the access request was accepted or rejected. Detective provides annotations on the VPC flows to help guide your investigation. These From finding annotations make it clear which flows and resources were involved in the finding. In this case, we can easily see (in Figure 3) the three IP addresses at the top of the list that triggered this GuardDuty finding.

I’m first going to focus on the spikes in traffic that are above the baseline. When I click on one of the spikes in the graph, the time window for the VPC flow activity now matches the dates of these spikes I’m investigating.

If I choose the Inbound Traffic column header, shown in Figure 4, I can find the flows that contributed to the spike during this time window.
 

Figure 4: Inbound traffic spikes

Note that the two large inbound spikes aren’t associated with the IP address from the UnauthorizedAccess:EC2/TorClient finding, based on the Detective annotation From finding. Let’s check the outbound traffic. If I do a quick sort of the table based on the outbound traffic column, as shown in Figure 5, we can also see the outbound spikes, but it isn’t immediately evident whether the spikes are associated with this finding. I could continue to investigate the spikes (because they are a visual anomaly), or focus just on the VPC flow traffic that GuardDuty and Detective have labeled as associated with this Tor finding.
 

Figure 5: Outbound traffic spikes

Let’s focus on the outbound and inbound spikes and see if we can determine what’s happening. The inbound spikes are on port 443, typically an HTTPS port, or a secure web connection. The outbound spikes are on port 22 (SSH), but go to IP addresses that appear to be internal, based on their 172.16.x.x addresses. The port 443 traffic might indicate a web server that’s open to the internet and receiving traffic. With further investigation, we can determine whether this idea is valid, and continue hunting for potentially malicious traffic.

A good next step would be to investigate the two specific IP addresses to rule out their involvement in the finding. I can do this by right-clicking on either of the external IP addresses and opening a new tab, where I can focus on investigating these two specific IP addresses. I would take this line of investigation to possibly rule out the involvement of these IP addresses in this finding, determine if they regularly communicate with my resources, find out what instance(s) they’re related to, and see if there are other findings associated with these instances or IP addresses. This deeper investigation is outside the scope of this blog post, but it’s something you should be doing in your own environment.

IP addresses in AWS are ephemeral in nature; the unique identifier in VPC flow logs is the instance ID. At the time of this investigation, 172.16.0.7 is assigned to the instance related to this finding, so let’s take a look at the internal 172.16.0.7 IP address with 218 MB of outbound traffic on port 22. I choose 172.16.0.7, and Detective opens the profile page for this specific IP address, as shown in Figure 6. Here we see some interesting correlations: two other GuardDuty findings related to SSH brute-force attacks. These could be related to our outbound port 22 spikes, because they’re certainly in the window of time we’re investigating.
 

Figure 6: IP address profile panel

As part of a deeper investigation, you would investigate the SSH brute-force findings for 198.51.100.254 and 203.0.113.83, but for now I’m interested in what this IP address is involved in. Detective easily associates this 172.16.0.7 IP address with the instance that was assigned the IP during the scope time. I scroll down to the bottom of the profile page for 172.16.0.7 and investigate the i-9999999999999999 instance by choosing the instance name.

Filtering VPC flow activity

In Detective, we’re now looking at an instance profile panel, similar to the one in Figure 2. Because we’re interested in VPC flow details, I scroll down and select display details for scope time.

To focus on specific activity, I can filter the activity details by the following values:

  • IP address
  • Local or remote port
  • Direction
  • Protocol
  • Whether the request was accepted or rejected

I’m going to filter these VPC flow details and just look at port 22 (sshd) inbound traffic. I select the Filter check box and select Local Port and 22, as shown in Figure 7. Detective fills in all the available ports for you, making it easy to complete this filter.
 

Figure 7: Port 22 traffic

The activity details show a few IP addresses related to port 22, and we’re still following the large outbound spikes of traffic. It’s outside the scope of this blog post, but now it would be time to start looking at your security groups and network access control lists (ACLs) and determine why port 22 is open to the internet and sending all this traffic.

Understanding traffic behavior

As an investigator, I now have a good picture of the traffic related to the initial finding, and by diving deeper we were able to discover other interesting traffic during the same timeframe. While we may not always determine who did it, the goal should be to improve our understanding of the behavior of our environment and gather important technical evidence. Detective helps you identify and investigate anomalies to give you insight into your environment. If we were to continue our investigation into the finding, here are some actions we can take within Detective.

Investigate VPC findings with Detective:

  • Perform ports and utilization analysis
    • Identify service and ephemeral ports
    • Determine whether traffic was accepted or rejected based on security groups and NACL configurations
    • Investigate possible reconnaissance traffic by exploring the significant amount of rejected traffic
  • Correlate EC2 instances to TCP/IP ports and IPs
  • Analyze traffic spikes and anomalies
  • Discover traffic patterns and make behavioral correlations

Explore EC2 instance behavior with Detective:

  • Perform directional traffic analysis
  • Investigate possible data exfiltration events by digging into large transfers
  • Enumerate distinct IP connections and sort and filter by protocol, amount of traffic, and traffic direction
  • Gather data related to a spike in port count from a single IP address (potential brute force) or multiple IP addresses (distributed denial of service (DDoS))

Additional forensics steps to consider

  • Snapshot EC2 Volumes
  • Memory dump of EC2 instance
  • Isolate EC2 instance
  • Review your authentication strategy and assess whether the chosen authentication method is sufficient to protect your asset

Summary

Without requiring you to set up infrastructure or spend time configuring log ingestion, Detective collects, organizes, and presents relevant data for your threat analysis and investigations. Security and operations teams will find this new capability helpful for simplifying EC2 traffic analysis, validating security group permissions, and diagnosing EC2 instance behavior. Detective does the heavy lifting of storing and analyzing VPC flow data so you can focus on quickly answering your investigative questions. VPC network flow details are available now in all Detective supported Regions and are included as part of your service subscription.

To get started, you can enable a 30-day free trial of Amazon Detective. See the AWS Regional Services page for all the regions where Detective is available. To learn more, visit the Amazon Detective product page.

Are you a visual learner? Check out Amazon Detective Overview and Demonstration. This video helps you learn how and when to use Amazon Detective to improve the security of your AWS resources.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ross Warren

Ross Warren is a Solution Architect at AWS based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Ross has worked at a handful of startups and has enjoyed the transition to AWS because he can continue to build solutions for customers on today’s most innovative platform.

Author

Jim Miller

Jim is a Solution Architect at AWS based in Connecticut. Jim has worked within cyber security his entire career with areas of focus including cyber security architecture and incident response. At AWS he loves building secure solutions for customers to enable teams to build and innovate with confidence.

Round 2 post-quantum TLS is now supported in AWS KMS

Post Syndicated from Alex Weibel original https://aws.amazon.com/blogs/security/round-2-post-quantum-tls-is-now-supported-in-aws-kms/

AWS Key Management Service (AWS KMS) now supports three new hybrid post-quantum key exchange algorithms for the Transport Layer Security (TLS) 1.2 encryption protocol that’s used when connecting to AWS KMS API endpoints. These new hybrid post-quantum algorithms combine the proven security of a classical key exchange with the potential quantum-safe properties of new post-quantum key exchanges undergoing evaluation for standardization. The fastest of these algorithms adds approximately 0.3 milliseconds of overhead compared to a classical TLS handshake. The new post-quantum key exchange algorithms added are Round 2 versions of Kyber, Bit Flipping Key Encapsulation (BIKE), and Supersingular Isogeny Key Encapsulation (SIKE). Each organization has submitted its algorithms to the National Institute of Standards and Technology (NIST) as part of NIST’s post-quantum cryptography standardization process. This process spans several rounds of evaluation over multiple years, and is likely to continue beyond 2021.

In our previous hybrid post-quantum TLS blog post, we announced that AWS KMS had launched hybrid post-quantum TLS 1.2 with Round 1 versions of BIKE and SIKE. The Round 1 post-quantum algorithms are still supported by AWS KMS, but at a lower priority than the Round 2 algorithms. You can choose to upgrade your client to enable negotiation of Round 2 algorithms.

Why post-quantum TLS is important

A large-scale quantum computer would be able to break the current public-key cryptography that’s used for key exchange in classical TLS connections. While a large-scale quantum computer isn’t available today, it’s still important to think about and plan for your long-term security needs. TLS traffic using classical algorithms recorded today could be decrypted by a large-scale quantum computer in the future. If you’re developing applications that rely on the long-term confidentiality of data passed over a TLS connection, you should consider a plan to migrate to post-quantum cryptography before the lifespan of the sensitivity of your data would be susceptible to an unauthorized user with a large-scale quantum computer. As an example, this means that if you believe that a large-scale quantum computer is 25 years away, and your data must be secure for 20 years, you should migrate to post-quantum schemes within the next 5 years. AWS is working to prepare for this future, and we want you to be prepared too.
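
To make that planning concrete, here’s a toy calculation using the illustrative numbers from the example above; the estimates are for intuition only, not predictions:

    # Illustrative numbers from the example above, not predictions
    years_until_large_scale_quantum_computer = 25
    years_data_must_remain_confidential = 20

    # Traffic recorded today could be decrypted once a large-scale quantum
    # computer exists, so migration must finish while the remaining quantum
    # timeline still covers the data's full confidentiality lifetime
    years_left_to_migrate = (years_until_large_scale_quantum_computer
                             - years_data_must_remain_confidential)
    print(f"Migrate to post-quantum schemes within {years_left_to_migrate} years")  # 5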

We’re offering this feature now, instead of waiting for standardization efforts to be complete, so that you have a way to measure the potential performance impact to your applications. Offering this feature now also gives you the protection afforded by the proposed post-quantum schemes today. While we believe that the use of this feature raises the already high security bar for connecting to AWS KMS endpoints, these new cipher suites will impact bandwidth utilization and latency. Additionally, using these new algorithms could create connection failures for intermediate systems that proxy TLS connections. We’d like to get feedback from you on the effectiveness of our implementation, or any issues found, so we can improve it over time.

Hybrid post-quantum TLS 1.2

Hybrid post-quantum TLS is a feature that provides the security protections of both the classical and post-quantum key exchange algorithms in a single TLS handshake. Figure 1 shows the differences in the connection secret derivation process between classical and hybrid post-quantum TLS 1.2. Hybrid post-quantum TLS 1.2 has three major differences from classical TLS 1.2:

  • The negotiated post-quantum key is appended to the ECDHE key before being used as the hash-based message authentication code (HMAC) key.
  • The text hybrid in its ASCII representation is prepended to the beginning of the HMAC message.
  • The entire client key exchange message from the TLS handshake is appended to the end of the HMAC message.
Figure 1: Differences in the connection secret derivation process between classical and hybrid post-quantum TLS 1.2
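
As a toy model of the three differences listed above, here’s a sketch of how such a hybrid secret derivation could look, assuming an HMAC-SHA-256 PRF; this is a simplified illustration for intuition, not the actual s2n implementation:

    import hashlib
    import hmac

    def hybrid_connection_secret(ecdhe_key: bytes,
                                 post_quantum_key: bytes,
                                 hmac_message: bytes,
                                 client_key_exchange: bytes) -> bytes:
        # 1. The negotiated post-quantum key is appended to the ECDHE key
        #    before being used as the HMAC key
        key = ecdhe_key + post_quantum_key
        # 2. The ASCII text "hybrid" is prepended to the HMAC message, and
        # 3. the entire client key exchange message is appended to its end
        message = b"hybrid" + hmac_message + client_key_exchange
        return hmac.new(key, message, hashlib.sha256).digest()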

Some background on post-quantum TLS

Today, all requests to AWS KMS use TLS with key exchange algorithms that provide perfect forward secrecy and use one of the following classical schemes:

  • Finite-field Diffie-Hellman ephemeral (FFDHE)
  • Elliptic-curve Diffie-Hellman ephemeral (ECDHE)

While existing FFDHE and ECDHE schemes use perfect forward secrecy to protect against the compromise of the server’s long-term secret key, these schemes don’t protect against large-scale quantum computers. In the future, a sufficiently capable large-scale quantum computer could run Shor’s Algorithm to recover the TLS session key of a recorded classical session, and thereby gain access to the data inside. Using a post-quantum key exchange algorithm during the TLS handshake protects against attacks from a large-scale quantum computer.

The possibility of large-scale quantum computing has spurred the development of new quantum-resistant cryptographic algorithms. NIST has started the process of standardizing post-quantum key encapsulation mechanisms (KEMs). A KEM is a type of key exchange that’s used to establish a shared symmetric key. AWS has chosen three NIST KEM submissions to adopt in our post-quantum efforts:

  • Kyber
  • Bit Flipping Key Encapsulation (BIKE)
  • Supersingular Isogeny Key Encapsulation (SIKE)

Hybrid mode ensures that the negotiated key is as strong as the weakest key agreement scheme. If one of the schemes is broken, the communications remain confidential. The Internet Engineering Task Force (IETF) Hybrid Post-Quantum Key Encapsulation Methods for Transport Layer Security 1.2 draft describes how to combine post-quantum KEMs with ECDHE to create new cipher suites for TLS 1.2.

These cipher suites use a hybrid key exchange that performs two independent key exchanges during the TLS handshake. The key exchange then cryptographically combines the keys from each into a single TLS session key. This strategy combines the proven security of a classical key exchange with the potential quantum-safe properties of new post-quantum key exchanges being analyzed by NIST.

The effect of hybrid post-quantum TLS on performance

Post-quantum cipher suites have a different performance profile and bandwidth usage from traditional cipher suites. AWS has measured bandwidth and latency across 2,000 TLS handshakes between an Amazon Elastic Compute Cloud (Amazon EC2) c5n.4xlarge client and the public AWS KMS endpoint, which were both in the us-west-2 Region. Your own performance characteristics might differ, and will depend on your environment, including your:

  • Hardware–CPU speed and number of cores.
  • Existing workloads–how often you call AWS KMS and what other work your application performs.
  • Network–location and capacity.

The following graphs and table show latency measurements performed by AWS for all newly supported Round 2 post-quantum algorithms, in addition to the classical ECDHE key exchange algorithm currently used by most customers.

Figure 2 shows the latency differences of all hybrid post-quantum algorithms compared with classical ECDHE alone, and shows that compared to ECDHE alone, SIKE adds approximately 101 milliseconds of overhead, BIKE adds approximately 9.5 milliseconds of overhead, and Kyber adds approximately 0.3 milliseconds of overhead.
 

Figure 2: TLS handshake latency at varying percentiles for four key exchange algorithms

Figure 3 shows the latency differences between ECDHE with Kyber, and ECDHE alone. The addition of Kyber adds approximately 0.3 milliseconds of overhead.
 

Figure 3: TLS handshake latency at varying percentiles, with only top two performing key exchange algorithms

The following table shows the total amount of data (in bytes) needed to complete the TLS handshake for each cipher suite, the average latency, and latency at varying percentiles. All measurements were gathered from 2,000 TLS handshakes. The time was measured on the client from the start of the handshake until the handshake was completed, and includes all network transfer time. All connections used RSA authentication with a 2048-bit key, and ECDHE used the secp256r1 curve. All hybrid post-quantum tests used the NIST Round 2 versions. The Kyber test used the Kyber-512 parameter, the BIKE test used the BIKE-1 Level 1 parameter, and the SIKE test used the SIKEp434 parameter.

Item              Bandwidth (bytes)   Total handshakes   Average (ms)   p0 (ms)   p50 (ms)   p90 (ms)   p99 (ms)
ECDHE (classic)   3,574               2,000              3.08           2.07      3.02       3.95       4.71
ECDHE + Kyber R2  5,898               2,000              3.36           2.38      3.17       4.28       5.35
ECDHE + BIKE R2   12,456              2,000              14.91          11.59     14.16      18.27      23.58
ECDHE + SIKE R2   4,628               2,000              112.40         103.22    108.87     126.80     146.56

By default, the AWS SDK client performs a TLS handshake once to set up a new TLS connection, and then reuses that TLS connection for multiple requests. This means that the increased cost of a hybrid post-quantum TLS handshake is amortized over multiple requests sent over the TLS connection. You should take the amortization into account when evaluating the overall additional cost of using post-quantum algorithms; otherwise performance data could be skewed.
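
As a rough sketch of that amortization, using the Kyber averages from the table above and an assumed 100 requests per connection:

    # Average handshake latencies from the table above, in milliseconds
    classical_ecdhe_ms = 3.08
    ecdhe_plus_kyber_ms = 3.36

    added_per_handshake_ms = ecdhe_plus_kyber_ms - classical_ecdhe_ms  # ~0.28 ms

    # Illustrative assumption: the SDK reuses one TLS connection for 100 requests
    requests_per_connection = 100
    added_per_request_us = added_per_handshake_ms / requests_per_connection * 1000
    print(f"~{added_per_request_us:.1f} microseconds of added latency per request")  # ~2.8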

AWS KMS has chosen Kyber Round 2 as its highest prioritized post-quantum algorithm, with BIKE Round 2 and SIKE Round 2 next in priority order. This is because Kyber’s performance is closest to the classical ECDHE performance that most AWS KMS customers use today and are accustomed to.

How to use hybrid post-quantum cipher suites

To use the post-quantum cipher suites with AWS KMS, you need the preview release of the AWS Common Runtime (CRT) HTTP client for the AWS SDK for Java 2.x. Also, you will need to configure the AWS CRT HTTP client to use the s2n post-quantum hybrid cipher suites. Post-quantum TLS for AWS KMS is available in all AWS Regions except for AWS GovCloud (US-East), AWS GovCloud (US-West), AWS China (Beijing) Region operated by Beijing Sinnet Technology Co. Ltd (“Sinnet”), and AWS China (Ningxia) Region operated by Ningxia Western Cloud Data Technology Co. Ltd. (“NWCD”). Since NIST has not yet standardized post-quantum cryptography, connections that require Federal Information Processing Standards (FIPS) compliance cannot use the hybrid key exchange. For example, kms.<region>.amazonaws.com supports the use of post-quantum cipher suites, while kms-fips.<region>.amazonaws.com does not.

  1. If you’re using the AWS SDK for Java 2.x, you must add the preview release of the AWS Common Runtime client to your Maven dependencies.
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>2.14.13-PREVIEW</version>
    </dependency>
    

  2. You then must configure the new SDK and cipher suite in the existing initialization code of your application:
    // Assumed imports (package locations may vary by SDK version):
    //   software.amazon.awssdk.http.async.SdkAsyncHttpClient
    //   software.amazon.awssdk.http.crt.AwsCrtAsyncHttpClient
    //   software.amazon.awssdk.services.kms.KmsAsyncClient
    //   software.amazon.awssdk.services.kms.model.ListKeysResponse
    // TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07 comes from the AWS CRT library.

    // Fail fast if this platform can't negotiate the post-quantum suites
    if(!TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07.isSupported()){
        throw new RuntimeException("Post Quantum Ciphers not supported on this Platform");
    }

    // Prefer the hybrid post-quantum cipher suites in the CRT HTTP client
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07)
              .build();
              
    // Route AWS KMS calls through that HTTP client
    KmsAsyncClient kms = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();
             
    // Every request made with this client now uses hybrid post-quantum TLS
    ListKeysResponse response = kms.listKeys().get();
    

Now, all connections made to AWS KMS in supported Regions will use the new hybrid post-quantum cipher suites! To see a complete example of everything set up, check out the example application here.

Things to try

Here are some ideas about how to use this post-quantum-enabled client:

  • Run load tests and benchmarks. These new cipher suites perform differently than traditional key exchange algorithms. You might need to adjust your connection timeouts to allow for the longer handshake times or, if you’re running inside an AWS Lambda function, extend the execution timeout setting.
  • Try connecting from different locations. Depending on the network path your request takes, you might discover that intermediate hosts, proxies, or firewalls with deep packet inspection (DPI) block the request. This could be due to the new cipher suites in the ClientHello or the larger key exchange messages. If this is the case, you might need to work with your security team or IT administrators to update the relevant configuration to unblock the new TLS cipher suites. We’d like to hear from you about how your infrastructure interacts with this new variant of TLS traffic. If you have questions or feedback, please start a new thread on the AWS KMS discussion forum.

Conclusion

In this blog post, I announced support for Round 2 hybrid post-quantum algorithms in AWS KMS, and showed you how to begin experimenting with hybrid post-quantum key exchange algorithms for TLS when connecting to AWS KMS endpoints.

More info

If you’d like to learn more about post-quantum cryptography, check out:

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alex Weibel

Alex is a Senior Software Engineer on the AWS Crypto Algorithms team. He’s one of the maintainers for Amazon’s TLS Library s2n. Previously, Alex worked on TLS termination and request proxying for S3 and the Elastic Load Balancing Service developing new features for customers. Alex holds a Bachelor of Science degree in Computer Science from the University of Texas at Austin.

Fall 2020 SOC 2 Type I Privacy report now available

Post Syndicated from Ninad Naik original https://aws.amazon.com/blogs/security/fall-2020-soc-2-type-i-privacy-report-now-available/

Your privacy considerations are at the core of our compliance work, and at AWS, we are focused on the protection of your content while using Amazon Web Services. Our Fall 2020 SOC 2 Type I Privacy report is now available, demonstrating the privacy compliance commitments we made to you.

The Fall 2020 SOC 2 Type I Privacy report provides you with a third-party attestation of our system and the suitability of the design of our privacy controls. The SOC 2 Privacy Trust Services Criteria (TSC), developed by the American Institute of CPAs (AICPA), establish the criteria for evaluating controls relating to how personal information is collected, used, retained, disclosed, and disposed of to meet AWS’s objectives. Customers can find additional information related to the privacy commitments supporting our SOC 2 Type I report in the Customer Agreement documentation.

The scope of the privacy report includes information about how we handle the content that you upload to AWS and how it is protected in all of the services and locations that are in scope for the latest AWS SOC reports. You can find our SOC 2 Type I Privacy report through Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ninad Naik

Ninad is a Security Assurance Manager at Amazon Web Services, leading multiple security and privacy initiatives within AWS. He has a Master’s degree in Information Systems from Syracuse University, NY and a Bachelor’s of Engineering degree in Information Technology from Mumbai University, India. Ninad has 10 years of experience in security assurance and holds ITIL, CISA, CGEIT, and CISM certifications.

Fall 2020 SOC reports now available with 124 services in scope

Post Syndicated from Ninad Naik original https://aws.amazon.com/blogs/security/fall-2020-soc-reports-now-available-with-124-services-in-scope/

At AWS, we’re committed to providing our customers with continued assurance over the security, availability, and confidentiality of the AWS control environment. We’re proud to deliver the System and Organization Controls (SOC) 1, 2, and 3 reports that enable our AWS customers to maintain confidence in AWS services.

For the Fall 2020 SOC reports, covering 04/01/2020 to 09/30/2020, we are excited to announce two new services in scope, for a total of 124 services in scope. The associated infrastructure supporting our in-scope products and services has been updated to reflect new Regions and edge locations.

Here are the two new services in scope for the Fall 2020 SOC reports:

The Fall 2020 SOC reports are now available through Artifact in the AWS Management Console. The SOC 3 report can also be downloaded here as a PDF.

AWS strives to bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. If there are additional AWS services that you would like to see added to the scope of our SOC reports (or other compliance programs), reach out to your AWS representatives.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ninad Naik

Ninad is a Security Assurance Manager at Amazon Web Services, leading multiple security and privacy initiatives within AWS. Ninad holds a Master’s degree in Information Systems from Syracuse University, NY and a Bachelor’s of Engineering degree in Information Technology from Mumbai University, India. Ninad has 10 years of experience in security assurance and holds ITIL, CISA, CGEIT, and CISM certifications.

How to record a video of Amazon AppStream 2.0 streaming sessions

Post Syndicated from Nicolas Malaval original https://aws.amazon.com/blogs/security/how-to-record-video-of-amazon-appstream-2-0-streaming-sessions/

Amazon AppStream 2.0 is a fully managed service that lets you stream applications and desktops to your users. In this post, I’ll show you how to record a video of AppStream 2.0 streaming sessions by using FFmpeg, a popular media framework.

There are many use cases for session recording, such as auditing administrative access, troubleshooting user issues, or quality assurance. For example, you could publish administrative tools with AppStream 2.0, such as a Remote Desktop Protocol (RDP) client, to protect access to your backend systems (see How to use Amazon AppStream 2.0 to reduce your bastion host attack surface) and you may want to record a video of what your administrators do when accessing and operating backend systems. You may also want to see what a user did to reproduce an issue, or view activities in a call center setting, such as call handling or customer support, for review and training.

This solution is not designed or intended for people surveillance, or for the collection of evidence for legal proceedings. You are responsible for complying with all applicable laws and regulations when using this solution.

Overview and architecture

In this section, you can learn about the steps for recording AppStream 2.0 streaming sessions and see an overview of the solution architecture. Later in this post, you can find instructions about how to implement and test the solution.

AppStream 2.0 enables you to run custom scripts to prepare the streaming instance before the applications launch or after the streaming session has completed. Figure 1 shows a simplified description of what happens before, during, and after a streaming session.
 

Figure 1: Solution architecture

  1. Before the streaming session starts, AppStream 2.0 runs script A, which uses PsExec, a utility that enables administrators to run commands on local or remote computers, to launch script B. Script B then runs during the entire streaming session. PsExec can run the script as the LocalSystem account, a service account that has extensive privileges on a local system, while it interacts with the desktop of another session. Using the LocalSystem account, you can use FFmpeg to record the session screen and prevent AppStream 2.0 users from stopping or tampering with the solution, as long as they aren’t granted local administrator rights.
  2. Script B launches FFmpeg and starts recording the desktop. The solution uses the FFmpeg built-in screen grabber to capture the desktop across all the available screens (see the sketch after this list for an illustrative invocation).
  3. When FFmpeg starts recording, it captures the area covered by the desktop at that time. If the number of screens or the resolution changes, a portion of the desktop might be outside the recorded area. In that case, script B stops the recording and starts FFmpeg again.
  4. After the streaming session ends, AppStream 2.0 runs script C, which notifies script B that it must end the recording and close. Script B stops FFmpeg.
  5. Before exiting, script B uploads the video files that FFmpeg generated to Amazon Simple Storage Service (Amazon S3). It also stores user and session metadata in Amazon S3, along with the video files, for easy retrieval of session recordings.
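
As a rough idea of what the recording step could look like, here’s a minimal sketch that drives FFmpeg’s built-in Windows screen grabber (gdigrab); the frame rate, output path, and shutdown handling are illustrative assumptions, not the exact logic of script B:

    import subprocess

    FFMPEG = r"C:\SessionRecording\Bin\ffmpeg.exe"
    OUTPUT = r"C:\SessionRecording\Output\session.mp4"  # illustrative output path

    # Start FFmpeg's built-in Windows screen grabber (gdigrab) against the
    # whole desktop; the 10 frames-per-second rate is an illustrative choice
    recorder = subprocess.Popen(
        [FFMPEG, "-f", "gdigrab", "-framerate", "10", "-i", "desktop", OUTPUT],
        stdin=subprocess.PIPE,
    )

    # ...when the streaming session ends, send "q" so FFmpeg stops recording
    # and finalizes the video file cleanly
    recorder.communicate(input=b"q")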

For a more comprehensive understanding of how the session scripts works, you can refer to the GitHub repository that contains the solution artifacts, where I go into the details of each script.

Implementing and testing the solution

Now that you understand the architecture of this solution, you can follow the instructions in this section to implement this blog post’s solution in your AWS account. You will:

  1. Create a virtual private cloud (VPC), an S3 bucket and an AWS Identity and Access Management (IAM) role with AWS CloudFormation.
  2. Create an AppStream 2.0 image builder.
  3. Configure the solution scripts on the image builder.
  4. Specify an application to publish and create an image.
  5. Create an AppStream 2.0 fleet.
  6. Create an AppStream 2.0 stack.
  7. Create a user in the AppStream 2.0 user pool.
  8. Launch a streaming session and test the solution.

Step 1: Create a VPC, an S3 bucket, and an IAM role with AWS CloudFormation

For the first step in the solution, you create a new VPC where AppStream 2.0 will be deployed (or choose an existing VPC), a new S3 bucket to store the session recordings, and a new IAM role to grant AppStream 2.0 the necessary IAM permissions.

To create the VPC, the S3 bucket, and the IAM role with AWS CloudFormation

  1. Select the following Launch Stack button to open the CloudFormation console and create a CloudFormation stack from the template. You can change the Region where resources are deployed in the navigation bar.
     
    Select the Launch Stack button to launch the template

    The latest template can also be downloaded on GitHub.

  2. Choose Next. For VPC ID, Subnet 1 ID and Subnet 2 ID, you can optionally select a VPC and two subnets, if you want to deploy the solution in an existing VPC, or leave these fields blank to create a new VPC. Then follow the on-screen instructions. AWS CloudFormation creates the following resources:
    • (If you chose to create a new VPC) An Amazon Virtual Private Cloud (Amazon VPC) with an internet gateway attached.
    • (If you chose to create a new VPC) Two public subnets on this Amazon VPC with a new route table to make them publicly accessible.
    • An S3 bucket to store the session recordings.
    • An IAM role to grant AppStream 2.0 permissions to upload video and metadata files to Amazon S3.
  3. After the stack creation has completed, choose the Outputs tab in the CloudFormation console and note the values that the process returned: the name and Region of the S3 bucket, the name of the IAM role, the ID of the VPC, and the two subnets.

Step 2: Create an AppStream 2.0 image builder

The next step is to create a new AppStream 2.0 image builder. An image builder is a virtual machine that you can use to install and configure applications for streaming, and then create a custom image.

To create the AppStream 2.0 image builder

  1. Open the AppStream 2.0 console and select the Region in the navigation bar. Choose Get Started then Skip if you are new to the console.
  2. Choose Images in the left pane, and then choose Image Builder. Choose Launch Image Builder.
  3. In Step 1: Choose Image:
    1. Select the name of the latest AppStream 2.0 base image for the Windows Server version of your choice. You can find its name in the AppStream 2.0 base image version history. For example, at the time of writing, the name of the latest Windows Server 2019 base image is AppStream-WinServer2019-07-16-2020.
    2. Choose Next.
  4. In Step 2: Configure Image Builder:
    1. For Name, enter session-recording.
    2. For Instance Type, choose stream.standard.medium.
    3. For IAM role, select the IAM role that AWS CloudFormation created.
    4. Choose Next.
  5. In Step 3: Configure Network:
    1. Choose Default Internet Access to provide internet access to your image builder.
    2. For VPC, select the ID of the VPC, and for Subnet 1, select the ID of Subnet 1.
    3. For Security group(s), select the ID of the security group. Refer back to the Outputs tab of the CloudFormation stack if you are unsure which VPC, subnet and security group to select.
    4. Choose Review.
  6. In Step 4: Review, choose Launch.

Step 3: Configure the solution scripts on the image builder

The session scripts to run before streaming sessions start or after sessions end are specified within an AppStream 2.0 image. In this step, you install the solution scripts on your image builder and specify the scripts to run in the session scripts configuration file.

To configure the solution scripts on the image builder

  1. Wait until the image builder is in the Running state, and then choose Connect.
  2. Within the AppStream 2.0 streaming session, on the Local User tab, choose Administrator.
  3. To install the solution scripts:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell, and then choose Run as Administrator.
    3. Run the following commands in the PowerShell terminal to create the required folders, and to copy the solution scripts and the session scripts configuration file from public objects in GitHub to the local disk. If you aren’t using Google Chrome or the AppStream 2.0 client, you need to choose the Clipboard icon in the AppStream 2.0 navigation bar, and then select Paste to remote session.
      # Create the folder structure for the scripts, recordings, and binaries
      New-Item -Path C:\SessionRecording -ItemType directory
      New-Item -Path C:\SessionRecording\Scripts -ItemType directory
      New-Item -Path C:\SessionRecording\Output -ItemType directory
      New-Item -Path C:\SessionRecording\Bin -ItemType directory
      
      # Replace the inherited ACL on C:\SessionRecording with explicit full-control
      # rules for Administrators, SYSTEM, and ImageBuilderAdmin only, so that
      # streaming users can't stop or tamper with the recordings
      $Acl = Get-Acl C:\SessionRecording
      $Acl.SetAccessRuleProtection($true,$false)
      $AccessRule1 = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule1)
      $AccessRule2 = New-Object System.Security.AccessControl.FileSystemAccessRule("SYSTEM","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule2)
      $AccessRule3 = New-Object System.Security.AccessControl.FileSystemAccessRule("ImageBuilderAdmin","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule3)
      Set-Acl C:\SessionRecording $Acl
      
      # Download the session scripts and the session scripts configuration file from GitHub
      [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_a.ps1 -OutFile C:\SessionRecording\Scripts\script_a.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_b.ps1 -OutFile C:\SessionRecording\Scripts\script_b.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_c.ps1 -OutFile C:\SessionRecording\Scripts\script_c.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/variables.ps1 -OutFile C:\SessionRecording\Scripts\variables.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/config.json -OutFile C:\AppStream\SessionScripts\config.json
      

    4. Close the PowerShell terminal.
  4. To edit the variables.ps1 file with your own values:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell ISE, and then choose Run as Administrator.
    3. Choose File, then Open. Navigate to the folder C:\SessionRecording\Scripts\ and open the file variables.ps1.
    4. Edit the name and the Region of the S3 bucket with the values returned by AWS CloudFormation in the Outputs tab. You can also customize the number of frames per second and the maximum duration in seconds of each video file.
    5. Save and close the file.
  5. To download the latest FFmpeg and PsExec executables to the image builder:
    1. From the image builder desktop, open the Firefox desktop icon.
    2. Navigate to the URL https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-github and choose the link that contains essentials_build.zip to download FFmpeg. Choose Open to download and extract the ZIP archive. Copy the file ffmpeg.exe in the bin folder of the ZIP archive to C:\SessionRecording\Bin\.

      Note: FFmpeg only provides source code and compiled packages are available at third-party locations. If the link above is invalid, go to the FFmpeg download page and follow the instructions to download the latest release build for Windows.

    3. Navigate to the URL https://download.sysinternals.com/files/PSTools.zip to download PsExec. Choose Open to download and extract the ZIP archive. Copy the file PsExec64.exe to C:\SessionRecording\Bin\. You must agree with the license terms, because the solution in this blog post automatically accepts them.
    4. Close Firefox.

Step 4: Specify an application to publish and create an image

In this step, you publish Firefox on your image builder and create an AppStream 2.0 custom image. I chose Firefox because it’s easy to test later in the procedure. You can choose other or additional applications to publish, if needed.

To specify the application to publish and create the image

  1. From the image builder desktop, open the Image Assistant icon available on the desktop. Image Assistant guides you through the image creation process.
  2. In 1. Add Apps:
    1. Choose + Add App.
    2. Enter the location C:\Program Files (x86)\Mozilla Firefox\firefox.exe to add Firefox.
    3. Choose Open. Keep the default settings and choose Save.
    4. Choose Next multiple times until you see 4. Optimize.
  3. In 4. Optimize:
    1. Choose Launch.
    2. Choose Continue until you can see 5. Configure Image.
  4. In 5. Configure Image:
    1. For Name, enter session-recording for your image name.
    2. Choose Next.
  5. In 6. Review:
    1. Choose Disconnect and Create Image.
  6. Back in the AppStream 2.0 console:
    1. Choose Images in the left pane, and then choose the Image Registry tab.
    2. Change All Images to Private and shared with others. You will see your new AppStream 2.0 image.
    3. Wait until the image is in the Available state. This can take more than 30 minutes.

Step 5: Create an AppStream 2.0 fleet

Next, create an AppStream 2.0 fleet that consists of streaming instances that run your custom image.

To create the AppStream 2.0 fleet

  1. In the left pane of the AppStream 2.0 console, choose Fleets, and then choose Create Fleet.
  2. In Step 1: Provide Fleet Details:
    1. For Name, enter session-recording-fleet.
    2. Choose Next.
  3. In Step 2: Choose an Image:
    1. Select the name of the custom image that you created with the image builder.
    2. Choose Next.
  4. In Step 3: Configure Fleet:
    1. For Instance Type, select stream.standard.medium.
    2. For Fleet Type, choose Always-on.
    3. For Stream view, you can choose to stream either the applications or the entire desktop.
    4. For IAM role, select the IAM role.
    5. Keep the defaults for all other parameters, and choose Next.
  5. In Step 4: Configure Network:
    1. Choose Default Internet Access to provide internet access to your image builder.
    2. Select the VPC, the two subnets, and the security group.
    3. Choose Next.
  6. In Step 5: Review, choose Create.
  7. Wait until the fleet is in the Running state.

Step 6: Create an AppStream 2.0 stack

Create an AppStream 2.0 stack and associate it with the fleet that you just created.

To create the AppStream 2.0 stack

  1. In the left pane of the AppStream 2.0 console, choose Stacks, and then choose Create Stack.
  2. In Step 1: Stack Details:
    1. For Name, enter session-recording-stack.
    2. For Fleet, select the fleet that you created.
  3. Then follow the on-screen instructions and keep the defaults for all other parameters until the stack is created.

Step 7: Create a user in the AppStream 2.0 user pool

The AppStream 2.0 user pool provides a simplified way to manage access to applications for your users. In this step, you create a user in the user pool that you will use later in the procedure to test the solution.

To create the user in the AppStream 2.0 user pool

  1. In the left pane of the AppStream 2.0 console, choose User Pool, and then choose Create User.
  2. Enter your email address, first name, and last name. Choose Create User.
  3. Select the user you just created. Choose Actions, and then choose Assign stack.
  4. Select the stack, and then choose Assign stack.

Step 8: Test the solution

Now, sign in to AppStream 2.0 with the user that you just created, launch a streaming session, and check that the session recordings are delivered to Amazon S3.

To launch a streaming session and test the solution

  1. AppStream 2.0 sends you a notification email. Connect to the sign-in portal by entering the information included in the notification email, and set a permanent password.
  2. Sign in to AppStream 2.0 by entering your email address and the permanent password.
  3. After you sign in, you can view the application catalog. Choose Firefox to launch a Firefox window and browse any websites you’d like.
  4. Choose the user icon at the top-right corner, and then choose Logout to end the session.

In the Amazon S3 console, navigate to the S3 bucket to browse the session recordings. For the session you just terminated, you can find one text file that contains user and instance metadata, and one or more video files that you can download and play with a media player like VLC.
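
You can also list the recordings from the command line. The bucket name below is a placeholder; substitute the name of the bucket that CloudFormation created for you:

# List all session recordings in the bucket
aws s3 ls s3://session-recording-bucket/ --recursive --human-readable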

Step 9: Clean up resources

You can now clean up the resources that were created in this walkthrough, including the CloudFormation stacks.

To clean up resources

  1. To delete the image builder:
    1. In the left pane of the AppStream 2.0 console, choose Images, and then choose Image Builder.
    2. Select the image builder. Choose Actions, then choose Delete.
  2. To delete the stack:
    1. In the left pane of the AppStream 2.0 console, choose Stacks.
    2. Select the stack. Choose Actions, then choose Disassociate Fleet. Choose Disassociate to confirm.
    3. Choose Actions, then choose Delete.
  3. To delete the fleet:
    1. In the left pane of the AppStream 2.0 console, choose Fleets.
    2. Select the fleet. Choose Actions, then choose Stop. Choose Stop to confirm.
    3. Wait until the fleet is in the Stopped state.
    4. Choose Actions, then choose Delete.
  4. To disable the user in the user pool:
    1. In the left pane of the AppStream 2.0 console, choose User Pool.
    2. Select the user. Choose Actions, then choose Disable user. Choose Disable User to confirm.
  5. Empty the S3 bucket that CloudFormation created (see How do I empty an S3 bucket?). Repeat the same operation with the buckets that AppStream 2.0 created, whose names start with appstream-settings, appstream-logs, and appstream2.
  6. Delete the CloudFormation stack on the AWS CloudFormation console (see Deleting a stack on the AWS CloudFormation console).
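
If you prefer to script the final cleanup, a minimal sketch follows; the bucket and stack names are placeholders for the actual names in your account:

# Empty the bucket first; CloudFormation cannot delete a non-empty bucket
aws s3 rm s3://session-recording-bucket --recursive

# Then delete the CloudFormation stack
aws cloudformation delete-stack --stack-name session-recording-stack-name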

Conclusion

In this blog post, I showed you a way to record AppStream 2.0 sessions to video files for administrative access auditing, troubleshooting, or quality assurance. While this blog post focuses on Amazon AppStream 2.0, you could adapt and deploy the solution on Amazon WorkSpaces or on Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

For a deep-dive explanation of how the solution scripts function, you can refer to the GitHub repository that contains the solution artifacts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon AppStream 2.0 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nicolas Malaval

Nicolas is a Solution Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technology and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.

Combining encryption and signing with AWS asymmetric keys

Post Syndicated from J.D. Bean original https://aws.amazon.com/blogs/security/combining-encryption-and-signing-with-aws-asymmetric-keys/

In this post, I discuss how to use AWS Key Management Service (KMS) to combine asymmetric digital signature and asymmetric encryption of the same data.

The addition of support for asymmetric keys in AWS KMS has exciting use cases for customers. The ability to create, manage, and use public and private key pairs with KMS enables you to perform digital signing operations using RSA and Elliptic Curve Cryptography (ECC) keys. AWS KMS asymmetric keys can also be used to perform encryption and decryption operations using RSA keys. You can use these features together to digitally sign and encrypt the same data.

Another notable property of AWS KMS asymmetric keys is that they enable disconnected use cases. For example, AWS KMS asymmetric keys can be used to cryptographically verify a digital signature client-side without the need for a network connection. AWS KMS asymmetric keys also enable scenarios where customers can use KMS to securely manage decryption of data that has been encrypted by a partner’s system that does not integrate with AWS APIs or have access to AWS account credentials. For the sake of simplicity, however, the example that I discuss in this post describes a connected use case where all cryptographic actions are performed server-side in AWS KMS using AWS credentials. The use of AWS KMS asymmetric keys throughout this post allows the overall approach to be adapted to disconnected and/or non-AWS-integrated use cases.

Overview

This post contains three basic steps.

  1. Create and configure AWS asymmetric customer master keys (CMKs), AWS Identity and Access Management (IAM) roles, and key policies.
  2. Use your asymmetric CMKs to encrypt and sign a sample message in the role of a sender.
  3. In the role of a receiver, use your asymmetric CMKs in AWS KMS to decrypt the sample message archive that you generated in the previous procedure and verify its signature.

Prerequisites

The commands I use in this tutorial were tested using AWS Command Line Interface (AWS CLI) version 2.50 on Amazon Linux 2. To run these commands in your own local environment, ensure that you have first installed and updated the AWS CLI.

I assume you have at least one administrator identity available to you that has broad rights for creating and assuming roles, as well as creating, managing, and using KMS keys. This can be a federated identity (for example, from your corporate identity provider or from a social identity), or it can be an AWS IAM user. Where no AWS identity is mentioned, I assume that you will be accessing the AWS Management Console or the AWS CLI using this administrator identity.

For simplicity, I create the KMS keys in the same Region as each other. You must specify an AWS Region when using the AWS CLI, either explicitly or by setting a default Region. Before beginning, you should select an AWS Region to work in, such as US East (N. Virginia). If you have not configured the AWS CLI in your environment, please review the Configuration basics section of the AWS Command Line Interface User Guide for instructions. You may revert this configuration once you have finished if you do not wish to continue using a default Region with your AWS CLI. Take note of your selected Region. When working in the AWS Console, if you do not see resources, such as AWS KMS keys, that you expect, you may want to confirm that you are viewing resources in your chosen Region. For more information on selecting your Region in the AWS Console, see Choosing a Region in the AWS Management Console Getting Started Guide.
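
For example, you can check your current default Region, and set one if needed, as follows; us-east-1 here is only an example value:

# Show the default Region currently configured for the AWS CLI
aws configure get region

# Set a default Region for subsequent commands
aws configure set region us-east-1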

Create and configure resources

In the first phase of this tutorial you create and configure two asymmetric AWS KMS CMKs, two AWS IAM roles, and configure the key policies for both of your KMS CMKs to grant permissions to the roles, as shown in the following figure.
 

Figure 1: Create keys, roles, and key policies

Create asymmetric signing and encryption key pairs

In the first step, you create two asymmetric CMKs. One is configured for signing and verifying digital signatures, while the other is configured for encrypting and decrypting data.

Note: The CMKs configured for this post are examples. RSA and elliptic curve CMK key specs differ in a variety of dimensions. The RSA or elliptic curve key spec that you choose might be determined by your security standards or the requirements of your task. Different CMK key specs are priced differently and are subject to different request quotas because they each have different performance profiles. In general, use RSA or ECC keys with the highest security level that is practical and affordable for your task. For more information on CMK configuration options, please review the How to choose your CMK configuration section of the KMS Developer Guide.

To create a CMK for encryption and decryption

  1. Use the KMS CreateKey API. Pass RSA_4096 for the CustomerMasterKeySpec parameter and ENCRYPT_DECRYPT for the KeyUsage parameter in the AWS CLI example command below in order to generate an RSA 4096 key pair for encryption and decryption using AWS KMS.
    aws kms create-key --customer-master-key-spec RSA_4096 \
        --key-usage ENCRYPT_DECRYPT \
        --description "Sample Digital Encryption Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value in this object.

  2. As a best practice, assign an alias for your key. Use the following command to assign an alias of sample-encrypt-decrypt-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID). Mapping a human-readable alias to the KeyID will make it easier to identify, use, and manage.
    aws kms create-alias \
        --alias-name alias/sample-encrypt-decrypt-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    
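To confirm that an alias resolves to the key you expect, you can describe the key through its alias. This optional check is a small sketch; the query simply surfaces the fields relevant to this tutorial:

aws kms describe-key \
    --key-id alias/sample-encrypt-decrypt-key \
    --query 'KeyMetadata.[KeyId,CustomerMasterKeySpec,KeyUsage]' \
    --output text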

To create a CMK for signature and verification

  1. Use the KMS CreateKey API. Pass ECC_NIST_P521 for the CustomerMasterKeySpec parameter and SIGN_VERIFY for the KeyUsage parameter in the AWS CLI example command below in order to generate an elliptic curve (ECC) key pair for signature creation and verification using AWS KMS.
    aws kms create-key --customer-master-key-spec ECC_NIST_P521 \
        --key-usage SIGN_VERIFY \
        --description "Sample Digital Signature Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value.

  2. Use the following command to assign an alias of sample-sign-verify-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID).
    aws kms create-alias \
        --alias-name alias/sample-sign-verify-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

Create sender and receiver roles

For the next step of this tutorial, you create two AWS principals. Use the steps that follow to create two roles—a sender principal and a receiver principal. Later, you will grant permissions to perform private key operations (sign and decrypt) and public key operations (verify and encrypt) to these roles.

To create and configure the roles

  1. Navigate to the AWS Identity and Access Management (IAM) Create role console dialog that allows entities in a specified account to assume the role. Enter your Account ID and choose Next, as shown in the following figure.

    Note: If you don’t know your AWS account ID, please read Finding your AWS account ID in the AWS IAM User Guide for guidance on how to obtain this information.

    Figure 2: Enter your account ID to begin creating a role in AWS IAM

  2. Select Next through the next two screens.

    Note: By choosing Next through these dialogs, you do not attach an IAM permissions policy or a tag to this new role.

  3. On the final screen, enter a Role name of SenderRole and a Role description of your choice, as shown in the following figure.
     
    Figure 3: Create the sender role

  4. Choose Create role to finish creating the sender role.
  5. To create the receiver role, repeat the preceding role creation process. However, in step 3, substitute the name ReceiverRole for SenderRole.
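
If you would rather create both roles from the AWS CLI, the following sketch produces an equivalent result. The trust policy allows principals in your account to assume the roles; replace the account ID value of 111122223333 with your own:

# Write a trust policy that lets principals in your account assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name SenderRole \
    --assume-role-policy-document file://trust-policy.json

aws iam create-role --role-name ReceiverRole \
    --assume-role-policy-document file://trust-policy.json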

Configure key policy permissions

Best practice is to adhere to the principle of least privilege and provide each AWS principal with the minimal permissions necessary to perform its tasks. The sender and receiver roles that you created in the previous step currently have no permissions in your account. For this scenario, the receiver principal must be granted permission to verify digital signatures and decrypt data in AWS KMS using your asymmetric CMKs, and the sender principal must be granted permission to create digital signatures and encrypt data in KMS using your asymmetric CMKs.

To provide access control permissions for AWS KMS actions to your AWS principals, attach a key policy to each of your CMKs.

Modify the CMK key policy

For the sample-encrypt-decrypt-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Encrypt permissions and the IAM role for the receiver principal (ReceiverRole) kms:Decrypt permissions in the CMK key policy.

To modify the CMK key policy (console)

  1. Navigate to the AWS KMS page in the AWS Console and select Customer managed keys.
  2. Select your sample-encrypt-decrypt-key CMK.
  3. In the key policy section, choose Edit.
  4. To allow your receiver principal to use the CMK to decrypt data encrypted under that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for decryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Decrypt",
        "Resource": "*"
    }
    

  5. To allow your sender principal to use the CMK to encrypt data, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for encryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Encrypt",
        "Resource": "*"
    }
    

  6. Choose Save changes.

Note: The kms:Encrypt permission is sufficient to permit the sender principal to encrypt small amounts of arbitrary data using your CMK directly.

Grant sign and verify permissions to the CMK key policy

For the sample-sign-verify-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Sign permissions in the CMK key policy and the IAM role for the receiver principal (ReceiverRole) kms:Verify permissions in the CMK key policy.

To grant sign and verify permissions

  1. Using the same process as above, navigate to the key policy edit dialog for the sample-sign-verify-key CMK in the AWS console.
  2. To allow your sender principal to use the CMK to create digital signatures, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for digital signing",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Sign",
        "Resource": "*"
    }
    

  3. To allow your receiver principal to use the CMK to verify signatures created by that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for digital signature verification",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Verify",
        "Resource": "*"
    }
    

  4. Choose Save changes.

Key permissions summary

When these key policy edits have been completed, the sender principal:

  • Will have permissions to encrypt data using the sample-encrypt-decrypt-key CMK and generate digital signatures using the sample-sign-verify-key CMK.
  • Will not have permissions to decrypt or to verify signatures using the CMKs.

The receiver principal:

  • Will have permissions to decrypt data which has been encrypted using the sample-encrypt-decrypt-key CMK and to verify signatures created using the sample-sign-verify-key CMK.
  • Will not have permissions to encrypt or to generate signatures using the CMKs.

Figure 4: Summary of key policy permissions

Signing and encrypting a sample message

So far, you’ve created two asymmetric CMKs, created a set of sender and receiver roles, and configured permissions for those roles in each of your CMK key policies. In the second phase of this tutorial, you assume the role of sender and use your asymmetric signature and verification CMK to sign a sample message. You then bundle the sample message and its corresponding digital signature together into an archive and use your encryption and decryption asymmetric CMK to encrypt the archive.
 

Figure 5: Creating a message signature and encrypting the message along with its signature

Note: The order of operations in this process is that the message is first signed, and then the signature and the message are encrypted together. This order is intentional. When a message is signed and then encrypted, neither the contents nor the identity of the sender is available to unauthorized third parties. If the order of operations were reversed, however, and a message was first encrypted and then signed, it could leak information about the sender’s identity to unauthorized third parties. Moreover, when a message is encrypted and then signed, an unauthorized third party with access to the files could discard the authentic signature created by the sender and replace it with a valid signature created by their own key. This creates the potential for a third party to deceptively create the appearance that they are the legitimate sender of the message and exploit that misperception further.

Assume the sender role

Start by assuming the sender role. In order to successfully assume a role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the sender role.

To assume the sender role

  1. Run the following command, but be sure to replace the account ID value of 111122223333 with your account ID:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/SenderRole \
        --role-session-name AWSCLI-Session
    

  2. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  3. Confirm that you’ve successfully assumed the sender role by issuing:
    aws sts get-caller-identity
    

    Note: If the output of this command contains the text assumed-role/SenderRole, then you’ve successfully assumed the sender role.
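
If you have the jq utility available, you can combine steps 1 and 2 and avoid copying credentials by hand. This is a convenience sketch, not a requirement; replace the account ID value of 111122223333 with your own:

# Capture the temporary credentials and export them in one pass
creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/SenderRole \
    --role-session-name AWSCLI-Session \
    --query Credentials --output json)

export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r .SessionToken)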

Create a message

Now, create a sample message file called message.json.

To create a message

Run the following command to create message.json with the following content. Note the single quotes, which preserve the double quotes inside the JSON:

echo '
{
    "message": "The Magic Words are Squeamish Ossifrage",
    "sender": "Sender Principal"
}
' > ./message.json

Create a digital signature

Creating and verifying a digital signature for the message provides confidence that the message contents haven’t been altered after being sent. This characteristic is known as integrity. Furthermore, when access to a signing key is scoped to a particular principal, creating and verifying a digital signature for the message provides confidence in the sender’s identity. This characteristic is known as authenticity. Finally, a high degree of confidence in both the integrity and authenticity of a message limits the plausible ability of a sender to fraudulently deny having signed a message. This characteristic is known as non-repudiation.

To create a digital signature

Run the following command to create a digital signature for message.json:

aws kms sign \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --signing-algorithm ECDSA_SHA_512 \
    --message fileb://message.json \
    --output text \
    --query Signature | base64 --decode > message.sig

This generates an independent digital signature file, message.sig, for message.json. Any modification to the contents of message.json, such as changing the sender or message fields, will now cause signature validation of message.sig to fail for message.json.
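
As mentioned in the introduction, AWS KMS asymmetric keys also support disconnected verification. As an optional check, and assuming your current identity has kms:GetPublicKey permission on the signing CMK, you can download the public key once and verify the signature locally with OpenSSL rather than calling the KMS Verify API:

# Download the public half of the signing key pair (DER-encoded)
aws kms get-public-key \
    --key-id alias/sample-sign-verify-key \
    --output text --query PublicKey | base64 --decode > pubkey.der

# Convert it to PEM and verify the DER-encoded ECDSA signature offline
openssl pkey -pubin -inform DER -in pubkey.der -out pubkey.pem
openssl dgst -sha512 -verify pubkey.pem -signature message.sig message.json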

Encrypt the message and signature

Even with the benefits of a digital signature, the message could still be viewed by any party with access to the file. In order to provide confidence that the message contents aren’t exposed to unauthorized parties, you can encrypt the message. This characteristic is known as confidentiality. In order to retain the benefits of your digital signature you can encrypt the message and corresponding signature together in a single package.

To encrypt the message and signature

  1. Combine your message and signature into an archive. For example, with the GNU Tar utility you can issue the following:
    tar -czvf message.tar.gz message.sig message.json
    

    This will create a new archive file named message.tar.gz containing both your message and message signature.

  2. Encrypt the archive using AWS KMS. To do so, issue the following command:
    aws kms encrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --plaintext fileb://message.tar.gz \
        --output text \
        --query CiphertextBlob | base64 --decode > message.enc
    

    This will output a message.enc file containing an encrypted copy of the message.tar.gz archive. Note that encrypting directly with an RSA_4096 CMK and the RSAES_OAEP_SHA_256 algorithm limits the plaintext to 446 bytes; this small archive fits, but larger payloads would require envelope encryption with a data key.

Decrypting and verifying a sample message

Now that you’ve created, signed, and encrypted a message, let’s change gears and see what working with this message.enc file is like from the perspective of a receiving party. In the final phase of this tutorial, you assume the role of receiver and use your asymmetric CMKs to decrypt the encrypted message archive and verify the digital signature that you created. Finally, you view your message. The process is shown in the following figure.
 

Figure 6: Decrypting a message archive and verifying the message signature

Assume the receiver role

Assume the receiver role so that you can simulate receiving a signed and encrypted message. As before, in order to assume the receiver role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the receiver role.

To assume the receiver role

  1. Copy the message.enc file to a new directory to create a clean working space and navigate there in a terminal session.
  2. Assume your receiver role. To do so, execute the following command, replacing the account ID value of 111122223333 with your own:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/ReceiverRole \
        --role-session-name AWSCLI-Session
    

  3. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  4. Confirm that you have successfully assumed the receiver role by issuing:
    aws sts get-caller-identity
    

If the output of this command contains the text assumed-role/ReceiverRole, then you have successfully assumed the receiver role.

Decrypt the encrypted message archive in AWS KMS

Decrypt the encrypted message archive to access the plaintext of the message and message signature files.

To decrypt the encrypted message archive

  1. Issue the following command:
    aws kms decrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --ciphertext-blob fileb://message.enc \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --output text \
        --query Plaintext | base64 --decode > message.tar.gz
    

  2. This will create an unencrypted message.tar.gz file that you can unpack with:
    tar -xzvf message.tar.gz
    

This, in turn, extracts the archive contents, message.sig and message.json, into your working directory.

Verify the message signature

To verify the signature on the message, issue the following command:

aws kms verify \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --message fileb://message.json \
    --signing-algorithm ECDSA_SHA_512 \
    --signature fileb://message.sig

In the response, you should see that SignatureValid is marked true, indicating that the signature has been verified using the specified sample-sign-verify-key CMK that you granted the sender principal permission to generate signatures with.

View the message

Finally, open message.json and view the file’s contents by issuing the following command:

less message.json

You will see that the contents of the file have not been modified and still read:

{ 
    "message": "The Magic Words are Squeamish Ossifrage", 
    "sender": "Sender Principal" 
}

Note: Be careful to avoid making any changes to the contents of this file. Even a minor modification of the message contents will compromise the integrity of the message and cause future attempts at signature validation using your message.sig file to fail.

Summary

In this tutorial, you signed and encrypted data using two AWS KMS asymmetric CMKs and later decrypted and verified your signature using those CMKs.

You first created two asymmetric CMKs in AWS KMS, one for creating and verifying digital signatures and the other for encrypting and decrypting data. You then configured key policy permissions for your sender and receiver principals. Acting as your sender principal, you digitally signed a message in AWS KMS, added the message and signature to an archive, and then encrypted that archive in AWS KMS. Next, you assumed your receiver role and decrypted the archive in AWS KMS, viewed your message, and verified its signature in AWS KMS.

To learn more about the asymmetric keys feature of AWS KMS, please read the AWS KMS Developer Guide. If you have questions about the asymmetric keys feature, please start a new thread on the AWS KMS forum. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

J.D. Bean

J.D. is a Senior Solutions Architect at AWS working with public sector organizations and financial institutions based out of New York City. His interests include security, privacy, and compliance. He is passionate about his work enabling AWS customers’ successful cloud journeys. J.D. holds a Bachelor of Arts from The George Washington University and a Juris Doctor from New York University School of Law.

Verified, episode 2 – A Conversation with Emma Smith, Director of Global Cyber Security at Vodafone

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/verified-episode-2-conversation-with-emma-smith-director-of-global-cyber-security-at-vodafone/

Over the past 8 months, it’s become more important for us all to stay in contact with peers around the globe. Today, I’m proud to bring you the second episode of our new video series, Verified: Presented by AWS re:Inforce. Even though we couldn’t be together this year at re:Inforce, our annual security conference, we still wanted to share some of the conversations with security leaders that would have taken place at the conference. The series showcases conversations with security leaders around the globe. In episode two, I’m talking to Emma Smith, Vodafone’s Global Cyber Security Director.

Vodafone is a global technology communications company with an optimistic culture. Their focus is connecting people and building the digital future for society. During our conversation, Emma detailed how the core values of the Global Cyber Security team were inspired by the company. “We’ve got a team of people who are ultimately passionate about protecting customers, protecting society, protecting Vodafone, protecting all of our services and our employees.” Emma shared experiences about the evolution of the security organization during her past 5 years with the company.

We were also able to touch on one of Emma’s passions, diversity and inclusion. Emma has worked to implement diversity and drive a policy of inclusion at Vodafone. In June, she was named Diversity Champion in the SC Awards Europe. In her own words: “It makes me realize that my job is to smooth the way for everybody else and to try and remove some of those obstacles or barriers that were put in their way… it means that I’m really passionate about trying to get a very diverse team in security, but also in Vodafone, so that we reflect our customer base, so that we’ve got diversity of thinking, of backgrounds, of experience, and people who genuinely feel comfortable being themselves at work—which is easy to say but really hard to create that culture of safety and belonging.”

Stay tuned for future episodes of Verified: Presented by AWS re:Inforce here on the AWS Security Blog. You can watch episode one, an interview with Jason Chan, Vice President of Information Security at Netflix on YouTube. If you have an idea or a topic you’d like covered in this series, please drop us a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

How to secure your Amazon WorkSpaces for external users

Post Syndicated from Olivia Carline original https://aws.amazon.com/blogs/security/how-to-secure-your-amazon-workspaces-for-external-users/

In response to the current shift towards a remote workforce, companies are providing greater access to corporate applications from a range of different devices. Amazon WorkSpaces is a desktop-as-a-service solution that can be used to quickly deploy cloud-based desktops to your external users, including employees, third-party vendors, and consultants. Amazon WorkSpaces desktops are accessible from anywhere with an internet connection. In this blog post, I review some key security controls that you can use to architect your Amazon WorkSpaces environment to provide external users access to your corporate applications and data in a way that satisfies your unique security and compliance objectives.

Amazon WorkSpaces provides a virtual desktop infrastructure that removes the need for upfront infrastructure expenditure. Instead, you can pay for Windows or Linux desktop environments as you need them. These environments can be provisioned in a few minutes, and enable you to scale up to thousands of desktops that can be accessed from wherever your users are located.

As part of the shared responsibility model, security is a shared responsibility between Amazon Web Services (AWS) and you. AWS is responsible for protecting the infrastructure that runs the AWS services, while you are responsible for securing your data in AWS through appropriate permissions and WorkSpace management as outlined in the Best Practices for Deploying Amazon WorkSpaces whitepaper. Amazon WorkSpaces has been independently assessed to meet the requirements of a wide range of compliance programs, including IRAP, SOC, PCI DSS, FedRAMP, and HIPAA.

Prerequisites

Define user groups

A user group is a collection of people who all have the same security rights and permissions. Leveraging user groups helps you to identify the types of access and your requirements for user authentication. How you define your user groups should reflect how you classify your data and the access controls associated with the classifications. A common approach is to begin by separating your internal (employees) and external (vendors and consultants) users. Classifying your users into different groups helps you to define your security controls. For example, the security and configuration of your external users’ devices will be different from the configuration for your internal users’ devices. The identification process also helps to ensure that you’re following the principle of least privilege by limiting access to certain applications or resources. These user groups are the building blocks for designing the rest of your security controls, including the directories, access controls, and security groups.

In this blog post, I walk you through the security configurations for the following example external user groups. How you configure security for your user groups will depend on your own security requirements.

Example user groups

Internal users: Employees who need access to company resources from any location. In addition to having access to the internet and the internal network from any supported device, internal users have administrator access on their virtual desktops so they can install applications.

External users: Third-party vendors and consultants who need access to specific websites that are inside the corporate network. They have fewer permissions and tighter guardrails on their virtual desktops and can only access resources through trusted devices. External users should have access to only pre-installed applications and not be able to install additional applications onto their WorkSpaces.

At this stage, it’s okay to separate your user groups broadly based on the preceding requirements. Later, you can configure fine-grained access controls for individual users.

Configure your directories

Amazon WorkSpaces uses directories to manage information and configuration of WorkSpaces and users. Each WorkSpace that you provision exists within a directory. There are a couple of different options for configuring the directory. Amazon Workspaces can create and manage a directory for you so that users are entered into that directory when you provision a WorkSpace. As an alternative, you can integrate WorkSpaces with an existing, on-premises Microsoft Active Directory (AD) so your users can use the credentials they already know to access applications.

Within Amazon WorkSpaces, directories play a large part in how access to WorkSpaces is configured, because they store and manage information for your WorkSpaces and users. Based on the preceding two example user groups, let’s split your users’ WorkSpaces across two directories. That will help you to establish different access control settings for the two groups.

To define the two directories, you must set up the directories within AWS Directory Service. As previously mentioned, there are various approaches to handling user management that depend on your existing user directories and requirements. For this example, you can configure two Simple AD directories—one for internal users and one for external users. Handling the external users in a separate directory allows you to ensure your user groups are configured with least privilege. With this approach, external users can still be given access to objects inside the internal directory through a trust if required, but can be configured with stricter access controls than users inside the internal directory.

A comprehensive guide to setting up your directories is available in the Amazon WorkSpaces administration guide and outlines the steps to configure a directory using AWS Managed Microsoft AD, Simple AD, or AD Connector.

Configure security settings

After you define what privileges and access controls you want in place for your external users and configure the directories you need, it’s time to establish the security controls for your WorkSpaces. This blog will focus on the external users’ security configurations from the prerequisites. Use the following steps to implement the security requirements:

  1. Establish security groups
  2. Disable local administrator rights
  3. Configure IP access control groups
  4. Define trusted devices
  5. Configure monitoring of WorkSpaces

Establish security groups

With your two AD directories configured, you can start implementing the security controls for your external users. Your Amazon WorkSpaces are configured within a logically isolated network known as an Amazon Virtual Private Cloud (Amazon VPC). A key concept within Amazon VPC is security groups, which act as virtual firewalls to control inbound and outbound traffic to the virtual desktops. A properly configured security group can limit access to resources in your network or to the internet at the individual WorkSpace level or at the directory level.

To ensure that your external users can access only the network resources you want them to, you can define security groups with restrictive network access settings. One approach is to configure security groups so that your external users have only HTTP and HTTPS access to specific internal websites at trusted IP addresses. To define more fine-grained access control for individual users, you can define another restrictive security group and attach it to an individual user’s WorkSpace. This way, you can use a single directory to handle many different users with different network security requirements and ensure that third-party users only have access to authorized data and systems. In addition to security groups, you can use your preferred host-based firewall on a given WorkSpace to limit network access to resources within the VPC.

To establish and configure security groups

  1. In the Amazon WorkSpaces menu, select Directories from the left menu. Choose the directory you created for your external users. Select Actions and then Update Details as shown in the following figure.
     
    Figure 1: Updating details of your directory

  2. In the Update Directory Details screen that appears, select the down arrow next to Security Group to expand the section. Select Create New next to the dropdown menu to configure a new security group.
     
    Figure 2: Adding a security group to your directory

  3. In the next window, select Create security group.
  4. Enter a descriptive name for the Security group name and a description for the security group in Description. For example, the name could be external-workspaces-users-sg.
  5. Change the VPC using the dropdown menu to the VPC hosting the WorkSpaces.
  6. In the Inbound rules section, leave the rules as default. The default configuration blocks all inbound traffic except for return traffic from sessions that the WorkSpace has already established.
  7. In the Outbound rules section, configure the following settings:
    1. Select Delete the existing outbound rule.
    2. Select Add rule.
    3. Set Type to HTTP.
    4. Leave Protocol as TCP and Port range as 80.
    5. Change Destination to Custom and enter the appropriate range based on where your internal resources are located.
    6. Select Add rule again.
    7. Set Type to HTTPS.
    8. Leave Protocol as TCP and Port range as 443.
    9. Change Destination to Custom and enter the appropriate range based on where your internal resources are located.

    Figure 3: Configuring your security groups

  8. Select Create security group.
  9. Return to the WorkSpaces directory tab and select Refresh to see the newly created security group.
  10. Select Update and Exit.
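
If you manage your infrastructure from scripts, you can create an equivalent security group with the AWS CLI. This is a minimal sketch that mirrors the console configuration above; the VPC ID and the 10.0.0.0/16 destination range are placeholders for the values in your environment:

# Create the security group in the VPC that hosts the WorkSpaces
sg_id=$(aws ec2 create-security-group \
    --group-name external-workspaces-users-sg \
    --description "Restrictive egress for external WorkSpaces users" \
    --vpc-id vpc-1234567890abcdef0 \
    --query GroupId --output text)

# Remove the default allow-all outbound rule
aws ec2 revoke-security-group-egress --group-id "$sg_id" \
    --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

# Allow HTTP and HTTPS only toward the internal web servers
aws ec2 authorize-security-group-egress --group-id "$sg_id" \
    --protocol tcp --port 80 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-egress --group-id "$sg_id" \
    --protocol tcp --port 443 --cidr 10.0.0.0/16

A group created this way can then be attached to the directory from the console as described above, or through the CustomSecurityGroupId setting of the WorkSpaces creation properties.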

Disable local administrator rights

One of the recommendations for external users is to disable the local administrator setting on their WorkSpaces and provide them with access to only specific, preinstalled applications. This guardrail helps to ensure that external users have limited permissions and to reduce the risk that they might access or share sensitive information. If local administrator isn’t disabled, users can install applications and modify settings on their WorkSpaces. You can disable local administrator access from within the external users’ directory. Changes to the directory are applied to all new WorkSpaces that you create and can be applied to existing WorkSpaces by rebuilding them after making the changes.

Note: If your internal users don’t need local administrator access, it’s a best practice to follow the principle of least privilege and disable it for them as well.

To disable local administrator rights for external users

  1. In the Amazon WorkSpaces menu, select Directories from the left menu. Choose the directory you configured for your external users.
  2. Select Actions and then Update Details.
  3. In Update Directory Details, select Local Administrator Setting and choose the Disable radio button.
  4. Select Update and Exit as shown in the following figure.
     
    Figure 4: Disabling your local administrator setting

Define IP access control

So far, the security groups you have defined allow external users to access company resources only from inside the corporate network. You can enhance this security configuration by leveraging IP access control groups to limit traffic and only allow certain IPs to access the WorkSpaces. An IP access control group acts as a virtual firewall and filters access to WorkSpaces by controlling the source classless inter-domain routing (CIDR) ranges that users can access their WorkSpaces from. Each IP access control group consists of a set of rules that specify a permitted IP address or range of addresses from which Amazon WorkSpaces can be accessed. Using this feature, you can configure rules that permit access to your WorkSpaces only if they are coming from your company’s VPN. To achieve this control, you must define rules that specify the ranges of IP addresses for your trusted networks within IP access control groups associated with the external users directory.

Note: Currently, only IPv4 addresses are permitted.

To define IP access control

  1. Inside the Amazon WorkSpaces page, select IP Access Controls on the left panel. Select Create IP Group and enter a Group Name and Description in the window that appears.
  2. Select Create as shown in the following figure.
     
    Figure 5: Creating an IP group

  3. Select the box next to the IP group you just created to open the new rules form.
  4. Select Add Rule.
  5. In Source, enter the individual IP addresses or CIDR IP ranges from which you want to allow access to WorkSpaces. If you want to restrict access to your VPN, make sure to add the public IPs of the VPN. Enter a description in Description.
  6. Select Save as shown in the following figure.
     
    Figure 6: Adding rules to your IP group
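
You can also create and associate the IP access control group from the AWS CLI. In the sketch below, the CIDR range and the directory ID are placeholders for your VPN egress range and your external users directory:

# Create the IP group and capture its ID
group_id=$(aws workspaces create-ip-group \
    --group-name external-users-ip-group \
    --group-desc "Allow access from the corporate VPN only" \
    --user-rules "ipRule=203.0.113.0/24,ruleDesc=corporate-vpn" \
    --query GroupId --output text)

# Associate the group with the external users directory
aws workspaces associate-ip-groups \
    --directory-id d-1234567890 \
    --group-ids "$group_id"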

Configure trusted devices

Regulating the devices that can connect to your WorkSpaces can help reduce the risk of unauthorized access to your network and applications. By default, all Amazon WorkSpaces users can access their virtual desktop from any supported device that has internet connectivity. However, it’s a good practice to configure additional guardrails so that external users can access their WorkSpaces only through trusted devices, otherwise known as managed devices (currently, this feature applies only to Amazon WorkSpaces Windows and macOS clients). With this feature enabled, only devices that have been authenticated through a certificate-based approach have access to WorkSpaces. If the WorkSpaces client application cannot verify that a device is trusted, it blocks attempts to log in or connect from the device.

Note: If you haven’t already configured certificates, you will need to follow the steps in the Amazon WorkSpaces Administration Guide that walk through the requirements of the certificates as well as the process to generate one.

To configure trusted devices

  1. In the Amazon WorkSpaces menu, select Directories in the left menu. After selecting the directory that has been configured for your external users, select Actions and then Update Details.
  2. In Update Directory Details, select Access Control Options. Select Allow next to Windows and macOS to allow only trusted Windows and macOS devices to access WorkSpaces.
  3. Select Import to import your root certificate.
  4. Next to Other Platforms, select Block so that only Windows and macOS devices will have access.
  5. Select Update and Exit.
     
    Figure 7: Establishing trusted devices

  6. Test your settings by trying to access one of your WorkSpaces from a trusted device and from a non-trusted device.

Use Amazon CloudWatch to monitor your WorkSpaces

Once the guardrails for your external users have been set up, it’s important to monitor your environment for suspicious behavior and potential threats. Monitoring your infrastructure should be a fundamental aspect of your security plan. Amazon WorkSpaces is natively integrated with Amazon CloudWatch, which you can use to gather and analyze metrics to gain visibility into individual WorkSpaces and at the directory level. Alongside metrics, Amazon CloudWatch Events can be used to provide visibility into your Amazon WorkSpaces fleet so you can view, filter, and automatically respond to logins to your WorkSpaces in real time. A comprehensive example of this approach is outlined in this blog post, which covers the steps involved in setting up a CloudWatch-based monitoring system for your WorkSpaces.
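
For example, WorkSpaces publishes login events to CloudWatch Events with the source aws.workspaces and the detail type WorkSpaces Access. A minimal sketch that routes these events to an Amazon SNS topic for review might look like the following; the rule name and topic ARN are placeholder values:

# Match WorkSpaces login events
aws events put-rule \
    --name workspaces-access-monitor \
    --event-pattern '{"source":["aws.workspaces"],"detail-type":["WorkSpaces Access"]}'

# Send matched events to an SNS topic for review
aws events put-targets \
    --rule workspaces-access-monitor \
    --targets 'Id=1,Arn=arn:aws:sns:us-east-1:111122223333:workspaces-alerts'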

Conclusion

While you’ve used Amazon WorkSpaces features to help provide secure access for your external users, it’s also important to implement the principle of least privilege across all WorkSpaces users. You can use the design considerations and procedures in this blog post to help secure your WorkSpaces for all users, internal and external. You can learn more about best practices for securing your Amazon WorkSpaces by reading the Best Practices for Deploying Amazon WorkSpaces whitepaper to understand other features and capabilities that are available.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon WorkSpaces forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Olivia Carline

Olivia is an Associate Solutions Architect working in the public sector team. In her role she enjoys helping customers up-skill and build their cloud knowledge with a particular focus on cloud security. In her free time, you can find her exploring local hiking tracks and trying out new recipes.

Integrating CloudEndure Disaster Recovery into your security incident response plan

Post Syndicated from Gonen Stein original https://aws.amazon.com/blogs/security/integrating-cloudendure-disaster-recovery-into-your-security-incident-response-plan/

An incident response plan (also known as an incident response procedure) contains the detailed actions an organization takes to prepare for a security incident in its IT environment. It also includes the mechanisms to detect, analyze, contain, eradicate, and recover from a security incident. Every incident response plan should contain a section on recovery, which outlines scenarios ranging from single component to full environment recovery. This recovery section should include disaster recovery (DR), with procedures to recover your environment from complete failure. Effective recovery from an IT disaster requires tools that can automate preparation, testing, and recovery processes. In this post, I explain how to integrate CloudEndure Disaster Recovery into the recovery section of your incident response plan. CloudEndure Disaster Recovery is an Amazon Web Services (AWS) DR solution that enables fast, reliable recovery of physical, virtual, and cloud-based servers on AWS. This post also discusses how you can use CloudEndure Disaster Recovery to reduce downtime and data loss when responding to a security incident, and best practices for maintaining your incident response plan.

How disaster recovery fits into a security incident response plan

The AWS Well-Architected Framework security pillar provides guidance to help you apply best practices and current recommendations in the design, delivery, and maintenance of secure AWS workloads. It includes a recommendation to integrate tools to secure and protect your data. A secure data replication and recovery tool helps you protect your data if there’s a security incident and quickly return to normal business operation as you resolve the incident. The recovery section of your incident response plan should define recovery point objectives (RPOs) and recovery time objectives (RTOs) for your DR-protected workloads. RPO is the window of time during which data loss can be tolerated due to a disruption. RTO is the amount of time permitted to recover workloads after a disruption.

Your DR response to a security incident can vary based on the type of incident you encounter. For example, your DR plan for responding to a security incident such as ransomware—which involves data corruption—should describe how to recover workloads on your secondary DR site using a recovery point prior to the data corruption. This use case will be discussed further in the next section.

In addition to tools and processes, your security incident response plan should define the roles and responsibilities necessary during an incident. This includes the people and roles in your organization who perform incident mitigation steps, in addition to those who need to be informed and consulted. This can include technology partners, application owners, or subject matter experts (SMEs) outside of your organization who can offer additional expertise. DR-related roles for your incident response plan include:

  • A person who analyzes the situation and provides visibility to decision-makers.
  • A person who decides whether or not to trigger a DR response.
  • A person who actively triggers the DR response.

Be sure to include all of the stakeholders you identify in your documented security incident response procedures and runbooks. Test your plan to verify that the people in these roles have the pre-provisioned access they need to perform their defined role.

How to use CloudEndure Disaster Recovery during a security incident

CloudEndure Disaster Recovery continuously replicates your servers—including OS, system state configuration, databases, applications, and files—to a staging area in your target AWS Region. The staging area contains low-cost resources automatically provisioned and managed by CloudEndure Disaster Recovery. This reduces the cost of provisioning duplicate resources during normal operation. Your fully provisioned recovery environment is launched only during an incident or drill.

If your organization experiences a security incident that can be remediated using DR, you can use CloudEndure Disaster Recovery to perform failover to your target AWS Region from your source environment. When you perform failover, CloudEndure Disaster Recovery orchestrates the recovery of your environment in your target AWS Region. This enables quick recovery, with RPOs of seconds and RTOs of minutes.

To deploy CloudEndure Disaster Recovery, you must first install the CloudEndure agent on the servers in your environment that you want to replicate for DR, and then initiate data replication to your target AWS Region. Once data replication is complete and your data is in sync, you can launch machines in your target AWS Region from the CloudEndure User Console. CloudEndure Disaster Recovery enables you to launch target machines in either Test Mode or Recovery Mode. Your launched machines behave the same way in either mode; the only difference is how the machine lifecycle is displayed in the CloudEndure User Console. Launch machines by opening the Machines page, shown in the following figure, and selecting the machines you want to launch. Then select either Test Mode or Recovery Mode from the Launch Target Machines menu.
 

Figure 1: Machines page on the CloudEndure User Console

You can launch your entire environment, a group of servers comprising one or more applications, or a single server in your target AWS Region. When you launch machines from the CloudEndure User Console, you’re prompted to choose a recovery point from the Choose Recovery Point dialog box (shown in the following figure).

Use point-in-time recovery to respond to security incidents that involve data corruption, such as ransomware. Your incident response plan should include a mechanism to determine when data corruption occurred. Knowing how to determine which recovery point to choose in the CloudEndure User Console helps you minimize response time during a security incident. Each recovery point is a point-in-time snapshot of your servers that you can use to launch recovery machines in your target AWS Region. Select the latest recovery point before the data corruption to restore your workloads on AWS, and then choose Continue With Launch.
 

Figure 2: Selection of an earlier recovery point from the Choose Recovery Point dialog box

Run your recovered workloads in your target AWS Region until you’ve resolved the security incident. When the incident is resolved, you can perform failback to your primary environment using CloudEndure Disaster Recovery. You can learn more about CloudEndure Disaster Recovery setup, operation, and recovery by taking this online CloudEndure Disaster Recovery Technical Training.

Test and maintain the recovery section of your incident response plan

Your entire incident response plan must be kept accurate and up to date in order to effectively remediate security incidents if they occur. A best practice for achieving this is to frequently test all sections of your plan, including your tools. When you first deploy CloudEndure Disaster Recovery, begin running tests as soon as all of your replicated servers are in sync on your target AWS Region. DR solution implementation is generally considered complete when all initial testing has succeeded.

By correctly configuring the networking and security groups in your target AWS Region, you can use CloudEndure Disaster Recovery to launch a test workload in an isolated environment without impacting your source environment. You can run tests as often as you want. Tests don’t incur additional fees beyond payment for the fully provisioned resources generated during tests.

Testing involves two main components: launching the machines you wish to test on AWS, and performing user acceptance testing (UAT) on the launched machines.

  1. Launch machines to test.
     
    Select the machines to test from the Machines page of the CloudEndure User Console by selecting the check box next to the machine. Then choose Test Mode from the Launch Target Machines menu, as shown in the following figure. You can select the latest recovery point or an earlier recovery point.
     
    Figure 3: Select Test Mode to launch selected machines

     

    The following figure shows the CloudEndure User Console. The Disaster Recovery Lifecycle column shows that the machines have been Tested Recently.

    Figure 4: Machines launched in Test Mode display purple icons in the Status column and Tested Recently in the Disaster Recovery Lifecycle column

  2. Perform UAT testing.
     
    Begin UAT testing when the machine launch job is successfully completed and your target machines have booted.

After you’ve successfully deployed, configured, and tested CloudEndure Disaster Recovery on your source environment, add it to your ongoing change management processes so that your incident response plan remains accurate and up-to-date. This includes deploying and testing CloudEndure Disaster Recovery every time you add new servers to your environment. In addition, monitor for changes to your existing resources and make corresponding changes to your CloudEndure Disaster Recovery configuration if necessary.

How CloudEndure Disaster Recovery keeps your data secure

CloudEndure Disaster Recovery has multiple mechanisms to keep your data secure and not introduce new security risks. Data replication is performed using AES 256-bit encryption in transit. Data at rest can be encrypted by using Amazon Elastic Block Store (Amazon EBS) encryption with an AWS managed key or a customer key. Amazon EBS encryption is supported by all volume types, and includes built-in key management infrastructure that has no performance impact. Replication traffic is transmitted directly from your source servers to your target AWS Region, and can be restricted to private connectivity such as AWS Direct Connect or a VPN. CloudEndure Disaster Recovery is ISO 27001 and GDPR compliant and HIPAA eligible.

Summary

Each organization tailors its incident response plan to meet its unique security requirements. As described in this post, you can use CloudEndure Disaster Recovery to improve your organization’s incident response plan. I also explained how to recover from an earlier point in time when you respond to security incidents involving data corruption, and how to test your servers as part of maintaining the DR section of your incident response plan. By following the guidance in this post, you can improve your IT resilience and recover more quickly from security incidents. You can also reduce your DR operational costs by avoiding duplicate provisioning of your DR infrastructure.

Visit the CloudEndure Disaster Recovery product page if you would like to learn more. You can also view the AWS Raise the Bar on Data Protection and Security webinar series for additional information on how to protect your data and improve IT resilience on AWS.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Gonen Stein

Gonen is the Head of Product Strategy for CloudEndure, an AWS company. He combines his expertise in business, cloud infrastructure, storage, and information security to assist enterprise organizations with developing and deploying IT resilience and business continuity strategies in the cloud.