Tag Archives: Security, Identity & Compliance

AWS Security Profile: Philip Winstanley, Security Engineering

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-philip-winstanley-security-engineering/

In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services (AWS) Security and help keep our customers safe and secure. This interview is with Philip Winstanley, a security engineer and AWS Guardian. The Guardians program identifies and develops security experts within engineering teams across AWS, enabling these teams to use Amazon Security more effectively. Through the empowerment of these security-minded Amazonians called “Guardians,” we foster a culture of informed security ownership throughout the development lifecycle.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for just over three years now. I joined in Dublin, Ireland, and I’ve since transferred back to the UK, back to my home city of Manchester. I’m a security engineer on the service team for AWS Managed Services (AMS). We support customer workloads in the cloud and help customers manage them, optimize them, and keep them safe and secure.

How did you get started in the world of security?

I was a software developer for many years, and in building software I discovered that security is an integral part of delivering safe and secure solutions to my customers. That really sparked my interest in the security space, and I started researching and learning about all the different types of attacks that were out there, and learning about organized crime. That led me to work with the UK’s National Crime Agency, where I became a special officer, and with the United Kingdom’s Royal Air Force, where I worked in the cyber defense team. I managed to merge my technical knowledge with my law enforcement and military knowledge, and then bring them all together as the security engineer that I am today.

What are you currently working on that you’re excited about?

I have the joy of working with full-spectrum security, which is everything from protecting our environments to detecting risks within our environments to responding to those risks. But the bulk of my work is in helping our service teams build safe and secure software. Sometimes we call that AppSec (application security), sometimes we call it secure development. As part of that, I work with a group of volunteers and specialists within engineering teams that we call Guardians. They are our security specialists embedded within AWS service teams. These are people who champion security and make sure that everything we build meets a high security bar, which often goes beyond what we’re asked to do by compliance or regulation. We take it that extra mile. As Guardians, we push our development teams to continually raise the bar on security, privacy, compliance, and the confidentiality of customer data.

What are the most important aspects of being a Guardian?

A Guardian is there to help teams do the right thing when it comes to security—to contextualize knowledge of their team’s business and technology and help them identify areas and opportunities to improve security. Guardians will often think outside the box. They will come at things from a security point of view, not just a development point of view. But they do it within the context of what our customers need. Guardians are always looking around corners; they’re looking at what’s coming next. They’re looking at the risks that are out there, looking at the way environments are evolving, and trying to build in protections now for issues that will come down the line. Guardians are there to help our service teams anticipate and protect against future risks.

How have you as a Guardian improved the quality of security outcomes for customers?

Many of our customers are moving to the cloud, some for the first time, and they have high standards around data sovereignty, around the privacy of the data they manage. In addition to helping service teams meet the security bar, Guardians seek to understand our customers’ security and privacy requirements. As a result, our teams’ Guardians inform the development of features that not only meet our security bar, but also help our customers meet their security, privacy, and compliance requirements.

How have you helped develop security experts within your team?

I have the joy of working with security experts from many different fields. Inside Amazon, we have a huge community of security expertise, touching every single domain of security. What we try to do is cross-pollinate; we teach each other about our own areas of expertise. I focus on application security and work very closely with my colleagues who work in threat intelligence and incident response. We all work together and collaborate to raise the bar for each of us, sharing our knowledge, our skills, our expertise. We do this through training that we build, we do it through knowledge-sharing sessions where we get together and talk about security issues, we do it through being jointly introspective about the work that we’ve done. We even review each other’s work and raise the bar, adding our own specialist knowledge and expertise to that of our colleagues.

What advice would you give to customers who are considering their own Guardians program?

Security culture is something that comes from within an organization. It’s also something that’s best when it’s done from the ground up. You can’t just tell people to be secure, you have to find people who are passionate about security and empower them. Give them permission to put that passion into their work and give them the opportunity to learn from security training and experts. What you’ll see, if you have people with that passion for security, is that they’ll bring that enthusiasm into the work from the start. They’ll already care about security and want to do more of it.

You’re a self-described “disruptive anti-CISO.” What does that mean?

I wrote a piece on LinkedIn about what it really is, but I’ll give a shorter answer. The world of information security is not new—it’s been around for 20, 30 years, so all the thinking around security comes from a world of on-premises infrastructure. It’s from a time before the cloud even existed and unfortunately, a lot of the security thinking out there is still borne of that age. When we’re in a world of hyper-scaled environments, where we’re dealing with millions of resources, millions of endpoints, we can’t use that traditional thinking anymore. We can’t just lock everything in a box and make sure no one’s got access to it. Quite the opposite, we need to enable innovations, we need to let the business drive that creativity and produce solutions, which means security needs to be an enabler of creativity, not a blocker. I have a firm belief that security plays a part in delivering solutions, in helping solutions land, and making sure that they succeed. Security is not and should never be a gatekeeper to success. More often than not in industries, that was the position that security took. I believe in the opposite—security should enable business. I take that thinking and use it to help AWS customers succeed, through sharing our experience and knowledge with them to keep them safe and secure in the cloud.

What’s the thing you’re most proud of in your career?

When I was at the National Crime Agency, I worked in the dark web threat intelligence unit and some of my work was to combat child exploitation and human trafficking. The work I did there was some of the most rewarding I’ve ever done, and I’m incredibly proud of what we achieved. But it wasn’t just within that agency, it was partnering with other organizations, police forces around the world, and cloud providers such as AWS that combat exploitation and help move vulnerable children into safety. Working to protect victims of crime, especially the most vulnerable, helped me build a customer-centric view to security, ensuring we always think about our end customers and their customers. It’s all about people; we are here to protect and defend families and real lives, not just 1’s and 0’s.

If you had to pick an industry outside of security, what would you want to do?

I have always loved space and would adore working in the space sector. I’m fascinated by all of the renewed space exploration that’s happening at the moment, be it through Blue Origin or SpaceX or any of the other organizations out there doing it. If I could have my time again, or even if I could pivot now in my career, I would go and be a space man. I don’t need to be an astronaut, but I would want to contribute to the success of these missions and see humanity go out into the stars.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Philip Winstanley

Philip works in Security Engineering to help people, teams, and organizations succeed in the cloud. Philip brings his law enforcement and military experience, combined with technical expertise, to deliver innovative pragmatic security solutions.

Best practices: Securing your Amazon Location Service resources

Post Syndicated from Dave Bailey original https://aws.amazon.com/blogs/security/best-practices-securing-your-amazon-location-service-resources/

Location data is subject to heavy scrutiny by security experts. Knowing the current position of a person, vehicle, or asset can provide industries with many benefits, whether to understand where a current delivery is, how many people are inside a venue, or to optimize routing for a fleet of vehicles. This blog post explains how Amazon Web Services (AWS) helps keep location data secure in transit and at rest, and how you can use additional security features to help keep information safe and compliant.

The General Data Protection Regulation (GDPR) defines personal data as “any information relating to an identified or identifiable natural person (…) such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” Also, many companies wish to improve transparency to users, making it explicit when a particular application wants to not only track their position and data, but also to share that information with other apps and websites. Your organization needs to adapt to these changes quickly to maintain a secure stance in a competitive environment.

On June 1, 2021, AWS made Amazon Location Service generally available to customers. With Amazon Location, you can build applications that provide maps and points of interest, convert street addresses into geographic coordinates, calculate routes, track resources, and invoke actions based on location. The service enables you to access location data with developer tools and to move your applications to production faster with monitoring and management capabilities.

In this blog post, we will show you the features that Amazon Location provides out of the box to keep your data safe, along with best practices that you can follow to reach the level of security that your organization strives to accomplish.

Data control and data rights

Amazon Location relies on global trusted providers Esri and HERE Technologies to provide high-quality location data to customers. Features like maps, places, and routes are provided by these AWS Partners so solutions can have data that is not only accurate but constantly updated.

AWS anonymizes and encrypts location data at rest and during its transmission to partner systems. In addition, under our service terms, third parties cannot sell your data or use it for advertising purposes. This helps you shield sensitive information, protect user privacy, and reduce organizational compliance risks. To learn more, see the Amazon Location Data Security and Control documentation.

Integrations

Operationalizing location-based solutions can be daunting. It’s not just necessary to build the solution, but also to integrate it with the rest of your applications that are built in AWS. Amazon Location facilitates this process from a security perspective by integrating with services that expedite the development process, enhancing the security aspects of the solution.

Encryption

Amazon Location uses AWS owned keys by default to automatically encrypt personally identifiable data. AWS owned keys are a collection of AWS Key Management Service (AWS KMS) keys that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned keys are not in your AWS account, Amazon Location can use the associated AWS owned keys to protect the resources in your account.

If you choose to use your own keys, you can use AWS KMS to store customer managed encryption keys and use them to add a second layer of encryption to geofencing and tracking data.

Authentication and authorization

Amazon Location also integrates with AWS Identity and Access Management (IAM), so that you can use its identity-based policies to specify allowed or denied actions and resources, as well as the conditions under which actions are allowed or denied on Amazon Location. Also, for actions that require unauthenticated access, you can use unauthenticated IAM roles.

As an extension to IAM, Amazon Cognito can be an option if you need to integrate your solution with a front-end client that authenticates users with its own process. In this case, you can use Cognito to handle the authentication, authorization, and user management for you. You can use Cognito unauthenticated identity pools with Amazon Location as a way for applications to retrieve temporary, scoped-down AWS credentials. To learn more about setting up Cognito with Amazon Location, see the blog post Add a map to your webpage with Amazon Location Service.
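
The following is a minimal sketch of how an application can exchange an unauthenticated Cognito identity for temporary AWS credentials by using the AWS CLI. The identity pool ID shown is a placeholder; replace it with the unauthenticated identity pool that you configure for your own application.

# Placeholder identity pool ID; replace with your own unauthenticated identity pool.
IDENTITY_POOL_ID="us-west-2:11111111-2222-3333-4444-555555555555"

# Obtain an identity ID for an unauthenticated (guest) user.
IDENTITY_ID=$(aws cognito-identity get-id \
  --identity-pool-id "$IDENTITY_POOL_ID" \
  --query IdentityId --output text)

# Exchange the identity ID for temporary, scoped-down AWS credentials.
aws cognito-identity get-credentials-for-identity \
  --identity-id "$IDENTITY_ID"

The credentials returned inherit the permissions of the unauthenticated role associated with the identity pool, which is why it’s important to scope that role down, as described in the next section.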

Limit the scope of your unauthenticated roles to a domain

When you are building an application that allows users to perform actions such as retrieving map tiles, searching for points of interest, updating device positions, and calculating routes without needing them to be authenticated, you can make use of unauthenticated roles.

When using unauthenticated roles to access Amazon Location resources, you can add an extra condition to limit resource access to an HTTP referer that you specify in the policy. The aws:Referer request context value is provided by the caller in an HTTP header and is included in web browser requests.

The following is an example of a policy that allows access to a Map resource by using the aws:Referer condition, but only if the request comes from the domain www.example.com.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MapsReadOnly",
      "Effect": "Allow",
      "Action": [
        "geo:GetMapStyleDescriptor",
        "geo:GetMapGlyphs",
        "geo:GetMapSprites",
        "geo:GetMapTile"
      ],
      "Resource": "arn:aws:geo:us-west-2:111122223333:map/MyMap",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://www.example.com/*"
        }
      }
    }
  ]
}

To learn more about aws:Referer and other global conditions, see AWS global condition context keys.

Encrypt tracker and geofence information using customer managed keys with AWS KMS

When you create your tracker and geofence collection resources, you have the option to use a symmetric customer managed key to add a second layer of encryption to geofencing and tracking data. Because you have full control of this key, you can establish and maintain your own IAM policies, manage key rotation, and schedule keys for deletion.

After you create your resources with customer managed keys, the geometry of your geofences and all positions associated with a tracked device will have two layers of encryption. In the next sections, you will see how to create a key and use it to encrypt your own data.

Create an AWS KMS symmetric key

First, you need to create a key policy that will limit the AWS KMS key to allow access to principals authorized to use Amazon Location and to principals authorized to manage the key. For more information about specifying permissions in a policy, see the AWS KMS Developer Guide.

To create the key policy

Create a JSON policy file by using the following policy as a reference. This key policy allows Amazon Location to grant access to your KMS key only when it is called from your AWS account. This works by combining the kms:ViaService and kms:CallerAccount conditions. In the following policy, replace us-west-2 with your AWS Region of choice, and the kms:CallerAccount value with your AWS account ID. Adjust the KMS Key Administrators statement to reflect your actual key administrators’ principals, including yourself. For details on how to use the Principal element, see the AWS JSON policy elements documentation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Amazon Location",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:DescribeKey",
        "kms:CreateGrant"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "geo.us-west-2.amazonaws.com",
          "kms:CallerAccount": "111122223333"
        }
      }
    },
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/KMSKeyAdmin"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    }
  ]
}

For the next steps, you will use the AWS Command Line Interface (AWS CLI). Make sure to have the latest version installed by following the AWS CLI documentation.

Tip: The AWS CLI uses the Region that you defined as the default during the configuration steps, but you can override this configuration by adding --region <your region> at the end of each of the following commands. Also, make sure that your user has the appropriate permissions to perform those actions.

To create the symmetric key

Now, create a symmetric key on AWS KMS by running the create-key command and passing the policy file that you created in the previous step.

aws kms create-key --policy file://<your JSON policy file>

Alternatively, you can create the symmetric key using the AWS KMS console with the preceding key policy.

After running the command, you should see the following output. Take note of the KeyId value.

{
  "KeyMetadata": {
    "Origin": "AWS_KMS",
    "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
    "Description": "",
    "KeyManager": "CUSTOMER",
    "Enabled": true,
    "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "KeyState": "Enabled",
    "CreationDate": 1502910355.475,
    "Arn": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "AWSAccountId": "111122223333",
    "MultiRegion": false
    "EncryptionAlgorithms": [
      "SYMMETRIC_DEFAULT"
    ],
  }
}

Create Amazon Location tracker and geofence collection resources

To create an Amazon Location tracker resource that uses AWS KMS for a second layer of encryption, run the following command, passing the key ID from the previous step.

aws location \
	create-tracker \
	--tracker-name "MySecureTracker" \
	--kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab"

Here is the output from this command.

{
    "CreateTime": "2021-07-15T04:54:12.913000+00:00",
    "TrackerArn": "arn:aws:geo:us-west-2:111122223333:tracker/MySecureTracker",
    "TrackerName": "MySecureTracker"
}

Similarly, to create a geofence collection by using your own KMS symmetric key, run the following command, also passing your key ID.

aws location \
	create-geofence-collection \
	--collection-name "MySecureGeofenceCollection" \
	--kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab"

Here is the output from this command.

{
    "CreateTime": "2021-07-15T04:54:12.913000+00:00",
    "CollectionArn": "arn:aws:geo:us-west-2:111122223333:geofence-collection/MySecureGeofenceCollection",
    "CollectionName": "MySecureGeofenceCollection"
}

By following these steps, you have added a second layer of encryption to your geofence collection and tracker.

Data retention best practices

Tracker and geofence collection data is stored by Amazon Location and never leaves your AWS account without your permission, but the two resource types have different data lifecycles.

Trackers store the positions of devices and assets that are tracked in a longitude/latitude format. These positions are stored for 30 days by the service before being automatically deleted. If needed for historical purposes, you can transfer this data to another data storage layer and apply the proper security measures based on the shared responsibility model.
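
If you need to retain positions beyond the 30-day window, you can export them before they expire. The following is a minimal sketch that uses the MySecureTracker resource created earlier in this post and a hypothetical device ID to save a device’s position history to a local file for archiving in your own storage layer.

aws location get-device-position-history \
  --tracker-name "MySecureTracker" \
  --device-id "device-1f2e3d4c" \
  --start-time-inclusive "2021-07-01T00:00:00Z" \
  --end-time-exclusive "2021-07-15T00:00:00Z" > device-1f2e3d4c-history.json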

Geofence collections store the geometries you provide until you explicitly choose to delete them, so you can use encryption with AWS managed keys or your own keys to keep them for as long as needed.

Asset tracking and location storage best practices

After a tracker is created, you can start sending location updates by using the Amazon Location front-end SDKs or by calling the BatchUpdateDevicePosition API. In both cases, at a minimum, you need to provide the latitude and longitude, the time when the device was in that position, and a device-unique identifier that represents the asset being tracked.
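
The following is a minimal sketch of a single position update made with the AWS CLI against the MySecureTracker resource created earlier. The device ID is a hypothetical, opaque identifier; the next section explains why you should avoid IDs that contain personal data.

aws location batch-update-device-position \
  --tracker-name "MySecureTracker" \
  --updates '[{"DeviceId":"device-1f2e3d4c","Position":[-122.3382,47.6149],"SampleTime":"2021-07-15T04:54:12Z"}]'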

Protecting device IDs

This device ID can be any string of your choice, so you should apply measures to prevent personally identifying values from being used as IDs. Some examples of what to avoid include:

  • First and last names
  • Facility names
  • Documents, such as driver’s licenses or social security numbers
  • Emails
  • Addresses
  • Telephone numbers

Latitude and longitude precision

Latitude and longitude coordinates convey precision in degrees, presented as decimals, with each decimal place representing a different measure of distance (when measured at the equator).

Amazon Location supports up to six decimal places of precision (0.000001), which is equal to approximately 11 cm or 4.4 inches at the equator. You can limit the number of decimal places in the latitude and longitude pair that is sent to the tracker based on the precision required, increasing the location range and providing extra privacy to users.
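
As a simple illustration, the following sketch rounds a coordinate pair to four decimal places (roughly 11 meters at the equator) before it is sent to a tracker, trading precision for user privacy. The coordinates shown are arbitrary examples.

LON="-122.33824671"
LAT="47.61493805"

# Round to 4 decimal places (~11 m at the equator) before sending the position to the tracker.
printf 'Rounded position: [%.4f, %.4f]\n' "$LON" "$LAT"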

Figure 1 shows a latitude and longitude pair, with the level of detail associated with each decimal place.

Figure 1: Geolocation decimal precision details

Position filtering

Amazon Location offers position filtering as an option for trackers that helps reduce costs and the jitter caused by inaccurate device location updates.

  • DistanceBased filtering ignores location updates wherein devices have moved less than 30 meters (98.4 ft).
  • TimeBased filtering evaluates every location update against linked geofence collections, but not every location update is stored. If your update frequency is more often than 30 seconds, then only one update per 30 seconds is stored for each unique device ID.
  • AccuracyBased filtering ignores location updates if the distance moved was less than the measured accuracy provided by the device.

By using filtering options, you can reduce the number of location updates that are sent and stored, thus reducing the level of location detail provided and increasing the level of privacy.
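
The following is a minimal sketch that switches the tracker created earlier to DistanceBased filtering, so that updates within approximately 30 meters of the last stored position are ignored.

aws location update-tracker \
  --tracker-name "MySecureTracker" \
  --position-filtering "DistanceBased"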

Logging and monitoring

Amazon Location integrates with AWS services that provide the observability needed to help you comply with your organization’s security standards.

To record all actions that were taken by users, roles, or AWS services that access Amazon Location, consider using AWS CloudTrail. CloudTrail provides information on who is accessing your resources, detailing the account ID, principal ID, source IP address, timestamp, and more. Moreover, Amazon CloudWatch helps you collect and analyze metrics related to your Amazon Location resources. CloudWatch also allows you to create alarms based on pre-defined thresholds of call counts. These alarms can create notifications through Amazon Simple Notification Service (Amazon SNS) to automatically alert teams responsible for investigating abnormalities.
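
For example, because Amazon Location API calls appear in CloudTrail with the geo.amazonaws.com event source, you can quickly review recent activity with the AWS CLI. This is a minimal sketch; in production you would typically query a CloudTrail trail delivered to Amazon S3 or rely on CloudWatch alarms instead.

# List recent Amazon Location API activity recorded by CloudTrail.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=geo.amazonaws.com \
  --max-results 20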

Conclusion

At AWS, security is our top priority. Security and compliance are a shared responsibility between AWS and the customer: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud, and the customer is responsible for performing the necessary security configuration of the solutions they build on top of that infrastructure.

In this blog post, you’ve learned the controls and guardrails that Amazon Location provides out of the box to help provide data privacy and data protection to our customers. You also learned about the other mechanisms you can use to enhance your security posture.

Start building your own secure geolocation solutions by following the Amazon Location Developer Guide and learn more about how the service handles security by reading the security topics in the guide.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on Amazon Location Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Rafael Leandro, Junior

Rafael Leandro, Junior, is a senior global solutions architect who currently focuses on the consumer packaged goods and transportation industries. He helps large global customers on their journeys with AWS.

David Bailey

David Bailey is a senior security consultant who helps AWS customers achieve their cloud security goals. He has a passion for building new technologies and providing mentorship for others.

ISO/IEC 27001 certificates now available in French and Spanish

Post Syndicated from Rodrigo Fiuza original https://aws.amazon.com/blogs/security/iso-iec-27001-certificates-now-available-in-french-and-spanish/


We continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs at Amazon Web Services (AWS). We are pleased to announce that ISO/IEC 27001 certificates for AWS are now available in French and Spanish on AWS Artifact. These translated reports will help drive greater engagement and alignment with customer and regulatory requirements across Latin America, Canada, and EMEA.

Current translated (French and Spanish) ISO/IEC 27001 certificates are available through AWS Artifact. Future ISO certificates will be published on an annual basis in accordance with the audit period.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


Author

Rodrigo Fiuza

Rodrigo is a security audit manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo previously worked in risk management, security assurance, and technology audits for 12 years.

Naranjan Goklani

Naranjan is a security audit manager at AWS, based in Toronto. He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan previously worked in risk management, security assurance, and technology audits for 12 years.

Author

Sonali Vaidya

Sonali is a compliance program manager at AWS, where she leads multiple global compliance programs including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, ISO 22301, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications such as CISSP, CCSK, CEH, CISA, and ISO 22301 LA.

How to use AWS Security Hub and Amazon OpenSearch Service for SIEM

Post Syndicated from Ely Kahn original https://aws.amazon.com/blogs/security/how-to-use-aws-security-hub-and-amazon-opensearch-service-for-siem/

AWS Security Hub provides you with a consolidated view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and current AWS security recommendations. Although Security Hub has some similarities to security information and event management (SIEM) tools, it is not designed as a standalone SIEM replacement. For example, Security Hub only ingests AWS-related security findings and does not directly ingest higher volume event logs, such as AWS CloudTrail logs. If you have use cases to consolidate AWS findings with other types of findings from on-premises or other non-AWS workloads, or if you need to ingest higher volume event logs, we recommend that you use Security Hub in conjunction with a SIEM tool.

There are also other benefits to using Security Hub and a SIEM tool together. These include being able to store findings for longer periods of time than Security Hub, aggregating findings across multiple administrator accounts, and further correlating Security Hub findings with each other and with other log sources. In this blog post, we will show you how you can use Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) as a SIEM and integrate Security Hub with it to accomplish these three use cases. Amazon OpenSearch Service is a fully managed service that makes it easier to deploy, manage, and scale Elasticsearch and Kibana. The underlying engine is a distributed, RESTful search and analytics engine that is capable of addressing a growing number of use cases. You can expand OpenSearch Service by integrating it with AWS services such as Amazon Kinesis and Kinesis Data Firehose, or by using traditional agents such as Beats and Logstash for log ingestion and Kibana for data visualization. Although OpenSearch Service is not a SIEM tool out of the box, with some customization you can use it for SIEM use cases.

Security Hub plus SIEM use cases

By enabling Security Hub within your AWS Organizations account structure, you immediately start receiving the benefits of viewing all of your security findings from across various AWS and partner services on a single screen. Some organizations want to go a step further and use Security Hub in conjunction with a SIEM tool for the following reasons:

  • Correlate Security Hub findings with each other and other log sources – This is the most popular reason customers choose to implement this solution. If you have various log sources outside of Security Hub findings (such as application logs, database logs, partner logs, and security tooling logs), then it makes sense to consolidate these log sources into a single SIEM solution. Then you can view both your Security Hub findings and miscellaneous logs in the same place and create alerts based on interesting correlations.
  • Store findings for longer than 90 days after the last update date – Some organizations want or need to store Security Hub findings for longer than 90 days after the last update date. They may want to do this for historical investigation, or for audit and compliance needs. Either way, this solution offers you the ability to store Security Hub findings in a private Amazon Simple Storage Service (Amazon S3) bucket, which is then consumed by Amazon OpenSearch Service.
  • Aggregate findings across multiple administrator accounts – Security Hub has a feature customers can use to designate an administrator account if they have enabled Security Hub in multiple accounts. A Security Hub administrator account can view data from and manage configuration for its member accounts. This allows customers to view and manage all their findings from multiple member accounts in one place. Sometimes customers have multiple Security Hub administrator accounts, because they have multiple organizations in AWS Organizations. In this situation, you can use this solution to consolidate all of the Security Hub administrator accounts into a single OpenSearch Service with Kibana SIEM implementation to have a single view across your environments. This related blog post walks through this use case in more detail, and shows how to centralize Security Hub findings across multiple AWS Regions and administrators. However, this blog post takes this approach further by introducing OpenSearch Service with Kibana to the use case, for a full SIEM experience.

Solution architecture

Figure 1: SIEM implementation on Amazon OpenSearch Service

The solution represented in Figure 1 shows the flexibility of integrations that are possible when you create a SIEM by using Amazon OpenSearch Service. The solution allows you to aggregate findings across multiple accounts, store findings in an S3 bucket indefinitely, and correlate multiple AWS and non-AWS services in one place for visualization. This post focuses on Security Hub’s integration with the solution, but other AWS services, such as AWS CloudTrail, Amazon VPC Flow Logs, and Amazon GuardDuty, can also integrate with it.

Each of these services has its own dedicated dashboard within the OpenSearch SIEM solution. This makes it possible for customers to view findings and data that are relevant to each service that the SIEM tool is ingesting. OpenSearch Service also allows the customer to create aggregated dashboards, consolidating multiple services within a single dashboard, if needed.

Prerequisites

We recommend that you enable Security Hub and AWS Config across all of your accounts and Regions. For more information about how to do this, see the documentation for Security Hub and AWS Config. We also recommend that you use Security Hub and AWS Config integration with AWS Organizations to simplify the setup and automatically enable these services in all current and future accounts in your organization.
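
If you manage Security Hub through AWS Organizations, the following hedged sketch shows the typical CLI calls: designate a delegated Security Hub administrator account from the organization management account, and then turn on auto-enable so that new member accounts are covered automatically. The account ID is a placeholder.

# Run from the organization management account.
aws securityhub enable-organization-admin-account \
  --admin-account-id 111122223333

# Run from the delegated administrator account.
aws securityhub update-organization-configuration \
  --auto-enable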

Launch the solution

To launch this solution within your environment, you can either use an AWS CloudFormation template, or follow the steps presented later in this post to customize the deployment to support integrations with non-AWS services, multi-organization deployments, or launching within your existing OpenSearch Service environment.

To launch the solution, follow the instructions for SIEM on Amazon OpenSearch Service on GitHub.

Use the solution

Before you start using the solution, we’ll show you how this solution appears in the Security Hub dashboard, as shown in Figure 2. Navigate here by following Step 3 from the GitHub README.

Figure 2: Pre-built dashboards within solution

The Security Hub dashboard highlights all major components of the service within an OpenSearch Service dashboard environment. This includes supporting all of the service integrations that are available within Security Hub (such as GuardDuty, AWS Identity and Access Management (IAM) Access Analyzer, Amazon Inspector, Amazon Macie, and AWS Systems Manager Patch Manager). The dashboard displays both findings and security standards, and you can filter by AWS account, finding type, security standard, or service integration. Figure 3 shows an overview of the visual dashboard experience when you deploy the solution.

Figure 3: Dashboard preview

Use case 1: Correlate Security Hub findings with each other and other log sources and create alerts

This solution uses OpenSearch Service and Kibana to allow you to search through both Security Hub findings and logs from any other AWS and non-AWS systems. You can then create alerts within Kibana based on interesting correlations between Security Hub and any other logged events. Although Security Hub supports ingesting a vast number of integrations and findings, it cannot create correlation rules like a SIEM tool can. However, you can create such rules using SIEM on OpenSearch Service. It’s important to take a closer look when multiple AWS security services generate findings for a single resource, because this potentially indicates elevated risk or multiple risk vectors. Depending on your environment, the initial number of findings in Security Hub may be high, so you may need to prioritize which findings require immediate action. Security Hub natively gives you the ability to filter findings by resource, account, severity, and many other details.

SIEM on OpenSearch Service can send the alerts it generates in several ways: through Amazon Simple Notification Service (Amazon SNS), either by consuming the messages in an appropriate tool or by configuring recipient email addresses; through Amazon Chime or Slack (using AWS Chatbot); or through a custom webhook to your organization’s ticketing system. You can then respond to these new security incident-oriented findings through your ticketing, chat, or incident management systems.

Solution overview for use case 1

Figure 4: Solution overview diagram

Figure 4 gives an overview of the solution for use case 1. This solution requires that you have Security Hub and GuardDuty enabled in your AWS account. Logs from AWS services, including Security Hub, are ingested into an S3 bucket, and an AWS Lambda function then automatically extracts, transforms, and loads (ETL) them into the SIEM system running on OpenSearch Service. After the logs are captured, you can visualize them on the dashboard and analyze correlations across multiple logs. Within the SIEM on OpenSearch Service solution, you will create a rule to detect failures in the logs, such as CloudTrail authentication failures. Then, you will configure the solution to publish alerts to Amazon SNS and send emails when logs match the rules.

Implement the solution for use case 1

You will now set up this workflow to alert you by email when logs in OpenSearch match certain rules that you create.

Step 1: Create and visualize findings in OpenSearch Dashboards

Security Hub and other AWS services export findings to Amazon S3 in a centralized log bucket. You can ingest logs from CloudTrail, VPC Flow Logs, and GuardDuty, which are often used in AWS security analytics. In this step, you import simulated security incident data in OpenSearch Dashboards, and use the dashboard to visualize the data in the logs.

To navigate OpenSearch Dashboards

  1. Generate pseudo-security incidents. You can simulate the results by generating sample findings in GuardDuty (a sample command is shown after this list).
  2. In OpenSearch Dashboards, go to the Discover screen. The Discover screen is divided into three major sections: Search bar, index/display field list, and time-series display, as shown in Figure 5.
    Figure 5: OpenSearch Dashboards

  3. In OpenSearch Dashboards, select log-aws-securityhub-*, log-aws-vpcflowlogs-*, log-aws-cloudtrail-*, or any other index pattern, and add event.module to the display fields. event.module is a field that indicates where the log originated from. If you are collecting other threat information, such as Security Hub findings, @log-type is Security Hub and event.module indicates the integrated service that the finding originated from (for example, Amazon Inspector or Amazon Macie). After you have added event.module, filter on the Security Hub integrated service that you want to display (for example, Amazon Inspector). When testing the environment covered in this blog post outside a production context, you can use Kinesis Data Generator to generate sample user traffic; other tools are also available.
  4. Select the following on the dashboard to see the visualized information:
    • CloudTrail Summary
    • VpcFlowLogs Summary
    • GuardDuty Summary
    • All – Threat Hunting
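
The following is a minimal sketch for step 1 of the preceding list: it looks up your GuardDuty detector ID and generates sample findings to use as pseudo-incidents. It assumes that GuardDuty is already enabled in the account.

# Look up the GuardDuty detector in the current Region and generate sample findings.
DETECTOR_ID=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
aws guardduty create-sample-findings --detector-id "$DETECTOR_ID"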

Step 2: Configure alerts to match log criteria

Next, you will configure alerts to match log criteria. First you need to set the destination for alerts, and then set what to monitor.

To configure alerts

  1. In OpenSearch Dashboards, in the left menu, choose Alerting.
  2. To add the details of SNS, on the Destinations tab, choose Add destinations, and enter the following parameters:
    • Name: aes-siem-alert-destination
    • Type: Amazon SNS
    • SNS Alert: arn:aws:sns:<AWS-REGION>:<111111111111>:aes-siem-alert
      • Replace <111111111111> with your AWS account ID
      • Replace <AWS-REGION> with the Region you are using, for example, eu-west-1
    • IAM Role ARN: arn:aws:iam::<111111111111>:role/aes-siem-sns-role
      • Replace <111111111111> with your AWS account ID
  3. Choose Create to complete setting the alert destination.
    Figure 6: Edit alert destination

  4. In OpenSearch Dashboards, in the left menu, select Alerting. You will now set what to monitor; in this example, you monitor CloudTrail authentication failures. There are two normalized log times: @timestamp, the log occurrence time, and event.ingested, the SIEM reception time. Use event.ingested for logs with a large time lag between occurrence and reception. You can specify flexible conditions by selecting Define using extraction query for the filter definition.
  5. On the Monitors tab, choose Create monitor.
  6. Enter the following parameters. If there is no description, use the default value.
    • Name: Authentication failed
    • Method of definition: Define using extraction query
    • Indices: log-aws-cloudtrail-* (manual input, not pull-down)
    • Define extraction query: Enter the following query.
      {
        "query": {
          "bool": {
            "filter": [
              {"term": {"eventSource": "signin.amazonaws.com"}},
              {"term": {"event.outcome": "failure"}},
              {"range": {
                "event.ingested": {
                  "from": "{{period_end}}||-20m",
                  "to": "{{period_end}}"
                }
              }}
            ]
          }
        }
      }
      

  7. Enter the following remaining parameters of the monitor:
    • Frequency: By interval
    • Monitor schedule: Every 3 minutes
  8. Choose Create to create the monitor.

Step 3: Set up trigger to send email via Amazon SNS

Now you will set the alert firing condition, known as the trigger. This is the setting for alerting when the monitored conditions (Monitors) are met. By default, the alert will be triggered if the number of hits is greater than 0. In this step, you will not change the condition; you will only give the trigger a name.

To set up the trigger

  1. Select Create trigger and for Trigger name, enter Authentication failed trigger.
  2. Scroll down to Configure actions.
    Figure 7: Create trigger

  3. Set what the trigger should do (action). In this case, you want to publish to SNS. Set the following parameters for the body of the email:
    • Action name: Authentication failed action
    • Destination: Choose aes-siem-alert-destination – (Amazon SNS)
    • Message subject: (SIEM) Auth failure alert
    • Action throttling: Select Enable action throttling, and set throttle action to only trigger every 10 minutes.
    • Message: Copy and paste the following message into the text box. After pasting, choose Send test message at the bottom right of the screen to confirm that you can receive the test email.

      Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.

      Trigger: {{ctx.trigger.name}}

      Severity: {{ctx.trigger.severity}}

      @timestamp: {{ctx.results.0.hits.hits.0._source.@timestamp}}

      event.action: {{ctx.results.0.hits.hits.0._source.event.action}}

      error.message: {{ctx.results.0.hits.hits.0._source.error.message}}

      count: {{ctx.results.0.hits.total.value}}

      source.ip: {{ctx.results.0.hits.hits.0._source.source.ip}}

      source.geo.country_name: {{ctx.results.0.hits.hits.0._source.source.geo.country_name}}

    Figure 8: Configure actions

  4. You will receive an alert email in a few minutes. You can check the occurrence status, including the history, by the following method:
    1. In OpenSearch Dashboards, on the left menu, choose Alerting.
    2. On the Monitors tab, choose Authentication failed.
    3. You can check the status of the alert in the History pane.
    Figure 9: Email alert

Use case 1 shows you how to correlate various Security Hub findings through this OpenSearch Service SIEM solution. However, you can take the solution a step further and build more complex correlation checks by following the procedure in the blog post Correlate security findings with AWS Security Hub and Amazon EventBridge. This information can then be ingested into this OpenSearch Service SIEM solution for viewing on a single screen.

Use case 2: Store findings for longer than 90 days after last update date

Security Hub stores findings for a maximum of 90 days after their last update date, but your organization might require data storage beyond that period, with the flexibility to specify a custom retention period to meet your needs. The SIEM on Amazon OpenSearch Service solution creates a centralized S3 bucket where findings from Security Hub and various other services are collected and stored, and this bucket can be configured to store data as long as you require. The S3 bucket can persist data indefinitely, or you can create an S3 object lifecycle policy to set a custom retention timeframe. Lifecycle policies allow you to either transition objects between S3 storage classes or delete objects after a specified period. Alternatively, you can use S3 Intelligent-Tiering to allow the Amazon S3 service to move data between tiers, based on user access patterns.

Either lifecycle policies or S3 Intelligent-Tiering will allow you to optimize costs for data that is stored in S3, to keep data for archive or backup purposes when it is no longer available in Security Hub or OpenSearch Service. Within the solution, this centralized bucket is called aes-siem-xxxxxxxx-log and is configured to store data for OpenSearch Service to consume indefinitely. The Amazon S3 User Guide has instructions for configuring an S3 lifecycle policy that is explicitly defined by the user on the centralized bucket. Or you can follow the instructions for configuring intelligent tiering to allow the S3 service to manage which tier data is stored in automatically. After data is archived, you can use Amazon Athena to query the S3 bucket for historical information that has been removed from OpenSearch Service, because this S3 bucket acts as a centralized security event repository.
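
As an illustration, the following hedged sketch applies a lifecycle configuration to the centralized log bucket that transitions objects to the S3 Glacier storage class after one year and expires them after five years. The bucket name and retention periods are examples; adjust them to your own requirements.

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "ArchiveAndExpireSecurityLogs",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 1825}
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket aes-siem-111122223333-log \
  --lifecycle-configuration file://lifecycle.json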

Use case 3: Aggregate findings across multiple administrator accounts

There are cases where you might have multiple Security Hub administrator accounts within one or multiple organizations. For these use cases, you can consolidate findings across these multiple Security Hub administrator accounts into a single S3 bucket for centralized storage, archive, backup, and querying. This gives you the ability to create a single SIEM on OpenSearch Service to minimize the number of monitoring tools you need. In order to do this, you can use S3 replication to automatically copy findings to a centralized S3 bucket. You can follow this detailed walkthrough on how to set up the correct bucket permissions in order to allow replication between the accounts. You can also follow this related blog post to configure cross-Region Security Hub findings that are centralized in a single S3 bucket, if cross-Region replication is appropriate for your security needs. With cross-account S3 replication set up for Security Hub archived event data, you can import data from the centralized S3 bucket into OpenSearch Service by using the Lambda function within the solution in this blog post. This Lambda function automatically normalizes and enriches the log data and imports it into OpenSearch Service, so that users only need to configure data storage in the S3 bucket, and the Lambda function will automatically import the data.
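
The following is a hedged sketch of what the replication configuration on a source findings bucket might look like. The role name, bucket names, and account IDs are placeholders; both buckets must have versioning enabled, and the destination bucket policy must allow the replication role to write objects, as described in the linked walkthrough.

cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::111122223333:role/security-hub-replication-role",
  "Rules": [
    {
      "ID": "ReplicateFindingsToCentralBucket",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {"Prefix": ""},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {"Bucket": "arn:aws:s3:::aes-siem-444455556666-log"}
    }
  ]
}
EOF

aws s3api put-bucket-replication \
  --bucket security-hub-findings-111122223333 \
  --replication-configuration file://replication.json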

Conclusion

In this blog post, we showed how you can use Security Hub with a SIEM to store findings for longer than 90 days, aggregate findings across multiple administrator accounts, and correlate Security Hub findings with each other and other log sources. We used the solution to walk through building the SIEM and explained how Security Hub could be used within that solution to add greater flexibility. This post describes one solution to create your own SIEM using OpenSearch Service; however, we also recommend that you read the blog post Visualize AWS Security Hub Findings using Analytics and Business Intelligence Tools, in order to see a different method of consolidating and visualizing insights from Security Hub.

To learn more, you can also try out this solution through the new SIEM on Amazon OpenSearch Service workshop.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, please start a new thread on the Security Hub forum or contact AWS Support.

 

Want more AWS Security news? Follow us on Twitter.

Ely Kahn

Ely Kahn is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Anthony Pasquariello

Anthony Pasquariello is a Senior Solutions Architect at AWS based in New York City. He specializes in modernization and security for our advanced enterprise customers. Anthony enjoys writing and speaking about all things cloud. He’s pursuing an MBA, and received his MS and BS in Electrical & Computer Engineering.

Aashmeet Kalra

Aashmeet Kalra is a Principal Solutions Architect working in the Global and Strategic team at AWS in San Francisco. Aashmeet has over 17 years of experience designing and developing innovative solutions for customers globally. She specializes in advanced analytics, machine learning and builder/developer experience.

Grant Joslyn

Grant Joslyn is a solutions architect for the US state and local government public sector team at Amazon Web Services (AWS). He specializes in end user compute and cloud automation. He provides technical and architectural guidance to customers building secure solutions on AWS. He is a subject matter expert and thought leader for strategic initiatives that help customers embrace DevOps practices.

Akihiro Nakajima

Akihiro Nakajima is a Senior Solutions Architect, Security Specialist at Amazon Web Services Japan. He has more than 20 years of experience in security, specifically focused on incident analysis and response, threat hunting, and digital forensics. He leads development of open-source software, “SIEM on Amazon OpenSearch Service”.

Ransomware mitigation: Using Amazon WorkDocs to protect end-user data

Post Syndicated from James Perry original https://aws.amazon.com/blogs/security/ransomware-mitigation-using-amazon-workdocs-to-protect-end-user-data/

Amazon Web Services (AWS) has published whitepapers, blog articles, and videos with prescriptive guidance to assist you in developing an enterprise strategy to mitigate risks associated with ransomware and other destructive events. We also announced a strategic partnership with CrowdStrike and Presidio, through which we developed a Ransomware Risk Mitigation Kit and a Quick-Start engagement to assist with deployment, providing you with tools to deal with security events before and after they occur.

Developing a ransomware mitigation strategy often uses a risk-based approach, where priority is given to protecting mission-critical applications and data. Managing identified risks associated with individual end users is often deemed a lower priority. However, in many organizations, such as research universities, the work performed by individual researchers is the organizational mission.

End users are increasingly mobile. They’re working remotely, on the go, and frequently moving from one project to the next. They’re also collaborating across borders, time zones, and organizations. You need options for your employees to work securely from any location.

This post covers how you can help prevent, back up, and recover your critical end-user data from ransomware by using Amazon WorkDocs.

Introduction to Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you can create, edit, and share content, and because content is stored centrally on AWS, access it from anywhere, on any device. Amazon WorkDocs makes it easier to collaborate with others, and lets you share content, provide rich feedback, and collaboratively edit documents.

You can access Amazon WorkDocs on the web, or install apps for Windows, MacOS, Android, and iOS devices. In addition, the Amazon WorkDocs Companion lets you open and edit a file from the web client in a single step. When you edit a file, Companion saves your changes to Amazon WorkDocs as a new file version. Amazon WorkDocs Drive enables you to open and work with Amazon WorkDocs files on your computer’s desktop. And the Amazon WorkDocs SDK includes APIs that allow you to build new applications or create integrations with existing Amazon WorkDocs solutions and applications.

As illustrated in Figure 1, these features combine to enable end-user and team file storage, team content and collaboration workflows, secure and auditable content sharing, cloud-based file sharing, and mobile workforce enablement, with support for automation and extensibility.

Figure 1: Common use cases enabled by Amazon WorkDocs

Amazon WorkDocs security

Amazon WorkDocs is built with security in mind. Amazon WorkDocs files are stored using the highly durable AWS storage infrastructure, and are encrypted both in transit and at rest. The service supports multi-factor authentication (MFA), IP-based allow lists, and the ability to specify which AWS Region is used, to help meet data residency requirements. Your organization can set security policies that prevent your employees from sharing documents externally. Third-party auditors assess the security and compliance of Amazon WorkDocs as part of multiple AWS compliance programs, including SOC, PCI DSS, FedRAMP, HIPAA, ISO 9001, ISO 27001, ISO 27017, and ISO 27018.

Auto activation and authentication

Amazon WorkDocs uses a directory to store and manage organization information for your users and their documents. You can choose from three supported options: Simple Active Directory (Simple AD), Active Directory (AD) Connector, or AWS Managed Microsoft AD.

Simple AD

You can use Simple AD as a standalone directory in the cloud to support Windows workloads that need basic AD features and compatible AWS applications, or to support Linux workloads that need LDAP service. However, Simple AD does not support MFA. For more information, see Simple Active Directory.

AD Connector

AD Connector is a proxy service that provides an easy way to connect compatible AWS applications, such as Amazon WorkDocs, to your existing on-premises Microsoft Active Directory. With AD Connector, you can simply add one service account to your Active Directory. AD Connector also eliminates the need for directory synchronization, as well as the cost and complexity of hosting a federation infrastructure.

AWS Managed Microsoft AD

AWS Managed Microsoft AD is powered by Microsoft Windows Server Active Directory (AD), managed by AWS in the AWS Cloud. It enables you to migrate a broad range of Active Directory–aware applications to the AWS Cloud. AWS Managed Microsoft AD works with Microsoft SharePoint, Microsoft SQL Server Always-On Availability Groups, and many .NET applications. It also supports AWS managed applications and services, including Amazon WorkDocs.

You can attach a supported directory to a WorkDocs site during provisioning. When you do, an Amazon WorkDocs feature called Auto activation adds the users in the directory to the site as managed users, meaning they don’t need separate credentials to log in to your site. You can also create user groups, enable MFA, and configure single sign-on (SSO) for your Amazon WorkDocs site.

Ransomware risk mitigation with Amazon WorkDocs

Amazon WorkDocs also includes built-in security features that enable you to selectively prevent file downloads and changes, revert files to a previous version, and recover deleted files, all of which can mitigate impact and support recovery from a ransomware event.

File versioning

You can keep track of prior versions in Amazon WorkDocs with unlimited versioning. A new version of a file is created every time you save it. With Amazon WorkDocs, all feedback is associated with a specific file version, so you can refer back to comments in earlier iterations. Previous versions can be retrieved, as shown in Figure 2, when you access Amazon WorkDocs with a web browser.

Figure 2: File versioning in Amazon WorkDocs via web browser

Using the file versioning feature can help enable the restoration of an unlocked file that has been altered by ransomware to a previous version.
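
If you need to script a recovery, the same version history is available through the Amazon WorkDocs API. The following is a minimal boto3 sketch, not production code: the document ID is a placeholder, and it assumes that requesting Fields='SOURCE' on GetDocumentVersion returns a temporary download URL for the chosen version.

import boto3

workdocs = boto3.client('workdocs')

DOC_ID = 'd-1234567890abcdef'  # placeholder document ID

# List the version history for the document.
versions = workdocs.describe_document_versions(DocumentId=DOC_ID)['DocumentVersions']
for v in versions:
    print(v['Id'], v.get('Name'), v['CreatedTimestamp'])

# Fetch a temporary download URL for a chosen known-good version.
known_good = workdocs.get_document_version(
    DocumentId=DOC_ID,
    VersionId=versions[-1]['Id'],  # choose the version you trust by timestamp
    Fields='SOURCE'                # ask for a pre-signed source URL
)['Metadata']
print(known_good['Source']['ORIGINAL'])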

File recovery

When files or folders are deleted, they are stored in an end-user managed recycle bin, as shown in Figure 3, where they can be recovered by the end user if needed.

Figure 3: End-user file recovery from recycle bin in Amazon WorkDocs via web browser

After 30 days, files and folders are moved to a recovery bin managed by the Amazon WorkDocs site administrator, where they are retained for an additional 60 days by default before being permanently deleted. Site administrators can adjust this retention period to any value from 0 to 365 days; files are retained for the specified period and permanently deleted when the retention period ends.

In addition, customers can sync files from Amazon WorkDocs to Amazon S3 for additional resiliency.

Using the file recovery features can provide the ability to restore individual files and folders that were deleted—by ransomware or even just by accident. Note that as of today, file recovery works on a per file or folder basis.
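
Building on the Amazon S3 sync idea above, a scheduled job could copy the latest version of each file in a WorkDocs folder to a bucket you control. The following boto3 sketch is an illustration only: the folder ID and bucket name are placeholders, pagination and error handling are omitted, and it assumes that Fields='SOURCE' returns a temporary download URL.

import urllib.request

import boto3

workdocs = boto3.client('workdocs')
s3 = boto3.client('s3')

FOLDER_ID = 'f-1234567890abcdef'  # placeholder WorkDocs folder ID
BUCKET = 'DOC-EXAMPLE-BUCKET'     # backup bucket that you own

docs = workdocs.describe_folder_contents(FolderId=FOLDER_ID, Type='DOCUMENT')['Documents']
for doc in docs:
    latest_id = doc['LatestVersionMetadata']['Id']
    # Request a temporary download URL for the latest version of the document.
    meta = workdocs.get_document_version(
        DocumentId=doc['Id'], VersionId=latest_id, Fields='SOURCE'
    )['Metadata']
    body = urllib.request.urlopen(meta['Source']['ORIGINAL']).read()
    s3.put_object(Bucket=BUCKET, Key=meta['Name'], Body=body)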

File control

Amazon WorkDocs lets you control who can access, comment on, and download or print your files. And, because the Amazon WorkDocs web client performs remote file rendering via HTML (see supported file types), users gain protection they would not otherwise be afforded when viewing potentially infected files locally. This, combined with the ability to prevent a file from being downloaded as illustrated in Figure 4, can help to mitigate the risk of malware spreading.

You can also lock files while making changes, and enable settings that prevent edits from being overwritten by other contributors, eliminating the need to coordinate changes. You can also disable feedback when you’ve completed a file. When you lock a file, as illustrated in Figure 4, a new version of that file cannot be uploaded until you unlock the file. If someone else needs access to the file, they can request that you unlock it, and you’ll be notified of the request.

Figure 4: End-user file lock settings in Amazon WorkDocs via web browser

Using the file locking feature can prevent ransomware from making unauthorized changes (such as encrypting) to a locked file.

Conclusion

In this blog post, I showed how AWS customers can help prevent, back up, and recover critical end-user data from ransomware incidents by using the file versioning, recovery, and control features of Amazon WorkDocs.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

James Perry

James is the Solutions Architecture Security Leader for the Amazon Web Services Worldwide Public Sector Education and State & Local Government team.

How to set up federated single sign-on to AWS using Google Workspace

Post Syndicated from Wei Chen original https://aws.amazon.com/blogs/security/how-to-set-up-federated-single-sign-on-to-aws-using-google-workspace/

Organizations that want to federate their external identity provider (IdP) to AWS typically do so through AWS Single Sign-On (AWS SSO), AWS Identity and Access Management (IAM), or both. With AWS SSO, you configure federation once and manage access to all of your AWS accounts centrally. With AWS IAM, you configure federation to each AWS account and manage access individually for each account. AWS SSO supports identity synchronization through the System for Cross-domain Identity Management (SCIM) v2.0 for several identity providers; for IdPs not currently supported, you can provision users manually. Alternatively, you can federate to AWS from Google Workspace through IAM federation, which is what this post covers.

Google Workspace offers a single sign-on service based on the Security Assertion Markup Language (SAML) 2.0. Users can use this service to access your AWS resources by using their existing Google credentials. For users to whom you grant access, an additional SAML app appears in their Google Workspace console. When your users choose this SAML app, they are redirected to the AWS Management Console.

Solution Overview

In this solution, you will create a SAML identity provider in IAM to establish a trusted communication channel across which user authentication information may be securely passed with your Google IdP in order to permit your Google Workspace users to access the AWS Management Console. You, as the AWS administrator, delegate responsibility for user authentication to a trusted IdP, in this case Google Workspace. Google Workspace leverages SAML 2.0 messages to communicate user authentication information between Google and your AWS account. The information contained within the SAML 2.0 messages allows an IAM role to grant the federated user permissions to sign in to the AWS Management Console and access your AWS resources. The IAM policy attached to the role they select determines which permissions the federated user has in the console.

Figure 1: Login process for IAM federation

Figure 1 illustrates the login process for IAM federation. From the federated user’s perspective, this process happens transparently: the user starts at the Google Workspace portal and ends up at the AWS Management Console, without having to supply yet another user name and password.

  1. The user begins by browsing to your organization’s portal and selects the option to go to the AWS Management Console. In your organization, the portal is typically a function of your IdP that handles the exchange of trust between your organization and AWS. In Google Workspace, you navigate to https://myaccount.google.com/ and select the nine dots icon in the top right corner. This shows you a list of apps, one of which will log you in to AWS. This blog post will show you how to configure this custom app.
    Figure 2: Google Account page

  2. The portal verifies the user’s identity in your organization.
  3. The portal generates a SAML authentication response that includes assertions that identify the user and include attributes about the user. The portal sends this response to the client browser. Although not discussed here, you can also configure your IdP to include a SAML assertion attribute called SessionDuration that specifies how long the console session is valid. You can also configure the IdP to pass attributes as session tags.
  4. The client browser is redirected to the AWS single sign-on endpoint and posts the SAML assertion.
  5. The endpoint requests temporary security credentials on behalf of the user, and creates a console sign-in URL that uses those credentials.
  6. AWS sends the sign-in URL back to the client as a redirect.
  7. The client browser is redirected to the AWS Management Console. If the SAML authentication response includes attributes that map to multiple IAM roles, the user is first prompted to select the role for accessing the console.

The list below is a high-level view of the specific step-by-step procedures needed to set up federated single sign-on access via Google Workspace.

The setup

Follow these top-level steps to set up federated single sign-on to your AWS resources by using Google Workspace:

  1. Download the Google identity provider (IdP) information.
  2. Create the IAM SAML identity provider in your AWS account.
  3. Create roles for your third-party identity provider.
  4. Assign the user’s role in Google Workspace.
  5. Set up Google Workspace as a SAML identity provider (IdP) for AWS.
  6. Test the integration between Google Workspace and AWS IAM.
  7. Roll out to a wider user base.

Detailed procedures for each of these steps compose the remainder of this blog post.

Step 1. Download the Google identity provider (IdP) information

First, let’s get the SAML metadata that contains essential information to enable your AWS account to authenticate the IdP and locate the necessary communication endpoint locations:

  1. Log in to the Google Workspace Admin console.
  2. From the Admin console Home page, select Security > Settings > Set up single sign-on (SSO) with Google as SAML Identity Provider (IdP).
    Figure 3: Accessing the “single sign-on for SAML applications” setting

  3. Choose Download Metadata under IdP metadata.
    Figure 4: The “SSO with Google as SAML IdP” page

Step 2. Create the IAM SAML identity provider in your account

Now, create an IAM IdP for Google Workspace in order to establish the trust relationship between Google Workspace and your AWS account. The IAM IdP you create is an entity within your AWS account that describes the external IdP service whose users you will configure to assume IAM roles.

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Identity providers and then choose Add provider.
  3. For Configure provider, choose SAML.
  4. Type a name for the identity provider (such as GoogleWorkspace).
  5. For Metadata document, select Choose file then specify the SAML metadata document that you downloaded in Step 1–c.
  6. Verify the information that you have provided. When you are done, choose Add provider.
    Figure 5: Adding an Identity provider

  7. Document the Amazon Resource Name (ARN) of the identity provider you just created. The ARN should look similar to this:

    arn:aws:iam::123456789012:saml-provider/GoogleWorkspace
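
If you prefer to script this step, the same provider can be created with the AWS SDKs. The following boto3 sketch assumes the metadata downloaded in Step 1 was saved locally as GoogleWorkspace-metadata.xml (a placeholder file name):

import boto3

iam = boto3.client('iam')

# GoogleWorkspace-metadata.xml is a placeholder for the file downloaded in Step 1.
with open('GoogleWorkspace-metadata.xml') as f:
    metadata = f.read()

provider = iam.create_saml_provider(
    SAMLMetadataDocument=metadata,
    Name='GoogleWorkspace'
)
print(provider['SAMLProviderArn'])  # for example arn:aws:iam::123456789012:saml-provider/GoogleWorkspace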

Step 3. Create roles for your third-party Identity Provider

For users accessing the AWS Management Console, the IAM role that the user assumes allows access to resources within your AWS account. The role is where you define what you allow a federated user to do after they sign in.

  1. To create an IAM role, go to the AWS IAM console. Select Roles > Create role.
  2. Choose the SAML 2.0 federation role type.
  3. For SAML Provider, select the provider which you created in Step 2.
  4. Choose Allow programmatic and AWS Management Console access to create a role that can be assumed programmatically and from the AWS Management Console.
  5. Review your SAML 2.0 trust information and then choose Next: Permissions.
    Figure 6: Reviewing your SAML 2.0 trust information

GoogleSAMLPowerUserRole:

  1. For this walkthrough, you are going to create two roles that can be assumed by SAML 2.0 federation. For GoogleSAMLPowerUserRole, you will attach the PowerUserAccess AWS managed policy. This policy provides full access to AWS services and resources, but does not allow management of users and groups. Choose Filter policies, then select AWS managed – job function from the dropdown. This will show a list of AWS managed policies designed around specific job functions.
    Figure 7: Selecting the AWS managed job function

  2. To attach the policy, select PowerUserAccess. Then choose Next: Tags, then Next: Review.
    Figure 8: Attaching the PowerUserAccess policy to your role

  3. Finally, choose Create role to finalize creation of your role.
    Figure 9: Creating your role

GoogleSAMLViewOnlyRole

Repeat the preceding steps for the GoogleSAMLViewOnlyRole, attaching the ViewOnlyAccess AWS managed policy.

Figure 10: Creating the GoogleSAMLViewOnlyRole

Figure 11: Attaching the ViewOnlyAccess permissions policy

  1. Document the ARNs of both roles. They should be similar to the following:

    arn:aws:iam::123456789012:role/GoogleSAMLPowerUserRole and

    arn:aws:iam::123456789012:role/GoogleSAMLViewOnlyRole.
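
These roles can also be created programmatically. The boto3 sketch below is a minimal illustration (the account ID is a placeholder): it creates GoogleSAMLPowerUserRole with the same SAML trust policy that the console generates and attaches the PowerUserAccess managed policy. Repeat it with ViewOnlyAccess for the second role.

import json

import boto3

iam = boto3.client('iam')
provider_arn = 'arn:aws:iam::123456789012:saml-provider/GoogleWorkspace'  # from Step 2 (placeholder account ID)

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider_arn},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }]
}

role = iam.create_role(
    RoleName='GoogleSAMLPowerUserRole',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)
iam.attach_role_policy(
    RoleName='GoogleSAMLPowerUserRole',
    PolicyArn='arn:aws:iam::aws:policy/PowerUserAccess'
)
print(role['Role']['Arn'])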

Step 4. Assign the user’s role in Google Workspace

Here you will specify the role or roles that this user can assume in AWS.

  1. Log in to the Google Admin console.
  2. From the Admin console Home page, go to Directory > Users and select Manage custom attributes from the More dropdown, and choose Add Custom Attribute.
  3. Configure the custom attribute as follows:

    Category: AWS
    Description: Amazon Web Services Role Mapping

    For Custom fields, enter the following values:

    Name: AssumeRoleWithSaml
    Info type: Text
    Visibility: Visible to user and admin
    No. of values: Multi-value
  4. Choose Add. The new category should appear in the Manage user attributes page.
    Figure 12: Adding the custom attribute

  5. Navigate to Users, and find the user you want to allow to federate into AWS. Select the user’s name to open their account page, then choose User Information.
  6. Select the custom attribute you recently created, named AWS. Add two rows, each of which will include the values you recorded earlier, using the format below for each AssumeRoleWithSaml row.

    Row 1:
    arn:aws:iam::123456789012:role/GoogleSAMLPowerUserRole,arn:aws:iam::123456789012:saml-provider/GoogleWorkspace

    Row 2:
    arn:aws:iam::123456789012:role/GoogleSAMLViewOnlyRole,arn:aws:iam::123456789012:saml-provider/GoogleWorkspace

    The AssumeRoleWithSaml value is constructed as the role ARN (from Step 3), followed by a comma, followed by the identity provider ARN (from Step 2). This value is passed as the SAML attribute value for the attribute named https://aws.amazon.com/SAML/Attributes/Role. The final result will look similar to the following:

    Figure 13: Adding the roles that the user can assume

Step 5. Set up Google Workspace as a SAML identity provider (IdP) for AWS

Now you’ll set up the SAML app in your Google Workspace account. This includes adding the SAML attributes that the AWS Management Console expects in order to allow a SAML-based authentication to take place.

Log into the Google Admin console.

  1. From the Admin console Home page, go to Apps > Web and mobile apps.
  2. Choose Add custom SAML app from the Add App dropdown.
  3. Enter AWS Single-Account Access for App name and upload an optional App icon to identify your SAML application, and select Continue.
    Figure 14: Naming the custom SAML app and setting the icon

  4. Fill in the following values:

    ACS URL: https://signin.aws.amazon.com/saml
    Entity ID: urn:amazon:webservices
    Name ID format: EMAIL
    Name ID: Basic Information > Primary email

    Note: Your primary email will become your role’s AWS session name

  5. Choose CONTINUE.
    Figure 15: Adding the custom SAML app

  6. AWS requires the IdP to issue a SAML assertion with some mandatory attributes (known as claims). The AWS documentation explains how to configure the SAML assertion. In short, you need to create an assertion with the following:
    • An attribute named https://aws.amazon.com/SAML/Attributes/Role (this name is an identifier, not an actual URL). This element contains one or more AttributeValue elements that list the IAM identity provider and role to which the user is mapped by your IdP. The IAM role and IAM identity provider are specified as a comma-delimited pair of ARNs, in the same format as the RoleArn and PrincipalArn parameters that are passed to AssumeRoleWithSAML (a short sketch of that call appears at the end of this step).
    • An attribute named https://aws.amazon.com/SAML/Attributes/RoleSessionName (again, an identifier rather than an actual URL) with a string value. This is the federated user’s role session name in AWS.
    • A name identifier (NameId) that is used to identify the subject of a SAML assertion.

      Map the Google Directory attributes to the app attributes as follows:

      • AWS > AssumeRoleWithSaml maps to https://aws.amazon.com/SAML/Attributes/Role
      • Basic Information > Primary email maps to https://aws.amazon.com/SAML/Attributes/RoleSessionName

      Figure 16: Mapping between Google Directory attributes and SAML attributes

  7. Choose FINISH and save the mapping.
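
For reference, the comma-delimited Role attribute maps directly to the RoleArn and PrincipalArn parameters of the AWS Security Token Service (STS) AssumeRoleWithSAML API. The boto3 sketch below shows the exchange that the AWS sign-in endpoint performs on the user’s behalf; the ARNs are placeholders, and saml-assertion.b64 is a hypothetical file holding a captured base64-encoded SAML response.

import boto3

sts = boto3.client('sts')

# saml-assertion.b64 is a hypothetical file containing the base64-encoded
# SAML response that the browser posts to https://signin.aws.amazon.com/saml.
with open('saml-assertion.b64') as f:
    saml_response = f.read()

creds = sts.assume_role_with_saml(
    RoleArn='arn:aws:iam::123456789012:role/GoogleSAMLPowerUserRole',        # placeholder
    PrincipalArn='arn:aws:iam::123456789012:saml-provider/GoogleWorkspace',  # placeholder
    SAMLAssertion=saml_response,
    DurationSeconds=3600
)['Credentials']
print(creds['AccessKeyId'], creds['Expiration'])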

Step 6. Test the integration between Google Workspace and AWS IAM

  1. Log into the Google Admin portal.
  2. From the Admin console Home page, go to Apps > Web and mobile apps.
  3. Select the application you created in Step 5.
  4. At the top left, select TEST SAML LOGIN, then choose ALLOW ACCESS within the popup box.
    Figure 18: Testing the SAML login

  5. Select ON for everyone in the Service status section, and choose SAVE. This will allow every user in Google Workspace to see the new SAML custom app.
    Figure 19: Saving the custom app settings

  6. Now navigate to Web and mobile apps and choose TEST SAML LOGIN again. Amazon Web Services should open in a separate tab and display two roles for users to choose from:
    Figure 20: Testing SAML login again

    Figure 21: Selecting the IAM role you wish to assume for console access

  7. Select the desired role and select Sign in.
  8. You should now be redirected to the AWS Management Console home page.
  9. Google Workspace users should now be able to access the AWS application from their workspace:
    Figure 22: Viewing the AWS custom app

Conclusion

By following the steps in this blog post, you’ve configured your Google Workspace directory and AWS accounts to allow SAML-based federated sign-on for selected Google Workspace users. Using federation instead of IAM users helps centralize identity management, making it easier to adopt a multi-account strategy.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Wei Chen

Wei Chen is a Sr. Solutions Architect at Amazon Web Services, based in Austin, TX. He has more than 20 years of experience assisting customers with building solutions to significantly complex challenges. At AWS, Wei helps customers achieve their strategic business objectives by rearchitecting their applications to take full advantage of the cloud. He specializes in compliance frameworks, technical compliance programs, physical security, security processes, and AWS Security services.

Roy Tokeshi

Roy is a Solutions Architect for Amazon End User Computing. He enjoys building with AWS, CNC machines, laser engravers, and IoT. He likes to help customers build mechanisms to create business value.

Michael Chan

Michael is a Solutions Architect for AWS Identity. He enjoys understanding customer problems with AWS IAM and working backwards to provide practical solutions.

Customers can now request the AWS CyberGRX report for their third-party supplier due diligence

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/customers-can-now-request-the-aws-cybergrx-report-for-their-third-party-supplier-due-diligence/

CyberGRX

Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). We are continuously expanding our compliance programs to provide customers with more tools and resources to be able to perform effective due diligence on AWS. We are excited to announce the availability of the AWS CyberGRX report for our customers.

With the increase in adoption of cloud platforms and services across multiple sectors and industries, AWS has become one of the most critical components of customers’ third-party ecosystems. Regulated customers, such as those in the financial services sector, are held to higher standards by their regulators and auditors when it comes to exercising effective due diligence on their third parties. Customers are using third-party cyber risk management (TPCRM) platforms such as CyberGRX to better manage risks from their evolving third-party ecosystems and drive operational efficiencies. To help customers in these efforts, AWS has completed a CyberGRX assessment of its security posture. The assessment is performed annually and is validated by independent CyberGRX partners.

The CyberGRX assessment applies a dynamic approach to third-party risk assessment, one that is updated as the risk level of cloud service providers changes or as AWS updates its security posture and controls. This approach eliminates outdated static spreadsheets for third-party risk assessments, in which the risk matrices are not updated in near real time. The CyberGRX assessment provides advanced capabilities by integrating AWS responses with analytics, threat intelligence, and sophisticated risk models to provide an in-depth view of the AWS security posture. In addition, AWS customers can use CyberGRX’s Framework Mapper feature to map AWS assessment controls and responses to well-known industry standards and frameworks (such as NIST 800-53, the NIST Cybersecurity Framework (CSF), ISO 27001, PCI DSS, and HIPAA), which can significantly reduce customers’ third-party supplier due-diligence burden.

The AWS CyberGRX report is available to all customers free of cost. Customers can request access to the report by completing an access request form, available on the AWS CyberGRX page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page, or if you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Niyaz Noor

Niyaz is the Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across Europe and other regions. During his professional career, he has helped multiple cloud service providers in obtaining global and regional security certification. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto. He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has previously worked in risk management, security assurance, and technology audits for the past 12 years.

SOC reports now available in Spanish

Post Syndicated from Rodrigo Fiuza original https://aws.amazon.com/blogs/security/soc-reports-now-available-in-spanish/

At Amazon Web Services (AWS), we continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs. We are pleased to announce that the Fall 2021 AWS SOC 1, SOC 2, and SOC 3 reports are now available in Spanish. These translated reports will help drive greater engagement and alignment with customer and regulatory requirements across Latin America and Spain.

The English language version of the reports should be taken into account for the independent opinion issued by the auditors and the control test results; it serves as a complement to the Spanish version.

The translated SOC reports in Spanish are available through AWS Artifact and will be published twice a year, in alignment with the Fall and Spring reporting cycles.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Los informes SOC ahora están disponibles en español

Seguimos escuchando a nuestros clientes, reguladores y partes interesadas para comprender sus necesidades en relación con los programas de auditoría, garantía, certificación y atestación en Amazon Web Services (AWS). Nos complace anunciar que los informes SOC 1, SOC 2 y SOC 3 de AWS de otoño de 2021 ya están disponibles en español. Estos informes traducidos ayudarán a impulsar un mayor compromiso y alineación con los requisitos regulatorios y de los clientes en las regiones de América Latina y España.

La versión en inglés de los informes debe tenerse en cuenta en relación con la opinión independiente emitida por los auditores y los resultados de las pruebas de control, como complemento de las versiones en español.

Los informes SOC traducidos en español están disponibles en AWS Artifact. Los informes SOC traducidos en español se publicarán dos veces al año según los ciclos de informes de otoño y primavera.

Valoramos sus comentarios y preguntas; no dude en ponerse en contacto con nuestro equipo o enviarnos sus comentarios sobre esta publicación a través de nuestra página Contáctenos.

Si tienes comentarios sobre esta publicación, envíalos en la sección Comentarios a continuación.

¿Desea obtener más noticias sobre seguridad de AWS? Síguenos en Twitter.
 

Rodrigo Fiuza

Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo has worked in risk management, security assurance, and technology audits for the past 12 years.

Nimesh Ravasa

Nimesh is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Nimesh has 14 years of experience in information security and holds CISSP, CISA, PMP, CSX, AWS Solution Architect – Associate, and AWS Security Specialty certifications.

Emma Zhang

Emma is a Compliance Program Manager at Amazon Web Services. She leads multiple process improvement projects across multiple compliance programs within AWS. Emma has 8 years of experience in risk management, IT risk assurance, and technology risk advisory.

Streamlining evidence collection with AWS Audit Manager

Post Syndicated from Nicholas Parks original https://aws.amazon.com/blogs/security/streamlining-evidence-collection-with-aws-audit-manager/

In this post, we will show you how to deploy a solution into your Amazon Web Services (AWS) account that enables you to simply attach manual evidence to controls using AWS Audit Manager. Making evidence-collection as seamless as possible minimizes audit fatigue and helps you maintain a strong compliance posture.

As an AWS customer, you can use APIs to deliver high quality software at a rapid pace. If you have compliance-focused teams that rely on manual, ticket-based processes, you might find it difficult to document audit changes as those changes increase in velocity and volume.

As your organization works to meet audit and regulatory obligations, you can save time by incorporating audit compliance processes into a DevOps model. You can use modern services like Audit Manager to make this easier. Audit Manager automates evidence collection and generates reports, which helps reduce manual auditing efforts and enables you to scale your cloud auditing capabilities along with your business.

AWS Audit Manager uses services such as AWS Security Hub, AWS Config, and AWS CloudTrail to automatically collect and organize evidence, such as resource configuration snapshots, user activity, and compliance check results. However, for controls represented in your software or processes without an AWS service-specific metric to gather, you need to manually create and provide documentation as evidence to demonstrate that you have established organizational processes to maintain compliance. The solution in this blog post streamlines these types of activities.

Solution architecture

This solution creates an HTTPS API endpoint that allows integration with other software development lifecycle (SDLC) solutions, IT service management (ITSM) products, and clinical trial management system (CTMS) solutions that capture trial process change amendment documentation (for example, for pharmaceutical companies that use AWS to build robust pharmacovigilance solutions). The endpoint can also serve as a backend microservice to an application that allows contract research organization (CRO) investigators to add their compliance supporting documentation.

In this solution’s current form, you can submit an evidence file payload along with the assessment and control details to the API and this solution will tie all the information together for the audit report. This post and solution is directed towards engineering teams who are looking for a way to accelerate evidence collection. To maximize the effectiveness of this solution, your engineering team will also need to collaborate with cross-functional groups, such as audit and business stakeholders, to design a process and service that constructs and sends the message(s) to the API and to scale out usage across the organization.

To download the code for this solution, and the configuration that enables you to set up auto-ingestion of manual evidence, see the aws-audit-manager-manual-evidence-automation GitHub repository.

Architecture overview

In this solution, you use AWS Serverless Application Model (AWS SAM) templates to build the solution and deploy to your AWS account. See Figure 1 for an illustration of the high-level architecture.

Figure 1. The architecture of the AWS Audit Manager automation solution

The SAM template creates resources that support the following workflow:

  1. A client can call an Amazon API Gateway endpoint by sending a payload that includes assessment details and the evidence payload.
  2. An AWS Lambda function implements the API to handle the request.
  3. The Lambda function uploads the evidence to an Amazon Simple Storage Service (Amazon S3) bucket (3a) and uses AWS Key Management Service (AWS KMS) to encrypt the data (3b).
  4. The Lambda function also starts the AWS Step Functions workflow (a minimal sketch of this call appears after this list).
  5. Within the Step Functions workflow, a Standard Workflow calls two Lambda functions. The first looks for a matching control within an assessment, and the second updates the control within the assessment with the evidence.
  6. When the Step Functions workflow concludes, it sends a notification for success or failure to subscribers of an Amazon Simple Notification Service (Amazon SNS) topic.
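
As context for step 4, starting a Standard Workflow from a Lambda function comes down to a single Step Functions API call. The sketch below is an illustration only; the state machine ARN and input keys are placeholders, not the repository’s actual payload.

import json

import boto3

sfn = boto3.client('stepfunctions')

sfn.start_execution(
    # Placeholder ARN; the AWS SAM template creates the real state machine.
    stateMachineArn='arn:aws:states:us-east-1:123456789012:stateMachine:EvidenceWorkflow',
    input=json.dumps({
        'assessmentName': 'GxP21cfr11',            # hypothetical input shape
        'controlSetName': 'General requirements',
        'controlIdName': '11.100(a)',
        's3ResourcePath': 's3://DOC-EXAMPLE-BUCKET/evidence/change-record.pdf'
    })
)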

Deploy the solution

The project available in the aws-audit-manager-manual-evidence-automation GitHub repository contains source code and supporting files for a serverless application you can deploy with the AWS SAM command line interface (CLI). It includes the following files and folders:

src – Code for the application’s Lambda implementation of the Step Functions workflow, along with a Step Functions definition file.
template.yml – A template that defines the application’s AWS resources.

Resources for this project are defined in the template.yml file. You can update the template to add AWS resources through the same deployment process that updates your application code.

Prerequisites

This solution assumes the following:

  1. AWS Audit Manager is enabled.
  2. You have already created an assessment in AWS Audit Manager.
  3. You have the necessary tools to use the AWS SAM CLI (see details in the table that follows).

For more information about setting up Audit Manager and selecting a framework, see Getting started with Audit Manager in the blog post AWS Audit Manager Simplifies Audit Preparation.

The AWS SAM CLI is an extension of the AWS CLI that adds functionality for building and testing Lambda applications. The AWS SAM CLI uses Docker to run your functions in an Amazon Linux environment that matches Lambda. It can also emulate your application’s build environment and API.

To use the AWS SAM CLI, you need the following tools:

AWS SAM CLI – Install the AWS SAM CLI.
Node.js – Install Node.js 14, including the npm package management tool.
Docker – Install Docker Community Edition.

To deploy the solution

  1. Open your terminal and use the following command to create a folder to clone the project into, then navigate to that folder. Be sure to replace <FolderName> with your own value.

    mkdir Desktop/<FolderName> && cd $_

  2. Clone the project into the folder you just created by using the following command.

    git clone https://github.com/aws-samples/aws-audit-manager-manual-evidence-automation.git

  3. Navigate into the newly created project folder by using the following command.

    cd aws-audit-manager-manual-evidence-automation

  4. In the AWS SAM shell, use the following command to build the source of your application.

    sam build

  5. In the AWS SAM shell, use the following command to package and deploy your application to AWS. Be sure to replace <DOC-EXAMPLE-BUCKET> with your own unique S3 bucket name.

    sam deploy --guided --parameter-overrides paramBucketName=<DOC-EXAMPLE-BUCKET>

  6. When prompted, enter the AWS Region where AWS Audit Manager was configured. For the rest of the prompts, leave the default values.
  7. To activate the IAM authentication feature for API Gateway, override the default value of the following parameter.

    paramUseIAMwithGateway=AWS_IAM

To test the deployed solution

After you deploy the solution, run an invocation like the one below for an assessment (using curl). Be sure to replace <YOURAPIENDPOINT> and <AWS REGION> with your own values.

curl --location --request POST \
'https://<YOURAPIENDPOINT>.execute-api.<AWS REGION>.amazonaws.com/Prod' \
--header 'x-api-key: ' \
--form 'payload=@"<PATH TO FILE>"' \
--form 'AssessmentName="GxP21cfr11"' \
--form 'ControlSetName="General requirements"' \
--form 'ControlIdName="11.100(a)"'

Check to see that your file is correctly attached to the control for your assessment.

Form-data interface parameters

The API implements a form-data interface that expects four parameters:

  1. AssessmentName: The name for the assessment in Audit Manager. In this example, the AssessmentName is GxP21cfr11.
  2. ControlSetName: The display name for a control set within an assessment. In this example, the ControlSetName is General requirements.
  3. ControlIdName: The name of a particular control within a control set. In this example, the ControlIdName is 11.100(a).
  4. Payload: The file representing evidence to be uploaded.

As a refresher of Audit Manager concepts, evidence is collected for a particular control. Controls are grouped into control sets. Control sets can be grouped into a particular framework. The assessment is considered an implementation, or an instance, of the framework. For more information, see AWS Audit Manager concepts and terminology.
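
Under the hood, attaching manual evidence comes down to the Audit Manager evidence APIs. The boto3 sketch below illustrates the kind of call involved; it is not the repository’s actual code, and the IDs are placeholders for the identifiers that the solution’s Lambda functions resolve from the names you pass to the API.

import boto3

auditmanager = boto3.client('auditmanager')

response = auditmanager.batch_import_evidence_to_assessment_control(
    assessmentId='11111111-2222-3333-4444-555555555555',   # placeholder IDs
    controlSetId='66666666-7777-8888-9999-000000000000',
    controlId='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
    manualEvidence=[
        {'s3ResourcePath': 's3://DOC-EXAMPLE-BUCKET/evidence/change-record.pdf'}
    ]
)
print(response.get('errors', []))  # an empty list means the evidence was attached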

To clean up the deployed solution

To clean up the solution, use the following commands to delete the AWS CloudFormation stack and your S3 bucket. Be sure to replace <YourStackId> and <DOC-EXAMPLE-BUCKET> with your own values.

aws cloudformation delete-stack --stack-name <YourStackId>
aws s3 rb s3://<DOC-EXAMPLE-BUCKET> --force

Conclusion

This solution provides a way to improve coordination between your software delivery organization and compliance professionals. It allows your organization to continuously deliver new updates without overwhelming your security professionals with manual audit review tasks.

Next steps

There are various ways to extend this solution.

  1. Update the API Lambda implementation to be a webhook for your favorite software development lifecycle (SDLC) or IT service management (ITSM) solution.
  2. Modify the steps within the Step Functions state machine to more closely match your unique compliance processes.
  3. Use AWS CodePipeline to start Step Functions state machines natively, or integrate a variation of this solution with any continuous compliance workflow that you have.

Learn more about AWS Audit Manager, DevOps, and AWS for Health, and start building!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Nicholas Parks

Nicholas has been using AWS since 2010 across various enterprise verticals including healthcare, life sciences, financial, retail, and telecommunications. Nicholas focuses on modernizations in pursuit of new revenue as well as application migrations. He specializes in Lean, DevOps cultural change, and Continuous Delivery.

Brian Tang

Brian Tang is an AWS Solutions Architect based out of Boston, MA. He has 10 years of experience helping enterprise customers across a wide range of industries complete digital transformations by migrating business-critical workloads to the cloud. His core interests include DevOps and serverless-based solutions. Outside of work, he loves rock climbing and playing guitar.

New – Additional Checksum Algorithms for Amazon S3

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/

Amazon Simple Storage Service (Amazon S3) is designed to provide 99.999999999% (11 9s) of durability for your objects and for the metadata associated with your objects. You can rest assured that S3 stores exactly what you PUT, and returns exactly what is stored when you GET. In order to make sure that the object is transmitted back-and-forth properly, S3 uses checksums, basically a kind of digital fingerprint.

S3’s PutObject function already allows you to pass the MD5 checksum of the object, and only accepts the operation if the value that you supply matches the one computed by S3. While this allows S3 to detect data transmission errors, it does mean that you need to compute the checksum before you call PutObject or after you call GetObject. Further, computing checksums for large (multi-GB or even multi-TB) objects can be computationally intensive, and can lead to bottlenecks. In fact, some large S3 users have built special-purpose EC2 fleets solely to compute and validate checksums.

New Checksum Support
Today I am happy to tell you about S3’s new support for four checksum algorithms. It is now very easy for you to calculate and store checksums for data stored in Amazon S3 and to use the checksums to check the integrity of your upload and download requests. You can use this new feature to implement the digital preservation best practices and controls that are specific to your industry. In particular, you can specify the use of any one of four widely used checksum algorithms (SHA-1, SHA-256, CRC-32, and CRC-32C) when you upload each of your objects to S3.

Here are the principal aspects of this new feature:

Object Upload – The newest versions of the AWS SDKs compute the specified checksum as part of the upload, and include it in an HTTP trailer at the conclusion of the upload. You also have the option to supply a precomputed checksum. Either way, S3 will verify the checksum and accept the operation if the value in the request matches the one computed by S3. In combination with the use of HTTP trailers, this feature can greatly accelerate client-side integrity checking.

Multipart Object Upload – The AWS SDKs now take advantage of client-side parallelism and compute checksums for each part of a multipart upload. The checksums for all of the parts are themselves checksummed and this checksum-of-checksums is transmitted to S3 when the upload is finalized.

Checksum Storage & Persistence – The verified checksum, along with the specified algorithm, are stored as part of the object’s metadata. If Server-Side Encryption with KMS Keys is requested for the object, then the checksum is stored in encrypted form. The algorithm and the checksum stick to the object throughout its lifetime, even if it changes storage classes or is superseded by a newer version. They are also transferred as part of S3 Replication.

Checksum Retrieval – The new GetObjectAttributes function returns the checksum for the object and (if applicable) for each part.

Checksums in Action
You can access this feature from the AWS Command Line Interface (CLI), AWS SDKs, or the S3 Console. In the console, I enable the Additional Checksums option when I prepare to upload an object:

Then I choose a Checksum function:

If I have already computed the checksum I can enter it, otherwise the console will compute it.

After the upload is complete I can view the object’s properties to see the checksum:

The checksum function for each object is also listed in the S3 Inventory Report.

From my own code, the SDK can compute the checksum for me:

with open(file_path, 'rb') as file:
    r = s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=file,
        ChecksumAlgorithm='sha1'
    )

Or I can compute the checksum myself and pass it to put_object:

with open(file_path, 'rb') as file:
    r = s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=file,
        ChecksumSHA1='fUM9R+mPkIokxBJK7zU5QfeAHSy='
    )
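
For reference, the value that ChecksumSHA1 expects is the base64 encoding of the binary digest, not the hex string. A minimal sketch of computing it, reusing the file_path variable from the snippets above:

import base64
import hashlib

with open(file_path, 'rb') as file:               # reuses file_path from the snippets above
    digest = hashlib.sha1(file.read()).digest()   # 20-byte binary digest

checksum = base64.b64encode(digest).decode('ascii')
# Pass this value as ChecksumSHA1 in put_object, as shown above.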

When I retrieve the object, I specify checksum mode to indicate that I want the returned object validated:

r = s3.get_object(Bucket=bucket, Key=key, ChecksumMode='ENABLED')

The actual validation happens when I read the object from r['Body'], and an exception will be raised if there’s a mismatch.
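
The stored checksum can also be read back without downloading the object, using the GetObjectAttributes call mentioned earlier (a recent SDK version is assumed):

r = s3.get_object_attributes(
    Bucket=bucket,
    Key=key,
    ObjectAttributes=['Checksum', 'ObjectParts']
)
print(r['Checksum'])          # for example {'ChecksumSHA1': 'fUM9R+mPkIokxBJK7zU5QfeAHSy='}
print(r.get('ObjectParts'))   # per-part checksums for multipart uploads, if present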

Watch the Demo
Here’s a demo (first shown at re:Invent 2021) of this new feature in action:

Available Now
The four additional checksums are now available in all commercial AWS Regions and you can start using them today at no extra charge.

Jeff;

Let’s Architect! Architecting for Security

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-security/

At AWS, security is “job zero” for every employee—it’s even more important than any number one priority. In this Let’s Architect! post, we’ve collected security content to help you protect data, manage access, protect networks and applications, detect and monitor threats, and ensure privacy and compliance.

Managing temporary elevated access to your AWS environment

One challenge many organizations face is maintaining a solid security governance across AWS accounts.

This Security Blog post provides a practical approach to temporarily elevating access for specific users. For example, imagine a developer needs to access a resource in the production environment. Instead of providing them an account with standing access to production, you elevate their access for a short period of time. The following diagram shows the few steps needed to temporarily elevate access for a user.

This diagram shows the few steps needed to temporarily elevate access for a user

Security should start left: The problem with shift left

You already know security is job zero at AWS. But it’s not just a technology challenge. The gaps between security, operations, and development cycles are widening. To close these gaps, teams must have real-time visibility and control over their tools, processes, and practices to prevent security breaches.

This re:Invent session shows how establishing relationships, empathy, and understanding between development and operations teams early in the development process helps you maintain the visibility and control you need to keep your applications secure.

Empowering developers means shifting security left and presenting security issues as early as possible in your process

AWS Security Reference Architecture: Visualize your security

Securing a workload in the cloud can be tough; almost every workload is unique and has different requirements. This re:Invent video shows you how AWS can simplify the security of your workloads, no matter their complexity.

You’ll learn how various services work together and how you can deploy them to meet your security needs. You’ll also see how the AWS Security Reference Architecture can automate common security tasks and expand your security practices for the future. The following diagram shows how AWS Security Reference Architecture provides guidelines for securing your workloads in multiple AWS Regions and accounts.

The AWS Security Reference Architecture provides guidelines for securing your workloads in multiple AWS Regions and accounts

Network security for serverless workloads

Serverless technologies can improve your security posture. You can build layers of control and security with AWS managed and abstracted services, meaning that you don’t have to do as much security work and can focus on building your system.

This video from re:Invent provides serverless strategies to consider to gain greater control of networking security. You will learn patterns to implement security at the edge, as well as options for controlling an AWS Lambda function’s network traffic. These strategies are designed to securely access resources (for example, databases) placed in a virtual private cloud (VPC), as well as resources outside of a VPC. The following screenshot shows how Lambda functions can run in a VPC and connect to services like Amazon DynamoDB using VPC gateway endpoints.

Lambda functions can run in a VPC and connect to services like Amazon DynamoDB using VPC gateway endpoints

See you next time!

Thanks for reading! If you’re looking for more ways to architect your workload for security, check out Best Practices for Security, Identity, & Compliance in the AWS Architecture Center.

See you in a couple of weeks when we discuss the best tools offered by AWS for software architects!


What is cryptographic computing? A conversation with two AWS experts

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/a-conversation-about-cryptographic-computing-at-aws/

Joan Feigenbaum
Amazon Scholar, AWS Cryptography
Bill Horne
Principal Product Manager, AWS Cryptography

AWS Cryptography tools and services use a wide range of encryption and storage technologies that can help customers protect their data both at rest and in transit. In some instances, customers also require protection of their data even while it is in use. To address these needs, Amazon Web Services (AWS) is developing new techniques for cryptographic computing, a set of technologies that allow computations to be performed on encrypted data, so that sensitive data is never exposed. This foundation is used to help protect the privacy and intellectual property of data owners, data users, and other parties involved in machine learning activities.

We recently spoke to Bill Horne, Principal Product Manager in AWS Cryptography, and Joan Feigenbaum, Amazon Scholar in AWS Cryptography, about their experiences with cryptographic computing, why it’s such an important topic, and how AWS is addressing it.

Tell me about yourselves: what made you decide to work in cryptographic computing? And, why did you come to AWS to do cryptographic computing?

Joan: I’m a computer science professor at Yale and an Amazon Scholar. I started graduate school at Stanford in Computer Science in the fall of 1981. Before that, I was an undergraduate math major at Harvard. Almost from the beginning, I have been interested in what has now come to be called cryptographic computing. During the fall of 1982, Andrew Yao, who was my PhD advisor, published a paper entitled “Protocols for Secure Computation,” which introduced the millionaire’s problem: Two millionaires want to run a protocol at the end of which they will know which one of them has more millions, but not know exactly how many millions the other one has. If you dig deeper, you’ll find a few antecedents, but that’s the paper that’s usually credited with launching the field of cryptographic computing. Over the course of my 40 years as a computer scientist, I’ve worked in many different areas of computer science research, but I’ve always come back to cryptographic computing, because it’s absolutely fascinating and has many practical applications.

Bill: I originally got my PhD in Machine Learning in 1993, but I switched over to security in the late 1990s. I’ve spent most of my career in industrial research laboratories, where I was always interested in how to bring technology out of the lab and get it into real products. There’s a lot of interest from customers right now around cryptographic computing, and so I think that we’re at a really interesting point in time, where this could take off in the next few years. Being a part of something like this is really exciting.

What exactly is cryptographic computing?

Bill: Cryptographic computing is not a single thing. Rather, it is a methodology for protecting data in use—a set of techniques for doing computation over sensitive data without revealing that data to other parties. For example, if you are a financial services company, you might want to work with other financial services companies to develop machine learning models for credit card fraud detection. You might need to use sensitive data about your customers as training data for your models, but you don’t want to share your customer data in plaintext form with the other companies, and vice versa. Cryptographic computing gives organizations a way to train models collaboratively without exposing plaintext data about their customers to each other, or even to an intermediate third party such as a cloud provider like AWS.

Why is it challenging to protect data in use? How does cryptographic computing help with this challenge?

Bill: Protecting data-at-rest and data-in-transit using cryptography is very well understood.

Protecting data-in-use is a little trickier. When we say we are protecting data-in-use, we mean protecting it while we are doing computation on it. One way to do that is with other types of security mechanisms besides encryption. Specifically, we can use isolation and access control mechanisms to tightly control who or what can gain access to those computations. The level of control can vary greatly from standard virtual machine isolation, all the way down to isolated, hardened, and constrained enclaves backed by a combination of software and specialized hardware. The data is decrypted and processed within the enclave, and is inaccessible to any external code and processes. AWS offers Nitro Enclaves, which is a very tightly controlled environment that uses this kind of approach.

Cryptographic computing offers a completely different approach to protecting data-in-use. Instead of using isolation and access control, data is always cryptographically protected, and the processing happens directly on the protected data. The hardware doing the computation doesn’t even have access to the cryptographic keys used to encrypt the data, so it is computationally intractable for that hardware, any software running on that hardware, or any person who has access to that hardware to learn anything about your data. In fact, you arguably don’t even need isolation and access control if you are using cryptographic computing, since nothing can be learned by viewing the computation.

What are some cryptographic computing techniques and how do they work?

Bill: Two applicable fundamental cryptographic computing techniques are homomorphic encryption and secure multi-party computation. Homomorphic encryption allows for computation on encrypted data. Basically, the idea is that there are special cryptosystems that support basic mathematical operations like addition and multiplication which work on encrypted data. From those simple operations, you can form complex circuits to implement any function you want.

Secure multi-party computation is a very different paradigm. In secure multi-party computation, you have two or more parties who want to jointly compute some function, but they don’t want to reveal their data to each other. An example might be that you have a list of customers and I have a list of customers, and we want to find out what customers we have in common without revealing anything else about our data to each other, in order to protect customer privacy. That’s a special kind of multi-party computation called private set intersection (PSI).

Joan: To add some detail to what Bill said, homomorphic encryption was heavily influenced by a 2009 breakthrough by Craig Gentry, who is now a Research Fellow at the Algorand Foundation. If a customer has dataset X, needs f(X), and is willing to reveal X to the server, they upload X and have the cloud service compute Y = f(X) and return Y. If they want (or are required by law or policy) to hide X from the cloud provider, they homomorphically encrypt X on the client side to get X', upload it, receive an encrypted result Y', and decrypt Y' (again on the client side) to get Y. The confidential data, the result, and the cryptographic keys all remain on the client side.
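
To make that client-side encrypt, server-side compute, client-side decrypt split concrete, here is a minimal sketch using the open-source python-paillier (phe) package. Paillier is only partially homomorphic (it supports adding ciphertexts and multiplying a ciphertext by a plaintext constant, not arbitrary circuits), so treat this as an illustration of the general idea rather than the cryptosystems or code used in the AWS work discussed here; the salary values are made up for the example.

```python
# A toy illustration of outsourced computation on encrypted data.
# Requires: pip install phe   (the python-paillier library)
from phe import paillier

# --- client side: generate keys and encrypt the sensitive inputs ---
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
salaries = [52_000, 61_500, 47_250]                     # never sent in plaintext
encrypted = [public_key.encrypt(s) for s in salaries]

# --- server side: sees only ciphertexts and the public key ---
# Paillier is additively homomorphic, so the server can add ciphertexts
# and multiply a ciphertext by a plaintext constant without decrypting.
enc_total = encrypted[0] + encrypted[1] + encrypted[2]  # Enc(a) + Enc(b) = Enc(a + b)
enc_double_first = encrypted[0] * 2                     # Enc(a) * k = Enc(a * k)

# --- client side: only the private-key holder can read the results ---
print("total payroll:", private_key.decrypt(enc_total))                # 160750
print("doubled first salary:", private_key.decrypt(enc_double_first))  # 104000
```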

In secure multi-party computation, there are n ≥ 2 parties that have datasets X1, X2, …, Xn, and they wish to compute Y = f(X1, X2, …, Xn). No party wants to reveal to the others anything about their own data that isn't implied by the result Y. They execute an n-party protocol in which they exchange messages and perform local computations; at the end, all parties know the result, but none has obtained additional information about the others' inputs or the intermediate results of the (often multi-round) distributed computation. Multi-party computation might use encryption, but often it uses other data-hiding techniques such as secret sharing.
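
As a toy illustration of the secret-sharing idea Joan mentions, the sketch below computes a joint sum of three parties' private values using additive secret sharing over a finite field: each party splits its value into random shares that individually reveal nothing, and only the recombined total is learned. Real MPC protocols add secure channels, malicious-security checks, and support for richer functions; this is only a sketch under those simplifying assumptions.

```python
# Toy additive secret sharing: n parties jointly compute the sum of their
# private inputs without any party revealing its input to the others.
import secrets

PRIME = 2**61 - 1  # all arithmetic is modulo a public prime


def share(value, n_parties):
    """Split `value` into n random shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


private_inputs = [1200, 3400, 560]     # each value is known only to its owner
n = len(private_inputs)

# Each owner shares its input once; share j of input i goes to party j.
all_shares = [share(x, n) for x in private_inputs]
shares_held = [[all_shares[i][j] for i in range(n)] for j in range(n)]
# In a real protocol, party j would only ever see shares_held[j].

# Each party publishes only the sum of the shares it holds...
partial_sums = [sum(row) % PRIME for row in shares_held]

# ...and the total of those partial sums is the joint result.
print("joint sum:", sum(partial_sums) % PRIME)   # 5160, with no input revealed
```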

Cryptographic computing seems to be appearing in the popular technical press a lot right now and AWS is leading work in this area. Why is this a hot topic right now?

Joan: There’s strong motivation to deploy this stuff now, because cloud computing has become a big part of our tech economy and a big part of our information infrastructure. Parties that might have previously managed compute environments on-premises where data privacy is easier to reason about are now choosing third-party cloud providers to provide this compute environment. Data privacy is harder to reason about in the cloud, so they’re looking for techniques where they don’t have to completely rely on their cloud provider for data privacy. There’s a tremendous amount of confidential data—in health care, medical research, finance, government, education, and so on—data which organizations want to use in the cloud to take advantage of state-of-the-art computational techniques that are hard to implement in-house. That’s exactly what cryptographic computing is intended for: using data without revealing it.

Bill: Data privacy has become one of the most important issues in security. There is clearly a lot of regulatory pressure right now to protect the privacy of individuals. But progressive companies are actually trying to go above and beyond what they are legally required to do. Cryptographic computing offers customers a compelling set of new tools to protect data throughout its lifecycle without exposing it to unauthorized parties.

Also, there’s a lot of hype right now about homomorphic encryption that’s driving a lot of interest in the popular tech press. But I don’t think people fully understand its power, applicability, or limitations. We’re starting to see homomorphic encryption being used in practice for some small-scale applications, but we are just at the beginning of what homomorphic encryption can offer. AWS is actively exploring ideas and finding new opportunities to solve customer problems with this technology.

Can you talk about the research that’s been done at AWS in cryptographic computing?

Joan: We researched and published on a novel use of homomorphic encryption applied to a popular machine learning algorithm called XGBoost. You have an XGBoost model that has been trained in the standard way, and a large set of users that want to query that model. We developed PPXGBoost inference (where the “PP” stands for privacy preserving). Each user stores a personalized, encrypted version of the model on a remote server, and then submits encrypted queries to that server. The user receives encrypted inferences, which are decrypted and stored on a personal device. For example, imagine a healthcare application, where over time the device uses these inferences to build up a health profile that is stored locally. Note that the user never reveals any personal health data to the server, because the submitted queries are all encrypted.

There’s another application our colleague Eric Crockett, Sr. Applied Scientist, published a paper about. It deals with a standard machine-learning technique called logistic regression. Crockett developed HELR, an application that trains logistic-regression models on homomorphically encrypted data.

Both papers are available on the AWS Cryptographic Computing webpage. The HELR code and PPXGBoost code are available there as well. You can download that code, experiment with it, and use it in your applications.

What are you working on right now that you’re excited about?

Bill: We’ve been talking with a lot of internal and external customers about their data protection problems, and have identified a number of areas where cryptographic computing offers solutions. We see a lot of interest in collaborative data analysis using secure multi-party computation. Customers want to jointly compute all sorts of functions and perform analytics without revealing their data to each other. We see interest in everything from simple comparisons of data sets through jointly training machine learning models.

Joan: To add to what Bill said: We’re exploring two use cases in which cryptographic computing (in particular, secure multi-party computation and homomorphic encryption) can be applied to help solve customers’ security and privacy challenges at scale. The first use case is privacy-preserving federated learning, and the second is private set intersection (PSI).

Federated learning makes it possible to take advantage of machine learning while minimizing the need to collect user data. Imagine you have a server and a large set of clients. The server has constructed a model and pushed it out to the clients for use on local devices; one typical use case is voice recognition. As clients use the model, they make personalized updates that improve it. Some of the improvements made locally in my environment could also be relevant in millions of other users’ environments. The server gathers up all these local improvements and aggregates them into one improvement to the global model; then the next time it pushes out a new model to existing and new clients, it has an improved model to push out. To accomplish privacy-preserving federated learning, one uses cryptographic computing techniques to ensure that individual users’ local improvements are never revealed to the server or to other users in the process of computing a global improvement.
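
The sketch below illustrates one of the simplest ideas used for this, pairwise additive masking, which is the core trick behind secure-aggregation protocols: every pair of clients agrees on a random mask, one adds it and the other subtracts it, so the server sees only masked updates, yet the masks cancel in the aggregate. It is a toy with hypothetical client update vectors; production protocols also handle client dropouts, integer encoding, and cryptographic key agreement.

```python
# Toy secure aggregation: the server learns only the sum of client model
# updates, never any individual client's update.
import random

NUM_CLIENTS = 4
DIM = 3  # length of each (hypothetical) model-update vector

# Each client's local improvement to the model (kept private).
updates = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_CLIENTS)]

# Every pair (i, j) with i < j agrees on a shared random mask vector.
pair_masks = {
    (i, j): [random.uniform(-100, 100) for _ in range(DIM)]
    for i in range(NUM_CLIENTS) for j in range(i + 1, NUM_CLIENTS)
}

def masked_update(client, update):
    """Client adds masks shared with higher-numbered peers and subtracts
    masks shared with lower-numbered peers; the masks cancel in the sum."""
    out = list(update)
    for (i, j), mask in pair_masks.items():
        sign = 1 if client == i else -1 if client == j else 0
        for d in range(DIM):
            out[d] += sign * mask[d]
    return out

# The server only ever receives masked updates...
received = [masked_update(c, u) for c, u in enumerate(updates)]
aggregate = [sum(vec[d] for vec in received) for d in range(DIM)]

# ...which nevertheless sum to the true aggregate (up to float rounding).
true_sum = [sum(u[d] for u in updates) for d in range(DIM)]
print("aggregate:", aggregate)
print("true sum: ", true_sum)
```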

Using PSI, two or more AWS customers who have related datasets can compute the intersection of their datasets—that is, the data elements that they all have in common—while hiding crucial information about the data elements that are not common to all of them. PSI is a key enabler in several business use cases that we have heard about from customers, including data enrichment, advertising, and healthcare.

This post is meant to introduce some of the cryptographic computing techniques and novel use cases AWS is exploring. If you are serious about exploring this approach, we encourage you to reach out to us and discuss what problems you are trying to solve and whether cryptographic computing can help you. Learn more and get in touch with us at our Cryptographic Computing webpage or send us an email at [email protected]

Want more AWS Security news? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Senior Digital Strategist at AWS, focused on marketing, encryption, and emerging areas of cybersecurity. She has worked to drive large scale marketing and content initiatives forward in a variety of regulated industries. She is passionate about helping customers learn best practices to secure their AWS cloud environment so they can innovate faster on behalf of their business.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

AWS achieves FedRAMP P-ATO for 15 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Alexis Robinson original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-15-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

AWS is pleased to announce that 15 additional AWS services have achieved Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB).

AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. AWS now offers 111 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 91 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

Figure 1. Newly authorized services list

Descriptions of AWS Services now in FedRAMP P-ATO

These additional AWS services now provide the following capabilities for the U.S. federal government and customers with regulated workloads:

  • Amazon Detective simplifies analyzing, investigating, and quickly identifying the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources, and uses machine learning, statistical analysis, and graph theory to build a linked set of data enabling you to easily conduct faster and more efficient security investigations.
  • Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.
  • Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server, and delivers a wide range of data access, data management, and administrative capabilities.
  • Amazon Kendra is an intelligent search service powered by machine learning (ML).
  • Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service.
  • Amazon Lex is an AWS service for building conversational interfaces into applications using voice and text.
  • Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
  • Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies setting up and operating message brokers on AWS.
  • AWS CloudHSM is a cloud-based hardware security module (HSM) that lets you generate and use your own encryption keys on the AWS Cloud.
  • AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and Cloud Map maintains the updated location of these dynamically changing resources.
  • AWS Glue DataBrew is a new visual data preparation tool that lets data analysts and data scientists quickly clean and normalize data to prepare it for analytics and machine learning.
  • AWS Outposts (hardware excluded) is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables you to build and run applications on premises using the same programming interfaces used in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs.
  • AWS Resource Groups lets you organize your AWS resources, and manage and automate tasks for large numbers of resources at the same time.
  • AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot long ruggedized shipping container pulled by a semi-trailer truck. After an initial assessment, a Snowmobile is transported to your data center, and AWS personnel configure it so it can be accessed as a network storage target. After you load your data, the Snowmobile is driven back to an AWS regional data center, where AWS imports the data into Amazon Simple Storage Service (Amazon S3).
  • AWS Transfer Family securely scales your recurring business-to-business file transfers to Amazon S3 and Amazon Elastic File System (Amazon EFS) using SFTP, FTPS, and FTP protocols.

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by Region

Service FedRAMP Moderate in AWS US East/West FedRAMP High in AWS GovCloud (US)
Amazon Detective
Amazon FSx for Lustre
Amazon FSx for Windows File Server
Amazon Kendra
Amazon Keyspaces (for Apache Cassandra)
Amazon Lex
Amazon Macie
Amazon MQ
AWS CloudHSM
AWS Cloud Map
AWS Glue DataBrew
AWS Outposts
AWS Resource Groups
AWS Snowmobile
AWS Transfer Family

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS Account Team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

Want more AWS Security news? Follow us on Twitter.

Author

Alexis Robinson

Alexis is the Head of the U.S. Government Security and Compliance Program for AWS. For over 10 years, she has served federal government clients advising on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment including cloud services applicable to AWS East/West and AWS GovCloud (US) Regions.

Fine-tune and optimize AWS WAF Bot Control mitigation capability

Post Syndicated from Dmitriy Novikov original https://aws.amazon.com/blogs/security/fine-tune-and-optimize-aws-waf-bot-control-mitigation-capability/

Introduction

A few years ago at the Sydney Summit, I had an excellent question from one of our attendees. She asked me to help her design a cost-effective, reliable, and not overcomplicated solution to protect her web-facing resources on Amazon Web Services (AWS) against simple bots. I remember the occasion because, with the release of AWS WAF Bot Control, I can now address that question with an elegant solution. With Bot Control, protection is now a matter of switching the feature on to start filtering out common and pervasive bots that generate over 50 percent of the traffic against typical web applications.

Reduce Unwanted Traffic on Your Website with New AWS WAF Bot Control introduced AWS WAF Bot Control and some of its capabilities. That blog post covers everything you need to know about where to start and which elements it uses for configuration and protection. This post unpacks closely related functionality, and shares key considerations, best practices, and how to customize Bot Control for common use cases. Use cases covered include:

  • Limiting the crawling rate of a bot leveraging labels and AWS WAF response headers
  • Enabling Bot Control only for certain parts of your application with scope down statements
  • Prioritizing verified bots or allowing only specific ones using labels
  • Inserting custom headers into requests from certain bots based on their labels

Key elements of AWS WAF Bot Control fine-tuning

Before moving on to precise configuration of the bot mitigation capability, it is important to understand the components that go into the process.

Labels

Although labels aren’t unique to Bot Control, the feature takes advantage of them, and many configurations use labels as the main input. A label is a string value that is applied to a request based on matching a rule statement. One way of thinking about them is as tags that belong to the specific request. The request acquires them after being processed by a rule statement, and can be used as identification of similar requests in all subsequent rules within the same web ACL. Labels enable you to act on a group of requests that meets specific criteria. That’s because the subsequent rules in the same web ACL have access to the generated labels and can match against them.

Labels are more than just a mechanism for matching a rule. Labels are independent of a rule’s action; they can be generated for Block, Allow, and Count. That opens up opportunities to filter or construct queries against records in AWS WAF logs based on labels, and so implement sophisticated analytics.

A label is a string made up of a prefix, optional namespace, and a name delimited by a colon. For example: prefix:[namespace:]name. The prefix is automatically added by AWS WAF.

AWS WAF Bot Control includes various labels and namespaces:

  • bot:category: Type of bot. For example, search_engine, content_fetcher
  • bot:name: Name of a specific bot (if available). For example, scrapy, mauibot, crawler4j
  • bot:verified: Verified bots are generally safe for web applications. For example, googlebot and linkedin. Bot Control performs validation to confirm that such bots come from the source that they claim, using the bot confirmation detection logic described later in this section.

    By default, verified bots are not blocked by Bot Control, but you can use a label to block them with a custom rule.

  • signal: Attributes of the request indicate bot activity. For example, non_browser_user_agent, automated_browser

These labels are added through managed bot detection logic, and Bot Control uses them to perform the following:

Known bot categorization: Comparing the request user-agent to known bots to categorize them and allow customers to block by category. Bots are categorized by their function, such as scrapers, search engines, and social media.

Bot confirmation: Most respectable bots provide a way to validate beyond the user-agent, typically by doing a reverse DNS lookup of the IP address to confirm the validity of domain and host names. These automatic checks will help you to ensure that only legitimate bots are allowed, and provide a signal to flag requests to downstream systems for bot detection.

Header validation: Request headers are validated against a series of checks that look for missing, malformed, or invalid headers.

Browser signature matching: TLS handshake data and request headers can be deconstructed and partially recombined to create a browser signature that identifies browser and OS combinations. This signature can be validated against the user-agent to confirm they match, and checked against lists of known-good and known-bad browser signatures.

Below are a few examples of labels that Bot Control generates. You can obtain the full list by calling the DescribeManagedRuleGroup API; a small sketch of that call follows the examples.

awswaf:managed:aws:bot-control:bot:category:search_engine
awswaf:managed:aws:bot-control:bot:name:scrapy
awswaf:managed:aws:bot-control:bot:verified
awswaf:managed:aws:bot-control:signal:non_browser_user_agent
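
Here is a minimal boto3 sketch of that call. It assumes the Bot Control managed rule group name AWSManagedRulesBotControlRuleSet and a REGIONAL scope; use CLOUDFRONT as the scope (and make the call in US East (N. Virginia)) for web ACLs associated with CloudFront.

```python
# List the labels that the Bot Control managed rule group can add to requests.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.describe_managed_rule_group(
    VendorName="AWS",
    Name="AWSManagedRulesBotControlRuleSet",
    Scope="REGIONAL",           # use "CLOUDFRONT" for CloudFront web ACLs
)

print("Label namespace:", response["LabelNamespace"])
for label in response["AvailableLabels"]:
    print(label["Name"])
```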

Best practice to start with Bot Control

Although you can enable Bot Control and start protecting your web resources with the default Block action right away, a best practice is to switch all rules in the rule group to the Count action at the beginning. This accomplishes the following:

  • Avoids false positives with requests that might match one of the rules in Bot Control but still be a valid bot for your resource.
  • Allows you to accumulate enough data points, in the form of labels and the actions taken on requests that carry them, when requests match rules in Bot Control. That enables you to make informed decisions about constructing rules for each desired bot or category, and about when it’s appropriate to switch the rules back to their default action.

Labels can be looked up in Amazon CloudWatch metrics and AWS WAF logs, and as soon as you have them, you can start planning whether exceptions or custom rules are needed to cater for a specific scenario. This blog post explores examples of such use cases in the Common use cases section below.

Additionally, because AWS WAF processes rules in sequential order, you should consider where the Bot Control rule group is located in your web ACL. To filter out requests that you confidently consider unwanted, you can place AWS Managed Rules rule groups, such as the Amazon IP reputation list, before the Bot Control rule group in the evaluation order. This decreases the number of requests processed by Bot Control, and makes it more cost effective. At the same time, Bot Control should be placed early enough in the rule order to:

  • Enable label generation for downstream rules. That also provides higher visibility as a side benefit.
  • Decrease false positives by not blocking desired bots before they reach Bot Control.

AWS WAF Bot Control fine-tuning wouldn’t be complete and configurable without a set of recently released features and capabilities of AWS WAF. Let’s unpack them.

How to work with labels in CloudWatch metrics and AWS WAF logs

Generated labels emit CloudWatch metrics and are recorded in AWS WAF logs. This enables you to see which bots and categories hit your website, along with the labels associated with them, which you can then use for fine-tuning.

CloudWatch metrics are generated with the following dimensions; a boto3 sketch for listing these metrics programmatically follows the list.

  • The Region dimension is available for all Regions, except for web ACLs associated with Amazon CloudFront; in that case, metrics are reported in the US East (N. Virginia) Region.
  • WebACL dimension is the name of the WebACL
  • Namespace is the fully qualified namespace, including the prefix
  • LabelValue is the label name
  • Action is the terminating action (for example, Allow, Block, Count)
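
The following sketch lists the AWS WAF metrics and their dimensions in your account with boto3, which is a quick way to see which label namespaces and values are emitting metrics. It assumes the metrics appear under the AWS/WAFV2 CloudWatch namespace and that the dimensions are named as described above; filter the output for the dimension names you care about.

```python
# Enumerate AWS WAF (WAFV2) CloudWatch metrics and their dimensions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/WAFV2"):
    for metric in page["Metrics"]:
        dims = {d["Name"]: d["Value"] for d in metric["Dimensions"]}
        # Label-related metrics carry the namespace and label dimensions described above.
        print(metric["MetricName"], dims)
```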

AWS WAF includes a shortcut to associated CloudWatch metrics at the top of the Overview page, as shown in Figure 1.

Figure 1: Title and description of the chart in AWS WAF with a shortcut to CloudWatch

Alternatively, you can find them in the WAFV2 service category of the CloudWatch Metrics section.

CloudWatch displays the generated labels and their volume across dates and times, so you can evaluate them and make informed decisions about how to structure rules or address false positives. Figure 2 illustrates which labels were generated for requests from bots that hit my website. In this example, only a couple of explicit Allow actions were configured, so most of the requests were blocked. The top section of Figure 2 shows the load from two selected labels.

Figure 2: WAFV2 CloudWatch metrics for generated Label Namespaces

In AWS WAF logs, generated labels are included in an array under the field labels. Figure 3 shows an example request with the labels array at the bottom.

Figure 3: An example of an AWS WAF log record

This example shows three labels generated for the same request. The bot name label uptimerobot appears alongside the monitoring category label, and having both gives you flexibility in how you configure rules based on them: you can act on the whole category, or be laser-focused by using the label of the specific bot. You will see how and why that matters later in this blog post. The third label, non_browser_user_agent, is a signal label indicating that the request’s attributes (in this case the user agent) point to bot activity. In conjunction with labels, you can forward such requests with extra headers and construct additional scanning for them in your application.

Scope-down statements

Given that Bot Control is a premium feature and a paid AWS Managed Rules rule group, the ability to keep your costs under control is crucial. The scope-down statement allows you to optimize for cost by filtering out any traffic that doesn’t require inspection by Bot Control.

To address this goal, you can use scope-down statements, which can be applied to two broad scenarios.

You can exclude certain parts of your resource from scanning by Bot Control. Think of the parts of your website that you don’t mind being accessed by bots, typically static content such as images and CSS files, while leaving protection on everything else, such as APIs and login pages. You can also exclude IP ranges that you consider safe from bot management, for example, traffic that’s known to come from your organization, or viewers that belong to your partners or customers.

Alternatively, you can look at this from a different angle, and only apply bot management to a small section of your resources. For example, you can use Bot Control to protect a login page, or certain sensitive APIs, leaving everything else outside of your bot management.
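
As a hedged illustration of the first scenario, here is what a Bot Control rule with such a scope-down statement might look like in JSON form, expressed as a Python dict that you can print or adapt for the wafv2 APIs. The rule name, priority, and metric name are placeholders; the example excludes requests whose URI path starts with /images/ (URI paths include the leading slash), and if you submit it through boto3 directly, SearchString is expected as bytes rather than a string.

```python
# Bot Control managed rule group with a scope-down statement that skips static content.
import json

bot_control_rule = {
    "Name": "AWS-AWSManagedRulesBotControlRuleSet",
    "Priority": 5,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            # Only inspect requests that do NOT start with "/images/".
            "ScopeDownStatement": {
                "NotStatement": {
                    "Statement": {
                        "ByteMatchStatement": {
                            "FieldToMatch": {"UriPath": {}},
                            "PositionalConstraint": "STARTS_WITH",
                            "SearchString": "/images/",
                            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                        }
                    }
                }
            },
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BotControl",
    },
}

print(json.dumps(bot_control_rule, indent=2))
```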

With all of these tools in our toolkit, let’s put them into perspective and dive deep into use cases and scenarios.

Common use cases for AWS WAF Bot Control fine-tuning

There are several methods for fine tuning Bot Control to better meet your needs. In this section, you’ll see some of the methods you can use.

Limit the crawling rate

In some cases, it is necessary to allow bots access to your websites. A good example is search engine bots, which crawl the web and create an index. If search engine optimization is important for your business, but you notice excessive load from too many requests hitting your web resource, you might face a dilemma: how do you slow crawlers down without unnecessarily blocking them? You can solve this with a combination of Bot Control detection logic and a rate-based rule with a response status code and header that communicate your intention back to crawlers. Most crawlers that are deemed useful have a built-in mechanism to decrease their crawl rate when they receive a response indicating increased load.

To customize bot mitigation and set the crawl rate below limits that might negatively affect your web resource

  1. In the AWS WAF console, select Web ACLs from the left menu. Open your web ACL or follow the steps to create a web ACL.
  2. Choose the Rules tab and select Add rules. Select Add managed rule groups and proceed with the following settings:
    1. In the AWS managed rule groups section, select the switch Add to web ACL to enable Bot Control in the web ACL. This also gives you labels that you can use in other rules later in the evaluation process inside the web ACL.
    2. Select Add rules and choose Save
  3. In the same web ACL, select Add rules menu and select Add my own rules and rule groups.
  4. Using the provided Rule builder, configure the following settings:
    1. Enter a preferred name for the rule and select Rate-based rule.
    2. Enter a preferred rate limit for the rule. For example, 500.

      Note: The rate limit is the maximum number of requests allowed from a single IP address in a five-minute period.

    3. Select Only consider requests that match the criteria in a rule statement to enable the scope-down statement to narrow the scope of the requests that the rule evaluates.
    4. Under the Inspect menu, select Has a label to focus only on certain types of bots.
    5. In the Match key field, enter one of the following labels to match based on broad categories, such as verified bots or all bots identified as scraping, as illustrated in Figure 4:

      awswaf:managed:aws:bot-control:bot:verified
      awswaf:managed:aws:bot-control:bot:category:scraping_framework

    6. Alternatively, you can narrow down to a specific bot using its label:

      awswaf:managed:aws:bot-control:bot:name:Googlebot

      Figure 4: Label match rule statement in a rule builder with a specific match key

  5. In the Action section, configure the following settings:
    1. Select Custom response to enable it.
    2. Enter 429 as the Response code to indicate and communicate back to the bot that it has sent too many requests in a given amount of time.
    3. Select Add new custom header and enter Retry-After in the Key field and a value in seconds for the Value field. The value indicates how many seconds a bot must wait before making a new request.
  6. Select Add rule.
  7. It’s important to place the rule after the Bot Control rule group inside your web ACL, so that the label is available in this custom rule.
    1. In the Set rule priority section, check that the new rate-based rule is under the existing Bot Control rule set and if not, choose the newly created rule and select Move up or Move down until the rule is located after it.
    2. Select Save.
Figure 5: AWS WAF rule action with a custom response code

With the preceding configuration, Bot Control sets required labels, which you then use in the scope-down statement in a rate-based rule to not only establish a ceiling of how many requests you will allow from specific bots, but also communicate to bots when their crawling rate is too high. If they don’t respect the response and lower their rate, the rule will temporarily block them, protecting your web resource from being overwhelmed.

Note: If you use a category label, such as scraping_framework, all bots that have that label will be counted by your rate-based rule. To avoid unintentional blocking of bots that use the same label, you can either narrow down to a specific bot with a precise bot:name: label, or select a higher rate limit to allow a greater margin for the aggregate.
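
For reference, here is a sketch of what the rate-based rule built in the preceding procedure might look like in JSON form, expressed as a Python dict. The rule name, priority, limit, label key, and Retry-After value are placeholders drawn from the example above; adjust them for your workload.

```python
# Rate-based rule that slows down crawlers carrying a Bot Control label.
import json

crawl_rate_rule = {
    "Name": "limit-crawl-rate",
    "Priority": 10,  # must evaluate after the Bot Control rule group
    "Statement": {
        "RateBasedStatement": {
            "Limit": 500,                 # requests per 5 minutes, per IP
            "AggregateKeyType": "IP",
            # Only count requests that Bot Control labeled as a scraping framework.
            "ScopeDownStatement": {
                "LabelMatchStatement": {
                    "Scope": "LABEL",
                    "Key": "awswaf:managed:aws:bot-control:bot:category:scraping_framework",
                }
            },
        }
    },
    "Action": {
        "Block": {
            "CustomResponse": {
                "ResponseCode": 429,      # Too Many Requests
                "ResponseHeaders": [{"Name": "Retry-After", "Value": "900"}],
            }
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "limit-crawl-rate",
    },
}

print(json.dumps(crawl_rate_rule, indent=2))
```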

Enable Bot Control only for certain parts of your application

As mentioned earlier, excluding parts of your web resource from Bot Control protection is a mechanism to reduce the cost of running the feature by focusing only on a subset of the requests reaching a resource. There are a few common scenarios that take advantage of this approach.

To run Bot Control only on dynamic parts of your traffic

  1. In the AWS WAF console, select Web ACLs from the left menu. Open a web ACL that you have, or follow the steps to create a web ACL.
  2. Choose the Rules tab and select Add rules. Then select Add managed rule groups to proceed with the following settings:
    1. In the AWS managed rule groups section, select Add to web ACL to enable Bot Control in the web ACL.
    2. Select Edit.
  3. Select Scope-down statement – optional and select Enable Scope-down statement.
  4. In If a request, select doesn’t match the statement (NOT).
  5. In the Statement section, configure the following settings:
    1. Choose URI path in the Inspect field.
    2. For the Match type, choose Starts with string.
    3. Depending on the structure of your resource, you can enter a whole URI string, such as images/, in the String to match field. Requests whose URI path starts with that string will be excluded from Bot Control evaluation.
    Figure 6: A scope-down statement to match based on a string that a URI path starts with

  6. Select Save rule.

An alternative to using string matching

As an alternative to a string match type, you can use a regex pattern set. If you don’t have a regex pattern set, create one using the following guide.

Note: This pattern matches most common file extensions associated with static files for typical web resources. You can customize the pattern set if you have different file types.

  1. Follow steps 1-4 of the previous procedure.
  2. In the Statement section, configure the following settings:
    1. Choose URI path in the Inspect field.
    2. For the Match type, choose Matches pattern from regex pattern set and select the set you created in Regex pattern set, as illustrated in Figure 7.
    3. In Regex pattern set, enter the pattern
      (?i)\.(jpe?g|gif|png|svg|ico|css|js|woff2?)$

      Figure 7: A scope-down statement to match based on a regex pattern set as part of a URI path

To run Bot Control only on the most sensitive parts of your application.

Another option is to exclude almost everything, by enabling Bot Control only on the most sensitive part of your application, for example, a login page.

Note: The actual URI path depends on the structure of your application.

  1. Inside the Scope-down statement, in the If a request menu, select matches the statement.
  2. In the Statement section:
    1. In the Inspect field, select URI path.
    2. For the Match type, select Contains string.
    3. In the String to match field, enter the string you want to match. For example, login as shown in the Figure 8.
  3. Choose Save rule.
    Figure 8: A scope-down statement to match based on a string within a URI path

To exclude more than one part of your application from Bot Control.

If you have more than one part to exclude, you can use an OR logical statement to list each part in a scope-down statement.

  1. Inside the Scope-down statement, in the If a request menu, select matches at least one of the statements (OR).
  2. In the Statement 1 section, configure the following settings:
    1. Choose URI path in the Inspect field.
    2. For the Match type choose Contains string.
    3. In the String to match field enter a preferred value. For example, login.
  3. In the Statement 2 section, configure the following settings:
    1. Choose URI path in the Inspect field.
    2. For the Match type choose Starts with string.
    3. In the String to match field enter a preferred URI value. For example, payment/.
  4. Select Save rule.

Figure 9 builds on the previous example of an exact string match by adding an OR statement to protect an API named payment.

Figure 9: A scope-down statement with OR logic for more sophisticated matching

Note: The visual editor on the console supports up to five statements. To add more, edit the JSON representation of the rule on the console or use the APIs.

Prioritize verified bots that you don’t want to block

Since verified bots aren’t blocked by default, in most cases there is no need to apply extra logic to allow them through. However, there are scenarios where other AWS WAF rules might match some aspects of requests from verified bots and block them. That can hurt some metrics for SEO, or prevent links from your website from properly propagating and displaying in social media resources. If this is important for your business, then you might want to ensure you protect verified bots by explicitly allowing them in AWS WAF.

To prioritize the verified bots category

  1. In the AWS WAF menu, select Web ACLs from the left menu. Open a web ACL that you have, or follow the steps to create a web ACL. The next steps assume you already have a Bot Control rule group enabled inside the web ACL.
  2. In the web ACL, select Add rules, and then select Add my own rules and rule groups.
  3. Using the provided Rule builder, configure the following settings:
    1. Enter a name for the rule in the Name field.
    2. Under the Inspect menu, select Has a label.
    3. In the Match key field, enter the following label to match based on the label that each verified bot has:

      awswaf:managed:aws:bot-control:bot:verified

    4. In the Action section, select Allow to confirm the action on a request match
  4. Select Add rule. It’s important to place the rule after the Bot Control rule group inside your web ACL, so that the bot:verified label is available in this custom rule. To complete this, configure the following steps:
    1. In the Set rule priority section, check that the rule you just created is listed immediately after the existing Bot Control rule set. If it’s not, choose the newly created rule and select Move up or Move down until the rule is located immediately after the existing Bot Control rule set.
    2. Select Save.
Figure 10: Label match rule statement in a Rule builder with a specific match key
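
A JSON sketch of this explicit-allow rule follows, again as a Python dict with a placeholder name, priority, and metric name.

```python
# Explicitly allow requests that Bot Control has labeled as verified bots.
import json

allow_verified_bots = {
    "Name": "allow-verified-bots",
    "Priority": 6,  # placed after the Bot Control rule group
    "Statement": {
        "LabelMatchStatement": {
            "Scope": "LABEL",
            "Key": "awswaf:managed:aws:bot-control:bot:verified",
        }
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-verified-bots",
    },
}

print(json.dumps(allow_verified_bots, indent=2))
```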

Allow a specific bot

Labels also enable you to single out a bot that you don’t want to block from a category that is blocked. One common example is third-party bots that monitor your web resources.

Let’s take a look at a scenario where UptimeRobot is used to allow a specific bot. The bot falls into a category that’s being blocked by default—bot:category:monitoring. You can either exclude the whole category, which can have a wider impact on resource than you want, or allow only UptimeRobot.

To explicitly allow a specific bot

  1. Analyze CloudWatch metrics or AWS WAF logs to find the bot that is being blocked and its associated labels. Unless you want to allow the whole category, the label you would be looking for is in the bot:name namespace. The example that follows is based on the label awswaf:managed:aws:bot-control:bot:name:uptimerobot.

    From the logs, you can also verify which category the bot belongs to, which is useful for configuring Scope-down statements.

  2. In the AWS WAF console, select Web ACLs from the left menu. Open a web ACL that you have, or follow the steps to create a web ACL. For the next steps, it’s assumed that you already have a Bot Control rule group enabled inside the webACL.
  3. Open the Bot Control rule set in the list inside your web ACL and choose Edit
  4. From the list of Rules, find CategoryMonitoring and set it to Count. This prevents the default block action for the category.
  5. Select Scope-down statement – optional and select Scope-down statement. Then configure the following settings:
    1. Inside the Scope-down statement, in the If a request menu, choose matches all the statements (AND). This will allow you to construct the complex logic necessary to block the category but allow a specified bot.
    2. In the Statement 1 section under the Inspect menu select Has a label.
    3. In the Match key field, enter the label of the broad category that you set to count in step number 4. In this example, it is monitoring. This configuration will keep other bots from the category blocked:

      awswaf:managed:aws:bot-control:bot:category:monitoring

    4. In the Statement 2 section, select Negate statement results to allow you to exclude a specific bot.
    5. Under the Inspect menu, select Has a label.
    6. In the Match key field, enter the label that will uniquely identify the bot you want to explicitly allow. In this example, it’s uptimerobot with the following label:

      awswaf:managed:aws:bot-control:bot:name:uptimerobot

  6. Choose Save rule.
Figure 11: Label match rule statement with AND logic to single out a specific bot name from a category

Note: This approach is the best practice for analyzing and, if necessary, addressing false positive situations. You can apply the exclusion to any bot, or multiple bots, based on the unique bot:name: label.
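
One way to express the same block-the-category-except-one-bot logic in JSON form, again as a hedged sketch with placeholder names and priorities, is a label-match rule placed after the Bot Control rule group (with CategoryMonitoring set to Count, as in step 4): it blocks requests that carry the monitoring category label unless they also carry the uptimerobot name label.

```python
# Block the monitoring category except for a single named bot.
import json

block_monitoring_except_uptimerobot = {
    "Name": "block-monitoring-except-uptimerobot",
    "Priority": 11,  # after the Bot Control rule group
    "Statement": {
        "AndStatement": {
            "Statements": [
                {
                    "LabelMatchStatement": {
                        "Scope": "LABEL",
                        "Key": "awswaf:managed:aws:bot-control:bot:category:monitoring",
                    }
                },
                {
                    "NotStatement": {
                        "Statement": {
                            "LabelMatchStatement": {
                                "Scope": "LABEL",
                                "Key": "awswaf:managed:aws:bot-control:bot:name:uptimerobot",
                            }
                        }
                    }
                },
            ]
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-monitoring-except-uptimerobot",
    },
}

print(json.dumps(block_monitoring_except_uptimerobot, indent=2))
```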

Insert custom headers into requests from certain bots

There are situations when you want to further process or analyze certain requests, or implement logic that is provided by downstream systems. In such cases, you can use AWS WAF Bot Control to categorize the requests. Applications later in the process can then apply the intended logic to either a broad group of requests, such as all bots within a category, or a group as narrow as a certain bot.

To insert a custom header

  1. In the AWS WAF console, select Web ACLs from the left menu. Open a web ACL that you have, or follow the steps to create a web ACL. The next steps assume that you already have Bot Control rule group enabled inside the webACL.
  2. Open the Bot Control rule set in the list inside your web ACL and choose Edit.
  3. From the list of Rules set the targeted category to Count.
  4. Choose Save rule.
  5. In the same web ACL, choose the Add rules menu and select Add my own rules and rule groups.
  6. Using the provided Rule builder, configure the following settings:
    1. Enter a name for the rule in the Name field.
    2. Under the Inspect menu, select Has a label.
    3. In the Match key field, enter the label to match either a targeted category or a bot. This example uses the security category label:
      awswaf:managed:aws:bot-control:bot:category:security
    4. In the Action section, select Count
    5. Open Custom request – optional and select Add new custom header
    6. Enter values in the Key and Value fields that correspond to the inserted custom header key-value pair that you want to use in downstream systems. The example in Figure 12 shows this configuration.
    7. Choose Add rule.

    AWS WAF prefixes your custom header names with x-amzn-waf- when it inserts them, so when you add abc-category, your downstream system sees it as x-amzn-waf-abc-category.

Figure 12: AWS WAF rule action with a custom header inserted by the service

The custom rule located after Bot Control now inserts the header into any request that Bot Control labeled as coming from bots within the security category. A security appliance downstream of AWS WAF can then act on those requests based on the header and process them accordingly.

This implementation can serve other scenarios as well. For example, you can use custom headers to signal your origin to append response headers that explicitly prevent caching of certain content, so that bots always fetch it from the origin. Inserted headers are also accessible within AWS Lambda@Edge functions and CloudFront Functions, which opens up advanced processing scenarios.
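
A sketch of such a rule in JSON form follows, using the security category label and the abc-category header from the example above; the rule name, priority, and header value are placeholders.

```python
# Count requests from the security bot category and insert a custom header
# that downstream systems can act on (AWS WAF prefixes it with x-amzn-waf-).
import json

tag_security_bots = {
    "Name": "tag-security-bots",
    "Priority": 12,  # after the Bot Control rule group
    "Statement": {
        "LabelMatchStatement": {
            "Scope": "LABEL",
            "Key": "awswaf:managed:aws:bot-control:bot:category:security",
        }
    },
    "Action": {
        "Count": {
            "CustomRequestHandling": {
                "InsertHeaders": [{"Name": "abc-category", "Value": "security"}]
            }
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "tag-security-bots",
    },
}

print(json.dumps(tag_security_bots, indent=2))
```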

Conclusion

This post describes the primary building blocks for using Bot Control, and how you can combine and customize them to address different scenarios. It’s not an exhaustive list of the use cases that Bot Control can be fine-tuned for, but hopefully the examples provided here inspire and provide you with ideas for other implementations.

If you already have AWS WAF associated with any of your web-facing resources, you can view current bot traffic estimates for your applications based on a sample of requests currently processed by the service. Visit the AWS WAF console to view the bot overview dashboard. That’s a good starting point to consider implementing learnings from this blog to improve your bot protection.

It is early days for the feature, and it will keep gaining more capabilities. Stay tuned!

 
If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on AWS WAF re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Dmitriy Novikov

In his role as Senior Solutions Architect at Amazon Web Services, Dmitriy supports AWS customers to utilize emerging technologies for business value generation. He’s a technology enthusiast who gets a charge out of finding innovative solutions to complex security challenges. He enjoys sharing his learnings on architecture and best practices in blogs, whitepapers and public speaking events. Outside work, Dmitriy has a passion for reading and triathlon.

Introducing s2n-quic, a new open-source QUIC protocol implementation in Rust

Post Syndicated from Panos Kampanakis original https://aws.amazon.com/blogs/security/introducing-s2n-quic-open-source-protocol-rust/

At Amazon Web Services (AWS), security, high performance, and strong encryption for everyone are top priorities for all our services. With these priorities in mind, less than a year after QUIC’s ratification in the Internet Engineering Task Force (IETF), we are introducing support for the QUIC protocol, which can boost performance for web applications that currently use Transport Layer Security (TLS) over Transmission Control Protocol (TCP). We are pleased to announce the availability of s2n-quic, an open-source Rust implementation of the QUIC protocol added to our set of AWS encryption open-source libraries.

What is QUIC?

QUIC is an encrypted transport protocol designed for performance and is the foundation of HTTP/3. It is specified in a set of IETF standards ratified in May 2021. QUIC protects its UDP datagrams by using encryption and authentication keys established in a TLS 1.3 handshake carried over QUIC transport. It is designed to improve upon TCP by providing better first-byte latency and handling of multiple streams, and by addressing areas such as head-of-line blocking, mobility, and data loss detection. This enables web applications to perform faster, especially over poor networks. Other potential uses include latency-sensitive connections and UDP connections currently using DTLS, which can now run faster.

Renaming s2n

AWS has long supported open-source encryption libraries; in 2015 we introduced s2n as a TLS library. The name s2n is short for signal to noise, and is a nod to the almost magical act of encryption—disguising meaningful signals, like your critical data, as seemingly random noise.

Now that AWS introduces our new QUIC open-source library, we are renaming s2n to s2n-tls. s2n-tls is an efficient TLS library built over other crypto libraries like OpenSSL libcrypto or AWS libcrypto (AWS-LC). AWS-LC is a general-purpose cryptographic library maintained by AWS which originated from the Google project BoringSSL. The s2n family of AWS encryption open-source libraries now consists of s2n-tls, s2n-quic, and s2n-bignum. s2n-bignum is a collection of bignum arithmetic routines maintained by AWS designed for crypto applications.

s2n-quic details

Similar to s2n-tls, s2n-quic is designed to be small and fast, with simplicity as a priority. It is written in Rust, so it reaps some of that language’s benefits, such as performance and thread and memory safety. s2n-quic depends on either s2n-tls or rustls for the TLS 1.3 handshake.

The main advantages of s2n-quic are:

  • Simple API. For example, a QUIC echo server-example can be built with just a few API calls.
  • Highly configurable. s2n-quic is configured with code through providers that allow an application to granularly control functionality. You can see an example of the server’s simple config in the QUIC echo server-example.
  • Extensive testing. Fuzzing (libFuzzer, American Fuzzy Lop (AFL), and honggfuzz), corpus replay unit testing of derived corpus files, testing of concrete and symbolic execution engines with bolero, and extensive integration and unit testing are used to validate the correctness of our implementation.
  • Thorough interoperability testing for every code change. There are multiple public QUIC implementations; s2n-quic is continuously tested to interoperate with many of them.
  • Verified correctness, post-quantum hybrid key exchange, and maturity for the TLS handshake when built with s2n-tls.
  • Thorough compliance coverage tracking of normative language in relevant standards.

Some important features in s2n-quic that can improve performance and connection management include CUBIC congestion controller support, packet pacing, Generic Segmentation Offload (GSO) support, Path MTU Discovery, and unique connection identifiers detached from the address.

AWS is continuing to invest in encryption optimization techniques, UDP performance improvement technologies, and formal code verification with the AWS Automated Reasoning Group to further enhance the library.

Like s2n-tls, which has already been introduced in various AWS services, AWS services that need to make use of the benefits of QUIC will begin integrating s2n-quic. QUIC is a standardized protocol which, when introduced in a service like web content delivery, can improve user experience or application performance. AWS still plans to continue support for existing protocols like TLS, so existing applications will remain interoperable. Amazon CloudFront is scheduled to be the first AWS service to integrate s2n-quic with its support for HTTP/3 in 2022.

Conclusion

If you are interested in using or contributing to s2n-quic source code or documentation, they are publicly available under the terms of the Apache Software License 2.0 from our s2n-quic GitHub repository.

If you package or distribute s2n-quic or s2n-tls, or use it as part of a large multi-user service, you may be eligible for pre-notification of security issues. Please contact [email protected].

If you discover a potential security issue in s2n-quic or s2n-tls, we ask that you notify AWS Security by using our vulnerability reporting page.

Stay tuned for more topics on s2n-quic like quantum-resistance, performance analyses, uses, and other technical details.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Panos Kampanakis

Panos has extensive experience on cybersecurity, applied cryptography, security automation, and vulnerability management. He has trained and presented on various security topics at technical events for numerous years, and also co-authored Cisco Press books, papers, standards, and research publications. He has participated in various security standards bodies to provide common interoperable protocols and languages for security information sharing, cryptography, and PKI. In his current role, Panos works with engineers and industry standards partners to provide cryptographically secure tools, protocols, and standards.

Control access to Amazon Elastic Container Service resources by using ABAC policies

Post Syndicated from Kriti Heda original https://aws.amazon.com/blogs/security/control-access-to-amazon-elastic-container-service-resources-by-using-abac-policies/

As an AWS customer, if you use multiple Amazon Elastic Container Service (Amazon ECS) services/tasks to achieve better isolation, you often have the challenge of how to manage access to these containers. In such cases, using tags can enable you to categorize these services in different ways, such as by owner or environment.

This blog post shows you how tags allow conditional access to Amazon ECS resources. You can use attribute-based access control (ABAC) policies to grant access rights to users through the use of policies that combine attributes together. ABAC can be helpful in rapidly-growing environments, where policy management can become cumbersome. This blog post uses ECS resource tags (owner tag and environment tag) as the attributes that are used to control access in the policies.

Amazon ECS resources have many attributes, such as tags, which can be used to control permissions. You can attach tags to AWS Identity and Access Management (IAM) principals, and create either a single ABAC policy, or a small set of policies for your IAM principals. These ABAC policies can be designed to allow operations when the principal tag (a tag that exists on the user or role making the call) matches the resource tag. They can be used to simplify permission management at scale. A single Amazon ECS policy can enforce permissions across a range of applications, without having to update the policy each time you create new Amazon ECS resources.

This post provides a step-by-step procedure for creating ABAC policies for controlling access to Amazon ECS containers. As the team adds ECS resources to its projects, permissions are automatically applied based on the owner tag and the environment tag. As a result, no policy update is required for each new resource. Using this approach can save time and help improve security, because it relies on granular permissions rules.

Condition key mappings

It’s important to note that each IAM permission in Amazon ECS supports different types of tagging condition keys. The following table maps each condition key to its ECS actions.

| Condition key | Description | ECS actions |
| --- | --- | --- |
| aws:RequestTag/${TagKey} | Set this tag value to require that a specific tag be used (or not used) when making an API request to create or modify a resource that allows tags. | ecs:CreateCluster, ecs:TagResource, ecs:CreateCapacityProvider |
| aws:ResourceTag/${TagKey} | Set this tag value to allow or deny user actions on resources with specific tags. | ecs:PutAttributes, ecs:StopTask, ecs:DeleteCluster, ecs:DeleteService, ecs:CreateTaskSet, ecs:DeleteAttributes, ecs:DeleteTaskSet, ecs:DeregisterContainerInstance |
| aws:RequestTag/${TagKey} and aws:ResourceTag/${TagKey} | Supports both RequestTag and ResourceTag. | ecs:CreateService, ecs:RunTask, ecs:StartTask, ecs:RegisterContainerInstance |

For a detailed guide of Amazon ECS actions and the resource types and condition keys they support, see Actions, resources, and condition keys for Amazon Elastic Container Service.

Tutorial overview

The following tutorial gives you a step-by-step process to create and test an Amazon ECS policy that allows IAM roles with principal tags to access resources with matching tags. When a principal makes a request to AWS, their permissions are granted based on whether the principal and resource tags match. This strategy allows individuals to view or edit only the ECS resources required for their jobs.

Scenario

Example Corp. has multiple Amazon ECS containers created for different applications. Each of these containers is created by a different owner within the company. The permissions for each of the Amazon ECS resources must be restricted based on the owner of the container, and also based on the environment where the action is performed.

Assume that you’re a lead developer at this company, and you’re an experienced IAM administrator. You’re familiar with creating and managing IAM users, roles, and policies. You want to ensure that the development engineering team members can access only the containers they own. You also need a strategy that will scale as your company grows.

For this scenario, you choose to use AWS resource tags and IAM role principal tags to implement an ABAC strategy for Amazon ECS resources. The condition key mappings table shows which tagging condition keys you can use in a policy for each Amazon ECS action and resource. You can define the tags in the role you created. For this scenario, you define two tags, owner and environment. These tags restrict permissions in the role based on the tags you defined.

Prerequisites

To perform the steps in this tutorial, you must already have the following:

  • An IAM role or user with sufficient privileges for services like IAM and ECS. Following security best practices, the role should have a minimum set of permissions, with additional permissions granted as necessary. You can add the AWS managed policies IAMFullAccess and AmazonECS_FullAccess to the IAM role to provide permissions for creating IAM and ECS resources.
  • An AWS account that you can sign in to as an IAM role or user.
  • Experience creating and editing IAM users, roles, and policies in the AWS Management Console. For more information, see Tutorial to create IAM resources.

Create an ABAC policy for Amazon ECS resources

After you complete the prerequisites for the tutorial, you will need to define which Amazon ECS privileges and access controls you want in place for the users, and configure the tags needed for creating the ABAC policies. This tutorial focuses on providing step-by-step instructions for creating test users, defining the ABAC policies for the Amazon ECS resources, creating a role, and defining tags for the implementation.

To create the ABAC policy

You create an ABAC policy that defines permissions based on attributes. In AWS, these attributes are called tags.

The sample ABAC policy that follows provides ECS permissions to users when the principal’s tag matches the resource tag.

Sample ABAC policy for ECS resources

The sample ECS ABAC policy that follows allows the user to perform actions on ECS resources, but only when those resources are tagged with the same key-value pairs as the principal.

  1. Download the sample ECS policy. This policy allows principals to create, read, edit, and delete resources, but only when those resources are tagged with the same key-value pairs as the principal.
  2. Use the downloaded ECS policy to create the ECS ABAC policy, and name your new policy ECSABAC policy. For more information, see Creating IAM policies.

This sample policy provides permission for each ECS action based on the condition key that action supports. See the condition key mappings table for a mapping of the ECS actions and the condition keys they support.

What does this policy do?

  • The ECSCreateCluster statement allows users to create clusters and to create and tag resources. The ECS actions in this statement support only the RequestTag condition key. The condition block, which uses the StringEquals condition operator, returns true if every tag passed in the request (owner and environment) is included in the specified list. If a tag key other than owner or environment is passed, or an incorrect value is passed for either tag, the condition returns false. The ECS actions in this statement do not require a specific resource type.
  • The ECSDeletion, ECSUpdate, and ECSDescribe statements allow users to update, delete, or list and describe ECS resources. The ECS actions in these statements support only the ResourceTag condition key. These statements return true if the specified tag keys are present on the ECS resource and their values match the principal's tags. They return false for mismatched tag keys (in this policy, the only acceptable tags are owner and environment), for incorrect owner or environment values on the ECS resource, and for any ECS action that does not support resource tagging.
  • The ECSCreateService, ECSTaskControl, and ECSRegistration statements contain ECS actions that allow users to create a service, start or run tasks, and register container instances in ECS. The ECS actions in these statements support both the RequestTag and ResourceTag condition keys. A simplified sketch of these condition blocks follows this list.
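For reference, the following is a minimal boto3 sketch of the request-tag and resource-tag condition blocks described above. It is not the downloadable sample policy itself: the statement names follow the tutorial, but the policy name, action lists, and exact condition shapes are illustrative assumptions and may differ from the sample policy.

```python
# Minimal sketch of the ABAC condition blocks (not the full sample policy).
# The policy name, statement names, and action lists are illustrative.
import json
import boto3

abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Creation and tagging actions are gated on the tags supplied in the request.
            "Sid": "ECSCreateCluster",
            "Effect": "Allow",
            "Action": ["ecs:CreateCluster", "ecs:TagResource"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/owner": "${aws:PrincipalTag/owner}",
                    "aws:RequestTag/environment": "${aws:PrincipalTag/environment}",
                }
            },
        },
        {
            # Update, delete, and describe actions are gated on the tags already on the resource.
            "Sid": "ECSDeletion",
            "Effect": "Allow",
            "Action": ["ecs:DeleteCluster", "ecs:DeleteService"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/owner": "${aws:PrincipalTag/owner}",
                    "aws:ResourceTag/environment": "${aws:PrincipalTag/environment}",
                }
            },
        },
    ],
}

iam = boto3.client("iam")
policy = iam.create_policy(PolicyName="ECSABAC", PolicyDocument=json.dumps(abac_policy))
print(policy["Policy"]["Arn"])
```

Comparing the request and resource tags to ${aws:PrincipalTag/...} is what lets the same policy scale across teams: access follows the tags on the role rather than a hardcoded list of resources.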

Create IAM roles

Create an IAM role and attach the ECSABAC policy you created in the previous procedure. You can create the role and add tags to it in the AWS Management Console through the role creation flow, as shown in the following steps; a boto3 sketch of the same procedure follows the console steps.

To create IAM roles

  1. Sign in to the AWS Management Console and navigate to the IAM console.
  2. In the left navigation pane, select Roles, and then select Create Role.
  3. Choose the Another AWS account role type.
  4. For Account ID, enter the AWS account ID from the prerequisites, the account to which you want to grant access to your resources.
  5. Choose Next: Permissions.
  6. IAM includes a list of the AWS managed and customer managed policies in your account. Select the ECSABAC policy you created previously from the dropdown menu to use for the permissions policy. Alternatively, you can choose Create policy to open a new browser tab and create a new policy, as shown in Figure 1.
    Figure 1. Attach the ECS ABAC policy to the role

  7. Choose Next: Tags.
  8. Add metadata to the role by attaching tags as key-value pairs. Add the following tags to the role: for the key owner, enter the value mock_user; for the key environment, enter the value development, as shown in Figure 2.
    Figure 2. Define the tags in the IAM role

  9. Choose Next: Review.
  10. For Role name, enter a name for your role. Role names must be unique within your AWS account.
  11. Review the role and then choose Create role.
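If you prefer to script role creation, the following boto3 sketch performs roughly the same steps as the console flow above. The account ID, role name, trust policy, and policy ARN shown here are placeholder assumptions; adjust them for your environment.

```python
# Sketch of creating the tagged role and attaching the ECSABAC policy.
# Account ID, role name, and policy ARN are placeholders.
import json
import boto3

ACCOUNT_ID = "111122223333"                      # placeholder account from the prerequisites
POLICY_ARN = f"arn:aws:iam::{ACCOUNT_ID}:policy/ECSABAC"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="ECSABACTestRole",                  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Tags=[
        {"Key": "owner", "Value": "mock_user"},
        {"Key": "environment", "Value": "development"},
    ],
)
iam.attach_role_policy(RoleName="ECSABACTestRole", PolicyArn=POLICY_ARN)
```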

Test the solution

The following sections present some positive and negative test cases that show how tags can provide fine-grained permission to users through ABAC policies.

Prerequisites for the negative and positive testing

Before you can perform the positive and negative tests, you must first do these steps in the AWS Management Console:

  1. Follow the preceding procedures to create the IAM role and the ABAC policy.
  2. Switch from the role you assumed in the prerequisites to the role you created in To create IAM roles above, following the steps in Switching to a role. A boto3 sketch of assuming the role programmatically follows this list.
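If you want to run the tests from a script instead of the console, the following boto3 sketch shows one way to assume the tagged role; the role ARN is a placeholder.

```python
# Sketch of assuming the tagged role programmatically. The role ARN is a placeholder.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ECSABACTestRole",  # placeholder
    RoleSessionName="abac-testing",
)["Credentials"]

# ECS calls made through this session are evaluated against the role's principal tags.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```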

Perform negative testing

For the negative testing, three test cases are presented here that show how the ABAC policy prevents creation of ECS resources when the owner or environment tags are missing, or when incorrect tag values are used for the ECS resource.

Negative test case 1: Create cluster without the required tags

In this test case, you check whether an ECS cluster can be created without any tags (in other words, without adding the owner and environment tags). The console steps follow; an equivalent API-level call is sketched after the steps.

To create a cluster without the required tags

  1. Sign in to the AWS Management Console and navigate to the Amazon ECS console.
  2. From the navigation bar, select the Region to use.
  3. In the navigation pane, choose Clusters.
  4. On the Clusters page, choose Create Cluster.
  5. For Select cluster compatibility, choose Networking only, then choose Next Step.
  6. On the Configure cluster page, enter a cluster name. For Provisioning Model, choose On-Demand Instance, as shown in Figure 3.
    Figure 3. Create a cluster

  7. In the Networking section, configure the VPC for your cluster.
  8. Don’t add any tags in the Tags section, as shown in Figure 4.
    Figure 4. No tags added to the cluster

  9. Choose Create.
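As an API-level equivalent of the console steps above (a sketch, assuming your current credentials are the tagged role you created), the same negative test looks like the following; the cluster name is a placeholder.

```python
# Negative test case 1 at the API level: create a cluster with no tags and
# expect the ABAC policy to deny the request.
import boto3
from botocore.exceptions import ClientError

ecs = boto3.client("ecs")   # assumes your current credentials are the tagged role

try:
    ecs.create_cluster(clusterName="abac-test-cluster")   # no tags supplied
except ClientError as err:
    # Expected: an access-denied error, because the policy requires matching
    # owner and environment tags in the request.
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])
```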

Expected result of negative test case 1

Because the owner and the environment tags are absent, the ABAC policy prevents the creation of the cluster and throws an error, as shown in Figure 5.

Figure 5. Unsuccessful creation of the ECS cluster due to missing tags

Negative test case 2: Create cluster with a missing tag

In this test case, you check whether an ECS cluster is successfully created when one of the required tags is missing. You create a cluster similar to the one created in Negative test case 1. However, in this test case, in the Tags section, you enter only the owner tag. The environment tag is missing, as shown in Figure 6.

To create a cluster with a missing tag

  1. Repeat steps 1-7 from the Negative test case 1 procedure.
  2. In the Tags section, add the owner tag and enter its value as mock_user.
    Figure 6. Create a cluster with the environment tag missing

Expected result of negative test case 2

The ABAC policy prevents the creation of the cluster, due to the missing environment tag in the cluster. This results in an error, as shown in Figure 7.

Figure 7. Unsuccessful creation of the ECS cluster due to missing tag

Negative test case 3: Create cluster with incorrect tag values

In this test case, you check whether an ECS cluster is successfully created with incorrect tag values. Create a cluster similar to the one in Negative test case 1. However, in this test case, in the Tags section, enter incorrect values for the owner and the environment tag keys, as shown in Figure 8.

To create a cluster with incorrect tag values

  1. Repeat steps 1-7 from the Negative test case 1 procedure.
  2. In the Tags section, add the owner tag and enter the value as test_user; add the environment tag and enter the value as production.
    Figure 8. Create a cluster with the incorrect values for the tags

Expected result of negative test case 3

The ABAC policy prevents the creation of the cluster, due to incorrect values for the owner and environment tags in the cluster. This results in an error, as shown in Figure 9.

Figure 9. Unsuccessful creation of the ECS cluster due to incorrect value for the tags

Perform positive testing

For the positive testing, two test cases are provided here that show how the ABAC policy allows successful creation of ECS resources, such as ECS clusters and ECS tasks, when the correct tags and values are provided for the resources.

Positive test case 1: Create cluster with all the correct tag-value pairs

This test case checks whether an ECS cluster is successfully created when you create the cluster with both the owner and environment tags, using values that match the ABAC policy you created earlier. An equivalent API-level call is sketched after the console steps.

To create a cluster with all the correct tag-value pairs

  1. Repeat steps 1-7 from the Negative test case 1 procedure.
  2. In the Tags section, add the owner tag and enter the value as mock_user; add the environment tag and enter the value as development, as shown in Figure 10.
    Figure 10. Add correct tags to the cluster
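The API-level equivalent of this positive test (a sketch, assuming your current credentials are the tagged role) passes the matching tags in the request; the cluster name is a placeholder.

```python
# Positive test case 1 at the API level: the request tags match the role's
# principal tags, so the ABAC policy allows the call.
import boto3

ecs = boto3.client("ecs")   # assumes your current credentials are the tagged role

response = ecs.create_cluster(
    clusterName="abac-test-cluster",             # placeholder name
    tags=[
        {"key": "owner", "value": "mock_user"},
        {"key": "environment", "value": "development"},
    ],
)
print(response["cluster"]["clusterArn"], response["cluster"]["status"])
```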

Expected result of positive test case 1

Because both the owner and the environment tags were input correctly, the ABAC policy allows the successful creation of the cluster without throwing an error, as shown in Figure 11.

Figure 11. Successful creation of the cluster

Positive test case 2: Create standalone task with all the correct tag-value pairs

Deploying your application as a standalone task can be ideal in certain situations. For example, suppose you’re developing an application, but you aren’t ready to deploy it with the service scheduler. Maybe your application is a one-time or periodic batch job, and it doesn’t make sense to keep running it, or to restart when it finishes.

For this test case, you run a standalone task with the correct owner and environment tags that match the ABAC policy. An equivalent API-level call is sketched after the console steps.

To create a standalone task with all the correct tag-value pairs

  1. To run a standalone task, see Run a standalone task in the Amazon ECS Developer Guide. Figure 12 shows the beginning of the Run Task process.
    Figure 12. Run a standalone task

  2. In the Task tagging configuration section, under Tags, add the owner tag and enter the value as mock_user; add the environment tag and enter the value as development, as shown in Figure 13.
    Figure 13. Creation of the task with the correct tag
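The API-level equivalent of this test (a sketch, assuming your current credentials are the tagged role) passes the same tags to RunTask; the cluster, task definition, and subnet values are placeholders that you would replace with your own.

```python
# Positive test case 2 at the API level: run a standalone task with matching tags.
import boto3

ecs = boto3.client("ecs")   # assumes your current credentials are the tagged role

response = ecs.run_task(
    cluster="abac-test-cluster",                 # placeholder cluster
    taskDefinition="abac-test-taskdef",          # placeholder task definition
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
            "assignPublicIp": "DISABLED",
        }
    },
    tags=[
        {"key": "owner", "value": "mock_user"},
        {"key": "environment", "value": "development"},
    ],
)
for task in response["tasks"]:
    print(task["taskArn"], task["lastStatus"])
```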

Expected result of positive test case 2

Because you applied the correct tags in the creation phase, the task is created successfully, as shown in Figure 14.

Figure 14. Successful creation of the task

Cleanup

To avoid incurring future charges, after completing testing, delete any resources you created for this solution that are no longer needed. See the following links for step-by-step instructions for deleting the resources you created in this blog post.

  1. Deregistering an ECS Task Definition
  2. Deleting ECS Clusters
  3. Deleting IAM Policies
  4. Deleting IAM Roles and Instance Profiles

Conclusion

This post demonstrates the basics of how to use ABAC policies to provide fine-grained permissions to users based on attributes such as tags. You learned how to create ABAC policies to restrict permissions to users by associating tags with each ECS resource you create. You can use tags to manage and secure access to ECS resources, including ECS clusters, ECS tasks, ECS task definitions, and ECS services.

For more information about the ECS resources that support tagging, see the Amazon Elastic Container Service Guide.

 
If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on AWS re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Kriti Heda

Kriti is a NJ-based Security Transformation Consultant in the SRC team at AWS. She's a technology enthusiast who enjoys helping customers find innovative solutions to complex security challenges. She spends her days working to build and deploy security infrastructure and automate security operations for customers. Outside of work, she enjoys adventures, sports, and dancing.

AWS User Guide to Financial Services Regulations and Guidelines in Switzerland and FINMA workbooks publications

Post Syndicated from Margo Cronin original https://aws.amazon.com/blogs/security/aws-user-guide-to-financial-services-regulations-and-guidelines-in-switzerland-and-finma/

AWS is pleased to announce the publication of the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland whitepaper and workbooks.

This guide refers to certain rules applicable to financial institutions in Switzerland, including banks, insurance companies, stock exchanges, securities dealers, portfolio managers, trustees, and other financial entities that are overseen (directly or indirectly) by the Swiss Financial Market Supervisory Authority (FINMA).

Amongst other topics, this guide covers requirements created by the following regulations and publications of interest to Swiss financial institutions:

  • Federal Laws – including Article 47 of the Swiss Banking Act (BA). Banks and Savings Banks are overseen by FINMA and governed by the BA (Bundesgesetz über die Banken und Sparkassen, Bankengesetz, BankG). Article 47 BA is relevant in the context of outsourcing.
  • Response on Cloud Guidelines for Swiss financial institutions produced by the Swiss Bankers Association (Schweizerische Bankiervereinigung, SBVg).
  • Controls outlined by FINMA, Switzerland’s independent regulator of financial markets, that may be applicable to Swiss banks and insurers in the context of outsourcing arrangements to the cloud.

In combination with the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland whitepaper, customers can use the detailed AWS FINMA workbooks and ISAE 3000 report available from AWS Artifact.

The five core FINMA circulars are intended to assist Swiss-regulated financial institutions in understanding approaches to due diligence, third-party management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The AWS FINMA workbooks and ISAE 3000 report scope covers, in detail, requirements of the following FINMA circulars:

  • 2018/03 Outsourcing – banks and insurers (04.11.2020)
  • 2008/21 Operational Risks – Banks – Principle 4 Technology Infrastructure (31.10.2019)
  • 2008/21 Operational Risks – Banks – Appendix 3 Handling of electronic Client Identifying Data (31.10.2019)
  • 2013/03 Auditing – Information Technology (04.11.2020)
  • 2008/10 Self-regulation as a minimum standard – Minimum Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013)

Customers can use the detailed FINMA workbooks, which include control mappings for each FINMA control, covering both the AWS control activities and the Customer User Entity Controls. Where applicable under the AWS Shared Responsibility Model, these workbooks provide industry standard practices, incorporating AWS Well-Architected, to assist Swiss customers in their own preparation for FINMA circular alignment.

This whitepaper follows the issuance of the second Swiss Financial Market Supervisory Authority (FINMA) ISAE 3000 Type 2 attestation report. The latest report covers the period from October 1, 2020 to September 30, 2021, with a total of 141 AWS services and 23 global AWS Regions included in the scope. Customers can download the report from AWS Artifact. A full list of certified services and Regions is presented within the published FINMA report.

As always, AWS is committed to bringing new services into the scope of our FINMA program in the future based on customers’ architectural and regulatory needs. Please reach out to your AWS account team if you have any questions or feedback. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Margo Cronin

Margo is a Principal Security Specialist at Amazon Web Services based in Zurich, Switzerland. She spends her days working with customers, from startups to the largest of enterprises, helping them build new capabilities and accelerating their cloud journey. She has a strong focus on security, helping customers improve their security, risk, and compliance in the cloud.

Top 2021 AWS Security service launches security professionals should review – Part 1

Post Syndicated from Ryan Holland original https://aws.amazon.com/blogs/security/top-2021-aws-security-service-launches-part-1/

Given the speed of Amazon Web Services (AWS) innovation, it can sometimes be challenging to keep up with AWS Security service and feature launches. To help you stay current, here’s an overview of some of the most important 2021 AWS Security launches that security professionals should be aware of. This is the first of two related posts; Part 2 will highlight some of the important 2021 launches that security professionals should be aware of across all AWS services.

Amazon GuardDuty

In 2021, the threat detection service Amazon GuardDuty expanded the internal AWS security intelligence it consumes to use more of the intel that AWS internal threat detection teams collect, including additional nation-state threat intelligence. Sharing more of the important intel that internal AWS teams collect lets you quickly improve your protection. GuardDuty also launched domain reputation modeling. These machine learning models take all the domain requests from across all of AWS, and feed them into a model that allows AWS to categorize previously unseen domains as highly likely to be malicious or benign based on their behavioral characteristics. In practice, AWS is seeing that these models often deliver high-fidelity threat detections, identifying malicious domains 7–14 days before they are identified and available on commercial threat feeds.

AWS also launched second generation anomaly detection for GuardDuty. Shortly after the original GuardDuty launch in 2017, AWS added additional anomaly detection for user behavior analytics and monitoring for unusual activity of AWS Identity and Access Management (IAM) users. After receiving customer feedback that the original feature was a little too noisy, and that it was difficult to understand why some findings were generated, the GuardDuty analytics team rebuilt this functionality on an entirely new machine learning model, considerably reducing the number of detections and generating a more accurate positive-detection rate. The new model also added additional context that security professionals (such as analysts) can use to understand why the model shows findings as suspicious or unusual.

Since its introduction, GuardDuty has detected when AWS EC2 Role credentials are used to call AWS APIs from IP addresses outside of AWS. Beginning in early 2022, GuardDuty now supports detection when credentials are used from other AWS accounts, inside the AWS network. This is a complex problem for customers to solve on their own, which is why the GuardDuty team added this enhancement. The solution considers that there are legitimate reasons why a source IP address that is communicating with AWS services APIs might be different than the Amazon Elastic Compute Cloud (Amazon EC2) instance IP address, or a NAT gateway associated with the instance’s VPC. The enhancement also considers complex network topologies that route traffic to one or multiple VPCs—for example, AWS Transit Gateway or AWS Direct Connect.

Our customers are increasingly running container workloads in production; helping to raise the security posture of these workloads became an AWS development priority in 2021. GuardDuty for EKS Protection is one recent feature that has resulted from this investment. This new GuardDuty feature monitors Amazon Elastic Kubernetes Service (Amazon EKS) cluster control plane activity by analyzing Kubernetes audit logs. GuardDuty is integrated with Amazon EKS, giving it direct access to the Kubernetes audit logs without requiring you to turn on or store these logs. Once a threat is detected, GuardDuty generates a security finding that includes container details such as pod ID, container image ID, and associated tags. See below for details on how the new Amazon Inspector is also helping to protect containers.
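As a rough sketch (not an official snippet from the launch), enabling EKS audit log monitoring on an existing detector with boto3 might look like the following; the data source structure reflects our understanding of the GuardDuty API and may differ in detail.

```python
# Sketch: turn on Kubernetes (EKS) audit log monitoring for an existing detector.
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]   # assumes one detector exists

guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"Kubernetes": {"AuditLogs": {"Enable": True}}},
)
```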

Amazon Inspector

At AWS re:Invent 2021, we launched the new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. The original Amazon Inspector was completely re-architected in this release to automate vulnerability management and to deliver near real-time findings to minimize the time needed to discover new vulnerabilities. This new Amazon Inspector has simple one-click enablement and multi-account support using AWS Organizations, similar to our other AWS Security services. This launch also introduces a more accurate vulnerability risk score, called the Inspector score. The Inspector score is a highly contextualized risk score that is generated for each finding by correlating Common Vulnerability and Exposures (CVE) metadata with environmental factors for resources such as network accessibility. This makes it easier for you to identify and prioritize your most critical vulnerabilities for immediate remediation. One of the most important new capabilities is that Amazon Inspector automatically discovers running EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR), at any scale, and immediately starts assessing them for known vulnerabilities. Now you can consolidate your vulnerability management solutions for both Amazon EC2 and Amazon ECR into one fully managed service.
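As an illustration of the simple enablement model (a sketch, assuming a standalone account rather than an Organizations setup), activating the new Amazon Inspector for EC2 and ECR scanning with boto3 might look like this.

```python
# Sketch: activate the new Amazon Inspector for EC2 instance and ECR image scanning.
import boto3

inspector = boto3.client("inspector2")
inspector.enable(resourceTypes=["EC2", "ECR"])
```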

AWS Security Hub

In addition to a significant number of smaller enhancements throughout 2021, in October AWS Security Hub, an AWS cloud security posture management service, addressed a top customer enhancement request by adding support for cross-Region finding aggregation. You can now view all your findings from all accounts and all selected Regions in a single console view, and act on them from an Amazon EventBridge feed in a single account and Region. Looking back at 2021, Security Hub added 72 additional best practice checks, four new AWS service integrations, and 13 new external partner integrations. A few of these integrations are Atlassian Jira Service Management, Forcepoint Cloud Security Gateway (CSG), and Amazon Macie. Security Hub also achieved FedRAMP High authorization to enable security posture management for high-impact workloads.
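As a sketch of the cross-Region aggregation setup (run from the Region you want to aggregate findings into; the Region shown is an example), enabling it with boto3 might look like this.

```python
# Sketch: aggregate Security Hub findings from all Regions into the current Region.
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")   # example aggregation Region
securityhub.create_finding_aggregator(RegionLinkingMode="ALL_REGIONS")
```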

Amazon Macie

Based on customer feedback, data discovery tool Amazon Macie launched a number of enhancements in 2021. One new feature, which made it easier to manage Amazon Simple Storage Service (Amazon S3) buckets for sensitive data, was criteria-based bucket selection. This Macie feature allows you to define runtime criteria to determine which S3 buckets should be included in a sensitive data-discovery job. When a job runs, Macie identifies the S3 buckets that match your criteria, and automatically adds or removes them from the job’s scope. Before this feature, once a job was configured, it was immutable. Now, for example, you can create a policy where if a bucket becomes public in the future, it’s automatically added to the scan, and similarly, if a bucket is no longer public, it will no longer be included in the daily scan.

Originally Macie included all managed data identifiers available for all scans. However, customers wanted more surgical search criteria. For example, they didn’t want to be informed if there were exposed data types in a particular environment. In September 2021, Macie launched the ability to enable/disable managed data identifiers. This allows you to customize the data types you deem sensitive and would like Macie to alert on, in accordance with your organization’s data governance and privacy needs.
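As a rough sketch of how these options surface in the macie2 API (parameter names and identifier IDs here are assumptions and may differ), a one-time discovery job that excludes selected managed data identifiers might look like this.

```python
# Sketch: a one-time Macie discovery job that excludes selected managed data identifiers.
# Bucket, account, and identifier IDs are placeholders.
import uuid
import boto3

macie = boto3.client("macie2")
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="ONE_TIME",
    name="sensitive-data-scan",                        # placeholder job name
    managedDataIdentifierSelector="EXCLUDE",
    managedDataIdentifierIds=["CREDIT_CARD_NUMBER"],   # placeholder identifier ID
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["example-bucket"]}   # placeholders
        ]
    },
)
```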

Amazon Detective

Amazon Detective is a service to analyze and visualize security findings and related data to rapidly get to the root cause of potential security issues. In January 2021, Amazon Detective added a convenient, time-saving integration that allows you to start security incident investigation workflows directly from the GuardDuty console. This new hyperlink pivot in the GuardDuty console takes findings directly from the GuardDuty console into the Detective console. Another time-saving capability added was the IP address drill down functionality. This new capability can be useful to security forensic teams performing incident investigations, because it helps quickly determine the communications that took place from an EC2 instance under investigation before, during, and after an event.

In December 2021, Detective added support for AWS Organizations to simplify management for security operations and investigations across all existing and future accounts in an organization. This launch allows new and existing Detective customers to onboard and centrally manage the Detective graph database for up to 1,200 AWS accounts.

AWS Key Management Service

In June 2021, AWS Key Management Service (AWS KMS) introduced multi-Region keys, a capability that lets you replicate keys from one AWS Region into another. With multi-Region keys, you can more easily move encrypted data between Regions without having to decrypt and re-encrypt with different keys for each Region. Multi-Region keys are supported for client-side encryption using direct AWS KMS API calls, or in a simplified manner with the AWS Encryption SDK and Amazon DynamoDB Encryption Client.
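As a brief sketch of the workflow (Regions shown are examples), you create a multi-Region primary key and then replicate it into another Region.

```python
# Sketch: create a multi-Region primary key and replicate it to a second Region.
import boto3

kms = boto3.client("kms", region_name="us-east-1")          # primary Region (example)
key = kms.create_key(Description="Example multi-Region key", MultiRegion=True)
key_id = key["KeyMetadata"]["KeyId"]

# The replica shares the same key ID and key material, so data encrypted in one
# Region can be decrypted in the other without re-encryption.
kms.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")  # replica Region (example)
```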

AWS Secrets Manager

Last year was a busy year for AWS Secrets Manager, with four feature launches to make it easier to manage secrets at scale, not just for client applications, but also for platforms. In March 2021, Secrets Manager launched multi-Region secrets to automatically replicate secrets for multi-Region workloads. Also in March, Secrets Manager added three new rules to AWS Config, to help administrators verify that secrets in Secrets Manager are configured according to organizational requirements. Then in April 2021, Secrets Manager added a CSI driver plug-in, to make it easy to consume secrets from Amazon EKS by using Kubernetes’s standard Secrets Store interface. In November, Secrets Manager introduced a higher secret limit of 500,000 per account to simplify secrets management for independent software vendors (ISVs) that rely on unique secrets for a large number of end customers. Although launched in January 2022, it’s also worth mentioning Secrets Manager’s release of rotation windows to align automatic rotation of secrets with application maintenance windows.
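As a brief sketch of the multi-Region secrets feature (the secret name, value, and Regions are placeholders), a secret can be created with replicas in additional Regions in a single call.

```python
# Sketch: create a secret that is replicated to a second Region.
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")   # primary Region (example)
secretsmanager.create_secret(
    Name="example/app/db-credentials",                                     # placeholder name
    SecretString='{"username": "app_user", "password": "example-only"}',   # placeholder value
    AddReplicaRegions=[{"Region": "eu-west-1"}],                           # replica Region (example)
)
```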

Amazon CodeGuru and Secrets Manager

In November 2021, AWS announced a new secrets detector feature in Amazon CodeGuru that searches your codebase for hardcoded secrets. Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations to detect security vulnerabilities, improve code quality, and identify an application’s most expensive lines of code.

This new feature can pinpoint locations in your code with usernames and passwords; database connection strings, tokens, and API keys from AWS; and other service providers. When a secret is found in your code, CodeGuru Reviewer provides an actionable recommendation that links to AWS Secrets Manager, where developers can secure the secret with a point-and-click experience.

Looking ahead for 2022

AWS will continue to deliver experiences in 2022 that meet administrators where they govern, developers where they code, and applications where they run. A lot of customers are moving to container and serverless workloads; you can expect to see more work on this in 2022. You can also expect to see more work around integrations, like CodeGuru Secrets Detector identifying plaintext secrets in code (as noted previously).

To stay up-to-date in the year ahead on the latest product and feature launches and security use cases, be sure to read the Security service launch announcements. Additionally, stay tuned to the AWS Security Blog for Part 2 of this blog series, which will provide an overview of some of the important 2021 launches that security professionals should be aware of across all AWS services.

If you’re looking for more opportunities to learn about AWS security services, check out AWS re:Inforce, the AWS conference focused on cloud security, identity, privacy, and compliance, which will take place June 28-29 in Houston, Texas.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Ryan Holland

Ryan is a Senior Manager with GuardDuty Security Response. His team is responsible for ensuring GuardDuty provides the best security value to customers, including threat intelligence, behavioral analytics, and finding quality.

Author

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

New for Amazon CodeGuru Reviewer – Detector Library and Security Detectors for Log-Injection Flaws

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-codeguru-reviewer-detector-library-and-security-detectors-for-log-injection-flaws/

Amazon CodeGuru Reviewer is a developer tool that detects security vulnerabilities in your code and provides intelligent recommendations to improve code quality. For example, CodeGuru Reviewer introduced Security Detectors for Java and Python code to identify security risks from the top ten Open Web Application Security Project (OWASP) categories and follow security best practices for AWS APIs and common crypto libraries. At re:Invent, CodeGuru Reviewer introduced a secrets detector to identify hardcoded secrets and suggest remediation steps to secure your secrets with AWS Secrets Manager. These capabilities help you find and remediate security issues before you deploy.

Today, I am happy to share two new features of CodeGuru Reviewer:

  • A new Detector Library describes in detail the detectors that CodeGuru Reviewer uses when looking for possible defects and includes code samples for both Java and Python.
  • New security detectors have been introduced for detecting log-injection flaws in Java and Python code, similar to what happened with the recent Apache Log4j vulnerability we described in this blog post.

Let’s see these new features in more detail.

Using the Detector Library
To help you understand more clearly which detectors CodeGuru Reviewer uses to review your code, we are now sharing a Detector Library where you can find detailed information and code samples.

These detectors help you build secure and efficient applications on AWS. In the Detector Library, you can find detailed information about CodeGuru Reviewer’s security and code quality detectors, including descriptions, their severity and potential impact on your application, and additional information that helps you mitigate risks.

Note that each detector looks for a wide range of code defects. We include one noncompliant and one compliant code example for each detector. However, CodeGuru uses machine learning and automated reasoning to identify possible issues. For this reason, each detector can find a range of defects in addition to the explicit code example shown on the detector's description page.

Let’s have a look at a few detectors. One detector is looking for insecure cross-origin resource sharing (CORS) policies that are too permissive and may lead to loading content from untrusted or malicious sources.

Detector Library screenshot.

Another detector checks for improper input validation that can enable attacks and lead to unwanted behavior.

Detector Library screenshot.

Specific detectors help you use the AWS SDK for Java and the AWS SDK for Python (Boto3) in your applications. For example, there are detectors that can detect hardcoded credentials, such as passwords and access keys, or inefficient polling of AWS resources.

New Detectors for Log-Injection Flaws
Following the recent Apache Log4j vulnerability, we introduced in CodeGuru Reviewer new detectors that check if you’re logging anything that is not sanitized and possibly executable. These detectors cover the issue described in CWE-117: Improper Output Neutralization for Logs.

These detectors work with Java and Python code and, for Java, are not limited to the Log4j library. They don’t work by looking at the version of the libraries you use, but check what you are actually logging. In this way, they can protect you if similar bugs happen in the future.

Detector Library screenshot.

Based on the guidance from these detectors, user-provided input must be sanitized before it is logged. This prevents an attacker from using that input to break the integrity of your logs, forge log entries, or bypass log monitors.
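As a simple illustration of the kind of fix these detectors point toward (this is a generic example, not CodeGuru's own remediation code), the following Python sketch neutralizes user-controlled input before it reaches the logger.

```python
# Sketch: neutralize user-controlled input before logging it, so an attacker
# can't forge log entries by embedding newlines or other control characters.
import logging
import re

logger = logging.getLogger(__name__)

def sanitize_for_log(value: str) -> str:
    # Replace carriage returns, line feeds, and other control characters.
    return re.sub(r"[\r\n\x00-\x1f\x7f]", "_", value)

def handle_login(username: str) -> None:
    logger.info("Login attempt for user: %s", sanitize_for_log(username))
```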

Availability and Pricing
These new features are available today in all AWS Regions where Amazon CodeGuru is offered. For more information, see the AWS Regional Services List.

The Detector Library is free to browse as part of the documentation. For the new detectors looking for log-injection flaws, standard pricing applies. See the CodeGuru pricing page for more information.

Start using Amazon CodeGuru Reviewer today to improve the security of your code.

Danilo