Tag Archives: Compliance

LGPD workbook for AWS customers managing personally identifiable information in Brazil

Post Syndicated from Rodrigo Fiuza original https://aws.amazon.com/blogs/security/lgpd-workbook-for-aws-customers-managing-personally-identifiable-information-in-brazil/

Portuguese version

AWS is pleased to announce the publication of the Brazil General Data Protection Law Workbook.

The General Data Protection Law (LGPD) in Brazil was first published on 14 August 2018 and became applicable on 18 August 2020. Companies that manage personally identifiable information (PII) in Brazil, as defined by the LGPD, must comply with the law.

To better help customers prepare and implement controls that focus on LGPD Chapter VII Security and Best Practices, AWS created a workbook based on industry best practices, AWS service offerings, and controls.

Among other topics, this workbook covers information security and the AWS controls that support it.

In combination with the Brazil General Data Protection Law Workbook, customers can use the detailed Navigating LGPD Compliance on AWS whitepaper.

AWS adheres to a shared responsibility model. Customers should review which services offer privacy features and determine how those features apply to their specific compliance requirements. Further information about data privacy at AWS can be found at our Data Privacy Center. Specific information about the LGPD and data privacy at AWS in Brazil can be found on our Brazil Data Privacy page.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.
Want more AWS Security news? Follow us on Twitter.
 


Portuguese

LGPD workbook for AWS customers managing personally identifiable information in Brazil

AWS is pleased to announce the publication of the Brazil General Data Protection Law Workbook.

The General Data Protection Law (LGPD) was first published in Brazil on 14 August 2018 and became applicable on 18 August 2020. Companies that manage personally identifiable information (PII) as defined by the LGPD must comply with the provisions of the law.

To better help customers prepare and implement controls that focus on LGPD Chapter VII, Security and Best Practices, AWS created a workbook based on industry best practices, AWS service offerings, and controls.

Among other topics, this workbook covers information security and the AWS controls that support it.

In combination with the Brazil General Data Protection Law Workbook, customers can use the detailed Navigating LGPD Compliance on AWS whitepaper.

AWS adheres to a shared responsibility model. Customers should review which services offer privacy features and determine how those features apply to their specific compliance requirements. Further information about data privacy at AWS can be found at our Data Privacy Center. Additional information about the LGPD and data privacy at AWS in Brazil can be found on our Brazil Data Privacy page.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Rodrigo Fiuza

Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo has 12 years of experience in risk management, security assurance, and technology audits.

Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

Post Syndicated from Rob Samuel original https://aws.amazon.com/blogs/security/canadian-centre-for-cyber-security-assessment-summary-report-now-available-in-aws-artifact/

French version

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can view and download on demand through AWS Artifact.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines whether the Government of Canada (GC) ITS requirements for the CCCS Medium Cloud Security Profile (previously referred to as GC's PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC's requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. If you have questions about this blog post, please start a new thread on the AWS Artifact forum or contact AWS Support.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Rob Samuel

Rob Samuel is a Principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry and has previously held leadership roles including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, and various security leadership roles as a public servant, and he served as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani

Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft

Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.



 

Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

By Rob Samuel, Naranjan Goklani, and Brian Mycroft
At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can now view and download on demand through AWS Artifact.

The CCCS is Canada's authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on the CCCS's rigorous Cloud Service Provider (CSP) IT security assessment in their decision to use CSP services. In addition, the CCCS's IT security assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines whether the Government of Canada (GC) IT security requirements for the CCCS Medium Cloud Security Profile (previously referred to as the GC's PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including medium categorization. On a periodic basis, the CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC's requirements. The CCCS prioritizes the assessment of new AWS services based on their availability in Canada and on customer demand for the AWS services. The full list of AWS services that have been assessed by the CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can reach the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Author biographies:

Rob Samuel: Rob Samuel is a Principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry and has previously held leadership roles including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, and various security leadership roles as a public servant, and he served as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani: Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft: Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.

Best practices: Securing your Amazon Location Service resources

Post Syndicated from Dave Bailey original https://aws.amazon.com/blogs/security/best-practices-securing-your-amazon-location-service-resources/

Location data is subject to heavy scrutiny by security experts. Knowing the current position of a person, vehicle, or asset can provide industries with many benefits, whether that's understanding where a current delivery is, how many people are inside a venue, or how to optimize routing for a fleet of vehicles. This blog post explains how Amazon Web Services (AWS) helps keep location data secure in transit and at rest, and how you can use additional security features to help keep information safe and compliant.

The General Data Protection Regulation (GDPR) defines personal data as “any information relating to an identified or identifiable natural person (…) such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” Also, many companies wish to improve transparency to users, making it explicit when a particular application wants to not only track their position and data, but also to share that information with other apps and websites. Your organization needs to adapt to these changes quickly to maintain a secure stance in a competitive environment.

On June 1, 2021, AWS made Amazon Location Service generally available to customers. With Amazon Location, you can build applications that provide maps and points of interest, convert street addresses into geographic coordinates, calculate routes, track resources, and invoke actions based on location. The service enables you to access location data with developer tools and to move your applications to production faster with monitoring and management capabilities.

In this blog post, we will show you the features that Amazon Location provides out of the box to keep your data safe, along with best practices that you can follow to reach the level of security that your organization strives to accomplish.

Data control and data rights

Amazon Location relies on trusted global providers Esri and HERE Technologies to provide high-quality location data to customers. Features like maps, places, and routes are provided by these AWS Partners, so solutions can have data that is not only accurate but also constantly updated.

AWS anonymizes and encrypts location data at rest and during its transmission to partner systems. In addition, under our service terms, third parties cannot sell your data or use it for advertising purposes. This helps you shield sensitive information, protect user privacy, and reduce organizational compliance risks. To learn more, see the Amazon Location Data Security and Control documentation.

Integrations

Operationalizing location-based solutions can be daunting. You not only need to build the solution, but also integrate it with the rest of your applications running on AWS. Amazon Location eases this from a security perspective by integrating with services that expedite development while strengthening the security of the solution.

Encryption

Amazon Location uses AWS owned keys by default to automatically encrypt personally identifiable data. AWS owned keys are a collection of AWS Key Management Service (AWS KMS) keys that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned keys are not in your AWS account, Amazon Location can use the associated AWS owned keys to protect the resources in your account.

If customers choose to use their own keys, they can benefit from AWS KMS to store their own encryption keys and use them to add a second layer of encryption to geofencing and tracking data.

Authentication and authorization

Amazon Location also integrates with AWS Identity and Access Management (IAM), so that you can use its identity-based policies to specify allowed or denied actions and resources, as well as the conditions under which actions are allowed or denied on Amazon Location. Also, for actions that require unauthenticated access, you can use unauthenticated IAM roles.

As an extension to IAM, Amazon Cognito can be an option if you need to integrate your solution with a front-end client that authenticates users with its own process. In this case, you can use Cognito to handle the authentication, authorization, and user management for you. You can use Cognito unauthenticated identity pools with Amazon Location as a way for applications to retrieve temporary, scoped-down AWS credentials. To learn more about setting up Cognito with Amazon Location, see the blog post Add a map to your webpage with Amazon Location Service.
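
As a rough sketch of that flow, the AWS CLI commands below show how a client exchanges an unauthenticated Cognito identity for temporary credentials that can then sign Amazon Location requests. The identity pool ID and Region are placeholder values, the pool must allow unauthenticated identities, and in practice the Amazon Location front-end SDKs perform this exchange for you.

# Obtain an identity ID from an unauthenticated Cognito identity pool
# (the pool ID and Region below are placeholders).
IDENTITY_ID=$(aws cognito-identity get-id \
    --identity-pool-id "us-west-2:11111111-2222-3333-4444-555555555555" \
    --region us-west-2 \
    --query IdentityId --output text)

# Exchange the identity ID for temporary, scoped-down AWS credentials
# that the client can use to call Amazon Location APIs.
aws cognito-identity get-credentials-for-identity \
    --identity-id "$IDENTITY_ID" \
    --region us-west-2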

Limit the scope of your unauthenticated roles to a domain

When you are building an application that allows users to perform actions such as retrieving map tiles, searching for points of interest, updating device positions, and calculating routes without needing them to be authenticated, you can make use of unauthenticated roles.

When using unauthenticated roles to access Amazon Location resources, you can add an extra condition to limit resource access to an HTTP referer that you specify in the policy. The aws:referer request context value is provided by the caller in an HTTP header, and it is included in a web browser request.

The following is an example of a policy that allows access to a Map resource by using the aws:referer condition, but only if the request comes from the domain example.com.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MapsReadOnly",
      "Effect": "Allow",
      "Action": [
        "geo:GetMapStyleDescriptor",
        "geo:GetMapGlyphs",
        "geo:GetMapSprites",
        "geo:GetMapTile"
      ],
      "Resource": "arn:aws:geo:us-west-2:111122223333:map/MyMap",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://www.example.com/*"
        }
      }
    }
  ]
}

To learn more about aws:referer and other global conditions, see AWS global condition context keys.

Encrypt tracker and geofence information using customer managed keys with AWS KMS

When you create your tracker and geofence collection resources, you have the option to use a symmetric customer managed key to add a second layer of encryption to geofencing and tracking data. Because you have full control of this key, you can establish and maintain your own IAM policies, manage key rotation, and schedule keys for deletion.

After you create your resources with customer managed keys, the geometry of your geofences and all positions associated with a tracked device will have two layers of encryption. In the next sections, you will see how to create a key and use it to encrypt your own data.

Create an AWS KMS symmetric key

First, you need to create a key policy that will limit the AWS KMS key to allow access to principals authorized to use Amazon Location and to principals authorized to manage the key. For more information about specifying permissions in a policy, see the AWS KMS Developer Guide.

To create the key policy

Create a JSON policy file by using the following policy as a reference. This key policy allows Amazon Location to grant access to your KMS key only when it is called from your AWS account. This works by combining the kms:ViaService and kms:CallerAccount conditions. In the following policy, replace us-west-2 with your AWS Region of choice, and the kms:CallerAccount value with your AWS account ID. Adjust the KMS Key Administrators statement to reflect your actual key administrators’ principals, including yourself. For details on how to use the Principal element, see the AWS JSON policy elements documentation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Amazon Location",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:DescribeKey",
        "kms:CreateGrant"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "geo.us-west-2.amazonaws.com",
          "kms:CallerAccount": "111122223333"
        }
      }
    },
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/KMSKeyAdmin"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    }
  ]
}

For the next steps, you will use the AWS Command Line Interface (AWS CLI). Make sure to have the latest version installed by following the AWS CLI documentation.

Tip: The AWS CLI uses the Region you defined as the default during the configuration steps, but you can override this by adding --region <your Region> to the end of each of the following commands. Also, make sure that your user has the appropriate permissions to perform those actions.

To create the symmetric key

Now, create a symmetric key on AWS KMS by running the create-key command and passing the policy file that you created in the previous step.

aws kms create-key --policy file://<your JSON policy file>

Alternatively, you can create the symmetric key using the AWS KMS console with the preceding key policy.

After running the command, you should see the following output. Take note of the KeyId value.

{
  "KeyMetadata": {
    "Origin": "AWS_KMS",
    "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
    "Description": "",
    "KeyManager": "CUSTOMER",
    "Enabled": true,
    "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "KeyState": "Enabled",
    "CreationDate": 1502910355.475,
    "Arn": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "AWSAccountId": "111122223333",
    "MultiRegion": false
    "EncryptionAlgorithms": [
      "SYMMETRIC_DEFAULT"
    ],
  }
}

Create Amazon Location tracker and geofence collection resources

To create an Amazon Location tracker resource that uses AWS KMS for a second layer of encryption, run the following command, passing the key ID from the previous step.

aws location \
	create-tracker \
	--tracker-name "MySecureTracker" \
	--kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab"

Here is the output from this command.

{
    "CreateTime": "2021-07-15T04:54:12.913000+00:00",
    "TrackerArn": "arn:aws:geo:us-west-2:111122223333:tracker/MySecureTracker",
    "TrackerName": "MySecureTracker"
}

Similarly, to create a geofence collection by using your own KMS symmetric keys, run the following command, also modifying the key ID.

aws location \
	create-geofence-collection \
	--collection-name "MySecureGeofenceCollection" \
	--kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab"

Here is the output from this command.

{
    "CreateTime": "2021-07-15T04:54:12.913000+00:00",
    "TrackerArn": "arn:aws:geo:us-west-2:111122223333:geofence-collection/MySecureGeoCollection",
    "TrackerName": "MySecureGeoCollection"
}

By following these steps, you have added a second layer of encryption to your geofence collection and tracker.

Data retention best practices

Tracker and geofence collection data is stored in your AWS account and never leaves it without your permission, but the two resource types have different data lifecycles in Amazon Location.

Trackers store the positions of tracked devices and assets in longitude/latitude format. These positions are stored for 30 days by the service before being automatically deleted. If needed for historical purposes, you can transfer this data to another data storage layer and apply the proper security measures based on the shared responsibility model.
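
For example, before the 30-day window ends, you could export a device's stored positions with a call like the following and copy the results to your own storage layer. This is a minimal sketch: the tracker name comes from earlier in this post, and the device ID and time range are placeholder values.

# Export stored positions for one device so they can be archived elsewhere
# (the device ID and time range are placeholder values).
aws location get-device-position-history \
    --tracker-name "MySecureTracker" \
    --device-id "device-1" \
    --start-time-inclusive "2021-07-01T00:00:00Z" \
    --end-time-exclusive "2021-07-15T00:00:00Z"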

Geofence collections store the geometries you provide until you explicitly choose to delete them, so you can use encryption with AWS managed keys or your own keys to keep them for as long as needed.
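
As part of that retention decision, you can remove geofences that are no longer needed. Here is a minimal sketch with the AWS CLI, using the collection created earlier in this post and a hypothetical geofence ID.

# Delete a geofence whose geometry is no longer needed
# (the geofence ID is hypothetical).
aws location batch-delete-geofence \
    --collection-name "MySecureGeofenceCollection" \
    --geofence-ids "store-0042"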

Asset tracking and location storage best practices

After a tracker is created, you can start sending location updates by using the Amazon Location front-end SDKs or by calling the BatchUpdateDevicePosition API. In both cases, at a minimum, you need to provide the latitude and longitude, the time when the device was in that position, and a device-unique identifier that represents the asset being tracked.
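
The following AWS CLI call is a minimal sketch of a single update sent through the BatchUpdateDevicePosition API; the tracker name comes from earlier in this post, and the device ID, coordinates, and sample time are placeholder values.

# Send one position update: device ID, [longitude, latitude] pair, and the
# time the device was at that position (all values below are placeholders).
aws location batch-update-device-position \
    --tracker-name "MySecureTracker" \
    --updates '[{"DeviceId":"device-1","Position":[-122.4194,37.7749],"SampleTime":"2021-07-15T05:00:00Z"}]'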

Protecting device IDs

This device ID can be any string of your choice, so you should apply measures to prevent identifiers that expose personal information from being used; one option, sketched after the following list, is to derive an opaque ID by hashing an internal identifier. Some examples of what to avoid include:

  • First and last names
  • Facility names
  • Documents, such as driver’s licenses or social security numbers
  • Emails
  • Addresses
  • Telephone numbers
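
One hedged approach is to derive the device ID from an internal identifier by hashing it together with a private salt, so the value sent to Amazon Location carries no personal information; the internal identifier and salt below are placeholders.

# Derive an opaque, stable device ID by hashing an internal identifier with a
# private salt (both values below are placeholders).
# On macOS, replace sha256sum with: shasum -a 256
INTERNAL_ID="fleet-vehicle-record-00042"
SALT="replace-with-a-long-random-secret"
DEVICE_ID=$(printf '%s' "${SALT}${INTERNAL_ID}" | sha256sum | awk '{print $1}')
echo "$DEVICE_ID"   # 64-character hex string that is safe to use as a device ID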

Latitude and longitude precision

Latitude and longitude coordinates are expressed in degrees and presented as decimals, with each additional decimal place representing a smaller distance on the ground (when measured at the equator).

Amazon Location supports up to six decimal places of precision (0.000001), which is equal to approximately 11 cm or 4.4 inches at the equator. You can limit the number of decimal places in the latitude and longitude pair that is sent to the tracker based on the precision required, increasing the location range and providing extra privacy to users.
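
As a small illustration of the idea, the snippet below rounds a placeholder coordinate pair to four decimal places (roughly 11 meters at the equator) before it would be sent to the tracker.

# Reduce precision from 6 to 4 decimal places (~11 m at the equator) to give
# users extra privacy (the coordinates are placeholder values).
LON="-122.419416"
LAT="37.774929"
printf 'Rounded position: [%.4f, %.4f]\n' "$LON" "$LAT"
# Output: Rounded position: [-122.4194, 37.7749]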

Figure 1 shows a latitude and longitude pair, with the level of detail associated with each decimal place.

Figure 1: Geolocation decimal precision details


Position filtering

Amazon Location offers position filtering as an option on trackers that helps reduce costs and reduces jitter from inaccurate device location updates.

  • DistanceBased filtering ignores location updates wherein devices have moved less than 30 meters (98.4 ft).
  • TimeBased filtering evaluates every location update against linked geofence collections, but not every location update is stored. If updates arrive more frequently than every 30 seconds, only one update per 30 seconds is stored for each unique device ID.
  • AccuracyBased filtering ignores location updates if the distance moved was less than the measured accuracy provided by the device.

By using filtering options, you can reduce the number of location updates that are sent and stored, thus reducing the level of location detail provided and increasing the level of privacy.
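
For example, you can turn on or change the filtering mode for an existing tracker with a single AWS CLI call; the tracker name comes from earlier in this post.

# Switch the tracker created earlier in this post to accuracy-based filtering.
aws location update-tracker \
    --tracker-name "MySecureTracker" \
    --position-filtering "AccuracyBased"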

Logging and monitoring

Amazon Location integrates with AWS services that provide the observability needed to help you comply with your organization’s security standards.

To record all actions that were taken by users, roles, or AWS services that access Amazon Location, consider using AWS CloudTrail. CloudTrail provides information on who is accessing your resources, detailing the account ID, principal ID, source IP address, timestamp, and more. Moreover, Amazon CloudWatch helps you collect and analyze metrics related to your Amazon Location resources. CloudWatch also allows you to create alarms based on pre-defined thresholds of call counts. These alarms can create notifications through Amazon Simple Notification Service (Amazon SNS) to automatically alert teams responsible for investigating abnormalities.
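
As a quick sketch, the following AWS CLI call lists recent Amazon Location API activity that CloudTrail has recorded in the current Region; geo.amazonaws.com is the event source that corresponds to Amazon Location's geo service prefix.

# List recent Amazon Location API activity recorded by CloudTrail in this
# Region (CloudTrail must be logging the relevant events).
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=geo.amazonaws.com \
    --max-results 20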

Conclusion

At AWS, security is our top priority. Security and compliance are a shared responsibility between AWS and the customer: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud, and the customer is responsible for performing the necessary security configuration of the solutions they build on top of that infrastructure.

In this blog post, you’ve learned the controls and guardrails that Amazon Location provides out of the box to help provide data privacy and data protection to our customers. You also learned about the other mechanisms you can use to enhance your security posture.

Start building your own secure geolocation solutions by following the Amazon Location Developer Guide and learn more about how the service handles security by reading the security topics in the guide.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Location Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Rafael Leandro, Junior

Rafael Leandro, Junior, is a senior global solutions architect who currently focuses on the consumer packaged goods and transportation industries. He helps large global customers on their journeys with AWS.

David Bailey

David Bailey is a senior security consultant who helps AWS customers achieve their cloud security goals. He has a passion for building new technologies and providing mentorship for others.

Commitment to Customer Security

Post Syndicated from Ling Wu original https://blog.cloudflare.com/our-commitment-to-customer-security/

Cloudflare has been hooked on securing customers globally since its inception. Our services protect customer traffic and data as well as our own, and we are continuously improving and expanding those services to respond to the changing threat landscape of the Internet. Proving that commitment is a multi-faceted venture, the Security Team focuses on people, proof, and transparency to ensure every touchpoint with our products and company feels dependable.

People

The Security Team's knowledge is broad and bleeding edge. Working as a security team at a security company means being highly technical, diverse, willing to test any and all products on ourselves, and sharing our knowledge with our local and global communities through industry groups and presenting at conferences worldwide. Connecting with our customers and counterparts through meetups and conferences lets us share problems, learn about upcoming industry trends, and share feedback to make improvements to the customer experience. In addition to running a formally documented, risk-based security program for Cloudflare, team members drive continuous improvement efforts across our Product and Infrastructure teams by reviewing and advising on changes, identifying and treating vulnerabilities, controlling authorization and access to systems and data, encrypting data in transit and at rest, and by detecting and responding to threats and incidents.

Proof

Security claims are all well and good, but how can a customer be sure we are doing what we say we do? We prove it by undergoing several audits a year, demonstrating that our security practices meet industry standards. To date, Cloudflare has regularly assessed and maintained compliance with PCI DSS (as a merchant and a service provider), SOC 2 Type II, ISO 27001, and ISO 27701 standards. No matter where our customers are in the world, they will likely need to rely on at least one of these standards to protect their customers' information. We honor the responsibility of being the backbone of that trust.

As Cloudflare’s customer base continues to grow into more regulated industries with complex and rigorous requirements, we’ve decided to assess our global network against three additional standards this year:

  • FedRAMP, the US Federal Risk and Authorization Management Program, which evaluates our systems and practices against the standard for protection of US agency data in cloud computing environments. Cloudflare is listed on the FedRAMP Marketplace as “In Process” for an agency authorization at a Moderate impact level. We’re in the final steps of concluding our security assessment report from our auditors and on target to receive an authorization to operate in 2022.
  • ISO 27018, which examines our practices to protect personally identifiable information (PII) as a cloud provider. This extension to the ISO 27001 standard ensures that our information security management system (ISMS) manages the risks associated with processing PII. We’ve completed the third-party assessment, and we’re waiting for our certification in the upcoming month.
  • C5, Cloud Computing Compliance Criteria Catalog, introduced by the Federal Office for Information Security (The BSI) in Germany, is a validation against a defined baseline security level for cloud computing. Cloudflare is currently in the process of being assessed against the catalog by third-party auditors. Learn about our journey here.

Transparency

Our commitment to security for our customers and our business means we have to be super transparent. When a security incident is being contained, our response plan calls for us not only to bring in our legal, compliance, and communications teams to determine a notification strategy, but also to start outlining a detailed overview of how we are responding, even if we are still in the process of remediating.

We know firsthand how frustrating it can be when your critical vendors stay silent during a security incident and provide nothing more than a one sentence legal response which fails to reveal how they were impacted by the security vulnerability or incident. Here at Cloudflare, it is in our DNA to be transparent. You can see it with the blogs (Verkada Incident, Log4j) we write and how quickly we show our customers how we’ve responded and what we’re doing to fix the issue.

One of the most frequent questions we get from our customers regarding incidents is whether our third parties were impacted. Supply chain vulnerabilities, like SolarWinds and Log4j, have driven us to create efficiencies, such as automated inquiries sent to all of our critical vendors at once. During the Containment phase of our security incident response process, our third-party risk team quickly identifies the impacted vendors and prioritizes our production and security vendors. Our tooling allows us to trigger inquiries to third parties immediately, and our team is integrated into the incident response process to ensure effective communication. Any information that we receive from our vendors, we share with our Security Compliance forums so that other companies who are also inquiring with their vendors don't have to duplicate their work.

Value

These recurring audits and assessments are not simple website badges. Our Security team doesn't produce evidence only to pass audits; our process includes identifying risks, forming controls and processes to address those risks, continuously operating those processes, evaluating their effectiveness (in the form of internal and external audits and tests), and making improvements to the ISMS based on those evaluations. Some things about our process that set us apart include the following:

  • Many companies do not contact vendors or have this process baked into their incident response procedures. For Log4j, our Vendor Security Team was on calls with the response team and provided regular updates on vendor responses as soon as the incident was identified.
  • Many companies do not proactively communicate to customers like we do. We communicate even when we are not legally required to do so because we feel it’s the right thing to do regardless of the requirement.
  • The tools in this space are also not usually flexible enough to send custom questionnaires out to vendors quickly. We have automation in place to get these out in bulk right away and to tailor questions to the vulnerabilities at hand.

The final step is communicating the resulting picture of our security posture to our customers. Our security certifications and assessment results are available to our customers via download from their Cloudflare Dashboards, or by request to their account team. For the latest information about our certifications and reports, please visit our Trust Hub.

Customers can now request the AWS CyberGRX report for their third-party supplier due diligence

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/customers-can-now-request-the-aws-cybergrx-report-for-their-third-party-supplier-due-diligence/


Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). We are continuously expanding our compliance programs to provide customers with more tools and resources to be able to perform effective due diligence on AWS. We are excited to announce the availability of the AWS CyberGRX report for our customers.

With the increase in adoption of cloud platforms and services across multiple sectors and industries, AWS has become one of the most critical components of customers' third-party ecosystems. Regulated customers, such as those in the financial services sector, are held to higher standards by their regulators and auditors when it comes to exercising effective due diligence on their third parties. Customers are using third-party cyber risk management (TPCRM) platforms such as CyberGRX to better manage risks from their evolving third-party ecosystems and drive operational efficiencies. To help customers in such efforts, AWS has completed a CyberGRX assessment of its security posture. The assessment is performed annually and is validated by independent CyberGRX partners.

The CyberGRX assessment applies a dynamic approach to third-party risk assessment, which is updated in line with changes in the risk level of cloud service providers or as AWS updates its security posture and controls. This approach eliminates outdated static spreadsheets for third-party risk assessments, in which the risk matrices are not updated in near real time. The CyberGRX assessment provides advanced capabilities by integrating AWS responses with analytics, threat intelligence, and sophisticated risk models to provide an in-depth view of the AWS security posture. In addition, AWS customers can use CyberGRX's Framework Mapper feature to map AWS assessment controls and responses to well-known industry standards and frameworks (such as NIST 800-53, the NIST Cybersecurity Framework (CSF), ISO 27001, PCI DSS, and HIPAA), which can significantly reduce customers' third-party supplier due-diligence burden.

The AWS CyberGRX report is available to all customers free of charge. Customers can request access to the report by completing an access request form, available on the AWS CyberGRX page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page, or if you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is the Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across Europe and other regions. During his professional career, he has helped multiple cloud service providers obtain global and regional security certifications. He is passionate about delivering programs that build customers' trust and provide them assurance on cloud security.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto. He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has 12 years of experience in risk management, security assurance, and technology audits.

Streamlining evidence collection with AWS Audit Manager

Post Syndicated from Nicholas Parks original https://aws.amazon.com/blogs/security/streamlining-evidence-collection-with-aws-audit-manager/

In this post, we will show you how to deploy a solution into your Amazon Web Services (AWS) account that enables you to simply attach manual evidence to controls using AWS Audit Manager. Making evidence collection as seamless as possible minimizes audit fatigue and helps you maintain a strong compliance posture.

As an AWS customer, you can use APIs to deliver high-quality software at a rapid pace. If you have compliance-focused teams that rely on manual, ticket-based processes, you might find it difficult to document audit changes as those changes increase in velocity and volume.

As your organization works to meet audit and regulatory obligations, you can save time by incorporating audit compliance processes into a DevOps model. You can use modern services like Audit Manager to make this easier. Audit Manager automates evidence collection and generates reports, which helps reduce manual auditing efforts and enables you to scale your cloud auditing capabilities along with your business.

AWS Audit Manager uses services such as AWS Security Hub, AWS Config, and AWS CloudTrail to automatically collect and organize evidence, such as resource configuration snapshots, user activity, and compliance check results. However, for controls represented in your software or processes without an AWS service-specific metric to gather, you need to manually create and provide documentation as evidence to demonstrate that you have established organizational processes to maintain compliance. The solution in this blog post streamlines these types of activities.
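
For context, attaching manual evidence ultimately relies on an Audit Manager call such as the one sketched below, which the solution in this post wraps behind an HTTPS endpoint; the assessment ID, control set ID, control ID, and S3 path are placeholder values you would look up in your own account.

# Attach a document stored in S3 as manual evidence to a specific control
# (all IDs and the S3 path below are placeholders).
aws auditmanager batch-import-evidence-to-assessment-control \
    --assessment-id "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111" \
    --control-set-id "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222" \
    --control-id "a1b2c3d4-5678-90ab-cdef-EXAMPLE33333" \
    --manual-evidence '[{"s3ResourcePath":"s3://DOC-EXAMPLE-BUCKET/evidence/change-record.pdf"}]'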

Solution architecture

This solution creates an HTTPS API endpoint, which allows integration with other software development lifecycle (SDLC) solutions, IT service management (ITSM) products, and clinical trial management system (CTMS) solutions that capture trial process change amendment documentation (in the case of pharmaceutical companies that use AWS to build robust pharmacovigilance solutions). The endpoint can also be a backend microservice to an application that allows contract research organization (CRO) investigators to add their compliance supporting documentation.

In this solution's current form, you can submit an evidence file payload along with the assessment and control details to the API, and the solution will tie all of the information together for the audit report. This post and solution are directed toward engineering teams who are looking for a way to accelerate evidence collection. To maximize the effectiveness of this solution, your engineering team will also need to collaborate with cross-functional groups, such as audit and business stakeholders, to design a process and service that constructs and sends the message(s) to the API and to scale out usage across the organization.

To download the code for this solution, and the configuration that enables you to set up auto-ingestion of manual evidence, see the aws-audit-manager-manual-evidence-automation GitHub repository.

Architecture overview

In this solution, you use AWS Serverless Application Model (AWS SAM) templates to build the solution and deploy to your AWS account. See Figure 1 for an illustration of the high-level architecture.

Figure 1. The architecture of the AWS Audit Manager automation solution

The SAM template creates resources that support the following workflow:

  1. A client can call an Amazon API Gateway endpoint by sending a payload that includes assessment details and the evidence payload.
  2. An AWS Lambda function implements the API to handle the request.
  3. The Lambda function uploads the evidence to an Amazon Simple Storage Service (Amazon S3) bucket (3a) and uses AWS Key Management Service (AWS KMS) to encrypt the data (3b).
  4. The Lambda function also initializes the AWS Step Functions workflow.
  5. Within the Step Functions workflow, a Standard Workflow calls two Lambda functions. The first looks for a matching control within an assessment, and the second updates the control within the assessment with the evidence.
  6. When the Step Functions workflow concludes, it sends a notification for success or failure to subscribers of an Amazon Simple Notification Service (Amazon SNS) topic.

Deploy the solution

The project available in the aws-audit-manager-manual-evidence-automation GitHub repository contains source code and supporting files for a serverless application you can deploy with the AWS SAM command line interface (CLI). It includes the following files and folders:

  • src: Code for the application's Lambda implementation of the Step Functions workflow. It also includes a Step Functions definition file.
  • template.yml: A template that defines the application's AWS resources.

Resources for this project are defined in the template.yml file. You can update the template to add AWS resources through the same deployment process that updates your application code.

Prerequisites

This solution assumes the following:

  1. AWS Audit Manager is enabled.
  2. You have already created an assessment in AWS Audit Manager.
  3. You have the necessary tools to use the AWS SAM CLI (see details in the table that follows).

For more information about setting up Audit Manager and selecting a framework, see Getting started with Audit Manager in the blog post AWS Audit Manager Simplifies Audit Preparation.

The AWS SAM CLI is an extension of the AWS CLI that adds functionality for building and testing Lambda applications. The AWS SAM CLI uses Docker to run your functions in an Amazon Linux environment that matches Lambda. It can also emulate your application’s build environment and API.

To use the AWS SAM CLI, you need the following tools:

  • AWS SAM CLI: Install the AWS SAM CLI
  • Node.js: Install Node.js 14, including the npm package management tool
  • Docker: Install Docker Community Edition

To deploy the solution

  1. Open your terminal and use the following command to create a folder to clone the project into, then navigate to that folder. Be sure to replace <FolderName> with your own value.

    mkdir Desktop/<FolderName> && cd $_

  2. Clone the project into the folder you just created by using the following command.

    git clone https://github.com/aws-samples/aws-audit-manager-manual-evidence-automation.git

  3. Navigate into the newly created project folder by using the following command.

    cd aws-audit-manager-manual-evidence-automation

  4. In the AWS SAM shell, use the following command to build the source of your application.

    sam build

  5. In the AWS SAM shell, use the following command to package and deploy your application to AWS. Be sure to replace <DOC-EXAMPLE-BUCKET> with your own unique S3 bucket name.

    sam deploy --guided --parameter-overrides paramBucketName=<DOC-EXAMPLE-BUCKET>

  6. When prompted, enter the AWS Region where AWS Audit Manager was configured. For the rest of the prompts, leave the default values.
  7. To activate the IAM authentication feature for API Gateway, override the default value by supplying the following parameter value.

    paramUseIAMwithGateway=AWS_IAM

To test the deployed solution

After you deploy the solution, run an invocation like the one below for an assessment (using curl). Be sure to replace <YOURAPIENDPOINT> and <AWS REGION> with your own values.

curl --location --request POST \
'https://<YOURAPIENDPOINT>.execute-api.<AWS REGION>.amazonaws.com/Prod' \
--header 'x-api-key: ' \
--form 'Payload=@"<PATH TO FILE>"' \
--form 'AssessmentName="GxP21cfr11"' \
--form 'ControlSetName="General requirements"' \
--form 'ControlIdName="11.100(a)"'

Check to see that your file is correctly attached to the control for your assessment.
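
If you prefer to verify from the command line rather than the console, one option (assuming the AWS CLI is configured for the same Region) is to look up the assessment ID by name and then open that assessment in Audit Manager to confirm the evidence appears under the control.

# Look up the ID of the GxP21cfr11 assessment used in this example.
aws auditmanager list-assessments \
    --query "assessmentMetadata[?name=='GxP21cfr11'].id" --output text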

Form-data interface parameters

The API implements a form-data interface that expects four parameters:

  1. AssessmentName: The name for the assessment in Audit Manager. In this example, the AssessmentName is GxP21cfr11.
  2. ControlSetName: The display name for a control set within an assessment. In this example, the ControlSetName is General requirements.
  3. ControlIdName: A particular control within a control set. In this example, the ControlIdName is 11.100(a).
  4. Payload: The file representing the evidence to be uploaded.

As a refresher of Audit Manager concepts, evidence is collected for a particular control. Controls are grouped into control sets. Control sets can be grouped into a particular framework. The assessment is considered an implementation, or an instance, of the framework. For more information, see AWS Audit Manager concepts and terminology.

To clean up the deployed solution

To clean up the solution, use the following commands to delete the AWS CloudFormation stack and your S3 bucket. Be sure to replace <YourStackId> and <DOC-EXAMPLE-BUCKET> with your own values.

aws cloudformation delete-stack --stack-name <YourStackId>
aws s3 rb s3://<DOC-EXAMPLE-BUCKET> --force

Conclusion

This solution provides a way to improve coordination between your software delivery organization and compliance professionals, so your organization can continuously deliver new updates without overwhelming your security professionals with manual audit review tasks.

Next steps

There are various ways to extend this solution.

  1. Update the API Lambda implementation to be a webhook for your favorite software development lifecycle (SDLC) or IT service management (ITSM) solution.
  2. Modify the steps within the Step Functions state machine to more closely match your unique compliance processes.
  3. Use AWS CodePipeline to start Step Functions state machines natively, or integrate a variation of this solution with any continuous compliance workflow that you have.

Learn more about AWS Audit Manager, DevOps, and AWS for Health, and start building!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Nicholas Parks

Nicholas has been using AWS since 2010 across various enterprise verticals including healthcare, life sciences, financial, retail, and telecommunications. Nicholas focuses on modernizations in pursuit of new revenue as well as application migrations. He specializes in Lean, DevOps cultural change, and Continuous Delivery.

Brian Tang

Brian Tang is an AWS Solutions Architect based out of Boston, MA. He has 10 years of experience helping enterprise customers across a wide range of industries complete digital transformations by migrating business-critical workloads to the cloud. His core interests include DevOps and serverless-based solutions. Outside of work, he loves rock climbing and playing guitar.

Cloudflare re-enforces commitment to security in Germany via BSIG audit

Post Syndicated from Rebecca Rogers original https://blog.cloudflare.com/bsig-audit-and-beyond/

As a large data processing country, Germany is at the forefront of security and privacy regulation in Europe and sets the tone for other countries to follow. Analyzing and meeting the requirements to participate in Germany's cloud security industry requires adherence to international, regional, and country-specific standards. Cloudflare is pleased to announce that we have taken appropriate organizational and technical precautions to prevent disruptions to the availability, integrity, authenticity, and confidentiality of Cloudflare's production systems in accordance with BSI-KritisV, as verified by TÜViT, the auditing body tasked with auditing Cloudflare and providing the evidence to the BSI every two years. Completion of this audit allows us to comply with the NIS Directive within Germany.

Why do cloud companies operating in Germany need to go through a BSI audit?

In 2019, Cloudflare registered as an 'Operator of Essential Services' under the EU Directive on Security of Network and Information Systems (NIS Directive). The NIS Directive is cybersecurity legislation with the goal of enhancing cybersecurity across the EU. Every member state has started to adopt national legislation for the NIS Directive, and the criteria for compliance are set individually by each country. As an 'Operator of Essential Services' in Germany, Cloudflare is regulated by the Federal Office for Information Security (The BSI) and must adhere to the requirements set by The BSI.

What does the audit prove?

This audit includes a thorough review of Cloudflare’s security controls in the following areas:

  • Asset Management
  • Risk Analysis
  • Business Continuity and Disaster Recovery
  • Personnel and Organizational Security
  • Encryption
  • Network Security
  • Security Authentication
  • Incident Response
  • Vendor Security
  • Physical Security

In addition to an audit of Cloudflare’s security controls in the aforementioned areas, TÜViT also conducted a thorough review of Cloudflare’s Information Security Management System (ISMS).

By having these areas audited, German customers can rest assured that Cloudflare respects the requirements put forth by the governing bodies tasked with protecting their data.

Are there any additional German-specific audits on the horizon?

Yes. Cloudflare is currently undergoing an independent third-party audit for the Cloud Computing Compliance Criteria Catalog (C5) certification. The C5 was introduced by BSI Germany in 2016 and reviews operational security within cloud services. Industries that place a high level of importance on C5 include cloud computing and German federal agencies. Learn more here.

What other certifications does Cloudflare hold that demonstrate its dedication to privacy and security?

Different certifications measure different elements of a company’s security or privacy posture. Cloudflare has met the requirements of the following standards:

  • ISO 27001 – Cloudflare has been ISO 27001 certified since 2019. Customers can be assured that Cloudflare has a formal information security management program that adheres to a globally recognized standard.
  • SOC2 Type II – Cloudflare maintains SOC reports that include the security, confidentiality, and availability trust principles.
  • PCI DSS – Cloudflare engages with a QSA (Qualified Security Assessor) on an annual basis to evaluate us as a Level 1 Merchant and a Service Provider.
  • ISO 27701 – Cloudflare was one of the first companies in the industry to achieve ISO 27701 certification as both a data processor and controller. The certification provides assurance to our customers that we have a formal privacy program that is aligned to GDPR.
  • FedRAMP In Process – Cloudflare hit a major milestone by being listed on the FedRAMP Marketplace as ‘In Process’ for receiving an agency authorization at a moderate baseline. Once an Authorization to Operate (ATO) is granted, it will allow agencies and other cloud service providers to leverage our product and services in a public sector capacity.

Pro, Business, and Enterprise customers now have the ability to obtain a copy of Cloudflare’s certifications, reports, and overview through the Cloudflare Dashboard. For the latest information about our certifications and reports, please visit our Trust Hub.

Cloud Security and Compliance: The Ultimate Frenemies of Financial Services

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/02/17/cloud-security-and-compliance-the-ultimate-frenemies-of-financial-services/

Meeting compliance standards as a financial services (finserv) company can be incredibly time-consuming and expensive. Finserv has some of the highest regulatory bars to clear out of any industry — with the exception, perhaps, of healthcare.

That said, these regulations exist for good reason. Even beyond being requirements to operate, meeting compliance standards helps financial services companies gain customer trust, avoid reputational damage, and protect themselves from unnecessary or unprofitable risk.

But as I’m sure just about everyone reading this will agree, meeting regulatory compliance standards does not necessarily mean your organization is fully secure. Often, it’s difficult for government agencies and legislators to keep up with the pace of changing technologies, so regulations tend to lag behind the state of tech. This is often particularly true with emerging technologies, such as hybrid and multi-cloud environments.

On top of all of that, the cost of not having a well-rounded security portfolio is particularly massive here — the average cost of a data breach in financial services is second highest of all industries (only healthcare is more expensive).

Needless to say, financial services organizations have a complex relationship with compliance, particularly as it relates to business-driven cloud migration and innovation.

Here are four ways finserv companies can embrace the love-hate relationship with cloud security and compliance while effectively navigating the need to maintain pace with today’s rapid rate of change.

1. Implement continuous monitoring

Change seems to be the only constant when it comes to multi-cloud environments. And there’s virtually no limit to where these changes can occur — all clouds and regions are fair game. In addition, new compliance regulations are continuously taking shape as cloud security best practices continue to progress. Lawmakers around the globe are tasked with implementing these new and updated rules to protect data in every industry and to address rapidly changing vulnerabilities.

A key component to remain compliant with these rules and regulations is knowing who is responsible for making changes and maintaining compliance. To do this, you’ll need visibility to distinguish between normal changes to infrastructure, applications, or access made by your development team and the changes that occur at the hands of a threat actor.

But the reality is that many data points can make distinguishing threats difficult for security professionals. After all, financial services organizations are an irresistible target for those with malicious intent because there’s a direct line to the dollar value.

Clearly, information security statutes provide necessary oversight. Since assessing data in real time is essential for success in this industry, continuous monitoring can help you stay compliant at every stage of development. This can also be useful during an audit to show your organization has taken a proactive approach to compliance.

2. Automate security processes

Many regulations require organizations to act fast in the wake of a security breach. The European Union (EU)’s General Data Protection Regulation (GDPR), which is setting the standard for privacy and security laws globally, requires supervisory authorities (and at times individuals) to be notified within 72 hours of becoming aware of a data breach. And these correspondences must provide extensive information about the breach, such as how many individuals were impacted, the consequences of the breach, and perhaps most importantly, what the next steps are for containment and mitigation.

While not every jurisdiction has laws as strict as the EU’s GDPR, many countries are using GDPR as a baseline for their own guidance on how they hold organizations responsible and accountable for protecting consumer information. Industry regulations and cloud governance frameworks, such as PCI DSS, SOC 2, ISO 27001, and the Gramm-Leach-Bliley Act, are just a few of the many standards with which finserv organizations need to ensure compliance. Organizations that do business globally need to be aware not only of these guidelines but also of how they impact the way they do business. For example, if a consumer lives in a country protected by GDPR, their data is protected by GDPR guidelines, even if the organization doesn’t operate directly in that country.

The best-case scenario is to catch misconfigurations before they go live and cause a breach, so you can safeguard customer data and avoid going through the lengthy disclosure process and the ensuing loss of customer trust. That’s exactly what automation helps you do.

Relying on manual efforts from your team to ensure that owners of noncompliant resources are notified and remediation takes place can be a time-consuming and involved process. Plus, it’s too easy for potential threats to fall between the cracks. By automating continuous auditing of resources, misconfiguration notification, and remediation, organizations can address noncompliance before issues escalate.
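As a concrete illustration (a minimal sketch, not a description of any particular product), the following Python script uses boto3 to flag S3 buckets that lack a full public access block and to notify resource owners through an SNS topic; the topic ARN is a hypothetical placeholder.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:compliance-alerts"  # hypothetical topic

def find_noncompliant_buckets():
    noncompliant = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                noncompliant.append(name)
        except ClientError as err:
            # A bucket with no public access block configuration at all is also noncompliant.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                noncompliant.append(name)
            else:
                raise
    return noncompliant

buckets = find_noncompliant_buckets()
if buckets:
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="S3 public access review needed",
        Message="Buckets without full public access blocks: " + ", ".join(buckets),
    )

Run on a schedule or in a pipeline, a check like this turns the continuous-auditing idea above into a repeatable, low-effort control.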

3. Improve organizational culture by sharing context

Large teams that leverage multiple platforms can commonly experience information breakdowns. After all, not everyone can understand security jargon. When misunderstandings occur, it can often lead to an unintentional lapse in compliance among employees. Implementing a simplified reporting structure can help security professionals communicate more effectively with resource owners and other immediate stakeholders. Isolated data points from multiple threat alarms can make it confusing and time-consuming for cloud resource owners to understand what happened and what to do next.

Finding a platform that empowers product and engineering teams to take responsibility for their own security, while also providing thorough context about the violation and the necessary remediation steps is essential. This can help set a standard by challenging teams to measure security compliance daily, while minimizing a lot of the friction and guesswork that comes from shifting security earlier in the development lifecycle.

Not only does this help with compliance legalities, but showing these ongoing, team-wide security compliance checks can also satisfy squeamish boards. Gartner predicts that security concerns will continue to be a top priority for board members in the wake of sensational security breaches that are occurring with startling regularity. Ransomware, for example, has gone mainstream, and boards have taken notice. Cybersecurity is seen as a potentially major vulnerability, which means the expectations placed on CISOs are mounting.

Though a lot of focus goes into updating frameworks and systems, corporate culture is the third piece of a powerful security strategy. It must not be overlooked.

4. Gain greater visibility

Robust, multi-cloud environments can make visibility challenging. Enterprises need to govern their clouds using Identity and Access Management (IAM) and adopt a least-privileged access security model across cloud and container environments. But that’s not enough. They also need a strategy that enables them to see vulnerabilities across multiple environments and devices. This is especially important as more insiders gain access to the cloud who also have the ability to make changes and add assets — and do all of this at a startling pace.

This visibility is a significant part of remaining compliant because rapid changes can have unintended results that can be missed without an overarching view of the cloud environment. What might look like harmless or anonymized data could still cause privacy and compliance concerns. For example, knowing simply gender, zip code, and birth date is enough information to identify 87% of Americans. To protect consumers, legislation such as the California Consumer Privacy Act (CCPA) stipulates that toxic data combinations, or data that can be viewed as a whole to reveal personal identities, must be avoided.

In order to remain compliant, organizations must have a system in place to spot toxic data combinations that could run afoul of regulators. This is especially important as data-sharing agreements become more commonplace.
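To make the quasi-identifier point concrete, here is a small, hypothetical Python check for toxic column combinations; the deny-list and column names are illustrative only.

# Deny-list of column combinations that together can re-identify individuals.
TOXIC_COMBINATIONS = [
    {"gender", "zip_code", "birth_date"},   # the example cited above
    {"full_name", "account_number"},        # hypothetical additional rule
]

def toxic_combinations_present(columns):
    """Return the deny-listed combinations fully contained in the given column set."""
    present = {c.lower() for c in columns}
    return [combo for combo in TOXIC_COMBINATIONS if combo <= present]

# A data-sharing export that looks harmless column by column:
export_columns = ["gender", "zip_code", "birth_date", "purchase_total"]
for combo in toxic_combinations_present(export_columns):
    print(f"Review before sharing: columns {sorted(combo)} can re-identify individuals")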

Next steps: Embracing the complex relationship

Finserv organizations must embrace the complex relationship with cloud security and compliance. It is, realistically, the only way to survive and thrive in a world where the cloud is the go-to method of innovation.

Taking steps for improvement in each of the four areas outlined above can feel overwhelming, so we suggest getting started by focusing on these three key actions:

  • Innovate quickly. Innovation is crucial in today’s finserv landscape. Organizations in financial services are competing for attention, which requires continuous digital transformation. The cloud allows these innovations to happen fast, but CISOs must ensure a secure environment for advancements to effectively take place. Is your organization striking the right balance between innovation and safety?
  • Automate aggressively. There are too many data points in today’s multi-cloud environments for security teams to track successfully without automation. Ongoing hygiene practices and internal audits are made possible using automation best practices. Do you have the right controls in place to launch an automation strategy that supports — and enhances — your security processes?
  • Transform culture. Never forget that people are at the center of your security and compliance strategy. Improving communication, education and consistency across teams can upgrade your organization’s compliance strategy. And remember: Your compliance strategy will be under increased scrutiny from executives and boards in the coming years. Does your team understand the “why” behind the security best practices you’re asking them to support?

Let’s navigate the future of cloud security for finserv together. Learn more here.

Additional reading:

AWS User Guide to Financial Services Regulations and Guidelines in Switzerland and FINMA workbooks publications

Post Syndicated from Margo Cronin original https://aws.amazon.com/blogs/security/aws-user-guide-to-financial-services-regulations-and-guidelines-in-switzerland-and-finma/

AWS is pleased to announce the publication of the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland whitepaper and workbooks.

This guide refers to certain rules applicable to financial institutions in Switzerland, including banks, insurance companies, stock exchanges, securities dealers, portfolio managers, trustees and other financial entities which are overseen (directly or indirectly) by the Swiss Financial Market Supervisory Authority (FINMA).

Amongst other topics, this guide covers requirements created by the following regulations and publications of interest to Swiss financial institutions:

  • Federal Laws – including Article 47 of the Swiss Banking Act (BA). Banks and Savings Banks are overseen by FINMA and governed by the BA (Bundesgesetz über die Banken und Sparkassen, Bankengesetz, BankG). Article 47 BA holds relevance in the context of outsourcing.
  • Cloud Guidelines for Swiss financial institutions produced by the Swiss Bankers Association (Schweizerische Bankiervereinigung, SBVg).
  • Controls outlined by FINMA, Switzerland’s independent regulator of financial markets, that may be applicable to Swiss banks and insurers in the context of outsourcing arrangements to the cloud.

In combination with the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland whitepaper, customers can use the detailed AWS FINMA workbooks and ISAE 3000 report available from AWS Artifact.

The five core FINMA circulars are intended to assist Swiss-regulated financial institutions in understanding approaches to due diligence, third-party management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The AWS FINMA workbooks and ISAE 3000 report scope covers, in detail, requirements of the following FINMA circulars:

  • 2018/03 Outsourcing – banks and insurers (04.11.2020)
  • 2008/21 Operational Risks – Banks – Principle 4 Technology Infrastructure (31.10.2019)
  • 2008/21 Operational Risks – Banks – Appendix 3 Handling of electronic Client Identifying Data (31.10.2019)
  • 2013/03 Auditing – Information Technology (04.11.2020)
  • 2008/10 Self-regulation as a minimum standard – Minimum Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013)

Customers can use the detailed FINMA workbooks, which include detailed control mappings for each FINMA control, covering both the AWS control activities and the Customer User Entity Controls. Where applicable, under the AWS Shared Responsibility Model, these workbooks provide industry standard practices, incorporating AWS Well-Architected, to assist Swiss customers in their own preparation for FINMA circular alignment.

This whitepaper follows the issuance of the second Swiss Financial Market Supervisory Authority (FINMA) ISAE 3000 Type 2 attestation report. The latest report covers the period from October 1, 2020 to September 30, 2021, with a total of 141 AWS services and 23 global AWS Regions included in the scope. Customers can download the report from AWS Artifact. A full list of certified services and Regions is presented within the published FINMA report.

As always, AWS is committed to bringing new services into the scope of our FINMA program in the future based on customers’ architectural and regulatory needs. Please reach out to your AWS account team if you have any questions or feedback. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Margo Cronin

Margo is a Principal Security Specialist at Amazon Web Services based in Zurich, Switzerland. She spends her days working with customers, from startups to the largest of enterprises, helping them build new capabilities and accelerating their cloud journey. She has a strong focus on security, helping customers improve their security, risk, and compliance in the cloud.

C5 Type 2 attestation report now available with 141 services in scope

Post Syndicated from Mercy Kanengoni original https://aws.amazon.com/blogs/security/c5-type-2-attestation-report-now-available-with-141-services-in-scope/

Amazon Web Services (AWS) is pleased to announce the issuance of the new Cloud Computing Compliance Controls Catalogue (C5) Type 2 attestation report. We added 18 additional services and service features to the scope of the 2021 report.

Germany’s national cybersecurity authority, Bundesamt für Sicherheit in der Informationstechnik (BSI), established C5 to define a reference standard for German cloud security requirements. The C5 Type 2 report covers the time period from October 1, 2020, through September 30, 2021. It was issued by an independent third-party attestation organization, and assesses the design and the operational effectiveness of AWS’s controls against the new version C5:2020’s basic and additional criteria.

Customers in Germany and other European countries can use AWS’s attestation report to confirm that AWS meets the security requirements of the C5:2020 framework, and to review the details of the tested controls. This attestation demonstrates our commitment to meet and exceed the security expectations for cloud service providers set by the BSI.

AWS has added the following 18 services and service features to the new C5 scope:

You can see a current list of the services in scope for C5 on the AWS Services in Scope by Compliance Program page.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about the C5 report.

The C5 report and Continuing Operations Letter is available to AWS customers through AWS Artifact. For more information, see Cloud Computing Compliance Controls Catalogue (C5).

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security Hub forum. To start your 30-day free trial of Security Hub, visit AWS Security Hub.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mercy Kanengoni

Mercy is a Security Audit Program Manager at AWS based in Manchester, UK. She leads security audits across Europe, and she has previously worked in security assurance and technology risk management.

Author

Karthik Amrutesh

Karthik is a Senior Manager, Security Assurance at AWS based in New York, U.S. His team is responsible for audits, attestations, certifications, and assessments globally. Karthik has previously worked in risk management, security assurance, and technology audits for the past 18 years.

How to use tokenization to improve data security and reduce audit scope

Post Syndicated from Tim Winston original https://aws.amazon.com/blogs/security/how-to-use-tokenization-to-improve-data-security-and-reduce-audit-scope/

Tokenization of sensitive data elements is a hot topic, but you may not know what to tokenize, or even how to determine if tokenization is right for your organization’s business needs. Industries subject to financial, data security, regulatory, or privacy compliance standards are increasingly looking for tokenization solutions to minimize distribution of sensitive data, reduce risk of exposure, improve security posture, and alleviate compliance obligations. This post provides guidance to determine your requirements for tokenization, with an emphasis on the compliance lens given our experience as PCI Qualified Security Assessors (PCI QSA).

What is tokenization?

Tokenization is the process of replacing actual sensitive data elements with non-sensitive data elements that have no exploitable value for data security purposes. Security-sensitive applications use tokenization to replace sensitive data, such as personally identifiable information (PII) or protected health information (PHI), with tokens to reduce security risks.

De-tokenization returns the original data element for a provided token. Applications may require access to the original data, or an element of the original data, for decisions, analysis, or personalized messaging. To minimize the need to de-tokenize data and to reduce security exposure, tokens can retain attributes of the original data to enable processing and analysis using token values instead of the original data. Common characteristics tokens may retain from the original data are:

Format attributes

  • Length for compatibility with storage and reports of applications written for the original data
  • Character set for compatibility with display and data validation of existing applications
  • Preserved character positions such as first 6 and last 4 for credit card PAN

Analytics attributes

  • Mapping consistency where the same data always results in the same token
  • Sort order

Retaining functional attributes in tokens must be implemented in ways that do not defeat the security of the tokenization process. Using attribute preservation functions can possibly reduce the security of a specific tokenization implementation. Limiting the scope and access to tokens addresses limitations introduced when using attribute retention.
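A minimal sketch of what deterministic, attribute-retaining tokenization can look like for a card PAN is shown below; the HMAC key and in-memory vault are placeholders for a protected key store and token database, and this is not a standards-compliant format-preserving encryption scheme.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-protected-key"   # placeholder; keep real keys in a managed key store
token_vault = {}                               # token -> original value; stands in for a protected token database

def tokenize_pan(pan: str) -> str:
    digest = hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()
    # Map the digest to 12 digits and keep the last 4 of the PAN, so length and character set
    # match the original and existing reports keep working (mapping consistency comes from the HMAC).
    token = str(int(digest, 16) % 10**12).zfill(12) + pan[-4:]
    token_vault[token] = pan
    return token

def detokenize(token: str) -> str:
    # Access to this function should be tightly restricted, logged, and audited.
    return token_vault[token]

token = tokenize_pan("4111111111111111")
print(token, token.endswith("1111"), detokenize(token) == "4111111111111111")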

Why tokenize? Common use cases

I need to reduce my compliance scope

Tokens are generally not subject to compliance requirements if there is sufficient separation of the tokenization implementation and the applications using the tokens. Encrypted sensitive data may not reduce compliance obligations or scope. Industry regulatory standards such as PCI DSS 3.2.1 still consider systems that store, process, or transmit encrypted cardholder data to be in scope for assessment, whereas tokenized data may remove those systems from assessment scope. A common use case for PCI DSS compliance is replacing the PAN with tokens in data sent to a service provider, which keeps the service provider from being subject to PCI DSS.

I need to restrict sensitive data to only those with a “need-to-know”

Tokenization can be used to add a layer of explicit access controls to de-tokenization of individual data items, which can be used to implement and demonstrate least-privileged access to sensitive data. For instances where data may be co-mingled in a common repository such as a data lake, tokenization can help ensure that only those with the appropriate access can perform the de-tokenization process and reveal sensitive data.

I need to avoid sharing sensitive data with my service providers

Replacing sensitive data with tokens before providing it to service providers who have no ability to de-tokenize that data can eliminate the risk of having sensitive data within the service providers’ control, and avoid having compliance requirements apply to their environments. This is common in the payments industry, where a processor provides tokenization services to merchants: the processor tokenizes the cardholder data and returns a token the merchant can use to complete card purchase transactions.

I need to simplify data lake security and compliance

A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale, to be used later for not-yet-determined analysis. Having multiple sources and data stored in multiple structured and unstructured formats creates complications for demonstrating data protection controls for regulatory compliance. Ideally, sensitive data should not be ingested at all; however, that is not always feasible. Where ingestion of such data is necessary, tokenization at each data source can keep compliance-subject data out of data lakes and help avoid compliance implications. Using tokens that retain data attributes, such as data-to-token consistency (idempotence), can support many of the analytical capabilities that make it useful to store data in the data lake.

I want to allow sensitive data to be used for other purposes, such as analytics

Your organization may want to perform analytics on the sensitive data for other business purposes, such as marketing metrics, and reporting. By tokenizing the data, you can minimize the locations where sensitive data is allowed, and provide tokens to users and applications needing to conduct data analysis. This allows numerous applications and processes to access the token data and maintain security of the original sensitive data.

I want to use tokenization for threat mitigation

Using tokenization can help you mitigate threats identified in your workload threat model, depending on where and how tokenization is implemented. At the point where the sensitive data is tokenized, the sensitive data element is replaced with a non-sensitive equivalent throughout the data lifecycle, and across the data flow. Some important questions to ask are:

  • What are the in-scope compliance, regulatory, privacy, or security requirements for the data that will be tokenized?
  • When does the sensitive data need to be tokenized in order to meet security and scope reduction objectives?
  • What attack vector is being addressed for the sensitive data by tokenizing it?
  • Where is the tokenized data being hosted? Is it in a trusted environment or an untrusted environment?

For additional information on threat modeling, see the AWS security blog post How to approach threat modeling.

Tokenization or encryption consideration

Tokens can provide the ability to retain processing value of the data while still managing the data exposure risk and compliance scope. Encryption is the foundational mechanism for providing data confidentiality.

Encryption rarely results in ciphertext with a format similar to the original data, and may prevent data analysis or require consuming applications to adapt.

Your decision to use tokenization instead of encryption should be based on the following:

  • Reduction of compliance scope: As discussed above, by properly utilizing tokenization to obfuscate sensitive data, you may be able to reduce the scope of certain framework assessments such as PCI DSS 3.2.1.
  • Format attributes: Used for compatibility with existing software and processes.
  • Analytics attributes: Used to support planned data analysis and reporting.
  • Elimination of encryption key management: A tokenization solution has one essential API (create token) and one optional API (retrieve value from token). Managing access controls can be simpler than some non-AWS native general purpose cryptographic key use policies. In addition, the compromise of the encryption key compromises all data encrypted by that key, both past and future. The compromise of the token database compromises only existing tokens.

Where encryption may make more sense

Although scope reduction, data analytics, threat mitigation, and data masking for the protection of sensitive data make very powerful arguments for tokenization, we acknowledge there may be instances where encryption is the more appropriate solution. Ask yourself these questions to gain better clarity on which solution is right for your company’s use case.

  • Scalability: If you require a solution that scales to large data volumes, and can leverage encryption solutions that require minimal key management overhead, such as AWS Key Management Service (AWS KMS), then encryption may be right for you (see the sketch after this list).
  • Data format: If you need to secure data that is unstructured, then encryption may be the better option, given the flexibility of encryption at various layers and formats.
  • Data sharing with 3rd parties: If you need to share sensitive data in its original format and value with a 3rd party, then encryption may be the appropriate solution to minimize external access to your token vault for de-tokenization processes.
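As referenced in the Scalability item above, here is a minimal sketch of the encryption path using AWS KMS through boto3; the key alias is a hypothetical placeholder, and because direct KMS encryption is limited to small payloads (about 4 KB), larger data typically uses envelope encryption with data keys.

import boto3

kms = boto3.client("kms")
KEY_ID = "alias/sensitive-data"   # hypothetical key alias

def encrypt(plaintext: bytes) -> bytes:
    # Suitable for small payloads; use generate_data_key for envelope encryption of larger data.
    return kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext)["CiphertextBlob"]

def decrypt(ciphertext: bytes) -> bytes:
    # KMS identifies the key from metadata embedded in the ciphertext blob.
    return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

blob = encrypt(b"data that must keep its original value when shared with a third party")
print(decrypt(blob))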

What type of tokenization solution is right for your business?

When trying to decide which tokenization solution to use, your organization should first define your business requirements and use cases.

  1. What are your own specific use cases for tokenized data, and what is your business goal? Identifying which use cases apply to your business and what the end state should be is important when determining the correct solution for your needs.
  2. What type of data does your organization want to tokenize? Understanding what data elements you want to tokenize, and what that tokenized data will be used for may impact your decision about which type of solution to use.
  3. Do the tokens need to be deterministic, the same data always producing the same token? Knowing how the data will be ingested or used by other applications and processes may rule out certain tokenization solutions.
  4. Will tokens be used internally only, or will the tokens be shared across other business units and applications? Identifying a need for shared tokens may increase the risk of token exposure and, therefore, impact your decisions about which tokenization solution to use.
  5. How long does a token need to be valid? You will need to identify a solution that can meet your use cases, internal security policies, and regulatory framework requirements.

Choosing between self-managed tokenization or tokenization as a service

Do you want to manage the tokenization within your organization, or use Tokenization as a Service (TaaS) offered by a third-party service provider? Some advantages to managing the tokenization solution with your company employees and resources are the ability to direct and prioritize the work needed to implement and maintain the solution, customizing the solution to the application’s exact needs, and building the subject matter expertise to remove a dependency on a third party. The primary advantages of a TaaS solution are that it is already complete, and the security of both tokenization and access controls are well tested. Additionally, TaaS inherently demonstrates separation of duties, because privileged access to the tokenization environment is owned by the tokenization provider.

Choosing a reversible tokenization solution

Do you have a business need to retrieve the original data from the token value? Reversible tokens can be valuable to avoid sharing sensitive data with internal or third-party service providers in payments and other financial services. Because the service providers are passed only tokens, they can avoid accepting additional security risk and compliance scope. If your company implements or allows de-tokenization, you will need to be able to demonstrate strict controls on the management and use of de-tokenization privilege. Eliminating the implementation of de-tokenization is the clearest way to demonstrate that downstream applications cannot have sensitive data. Given the security and compliance risks of converting tokenized data back into its original data format, this process should be highly monitored, and you should have appropriate alerting in place to detect each time this activity is performed.
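One lightweight way to make de-tokenization observable, shown here as a hedged sketch rather than a prescribed design, is to emit a custom CloudWatch metric for every de-tokenization call and alarm on unusual volumes; the namespace and dimension are hypothetical.

import boto3

cloudwatch = boto3.client("cloudwatch")

def record_detokenization(caller: str) -> None:
    # Emit one data point per de-tokenization call, tagged with the calling principal.
    cloudwatch.put_metric_data(
        Namespace="TokenVault",
        MetricData=[{
            "MetricName": "DetokenizeCalls",
            "Dimensions": [{"Name": "Caller", "Value": caller}],
            "Value": 1,
            "Unit": "Count",
        }],
    )

A CloudWatch alarm on DetokenizeCalls can then notify the security team when usage spikes beyond an expected baseline.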

Operational considerations when deciding on a tokenization solution

While operational considerations are outside the scope of this post, they are important factors for choosing a solution. Throughput, latency, deployment architecture, resiliency, batch capability, and multi-regional support can impact the tokenization solution of choice. Integration mechanisms with identity and access control and logging architectures, for example, are important for compliance controls and evidence creation.

No matter which deployment model you choose, the tokenization solution needs to meet security standards, similar to encryption standards, and must prevent determining what the original data is from the token values.

Conclusion

Using tokenization solutions to replace sensitive data offers many security and compliance benefits. These benefits include lowered security risk and smaller audit scope, resulting in lower compliance costs and a reduction in regulatory data handling requirements.

Your company may want to use sensitive data in new and innovative ways, such as developing personalized offerings that use predictive analysis and consumer usage trends and patterns, fraud monitoring and minimizing financial risk based on suspicious activity analysis, or developing business intelligence to improve strategic planning and business performance. If you implement a tokenization solution, your organization can alleviate some of the regulatory burden of protecting sensitive data while implementing solutions that use obfuscated data for analytics.

On the other hand, tokenization may also add complexity to your systems and applications, as well as adding additional costs to maintain those systems and applications. If you use a third-party tokenization solution, there is a possibility of being locked into that service provider due to the specific token schema they may use, and switching between providers may be costly. It can also be challenging to integrate tokenization into all applications that use the subject data.

In this post, we have described some considerations to help you determine whether tokenization is right for you, what to consider when deciding which type of tokenization solution to use, and the benefits, disadvantages, and a comparison of tokenization and encryption. When choosing a tokenization solution, it’s important for you to identify and understand all of your organizational requirements. This post is intended to generate questions your organization should answer to make the right decisions concerning tokenization.

You have many options available to tokenize your AWS workloads. After your organization has determined the type of tokenization solution to implement based on your own business requirements, explore the tokenization solution options available in AWS Marketplace. You can also build your own solution using AWS guides and blog posts. For further reading, see this blog post: Building a serverless tokenization solution to mask sensitive data.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Security Assurance Services or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Tim Winston

Tim is a Senior Assurance Consultant with AWS Security Assurance Services. He leverages more than 20 years’ experience as a security consultant and assessor to provide AWS customers with guidance on payment security and compliance. He is a co-author of the “Payment Card Industry Data Security Standard (PCI DSS) 3.2.1 on AWS”.

Author

Kristine Harper

Kristine is a Senior Assurance Consultant and PCI DSS Qualified Security Assessor (QSA) with AWS Security Assurance Services. Her professional background includes security and compliance consulting with large fintech enterprises and government entities. In her free time, Kristine enjoys traveling, outdoor activities, spending time with family, and spoiling her pets.

Author

Michael Guzman

Michael is an Assurance Consultant with AWS Security Assurance Services. Michael is a PCI QSA and HITRUST CCSFP, along with holding several AWS certifications. His background is in Financial Services IT Operations and Administration, with over 20 years of experience in that industry. In his spare time, Michael enjoys spending time with his family, continuing to improve his golf skills, and perfecting his Tri-Tip recipe.

Fall 2021 PCI DSS report now available with 7 services added to compliance scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2021-pci-dss-report-now-available-with-7-services-added-to-compliance-scope/

We’re continuing to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that seven new services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. These new services provide our customers with more options to process and store their payment card data and to architect their cardholder data environment (CDE) securely in AWS.

You can see the full list of services on our Services in Scope by Compliance program page. The seven new services are:

The Asia-Pacific (Jakarta) Region was newly added to scope, and assessed as PCI compliant as part of the Fall 2021 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) that shows AWS PCI compliance status is available through AWS Artifact.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

Continuous compliance monitoring using custom audit controls and frameworks with AWS Audit Manager

Post Syndicated from Deenadayaalan Thirugnanasambandam original https://aws.amazon.com/blogs/security/continuous-compliance-monitoring-using-custom-audit-controls-and-frameworks-with-aws-audit-manager/

For most customers today, security compliance auditing can be a very cumbersome and costly process. This activity within a security program often comes with a dependency on third party audit firms and robust security teams, to periodically assess risk and raise compliance gaps aligned with applicable industry requirements. Due to the nature of how audits are now performed, many corporate IT environments are left exposed to threats until the next manual audit is scheduled, performed, and the findings report is presented.

AWS Audit Manager can help you continuously audit your AWS usage and simplify how you assess IT risks and compliance gaps aligned with industry regulations and standards. Audit Manager automates evidence collection to reduce the “all hands-on deck” manual effort that often happens for audits, while enabling you to scale your audit capability in the cloud as your business grows. Customized control frameworks help customers evaluate IT environments against their own established assessment baseline, enabling them to discern how aligned they are with a set of compliance requirements tailored to their business needs. Custom controls can be defined to collect evidence from specific data sources, helping rate the IT environment against internally defined audit and compliance requirements. Each piece of evidence collected during the compliance assessment becomes a record that can be used to demonstrate compliance with predefined requirements specified by a control.

In this post, you will learn how to leverage AWS Audit Manager to create a tailored audit framework to continuously evaluate your organization’s AWS infrastructure against the relevant industry compliance requirements your organization needs to adhere to. By implementing this solution, you can simplify yet accelerate the detection of security risks present in your AWS environment, which are relevant to your organization, while providing your teams with the information needed to remedy reported compliance gaps.

Solution overview

This solution utilizes an event-driven architecture to provide agility while reducing manual administration effort.

  • AWS Audit Manager – AWS Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards.
  • AWS Lambda – AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, in response to events such as changes in data, application state, or user actions.
  • Amazon Simple Storage Service (Amazon S3) – Amazon S3 is object storage built to store and retrieve any amount of data from anywhere, offering industry-leading availability, performance, security, and virtually unlimited scalability at very low cost.
  • AWS Cloud Development Kit (AWS CDK) – AWS CDK is a software development framework for provisioning your cloud infrastructure in code through AWS CloudFormation.

Architecture

This solution enables automated controls management using an event-driven architecture with AWS services such as AWS Audit Manager, AWS Lambda, and Amazon S3, integrated with code management services like GitHub and AWS CodeCommit. The controls owner can design, manage, monitor, and roll out custom controls in GitHub with a simple custom controls configuration file, as illustrated in Figure 1. Once the controls configuration file is committed and placed in an Amazon S3 bucket, the resulting event triggers a control pipeline that loads the controls into AWS Audit Manager using a Lambda function.

Figure 1: Solution workflow

Solution workflow overview

  1. The Control owner loads the controls as code (Controls and Framework) into an Amazon S3 bucket.
  2. Uploading the Controls yaml file into the S3 bucket triggers a Lambda function to process the control file.
  3. The Lambda function processes the Controls file, and creates a new control (or updates an existing control) in the Audit Manager.
  4. Uploading the Controls Framework yaml file into the S3 bucket triggers a Lambda function to process the Controls Framework file.
  5. The Lambda function validates the Controls Framework file, and updates the Controls Framework library in Audit Manager

This solution can be extended to create custom frameworks based on the controls, and to run an assessment framework against the controls.
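To make the workflow concrete, here is a minimal sketch (not the repository’s actual code) of the Lambda step that turns an uploaded controls file into an Audit Manager custom control; it assumes PyYAML is packaged with the function and that the file uses the field names shown in the example control later in this post.

import boto3
import yaml   # assumed to be packaged with the Lambda deployment artifact

s3 = boto3.client("s3")
auditmanager = boto3.client("auditmanager")

def handler(event, context):
    # The S3 upload event identifies the bucket and key of the controls file.
    record = event["Records"][0]["s3"]
    body = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])["Body"].read()
    control = yaml.safe_load(body)

    source = control["datasources"]
    auditmanager.create_control(
        name=control["name"],
        description=control["description"],
        testingInformation=control["testingInformation"],
        actionPlanTitle=control["actionPlanTitle"],
        actionPlanInstructions=control["actionPlanInstructions"],
        controlMappingSources=[{
            "sourceName": source["sourceName"],
            "sourceDescription": source["sourceDescription"],
            "sourceSetUpOption": source["sourceSetUpOption"],
            "sourceType": source["sourceType"],
            "sourceKeyword": {
                "keywordInputType": source["sourceKeyword"]["keywordInputType"],
                "keywordValue": source["sourceKeyword"]["keywordValue"],
            },
        }],
        tags=control.get("tags", {}),
    )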

Prerequisite steps

  1. Sign in to your AWS Account
  2. Login to the AWS console and choose the appropriate AWS Region.
  3. In the Search tab, search for AWS Audit Manager
    Figure 2. AWS Audit Manager

  4. Choose Set up AWS Audit Manager.

Keep the default configurations from this page, such as Permissions and Data encryption. When done choose Complete setup.

Before deploying the solution, please ensure that the following software packages and their dependencies are installed on your local machine:

  • Node.js v12 or above – https://nodejs.org/en/
  • AWS CLI version 2 – https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
  • AWS CDK – https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html
  • jq – https://stedolan.github.io/jq/
  • git – https://git-scm.com/
  • AWS CLI configuration – https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html

Solution details

To provision the enterprise control catalog with AWS Audit Manager, start by cloning the sample code from the aws-samples repository on GitHub, followed by running the installation script (included in this repository) with sample controls and framework from your AWS Account.

To clone the sample code from the repository

On your development terminal, git clone the source code of this blog post from the AWS public repository:

git clone git@github.com:aws-samples/enterprise-controls-catalog-via-aws-audit-manager.git

To bootstrap CDK and run the deploy script

The CDK Toolkit Stack will be created by cdk bootstrap and will manage resources necessary to enable deployment of Cloud Applications with AWS CDK.

cdk bootstrap aws://<AWS Account Number>/<Region> # Bootstrap CDK in the specified account and region

cd audit-manager-blog

./deploy.sh

Workflow

Figure 3 illustrates the overall deployment workflow. The deployment script triggers the NPM package manager, and invokes AWS CDK to create necessary infrastructure using AWS CloudFormation. The CloudFormation template offers an easy way to provision and manage lifecycles, by treating infrastructure as code.

Figure 3: Detailed workflow lifecycle

Once the solution is successfully deployed, you can view two custom controls and one custom framework available in AWS Audit Manager. The custom controls use a combination of manual and automated evidence collection, using compliance checks for resource configurations from AWS Config.

To verify the newly created custom data security controls

  1. In the AWS console, go to AWS Audit Manager and select Control library
  2. Choose Custom controls to view the controls DataSecurity-DatainTransit and DataSecurity-DataAtRest
Figure 4. View custom controls

To verify the newly created custom framework

  1. In the AWS console, go to AWS Audit Manager and select Framework library.
  2. Choose Custom frameworks to view the following framework:
Figure 5. Custom frameworks list

You have now successfully created the custom controls and framework using the proposed solution.

Next, you can create your own controls and add to your frameworks using a simple configuration file, and let the implemented solution do the automated provisioning.

To set up error reporting

Before you begin creating your own controls and frameworks, you should complete the error reporting configuration. The solution automatically sets up the error reporting capability using Amazon SNS, a web service that enables sending and receiving notifications from the cloud.

  1. In the AWS Console, go to Amazon SNS > Topics > AuditManagerBlogNotification
  2. Select Create subscription and choose Email as your preferred endpoint to subscribe.
  3. This will trigger an automated email on subscription confirmation. Upon confirmation, you will begin receiving any error notifications by email.

To create your own custom control as code

Follow these steps to create your own controls and frameworks:

  1. Create a new control file named example-control.yaml with contents as shown below. This creates a custom control to check whether all public access to data in Amazon S3 is prohibited:

    name:
      DataSecurity-PublicAccessProhibited

    description:
      Information and records (data) are managed consistent with the organization’s risk strategy to protect the confidentiality, integrity, and availability of information.

    actionPlanTitle:
      All public access block settings are enabled at account level

    actionPlanInstructions:
      Ensure all Amazon S3 resources have public access prohibited

    testingInformation:
      Test attestations – preventive and detective controls for prohibiting public access

    tags:
      ID: PRDS-3
      Subcategory: Public-Access-Prohibited
      Category: Data Security-PRDS
      CIS: CIS17
      COBIT: COBIT 5 APO07-03
      NIST: NIST SP 800-53 Rev 4

    datasources:
      sourceName: Config attestation
      sourceDescription: Config attestation
      sourceSetUpOption: System_Controls_Mapping
      sourceType: AWS_Config
      sourceKeyword:
        keywordInputType: SELECT_FROM_LIST
        keywordValue: S3_ACCOUNT_LEVEL_PUBLIC_ACCESS_BLOCKS

  2. Go to AWS Console > AWS CloudFormation > Stacks. Select AuditManagerBlogStack and choose Outputs.
  3. Make note of the bucketOutput name that starts with auditmanagerblogstack-
  4. Upload the example-control.yaml file into the auditmanagerblogstack- bucket noted in step 3, inside the controls folder (see the upload sketch after this list).
  5. The event-driven architecture is deployed as part of the solution. Uploading the file to the Amazon S3 bucket triggers an automated event to create the new custom control in AWS Audit Manager.
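As referenced in step 4 above, one way to perform the upload that triggers the automation is with boto3 (the AWS CLI’s aws s3 cp works equally well); the bucket name below is a placeholder for your stack’s bucketOutput value.

import boto3

boto3.client("s3").upload_file(
    Filename="example-control.yaml",
    Bucket="auditmanagerblogstack-example",   # replace with the bucketOutput value from step 3
    Key="controls/example-control.yaml",      # the controls/ prefix routes the file to the control-processing Lambda
)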

To validate your new custom control is automatically provisioned in AWS Audit Manager

  1. In the AWS console, go to AWS Audit Manager and select Control library
  2. Choose Custom controls to view the following controls:
Figure 6. Audit Manager custom controls are listed as Custom controls

To create your own custom framework as code

  1. Create a new framework file named example-framework.yaml with contents as shown below:

    name:
      Sample DataSecurity Framework

    description:
      A sample data security framework to prohibit public access to data

    complianceType:
      NIST

    controlSets:
      - name: Prohibit public access
        controls:
          - DataSecurity-PublicAccessProhibited

    tags:
      Tag1: DataSecurity
      Tag2: PublicAccessProhibited

  2. Go to AWS Console > AWS CloudFormation > Stacks. Select AuditManagerBlogStack and choose Outputs.
  3. Make note of the bucketOutput name that starts with auditmanagerblogstack-
  4. Upload the example-framework.yaml file into the bucket noted in step 3, inside the frameworks folder.
  5. The event-driven architecture is deployed as part of the solution. The file upload to Amazon S3 triggers an automated event to create the new custom framework in AWS Audit Manager.

To validate your new custom framework is automatically provisioned in AWS Audit Manager

  1. Go to AWS Audit Manager in the AWS console and select Framework library
  2. Choose Custom frameworks and you should be able to see the newly created framework:
Figure 7. View custom controls created via custom repo

Congratulations, you have successfully created your new custom control and framework using the proposed solution.

Next steps

An Audit Manager assessment is based on a framework, which is a grouping of controls. Using the framework of your choice as a starting point, you can create an assessment that collects evidence for the controls in that framework. In your assessment, you can also define the scope of your audit. This includes specifying which AWS accounts and services you want to collect evidence for. You can create an assessment from a custom framework you build yourself, using steps from the Audit Manager documentation.

Conclusion

This solution provides the ability to dynamically design, develop, and monitor custom controls that can be extended into a standardized enterprise IT controls catalogue for your company. With AWS Audit Manager, you can build compliance controls as code, with the capability to audit your environment on a daily, weekly, or monthly basis. You can use this solution to make assessments with AWS Audit Manager more dynamic and timely, with reduced manual effort. To learn more about the standard frameworks available to assist you, see Supported frameworks in AWS Audit Manager, which provides prebuilt frameworks based on AWS best practices.

Author

Deenadayaalan Thirugnanasambandam

Deenadayaalan is a Solution Architect at Amazon Web Services. He provides prescriptive architectural guidance and consulting that enable and accelerate customers’ adoption of AWS.

Author

Hu Jin

Hu is a Software Development Engineer at AWS. He helps customers build secure and scalable solutions on AWS Cloud to realise business value faster.

Author

Vinodh Shankar

Vinodh is a Sr. Specialist SA at Amazon Web Services. He helps customers with defining their transformation road map, assessing readiness, creating business case and mapping future state business transformation on cloud.

Author

Hafiz Saadullah

Hafiz is a Senior Technical Product Manager with AWS focused on AWS Solutions.

2021 AWS security-focused workshops

Post Syndicated from Temi Adebambo original https://aws.amazon.com/blogs/security/2021-aws-security-focused-workshops/

Every year, Amazon Web Services (AWS) looks to help our customers gain more experience and knowledge of our services through hands-on workshops. In 2021, we unfortunately couldn’t connect with you in person as much as we would have liked, so we wanted to create and share new ways to learn and build on AWS. We built and published several security-focused workshops that help you learn how to use or configure new services and features securely. Workshops are hands-on learning modules designed to teach or introduce practical skills, techniques, or concepts you can use to solve business problems.

In this blog post, we highlight the newest AWS security-focused workshops below. There are also several other workshops that were developed before 2021; you can find them on AWS Workshops, AWS Security Workshops, and AWS Samples. Here’s the list:

Data Protection and Privacy

Workshop Title

Abstract

Data discovery and classification with Amazon Macie

In this workshop, get familiar with Amazon Macie and learn to scan and classify data in your Amazon Simple Storage Service (Amazon S3) buckets. Work with Macie (data classification) and AWS Security Hub (centralized security view) to see how data in your environment is stored, and to understand any changes in S3 bucket policies that may affect your security posture. Learn to create a custom data identifier and to create and scope data discovery and classification jobs in Macie. Finally, use Macie to filter and investigate the results from the scans you create.

Scaling your encryption at rest capabilities with AWS KMS

AWS makes it easy to protect your data with encryption. This hands-on workshop provides an opportunity to dive deep into encryption at rest options with AWS. Learn AWS server-side encryption with AWS Key Management Service (AWS KMS) for services such as Amazon S3, Amazon Elastic Block Store (Amazon EBS), and Amazon Relational Database Service (Amazon RDS). Also, learn best practices for using AWS KMS across multiple accounts and Regions and how to scale while optimizing for performance.

Store, retrieve, and manage sensitive credentials in AWS Secrets Manager

In this workshop, learn how to integrate AWS Secrets Manager in your development platform, backed by serverless applications. Work through a sample application, and use Secrets Manager to retrieve credentials as well as work with attribute-based access control using tags. Also, learn how to monitor the compliance of secrets and implement incident response workflows that will rotate the secret, restore the resource policy, alert the SOC, and deny access to the offender.
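As a minimal illustration of the retrieval pattern this workshop covers, the sketch below reads a JSON credential from Secrets Manager with boto3; the secret name and its fields are hypothetical.

import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id: str = "prod/app/db"):   # hypothetical secret name
    response = secrets.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

# Use the returned values to open a database connection instead of hard-coding credentials.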

Building and operating a Private Certificate Authority on AWS

This workshop covers private certificate management on AWS, employing the concepts of least privilege, separation of duties, monitoring, and automation. Participants learn operational aspects of creating a complete certificate authority (CA) hierarchy, building a simple web application, and issuing private certificates. It also covers how job functions—including CA administrators, application developers, and security administrators—can follow the principle of least privilege to perform various functions associated with certificate management. Finally, learn about IoT certificates, code-signing, and certificate templates to enable all your use cases.

Amazon S3 security and access settings and controls

Amazon S3 provides many security and access settings to help you secure your data, controls that ensure that those settings remain in place, and features to help you audit those settings and controls. This workshop walks you through these Amazon S3 capabilities and scenarios, to help you apply them for different security requirements.

Redact data as needed using Amazon S3 Object Lambda

Amazon S3 Object Lambda works with your existing applications, and allows you to add your own code using AWS Lambda functions to automatically process and transform data from Amazon S3 before returning it to an application. This enables different views of the same object depending on user identity, such as restricting access to confidential information, or disallowing access to personally identifiable information (PII) data. In this workshop, learn how to use Amazon S3 Object Lambda to modify objects during GET requests, so you no longer need to store multiple views of the same document.
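For a sense of what such a transformation can look like, here is a minimal sketch of an S3 Object Lambda handler that redacts email addresses during GET requests; the regex-based redaction is illustrative and not necessarily what the workshop itself uses.

import re
import urllib.request
import boto3

s3 = boto3.client("s3")
EMAIL = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.-]+")

def handler(event, context):
    ctx = event["getObjectContext"]
    # Fetch the original object through the presigned URL that S3 Object Lambda supplies.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()
    redacted = EMAIL.sub(b"[REDACTED]", original)
    # Return the transformed object to the requester.
    s3.write_get_object_response(
        Body=redacted,
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}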

Using AWS Nitro Enclaves to process highly sensitive data

In this hands-on workshop, learn how to use AWS Nitro Enclaves to isolate highly-sensitive data from your users, applications, and third-party libraries on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Explore AWS Nitro Enclaves, discuss common use cases, and build and run your own enclave. During this workshop, learn about enclave isolation, cryptographic attestation, enclave image files, local Vsock communication channels, common debugging scenarios, and the enclave lifecycle.

Ransomware prevention strategies in Amazon S3

Learn how to use the protective, detective and monitoring controls in AWS to protect your data in S3 from ransomware threats. Set up Amazon GuardDuty for S3 and AWS Identity and Access Management (IAM) Access Analyzer, and learn to read and respond to findings and create IAM invariants. Create a tiered storage approach to backup and recovery, and learn to use Amazon S3 Object Lock, versioning, and replication to provide immutable storage and protect against accidental or malicious deletion.
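A minimal sketch of the immutable-storage idea mentioned above is shown below, assuming a hypothetical backup bucket name and a 30-day compliance-mode retention; note that S3 Object Lock can only be enabled when a bucket is created.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "example-backup-vault-111122223333"   # hypothetical bucket name

# Object Lock must be enabled at creation time; this also enables versioning on the bucket.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Apply a default retention so backups cannot be deleted or overwritten for 30 days.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)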

Governance, Risk, and Compliance

Operating securely in a multi-account environment

Operating multiple AWS accounts under an organization is how many users consume AWS Cloud services. In this workshop, learn how to build foundational security monitoring in multi-account environments. Walk through an initial setup of AWS Security Hub for centralized aggregation of findings across your AWS Organizations organization. Additionally, learn how to centralize Amazon GuardDuty findings, Amazon Detective functions, AWS Identity and Access Management (IAM) Access Analyzer findings (if available), AWS Config rule evaluations, and AWS CloudTrail logs into the central security monitoring account (security tools account). Finally, implement a service control policy (SCP) that denies the ability to disable these security controls.

Building remediation workflows to simplify compliance

Automation and simplification are key to managing compliance at scale. Remediation is one of the essential elements of simplifying and managing risk. In this workshop, see how to build a remediation workflow using AWS Config and AWS Systems Manager automation. Learn how this workflow can be deployed at scale and monitored with AWS Security Hub to oversee the entire organization and how to use AWS Audit Manager to easily access evidence of risk management.

Identity and Access Management

Integrating IAM Access Analyzer into a CI/CD pipeline

Want to analyze Identity and Access Management (IAM) policies at scale? Want to help your developers write secure IAM policies? This workshop provides you the hands-on opportunity to run IAM Access Analyzer policy validation on your AWS CloudFormation templates in a continuous integration/continuous deployment (CI/CD) pipeline.
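The check that such a pipeline runs is IAM Access Analyzer policy validation, which can also be invoked directly. A minimal sketch against a local policy file (the file name and output query are illustrative) looks like this:

aws accessanalyzer validate-policy \
    --policy-type IDENTITY_POLICY \
    --policy-document file://iam-policy.json \
    --query 'findings[].{type:findingType,issue:issueCode,detail:findingDetails}'

A pipeline stage can then fail the build when, for example, findings of type ERROR or SECURITY_WARNING are returned.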

Data perimeter workshop

In this workshop, learn how to create a data perimeter by building controls that allow access to data only from expected network locations and by trusted identities. The workshop consists of five modules, each designed to illustrate a different Identity and Access Management (IAM) or network control. Learn where and how to implement the appropriate controls based on different risk scenarios. Discover how to implement these controls as service control policies, identity- and resource-based policies, and Amazon Virtual Private Cloud (Amazon VPC) endpoint policies.
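As one example of the network-path control, a VPC endpoint policy can restrict an S3 endpoint to identities from your own organization. A simplified sketch (the organization ID and endpoint ID are placeholders) follows:

cat > endpoint-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}}
  }]
}
EOF

aws ec2 modify-vpc-endpoint --vpc-endpoint-id vpce-0123456789abcdef0 \
    --policy-document file://endpoint-policy.json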

Network and Infrastructure Security

Build a Zero Trust architecture for service-to-service workloads on AWS

In this workshop, get hands-on experience implementing a Zero Trust architecture for service-to-service workloads on AWS. Learn how to use services such as Amazon API Gateway and Virtual Private Cloud (Amazon VPC) endpoints to integrate network and identity controls while using Amazon GuardDuty, Lambda, and Amazon DynamoDB to take advantage of native service controls. Learn how these services allow you to authorize specific flows between components to reduce lateral network mobility risk and improve the overall security posture of your workload.
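For instance, placing the API behind a private interface endpoint is one of the network controls involved. A rough sketch is shown below (all IDs and the Region are placeholders); an API Gateway resource policy can then require the matching aws:SourceVpce value so that only traffic through this endpoint is authorized:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.execute-api \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled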

Securing deployment of third-party ML models

Enterprise users adopting machine learning (ML) on AWS often look for prescriptive guidance on implementing security best practices, establishing governance, securing their ML models, and meeting compliance standards. Building a repeatable solution provides users with standardization and governance over what gets provisioned in their AWS account. In this workshop, learn steps you can take to secure third-party ML model deployments. We provide cloud infrastructure-as-code templates to automate the setup of a hardened Amazon SageMaker environment. These templates include private networking, VPC endpoints, end-to-end encryption, logging and monitoring, and enhanced governance and access controls through AWS Service Catalog.
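Two of the controls such templates typically enforce, private networking and network isolation for the model container, can be seen in a bare-bones sketch of a SageMaker model definition (the names, image URI, and resource IDs are placeholders):

aws sagemaker create-model \
    --model-name third-party-model \
    --primary-container '{"Image":"111122223333.dkr.ecr.us-east-1.amazonaws.com/vendor-model:1.0","ModelDataUrl":"s3://my-model-artifacts/model.tar.gz"}' \
    --execution-role-arn arn:aws:iam::111122223333:role/sagemaker-execution-role \
    --vpc-config '{"SecurityGroupIds":["sg-0123456789abcdef0"],"Subnets":["subnet-0123456789abcdef0"]}' \
    --enable-network-isolation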

Building Prowler into a QuickSight-powered AWS security dashboard

In this workshop, get hands-on experience with Prowler, AWS Security Hub, and Amazon QuickSight by building a custom security dashboard for the AWS environment. Using a multi-account deployment of Prowler integrated into Security Hub, learn to identify and analyze Prowler findings and integrate QuickSight to visualize the information. Discover how to get the most from QuickSight and Prowler with automatically created datasets.

Threat Detection and Incident Response

Integration, prioritization, and response with AWS Security Hub

This workshop is designed to get you familiar with AWS Security Hub, so you can better understand how to use it in your own AWS environment. This workshop has two sections. The first section demonstrates the features and functions of AWS Security Hub. The second section shows you how to use AWS Security Hub to import findings from different data sources, analyze findings so you can prioritize response work, and implement responses to findings to help improve your security posture.
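For a flavor of the second section, findings can be pulled and filtered programmatically for prioritization. A small sketch that lists new critical findings (the filter values and output query are illustrative) looks like this:

aws securityhub get-findings \
    --filters '{"SeverityLabel":[{"Value":"CRITICAL","Comparison":"EQUALS"}],"WorkflowStatus":[{"Value":"NEW","Comparison":"EQUALS"}]}' \
    --max-items 10 \
    --query 'Findings[].{Id:Id,Title:Title,Severity:Severity.Label}'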

Building an AWS incident response plan using Jupyter notebooks

This workshop guides you through building an incident response plan for your AWS environment using Jupyter notebooks. Walk through an easy-to-follow sample incident, using building blocks as a ready-to-use playbook in a Jupyter notebook. Then, follow simple steps to add additional programmatic and documented steps to your incident response plan.
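A typical programmatic step in such a playbook queries recent activity related to the incident. In a notebook this is usually done with the AWS SDK, but a CLI equivalent of one such step (the event name, time window, and output query are illustrative) looks roughly like this:

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
    --start-time 2021-12-01T00:00:00Z \
    --max-results 20 \
    --query 'Events[].{time:EventTime,user:Username,event:EventName}'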

Scaling threat detection and response on AWS

In this hands-on workshop, learn about several AWS services involved in threat detection and response as you walk through real-world threat scenarios. Learn about the threat detection capabilities of Amazon GuardDuty, Amazon Macie, and AWS Security Hub and the available response options. For each hands-on scenario, review methods to detect and respond to threats using the following services: AWS CloudTrail, Virtual Private Cloud (Amazon VPC) Flow Logs, Amazon CloudWatch Events, AWS Lambda, Amazon Inspector, Amazon GuardDuty, and AWS Security Hub.
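If you want to rehearse a scenario without generating real malicious activity, GuardDuty can produce sample findings that exercise the same detection and response path. A quick sketch (the finding type shown is just one example) follows:

# Find the detector for the current Region
aws guardduty list-detectors --query 'DetectorIds[0]'

# Generate a sample finding to drive the response workflow
aws guardduty create-sample-findings \
    --detector-id <detector-id> \
    --finding-types "UnauthorizedAccess:EC2/SSHBruteForce"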

Building incident response playbooks for AWS

In this workshop, learn how to develop incident response playbooks. Explore the incident response lifecycle, including preparation, detection and analysis, containment, eradication and recovery, and post-incident activity. To get the most out of this workshop, you should have advanced experience with AWS services and responsibilities aligned with incident response frameworks such as NIST SP 800-61 R2.

This list is representative of the security workshops created in 2021 to help customers on their journey in AWS. To find more workshops, go to AWS Workshops and select Security in the top navigation bar, or check out AWS Security Workshops for a subset of workshops curated by AWS Security Specialists. We hope you enjoy these workshops!

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Temi Adebambo

Temi leads the Security and Network Solutions Architecture team at AWS. His team is focused on working with customers on cloud migration and modernization, cybersecurity strategy, architecture best practices, and innovation in the cloud. Before AWS, he spent over 14 years as a consultant, advising CISOs and security leaders.

Comprehensive Cyber Security Framework for Primary (Urban) Cooperative Banks (UCBs)

Post Syndicated from Vikas Purohit original https://aws.amazon.com/blogs/security/comprehensive-cyber-security-framework-for-primary-urban-cooperative-banks/

We are pleased to announce a new Amazon Web Services (AWS) workbook designed to help customers that operate as Primary (Urban) Cooperative Banks (UCBs) in India align with the Reserve Bank of India (RBI) guidance in Comprehensive Cyber Security Framework for Primary (Urban) Cooperative Banks (UCBs) – A Graded Approach.

In addition to RBI’s basic cyber security framework for Primary (Urban) Cooperative Banks (UCBs), RBI issued guidance on its comprehensive cyber security framework, which sets the expectations for Indian Primary UCBs regarding their cyber security frameworks. This guidance divides the framework into four levels, starting with a common level that applies to all UCBs; the remaining levels apply to specific UCBs based on their digital depth and their interconnectedness with the payment systems landscape, as determined by RBI-defined criteria. The guidance aims to increase awareness among Primary UCBs in India of the controls they should consider as they progress on their digital journey.

Security and compliance is a shared responsibility between AWS and the customer. This differentiation of responsibility is commonly referred to as the AWS Shared Responsibility Model, in which AWS is responsible for security of the cloud, and the customer is responsible for their security in the cloud.

The new AWS Comprehensive Cyber Security Framework for Primary (Urban) Cooperative Banks (UCBs) – A Graded Approach workbook helps customers align with the RBI cyber security framework by providing control mappings for the following:

The downloadable AWS RBI Comprehensive Cyber Security Framework for Primary UCBs workbook is available in AWS Artifact, a self-service portal for on-demand access to AWS Compliance Reports, and it contains two embedded formats:

  • Microsoft Excel: Coverage includes AWS responsibility control statements and Well-Architected Framework best practices
  • Dynamic HTML: Coverage is the same as in the Microsoft Excel format, with the added feature that the Well-Architected Framework best practices are mapped to AWS Config managed rules and Amazon GuardDuty findings, where available or applicable.

The AWS RBI Comprehensive Cyber Security Framework for Primary UCBs and AWS RBI Basic Cyber Security Framework for Primary UCBs workbooks are available for download in AWS Artifact. Sign in to AWS Artifact via the AWS Management Console, or learn more at Getting Started with AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Vikas Purohit

Vikas works as a Partner Solution Architect with AISPL, India. He is passionate about helping customers and partners on their cloud journeys, with a particular interest in cloud security, hybrid networking, and migrations.

2022 Planning: Simplifying Complex Cybersecurity Regulations

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/12/09/2022-planning-simplifying-complex-cybersecurity-regulations/

Compliance does not equal security, but it’s also true that a strong cybersecurity program meets many compliance obligations. How can we communicate industry regulatory requirements in a more straightforward way that enhances understanding while saving time and effort? How can we more easily demonstrate that a robust cybersecurity program will typically meet many compliance requirements?

Rapid7’s latest white paper, “Simplifying the Complex: Common Practices Across Cybersecurity Regulations,” is an educational resource aimed at breaking down complicated regulatory text into a set of consistent cybersecurity practices. The paper analyzes 10 major cybersecurity regulations, identifies common practices across the regulations, and provides insight on how to operationalize these practices.

Read the full white paper

Get it here

You can also reserve your spot for the upcoming webinar, “Common Cybersecurity Compliance Requirements.” Register now at our 2022 Planning webinar series page. This talk is designed to help you apply simplification practices across regulations and help your team plan for the year ahead.  

Different regulations, common practices

Cybersecurity regulations are complex. They target a patchwork of industry sectors and are enforced by disparate federal, state, and international government agencies. However, there are patterns: Cybersecurity regulations often require similar baseline security practices, even though the legislation may structure compliance requirements differently.

Identifying these common elements can help regulated entities, regulators, and cybersecurity practitioners communicate how compliance obligations translate to operational practices. For example, an organization’s security leader(s) could use this approach to drive executive support and investment prioritization by demonstrating how a robust security program addresses an array of compliance obligations facing the organization.

This white paper organizes common regulatory requirements into 6 core components of organizational security programs:

  1. Security program: Maintain a comprehensive security program.
  2. Risk assessment: Assess internal and external cybersecurity risks and threats.
  3. Security safeguards: Implement safeguards to control the risks identified in the risk assessment.
  4. Testing and evaluation: Assess the effectiveness of policies, procedures, and safeguards to control risks.
  5. Workforce and personnel: Establish security roles and responsibilities for personnel.
  6. Incident response: Detect, investigate, document, and respond to cybersecurity incidents and events.

The white paper provides additional background on each regulation and shows how these six practices are incorporated into many of them, along with extensive citations for each requirement so that readers can locate the official text directly.

Rapid7 solutions help support compliance

Your organization is different from any other — that’s a fact. You’ll operationalize security practices based on individual risk profile, technology, and structure. Rapid7 helps you approach implementation with the context for each of the cybersecurity practices we outline, including:

  1. Operational overview: How the cybersecurity practice generally operates within an organization’s security program.
  2. Organizational structure: Which teams or functions within an organization typically implement the cybersecurity practice.
  3. Successful approaches: Proven ways to implement the cybersecurity practice effectively.
  4. Common challenges: Issues that commonly hinder consistent, successful implementation.

Rapid7’s portfolio of solutions can help meet and exceed the cybersecurity practices commonly required by regulations. To illustrate this, the white paper provides extensive product and service mapping intended to help every unique organization achieve its compliance goals. We discuss the key, go-to products and services that help fulfill each practice, as well as those that provide additional support.

For example, when it comes to maintaining a comprehensive security program, it might help to measure the effectiveness of your program’s current state with a cybersecurity maturity assessment. Or, if you’re trying to stay compliant with safeguards to control risk, InsightCloudSec can help govern Identity and Access Management (IAM) and adopt a unified zero-trust security model across your cloud and container environments. And what about testing? From pentesting to managed application security services, simulate real-world attacks at different stages of the software development lifecycle (SDLC) to understand your state of risk and know if your end product is customer-ready.  

Which regulations are discussed in the white paper?

Sector-based

  1. HIPAA (health)
  2. GLBA (financial)
  3. NYDFS Cybersecurity Regulation (financial)
  4. PCI DSS (retail)
  5. COPPA (retail)
  6. NERC CIP (electrical)

Broadly applicable

  1. State Data Security Laws (CA, FL, MA, NY, TX)
  2. SOX

International

  1. GDPR
  2. NIS Directive

When it comes to compliance, the stakes go beyond running afoul of a regulatory body you may not have been aware of when entering a new market. Customer trust is difficult to win back once you lose it.

A comprehensive cybersecurity program from a trusted and vetted provider will help ensure you’re well-protected from threats and in compliance with regulations wherever your company does business. Whether it’s monitoring and testing services, risk assessments, or certification and training for personnel, your provider should deliver tailored solutions and products that help you meet your unique compliance goals — and protect your users — now and well into the future.

Learn more at our webinar on Monday, December 13

Sign up today

Note: The white paper discussed should not be used as a compliance guide and is not legal advice.

Introducing the Customer Metadata Boundary

Post Syndicated from Jon Levine original https://blog.cloudflare.com/introducing-the-customer-metadata-boundary/

Data localisation has gotten a lot of attention in recent years because a number of countries see it as a way of controlling or protecting their citizens’ data. Countries such as Australia, China, India, Brazil, and South Korea have adopted, or are currently considering, regulations that assert legal sovereignty over their citizens’ personal data in some fashion: health care data must be stored locally, public institutions may only contract with local service providers, and so on.

In the EU, the recent “Schrems II” decision resulted in additional requirements for companies that transfer personal data outside the EU. And a number of highly regulated industries require that specific types of personal data stay within the EU’s borders.

Cloudflare is committed to helping our customers keep personal data in the EU. Last year, we introduced the Data Localisation Suite, which gives customers control over where their data is inspected and stored.

Today, we’re excited to introduce the Customer Metadata Boundary, which expands the Data Localisation Suite to ensure that a customer’s end user traffic metadata stays in the EU.

Metadata: a primer

“Metadata” can be a scary term, but it’s a simple concept — it just means “data about data.” In other words, it’s a description of activity that happened on our network. Every service on the Internet collects metadata in some form, and it’s vital to user safety and network availability.

At Cloudflare, we collect metadata about the usage of our products for several purposes:

  • Serving analytics via our dashboards and APIs
  • Sharing logs with customers
  • Stopping security threats such as bot or DDoS attacks
  • Improving the performance of our network
  • Maintaining the reliability and resiliency of our network

What does that collection look like in practice at Cloudflare? Our network consists of dozens of services: our Firewall, Cache, DNS Resolver, DDoS protection systems, Workers runtime, and more. Each service emits structured log messages, which contain fields like timestamps, URLs, usage of Cloudflare features, and the identifier of the customer’s account and zone.

These messages do not contain the contents of customer traffic, and so they do not contain things like usernames, passwords, personal information, and other private details of customers’ end users. However, these logs may contain end-user IP addresses, which are considered personal data in the EU.

Data Localisation in the EU

The EU’s General Data Protection Regulation, or GDPR, is one of the world’s most comprehensive (and well known) data privacy laws. The GDPR does not, however, insist that personal data must stay in Europe. Instead, it provides a number of legal mechanisms to ensure that GDPR-level protections are available for EU personal data if it is transferred outside the EU to a third country like the United States. Data transfers from the EU to the US were, until recently, permitted under an agreement called the EU-U.S. Privacy Shield Framework.

Shortly after the GDPR went into effect, a privacy activist named Max Schrems filed suit against Facebook for their data collection practices. In July 2020, the Court of Justice of the EU issued the “Schrems II” ruling — which, among other things, invalidated the Privacy Shield framework. However, the court upheld other valid transfer mechanisms that ensure EU personal data won’t be accessed by U.S. government authorities in a way that violates the GDPR.

Since the Schrems II decision, many customers have asked us how we’re protecting EU citizens’ data. Fortunately, Cloudflare has had data protection safeguards in place since well before the Schrems II case, such as our industry-leading commitments on government data requests. In response to Schrems II in particular, we updated our customer Data Processing Addendum (DPA). We incorporated the latest Standard Contractual Clauses, which are legal agreements approved by the EU Commission that enable data transfer. We also added additional safeguards as outlined in the EDPB’s June 2021 Recommendations on Supplementary Measures. Finally, Cloudflare’s services are certified under the ISO 27701 standard, which maps to the GDPR’s requirements.

In light of these measures, we believe that our EU customers can use Cloudflare’s services in a manner consistent with GDPR and the Schrems II decision. Still, we recognize that many of our customers want their EU personal data to stay in the EU. For example, some of our customers in industries like healthcare, law, and finance may have additional requirements.  For that reason, we have developed an optional suite of services to address those requirements. We call this our Data Localisation Suite.

How the Data Localisation Suite helps today

Data Localisation is challenging for customers because of the volume and variety of data they handle. When it comes to their Cloudflare traffic, we’ve found that customers are primarily concerned about three areas:

  1. How do I ensure my encryption keys stay in the EU?
  2. How can I ensure that services like caching and WAF only run in the EU?
  3. How can I ensure that metadata is never transferred outside the EU?

To address the first concern, Cloudflare has long offered Keyless SSL and Geo Key Manager, which ensure that private SSL/TLS key material never leaves the EU. Keyless SSL ensures that Cloudflare never has possession of the private key material at all; Geo Key Manager uses Keyless SSL under the hood to ensure the keys never leave the specified region.

Last year we addressed the second concern with Regional Services, which ensures that Cloudflare will only be able to decrypt and inspect the content of HTTP traffic inside the EU. In other words, SSL connections will only be terminated in Europe, and all of our layer 7 security and performance services will only run in our EU data centers.

Today, we’re enabling customers to address the third and final concern, and keep metadata local as well.

How the Metadata Boundary Works

The Customer Metadata Boundary ensures, simply, that end user traffic metadata that can identify a customer stays in the EU. This includes all the logs and analytics that a customer sees.

How are we able to do this? All the metadata that can identify a customer flows through a single service at our edge, before being forwarded to one of our core data centers.

When the Metadata Boundary is enabled for a customer, our edge ensures that any log message that identifies that customer (that is, contains that customer’s Account ID) is not sent outside the EU. It will only be sent to our core data center in the EU, and not our core data center in the US.

What’s next

Today our Data Localisation Suite is focused on helping our customers in the EU localise data for their inbound HTTP traffic. This includes our Cache, Firewall, DDoS protection, and Bot Management products.

We’ve heard from customers that they want data localisation for more products and more regions. This means making all of our Data Localisation Products, including Geo Key Manager and Regional Services, work globally. We’re also working on expanding the Metadata Boundary to include our Zero Trust products like Cloudflare for Teams. Stay tuned!

New for AWS Control Tower – Region Deny and Guardrails to Help You Meet Data Residency Requirements

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-requirements/

Many customers, such as those in highly regulated industries and the public sector, want to have control over where their data is stored and processed. AWS already offers many tools and features to comply with local laws and regulations, but we want to provide a simplified way to translate data residency requirements into controls that can be applied to single- and multi-account environments.

Starting today, you can use AWS Control Tower to deploy data residency preventive and detective controls, referred to as guardrails. These guardrails will prevent provisioning resources in unwanted AWS Regions by restricting access to AWS APIs through service control policies (SCPs) built and managed by AWS Control Tower. In this way, content cannot be created or transferred outside of your selected Regions at the infrastructure level. In this context, content can be software (including machine images), data, text, audio, video, or images hosted on AWS for processing or storage. For example, AWS customers in Germany can deny access to AWS services in Regions outside of Frankfurt with the exception of global services such as AWS Identity and Access Management (IAM) and AWS Organizations.
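The general shape of such a Region deny control is an SCP keyed on the aws:RequestedRegion condition. The following is a simplified sketch for the Frankfurt example, not the managed SCP that AWS Control Tower actually generates, which exempts a longer list of global service APIs:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideGovernedRegions",
    "Effect": "Deny",
    "NotAction": ["iam:*", "organizations:*", "sts:*", "cloudfront:*", "route53:*", "support:*"],
    "Resource": "*",
    "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["eu-central-1"]}}
  }]
}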

AWS Control Tower also offers guardrails to further control data residency in underlying AWS service options, for example, blocking Amazon Simple Storage Service (Amazon S3) cross-region replication or blocking the creation of internet gateways.

The AWS account used for managing AWS Control Tower is not restricted by the new Region deny settings. That account can be used for remediation if you have data in an unwanted Region before enabling Region deny.

Detective guardrails are implemented via AWS Config rules and can further detect unexpected configuration changes that should not be allowed.

You still retain a shared responsibility model for data residency at the application level, but these controls can help you restrict what infrastructure and application teams can do on AWS.

Using Data Residency Guardrails in AWS Control Tower
To use the new data residency guardrails, you need to have created a landing zone using AWS Control Tower. See Plan your AWS Control Tower landing zone for more information.

To see all the new controls that are available, I select Guardrails on the left pane of the AWS Control Tower console and then find those in the Data Residency category. I sort results by Behavior. Guardrails that have a Prevention behavior are implemented as SCPs. Those that have a Detection behavior are implemented as AWS Config rules.

The most interesting guardrail is probably the one denying access to AWS based on the requested AWS Region. I choose it from the list and find that it is different from the other guardrails because it affects all Organizational Units (OUs) and cannot be activated here but must be activated in the landing zone settings.

Below the Overview, in the Guardrail components, there is a link to the full SCP for this guardrail, and I can see the list of AWS APIs that will still be allowed in non-governed Regions when this setting is enabled. Depending on your requirements, some of those services, such as Amazon CloudFront or AWS Global Accelerator, can be further limited by a custom SCP.

In the Landing zone settings, the Region deny guardrail is currently not enabled. I choose Modify settings and then enable the Region deny settings.

Below the Region deny settings, there is the list of AWS Regions governed by the landing zone. Those will be the Regions allowed when I enable Region deny.

In my case, I have four governed Regions, two in the US and two in Europe:

  • US East (N. Virginia), which is also the home Region for the landing zone
  • US West (Oregon)
  • Europe (Ireland)
  • Europe (Frankfurt)

I choose Update landing zone at the bottom. The update of the landing zone takes a few minutes to complete. Now, the vast majority of the AWS APIs are blocked if they are not directed to one of those governed Regions. Let’s do a few tests.

Testing Region Deny in a Sandbox Account
Using AWS Single Sign-On, I copy the AWS credentials to use the sandbox account with AWSAdministratorAccess permissions. In a terminal, I paste the commands setting the environment variables to use those credentials.

Now, I try to start a new Amazon Elastic Compute Cloud (Amazon EC2) instance in US East (Ohio), one of the non-governed Regions. In a landing zone, the default VPC is replaced by a VPC managed by AWS Control Tower. To start the instance, I need to specify a VPC subnet. Let’s find a subnet ID that I can use.

aws ec2 describe-subnets --query 'Subnets[0].SubnetId' --region us-east-2

An error occurred (UnauthorizedOperation) when calling the DescribeSubnets operation:
You are not authorized to perform this operation.

As expected, I am not authorized to perform this operation in US East (Ohio). Let’s try to start an EC2 instance without passing the subnet ID.

aws ec2 run-instances --image-id ami-0dd0ccab7e2801812 --region us-east-2 \
    --instance-type t3.small                                     

An error occurred (UnauthorizedOperation) when calling the RunInstances operation:
You are not authorized to perform this operation.
Encoded authorization failure message: <ENCODED MESSAGE>

Again, I am not authorized. More information is included in the encoded authorization failure message that I can decode as described in this article:

aws sts decode-authorization-message --encoded-message <ENCODED MESSAGE>

The decoded message (which I have omitted for brevity) tells me that there was an explicit deny to my request and includes the full SCP that caused the deny. This information is really useful for debugging this kind of error.

Now, let’s try in US East (N. Virginia), one of the four governed Regions.

aws ec2 describe-subnets --query 'Subnets[0].SubnetId' --region us-east-1
"subnet-0f3580c0c5e56c210"

This time, the command returns the subnet ID of the first subnet returned by the request. Let’s start an instance in US East (N. Virginia) using this subnet.

aws ec2 run-instances --image-id  ami-04ad2567c9e3d7893 --region us-east-1 \
    --instance-type t3.small --subnet-id subnet-0f3580c0c5e56c210

As expected, it works, and I can see the EC2 instance running in the console.

Similarly, APIs for other AWS services are limited by the Region deny settings. For example, I can’t create an S3 bucket in a non-governed Region.

When I try to create the bucket, I get an access denied error.
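The same attempt is denied from the CLI as well; the sketch below shows an attempt against US East (Ohio), with a placeholder bucket name and abbreviated output:

aws s3api create-bucket --bucket example-region-deny-test --region us-east-2 \
    --create-bucket-configuration LocationConstraint=us-east-2

An error occurred (AccessDenied) when calling the CreateBucket operation: Access Denied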

As expected, the creation of an S3 bucket works in a governed Region.

Even if someone gives this account access to a bucket in a non-governed Region, I would not be able to copy any data into that bucket.

Other preventive guardrails can enforce data residency, for example:

  • Disallow cross-region networking for Amazon EC2, Amazon CloudFront, and AWS Global Accelerator
  • Disallow internet access for an Amazon VPC instance managed by a customer
  • Disallow Amazon Virtual Private Network (VPN) connections

Now, let’s see how detective guardrails work.

Testing Detective Guardrails in a Sandbox Account
I enable the following guardrails for all accounts in the sandbox OU:

  • Detect whether Amazon EBS snapshots are restorable by all AWS accounts
  • Detect whether public routes exist in the route table for an internet gateway

Now, I want to see what happens if I go against these guardrails. In the EC2 console, I create an EBS snapshot for the volume of the EC2 instance I started before. Then, I modify permissions to share it with all AWS accounts.
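The CLI equivalent of that console step, making the snapshot restorable by all AWS accounts, looks roughly like this (the snapshot ID is a placeholder):

aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
    --attribute createVolumePermission --operation-type add --group-names all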

Then, in the VPC console, I create an internet gateway, attach it to the AWS Control Tower managed VPC, and update the route table of one of the private subnets to use the internet gateway.
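For reference, the same change can be made from the CLI, roughly as follows (all resource IDs are placeholders):

aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId'

aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0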

After a few minutes, the noncompliant resources in the sandbox account are found by the detective guardrails.

I look at the information provided by the guardrails and update my configuration to fix the issues. In a multi-account setup I’d contact the account owner and ask for remediation.

Availability and Pricing
You can use data residency guardrails to control resources in any AWS Region. To create a landing zone, you should start from one of the Regions where AWS Control Tower is offered. For more information, see the AWS Regional Services List. There is no additional cost for this feature. You pay the costs of other services used, such as AWS Config.

This feature provides you with a framework of controls and guidance for setting up a multi-account environment that addresses data residency requirements. Depending on your use case, you may use any subset of the new data residency guardrails.

Set up guardrails based on your data residency requirements with AWS Control Tower.

Danilo

2021 PCI 3DS report now available

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/2021-pci-3ds-report-now-available/

We are excited to announce that Amazon Web Services (AWS) has released the latest 2021 PCI 3-D Secure (3DS) attestation to support our customers implementing EMV® 3-D Secure services on AWS. Although AWS doesn’t directly perform the functions of 3DS Server (3DSS), 3DS Directory Server (DS), or 3DS Access Control Server (ACS), AWS customers can host their 3DS environments on AWS, using services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS) and Amazon Virtual Private Cloud (Amazon VPC).

The new AWS PCI 3DS attestation of compliance means customers can now attain their own PCI 3DS compliance for services running on AWS. All AWS Regions in scope for PCI DSS are included in the 3DS attestation. AWS was assessed by Coalfire, an independent Qualified Security Assessor (QSA). AWS compliance reports, including this latest PCI 3DS attestation, are available on demand through AWS Artifact. The 3DS package available in AWS Artifact includes the 3DS Attestation of Compliance (AOC) and a Shared Responsibility Guide.

To learn more about our PCI program and other compliance and security programs, please visit AWS Compliance Programs.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

Three ways to improve your cybersecurity awareness program

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/three-ways-to-improve-your-cybersecurity-awareness-program/

Raising the bar on cybersecurity starts with education. That’s why we announced in August that Amazon is making its internal Cybersecurity Awareness Training Program available to businesses and individuals for free starting this month. This is the same annual training we provide our employees to help them better understand and anticipate potential cybersecurity risks. The training program will include a getting started guide to help you implement a cybersecurity awareness training program at your organization. It’s aligned with NIST SP 800-53 Rev. 4, ISO 27001, K-ISMS, RSEFT, IRAP, OSPAR, and MCTS.

I also want to share a few key learnings for how to implement effective cybersecurity training programs that might be helpful as you develop your own training program:

  1. Be sure to articulate personal value. As humans, we have an evolved sense of physical risk that has developed over thousands of years. Our bodies respond when we sense danger, heightening our senses and getting us ready to run or fight. We have a far less developed sense of cybersecurity risk. Your vision doesn’t sharpen when you assign the wrong permissions to a resource, for example. It can be hard to describe the impact of cybersecurity, but if you keep the message personal, it engages parts of the brain that are tied to deep emotional triggers in memory. When we describe how learning a behavior—like discerning when an email might be phishing—can protect your family, your child’s college fund, or your retirement fund, it becomes more apparent why cybersecurity matters.
  2. Be inclusive. Humans are best at learning when they share a lived experience with their educators so they can make authentic connections to their daily lives. That’s why inclusion in cybersecurity training is a must. But that only happens by investing in a cybersecurity awareness team that includes people with different backgrounds, so they can provide insight into different approaches that will resonate with diverse populations. People from different cultures, backgrounds, and age cohorts can provide insight into culturally specific attack patterns as well as how to train for them. For example, for social engineering in hierarchical cultures, bad actors often spoof authority figures, and for individualistic cultures, they play to the target’s knowledge and importance, and give compliments. And don’t forget to make everything you do accessible for people with varying disability experiences, because everyone deserves the same high-quality training experience. The more you connect with people, the more they internalize your message and provide valuable feedback. Diversity and inclusion breeds better cybersecurity.
  3. Weave it into workflows. Training takes investment. You have to make time for it in your day. We all understand that as part of a workforce we have to do it, but in addition to compliance training, you should be providing just-in-time reminders and challenges to complete. Try working with tooling teams to display messaging when critical tasks are being completed. Make training short and concise—3 minutes at most—so that people can make time for it in their day.

Cybersecurity training isn’t just a once-per-year exercise. Find ways to weave it into the daily lives of your workforce, and you’ll be helping them protect not only your company, but themselves and their loved ones as well.

Get started by going to learnsecurity.amazon.com and take the Cybersecurity Awareness training.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.