Tag Archives: Compliance

New – AWS Config Rules Now Support Proactive Compliance

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-aws-config-rules-now-support-proactive-compliance/

When operating a business, you have to find the right balance between speed and control for your cloud operations. On the one hand, you want to be able to quickly provision the cloud resources your applications need. On the other, depending on your industry, you need to maintain compliance with regulatory, security, and operational best practices.

AWS Config provides rules that you can run in detective mode to evaluate whether the configuration settings of your AWS resources comply with your desired configuration settings. Today, we are extending AWS Config rules to support proactive mode, so that you can run them at any time before provisioning and avoid the effort of implementing custom pre-deployment validations.

When creating standard resource templates, platform teams can run AWS Config rules in proactive mode to verify that the templates are compliant before sharing them across your organization. When implementing a new service or new functionality, development teams can run rules in proactive mode as part of their continuous integration and continuous delivery (CI/CD) pipeline to identify noncompliant resources.

You can also use AWS CloudFormation Guard in your deployment pipelines to check for compliance proactively and ensure that a consistent set of policies are applied both before and after resources are provisioned.

Let’s see how this works in practice.

Using Proactive Compliance with AWS Config
In the AWS Config console, I choose Rules in the navigation pane. In the rules table, I see the new Enabled evaluation mode column that specifies whether the rule is proactive or detective. Let’s set up my first rule.

I choose Add rule, and then I enter rds-storage in the AWS Managed Rules search box to find the rds-storage-encrypted rule. This rule checks whether storage encryption is enabled for your Amazon Relational Database Service (RDS) DB instances and can be added in proactive or detective evaluation mode. I choose Next.

In the Evaluation mode section, I turn on proactive evaluation. Now both the proactive and detective evaluation switches are enabled.

I leave all the other settings to their default values and choose Next. In the next step, I review the configuration and add the rule.

Now, I can use proactive compliance via the AWS Config API (including the AWS Command Line Interface (CLI) and AWS SDKs) or with CloudFormation Guard. In my CI/CD pipeline, I can use the AWS Config API to check the compliance of a resource before creating it. When deploying using AWS CloudFormation, I can set up a CloudFormation hook to proactively check my configuration before the actual deployment happens.

Let’s do an example using the AWS CLI. First, I call the StartResourceEvaluation API, passing as input the resource ID (for reference only), the resource type, and its configuration using the CloudFormation schema. For simplicity, in the database configuration, I only use the StorageEncrypted option and set it to true to pass the evaluation. I use an evaluation timeout of 60 seconds, which is more than enough for this rule.

aws configservice start-resource-evaluation --evaluation-mode PROACTIVE \
    --resource-details '{"ResourceId":"myDB",
                         "ResourceType":"AWS::RDS::DBInstance",
                         "ResourceConfiguration":"{\"StorageEncrypted\":true}",
                         "ResourceConfigurationSchemaType":"CFN_RESOURCE_SCHEMA"}' \
    --evaluation-timeout 60
{
    "ResourceEvaluationId": "be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d"
}

The output includes a ResourceEvaluationId that I use to check the evaluation status with the GetResourceEvaluationSummary API. At first, the evaluation is IN_PROGRESS. It usually takes a few seconds to get a COMPLIANT or NON_COMPLIANT result.

aws configservice get-resource-evaluation-summary \
    --resource-evaluation-id be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d
{
    "ResourceEvaluationId": "be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d",
    "EvaluationMode": "PROACTIVE",
    "EvaluationStatus": {
        "Status": "SUCCEEDED"
    },
    "EvaluationStartTimestamp": "2022-11-15T19:13:46.029000+00:00",
    "Compliance": "COMPLIANT",
    "ResourceDetails": {
        "ResourceId": "myDB",
        "ResourceType": "AWS::RDS::DBInstance",
        "ResourceConfiguration": "{\"StorageEncrypted\":true}"
    }
}

As expected, the Amazon RDS configuration is compliant with the rds-storage-encrypted rule. If I repeat the previous steps with StorageEncrypted set to false, I get a noncompliant result.
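
For reference, this is the same call with only the StorageEncrypted flag flipped to false; everything else is unchanged:

aws configservice start-resource-evaluation --evaluation-mode PROACTIVE \
    --resource-details '{"ResourceId":"myDB",
                         "ResourceType":"AWS::RDS::DBInstance",
                         "ResourceConfiguration":"{\"StorageEncrypted\":false}",
                         "ResourceConfigurationSchemaType":"CFN_RESOURCE_SCHEMA"}' \
    --evaluation-timeout 60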

If more than one rule is enabled for a resource type, all applicable rules are run in proactive mode for the resource evaluation. To find out individual rule-level compliance for the resource, I can call the GetComplianceDetailsByResource API:

aws configservice get-compliance-details-by-resource \
    --resource-evaluation-id be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d
{
    "EvaluationResults": [
        {
            "EvaluationResultIdentifier": {
                "EvaluationResultQualifier": {
                    "ConfigRuleName": "rds-storage-encrypted",
                    "ResourceType": "AWS::RDS::DBInstance",
                    "ResourceId": "myDB",
                    "EvaluationMode": "PROACTIVE"
                },
                "OrderingTimestamp": "2022-11-15T19:14:42.588000+00:00",
                "ResourceEvaluationId": "be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d"
            },
            "ComplianceType": "COMPLIANT",
            "ResultRecordedTime": "2022-11-15T19:14:55.588000+00:00",
            "ConfigRuleInvokedTime": "2022-11-15T19:14:42.588000+00:00"
        }
    ]
}

If a rule you expect to see in these details was not invoked, check that proactive mode is turned on for that rule.
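
One way to check that from the AWS CLI is to describe the rule and look at its evaluation modes. This is a quick sketch; it assumes the EvaluationModes field that was added to the DescribeConfigRules output alongside this launch:

aws configservice describe-config-rules \
    --config-rule-names rds-storage-encrypted \
    --query 'ConfigRules[0].EvaluationModes'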

Availability and Pricing
Proactive compliance will be available in all commercial AWS Regions where AWS Config is offered, but it might take a few days to deploy this new capability across all these Regions. I’ll update this post when this deployment is complete. To see which AWS Config rules can be run in proactive mode, see the Developer Guide.

You are charged based on the number of AWS Config rule evaluations recorded. A rule evaluation is recorded every time a resource is evaluated for compliance against an AWS Config rule. Rule evaluations can be run in detective mode and/or in proactive mode, if available. If you are running a rule in both detective mode and proactive mode, you will be charged for only the evaluations in detective mode. For more information, see AWS Config pricing.

With this new feature, you can use AWS Config to check your rules before provisioning and avoid implementing your own custom validations.

Danilo

New for AWS Control Tower – Comprehensive Controls Management (Preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-comprehensive-controls-management-preview/

Today, customers in regulated industries face the challenge of defining and enforcing controls needed to meet compliance and security requirements while empowering engineers to make their design choices. In addition to addressing risk, reliability, performance, and resiliency requirements, organizations may also need to comply with frameworks and standards such as PCI DSS and NIST 800-53.

Building controls that account for service relationships and their dependencies is time-consuming and expensive. Sometimes customers restrict engineering access to AWS services and features until their cloud architects identify risks and implement their own controls.

To make that easier, today we are launching comprehensive controls management in AWS Control Tower. You can use it to apply managed preventive, detective, and proactive controls to accounts and organizational units (OUs) by service, control objective, or compliance framework. AWS Control Tower does the mapping between them on your behalf, saving time and effort.

With this new capability, you can now also use AWS Control Tower to turn on AWS Security Hub detective controls across all accounts in an OU. In this way, Security Hub controls are enabled in every AWS Region that AWS Control Tower governs.

Let’s see how this works in practice.

Using AWS Control Tower Comprehensive Controls Management
In the AWS Control Tower console, there is a new Controls library section. There, I choose All controls. There are now more than three hundred controls available. For each control, I see which AWS service it is related to, the control objective this control is part of, the implementation (such as AWS Config rule or AWS CloudFormation Guard rule), the behavior (preventive, detective, or proactive), and the frameworks it maps to (such as NIST 800-53 or PCI DSS).

In the Find controls search box, I look for a preventive control called CT.CLOUDFORMATION.PR.1. This control uses a service control policy (SCP) to protect controls that use CloudFormation hooks and is required by the control that I want to turn on next. Then, I choose Enable control.

Then, I select the OU for which I want to enable this control.

Now that I have set up this control, let’s see how controls are presented in the console in categories. I choose Categories in the navigation pane. There, I can browse controls grouped as Frameworks, Services, and Control objectives. By default, the Frameworks tab is selected.

I select a framework (for example, PCI DSS version 3.2.1) to see all the related controls and control objectives. To implement a control, I can select the control from the list and choose Enable control.

I can also manage controls by AWS service. When I select the Services tab, I see a list of AWS services and the related control objectives and controls.

I choose Amazon DynamoDB to see the controls that I can turn on for this service.

I select the Control objectives tab. When I need to assess a control objective, this is where I have access to the list of related controls to turn on.

I choose Encrypt data at rest to see and search through the available controls for that control objective. I can also check which services are covered in this specific case. I type RDS in the search bar to find the controls related to Amazon Relational Database Service (RDS) for this control objective.

I choose CT.RDS.PR.16 – Require an Amazon RDS database cluster to have encryption at rest configured and then Enable control.

I select the OU for which I want to enable the control, and I proceed. All the AWS accounts in this organization’s OU will have this control enabled in all the Regions that AWS Control Tower governs.
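
The same operation is available programmatically through the AWS Control Tower EnableControl API. Here is a sketch from the CLI; the control and target identifiers below are placeholders, and you would look up the actual ARNs for your landing zone:

aws controltower enable-control \
    --control-identifier arn:aws:controltower:us-east-1::control/CT.RDS.PR.16 \
    --target-identifier arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-examplerootid111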

After a few minutes, the AWS Control Tower setup is updated. Now, the accounts in this OU have proactive control CT.RDS.PR.16 turned on. When an account in this OU deploys a CloudFormation stack, any Amazon RDS database cluster has to have encryption at rest configured. Because this control is proactive, it’ll be checked by a CloudFormation hook before the deployment starts. This saves a lot of time compared to a detective control that would find the issue only when the CloudFormation deployment is in progress or has terminated. This also improves my security posture by preventing something that’s not allowed as opposed to reacting to it after the fact.

Availability and Pricing
Comprehensive controls management is available in preview today in all AWS Regions where AWS Control Tower is offered. These enhanced control capabilities reduce the time it takes you to vet AWS services from months or weeks to minutes. They help you use AWS by taking on the heavy burden of defining, mapping, and managing the controls required to meet the most common control objectives and regulations.

There is no additional charge to use these new capabilities during the preview. However, when you set up AWS Control Tower, you will begin to incur costs for AWS services configured to set up your landing zone and mandatory controls. For more information, see AWS Control Tower pricing.

Simplify how you implement compliance and security requirements with AWS Control Tower.

Danilo

New – Amazon Redshift Support in AWS Backup

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-redshift-support-in-aws-backup/

With Amazon Redshift, you can analyze data in the cloud at any scale. Amazon Redshift offers native data protection capabilities to protect your data using automatic and manual snapshots. This works great by itself, but when you’re using other AWS services, you have to configure more than one tool to manage your data protection policies.

To make this easier, I am happy to share that we added support for Amazon Redshift in AWS Backup. AWS Backup allows you to define a central backup policy to manage data protection of your applications and can now also protect your Amazon Redshift clusters. In this way, you have a consistent experience when managing data protection across all supported services. If you have a multi-account setup, the centralized policies in AWS Backup let you define your data protection policies across all your accounts within your AWS Organizations. To help you meet your regulatory compliance needs, AWS Backup now includes Amazon Redshift in its auditor-ready reports. You also have the option to use AWS Backup Vault Lock to have immutable backups and prevent malicious or inadvertent changes.

Let’s see how this works in practice.

Using AWS Backup with Amazon Redshift
The first step is to turn on the Redshift resource type for AWS Backup. In the AWS Backup console, I choose Settings in the navigation pane and then, in the Service opt-in section, Configure resources. There, I toggle the Redshift resource type on and choose Confirm.

Now, I can create or update a backup plan to include the backup of all, or some, of my Redshift clusters. In the backup plan, I can define how often these backups should be taken and for how long they should be kept. For example, I can have daily backups with one week of retention, weekly backups with one month of retention, and monthly backups with one year of retention.
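
As a sketch, a plan along those lines could be created from the AWS CLI as follows; the plan name, vault name, schedules, and retention values are illustrative:

aws backup create-backup-plan --backup-plan '{
    "BackupPlanName": "redshift-tiered-backups",
    "Rules": [
        {"RuleName": "daily", "TargetBackupVaultName": "Default",
         "ScheduleExpression": "cron(0 5 ? * * *)",
         "Lifecycle": {"DeleteAfterDays": 7}},
        {"RuleName": "weekly", "TargetBackupVaultName": "Default",
         "ScheduleExpression": "cron(0 5 ? * SUN *)",
         "Lifecycle": {"DeleteAfterDays": 30}},
        {"RuleName": "monthly", "TargetBackupVaultName": "Default",
         "ScheduleExpression": "cron(0 5 1 * ? *)",
         "Lifecycle": {"DeleteAfterDays": 365}}
    ]
}'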

I can also create on-demand backups. Let’s look at this in more detail. I choose Protected resources in the navigation pane and then Create on-demand backup.

I select Redshift in the Resource type dropdown. In the Cluster identifier dropdown, I select one of my clusters. For this workload, I need two weeks of retention. Then, I choose Create on-demand backup.
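
The equivalent CLI call is a single command. This is a sketch with placeholder ARNs; the two-week retention is expressed through the lifecycle setting:

aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:redshift:us-east-1:111122223333:cluster:my-cluster \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole \
    --lifecycle DeleteAfterDays=14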

My data warehouse is not huge, so after a few minutes, the backup job has completed.

I now see my Redshift cluster in the list of the resources protected by AWS Backup.

In the Protected resources list, I choose the Redshift cluster to see the list of the available recovery points.
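
The same list is available from the CLI (the cluster ARN below is a placeholder):

aws backup list-recovery-points-by-resource \
    --resource-arn arn:aws:redshift:us-east-1:111122223333:cluster:my-cluster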

When I choose one of the recovery points, I have the option to restore the full data warehouse or just a table into a new Redshift cluster.

At this point, I can edit the cluster and database configuration, including security and networking settings. I update the cluster identifier, because the restore would fail if the identifier is not unique. Then, I choose Restore backup to start the restore job.

After some time, the restore job has completed, and I see the old and the new clusters in the Amazon Redshift console. Using AWS Backup gives me a simple centralized way to manage data protection for Redshift clusters as well as many other resources in my AWS accounts.

Availability and Pricing
Amazon Redshift support in AWS Backup is available today in the AWS Regions where both AWS Backup and Amazon Redshift are offered, with the exception of the Regions based in China. You can use this capability via the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs.

There is no additional cost for using AWS Backup compared to the native snapshot capability of Amazon Redshift. Your overall costs depend on the amount of storage and retention you need. For more information, see AWS Backup pricing.

Danilo

2022 Canadian Centre for Cyber Security Assessment Summary report available with 12 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2022-canadian-centre-for-cyber-security-assessment-summary-report-available-with-12-additional-services/

We are pleased to announce the availability of the 2022 Canadian Centre for Cyber Security (CCCS) assessment summary report for Amazon Web Services (AWS). This assessment covers 12 additional AWS services, bringing the total to 132 AWS services and features assessed in the Canada (Central) AWS Region. A copy of the summary assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment is available on the AWS Services in Scope page. The 12 new services are:

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decisions to use cloud services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s Protected B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach). As of November 2022, 132 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the CCCS Medium cloud security profile. Meeting the CCCS Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and reassesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, ecommerce, and utilities industries.

AWS achieves Spain’s ENS High certification across 166 services

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-166-services/

Amazon Web Services (AWS) is committed to bringing additional services and AWS Regions into the scope of our Esquema Nacional de Seguridad (ENS) High certification to help customers meet their regulatory needs.

ENS is Spain’s National Security Framework. The ENS certification is regulated under the Spanish Royal Decree 3/2010 and is a compulsory requirement for central government customers in Spain. ENS establishes security standards that apply to government agencies and public organizations in Spain, and service providers on which Spanish public services depend. Updating and achieving this certification every year demonstrates our ongoing commitment to meeting the heightened expectations for cloud service providers set forth by the Spanish government.

We are happy to announce the addition of 17 services to the scope of our ENS High certification, for a new total of 166 services in scope. The certification now covers 25 Regions. Some of the additional security services in scope for ENS High include the following:

  • AWS CloudShell – a browser-based shell that makes it simpler to securely manage, explore, and interact with your AWS resources. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs by using the AWS SDKs, or use a range of other tools for productivity.
  • AWS Cloud9 – a cloud-based integrated development environment (IDE) that you can use to write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.
  • Amazon DevOps Guru – a service that uses machine learning to detect abnormal operating patterns so that you can identify operational issues before they impact your customers.
  • Amazon HealthLake – a HIPAA-eligible service that offers healthcare and life sciences companies a complete view of individual or patient population health data for query and analytics at scale.
  • AWS IoT SiteWise – a managed service that simplifies collecting, organizing, and analyzing industrial equipment data.

AWS’s achievement of the ENS High certification was verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at their highest level.

For more information about ENS High, see the AWS Compliance page Esquema Nacional de Seguridad High. To view the complete list of services included in the scope, see the AWS Services in Scope by Compliance Program – Esquema Nacional de Seguridad (ENS) page. You can download the ENS High Certificate from AWS Artifact in the AWS Management Console or from the Compliance page Esquema Nacional de Seguridad High.

As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Daniel Fuertes

Daniel is a Security Audit Program Manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has 8 years of experience in security assurance and previously worked as an auditor for the PCI DSS security framework.

AWS successfully renews GSMA security certification for US East (Ohio) and Europe (Paris) Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-successfully-renews-gsma-security-certification-for-us-east-ohio-and-europe-paris-regions/

Amazon Web Services is pleased to announce that our US East (Ohio) and Europe (Paris) Regions have been re-certified through October 2023 by the GSM Association (GSMA) under its Security Accreditation Scheme Subscription Management (SAS-SM) with scope Data Centre Operations and Management (DCOM).

The US East (Ohio) and Europe (Paris) Regions first obtained GSMA certification in September and October 2021, respectively. This renewal demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers. AWS customers who provide an embedded Universal Integrated Circuit Card (eUICC) for mobile devices can run their remote provisioning applications with confidence in the AWS Cloud in the GSMA-certified Regions.

For up-to-date information related to the certification, visit the AWS Compliance Program page and choose GSMA under Europe, Middle East & Africa.

AWS was evaluated by independent third-party auditors selected by GSMA. The Certificate of Compliance that shows that AWS achieved GSMA compliance status is available on the GSMA website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, you can submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

New AWS whitepaper: Using AWS in the Context of Canada’s Controlled Goods Program (CGP)

Post Syndicated from Michael Davie original https://aws.amazon.com/blogs/security/new-aws-whitepaper-using-aws-in-the-context-of-canadas-controlled-goods-program-cgp/

Amazon Web Services (AWS) has released a new whitepaper to help Canadian defense and security customers accelerate their use of the AWS Cloud.

The new guide, Using AWS in the Context of Canada’s Controlled Goods Program (CGP), continues our efforts to help AWS customers navigate the regulatory expectations of the Government of Canada’s Controlled Goods Program in a shared responsibility environment.

This whitepaper is intended for customers that are looking to store and process controlled goods information in the AWS Cloud, and is particularly useful for leadership, security, risk, and compliance teams that need to understand CGP requirements and guidance.

The whitepaper summarizes CGP requirements and guidance related to the protection of controlled goods information, and gives CGP-regulated customers information they can use to commence their due diligence and assess how to implement the appropriate programs for their use of AWS Cloud services.

This document is our first that is specific to Canadian regulatory requirements and joins other guides related to specific regulatory regimes around the world. As the regulatory environment continues to evolve, we’ll provide further updates on the AWS Security Blog and the AWS Compliance page. You can find more information on cloud-related regulatory compliance at the AWS Compliance Center. You can also reach out to your AWS account manager for help finding the resources you need.

If you have feedback about this blog post, submit comments in the Comments section below. You can also start a new thread on re:Post to get answers from the community.

Want more AWS Security news? Follow us on Twitter.

Michael Davie

Michael is a Senior Industry Specialist with AWS Security Assurance. He works with our customers, their regulators, and AWS teams to help raise the bar on secure cloud adoption and usage. Michael has over 20 years of experience working in the defence, intelligence, and technology sectors in Canada and is a licensed professional engineer.

AWS achieves its second ISMAP authorization in Japan

Post Syndicated from Hidetoshi Takeuchi original https://aws.amazon.com/blogs/security/aws-achieves-its-second-ismap-authorization-in-japan/

Earning and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). Our customers’ security requirements drive the scope and portfolio of the compliance reports, attestations, and certifications we pursue. We’re excited to announce that AWS has achieved authorization under the Information System Security Management and Assessment Program (ISMAP), effective from April 1, 2022 to March 31, 2023. The authorization scope covers a total of 145 AWS services (an increase of 22 services over the previous authorization) across 22 AWS Regions, including the Asia Pacific (Tokyo) Region and the Asia Pacific (Osaka) Region. This is the second time AWS has undergone an assessment since ISMAP was first published by the ISMAP steering committee in March 2020.

ISMAP is a Japanese government program for assessing the security of public cloud services. The purpose of ISMAP is to provide a common set of security standards for cloud service providers (CSPs) to comply with as a baseline requirement for government procurement. ISMAP introduces security requirements for cloud domains, practices, and procedures that CSPs must implement. CSPs must engage with an ISMAP-approved third-party assessor to assess compliance with the ISMAP security requirements in order to apply as an ISMAP-registered CSP. The ISMAP program will evaluate the security of each CSP and register those that satisfy the Japanese government’s security requirements. Upon successful ISMAP registration of CSPs, government procurement departments and agencies can accelerate their engagement with the registered CSPs and contribute to the smooth introduction of cloud services in government information systems.

The achievement of this authorization demonstrates the proactive approach AWS has taken to help customers meet compliance requirements set by the Japanese government and to deliver secure AWS services to our customers. Service providers and customers of AWS can use the ISMAP authorization of AWS services to support their own ISMAP authorization programs. The full list of 145 ISMAP-authorized AWS services is available on the AWS Services in Scope by Compliance Program webpage, and you can also use the ISMAP Customer Package on AWS Artifact. You can confirm the AWS ISMAP authorization status and find detailed scope information on the ISMAP Portal.

As always, we are committed to bringing new services and Regions into the scope of our ISMAP program, based on your business needs. If you have any questions, don’t hesitate to contact your AWS Account Manager.

If you have feedback about this post, submit comments in the Comments section below.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Hidetoshi Takeuchi

Hidetoshi is the Audit Program Manager for the Asia Pacific Region, leading Japan security certification and authorization programs. Hidetoshi has worked in information technology security, risk management, security assurance, and technology audits for the past 25 years. He is passionate about delivering programs that build customers’ trust and provide them with assurance on cloud security.

Rapid7 Makes Security Compliance Complexity a Thing of the Past With InsightIDR

Post Syndicated from KJ McCann original https://blog.rapid7.com/2022/08/30/rapid7-makes-security-compliance-complexity-a-thing-of-the-past-with-insightidr/

As a unified SIEM and XDR solution, InsightIDR gives organizations the tools they need to drive an elevated and efficient compliance program.

Cybersecurity standards and compliance are mission-critical for every organization, regardless of size. Apart from the direct losses resulting from a data breach, non-compliant companies could face hefty fines, loss of business, and even jail time under growing regulations. However, managing and maintaining compliance, preparing for audits, and building necessary reports can be a full-time job, which might not be in the budget. For already-lean teams, compliance can also distract from more critical security priorities like threat monitoring, early threat detection, and accelerated response – exposing organizations to greater risk.

An efficient compliance strategy reduces risk, ensures that your team is always audit-ready, and – most importantly – drives focus on more critical security work. With InsightIDR, security practitioners can quickly meet their compliance and regulatory requirements while accelerating their overall detection and response program.

Here are three ways InsightIDR has been built to elevate and simplify your compliance processes.

1. Powerful log management capabilities for full environment visibility and compliance readiness

Complete environment visibility and security log collection are critical for compliance purposes, as well as for providing a foundation of effective monitoring and threat detection. Enterprises need to monitor user activity, behavior, and application access across their entire environment — from the cloud to on-premises services. The adoption of cloud services continues to increase, creating even more potential access points for teams to keep up with.

InsightIDR’s strong log management capabilities provide full visibility into these potential threats, as well as enable robust compliance reporting by:

  • Centralizing and aggregating all security-relevant events, making them available for use in monitoring, alerting, investigation, and ad hoc searching
  • Providing the ability to search for data quickly, create data models and pivots, save searches and pivots as reports, configure alerts, and create dashboards
  • Retaining all log data for 13 months for all InsightIDR customers, enabling the correlation of data over time and meeting compliance mandates
  • Automatically mapping data to compliance controls, allowing analysts to create comprehensive dashboards and reports with just a few clicks

To take it a step further, InsightIDR’s intuitive user interface streamlines searches while eliminating the need for IT administrators to master a search language. The out-of-the-box correlation searches can be invoked in real time or scheduled to run at regular intervals to support compliance audits and reporting, updated dashboards, and more.

2. Predefined compliance reports and dashboards to keep you organized and consistent

Pre-built compliance content in InsightIDR enables teams to create robust reports without investing countless hours manually building and correlating data to provide information on the organization’s compliance posture. With the pre-built reports and dashboards, you can:

  • Automatically map data to compliance controls
  • Save filters and searches, then duplicate them across dashboards
  • Create, share, and customize reports right from the dashboard
  • Make reports available in multiple formats like PDF or interactive HTML files

InsightIDR’s library of pre-built dashboards makes it easier than ever to visualize your data within the context of common frameworks. Entire dashboards created by our Rapid7 experts can be set up in just a few clicks. Our dashboards cover a variety of key compliance frameworks like PCI, ISO 27001, HIPAA, and more.

3. Unified and correlated data points to provide meaningful insights

With strong log management capabilities providing a foundation for your security posture, the ability to correlate the resulting data and look for unusual behavior, system anomalies, and other indicators of a security incident is key. This information is used not only for real-time event notification but also for compliance audits and reporting, performance dashboards, historical trend analysis, and post-hoc incident forensics.

Privileged users are often the targets of attacks, and when compromised, they typically do the most damage. That’s why it’s critical to extend monitoring to these users. In fact, because of the risk involved, privileged user monitoring is a common requirement for compliance reporting in many regulated industries.

InsightIDR provides a constantly curated library of detections that span user behavior analytics, endpoints, file integrity monitoring, network traffic analysis, and cloud threat detection and response – supported by our own native endpoint agent, network sensor, and collection software. User authentications, locational data, and asset activity are baselined to identify anomalous privilege escalations, lateral movement, and compromised credentials. Customers can also connect their existing Privileged Access Management tools (like CyberArk Vault or Varonis DatAdvantage) to get a more unified view of privileged user monitoring with a single interface.

Meet compliance standards while accelerating your detection and response

We know compliance is not the only thing a security operations center (SOC) has to worry about. InsightIDR can ensure that your most critical compliance requirements are met quickly and confidently. Once you have an efficient compliance process, the team will be able to focus their time and effort on staying ahead of emergent threats and remediating attacks quickly, reducing risk to the business.

What could you do with the time back?

Incident Reporting Regulations Summary and Chart

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/08/26/incident-reporting-regulations-summary-and-chart/

A growing number of regulations require organizations to report significant cybersecurity incidents. We’ve created a chart that summarizes 11 proposed and current cyber incident reporting regulations and breaks down their common elements, such as who must report, what cyber incidents must be reported, the deadline for reporting, and more.

Download the chart now

This chart is intended as an educational tool to enhance the security community’s awareness of upcoming public policy actions, and provide a big picture look at how the incident reporting regulatory environment is unfolding. Please note, this chart is not comprehensive (there are even more incident reporting regulations out there!) and is only current as of August 8, 2022. Many of the regulations are subject to change.

This summary is for educational purposes only and nothing in this summary is intended as, or constitutes, legal advice.

Peter Woolverton led the research and initial drafting of this chart.

Avoiding Smash and Grab Under the SEC’s Proposed Cyber Rule

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/08/23/avoiding-smash-and-grab-under-the-secs-proposed-cyber-rule/

The SEC recently proposed a regulation to require all public companies to report cybersecurity incidents within four days of determining that the incident is material. While Rapid7 generally supports the proposed rule, we are concerned that the rule requires companies to publicly disclose a cyber incident before the incident has been contained or mitigated. This post explains why this is a problem and suggests a solution that still enables the SEC to drive companies toward disclosure.

(Terminology note: “Public companies” refers to companies that have stock traded on public US exchanges, and “material” means information that “there is a substantial likelihood that a reasonable shareholder would consider it important.” “Containment” aims to prevent a cyber incident from spreading. Containment is part of “mitigation,” which includes actions to reduce the severity of an event or the likelihood of a vulnerability being exploited, though may fall short of full remediation.)

In sum: The public disclosure of material cybersecurity incidents prior to containment or mitigation may cause greater harm to investors than a delay in public disclosure. We propose that the SEC provide an exemption to the proposed reporting requirements, enabling a company to delay public disclosure of an uncontained or unmitigated incident if certain conditions are met. Additionally, we explain why we believe other proposed solutions may not meet the SEC’s goals of transparency and avoidance of harm to investors.

Distinguished by default public disclosure

The purpose of the SEC’s proposed rule is to help enable investors to make informed investment decisions. This is a reflection of the growing importance of cybersecurity to corporate governance, risk assessment, and other key factors that stockholders weigh when investing. With the exception of reporting unmitigated incidents, Rapid7 largely supports this perspective.

The SEC’s proposed rule would (among other things) require companies to disclose material cyber incidents on Form 8-K, which is publicly available via the EDGAR system. Crucially, the SEC’s proposed rule makes no distinction between public disclosure of incidents that are contained or mitigated and incidents that are not yet contained or mitigated. While the public-by-default nature of the disclosure creates new problems, it also aligns with the SEC’s purpose in proposing the rule.

In contrast to the SEC’s proposed rule, the purpose of most other incident reporting regulations is to strengthen cybersecurity – a top global policy priority. As such, most other cyber incident reporting regulators (such as CISA, NERC, FDIC, Fed. Reserve, OCC, NYDFS, etc.) do not typically make incident reports public in a way that identifies the affected organization. In fact, some regulations (such as CIRCIA and the 2021 TSA pipeline security directive) classify company incident reports as sensitive information exempt from FOIA.

Beyond regulations, established cyber incident response protocol is to avoid tipping off an attacker until the incident is contained and the risk of further damage has been mitigated. See, for example, CISA’s Incident Response Playbook (especially sections on opsec) and NIST’s Computer Security Incident Handling Guide (especially Section 2.3.4). For similar reasons, it is commonly the goal of coordinated vulnerability disclosure practices to avoid, when possible, public disclosure of a vulnerability until the vulnerability has been mitigated. See, for example, the CERT Guide to Coordinated Disclosure.

While it may be reasonable to require disclosure of a contained or mitigated incident within four days of determining its materiality, a strict requirement for public disclosure of an unmitigated or ongoing incident is likely to expose companies and investors to additional danger. Investors are not the only group that may act on a cyber incident report, and such information may be misused.

Smash and grab harms investors and misprices securities

Cybercriminals often aim to embed themselves in corporate networks without the company knowing. Maintaining a low profile lets attackers steal data over time, quietly moving laterally across networks, steadily gaining greater access – sometimes over a period of years. But when the cover is blown and the company knows about its attacker? Forget secrecy, it’s smash and grab time.

Public disclosure of an unmitigated or uncontained cyber incident will likely lead to attacker behaviors that cause additional harm to investors. Note that such acts would be in reaction to the public disclosure of an unmitigated incident, and not a natural result of the original attack. For example:

  • Smash and grab: A discovered attacker may forgo stealth and accelerate data theft or extortion activities, causing more harm to the company (and therefore its investors). Consider this passage from the MS-ISAC’s 2020 Ransomware Guide: “Be sure [to] avoid tipping off actors that they have been discovered and that mitigation actions are being undertaken. Not doing so could cause actors to move laterally to preserve their access [or] deploy ransomware widely prior to networks being taken offline.”
  • Scorched earth: A discovered attacker may engage in anti-forensic activity (such as deleting logs), hindering post-incident investigations and intelligence sharing that could prevent future attacks that harm investors. From CISA’s Playbook: “Some adversaries may actively monitor defensive response measures and shift their methods to evade detection and containment.”
  • Pile-on: Announcing that a company has an incident may cause other attackers to probe the company and discover the vulnerability or attack vector from the original incident. If the incident is not yet mitigated, the copycat attackers can cause further harm to the company (and therefore its investors). From the CERT Guide to CVD: “Mere knowledge of a vulnerability’s existence in a feature of some product is sufficient for a skillful person to discover it for themselves. Rumor of a vulnerability draws attention from knowledgeable people with vulnerability finding skills — and there’s no guarantee that all those people will have users’ best interests in mind.”
  • Supply chain: Public disclosure of an unmitigated cybersecurity incident may alert attackers to a vulnerability that is present in other companies, the exploitation of which can harm investors in those other companies. Publicly disclosing “the nature and scope” of material incidents within four business days risks exposing enough detail of an otherwise unique zero-day to encourage rediscovery and reimplementation by other criminal and espionage groups against other organizations. For example, fewer than 100 organizations were actually exploited through the SolarWinds supply chain attack, but up to 18,000 organizations were at risk.

In addition, requiring public disclosure of uncontained or unmitigated cyber incidents may result in mispricing the stock of the affected company. By contradicting best practices for cyber incident response and inviting new attacks, the premature public disclosure of an uncontained or unmitigated incident may provide investors with an inaccurate measure of the company’s true ability to respond to cybersecurity incidents. Moreover, a premature disclosure during the incident response process may result in investors receiving inaccurate information about the scope or impact of the incident.

Rapid7 is not opposed to public disclosure of unmitigated vulnerabilities or incidents in all circumstances, and our security researchers publicly disclose vulnerabilities when necessary. However, public disclosure of unmitigated vulnerabilities typically occurs after failure to mitigate (such as due to inability to engage the affected organization), or when users should take defensive measures before mitigation because ongoing exploitation of the vulnerability “in the wild” is actively harming users. By contrast, the SEC’s proposed rule would rely on a public disclosure requirement on a restrictive timeline in nearly all cases, creating the risk of additional harm to investors that can outweigh the benefits of public disclosure.

Proposed solution

Below, we suggest a solution that we believe achieves the SEC’s ultimate goal of investor protection by requiring timely disclosure of cyber incidents and simultaneously avoiding the unnecessary additional harm to investors that may result with premature public disclosure.

Specifically, we suggest that the proposed rule remains largely the same — i.e., the SEC continues to require that companies determine whether the incident is material as soon as practicable after discovery of the cyber incident, and file a report on Form 8-K four days after the materiality determination under normal circumstances. However, we propose that the rule be revised to also provide companies with a temporary exemption from public disclosure if each of the below conditions are met:

  • The incident is not yet contained or otherwise mitigated to prevent additional harm to the company and its investors;
  • The company reasonably believes that public disclosure of the uncontained or unmitigated incident may cause substantial additional harm to the company, its investors, or other public companies or their investors;
  • The company reasonably believes the incident can be contained or mitigated in a timely manner; and
  • The company is actively engaged in containing or mitigating the incident in a timely manner.

The determination of whether the aforementioned exemption applies may be made simultaneously with the determination of materiality. If the exemption applies, the company may delay public disclosure until any of the conditions no longer holds, at which point it must publicly disclose the cyber incident via Form 8-K, no later than four days after the date on which the exemption ceased to apply. The 8-K disclosure could note that, prior to filing the 8-K, the company relied on the exemption from disclosure. Existing insider trading restrictions would, of course, continue to apply during the public disclosure delay.

If an open-ended delay in public disclosure for containment or mitigation is unacceptable to the SEC, then we suggest that the exemption only be available for 30 days after the determination of materiality. In our experience, the vast majority of incidents can be contained and mitigated within that time frame. However, cybersecurity incidents can vary greatly, and there may nonetheless be rare outliers where the mitigation process exceeds 30 days.

Drawbacks of other solutions

Rapid7 is aware of other solutions being floated to address the problem of public disclosure of unmitigated cyber incidents. However, these carry drawbacks: they do not align with the purpose of the SEC rule, or they do not make sense for cybersecurity. For example:

  • AG delay: The SEC’s proposed rule considers allowing a delay in reporting the incident when the Attorney General (AG) determines the delay is in the interest of national security. This is an appropriate delay, but insufficient on its own. This AG delay would apply to a very small fraction of major cyber incidents and not prevent the potential harms described above in the vast majority of cases.
  • Law enforcement delay: The SEC’s proposed rule considers, and then rejects, a delay when incident reporting would hinder a law enforcement investigation. We believe this too would be an appropriate delay, to ensure law enforcement can help prevent future cyber incidents that would harm investors. However, it is unclear if this delay would be triggered in many cases. First, the SEC’s proposed timeframe (four days after concluding the incident is material) poses a tight turnaround for law enforcement to start a new investigation or add to an existing investigation, determine how disclosure might impact the investigation, and then request delay from the SEC. Second, law enforcement agencies already have investigations opened against many cybercriminal groups, so public disclosure of another incident may not make a significant difference in the investigation, even if public disclosure of the incident would cause harm. Although a law enforcement delay would be used more than the AG delay, we still anticipate it would apply to only a fraction of incidents.
  • Vague disclosures: Another potential solution is to continue to require public companies to disclose unmitigated cyber incidents on the proposed timeline, but to allow the disclosures to be so vague that it is unclear whether the incident has been mitigated. Yet an attacker embedded in a company network is unlikely to be fooled by a vague incident report from the same company, and even a vague report could encourage new attackers to try to get a foothold in. In addition, very vague disclosures are unlikely to be useful for investor decision-making.
  • Materiality after mitigation: Another potential solution is to require a materiality determination only after the incident has been mitigated. However, this risks unnecessary delays in mitigation to avoid triggering the deadline for disclosure, even for incidents that could be mitigated within the SEC’s proposed timeline. Although containment or mitigation of an incident is important prior to public disclosure of the incident, completion of mitigation is not necessarily a prerequisite to determining the seriousness (i.e., materiality) of an incident.

Balancing risks and benefits of transparency

The SEC has an extensive list of material information that it requires companies to disclose publicly on 8-Ks – everything from bankruptcies to mine safety. However, public disclosure of any of these other items is not likely to prompt new criminal actions that bring additional harm to investors. Public disclosure of unmitigated cyber incidents poses unique risks compared with other disclosures and should be considered in that light.

The SEC has long been among the most forward-looking regulators on cybersecurity issues. We thank them for the acknowledgement of the significance of cybersecurity to corporate management, and for taking the time to listen to feedback from the community. Rapid7’s feedback is that we agree on the usefulness of disclosure of material cybersecurity incidents, but we encourage SEC to ensure its public reporting requirement avoids undermining its own goals and providing more opportunities for attackers.

AWS CyberVadis report now available for due diligence on third-party suppliers

Post Syndicated from Andreas Terwellen original https://aws.amazon.com/blogs/security/aws-cybervadis-report-now-available-for-due-diligence-on-third-party-suppliers/

At Amazon Web Services (AWS), we’re continuously expanding our compliance programs to provide you with more tools and resources to perform effective due diligence on AWS. We’re excited to announce the availability of the AWS CyberVadis report to help you reduce the burden of performing due diligence on your third-party suppliers.

With the increase in adoption of cloud products and services across multiple sectors and industries, AWS is a critical component of customers’ third-party environments. Regulated customers, such as those in the financial services sector, are held to high standards by regulators and auditors when it comes to exercising effective due diligence on third parties.

Many customers use third-party cyber risk management (TPCRM) services such as CyberVadis to better manage risks from their evolving third-party environments and to drive operational efficiencies. To help with such efforts, AWS has completed the CyberVadis assessment of its security posture. CyberVadis security analysts perform the assessment and validate the results annually.

CyberVadis is a comprehensive third-party risk assessment process that combines the speed and scalability of automation with the certainty of analyst validation. The CyberVadis cybersecurity rating methodology assesses the maturity of a company’s information security management system (ISMS) through its policies, implementation measures, and results.

CyberVadis integrates responses from AWS with analytics and risk models to provide an in-depth view of the AWS security posture. The CyberVadis methodology maps to major international compliance standards, including the following:

Customers can download the AWS CyberVadis report at no additional cost. For details on how to access the report, see our AWS CyberVadis report page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

Want more AWS Security news? Follow us on Twitter.

Andreas Terwellen

Andreas is a senior manager in security audit assurance at AWS, based in Frankfurt, Germany. His team is responsible for third-party and customer audits, attestations, certifications, and assessments across Europe. Previously, he was a CISO in a DAX-listed telecommunications company in Germany. He also worked for different consulting companies managing large teams and programs across multiple industries and sectors.

Manuel Mazarredo

Manuel is a security audit program manager at AWS based in Amsterdam, the Netherlands. Manuel leads security audits, attestations, and certification programs across Europe, and is responsible for the BeNeLux area. For the past 18 years, he has worked in information systems audits, ethical hacking, project management, quality assurance, and vendor management across a variety of industries.

Navigating the Evolving Patchwork of Incident Reporting Requirements

Post Syndicated from Peter Woolverton original https://blog.rapid7.com/2022/08/10/navigating-the-evolving-patchwork-of-incident-reporting-requirements/

In March 2022, President Biden signed into law the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), a bipartisan initiative that empowers CISA to require cyber incident reporting from critical infrastructure owners and operators. Rapid7 is supportive of CIRCIA and cyber incident reporting in general, but we also encourage regulators to ensure reporting rules are streamlined and do not impose unnecessary burdens on companies that are actively recovering from cyber intrusions.

Although a landmark legislative change, CIRCIA is just one highly visible example of a broader trend: incident reporting has emerged as a predominant cybersecurity regulatory strategy across government. Numerous federal and state agencies are implementing their own cyber incident reporting requirements under their respective rulemaking authorities – among them the SEC, FTC, Federal Reserve, OCC, NCUA, NERC, TSA, and NYDFS. Several such rules are already in force in US law, and at least three more are likely to become effective within the next year.

The trend is not limited to the US. Several international governing bodies have proposed similar cyber incident reporting rules, such as the European Union’s (EU) NIS-2 Directive.

Raising the bar for security transparency through incident reporting is a productive step in a positive direction. Incident reporting requirements can help the government manage sectoral risk, encourage a higher level of private-sector cyber hygiene, and enhance intrusion remediation and prevention capabilities. But the rapid embrace of this new legal paradigm may have created too much of a good thing, and the emerging regulatory environment risks becoming unmanageable.

Current state

Cyber incident reporting rules that enforce overlapping or contradictory requirements can impose undue compliance burdens on organizations that are actively responding to cyberattacks. To illustrate the problem, consider the potential experience of a hypothetical company – let's call it Energy1. Energy1 is a US-based, publicly traded utility company that owns and operates energy generation plants, electrical transmission systems, and natural gas distribution lines. If Energy1 experiences a significant cyberattack, it may be required to submit the following reports:

  • Within one hour, provide to NERC – under NERC CIP rules – a report with preliminary details about the incident and its functional impact on operations.
  • Within 24 hours, provide to TSA – under the pipeline security directive – a report with a complete description of the incident, its functional impact on business operations, and the details of remediation steps.
  • Within 72 hours, provide to CISA – under CIRCIA – a complete description of the incident, details of remediation steps, and threat intelligence information that may identify the perpetrator.
  • Within 96 hours, provide to SEC – under the SEC’s proposed rule – a complete description of the incident and its impact, including whether customer data was compromised.

In our hypothetical scenario, Energy1 may need to rapidly compile the necessary information to comply with each different reporting rule or statute, all while balancing the urgent need to remediate and recover from a cyber intrusion. Furthermore, if Energy1 operates in non-US markets as well, it may be subject to several more reporting requirements, such as those proposed under the draft NIS-2 Directive in the EU or the CERT-IN rule in India. Many of these regulations would also require subsequent status updates after the initial report.
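To make the compliance clock concrete, here is a minimal TypeScript sketch that computes when each report from the hypothetical above would fall due after detection. The deadlines mirror the Energy1 scenario only, and the data structure is our own illustration, not any agency's schema.

// Illustrative only: deadlines taken from the hypothetical Energy1 scenario above.
interface ReportingObligation {
  regulator: string;
  authority: string;
  hoursAfterDetection: number;
}

const obligations: ReportingObligation[] = [
  { regulator: 'NERC', authority: 'NERC CIP rules', hoursAfterDetection: 1 },
  { regulator: 'TSA', authority: 'Pipeline security directive', hoursAfterDetection: 24 },
  { regulator: 'CISA', authority: 'CIRCIA', hoursAfterDetection: 72 },
  { regulator: 'SEC', authority: 'Proposed disclosure rule', hoursAfterDetection: 96 },
];

// Given a detection time, compute the due time for each report, earliest first.
function reportingSchedule(detectedAt: Date): { regulator: string; dueBy: Date }[] {
  return obligations
    .map((o) => ({
      regulator: o.regulator,
      dueBy: new Date(detectedAt.getTime() + o.hoursAfterDetection * 60 * 60 * 1000),
    }))
    .sort((a, b) => a.dueBy.getTime() - b.dueBy.getTime());
}

console.log(reportingSchedule(new Date()));

Four distinct deadlines within four days, each tied to a different format and submission channel, is the heart of the compliance burden discussed below.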

The example above demonstrates the complexity of the emerging patchwork of incident reporting requirements. Legal compliance in this new environment creates a number of challenges for the private sector and the government. For example:

  • Redundant requirements: Unnecessarily duplicative compliance requirements imposed in the wake of a cyber incident can draw critical resources away from incident remediation, potentially leading to lower-quality data submitted in the reports.
  • Public vs. private disclosure: Most reports are held privately by regulators, but the SEC’s proposed rule would require companies to file public reports within 96 hours of determining that an incident is material. Public disclosure before the incident is contained or mitigated may expose the affected company to further risk of cyberattack. In addition, premature public reporting of incidents prior to mitigation may not provide an accurate reflection of the affected company’s cyber incident response capabilities.
  • Inconsistent requirements: The definition of what is reportable is not consistent across agency rules. For example, the SEC requires reporting of cyber incidents that are “material” to a reasonable investor, whereas NERC requires reporting of almost any cyber incident, including failed “attempts” at cyber intrusion. The lack of a uniform definition of reportability adds another layer of complexity to the compliance process.
  • Process inconsistencies: As demonstrated in the Energy1 example, all incident reporting rules and proposed rules have different deadlines. In addition, each rule and proposed rule has different required reporting formats and methods of submission. These process inconsistencies add friction to the compliance process.

Recommendations

The key issues outlined above may be addressed by the Cyber Incident Reporting Council (CIRC), an interagency working group led by the Department of Homeland Security (DHS). This Council was established under CIRCIA and is tasked with harmonizing existing incident reporting requirements into a more unified regulatory regime. A readout of the Council’s first meeting, convened on July 25, stated CIRC’s intent to “reduce [the] burden on industry by advancing common standards for incident reporting.”

In addition to DHS, CIRC includes representatives from across government, including from the Departments of Justice, Commerce, Treasury, and Energy, among others. It is not yet clear from the Council’s initial meeting exactly how CIRC will reshape cyber incident reporting regulations, or whether such changes will be achievable through executive action or will require new legislation. The Council will release a report with recommendations by the end of 2022.

Rapid7 urges CIRC to consider several harmonization strategies intended to streamline compliance while maintaining the benefits of cyber incident reporting, such as:

  • Unified process: When practically possible, develop a single intake point for all incident reporting submissions with a universal format accepted by multiple agencies. This would help eliminate the need for organizations to submit several reports to different agencies with different formats and on different timetables.
  • Deconflicted requirements: Agree on a more unified definition of what constitutes a reportable cyber incident, and build toward more consistent reporting requirements that satisfy the needs of multiple agency rules.
  • Public disclosure delay: Releasing incident reports publicly before affected organizations have time to contain the breach may put the security of the company and its customers at unnecessary risk. Requirements that involve public disclosure, such as proposed rules from the SEC and FTC, should consider delaying and coordinating disclosure timing with the affected company.

Some agencies in the federal government are already designing incident reporting rules with harmonization in mind. The Federal Reserve, FDIC, and OCC, rather than building out three separate rules, designed a single universal incident reporting requirement for all three agencies. The rule requires that only one report be submitted to whichever of the three agencies is the affected company’s “primary regulator.” The sharing of reports between agencies is handled internally, removing from companies the burden of submitting multiple reports to multiple agencies. Rapid7 supports this approach and encourages the CIRC to pursue a similarly streamlined strategy in its harmonization efforts where possible.

Striking the right balance

Rapid7 supports the growing adoption of cyber incident reporting. Greater cybersecurity transparency between government and industry can deliver considerable benefits. However, unnecessarily overlapping or contradictory reporting requirements may cause harm by detracting from the critical work of incident response and recovery. We encourage regulators to streamline and simplify the process in order to capture the full benefits of incident reporting without exposing organizations to unnecessary burden or risk in the process.


Spring 2022 PCI DSS report available with seven services added to compliance scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/spring-2022-pci-dss-report-available-with-seven-services-added-to-compliance-scope/

We’re continuing to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that seven new services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This provides our customers with more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in AWS.

You can see the full list of services on our Services in Scope by Compliance program page. The seven new services are:

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). Customers can access the Attestation of Compliance (AOC) report demonstrating AWS’ PCI compliance status through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

ISO 27002 Emphasizes Need For Threat Intelligence

Post Syndicated from Drew Burton original https://blog.rapid7.com/2022/07/25/iso-27002-emphasizes-need-for-threat-intelligence/

With employees reluctant to return to the office following the COVID-19 pandemic, the concept of a well-defined network perimeter has become a thing of the past for many organizations. Attack surfaces continue to expand, and as a result, threat intelligence has taken on even greater importance.

Earlier this year, the International Organization for Standardization (ISO) released an updated version of ISO 27002, which features a dedicated threat intelligence control (Control 5.7). This control is aimed at helping organizations collect and analyze threat intelligence data more effectively, and it provides guidelines for creating policies that limit the impact of threats. In short, ISO 27002’s Control 5.7 encourages a proactive approach to threat intelligence.

Control 5.7 specifies that threat intelligence must be “relevant, perceptive, contextual, and actionable” in order to be effective. It also recommends that organizations consider threat intelligence on three levels: strategic, operational, and tactical.

  • Strategic threat intelligence is high-level information about the evolving threat landscape (threat actors, types of attacks, and so on).
  • Operational threat intelligence is information about the tactics, techniques, and procedures (TTPs) used by attackers.
  • Tactical threat intelligence includes detailed information on particular attacks, including technical indicators.

ISO 27002 is intended to be used with ISO 27001, which provides guidance for establishing and maintaining information security management systems. Many organizations use ISO 27001 and 27002 in conjunction as a framework for demonstrating compliance with regulations that do not provide detailed requirements, such as the Sarbanes-Oxley Act (SOX) in the US and the Data Protection Directive in the EU.

How Rapid7 can help

In addition to our threat intelligence and digital risk protection solution Threat Command, there are several Rapid7 products and services that can help you address a variety of controls recommended in ISO 27002.

InsightVM identifies and classifies assets, audits password policies, and identifies and prioritizes vulnerabilities. Metasploit can be used to validate vulnerability exploitability, audit the effectiveness of network segmentation, and conduct technical compliance tests. InsightAppSec tests the security of web applications. InsightIDR monitors user access to the network, collects and analyzes events, and assists in incident response.

Additionally, Rapid7 can provide security consulting services, perform an assessment of your organization’s current state of controls against the ISO 27002 framework, and identify gaps in your security program. We can also develop and review security policies, conduct penetration tests, respond to security incidents, and more.

Addressing ISO 27002 Control 5.7

A dedicated threat intelligence and digital risk protection solution like Rapid7 Threat Command can greatly ease the process of addressing Control 5.7.

Threat Command is designed to simplify the collection and analysis of threat intelligence data — from detection to remediation. It proactively monitors thousands of sources across the clear, deep, and dark web and delivers threat intelligence tailored to your organization. Even better, Threat Command helps reduce information overload by providing comprehensive external threat protection from a single pane of glass.

Threat Command enables you to make informed decisions, rapidly detect and mitigate threats, and minimize your organization’s exposure. Simply input your digital assets and properties, and you’ll receive relevant alerts categorized by severity, type of threat, and source. Fast detection and integration with SIEM, SOAR, EDR, and firewall tools allow you to quickly turn threat intelligence into action.

To learn more about how Threat Command fits into your organization’s security strategy, schedule a demo today.


AWS achieves TISAX certification (Information with Very High Protection Needs (AL3))

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-tisax-certification-information-with-very-high-protection-needs-al3/

We’re excited to announce the completion of the Trusted Information Security Assessment Exchange (TISAX) certification on June 30, 2022, for 19 AWS Regions. These Regions achieved the Information with Very High Protection Needs (AL3) label for the control domains Information Handling and Data Protection. This alignment with TISAX requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS automotive customers can run their applications in the certified Regions with confidence.

The following 19 Regions are currently TISAX certified:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Oregon)
  • Africa (Cape Town)
  • Asia Pacific (Hong Kong)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Osaka)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Milan)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

TISAX is a European automotive industry-standard information security assessment (ISA) catalog based on key aspects of information security, such as data protection and connection to third parties.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the European Network Exchange (ENX) Portal (the scope ID and assessment ID are SM22TH and AYA2D4-1, respectively) and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose TISAX.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about TISAX compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

AWS achieves HDS certification to three additional Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-hds-certification-to-three-additional-regions/

We’re excited to announce that three additional AWS Regions—Asia Pacific (Korea), Europe (London), and Europe (Stockholm)—have been granted the Health Data Hosting (Hébergeur de Données de Santé, HDS) certification. This alignment with the HDS requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS customers who handle personal health data can host it in the certified Regions with confidence.

The following 16 Regions are now in scope of this certification:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

Introduced by the French governmental agency for health, Agence Française de la Santé Numérique (ASIP Santé), HDS certification aims to strengthen the security and protection of personal health data. Achieving this certification demonstrates that AWS provides a framework for technical and governance measures to secure and protect personal health data, governed by French law.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the Agence du Numérique en Santé (ANS) website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose HDS.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about HDS compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

Manage application security and compliance with the AWS Cloud Development Kit and cdk-nag

Post Syndicated from Rodney Bozo original https://aws.amazon.com/blogs/devops/manage-application-security-and-compliance-with-the-aws-cloud-development-kit-and-cdk-nag/

Infrastructure as Code (IaC) is an important part of cloud applications. Developers rely on various Static Application Security Testing (SAST) tools to identify security and compliance issues and to mitigate them early, before releasing their applications to production. Additionally, SAST tools often provide reporting mechanisms that can help developers verify compliance during security reviews.

cdk-nag integrates directly into AWS Cloud Development Kit (AWS CDK) applications to provide identification and reporting mechanisms similar to SAST tooling.

This post demonstrates how to integrate cdk-nag into an AWS CDK application to provide continual feedback and help align your applications with best practices.

Overview of cdk-nag

cdk-nag (inspired by cfn_nag) validates that the state of constructs within a given scope complies with a given set of rules. Additionally, cdk-nag provides a rule suppression and compliance reporting system. cdk-nag validates constructs by extending AWS CDK Aspects. If you’re interested in learning more about the AWS CDK Aspect system, then you should check out this post.

cdk-nag includes several rule sets (NagPacks) to validate your application against. As of this post, cdk-nag includes the AWS Solutions, HIPAA Security, NIST 800-53 rev 4, NIST 800-53 rev 5, and PCI DSS 3.2.1 NagPacks. You can pick and choose different NagPacks and apply as many as you wish to a given scope.
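As a rough sketch of what picking and choosing NagPacks looks like in practice, the following TypeScript snippet applies two packs to the same application (AwsSolutionsChecks and HIPAASecurityChecks are both exported by cdk-nag at the time of writing; the combination shown is our own example, not a recommendation):

import * as cdk from 'aws-cdk-lib';
import { Aspects } from 'aws-cdk-lib';
import { AwsSolutionsChecks, HIPAASecurityChecks } from 'cdk-nag';

const app = new cdk.App();
// Each NagPack added to a scope independently evaluates every construct within it.
Aspects.of(app).add(new AwsSolutionsChecks());
Aspects.of(app).add(new HIPAASecurityChecks({ verbose: true }));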

cdk-nag rules can either be warnings or errors. Both warnings and errors will be displayed in the console and compliance reports. Only unsuppressed errors will prevent applications from deploying with the cdk deploy command.

You can see which rules are implemented in each of the NagPacks in the Rules Documentation in the GitHub repository.

Walkthrough

This walkthrough will set up a minimal AWS CDK v2 application and demonstrate how to apply a NagPack to the application, how to suppress rules, and how to view a report of the findings. Although cdk-nag has support for Python, TypeScript, Java, and .NET AWS CDK applications, we’ll use TypeScript for this walkthrough.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • A local installation of the AWS CDK and experience using it.

Create a baseline AWS CDK application

In this section you will create and synthesize a small AWS CDK v2 application with an Amazon Simple Storage Service (Amazon S3) bucket. If you are unfamiliar with the AWS CDK, learn how to install and set it up from the open source GitHub repository.

  1. Run the following commands to create the AWS CDK application:
mkdir CdkTest
cd CdkTest
cdk init app --language typescript
  2. Replace the contents of lib/cdk_test-stack.ts with the following:
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';

export class CdkTestStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const bucket = new Bucket(this, 'Bucket')
  }
}
  3. Run the following commands to install dependencies and synthesize our sample app:
npm install
npx cdk synth

You should see an AWS CloudFormation template with an S3 bucket both in your terminal and in cdk.out/CdkTestStack.template.json.

Apply a NagPack in your application

In this section, you’ll install cdk-nag, include the AwsSolutions NagPack in your application, and view the results.

  1. Run the following command to install cdk-nag:
npm install cdk-nag
  2. Replace the contents of bin/cdk_test.ts with the following:
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { CdkTestStack } from '../lib/cdk_test-stack';
import { AwsSolutionsChecks } from 'cdk-nag'
import { Aspects } from 'aws-cdk-lib';

const app = new cdk.App();
// Add the cdk-nag AwsSolutions Pack with extra verbose logging enabled.
Aspects.of(app).add(new AwsSolutionsChecks({ verbose: true }))
new CdkTestStack(app, 'CdkTestStack', {});
  3. Run the following command to view the output and generate the compliance report:
npx cdk synth

The output should look similar to the following (Note: SSE stands for Server-side encryption):

[Error at /CdkTestStack/Bucket/Resource] AwsSolutions-S1: The S3 Bucket has server access logs disabled. The bucket should have server access logging enabled to provide detailed records for the requests that are made to the bucket.

[Error at /CdkTestStack/Bucket/Resource] AwsSolutions-S2: The S3 Bucket does not have public access restricted and blocked. The bucket should have public access restricted and blocked to prevent unauthorized access.

[Error at /CdkTestStack/Bucket/Resource] AwsSolutions-S3: The S3 Bucket does not default encryption enabled. The bucket should minimally have SSE enabled to help protect data-at-rest.

[Error at /CdkTestStack/Bucket/Resource] AwsSolutions-S10: The S3 Bucket does not require requests to use SSL. You can use HTTPS (TLS) to help prevent potential attackers from eavesdropping on or manipulating network traffic using person-in-the-middle or similar attacks. You should allow only encrypted connections over HTTPS (TLS) using the aws:SecureTransport condition on Amazon S3 bucket policies.

Found errors

Note that applying the AwsSolutions NagPack to the application rendered several errors in the console (AwsSolutions-S1, AwsSolutions-S2, AwsSolutions-S3, and AwsSolutions-S10). Furthermore, the cdk.out/AwsSolutions-CdkTestStack-NagReport.csv contains the errors as well:

Rule ID,Resource ID,Compliance,Exception Reason,Rule Level,Rule Info
"AwsSolutions-S1","CdkTestStack/Bucket/Resource","Non-Compliant","N/A","Error","The S3 Bucket has server access logs disabled."
"AwsSolutions-S2","CdkTestStack/Bucket/Resource","Non-Compliant","N/A","Error","The S3 Bucket does not have public access restricted and blocked."
"AwsSolutions-S3","CdkTestStack/Bucket/Resource","Non-Compliant","N/A","Error","The S3 Bucket does not default encryption enabled."
"AwsSolutions-S5","CdkTestStack/Bucket/Resource","Compliant","N/A","Error","The S3 static website bucket either has an open world bucket policy or does not use a CloudFront Origin Access Identity (OAI) in the bucket policy for limited getObject and/or putObject permissions."
"AwsSolutions-S10","CdkTestStack/Bucket/Resource","Non-Compliant","N/A","Error","The S3 Bucket does not require requests to use SSL."

Remediating and suppressing errors

In this section, you’ll remediate the AwsSolutions-S10 error, suppress the AwsSolutions-S1 error at the stack level, suppress the AwsSolutions-S2 error at the resource level, leave the AwsSolutions-S3 error unremediated, and view the results.

  1. Replace the contents of the lib/cdk_test-stack.ts with the following:
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { NagSuppressions } from 'cdk-nag'

export class CdkTestStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // The local scope 'this' is the Stack. 
    NagSuppressions.addStackSuppressions(this, [
      {
        id: 'AwsSolutions-S1',
        reason: 'Demonstrate a stack level suppression.'
      },
    ])
    // Remediating AwsSolutions-S10 by enforcing SSL on the bucket.
    const bucket = new Bucket(this, 'Bucket', { enforceSSL: true })
    NagSuppressions.addResourceSuppressions(bucket, [
      {
        id: 'AwsSolutions-S2',
        reason: 'Demonstrate a resource level suppression.'
      },
    ])
  }
}
  2. Run the cdk synth command again:
npx cdk synth

The output should look similar to the following:

[Error at /CdkTestStack/Bucket/Resource] AwsSolutions-S3: The S3 Bucket does not default encryption enabled. The bucket should minimally have SSE enabled to help protect data-at-rest.

Found errors

The cdk.out/AwsSolutions-CdkTestStack-NagReport.csv contains more details about rule compliance, non-compliance, and suppressions.

Rule ID,Resource ID,Compliance,Exception Reason,Rule Level,Rule Info
"AwsSolutions-S1","CdkTestStack/Bucket/Resource","Suppressed","Demonstrate a stack level suppression.","Error","The S3 Bucket has server access logs disabled."
"AwsSolutions-S2","CdkTestStack/Bucket/Resource","Suppressed","Demonstrate a resource level suppression.","Error","The S3 Bucket does not have public access restricted and blocked."
"AwsSolutions-S3","CdkTestStack/Bucket/Resource","Non-Compliant","N/A","Error","The S3 Bucket does not default encryption enabled."
"AwsSolutions-S5","CdkTestStack/Bucket/Resource","Compliant","N/A","Error","The S3 static website bucket either has an open world bucket policy or does not use a CloudFront Origin Access Identity (OAI) in the bucket policy for limited getObject and/or putObject permissions."
"AwsSolutions-S10","CdkTestStack/Bucket/Resource","Compliant","N/A","Error","The S3 Bucket does not require requests to use SSL."

Moreover, note that the resultant cdk.out/CdkTestStack.template.json template contains the cdk-nag suppression data. This provides transparency into which rules weren’t applied to an application, as the suppression data is included in the resources.

{
  "Metadata": {
    "cdk_nag": {
      "rules_to_suppress": [
        {
          "id": "AwsSolutions-S1",
          "reason": "Demonstrate a stack level suppression."
        }
      ]
    }
  },
  "Resources": {
    "BucketDEB6E181": {
      "Type": "AWS::S3::Bucket",
      "UpdateReplacePolicy": "Retain",
      "DeletionPolicy": "Retain",
      "Metadata": {
        "aws:cdk:path": "CdkTestStack/Bucket/Resource",
        "cdk_nag": {
          "rules_to_suppress": [
            {
              "id": "AwsSolutions-S2",
              "reason": "Demonstrate a resource level suppression."
            }
          ]
        }
      }
    },
  ...
  },
  ...
}
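As an aside, if you later chose to remediate the remaining AwsSolutions-S3 finding rather than leave it non-compliant, a minimal sketch (inside CdkTestStack) would be to enable default encryption on the bucket; BucketEncryption is part of aws-cdk-lib/aws-s3:

import { Bucket, BucketEncryption } from 'aws-cdk-lib/aws-s3';

// S3-managed server-side encryption (SSE-S3) addresses the default-encryption finding.
const bucket = new Bucket(this, 'Bucket', {
  enforceSSL: true,
  encryption: BucketEncryption.S3_MANAGED,
});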

Reflecting on the Walkthrough

In this section, you learned how to apply a NagPack to your application, remediate or suppress warnings and errors, and review the compliance reports. The reporting and suppression systems provide mechanisms for the development and security teams within organizations to work together to identify and mitigate potential security and compliance issues. Security teams can choose which NagPacks developers should apply to their applications. Then, developers can use the feedback to quickly remediate issues. Security teams can use the reports to validate compliance. Furthermore, developers and security teams can work together to use suppressions to transparently document exceptions to rules that they’ve decided not to follow.

Advanced usage and further reading

This section briefly covers some advanced options for using cdk-nag.

Unit Testing with the AWS CDK Assertions Library

The Annotations submodule of the AWS CDK assertions library lets you check for cdk-nag warnings and errors without AWS credentials by integrating a NagPack into your application unit tests. Read this post for further information about the AWS CDK assertions module. The following is an example of using assertions with a TypeScript AWS CDK application and Jest for unit testing.

import { Annotations, Match } from 'aws-cdk-lib/assertions';
import { App, Aspects, Stack } from 'aws-cdk-lib';
import { AwsSolutionsChecks } from 'cdk-nag';
import { CdkTestStack } from '../lib/cdk_test-stack';

describe('cdk-nag AwsSolutions Pack', () => {
  let stack: Stack;
  let app: App;
  // In this case we can use beforeAll() over beforeEach() since our tests 
  // do not modify the state of the application 
  beforeAll(() => {
    // GIVEN
    app = new App();
    stack = new CdkTestStack(app, 'test');

    // WHEN
    Aspects.of(stack).add(new AwsSolutionsChecks());
  });

  // THEN
  test('No unsuppressed Warnings', () => {
    const warnings = Annotations.fromStack(stack).findWarning(
      '*',
      Match.stringLikeRegexp('AwsSolutions-.*')
    );
    expect(warnings).toHaveLength(0);
  });

  test('No unsuppressed Errors', () => {
    const errors = Annotations.fromStack(stack).findError(
      '*',
      Match.stringLikeRegexp('AwsSolutions-.*')
    );
    expect(errors).toHaveLength(0);
  });
});

Additionally, many testing frameworks include watch functionality: a background process that reruns your tests whenever files in your project change, providing fast feedback. For example, when using the AWS CDK in JavaScript/TypeScript, you can use the Jest CLI watch commands. When Jest watch detects a file change, it attempts to run the unit tests related to the changed file. This can be used to automatically run cdk-nag-related tests when making changes to your AWS CDK application.
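For example, assuming Jest is configured as in a default cdk init project, you can start watch mode from the project root:

npx jest --watch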

CDK Watch

When developing in non-production environments, consider using AWS CDK Watch with a NagPack for fast feedback. AWS CDK Watch attempts to synthesize and then deploy changes whenever you save changes to your files. Aspects are run during synthesis. Therefore, any NagPacks applied to your application will also run on save. As in the walkthrough, all of the unsuppressed errors will prevent deployments, all of the messages will be output to the console, and all of the compliance reports will be generated. Read this post for further information about AWS CDK Watch.
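For example, assuming your AWS credentials are configured, you can start watch mode from the application root:

npx cdk watch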

Conclusion

In this post, you learned how to use cdk-nag in your AWS CDK applications. To learn more about using cdk-nag in your applications, check out the README in the GitHub Repository. If you would like to learn how to create your own rules and NagPacks, then check out the developer documentation. The repository is open source and welcomes community contributions and feedback.

Author

Arun Donti

Arun Donti is a Senior Software Engineer with Twitch. He loves building automated processes and tools that enable builders and organizations to focus on and deliver their mission-critical needs. You can find him on GitHub.

Cloudflare achieves key cloud computing certifications — and there’s more to come

Post Syndicated from Rory Malone original https://blog.cloudflare.com/iso-27018-second-privacy-certification-and-c5/

Back in the early days of the Internet, you could physically see the hardware where your data was stored. You knew where your data was and what kind of locks and security protections you had in place. Fast-forward a few decades, and data is all “in the cloud”. Now, you have to trust that your cloud services provider is putting security precautions in place just as you would have if your data was still sitting on your hardware. The good news is, you don’t have to merely trust your provider anymore. There are a number of ways a cloud services provider can prove it has robust privacy and security protections in place.

Today, we are excited to announce that Cloudflare has taken three major steps forward in proving the security and privacy protections we provide to customers of our cloud services: we achieved a key cloud services certification, ISO/IEC 27018:2019; we completed our independent audit and received our Cloud Computing Compliance Criteria Catalog (“C5”) attestation; and we have joined the EU Cloud Code of Conduct General Assembly to help increase the impact of the trusted cloud ecosystem and encourage more organizations to adopt GDPR-compliant cloud services.

Cloudflare has been committed to data privacy and security since our founding, and it is important to us that we can demonstrate these commitments. Certification provides assurance to our customers that a third party has independently verified that Cloudflare meets the requirements set out in the standard.

ISO/IEC 27018:2019 – Cloud Services Certification

2022 has been a big year for people who like the number ‘two’. February marked the moment when 22nd Feb 2022 20:22:02 passed: the second second of the twenty-second minute of the twentieth hour of the twenty-second day of the second month, of the year twenty-twenty-two! As well as the date being a palindrome — something that reads the same forwards and backwards — on a vintage ‘80s LCD clock, the date and time could be written as an ambigram too — something that can be read upside down as well as the right way up.

When we hit 2022 02 22, our team was busy completing our second annual audit to certify to ISO/IEC 27701:2019, having been one of the first organizations in our industry to achieve this new ISO privacy certification in 2021, and the first Internet performance and security company to be certified to it. And Cloudflare has now been certified to a second international privacy standard related to the processing of personal data — ISO/IEC 27018:2019.1

ISO 27018 is a privacy extension to the widespread industry standards ISO/IEC 27001 and ISO/IEC 27002, which describe how to establish and run an Information Security Management System. ISO 27018 extends the standards into a code of practice for how any personal information should be protected when processed in a public cloud, such as Cloudflare’s.

What does ISO 27018 mean for Cloudflare customers?

Put simply, with Cloudflare’s certifications to both ISO 27701 and ISO 27018, customers can be assured that Cloudflare both has a privacy program that meets GDPR-aligned industry standards and also that Cloudflare protects the personal data processed in our network as part of that privacy program.

These certifications, in addition to the Data Processing Addendum (“DPA”) we make available to our customers, offer our customers multiple layers of assurance that any personal data that Cloudflare processes on their behalf will be handled in a way that meets the GDPR’s requirements.

The ISO 27018 standard contains enhancements to existing ISO 27002 controls and an additional set of 25 controls identified for organizations that are personal data processors. Controls are essentially a set of best practices that processors must meet in terms of data handling practices and transparency about those practices, protecting and encrypting the personal data processed, and handling data subject rights, among others. As an example, one of the ISO 27018 requirements is:

Where the organization is contracted to process personal data, that personal data may not be used for the purpose of marketing and advertising without establishing that prior consent was obtained from the appropriate data subject. Such consent shall not be a condition for receiving the service.

When Cloudflare acts as a data processor for our customers’ data, that data (and any personal data it may contain) belongs to our customers, not to us. Cloudflare does not track our customers’ end users for marketing or advertising purposes, and we never will. We even went beyond what the ISO control required and added this commitment to our customer DPA:

“… Cloudflare shall not use the Personal Data for the purposes of marketing or advertising…”
– 3.1(b), Cloudflare Data Protection Addendum

Cloudflare achieves ISO 27018:2019 Certification

For ISO 27018, Cloudflare was assessed by a third-party auditor, Schellman, between December 2021 and February 2022. Certifying to an ISO privacy standard is a multi-step process that includes an internal and an external audit before finally being certified against the standard by the independent auditor. Cloudflare’s new single joint certificate, covering ISO 27001:2013, ISO 27018:2019, and ISO 27701:2019, is now available to download from the Cloudflare Dashboard.

C5:2020 – Cloud Computing Compliance Criteria Catalog

ISO 27018 isn’t all we’re announcing: as we blogged in February, Cloudflare has also been undergoing a separate independent audit for the Cloud Computing Compliance Criteria Catalog certification — also known as C5 — which was introduced by the German government’s Federal Office for Information Security (“BSI”) in 2016 and updated in 2020. C5 evaluates an organization’s security program against a standard of robust cloud security controls. Both German government agencies and private companies place a high level of importance on aligning their cloud computing requirements with these standards. Learn more about C5 here.

Today, we’re excited to announce that we have completed our independent audit and received our C5 attestation from our third-party auditors. The C5 attestation report is now available to download from the Cloudflare Dashboard.

And we’re not done yet…

When the European Union’s benchmark-setting General Data Protection Regulation (“GDPR”) was adopted four years ago this week, Article 40 encouraged:

“…the drawing up of codes of conduct intended to contribute to the proper application of this Regulation, taking account of the specific features of the various processing sectors and the specific needs of micro, small and medium-sized enterprises.”

The first code officially approved as GDPR-compliant by the EU, one year ago this past weekend, was ‘The EU Cloud Code of Conduct’. This code is designed to help cloud service providers demonstrate the protections they provide for the personal data they process on behalf of their customers. It covers all cloud service layers, and compliance is overseen by the accredited monitoring body SCOPE Europe. Cloud service providers first join as members of the code’s General Assembly, and then, as a second step, undergo an audit to validate their adherence to the code.

Today, we are pleased to announce that Cloudflare has joined the General Assembly of the EU Cloud Code of Conduct. We look forward to the second stage in this process: undertaking our audit and publicly affirming our compliance with the GDPR as a processor of personal data.

Cloudflare Certifications

Customers may now download a copy of Cloudflare’s certifications and reports from the Cloudflare Dashboard; new customers may request these from their sales representative. For the latest information about our certifications and reports, please visit our Trust Hub.


1The International Organization for Standardization (“ISO”) is an international, nongovernmental organization made up of national standards bodies that develops and publishes a wide range of proprietary, industrial, and commercial standards.

LGPD workbook for AWS customers managing personally identifiable information in Brazil

Post Syndicated from Rodrigo Fiuza original https://aws.amazon.com/blogs/security/lgpd-workbook-for-aws-customers-managing-personally-identifiable-information-in-brazil/

AWS is pleased to announce the publication of the Brazil General Data Protection Law Workbook.

The General Data Protection Law (LGPD) in Brazil was first published on 14 August 2018 and came into effect on 18 August 2020. Companies that manage personally identifiable information (PII) in Brazil, as defined by the LGPD, will have to comply with the law.

To better help customers prepare and implement controls that focus on LGPD Chapter VII Security and Best Practices, AWS created a workbook based on industry best practices, AWS service offerings, and controls.

Amongst other topics, this workbook covers information security and AWS controls from:

In combination with the Brazil General Data Protection Law Workbook, customers can use the detailed Navigating LGPD Compliance on AWS whitepaper.

AWS adheres to a shared responsibility model. Customers will need to evaluate which services offer privacy features and determine their applicability to their specific compliance requirements. Further information about data privacy at AWS can be found at our Data Privacy Center. Specific information about the LGPD and data privacy at AWS in Brazil can be found on our Brazil Data Privacy page.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.
Want more AWS Security news? Follow us on Twitter.

Author

Rodrigo Fiuza

Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo has worked in risk management, security assurance, and technology audits for the past 12 years.