Tag Archives: Governance

New IAMCTL tool compares multiple IAM roles and policies

Post Syndicated from Sudhir Reddy Maddulapally original https://aws.amazon.com/blogs/security/new-iamctl-tool-compares-multiple-iam-roles-and-policies/

If you have multiple Amazon Web Services (AWS) accounts with AWS Identity and Access Management (IAM) roles that are supposed to be similar across those accounts, the roles can deviate over time from your intended baseline due to manual, out-of-band changes, a deviation known as drift. As part of regular compliance checks, you should confirm that these roles have no deviations. In this post, we present a tool called IAMCTL that you can use to extract the IAM roles and policies from two accounts, compare them, and report the differences and statistics. We'll explain how to use the tool and describe the key concepts.

Prerequisites

Before you install IAMCTL and start using it, here are a few prerequisites that need to be in place on the computer where you will run it:

To follow along in your environment, clone the files from the GitHub repository, and run the steps in order. You won’t incur any charges to run this tool.
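
For example, assuming the tool lives in the aws-samples/aws-iamctl repository (a name inferred from the installation command in the next section), the clone command would look like this:

git clone https://github.com/aws-samples/aws-iamctl.git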

Install IAMCTL

This section describes how to install and run the IAMCTL tool.

To install and run IAMCTL

  1. At the command line, enter the following command:
pip3 install git+ssh://git@github.com/aws-samples/aws-iamctl.git
    

    You will see output similar to the following.

Figure 1: IAMCTL tool installation output

  2. To confirm that your installation was successful, enter the following command.
iamctl -h
    

    You will see results similar to those in figure 2.

Figure 2: IAMCTL help message

Now that you’ve successfully installed the IAMCTL tool, the next section will show you how to use the IAMCTL commands.

Example use scenario

Here is an example of how IAMCTL can be used to find differences in IAM roles between two AWS accounts.

A system administrator for a product team is trying to accelerate a product launch in the middle of testing cycles. Developers have found that the same version of their application behaves differently in the development environment as compared to the QA environment, and they suspect this behavior is due to differences in IAM roles and policies.

The application called “app1” primarily reads from an Amazon Simple Storage Service (Amazon S3) bucket, and runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In the development (DEV) account, the application uses an IAM role called “app1_dev” to access the S3 bucket “app1-dev”. In the QA account, the application uses an IAM role called “app1_qa” to access the S3 bucket “app1-qa”. This is depicted in figure 3.

Figure 3: Showing the “app1” application in the development and QA accounts

Setting up the scenario

To simulate this setup for the purposes of this walkthrough, you don’t need to create the EC2 instance or the S3 bucket; you only need to create the IAM role, its inline policy, and its trust policy.

As noted in the prerequisites, you will switch between the two AWS accounts by using the AWS CLI named profiles “dev-profile” and “qa-profile”, which are configured to point to the DEV and QA accounts respectively.
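
If you haven’t set up these profiles yet, you can create them with the AWS CLI; the profile names only need to match what you pass to IAMCTL later, and each command prompts you for that account’s access key ID, secret access key, default Region, and output format:

#configure a named profile for the DEV account
aws configure --profile dev-profile

#configure a named profile for the QA account
aws configure --profile qa-profile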

Start by using this command:

mkdir -p iamctl_test iamctl_test/dev iamctl_test/qa

The command creates a directory structure that looks like this:
iamctl_test
├── qa
└── dev

Now, switch to the dev folder to run all the following example commands against the DEV account, by using this command:

cd iamctl_test/dev

To create the required policies, first create a file named “app1_s3_access_policy.json” and add the following policy to it. You will use this file’s content as your role’s inline policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
         "arn:aws:s3:::app1-dev/shared/*"
            ]
        }
    ]
}

Second, create a file called “app1_trust_policy.json” and add the following policy to it. You will use this file’s content as your role’s trust policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now use the two files to create an IAM role named “app1_dev” in the DEV account by running the following commands in order:

#create role with trust policy

aws --profile dev-profile iam create-role --role-name app1_dev --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
 
aws --profile dev-profile iam put-role-policy --role-name app1_dev --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json
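
To verify what you just created, you can read the role and its inline policy back; these commands are a quick sanity check rather than part of the original walkthrough:

#confirm the role and its trust policy
aws --profile dev-profile iam get-role --role-name app1_dev

#confirm the inline policy attached to the role
aws --profile dev-profile iam get-role-policy --role-name app1_dev --policy-name s3_inline_policy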

In the QA account, the IAM role is named “app1_qa” and the S3 bucket is named “app1-qa”.

Repeat the steps from the prior example against the QA account, changing dev to qa where shown in the following code samples. Change the directory to qa by using this command:

cd ../qa

To create the required policies, first create a file called “app1_s3_access_policy.json” and add the following policy to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
         "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Next, create a file called “app1_trust_policy.json” and add the following policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now, use the two files to create an IAM role named “app1_qa” in your QA account by running the following commands in order:

#create role with trust policy
aws --profile qa-profile iam create-role --role-name app1_qa --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

So far, you have two accounts with an IAM role created in each of them for your application. In terms of permissions, there are no differences other than the name of the S3 bucket resource the permission is granted against.

You can expect IAMCTL to report a minimal set of differences between the DEV and QA accounts, assuming all other IAM roles and policies are the same. But to be sure about the current state of both accounts, IAMCTL provides a process called baselining.

Through baselining, you will generate an equivalency dictionary that captures the known string patterns used to reduce noise in the generated deviations list. You will then introduce a change into one of the IAM roles in your QA account and run a final IAMCTL diff to show the deviations.

Baselining

Baselining is the process of bringing two accounts to an “equivalence” state for IAM roles and policies by establishing a baseline, which future diff operations can leverage. The process is as simple as:

  1. Run the iamctl diff command.
  2. Capture all string substitutions into an equivalence dictionary to remove or reduce noise.
  3. Save the generated detailed files as a snapshot.

Now you can go through these steps for your baseline.

To run the iamctl diff command against these two accounts, first initialize IAMCTL by using the following commands.

#change directory from qa to iamctl_test
cd ..

#run iamctl init
iamctl init

The results of running the init command are shown in figure 4.

Figure 4: Output of the iamctl init command

If you look at the iamctl_test directory now, shown in figure 5, you can see that the init command created two files there.

Figure 5: The directory structure after running the init command

These two files are as follows:

  1. iam.json – A reference file that lists all AWS services and actions, among other things. IAMCTL uses this file to map the resources listed in an IAM policy to their corresponding AWS resources, based on Amazon Resource Name (ARN) regular expressions.
  2. equivalency_list.json – The default sample dictionary that IAMCTL uses to suppress false alarms when it compares two accounts. This is where you add the known string patterns that need to be substituted.

Note: A best practice is to make the directory where you store the equivalency dictionary, and from which you run IAMCTL, a Git repository. Doing this lets you capture additions and modifications to the equivalency dictionary as Git commits, which not only gives you an audit trail of your historical baselines but also gives context to those changes. However, this is not necessary for the regular functioning of IAMCTL.
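
As a minimal sketch of that practice, run the following from the iamctl_test directory:

#turn the working directory into a Git repository and record the baseline
git init
git add equivalency_list.json
git commit -m "Initial IAMCTL baseline"

Each later change to the equivalency dictionary then becomes its own commit, giving you the audit trail described above.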

Next, run the iamctl diff command:

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa

Figure 6: Result of diff command

Figure 6 shows the results of running the diff command. You can see that IAMCTL considers the app1_qa and app1_dev roles as unique to the DEV and QA accounts, respectively. This is because IAMCTL uses role names to decide whether to compare the role or tag the role as unique.

You will add the strings “dev” and “qa” to the equivalency dictionary to instruct IAMCTL to substitute occurrences of these two strings with “accountname”, by adding the following JSON to the equivalency_list.json file. This also cleans up the defaults already present in the file.

echo '{"accountname":["dev","qa"]}' > equivalency_list.json

Figure 7 shows the equivalency dictionary before you take these actions, and figure 8 shows the dictionary after these actions.

Figure 7: Equivalency dictionary before

Figure 8: Equivalency dictionary after

There’s another thing to notice here. In this example, one common role was flagged as having a difference. To know which role this is and what the difference is, go to the detail reports folder listed at the bottom of the summary report. The directory structure of this folder is shown in figure 9.

Notice that the reports are created under your home directory with a folder structure that mimics the time stamp down to the second. IAMCTL does this to maintain uniqueness for each run.
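
Because each run gets its own timestamped folder, a shell one-liner can help you jump to the most recent report directory; the glob below assumes the six-level year/month/day/hour/minute/second layout shown in the next command:

ls -dt ~/aws-idt/output/*/*/*/*/*/* | head -1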

tree /Users/<username>/aws-idt/output/2020/08/24/08/38/49/

Figure 9: Files written to the output reports directory

You can see there is a file called common_roles_in_dev_with_differences.csv, and it lists a role called “AwsSecurity***Audit”.

There is also a file called dev_to_qa_common_role_difference_items.csv, which lists the granular IAM items of the “AwsSecurity***Audit” role in the DEV account that differ from QA. All entries in that file have the DEV account number in the resource ARN, whereas in the qa_to_dev_common_role_difference_items.csv file, all entries for the same “AwsSecurity***Audit” role have the QA account number.

Add both of the account numbers to the equivalency dictionary to substitute them with a placeholder number, because you don’t want this role to get flagged as having differences.

echo '{"accountname":["dev","qa"],"000000000000":["123456789012","987654321098"]}' > equivalency_list.json

Now, re-run the diff command.

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa

As you can see in figure 10, you get back the result of the diff command that shows that the DEV account doesn’t have any differences in IAM roles as compared to the QA account.

Figure 10: Output showing no differences after completion of baselining

This concludes the baselining for your DEV and QA accounts. Now you will introduce a change.

Introducing drift

Drift occurs when there is a difference between the actual and expected values in the definition or configuration of a resource. There are several reasons why drift occurs, but for this scenario you will use an “intentional need to respond to a time-sensitive operational event” as the reason to introduce drift into what you have built so far.

To simulate this change, add “s3:PutObject” to the QA copy of the app1_s3_access_policy.json file, as shown in the following example.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:PutObject"
            ],
            "Resource": [
         "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Put this new inline policy on your existing role “app1_qa” by using this command:

aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

The following table represents the new drift in the accounts.

Action         Account-DEV (role name: app1_dev)    Account-QA (role name: app1_qa)
s3:Get*        Yes                                   Yes
s3:List*       Yes                                   Yes
s3:PutObject   No                                    Yes

Next, run the iamctl diff command to see how it picks up the drift from your previously baselined accounts.

#change directory from qa to iamctl_test
cd ..
iamctl diff dev-profile dev qa-profile qa

Figure 11: Output showing the one deviation that was introduced

You can see that IAMCTL now shows that the QA account has one difference as compared to DEV, which is what we expect based on the deviation you’ve introduced.

Open the file qa_to_dev_common_role_difference_items.csv to look at the one difference. Again, adjust the following example path to match the output path that the iamctl diff command prints at the bottom of the summary report in figure 11.

cat /Users/<username>/aws-idt/output/2020/09/18/07/38/15/qa_to_dev_common_role_difference_items.csv

As shown in figure 12, you can see that the file lists the specific S3 action “PutObject” with the role name and other relevant details.

Figure 12: Content of file qa_to_dev_common_role_difference_items.csv showing the one deviation that was introduced

You can use this information to remediate the deviation by performing corrective actions in either your DEV account or QA account. You can confirm the effectiveness of the corrective action by re-baselining to make sure that zero deviations appear.
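
For example, if you decide that the QA account should be brought back in line with DEV, one possible corrective action is to re-apply the original QA inline policy (the version without “s3:PutObject”) and then re-run the diff:

#restore the original inline policy in the QA account, then re-check
aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json
iamctl diff dev-profile dev qa-profile qa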

Conclusion

In this post, you learned how to use the IAMCTL tool to compare IAM roles between two accounts and arrive at a granular list of meaningful differences that can be used for compliance audits or for further remediation actions. If you’ve created your IAM roles by using an AWS CloudFormation stack, you can turn on drift detection to capture drift caused by changes made outside of AWS CloudFormation to those IAM resources. For more information about drift detection, see Detecting unmanaged configuration changes to stacks and resources. Lastly, see the GitHub repository where the tool is maintained, with documentation describing each of the subcommand concepts. We welcome any pull requests for issues and enhancements.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sudhir Reddy Maddulapally

Sudhir is a Senior Partner Solution Architect who builds SaaS solutions for partners by day and is a tech tinkerer by night. He enjoys trips to state and national parks, and Yosemite is his favorite thus far!

Author

Soumya Vanga

Soumya is a Cloud Application Architect with AWS Professional Services in New York, NY, helping customers design solutions and workloads and to adopt Cloud Native services.

Over 150 AWS services now have a security chapter

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/over-150-aws-services-now-have-security-chapter/

We’re happy to share an update on the service documentation initiative that we first told you about on the AWS Security Blog in June 2019. We’re excited to announce that over 150 services now have dedicated security chapters available in the AWS security documentation.

In case you aren’t familiar with the security chapters, they were developed to provide easy-to-find, easy-to-consume security content in existing service documentation, so you don’t have to refer to multiple sources when reviewing the security capabilities of an AWS service. The chapters align with the Security Epics of the AWS Cloud Adoption Framework (CAF), including information about the security ‘of’ the cloud and security ‘in’ the cloud, as outlined in the AWS Shared Responsibility Model. The chapters cover the following security topics from the CAF, as applicable for each AWS service:

  • Data protection
  • Identity and access management
  • Logging and monitoring
  • Compliance validation
  • Resilience
  • Infrastructure security
  • Configuration and vulnerability analysis
  • Security best practices

These topics also align with the control domains of many industry-recognized standards that customers use to meet their compliance needs when using cloud services. This enables customers to evaluate the services against the frameworks they are already using.

We thought it might be helpful to share some of the ways that we’ve seen our customers and partners use the security chapters as a resource to both assess services and configure them securely. We’ve seen customers develop formal service-by-service assessment processes that include key considerations, such as achieving compliance, data protection, isolation of compute environments, automating audits with APIs, and operational access and security, when determining how cloud services can help them address their regulatory obligations.

To support their cloud journey and digital transformation, Fidelity Investments established a Cloud Center of Excellence (CCOE) to assist and enable Fidelity business units to safely and securely adopt cloud services at scale. The CCOE security team created a collaborative approach, inviting business units to partner with them to identify use cases and perform service testing in a safe environment. This ongoing process enables Fidelity business units to gain service proficiency while working directly with the security team so that risks are properly assessed, minimized, and evidenced well before use in a production environment.

Steve MacIntyre, Cloud Security Lead at Fidelity Investments, explains how the availability of the chapters assists them in this process: “As a diversified financial services organization, it is critical to have a deep understanding of the security, data protection, and compliance features for each AWS offering. The AWS security “chapters” allow us to make informed decisions about the safety of our data and the proper configuration of services within the AWS environment.”

Information found in the security chapters has also been used by customers as key inputs in refining their cloud governance, and helping customers to balance agility and innovation, while remaining secure as they adopt new services. Outlining customer responsibilities that are laid out under the AWS Shared Responsibility Model, the chapters have influenced the refinement of service assessment processes by a number of AWS customers, enabling customization to meet specific control objectives based on known use cases.

For example, when AWS Partner Network (APN) Partner Deloitte works on cloud strategies with organizations, they advise on topics that range from enterprise-wide cloud adoption to controls needed for specific AWS services.

Devendra Awasthi, Cloud Risk & Compliance Leader at Deloitte & Touche LLP, explained that, “When working with companies to help develop a secure cloud adoption framework, we don’t want them to make assumptions about shared responsibility that lead to a false sense of security. We advise clients to use the AWS service security chapters to identify their responsibilities under the AWS Shared Responsibility Model; the chapters can be key to informing their decision-making process for specific service use.”

Partners and customers, including Deloitte and Fidelity, have been helpful by providing feedback on both the content and structure of the security chapters. Service teams will continue to update the security chapters as new features are released, and in the meantime, we would appreciate your input to help us continue to expand the content. You can give us your feedback by selecting the Feedback button in the lower right corner of any documentation page. We look forward to learning how you use the security chapters within your organization.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Author

Kristen Haught

Kristen is a Security and Compliance Business Development Manager focused on strategic initiatives that enable financial services customers to adopt Amazon Web Services for regulated workloads. She cares about sharing strategies that help customers adopt a culture of innovation, while also strengthening their security posture and minimizing risk in the cloud.

AWS Foundational Security Best Practices standard now available in Security Hub

Post Syndicated from Rima Tanash original https://aws.amazon.com/blogs/security/aws-foundational-security-best-practices-standard-now-available-security-hub/

AWS Security Hub offers a new security standard, AWS Foundational Security Best Practices

This week, AWS Security Hub launched a new security standard called AWS Foundational Security Best Practices. This standard implements security controls that detect when your AWS accounts and deployed resources do not align with the security best practices defined by AWS security experts. By enabling this standard, you can monitor your own security posture to ensure that you are using AWS security best practices. These controls closely align with the Top 10 Security Best Practices outlined by AWS Chief Information Security Officer, Stephen Schmidt, at AWS re:Invent 2019.

In the initial release, this standard consists of 31 fully automated security controls in supported AWS Regions, and 27 controls in AWS GovCloud (US-West) and AWS GovCloud (US-East).

This standard is enabled by default when you enable Security Hub in a new account, so no extra steps are necessary to enable it. If you are an existing Security Hub user, when you open the Security Hub console, you will see a pop-up message recommending that you enable this standard. For more information, see AWS Foundational Security Best Practices standard in the AWS Security Hub User Guide.

As an example, let’s look at one of the new security controls for Amazon Relational Database Service (Amazon RDS), [RDS.1] RDS snapshots should be private. This control checks the resource types AWS::RDS::DBSnapshot and AWS::RDS::DBClusterSnapshot. The relevant AWS Config rule is rds-snapshots-public-prohibited, which checks whether Amazon RDS snapshots are public. The control fails if Security Hub identifies that any existing or new Amazon RDS snapshots are configured to be publicly accessible. The severity label is CRITICAL when the security check fails. The severity indicates the potential impact of not enforcing this rule.
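
As a quick manual spot check of the same condition, you can inspect a snapshot’s attributes with the AWS CLI; a snapshot is effectively public when its restore attribute includes the value all. The snapshot identifier here is a placeholder:

aws rds describe-db-snapshot-attributes --db-snapshot-identifier <SNAPSHOT_ID>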

You can find additional details about all the security controls, including the remediation instructions of the misconfigured resource, in the AWS Foundational Security Best Practices standard section of the Security Hub User Guide.

Getting started

In this post, we will cover:

  • How to enable the new AWS Foundational Security Best Practices standard.
  • An overview of the security controls.
  • An explanation of the security control details.
  • How to disable and enable specific security controls.
  • How to navigate to the remediation instructions for a failed security control.

Prerequisites

For the security standards to be functional in Security Hub, when you enable Security Hub in a particular account and AWS Region, you must also enable AWS Config in that account and Region, because the standard’s automated checks are backed by AWS Config rules. Because Security Hub is a regional service, you must do this in each Region where you use it.
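
If AWS Config isn’t enabled yet, one way to turn it on from the command line is sketched below; the role, bucket, and recorder names are placeholders that you would replace with your own values:

#create and start a configuration recorder (the role must allow AWS Config to record your resources)
aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::<ACCOUNT_ID>:role/<CONFIG_ROLE> --recording-group allSupported=true,includeGlobalResourceTypes=true
aws configservice put-delivery-channel --delivery-channel name=default,s3BucketName=<CONFIG_BUCKET>
aws configservice start-configuration-recorder --configuration-recorder-name default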

Enable the new AWS Foundational Security Best Practices Security standard

After you enable AWS Config in your account and Region, you can enable the AWS Foundational Security Best Practices standard in Security Hub. We recommend that you enable Security Hub and this standard in all accounts and in all Regions where you have activity. For a script to enable AWS Security Hub across multi-account and Regions, see the AWS Security Hub multi-account scripts page on GitHub.
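
For a single account and Region, a minimal sketch of enabling Security Hub together with its default standards (which include AWS Foundational Security Best Practices) from the AWS CLI looks like this:

aws securityhub enable-security-hub --enable-default-standards --region <REGION-NAME>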

If you are a new user of Security Hub, when you open the Security Hub console, you are prompted to enable Security Hub. When you enable Security Hub, the AWS Foundational Security Best Practices standard is selected by default, as shown in the following screen shot. Leave the default selection and choose Enable Security Hub to enable the AWS Foundational Security Best Practices standard, as well as the other security standards you select, in your AWS account in your selected AWS Region.

Figure 1: Welcome to AWS Security Hub page

If you are an existing user of Security Hub, when you open the Security Hub console, you are presented with a pop-up to enable the new security standard. You will see the number of new controls that are available in your AWS Region and the number of AWS services and resources that are associated with those controls, as shown in the following screen shot. Choose Enable standard to enable the AWS Foundational Security Best Practices standard in your AWS account in your selected AWS Region.

Figure 2: AWS Foundational Security Best Practices confirmation page

You also have the option to enable the new AWS Foundational Security Best Practices Security standard by using the command line, which we will describe later in this post.

View the security controls

Now that you have successfully enabled the standard, on the Security standards page, you can see that the new AWS Foundational Security Best Practices v1.0.0 standard is displayed with the other security standards, CIS AWS Foundations and PCI DSS.

Figure 3: Security standards page in AWS Security Hub

View security findings

Within two hours after you enable the standard, Security Hub begins to evaluate related resources in the current AWS account and Region against the available AWS controls within the AWS Foundational Security Best Practices standard. The scope of the assessment is the AWS account.

To view security findings, on the Security standards page, for AWS Foundational Security Best Practices standard, choose View results. The following image shows an example of the dashboard page you will see that displays all of the available controls in the standard, and the status of each control within the current AWS account and Region.

Figure 4: AWS Foundational Security Best Practices controls page

At a glance, each control card provides you with the following high-level information:

  • Title and unique identifier of the AWS control. This provides you with a synopsis of the purpose and functionality of the control.
  • The current status of the AWS control evaluation. The possible values are Passed, Failed, or Unknown (evaluation is still in progress and not finished).
  • Severity information associated with the AWS control. The possible values are CRITICAL, HIGH, MEDIUM, and LOW. For Passed findings associated with the controls, the severity still appears, but as INFORMATIONAL. To learn more about how Security Hub determines the severity score, see Determining the severity of security standards findings.
  • A count of AWS resources that passed or failed the check for this particular AWS control.

You can use the Filter controls to search for specific AWS controls based on their evaluation status and severity. For example, you can search for all controls that have a check status of Failed and a severity of CRITICAL.
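
You can run an equivalent query from the AWS CLI with the get-findings API; this sketch filters for failed, critical-severity findings, and the generator ID prefix shown is an assumption about how this standard’s findings are labeled:

aws securityhub get-findings --filters '{"GeneratorId":[{"Value":"aws-foundational-security-best-practices","Comparison":"PREFIX"}],"ComplianceStatus":[{"Value":"FAILED","Comparison":"EQUALS"}],"SeverityLabel":[{"Value":"CRITICAL","Comparison":"EQUALS"}]}' --region <REGION-NAME>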

Inspect the security finding

To see detailed information about a specific security control and its findings, choose the security control card. Choosing the control displays a page that contains detailed information about the control, including a list of the findings for the security control. The page also indicates whether the resources for the security control are Passed, Failed, or if the compliance evaluation is still in progress (Unknown).

Figure 5: RDS.1 control findings view

For business reasons, you may sometimes need to suppress a particular finding against a particular resource using the workflow status. Setting the Workflow status to SUPPRESSED means that the finding will not be reviewed again and will not be acted upon. If you suppress a FAILED finding, it will stay suppressed as long as it remains failed. However, if the finding moves from FAILED to PASSED, a new passed finding will be generated and the workflow status will be NEW. You can’t un-suppress a finding. If you suppress all findings for a control, the control status will be Unknown until any new finding is generated.

To suppress a finding

  1. In the Findings list, select the control you want to suppress, for example [RDS.1] RDS snapshots should be private.
  2. For Change workflow status, choose Suppressed.

 

Figure 6: RDS.1 control showing change workflow status options

You will no longer see the finding that you suppressed.
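
The same suppression can be applied programmatically with the batch-update-findings API; the finding ID and product ARN below are placeholders that you copy from the finding you want to suppress:

aws securityhub batch-update-findings --finding-identifiers Id="<FINDING_ID>",ProductArn="<PRODUCT_ARN>" --workflow Status=SUPPRESSED --region <REGION-NAME>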

If you do not want to generate any findings for a specific control, you can instead choose to disable the control using the Disable feature, described in the next section.

Disable a security control

You can also disable the security check for a particular security control until you manually re-enable it. This disables the control check for all resources in the context of Security Hub in your AWS account and AWS Region. This may be helpful if a particular security control is not applicable for your environment. To disable a security control, on the AWS Foundational Security Best Practices standard dashboard page, on the specific control card, choose Disable. You can always re-enable the control when you need it in the future.

Figure 7: Control cards Disable option

When you disable a particular control, you are required to enter a reason in the Reason for disabling field, so that you or someone else looking into it in the future have a clear record of why the control is not being used.

Figure 8: Reason for disabling a control page – Disabling control ACM.1 example

On the AWS Foundational Security Best Practices controls page, disabled controls are marked with a Disabled badge, as shown in the following screenshot. The cards also display the date when the control was disabled, and the reason that was provided. To re-enable a disabled control, on the control card, choose Enable.

Figure 9: Disabled control example – ACM.1

You can enable the control any time without providing a reason. The evaluation for the control starts from the point in time when the control is re-enabled.
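
You can also disable or re-enable a control from the AWS CLI with the update-standards-control API; the control ARN below is a placeholder whose exact value you can copy from the console or from the describe-standards-controls output:

#disable a control with a documented reason
aws securityhub update-standards-control --standards-control-arn "<CONTROL_ARN>" --control-status DISABLED --disabled-reason "Not applicable to our environment"

#re-enable the control later
aws securityhub update-standards-control --standards-control-arn "<CONTROL_ARN>" --control-status ENABLED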

Remediate a failed security control

You can get the remediation instructions for a failed control from within the Security Hub console. On the AWS Foundational Security Best Practices standard dashboard page, choose the specific control card, then in the list of findings for a control, choose the finding you want to remediate. In the finding details, expand the Remediation section, and then choose the For directions on how to fix this issue link, as shown in the following screen shot.

Figure 10: Finding remediation link in the console

You can also get to these step-by-step remediation instructions directly from the user guide. Go to the AWS Foundational Security Best Practices controls page and scroll down to the name of the specific control that generated the finding.

Use the AWS CLI to enable or disable the standard

To use the AWS Command Line Interface (AWS CLI) to enable the AWS Foundational Security Best Practices standard in Security Hub programmatically without using the Security Hub console, use the following command. Be sure you are running AWS CLI version 2.0.7 or later, and replace REGION-NAME with your AWS Region:


aws securityhub batch-enable-standards --standards-subscription-requests '{"StandardsArn":"arn:aws:securityhub:<REGION-NAME>::standards/aws-foundational-security-best-practices/v/1.0.0"}' --region <REGION-NAME>

To check the status, run the get-enabled-standards command. Be sure to replace REGION-NAME with your AWS Region:


aws securityhub get-enabled-standards --region <REGION-NAME> 

You should see "StandardsStatus": "READY" in the output, which indicates that the AWS Foundational Security Best Practices standard is enabled and ready:


{
    "StandardsSubscriptions": [
        {
            "StandardsSubscriptionArn": "arn:aws:securityhub:us-east-1:012345678912:subscription/aws-foundational-security-best-practices/v/1.0.0",
            "StandardsArn": "arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0",
            "StandardsInput": {},
            "StandardsStatus": "READY"
        }
    ]
}

To use the AWS CLI to disable the AWS Foundational Security Best Practices standard in Security Hub, use the following command. Be sure to replace ACCOUNT_ID with your account ID, and replace REGION-NAME, in both the subscription ARN and the --region parameter, with your AWS Region:

aws securityhub batch-disable-standards --standards-subscription-arns "arn:aws:securityhub:<REGION-NAME>:<ACCOUNT_ID>:subscription/aws-foundational-security-best-practices/v/1.0.0" --region <REGION-NAME>

Conclusion

In this post, you learned about how to implement the new AWS Foundational Security Best Practices standard in Security Hub, and how to interpret the findings. You also learned how to enable the standard by using the Security Hub console and AWS CLI, how to disable and enable specific controls within the standard, and how to follow remediation steps for failed findings. For more information, see the AWS Foundational Security Best Practices standard in the AWS Security Hub User Guide.

If you have comments about this post, submit them in the Comments section below. If you have questions, please start a new thread on the Security Hub forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Karthikeyan Vasuki Balasubramaniam

Karthikeyan is a Software Development Engineer on the Amazon Security Hub service team. At Amazon Web Services, he works on the infrastructure that supports the development and functioning of various security standards. He has a background in computer networking and operating systems. In his free time, you can find him learning Indian classical music and cooking.

Author

Rima Tanash

Rima Tanash is the Lead Security Engineer on the Amazon Security Hub service team. At Amazon Web Services, she applies automated technologies to audit various access and security configurations. She has a research background in data privacy using graph properties and machine learning.

Scaling a governance, risk, and compliance program for the cloud, emerging technologies, and innovation

Post Syndicated from Michael South original https://aws.amazon.com/blogs/security/scaling-a-governance-risk-and-compliance-program-for-the-cloud/

Governance, risk, and compliance (GRC) programs are sometimes looked upon as the bureaucracy getting in the way of exciting cybersecurity work. But a good GRC program establishes the foundation for meeting security and compliance objectives. It is the proactive approach to cybersecurity that, if done well, minimizes reactive incident response.

Of the three components of cybersecurity (people, processes, and technology), technology is viewed as the “easy button” because, in relative terms, it’s simpler than drafting a policy with the right balance of flexibility and specificity or managing countless organizational principles and human behavior. Still, as much as we promote technology and automation at AWS, we also understand that automating a bad process with the latest technology doesn’t make the process or the outcome better. Cybersecurity must incorporate all three aspects with a programmatic approach that scales. To reach that goal, an effective GRC program is essential, because it ensures that a holistic view has been taken while tackling the daunting mission of cybersecurity.

Although governance, risk, and compliance are oftentimes viewed as separate functions, there is a symbiotic relationship between them. Governance establishes the strategy and guardrails for meeting specific requirements that align and support the business. Risk management connects specific controls to governance and assessed risks, and provides business leaders with the information they need to prioritize resources and make risk-informed decisions. Compliance is the adherence and monitoring of controls to specific governance requirements. It is also the “ante” to play the game in certain industries and, with continuous monitoring, closes the feedback loop regarding effective governance. Security architecture, engineering, and operations are built upon the GRC foundation.

Without a GRC program, people tend to solely focus on technology and stove-pipe processes. For example, say a security operations employee is faced with four events to research and mitigate. In the absence of a GRC program, the staffer would have no context about the business risk or compliance impact of the events, which could lead them to prioritize the least important issue.

GRC relationship model

GRC has a symbiotic relationship

The breadth and depth of a GRC program varies with each organization. Regardless of its simplicity or complexity, there are opportunities to transform or scale that program for the adoption of cloud services, emerging technologies, and other future innovations.

Below is a checklist of best practices to help you on your journey. The key takeaways of the checklist are: base governance on objectives and capabilities, include risk context in decision-making, and automate monitoring and response.

Governance

Identify compliance requirements

__ Identify required compliance frameworks (such as HIPAA or PCI) and contract/agreement obligations.

__ Identify restrictions/limitations to cloud or emerging technologies.

__ Identify required or chosen standards to implement (for example, NIST, ISO, COBIT, CSA, or CIS).

Conduct program assessment

__ Conduct a program assessment based on industry processes such as the NIST Cyber Security Framework (CSF) or ISO/IEC TR 27103:2018 to understand the capability and maturity of your current profile.

__ Determine desired end-state capability and maturity, also known as target profile.

__ Document and prioritize gaps (people, process, and technologies) for resource allocation.

__ Build a Cloud Center of Excellence (CCoE) team.

__ Draft and publish a cloud strategy that includes procurement, DevSecOps, management, and security.

__ Define and assign functions, roles, and responsibilities.

Update and publish policies, processes, procedures

__ Update policies based on objectives and desired capabilities that align to your business.

__ Update processes for modern organization and management techniques such as DevSecOps and Agile, specifying how to upgrade old technologies.

__ Update procedures to integrate cloud services and other emerging technologies.

__ Establish technical governance standards that will be used to select controls and monitor compliance.

Risk management

Conduct a risk assessment*

__ Conduct or update an organizational risk assessment (such as market, financial, reputation, and legal).

__ Conduct or update a risk assessment for each business line (such as mission, market, products/services, and financial).

__ Conduct or update a risk assessment for each asset type.

* The use of pre-established threat models can simplify the risk assessment process, both initial and updates.

Draft risk plans

__ Implement plans to mitigate, avoid, transfer, or accept risk at each tier, business line, and asset (for example, a business continuity plan, a continuity of operations plan, a systems security plan).

__ Implement plans for specific risk areas (such as supply chain risk, insider threat).

Authorize systems

__ Use NIST Risk Management Framework (RMF) or other process to authorize and track systems.

__ Use NIST Special Publication 800-53, ISO/IEC 27002:2013, or another control set to select, implement, and assess controls based on risk.

__ Implement continuous monitoring of controls and risk, employing automation to the greatest extent possible.

Incorporate risk information into decisions

__ Link system risk to business and organizational risk.

__ Automate translation of continuous system risk monitoring and status to business and organizational risk.

__ Incorporate “What’s the risk?” (financial, cyber, legal, reputation) into leadership decision-making.

Compliance

Monitor compliance with policy, standards, and security controls

__ Automate technical control monitoring and reporting (advanced maturity will lead to AI/ML).

__ Implement manual monitoring of non-technical controls (for example periodic review of visitor logs).

__ Link compliance monitoring with security information and event management (SIEM) and other tools.

Continually self-assess

__ Automate application security testing and vulnerability scans.

__ Conduct periodic self-assessments from sampling of controls, entire functional area, and pen-tests.

__ Be overly critical of assumptions, perspectives, and artifacts.

Respond to events and changes to risk

__ Integrate security operations with the compliance team for response management.

__ Establish standard operating procedures to respond to unintentional changes in controls.

__ Mitigate impact and reset affected control(s); automate as much as possible.

Communicate events and changes to risk

__ Establish a reporting tree and thresholds for each type of incident.

__ Include general counsel in reporting.

__ Ensure applicable regulatory authorities are notified when required.

__ Automate where appropriate.

Want more AWS Security news? Follow us on Twitter.

Michael South

Michael joined AWS in 2017 as the Americas Regional Leader for public sector security and compliance business development. He supports customers who want to achieve business objectives and improve their security and compliance in the cloud. His customers span across the public sector, including: federal governments, militaries, state/provincial governments, academic institutions, and non-profits from North to South America. Prior to AWS, Michael was the Deputy Chief Information Security Officer for the city of Washington, DC and the U.S. Navy’s Chief Information Officer for Japan.

Secure Build with AWS CodeBuild and LayeredInsight

Post Syndicated from Asif Khan original https://aws.amazon.com/blogs/devops/secure-build-with-aws-codebuild-and-layeredinsight/

This post is written by Asif Awan, Chief Technology Officer of Layered Insight; Subin Mathew, Software Development Manager for AWS CodeBuild; and Asif Khan, Solutions Architect.

Enterprises adopt containers because they recognize the benefits: speed, agility, portability, and high compute density. They understand how accelerating application delivery and deployment pipelines makes it possible to rapidly slipstream new features to customers. Although the benefits are indisputable, this acceleration raises concerns about security and corporate compliance with software governance. In this blog post, I provide a solution that shows how Layered Insight, the pioneer and global leader in container-native application protection, can be used with seamless application build and delivery pipelines like those available in AWS CodeBuild to address these concerns.

Layered Insight solutions

Layered Insight enables organizations to unify DevOps and SecOps by providing complete visibility and control of containerized applications. Using the industry’s first embedded security approach, Layered Insight solves the challenges of container performance and protection by providing accurate insight into container images, adaptive analysis of running containers, and automated enforcement of container behavior.

 

AWS CodeBuild

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.

 

Problem Definition

Security and compliance concerns span the lifecycle of application containers. Common concerns include:

Visibility into the container images. You need to verify the software composition information of the container image to determine whether known vulnerabilities associated with any of the software packages and libraries are included in the container image.

Governance of container images is critical because only certain open source packages/libraries, of specific versions, should be included in the container images. You need mechanisms for blacklisting all container images that include a certain version of a software package/library, or for allowing only open source software that comes with a specific type of license (such as Apache, MIT, or GPL). You need to be able to address challenges such as:

·       Defining the process for image compliance policies at the enterprise, department, and group levels.

·       Preventing the images that fail the compliance checks from being deployed in critical environments, such as staging, pre-prod, and production.

Visibility into running container instances is critical, including:

·       CPU and memory utilization.

·       Security of the build environment.

·       All activities (system, network, storage, and application layer) of the application code running in each container instance.

Protection of running container instances that is:

·       Zero-touch to the developers (not an SDK-based approach).

·       Zero touch to the DevOps team and doesn’t limit the portability of the containerized application.

·       This protection must retain the option to switch to a different container stack or orchestration layer, or even to a different Container as a Service (CaaS).

·       And it must be a fully automated solution for SecOps, so that the SecOps team doesn’t have to manually analyze and define detailed blacklist and whitelist policies.

 

Solution Details

In AWS CodeCommit, we have three projects:
●     “Democode” is a simple Java application, with one buildspec to build the app into a Docker container (run by the build-demo-image CodeBuild project), and another to instrument that container (the instrument-image CodeBuild project). The resulting container is stored in the ECR repo javatest as javatest:20180415-layered. This instrumented container is running in the AWS Fargate cluster demo-java-app and can be seen in the Layered Insight runtime console as the javatest application in us-east-1.
●     aws-codebuild-docker-images is a clone of the official aws-codebuild-docker-images repo on GitHub. This CodeCommit project is used by the build-python-builder CodeBuild project to build the Python 3.3.6 CodeBuild image, which is stored in the codebuild-python ECR repo. We then manually instructed the Layered Insight console to instrument the image.
●     scan-java-image contains just a buildspec.yml file. This file is used by the scan-java-image CodeBuild project to instruct Layered Assessment to perform a vulnerability scan of the javatest container image built previously, and then run the scan results through a compliance policy that states there should be no medium vulnerabilities. This build fails, but in this case that is a success: the scan completes successfully, yet compliance fails because medium-level issues are found in the scan.

This build is performed using the instrumented version of the Python 3.3.6 CodeBuild image, so the activity of the processes running within the build are recorded each time within the LI console.

Build container image

Create or use a CodeCommit project with your application. To build this image and store it in Amazon Elastic Container Registry (Amazon ECR), add a buildspec file to the project and build a container image and create a CodeBuild project.

Scan container image

Once the image is built, create a new buildspec, in the same project or a new one, that looks similar to the following (update the ECR URL as necessary):

version: 0.2
phases:
  pre_build:
    commands:
      - echo Pulling down LI Scan API client scripts
      - git clone https://github.com/LayeredInsight/scan-api-example-python.git
      - echo Setting up LI Scan API client
      - cd scan-api-example-python
      - pip install layint_scan_api
      - pip install -r requirements.txt
  build:
    commands:
      - echo Scanning container started on `date`
      - IMAGEID=$(./li_add_image --name <aws-region>.amazonaws.com/javatest:20180415)
      - ./li_wait_for_scan -v --imageid $IMAGEID
      - ./li_run_image_compliance -v --imageid $IMAGEID --policyid PB15260f1acb6b2aa5b597e9d22feffb538256a01fbb4e5a95

Add the buildspec file to the git repo, push it, and then build a CodeBuild project using the instrumented Python 3.3.6 CodeBuild image at <aws-region>.amazonaws.com/codebuild-python:3.3.6-layered. Set the following environment variables in the CodeBuild project (a scripted example follows the list):
●     LI_APPLICATIONNAME – name of the build to display
●     LI_LOCATION – location of the build project to display
●     LI_API_KEY – ApiKey:<key-name>:<api-key>
●     LI_API_HOST – location of the Layered Insight API service
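
If you prefer to script this instead of using the console, a sketch using the update-project API might look like the following; the project name, image URI, and variable values are placeholders, and note that update-project replaces the entire environment block, so the image and compute type must be included too:

aws codebuild update-project --name scan-java-image --environment '{"type":"LINUX_CONTAINER","image":"<aws-region>.amazonaws.com/codebuild-python:3.3.6-layered","computeType":"BUILD_GENERAL1_SMALL","environmentVariables":[{"name":"LI_APPLICATIONNAME","value":"javatest"},{"name":"LI_LOCATION","value":"us-east-1"},{"name":"LI_API_KEY","value":"ApiKey:<key-name>:<api-key>"},{"name":"LI_API_HOST","value":"<li-api-host>"}]}'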

Instrument container image

Next, to instrument the new container image:

  1. In the Layered Insight runtime console, ensure that the ECR registry and credentials are defined (click the Setup icon and the ‘+’ sign on the top right of the screen to add a new container registry). Note the name given to the registry in the console, as this needs to be referenced in the li_add_image command in the script below.
  2. Next, add a new buildspec (with a new name) to the CodeCommit project, such as the one shown below. This code will download the Layered Insight runtime client, and use it to instruct the Layered Insight service to instrument the image that was just built:
    version: 0.2
    phases:
      pre_build:
        commands:
          - echo Pulling down LI API Runtime client scripts
          - git clone https://github.com/LayeredInsight/runtime-api-example-python
          - echo Setting up LI API client
          - cd runtime-api-example-python
          - pip install layint-runtime-api
          - pip install -r requirements.txt
      build:
        commands:
          - echo Instrumentation started on `date`
          - ./li_add_image --registry "Javatest ECR" --name IMAGE_NAME:TAG --description "IMAGE DESCRIPTION" --policy "Default Policy" --instrument --wait --verbose
  3. Commit and push the new buildspec file.
  4. Going back to CodeBuild, create a new project, with the same CodeCommit repo, but this time select the new buildspec file. Use a Python 3.3.6 builder – either the AWS or LI Instrumented version.
  5. Click Continue
  6. Click Save
  7. Run the build, again on the master branch.
  8. If everything runs successfully, a new image should appear in the ECR registry with a -layered suffix. This is the instrumented image.

Run instrumented container image

When the instrumented container is run, whether in ECS, Fargate, or elsewhere, it will log data back to the Layered Insight runtime console. Its appearance in the console can be modified by setting the LI_APPLICATIONNAME and LI_LOCATION environment variables when running the container.

Conclusion

In this post, we provided the steps needed to embed governance and runtime security into your build pipelines running on AWS CodeBuild by using Layered Insight.

Tips for Success: GDPR Lessons Learned

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/tips-for-success-gdpr-lessons-learned/

Security is our top priority at AWS, and from the beginning we have built security into the fabric of our services. With the introduction of GDPR (which becomes enforceable on May 25, 2018), privacy and data protection have become even more ingrained in our security-centered culture. Three weeks ago, well ahead of the deadline, we announced that all AWS services are compliant with GDPR, meaning you can use AWS as a data processor as a way to help solve your GDPR challenges (be sure to visit our GDPR Center for additional information).

When it comes to GDPR compliance, many customers are progressing nicely and much of the initial trepidation is gone. In my interactions with customers on this topic, a few themes have emerged as universal:

  • GDPR is important. You need to have a plan in place if you process personal data of EU data subjects, not only because it’s good governance, but because GDPR does carry significant penalties for non-compliance.
  • Solving this can be complex, potentially involving a lot of personnel and multiple tools. Your GDPR process will also likely span across disciplines – impacting people, processes, and technology.
  • Each customer is unique, and there are many methodologies around assessing your compliance with GDPR. It’s important to be aware of your own individual business attributes.

I thought it might be helpful to share some of our own lessons learned. In our experience in solving the GDPR challenge, the following were keys to our success:

  1. Get your senior leadership involved. We have a regular cadence of detailed status conversations about GDPR with our CEO, Andy Jassy. GDPR is high stakes, and the AWS leadership team knows it. If GDPR doesn’t have the attention it needs with the visibility of top management today, it’s time to escalate.
  2. Centralize the GDPR efforts. Driving all work streams centrally is key. This may sound obvious, but managing this in a distributed manner may result in duplicative effort and/or team members moving in a different direction.
  3. The most important single partner in solving GDPR is your legal team. Having non-legal people make assumptions about how to interpret GDPR for your unique environment is both risky and a potential waste of time and resources. You want to avoid analysis paralysis by getting proper legal advice, collaborating on a direction, and then moving forward with the proper urgency.
  4. Collaborate closely with tech leadership. The “process” people in your organization, the ones who already know how to approach governance problems, are typically comfortable jumping right into GDPR. But technical teams, including data owners, have set up their software for business application. They may not even know what kind of data they are storing, processing, or transferring to other parts of the business. In the GDPR exercise they need to be aware of (or at least help facilitate) the tracking of data and data elements between systems. This isn’t a typical ask for technical teams, so be prepared to educate and to fully understand data flow.
  5. Don’t live by the established checklists. There are multiple methodologies to solving the compliance challenges of GDPR. At AWS, we ended up establishing core requirements, mapped out by data controller and data processor functions and then, in partnership with legal, decided upon a group of projects based on our known current state. Be careful about using a set methodology, tool or questionnaire to govern your efforts. These generic assessments can help educate, but letting them drive or limit your work could lead to missing something that is key to your own compliance. In this sense, a generic, “one size fits all” solution might not be helpful.
  6. Don’t be afraid to challenge prior orthodoxy. Many times we changed course based on new information. You shouldn’t be afraid to scrap an effort if you determine it’s not working. You should also not be afraid to escalate issues to senior leadership when needed. This is an executive issue.
  7. Look for ways to leverage your work beyond this compliance activity. GDPR requires serious effort, but are the results limited to GDPR compliance? Certainly not. You can use GDPR workflows as a way to ensure better governance moving forward. Privacy and security will require work for the foreseeable future, so make your governance program scalable and usable for other purposes.

One last tip that has made all the difference: think about protecting data subjects and work backwards from there. Customer focus drives us to ask, “What would customers and data subjects want and expect us to do?” Approaching GDPR from a purely legal or compliance standpoint may be technically sufficient, but we believe the objectives of security and personal data protection require a more comprehensive view, and you can most effectively shape that view by starting with the individuals GDPR was meant to protect.

If you would like to find out more about our experiences, as well as how we can help you in your efforts, please reach out to us today.

-Chad Woolf

Vice President, AWS Security Assurance

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

Join Us for AWS Security Week February 20–23 in San Francisco!

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/join-us-for-aws-security-week-february-20-23-in-san-francisco/


Join us for AWS Security Week, February 20–23 at the AWS Pop-up Loft in San Francisco, where you can participate in four days of themed content that will help you secure your workloads on AWS. Each day will highlight a different security and compliance topic, and will include an overview session, a customer or partner speaker, a deep dive into the day’s topic, and a hands-on lab or demos of relevant AWS or partner services.

Tuesday (February 20) will kick off the week with a day devoted to identity and governance. On Wednesday, we will dig into secure configuration and automation, including a discussion about upcoming General Data Protection Regulation (GDPR) requirements. On Thursday, we will cover threat detection and remediation, which will include an Amazon GuardDuty lab. And on Friday, we will discuss incident response on AWS.

Sessions, demos, and labs about each of these topics will be led by seasoned security professionals from AWS, who will help you understand not just the basics, but also the nuances of building applications in the AWS Cloud in a robust and secure manner. AWS subject-matter experts will be available for “Ask the Experts” sessions during breaks.

Register today!

– Craig

Article from a Former Chinese PLA General on Cyber Sovereignty

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/article_from_a_.html

Interesting article by Major General Hao Yeli, Chinese People’s Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.

Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty, meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security, meanwhile, it promotes international cooperation and open development.

China has never been opposed to multi-party governance when appropriate, but rejects the denial of government’s proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.

In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.

A New Guide to Banking Regulations and Guidelines in India

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/a-new-guide-to-banking-regulations-and-guidelines-in-india/


The AWS User Guide to Banking Regulations and Guidelines in India was published in December 2017 and includes information that can help banks regulated by the Reserve Bank of India (RBI) assess how to implement an appropriate information security, risk management, and governance program in the AWS Cloud.

The guide focuses on the following key considerations:

  • Outsourcing guidelines – Guidance for banks entering an outsourcing arrangement, including risk-management practices such as conducting due diligence and maintaining effective oversight. Learn how to conduct an assessment of AWS services and align your governance requirements with the AWS Shared Responsibility Model.
  • Information security – Detailed requirements to help banks identify and manage information security in the cloud.

This guide joins the existing Financial Services guides for other jurisdictions, such as Singapore, Australia, and Hong Kong. AWS will publish additional guides in 2018 to help you understand regulatory requirements in other markets around the world.

– Oliver

Announcing our new beta for the AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-our-new-beta-for-the-aws-certified-security-specialty-exam/

Take the AWS Certified Security – Specialty beta exam for the chance to be among the first to hold this new AWS Certification. This beta exam allows experienced cloud security professionals to demonstrate and validate their expertise. Register today; the beta exam will be available only from January 15 to March 2, 2018!

About the exam

This beta exam validates that the successful candidate can effectively demonstrate knowledge of how to secure the AWS platform. The exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection.

The exam validates:

  • Familiarity with regional- and country-specific security and compliance regulations and meta issues that these regulations embody.
  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

Who is eligible

The beta is open to anyone who currently holds an Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

We have training and other resources to help you prepare for the beta exam:

AWS Security Fundamentals | Digital | 3 Hours
This course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS | Classroom | 3 Days
This course demonstrates how to efficiently use AWS security services to stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your data and systems in the cloud. The course highlights the security features of key AWS services, including compute, storage, networking, and database services.

Online resources for Cloud Security and Compliance

Review documentation, whitepapers, and articles & tutorials related to cloud security and compliance.

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Validate Your IT Security Expertise with the New AWS Certified Security – Specialty Beta Exam

Post Syndicated from Sara Snedeker original https://aws.amazon.com/blogs/security/validate-your-it-security-expertise-with-the-new-aws-certified-security-specialty-beta-exam/


If you are an experienced cloud security professional, you can demonstrate and validate your expertise with the new AWS Certified Security – Specialty beta exam. This exam allows you to demonstrate your knowledge of incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. Register today – this beta exam will be available only from January 15 to March 2, 2018.

By taking this exam, you can validate your:

  • Familiarity with region-specific and country-specific security and compliance regulations and meta issues that these regulations include.
  • Understanding of data encryption methods and secure internet protocols, and the AWS mechanisms to implement them.
  • Working knowledge of AWS security services to provide a secure production environment.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity when given a set of application requirements.

See the full list of security knowledge you can validate by taking this beta exam.

Who is eligible?

The beta exam is open to anyone who currently holds an AWS Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

You can take the following courses and use AWS cloud security resources and compliance resources to prepare for this exam.

AWS Security Fundamentals (digital, 3 hours)
This digital course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS (classroom, 3 days)
This instructor-led course demonstrates how to efficiently use AWS security services to help stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your AWS resources. The course highlights the security features of AWS compute, storage, networking, and database services.

If you have questions about this new beta exam, contact us.

Good luck with the exam!

– Sara