Tag Archives: policies

New IAMCTL tool compares multiple IAM roles and policies

Post Syndicated from Sudhir Reddy Maddulapally original https://aws.amazon.com/blogs/security/new-iamctl-tool-compares-multiple-iam-roles-and-policies/

If you have multiple Amazon Web Services (AWS) accounts with AWS Identity and Access Management (IAM) roles that are supposed to be similar across those accounts, the roles can deviate over time from your intended baseline because of manual, out-of-band changes, a phenomenon known as drift. As part of regular compliance checks, you should confirm that these roles have no deviations. In this post, we present a tool called IAMCTL that you can use to extract the IAM roles and policies from two accounts, compare them, and report the differences and statistics. We explain how to use the tool and describe the key concepts.

Prerequisites

Before you install IAMCTL and start using it, here are a few prerequisites that need to be in place on the computer where you will run it:

To follow along in your environment, clone the files from the GitHub repository, and run the steps in order. You won’t incur any charges to run this tool.
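As a quick reference, here is one way to set up the AWS CLI named profiles used later in this walkthrough; the profile names come from this post, and the commands below assume you already have credentials for both accounts:

#configure one named profile per account (you are prompted for credentials and a default region)
aws configure --profile dev-profile
aws configure --profile qa-profile

#confirm that each profile resolves to the expected account
aws sts get-caller-identity --profile dev-profile
aws sts get-caller-identity --profile qa-profile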

Install IAMCTL

This section describes how to install and run the IAMCTL tool.

To install and run IAMCTL

  1. At the command line, enter the following command:
    pip3 install git+ssh://[email protected]/aws-samples/[email protected]
    

    You will see output similar to the following.

    Figure 1: IAMCTL tool installation output

  2. To confirm that your installation was successful, enter the following command.
    iamctl -h
    

    You will see results similar to those in figure 2.

    Figure 2: IAMCTL help message

Now that you’ve successfully installed the IAMCTL tool, the next section will show you how to use the IAMCTL commands.

Example use scenario

Here is an example of how IAMCTL can be used to find differences in IAM roles between two AWS accounts.

A system administrator for a product team is trying to accelerate a product launch in the middle of testing cycles. Developers have found that the same version of their application behaves differently in the development environment as compared to the QA environment, and they suspect this behavior is due to differences in IAM roles and policies.

The application called “app1” primarily reads from an Amazon Simple Storage Service (Amazon S3) bucket, and runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In the development (DEV) account, the application uses an IAM role called “app1_dev” to access the S3 bucket “app1-dev”. In the QA account, the application uses an IAM role called “app1_qa” to access the S3 bucket “app1-qa”. This is depicted in figure 3.

Figure 3: Showing the “app1” application in the development and QA accounts

Setting up the scenario

To simulate this setup for the purpose of this walkthrough, you don’t need to create the EC2 instance or the S3 bucket; you only need to create the IAM role, its inline policy, and its trust policy.

As noted in the prerequisites, you will switch between the two AWS accounts by using the AWS CLI named profiles “dev-profile” and “qa-profile”, which are configured to point to the DEV and QA accounts respectively.

Start by using this command:

mkdir -p iamctl_test iamctl_test/dev iamctl_test/qa

The command creates a directory structure that looks like this:
iamctl_test
|-- dev
|-- qa

Now, switch to the dev folder to run all the following example commands against the DEV account, by using this command:

cd iamctl_test/dev

To create the required policies, first create a file named “app1_s3_access_policy.json” and add the following policy to it. You will use this file’s content as your role’s inline policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
         "arn:aws:s3:::app1-dev/shared/*"
            ]
        }
    ]
}

Second, create a file called “app1_trust_policy.json” and add the following policy to it. You will use this file’s content as your role’s trust policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now use the two files to create an IAM role with the name “app1_dev” in the account by running the following commands in the order listed:

#create role with trust policy

aws --profile dev-profile iam create-role --role-name app1_dev --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
 
aws --profile dev-profile iam put-role-policy --role-name app1_dev --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json
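Optionally, you can confirm that the role and its inline policy were created as expected. This quick check is not part of the original walkthrough; it only uses standard IAM read calls:

#optional: verify the role and its inline policy
aws --profile dev-profile iam get-role --role-name app1_dev
aws --profile dev-profile iam get-role-policy --role-name app1_dev --policy-name s3_inline_policy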

In the QA account, the IAM role is named “app1_qa” and the S3 bucket is named “app1-qa”.

Repeat the steps from the prior example against the QA account, changing dev to qa in the following code samples. Change the directory to qa by using this command:

cd ../qa

To create the required policies, first create a file called “app1_s3_access_policy.json” and add the following policy to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
         "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Next, create a file called “app1_trust_policy.json” and add the following policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now, use the two files you just created to create an IAM role with the name “app1_qa” in your QA account by running the following commands in the order listed:

#create role with trust policy
aws --profile qa-profile iam create-role --role-name app1_qa --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

So far, you have two accounts with an IAM role created in each of them for your application. In terms of permissions, there are no differences other than the name of the S3 bucket resource the permission is granted against.

You can expect IAMCTL to report a minimal set of differences between the DEV and QA accounts, assuming all other IAM roles and policies are the same. To be sure about the current state of both accounts, you can run a process in IAMCTL called baselining.

Through baselining, you will generate an equivalency dictionary that captures the known string patterns used to reduce noise in the generated list of deviations. You will then introduce a change into one of the IAM roles in your QA account and run a final IAMCTL diff to show the deviations.

Baselining

Baselining is the process of bringing two accounts to an “equivalence” state for IAM roles and policies by establishing a baseline, which future diff operations can leverage. The process is as simple as:

  1. Run the iamctl diff command.
  2. Capture all string substitutions into an equivalence dictionary to remove or reduce noise.
  3. Save the generated detailed files as a snapshot.

Now you can go through these steps for your baseline.

Go ahead and initialize IAMCTL against these two accounts by using the following commands.

#change directory from qa to iamctl_test
cd ..

#run iamctl init
iamctl init

The results of running the init command are shown in figure 4.

Figure 4: Output of the iamctl init command

If you look at the iamctl_test directory now, shown in figure 5, you can see that the init command created two files in the iamctl_test directory.

Figure 5: The directory structure after running the init command

These two files are as follows:

  1. iam.json: A reference file that has all AWS services and actions listed, among other things. IAMCTL uses this to map the resources listed in an IAM policy to their corresponding AWS resources, based on Amazon Resource Name (ARN) regular expressions.
  2. equivalency_list.json: The default sample dictionary that IAMCTL uses to suppress false alarms when it compares two accounts. This is where the known string patterns that need to be substituted are added.

Note: A best practice is to make the directory where you store the equivalency dictionary, and from which you run IAMCTL, a Git repository. Doing this lets you capture any additions or modifications to the equivalency dictionary as Git commits, which not only gives you an audit trail of your historical baselines but also gives context to those changes. However, doing this is not necessary for the regular functioning of IAMCTL.

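If you follow that best practice, a minimal sketch of tracking the equivalency dictionary in Git could look like the following, run from the iamctl_test directory (the commit message is only an example):

#track the equivalency dictionary in Git
git init
git add equivalency_list.json
git commit -m "Baseline equivalency dictionary for DEV and QA"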
Next, run the iamctl diff command:

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa
Figure 6: Result of diff command

Figure 6 shows the results of running the diff command. You can see that IAMCTL considers the app1_qa and app1_dev roles as unique to the DEV and QA accounts, respectively. This is because IAMCTL uses role names to decide whether to compare the role or tag the role as unique.

You will add the strings “dev” and “qa” to the equivalency dictionary to instruct IAMCTL to substitute occurrences of these two strings with “accountname”. To do this, write the following JSON to the equivalency_list.json file, which also cleans up some of the defaults already present there.

echo '{"accountname":["dev","qa"]}' > equivalency_list.json

Figure 7 shows the equivalency dictionary before you take these actions, and figure 8 shows the dictionary after these actions.

Figure 7: Equivalency dictionary before

Figure 8: Equivalency dictionary after

There’s another thing to notice here. In this example, one common role was flagged as having a difference. To know which role this is and what the difference is, go to the detail reports folder listed at the bottom of the summary report. The directory structure of this folder is shown in figure 9.

Notice that the reports are created under your home directory with a folder structure that mimics the time stamp down to the second. IAMCTL does this to maintain uniqueness for each run.

tree /Users/<username>/aws-idt/output/2020/08/24/08/38/49/
Figure 9: Files written to the output reports directory

You can see there is a file called common_roles_in_dev_with_differences.csv, and it lists a role called “AwsSecurity***Audit”.

You can see there is another file called dev_to_qa_common_role_difference_items.csv, and it lists the granular IAM items from the DEV account that belong to the “AwsSecurity***Audit” role as compared to QA, but which have differences. You can see that all entries in the file have the DEV account number in the resource ARN, whereas in the qa_to_dev_common_role_difference_items.csv file, all entries have the QA account number for the same role “AwsSecurity***Audit”.

Add both of the account numbers to the equivalency dictionary to substitute them with a placeholder number, because you don’t want this role to get flagged as having differences.

echo '{"accountname":["dev","qa"],"000000000000":["123456789012","987654321098"]}' > equivalency_list.json

Now, re-run the diff command.

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa

As you can see in figure 10, you get back the result of the diff command that shows that the DEV account doesn’t have any differences in IAM roles as compared to the QA account.

Figure 10: Output showing no differences after completion of baselining

This concludes the baselining for your DEV and QA accounts. Now you will introduce a change.

Introducing drift

Drift occurs when there is a difference between the actual and expected values in the definition or configuration of a resource. There are several reasons why drift occurs, but for this scenario you will use an intentional need to respond to a time-sensitive operational event as the reason to introduce drift into what you have built so far.

To simulate this change, add “s3:PutObject” to the qa app1_s3_access_policy.json file as shown in the following example.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:PutObject"
            ],
            "Resource": [
         "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Put this new inline policy on your existing role “app1_qa” by using this command:

aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

The following table represents the new drift in the accounts.

Action       | Account DEV (role: app1_dev) | Account QA (role: app1_qa)
s3:Get*      | Yes                          | Yes
s3:List*     | Yes                          | Yes
s3:PutObject | No                           | Yes

Next, run the iamctl diff command to see how it picks up the drift from your previously baselined accounts.

#change directory from qa to iamctl_test
cd ..
iamctl diff dev-profile dev qa-profile qa
Figure 11: Output showing the one deviation that was introduced

You can see that IAMCTL now shows that the QA account has one difference as compared to DEV, which is what we expect based on the deviation you’ve introduced.

Open the file qa_to_dev_common_role_difference_items.csv to look at the one difference. Again, adjust the following example path to match the output path shown at the bottom of the summary report in figure 11.

cat /Users/<username>/aws-idt/output/2020/09/18/07/38/15/qa_to_dev_common_role_difference_items.csv

As shown in figure 12, you can see that the file lists the specific S3 action “PutObject” with the role name and other relevant details.

Figure 12: Content of file qa_to_dev_common_role_difference_items.csv showing the one deviation that was introduced

You can use this information to remediate the deviation by performing corrective actions in either your DEV account or QA account. You can confirm the effectiveness of the corrective action by re-baselining to make sure that zero deviations appear.

Conclusion

In this post, you learned how to use the IAMCTL tool to compare IAM roles between two accounts and arrive at a granular list of meaningful differences that can be used for compliance audits or for further remediation actions. If you’ve created your IAM roles by using an AWS CloudFormation stack, you can turn on drift detection to capture drift caused by changes made outside of AWS CloudFormation to those IAM resources. For more information about drift detection, see Detecting unmanaged configuration changes to stacks and resources. Lastly, see the GitHub repository where the tool is maintained, which includes documentation describing each of the subcommand concepts. We welcome any pull requests for issues and enhancements.

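For example, if your IAM roles were created by a CloudFormation stack, a sketch of checking for drift from the AWS CLI might look like the following; the stack name my-iam-stack is a placeholder:

#start drift detection on the stack
aws cloudformation detect-stack-drift --stack-name my-iam-stack

#after detection completes, list per-resource drift results
aws cloudformation describe-stack-resource-drifts --stack-name my-iam-stack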
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sudhir Reddy Maddulapally

Sudhir is a Senior Partner Solution Architect who builds SaaS solutions for partners by day and is a tech tinkerer by night. He enjoys trips to state and national parks, and Yosemite is his favorite thus far!

Author

Soumya Vanga

Soumya is a Cloud Application Architect with AWS Professional Services in New York, NY, helping customers design solutions and workloads and adopt cloud-native services.

How to use resource-based policies in the AWS Secrets Manager console to securely access secrets across AWS accounts

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-use-resource-based-policies-aws-secrets-manager-console-to-securely-access-secrets-aws-accounts/

AWS Secrets Manager now enables you to create and manage your resource-based policies using the Secrets Manager console. With this launch, we are also improving your security posture by both identifying and preventing creation of resource policies that grant overly broad access to your secrets across your Amazon Web Services (AWS) accounts. To achieve this, we use the Zelkova engine to mathematically analyze access granted by your resource policy and alert you if such permissions are found. The analysis verifies access across all resource-policy statements, actions, and the set of condition keys used in your policies. To be considered non-public, the resource policy must grant access only to fixed values (values that don’t contain a wildcard) of one or more of the following: aws:SourceArn, aws:SourceVpc, aws:SourceVpce, aws:SourceAccount, or aws:SourceIP, and the Principal element must not include a “*” entry.

If the policy grants Public or overly broad access to your secrets across AWS accounts, Secrets Manager will block you from applying the policy in the console and alert you with a dashboard message. This prevents your policy from accidentally granting broader access to your secrets, instead ensuring you are restricting it to the intended AWS accounts, AWS services, and AWS Identity and Access Management (IAM) entities. Access to AWS Secrets Manager requires AWS credentials. Those credentials must contain permission to access the AWS resources you want to access, such as your Secrets Manager secrets. In this blog post, we use Public or broad access to refer to values (or a combination of values) in the resource policy that result in a wide access across AWS accounts and principals.

With AWS Secrets Manager, you have the option to store, rotate, manage, and retrieve many types of secrets. These can be database usernames and passwords, API keys, string values, and binary data. AWS supports the ability to share these secrets cross-account by applying resource policies via the AWS Command Line Interface (AWS CLI) and now via the Secrets Manager console.

Why would you need to share a secret? There are many reasons. Perhaps you have database credentials managed in a central account that are needed by applications in your production account. Maybe you have the binary stored for an encryption key that other accounts will use to create AWS Key Management Service (AWS KMS) keys in their accounts. To achieve this goal while ensuring a secure transfer of information and least privilege permissions, you will need a resource-based policy on your secret, a resource-based policy on your AWS KMS Customer Managed Key (CMK) used for encrypting the secret, and a user-based policy on your IAM principal.

You can still create a policy using AWS CLI or AWS SDK permitting access to a broader scope of entities if your business needs dictate. If you do permit this type of broader access, AWS Secrets Manager will show a notification in your dashboard, as shown in Figure 2, below.

Figure 1. This shows the warning when you try to create a resource policy that grants broad access to your secrets via the AWS Secrets Manager console.

 

Figure 2. This alert pops up when you click on a secret that has a resource policy attached via the CLI that grants broad access to the secret.

In the example below, you’ll see how to use the AWS Secrets Manager console to attach a resource-based policy and allow access to your secret from a secondary account. You will configure a secret in the CENTRAL_SECURITY_ACCOUNT so that it can be accessed by an IAM role in the PRODUCTION_ACCOUNT.

In this example:

  • SECURITY_SECRET = The secret created in the CENTRAL_SECURITY_ACCOUNT.
  • SECURITY_CMK = The AWS KMS CMK used to encrypt the SECURITY_SECRET.
  • PRODUCTION_ROLE = The AWS IAM role used to access the SECURITY_SECRET.
  • PRODUCTION_ACCOUNT = The AWS account that owns the AWS IAM role used for cross-account access.

Overview of solution

The architecture of the solution can be broken down into four steps, which are outlined in Figure 3. The four main steps are:

  1. Create the resource-based policy via the AWS Secrets Manager console on the SECURITY_SECRET in the CENTRAL_SECURITY_ACCOUNT.
  2. Update the SECURITY_CMK policy in the CENTRAL_SECURITY_ACCOUNT to allow the role from the PRODUCTION account access.
  3. Grant the AWS IAM role in the PRODUCTION_ACCOUNT permissions to access the secret.
  4. Test and verify access from the PRODUCTION_ACCOUNT.

 

Figure 3. A visual overview of the four steps to use the AWS Secrets Manager console to attach a resource-based policy, allow access to your secret from a secondary account, and test and verify the process.

Prerequisites

To use the example in this post, you need:

  • An AWS account.
  • An IAM Role with permissions to make modifications in both the CENTRAL_SECURITY_ACCOUNT and the PRODUCTION_ACCOUNT.
  • An IAM role in the PRODUCTION_ACCOUNT you wish to grant permissions to access the SECURITY_SECRET.

Deploying the solution

Step 1: Create a resource-based policy in your CENTRAL_SECURITY account on the SECURITY_SECRET secret

  1. Log in to the AWS Secrets Manager console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_SECRET.
  3. Choose Edit Permissions next to Resource Permissions (optional).

    Figure 4. The dashboard view where you edit permissions.

  4. This will bring you to the page to add the resource policy. It will give you a basic template as shown in Figure 5, below.

    Figure 5. The basic template to add the resource policy.

  5. Since the full policy is provided for you in this example, delete the template from the text box.
  6. Copy the policy below and paste it in the text box. Make sure to replace PRODUCTION with your AWS account ID. You can also adjust the permissions you grant if needed. This policy allows a specific role in the PRODUCTION_ACCOUNT account to retrieve the current version of your secret. In the example, my IAM Role is called PRODUCTION_ROLE. Note you do not need to replace AWSCURRENT with any other value.
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE"
                },
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "*",
                "Condition": {
                    "ForAnyValue:StringEquals": {
                        "secretsmanager:VersionStage": "AWSCURRENT"
                    }
                }
            }
        ]
    }
    

    As shown in Figure 6, below, you’ll see this in the resource policy text area (with your AWS account ID in place of PRODUCTION).

    Figure 6. The example resource policy shown in the console.

  7. Choose Save.
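If you prefer to script this step, an equivalent sketch with the AWS CLI is shown below. It assumes the policy above is saved locally as resource_policy.json (a file name chosen for this example) and that your CLI session is authenticated to the CENTRAL_SECURITY_ACCOUNT; the --block-public-policy option asks Secrets Manager to reject overly broad policies, mirroring the console behavior.

$aws secretsmanager put-resource-policy --secret-id SECURITY_SECRET --resource-policy file://resource_policy.json --block-public-policy

$aws secretsmanager get-resource-policy --secret-id SECURITY_SECRET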

Step 2: Update the resource-based policy in your CENTRAL_SECURITY account on the SECURITY_CMK

Note: Secrets in AWS Secrets Manager are encrypted by default. However, it is important for you to provide authorization for IAM Principals that need to access your secrets. Complete authorization requires access to the secret and the KMS CMK used to encrypt it, which prevents accidental public permissions on the secret. It is important to maintain both sets of authorization to provide appropriate access to secrets.

  1. Log in to the AWS KMS console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_CMK.
  3. Next to Key policy choose the Edit button.
  4. Paste the below code snippet into your key policy to allow the PRODUCTION_ACCOUNT access:
    
    {
        "Sid": "AllowUseOfTheKey",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE"
        },
        "Action": [
            "kms:Decrypt",
            "kms:DescribeKey"
        ],
        "Resource": "arn:aws:kms:us-east-1:CENTRAL_SECURITY:key/SECURITY_CMK"
    }
    

    You will need to replace PRODUCTION with your production account ID, PRODUCTION_ROLE with your production Role name, CENTRAL_SECURITY with your security account ID, and SECURITY_CMK with the CMK key ID of your security CMK. If you forget to swap out the account IDs in the policy with your own, you’ll see an error message similar to the one shown in Figure 7, below.

    Figure 7. Error message that appears if you don’t swap out your account number correctly.

  5. Choose Save changes.
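If you manage the key policy from the CLI instead of the console, one approach (a sketch; the key ID and file name are placeholders) is to retrieve the current policy, add the statement above to it, and then apply the full policy:

$aws kms get-key-policy --key-id SECURITY_CMK --policy-name default --query Policy --output text > key_policy.json

#edit key_policy.json to include the statement above, then apply it
$aws kms put-key-policy --key-id SECURITY_CMK --policy-name default --policy file://key_policy.json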

Step 3: Add permissions to the PRODUCTION_ROLE in the PRODUCTION account

  1. Log in to the AWS IAM console in the PRODUCTION_ACCOUNT account.
  2. In the left navigation pane, choose Roles.
  3. Select PRODUCTION_ROLE.
  4. Under the Permissions tab, choose Add inline policy.
  5. Choose the JSON tab and paste the below policy:
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "kms:Decrypt",
                    "kms:DescribeKey"
                ],
                "Resource": "arn:aws:kms:us-east-1:CENTRAL_SECURITY:key/SECURITY_CMK"
            }
        ]
    }
    

    You will need to replace CENTRAL_SECURITY with your security account ID, SECURITY_SECRET with the secret ID, and SECURITY_CMK with the CMK key ID of your security CMK.

  6. Choose Review policy.
  7. Name the policy Central_Security_Account_Security-Secret-Access, and choose Create policy.
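The same permissions can also be added from the CLI as an inline policy. A sketch, assuming the policy above is saved as production_role_policy.json (a file name chosen for this example) and that you are authenticated to the PRODUCTION_ACCOUNT:

$aws iam put-role-policy --role-name PRODUCTION_ROLE --policy-name Central_Security_Account_Security-Secret-Access --policy-document file://production_role_policy.json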

Step 4: Test access to the SECURITY_SECRET from the PRODUCTION account

Verification of access via AWS CLI

  1. From the AWS CLI, use the PRODUCTION_ROLE credentials to run the get-secret-value command.
  2. Returned output should look like the example, below, in Figure 8.
    
    $aws secretsmanager get-secret-value --secret-id SECURITY_SECRET --version-stage AWSCURRENT
    
    {
        "ARN": "arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET",
        "Name": "SECURITY_SECRET",
        "SecretString": "TheSecretString",
        "CreatedDate": 123456789,
        "VersionId": "64c4250d-0b81-42e0-9a0c-e189d3c9aea8",
        "VersionStages": [
            "AWSCURRENT"
        ]
    }
    
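The command above assumes your CLI session is already running with PRODUCTION_ROLE credentials. One way to obtain temporary credentials for the role (a sketch; the session name is arbitrary) is with AWS STS:

$aws sts assume-role --role-arn arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE --role-session-name secrets-access-test

Export the returned AccessKeyId, SecretAccessKey, and SessionToken values as the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables before calling get-secret-value.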

You can also verify the policy was attached from the CENTRAL_SECURITY_ACCOUNT by following the steps below.

Verification of policy via console

  1. Log into the AWS Secrets Manager console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_SECRET.
  3. Scroll down to where it shows Resource Permissions (optional), and you’ll see your resource policy stored in the console, as shown in Figure 8, below.

    Figure 8. What the example resource policy looks like in the console.

Conclusion

In this post, you saw how to add a resource-based policy on a secret in AWS Secrets Manager using the console, and how to update your AWS KMS CMK resource-based policy to enable access. The example showed setting up cross-account access, allowing a role from the PRODUCTION_ACCOUNT to use the secret in the CENTRAL_SECURITY_ACCOUNT. By using the AWS Secrets Manager console to set up the resource-based policy, you now have a straightforward, visual way to add and manage resource-based policies for your secrets, and you receive notifications if a policy is too broad.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Secrets Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security and Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Simplify granting access to your AWS resources by using tags on AWS IAM users and roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/simplify-granting-access-to-your-aws-resources-by-using-tags-on-aws-iam-users-and-roles/

Recently, AWS enabled tags on IAM principals (users and roles). With this update, you can now use attribute-based access control (ABAC) to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, you can use an IAM policy that grants developers access to resources that match their project tag. As the team adds resources to projects, permissions automatically apply based on attributes. No policy update required for each new resource.

In this blog post, I walk through three examples of how you can control access permissions by using tags on IAM principals and AWS resources. It’s important to note that you can use tags to control access to your AWS resources, but only if the AWS service in question supports tag-based permissions. To learn more about AWS services that support tag-based permissions, see AWS Services That Work with IAM.

As a reminder, I introduced the following tagging condition keys in my post about tagging. Adding these condition keys to the Condition element of a policy tailors the policy’s permissions and limits its actions and resources.

Condition key    | Description                                                | Actions that support the condition key
aws:RequestTag   | Tags that you request to be added or removed.              | iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser
aws:TagKeys      | Tag keys that are checked before the actions are executed. | iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser
aws:PrincipalTag | Tags that exist on the user or role making the call.       | A global condition (all actions across all services support this condition key)
iam:ResourceTag  | Tags that exist on an IAM resource.                        | All IAM APIs that support an IAM user or role, and sts:AssumeRole

Example 1: Grant IAM users access to your AWS resources by using tags

Assume that you have multiple teams of developers who need permissions to start and stop specific EC2 instances based on their cost center. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key aws:PrincipalTag. This helps ensure that the principal can start and stop an instance only if the value of the instance’s CostCenter tag matches the value of the CostCenter tag on the principal. Attaching this policy to your developer roles or groups simplifies permissions management, because you only need to manage a single policy for all of your dev teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}
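For this policy to take effect, the CostCenter tag must be present, with matching values, on both the EC2 instance and the IAM principal. A sketch of applying matching tags (the instance ID, user name, and tag value below are placeholders):

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=CostCenter,Value=12345

aws iam tag-user --user-name developer1 --tags Key=CostCenter,Value=12345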

Example 2: Grant users in an IAM group access to your AWS resources by using tags

Assume there are database administrators in your account who need start, stop, and reboot permissions for specific Amazon RDS instances. In the following policy, I define the start, stop, and reboot actions for Amazon RDS in the Action element of the policy, and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key, aws:PrincipalTag, to select users with the tag, CostCenter=0735. I use the StringEquals condition operator to check for an exact match of the value. I also use the condition key, rds:db-tag, to control access to databases tagged with Project=DataAnalytics. I attach this policy to an IAM group which contains all the database administrators in my account. Now, any database administrator in this group with tag CostCenter=0735 gets access to the RDS instance tagged Project=DataAnalytics.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:DescribeDBInstances",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "rds:RebootDBInstance",
                "rds:StartDBInstance",
                "rds:StopDBInstance"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/CostCenter": "0735",
                    "rds:db-tag/Project": "DataAnalytics"
                }
            }
        }
    ]
}
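As a sketch of how this fits together (the group name, policy file name, account ID, and DB instance name below are placeholders), you could attach the policy to the database administrators group and tag the target DB instance:

aws iam put-group-policy --group-name DatabaseAdmins --policy-name rds-start-stop-by-tag --policy-document file://rds_tag_policy.json

aws rds add-tags-to-resource --resource-name arn:aws:rds:us-east-1:111122223333:db:analytics-db --tags Key=Project,Value=DataAnalytics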

Example 3: Use tags to control access to IAM roles

Let’s say a user, Bob in Account A, needs to manage several applications and needs to assume specific roles in Account B. The following policy grants Bob’s IAM user permissions to assume all roles tagged with ExampleCorpABC. In the Action element of the policy, I define sts:AssumeRole, which grants permissions to assume roles. In the Resource element of the policy, I define a wildcard (*) to grant access to all roles, but use the condition key, iam:ResourceTag, in the Condition element to scope down the roles that Bob can assume. As with the previous policy, I use the StringEquals operator to ensure that Bob can assume roles that have the tag, Project=ExampleCorpABC. Now, whenever I create a role in Account B and trust Bob’s account in the role’s trust policy, Bob can only assume this role if it is tagged with Project=ExampleCorpABC.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:ResourceTag/Project": "ExampleCorpABC"
        }
      }
    }
  ]
} 
 
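A sketch of the Account B side of this example (the role name and account IDs are placeholders): tag the role so that Bob’s policy allows him to assume it, and then Bob assumes it with AWS STS.

#in Account B: tag the role that Bob should be able to assume
aws iam tag-role --role-name ExampleCorpABCAppRole --tags Key=Project,Value=ExampleCorpABC

#from Account A: Bob assumes the role
aws sts assume-role --role-arn arn:aws:iam::222222222222:role/ExampleCorpABCAppRole --role-session-name bob-session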

Summary

You now can tag your IAM principals to control access to your AWS resources, and the three examples I’ve included in this post show how tags can help you simplify access management.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Add Tags to Manage Your AWS IAM Users and Roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/add-tags-to-manage-your-aws-iam-users-and-roles/

We made it easier for you to manage your AWS Identity and Access Management (IAM) resources by enabling you to add tags to your IAM users and roles (also known as IAM principals). Tags enable you to add customizable key-value pairs to resources, and many AWS services support tagging of AWS resources. Now, you can use tags to add custom attributes such as project name and cost center to your IAM principals. Additionally, tags on IAM principals simplify permissions management. For example, you can author a policy that allows a user to assume the roles for a specific project by using a tag. As you add roles with that tag, users gain permissions to assume those roles automatically. In a subsequent post, I will review how you can use tags on IAM principals to control access to your AWS resources.

In this blog post, I introduce the new APIs and conditions you can use to tag IAM principals, show three example policies that address three tagging use cases, and I show how to add tags to IAM principals by using the AWS Console and CLI. The first example policy grants permissions to tag principals. The second example policy requires specific tags for new users, and the third grants permissions to manage specific tags on principals.

Note: You must have the latest version of the AWS CLI to tag your IAM principals. Follow these instructions to update the AWS CLI.

New IAM APIs for tagging IAM principals

The following table lists the new IAM APIs that you must grant access to using an IAM policy so that you can view and modify tags on IAM principals. These APIs support resource-level permissions so that you can grant permissions to tag only specific principals.

Action           | Description                                  | Supports resource-level permissions
iam:ListUserTags | Lists the tags on an IAM user.               | arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
iam:ListRoleTags | Lists the tags on an IAM role.               | arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
iam:TagUser      | Creates or modifies the tags on an IAM user. | arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
iam:TagRole      | Creates or modifies the tags on an IAM role. | arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
iam:UntagUser    | Removes the tags on an IAM user.             | arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
iam:UntagRole    | Removes the tags on an IAM role.             | arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>

In addition to the new APIs, tagging parameters are now available for the existing iam:CreateUser and iam:CreateRole APIs to enable you to tag your users and roles when they are created. I show how you can add tags to a new user later in this blog post.

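For example, a sketch of creating a role with tags at creation time; the role name, trust policy file, and tag values below are placeholders:

aws iam create-role --role-name MyTaggedRole --assume-role-policy-document file://trust_policy.json --tags Key=Project,Value=IAM Key=Service,Value=S3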
Now that you know the APIs you can use to tag IAM principals, let’s review an example of how to grant permissions to tag by using an IAM policy.

Example policy 1: Grant permissions to tag specific users and all roles

To get started using tags, you must first ensure you grant permissions to do so. The following policy grants permissions to tag one IAM user and all roles.

Note: Replace <ACCOUNT-ID> with your 12-digit account number.

        
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUserTags",
                "iam:ListRoleTags",
                "iam:TagUser",
                "iam:TagRole",
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": [
                "arn:aws:iam::<ACCOUNT-ID>:user/John",
                "arn:aws:iam::<ACCOUNT-ID>:role/*"
            ]
        }
    ]
}

This policy lists all the actions required to see and modify tags for IAM principals. The Resource element of the policy grants permissions to tag one user, John, and all roles in the account by specifying the Amazon Resource Name (ARN).

Now that I have reviewed the new APIs you can use to view and modify tags on your IAM principals, let’s go over the new IAM condition keys you can use in policies.

New IAM condition keys for tagging IAM principals

The following table lists the condition keys you can use in your IAM policies to control access by using tags. In this section, I also show examples of how context keys in policies can help you grant more specific access for tagging IAM principals.

Condition key    | Description                                                       | Actions that support the condition key
aws:RequestTag   | Tags that you request to be added or removed from a user or role | iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser
aws:TagKeys      | Tag keys that are checked before the actions are executed        | iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser
aws:PrincipalTag | Tags that exist on the user or role making the call              | A global condition (all actions across all services support this condition key)
iam:ResourceTag  | Tags that exist on the resource                                   | Any IAM API that supports an IAM user or role, and sts:AssumeRole

Now that I have explained both the new APIs and condition keys for tagging IAM users and roles, let’s review two more use cases with tags.

Example policy 2: Require tags for new IAM users

Let’s say I want to apply the same tags to all new IAM users so that I can track them consistently along with my other AWS resources. Now, when you create a user, you can also pass in one or more tags. Let’s say I want to ensure that all the administrators on my team apply a CostCenter tag. I create an IAM policy that includes the actions required to create and tag users. I also use the Condition element to list the tags required to be added to each new user during creation. If an administrator forgets to add a tag, the administrator’s attempt to create the user fails.

Note: This example applies to creating new users by using the AWS CLI.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisRequiresSpecificTagsWhenYouCreateANewUsers",
            "Effect": "Allow",
            "Action": [
                "iam:CreateUser",
                "iam:TagUser"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": "*"
                }
            }
        }
    ]
} 

The preceding policy grants iam:CreateUser and iam:TagUser to allow creating and tagging IAM users in the AWS CLI. The Condition element uses the condition key aws:RequestTag to require that the CostCenter tag be provided when the user is created.

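Under this policy, and assuming no other policy grants iam:CreateUser, a create-user call that supplies the CostCenter tag succeeds, while one that omits it is denied. The user names below are placeholders:

#allowed: the CostCenter tag is supplied at creation time
aws iam create-user --user-name test-user-1 --tags Key=CostCenter,Value=1234

#denied: no CostCenter tag is supplied, so the condition is not met
aws iam create-user --user-name test-user-2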
Example policy 3: Grant permissions to manage specific tags on IAM principals

Let’s say I want an administrator on my team, Alice, to manage two tags, Project and CostCenter, for all IAM principals in our account. The following policy allows Alice to assign any value to the Project tag, but limits the values she can assign to the CostCenter tag.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewAllTags",
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles",
                "iam:ListUserTags",
                "iam:ListRoleTags"
            ],
            "Resource": "*"
        },
        {
           "Sid": "TagUserandRoleWithAnyProjectName",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/Project": "*"
                }
            }
        },
        {
           "Sid": "TagUserandRoleWithTwoCostCenterValues",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": [
                        "1234",
                        "5678"
                    ]
                }
            }
        },
        {
           "Sid": "UntagUserandRoleProjectCostCenter",
            "Effect": "Allow",
            "Action": [
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringLike": {
                    "aws:TagKeys": [
                        "CostCenter",
                        "Project"
                    ]
                }
            }
        }
    ]
}

This policy permits Alice to view, add, and remove the Project and CostCenter tags for all principals in the account. In the Condition element of the second and third statements of the policy, I use the condition key aws:RequestTag to define the tags Alice is allowed to add as well as the values she is able to assign to those tags, while the fourth statement uses aws:TagKeys to limit which tags she can remove. Alice can assign any value to the Project tag, but is limited to two values, 1234 and 5678, for the CostCenter tag.

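As an illustration of how this policy behaves for Alice (the role name below is a placeholder), assuming no other policy grants broader tagging permissions:

#allowed: 1234 is one of the two permitted CostCenter values
aws iam tag-role --role-name MyAppRole --tags Key=CostCenter,Value=1234

#denied: 9999 is not a permitted CostCenter value
aws iam tag-user --user-name John --tags Key=CostCenter,Value=9999

#allowed: the Project and CostCenter keys can be removed
aws iam untag-user --user-name John --tag-keys Project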
Now that you understand how to grant permissions to tag IAM principals, I will show you how to run the commands to tag a new user and an existing role.

How to add tags to a new IAM user

Using the CLI
Let’s say that IAM user John is a new team member who needs access to AWS. To manage resources, I use the following command to create John and add the CostCenter and EmailID tags.

aws iam create-user --user-name John --tags Key=CostCenter,Value=1234 Key=EmailID,Value=[email protected]

To give John access to the appropriate AWS actions and resources, you can use the CLI to attach policies to John.

Using the console
You can also add tags to a user using the AWS console through the user creation flow as shown below.

  1. Sign in to the AWS Management Console and navigate to the IAM console.
  2. In the left navigation pane, select Users, and then select Add user.
  3. Type the user name for the new user.
  4. Select the type of access this user will have. You can select programmatic access, access to the AWS Management Console, or both.
  5. Select Next: Permissions.
  6. On the Set permissions page, specify how you want to assign permissions to this set of new users. You can choose between Add user to group, Copy permissions from existing user, or Attach existing policies to user directly.
  7. Select Next:Tags.
  8. On the Add tags (optional) page, add the tags you want to attach to this principal. I add the CostCenter tag key with a value of 1234 and the EmailID tag key with a value of [email protected].
     
    Figure 1: Add tags

  9. Select Next: Review.
  10. Once you reviewed all the information, select Create user. This action creates your user John with the permissions and tags you attached. You can navigate to the user Details page to view this user.

How to add tags to an existing IAM role

Using the CLI
To manage custom data for each role in my account, I want to add tags such as Company, Project, Service, and CreationDate to my existing roles. The tag-role command adds tags to one role at a time (it doesn’t accept wildcards for role names), so you run it for each role you want to tag. To be able to run these commands, you must have the required permissions granted to you in an IAM policy.

For example, I can add the Project and Service tags, with values, to a specific role, Migration, by using the following command:

aws iam tag-role --role-name Migration --tags Key=Project,Value=IAM Key=Service,Value=S3

Using the console
You can use the console to add tags to roles individually. To do this, in the left navigation pane, select Roles, and then select the role you want to add tags to.

Figure 2: Add tags to individual roles

To view the existing tags on the role, select the Tags tab. The image below shows the Migration role in my account with two existing tags: Project with the value IAM and Service with the value S3. To add tags or edit the existing tags, select Edit tags.

Figure 3: Edit tags

Summary

When you tag IAM principals, you add custom attributes to the users and roles in your account to make it easier to manage your IAM resources. In this post, I reviewed the new APIs and condition keys and showed three policy examples that address use cases to grant permissions to tag your IAM principals. In a subsequent post, I will review how you can use tags on IAM principals to control access to AWS resources and other accounts.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Delegate permission management to developers by using IAM permissions boundaries

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

Today, AWS released a new IAM feature that makes it easier for you to delegate permissions management to trusted employees. As your organization grows, you might want to allow trusted employees to configure and manage IAM permissions to help your organization scale permission management and move workloads to AWS faster. For example, you might want to grant a developer the ability to create and manage permissions for an IAM role required to run an application on Amazon EC2. This ability is powerful and might be used inappropriately or accidentally to attach an administrator access policy to obtain full access to all resources in an account. Now, you can set a permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.

A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have. Before we walk you through a specific example, here is an overview of how permissions boundaries work. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. See the following diagram for a visual representation.
 

Figure 1: The intersection of permissions boundary and permissions policy

In this post, we’ll walk through an example that shows how to grant an employee permission to create IAM roles and assign permissions. We’ll also show how to ensure that these IAM roles can only access Amazon DynamoDB actions and resources in the AWS EU (Frankfurt) region. This solution requires the following steps.

IAM administrator tasks

  1. Define the permissions boundary by creating a customer-managed policy.
  2. Create and attach a permissions policy to allow an employee to create roles, but only with a permissions boundary and a name that meets a specific convention.
  3. Create and attach a permissions policy to allow an employee to pass this role to Amazon EC2.

Employee tasks

  1. Create a role with the required permissions boundary.
  2. Attach a permissions policy to the role.

Administrator step 1: Define the permissions boundary

As an IAM administrator, we’ll create a customer managed policy that grants permissions to put, update, and delete items on all DynamoDB tables in the AWS EU (Frankfurt) region. We’ll require employees to set this policy as the permissions boundary for the roles they create. To follow along, paste the following JSON policy in a file with the name DynamoDB_Boundary_Frankfurt_Text.json.


{
  "Version" : "2012-10-17",
  "Statement" : [
  {
    "Effect": "Allow",
    "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem"
   ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:RequestedRegion": "eu-central-1"
        }
    }
  }
]
}

Next, use the create-policy AWS CLI command to create the policy, DynamoDB_Boundary_Frankfurt.

$aws iam create-policy --policy-name DynamoDB_Boundary_Frankfurt --policy-document file://DynamoDB_Boundary_Frankfurt_Text.json

Note: You can also use an AWS managed policy as a permissions boundary.

Administrator step 2: Create and attach the permissions policy

Create a policy that grants permissions to create IAM roles with the DynamoDB_Boundary_Frankfurt permissions boundary, and a name that begins with the prefix MyTestApp. This policy also grants permissions to create and attach IAM policies to roles with this boundary and naming convention. The permissions boundary controls the maximum permissions these roles can have. The naming convention enables administrators to more effectively grant access to manage and use these roles, without updating the employee’s permissions when they create a role. The naming convention also makes it easier to audit and identify roles created by an employee. To create this policy, paste the following JSON policy document in a file with the name Permissions_Policy_For_Employee_Text.json. Make sure to replace the variable <ACCOUNT_NUMBER> with your own AWS account number. You can update the policy to grant additional permissions, such as launching EC2 instances in a specific subnet or allowing read-only access on items in a DynamoDB table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SetPermissionsBoundary",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy"
            ],
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*",
            "Condition": {
                "StringEquals": {
                    "iam:PermissionsBoundary": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt"
                }
            }
        },
        {
            "Sid": "CreateAndEditPermissionsPolicy",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion"
            ],
            "Resource": "*"
        }
    ]
}

Next, use the create-policy command to create the customer managed policy, Permissions_Policy_For_Employee, and use the attach-role-policy command to attach this policy to the principal, MyEmployeeRole, used by your employee.

$aws iam create-policy --policy-name Permissions_Policy_For_Employee --policy-document file://Permissions_Policy_For_Employee_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Permissions_Policy_For_Employee --role-name MyEmployeeRole

Administrator step 3: Create and attach the permissions policy for granting permissions to pass the role

Create a policy to allow the employee to pass the roles they created to AWS services, such as Amazon EC2, enabling these services to assume the roles and perform actions on the employee’s behalf. To do this, paste the following JSON policy document in a file with the name Pass_Role_Policy_Text.json.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*"
        }
    ]
}

Then, use the create-policy command to create the policy, Pass_Role_Policy, and the attach-role-policy command to attach this policy to the principal, MyEmployeeRole.

$ aws iam create-policy --policy-name Pass_Role_Policy --policy-document file://Pass_Role_Policy_Text.json

$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Pass_Role_Policy --role-name MyEmployeeRole

As the IAM administrator, we’ve successfully defined a permissions boundary. We’ve also granted our employee the ability to create IAM roles and attach permissions policies, while ensuring the permissions of the roles don’t exceed the boundary that we set.

Managing Permissions Boundaries

Changing or removing a permissions boundary is a powerful permission. You should reserve this permission for full administrators in an account. You can do this by ensuring that policies you use as permissions boundaries don’t include the iam:DeleteUserPermissionsBoundary and iam:DeleteRolePermissionsBoundary actions. Or, if you allow “iam:*” actions, then you must explicitly deny those actions.
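For example, a minimal sketch of an explicit deny statement that an administrator could add alongside broad IAM permissions looks like the following (the Sid is illustrative, not from this walkthrough):

{
  "Sid": "DenyBoundaryRemoval",
  "Effect": "Deny",
  "Action": [
    "iam:DeleteUserPermissionsBoundary",
    "iam:DeleteRolePermissionsBoundary"
  ],
  "Resource": "*"
}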

Employee step 1: Create a role by providing the permissions boundary

Your employee can now use the create-role command to create a new IAM role with the DynamoDB_Boundary_Frankfurt permissions boundary and the attach-role-policy command to attach permissions policies to this role.

For this post, we assume that your employee operates an application, MyTestApp, on Amazon EC2 that requires access to the Amazon DynamoDB table, MyTestApp_DDB_Table. The employee can paste the following JSON policy document and save it as Role_Trust_Policy_Text.json to define the trust policy.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then, the employee can use the create-role command to create the IAM role, MyTestAppRole, and define the permissions boundary as DynamoDB_Boundary_Frankfurt. The create-role command will fail if the employee doesn’t provide the appropriate permissions boundary. Make sure the <ACCOUNT_NUMBER> variable is replaced with the employee’s AWS account number in the commands below.

$ aws iam create-role --role-name MyTestAppRole \
--assume-role-policy-document file://Role_Trust_Policy_Text.json \
--permissions-boundary arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt

Next, the employee grants permissions to this role by using the attach-role-policy command to attach the following policy, MyTestApp_DDB_Permissions. This policy grants the ability to perform all actions on the DynamoDB table, MyTestApp_DDB_Table.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": [
        "arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table"
      ]
    }
  ]
}
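Note: The walkthrough doesn’t show the creation step for this customer managed policy. Assuming the JSON document above is saved as MyTestApp_DDB_Permissions_Text.json (a hypothetical file name), the employee would first create it with the create-policy command before attaching it:

$ aws iam create-policy --policy-name MyTestApp_DDB_Permissions --policy-document file://MyTestApp_DDB_Permissions_Text.json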

$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp_DDB_Permissions \
--role-name MyTestAppRole

Although the employee granted full DynamoDB access, the effective permissions for this IAM role are the intersection of the permissions boundary, DynamoDB_Boundary_Frankfurt, and the permissions policy, MyTestApp_DDB_Permissions. This means the role only has access to put, update, and delete items on the MyTestApp_DDB_Table in the AWS EU (Frankfurt) region. See the following diagram for a visual representation.
 

Figure 2: Effective permissions for the IAM role

Summary

We demonstrated how to use permissions boundaries to delegate IAM permission management. Using permissions boundaries can help you scale permission management in your organization and move workloads to AWS faster. To learn more, see the IAM documentation for permissions boundaries.

If you have comments about this post, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. In this way, the native integration between SNS and Amazon CloudWatch provides visibility into the number of messages delivered, as well as the number of messages filtered out.

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.
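As a minimal sketch (the attribute name and value below are hypothetical, not taken from this post), a filter policy is a JSON document assigned to a subscription’s FilterPolicy attribute, which you can set with the AWS CLI:

$ aws sns set-subscription-attributes \
  --subscription-arn <subscription-arn> \
  --attribute-name FilterPolicy \
  --attribute-value '{"event_type": ["order_placed"]}'

With this policy in place, only messages published with a message attribute event_type equal to order_placed are delivered to that subscription; everything else is filtered out and counted in the metrics described below.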

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted and SNS stops attempting to deliver the message. At that point, the subscribing endpoint is likely no longer reachable; for example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly, as shown in the example alarm after this list.
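For example, a minimal sketch of a CloudWatch alarm on NumberOfNotificationsFailed might look like the following; the topic name and the alarm-action ARN are placeholders, not resources from this post:

$ aws cloudwatch put-metric-alarm \
  --alarm-name sns-delivery-failures \
  --namespace AWS/SNS \
  --metric-name NumberOfNotificationsFailed \
  --dimensions Name=TopicName,Value=MyTopic \
  --statistic Sum --period 300 --evaluation-periods 1 \
  --threshold 0 --comparison-operator GreaterThanThreshold \
  --alarm-actions <alerting-topic-arn>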

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.
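If you prefer the command line, the same data behind the graph can be retrieved with get-metric-statistics. This is a sketch that assumes a topic named MyTopic and a one-day window; adjust the metric name, dimensions, and times for your own account:

$ aws cloudwatch get-metric-statistics \
  --namespace AWS/SNS \
  --metric-name NumberOfNotificationsFilteredOut \
  --dimensions Name=TopicName,Value=MyTopic \
  --statistics Sum --period 3600 \
  --start-time 2018-05-01T00:00:00Z --end-time 2018-05-02T00:00:00Z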

Summary

SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering are available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.


timeShift(GrafanaBuzz, 1w) Issue 46

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/05/24/timeshiftgrafanabuzz-1w-issue-46/

Welcome to TimeShift The day has finally arrived; GDPR is officially in effect! These new policies are meant to provide more transparency about the data companies collect on users, and how that data is used. I for one am just excited that the onslaught of "We’ve updated our privacy policy" emails arriving in my pummeled inbox is nearing its end.
Grafana Labs is no exception. We encourage you to check out our privacy policy, and if you have any questions, feel free to contact us at [email protected]

The Benefits of Side Projects

Post Syndicated from Bozho original https://techblog.bozho.net/the-benefits-of-side-projects/

Side projects are the things you do at home, after work, for your own “entertainment”, or to satisfy your desire to learn new stuff, in case your workplace doesn’t give you that opportunity (or at least not enough of it). Side projects are also a way to build stuff that you think is valuable but not necessarily “commercialisable”. Many side projects are open-sourced sooner or later and some of them contribute to the pool of tools at other people’s disposal.

I’ve outlined one recommendation about side projects before – do them with technologies that are new to you, so that you learn important things that will keep you better positioned in the software world.

But there are more benefits than that – serendipitous benefits, for example. And I’d like to tell some personal stories about that. I’ll focus on a few examples from my list of side projects to show how, through a sort-of butterfly effect, they helped shape my career.

The computoser project, no matter how cool algorithmic music composition is, didn’t manage to have much of a long-term impact. But it did teach me something apart from niche musical theory – how to read a bulk of scientific papers (mostly computer science) and understand them without being formally trained in the particular field. We’ll see how that was useful later.

Then there was the “State alerts” project – a website that scraped content from public institutions in my country (legislation, legislation proposals, decisions by regulators, new tenders, etc.), made them searchable, and “subscribable” – so that you get notified when a keyword of interest is mentioned in newly proposed legislation, for example. (I obviously subscribed for “information technologies” and “electronic”).

And that project turned out to have a significant impact on the following years. First, I chose a new technology to write it with – Scala. Which turned out to be of great use when I started working at TomTom, and on the 3rd day I was transferred to a Scala project, which was way cooler and much more complex than the original one I was hired for. It was a bit ironic, as my colleagues had just read that “I don’t like Scala” a few weeks earlier, but nevertheless, that was one of the most interesting projects I’ve worked on, and it went on for two years. Had I not known Scala, I’d probably be gone from TomTom much earlier (as the other project was restructured a few times), and I would not have learned many of the scalability, architecture and AWS lessons that I did learn there.

But the very same project had an even more important follow-up. Because of its “civic hacking” flavour, I was invited to join an informal group of developers (later formalized as an NGO) who create tools that are useful for society (something like MySociety.org). That group gathered regularly, discussed both tools and policies, and at some point we put together a list of policy priorities that we wanted to lobby policy makers to adopt. One of them was open source for the government; another was open data. As a result of our interaction with an interim government, we donated the official open data portal of my country, which functions to this day.

As a result of that, a few months later we got a proposal from the deputy prime minister’s office to “elect” one of the group as an advisor to the cabinet. And we decided that could be me. So I went for it and became advisor to the deputy prime minister. The job was unlike anything one could imagine, and it was challenging and fascinating. We managed to pass legislation, including a law that requires open source for custom government projects, eID, and open data. And all of that would not have been possible without my little side project.

As for my latest side project, LogSentinel – it became my current startup company. And not without help from the previous two mentioned above – the computer science paper reading was of great use when I was navigating the crypto papers landscape, and from the government job I not only gained invaluable legal knowledge, but I also “got” a co-founder.

Some other side projects died without much fanfare, and that’s fine. But the ones above shaped my “story” in a way that would not have been possible otherwise.

And I agree that such a serendipitous chain of events could have happened without side projects – I could’ve gotten these opportunities by meeting someone at a bar (unlikely, but who knows). But we, as software engineers, are capable of tilting chance towards us by utilizing our skills. Side projects are our “extracurricular activities”, and they often lead to unpredictable but rather positive chains of events. They would rarely be the only factor, but they are certainly great at unlocking potential.

The post The Benefits of Side Projects appeared first on Bozho's tech blog.

RFC: LWN’s draft updated privacy policy

Post Syndicated from corbet original https://lwn.net/Articles/755089/rss

It is the season for web sites to be updating their privacy policies and
obtaining consent from their users for whatever data they collect. LWN,
being short of staff with the time or interest to work in this area, is
rather late to this game. The first step is an updated
privacy policy, which we’re now putting out for review. Little has changed
from the current version; we still don’t
collect much data, share data with others, or attempt to
monetize what we have in any way. We would like to ask interested readers
to have a look and let us know about any potential problems they see.

Secure Build with AWS CodeBuild and LayeredInsight

Post Syndicated from Asif Khan original https://aws.amazon.com/blogs/devops/secure-build-with-aws-codebuild-and-layeredinsight/

This post is written by Asif Awan, Chief Technology Officer of Layered Insight; Subin Mathew, Software Development Manager for AWS CodeBuild; and Asif Khan, Solutions Architect.

Enterprises adopt containers because they recognize the benefits: speed, agility, portability, and high compute density. They understand how accelerating application delivery and deployment pipelines makes it possible to rapidly slipstream new features to customers. Although the benefits are indisputable, this acceleration raises concerns about security and corporate compliance with software governance. In this blog post, I provide a solution that shows how Layered Insight, the pioneer and global leader in container-native application protection, can be used with seamless application build and delivery pipelines like those available in AWS CodeBuild to address these concerns.

Layered Insight solutions

Layered Insight enables organizations to unify DevOps and SecOps by providing complete visibility and control of containerized applications. Using the industry’s first embedded security approach, Layered Insight solves the challenges of container performance and protection by providing accurate insight into container images, adaptive analysis of running containers, and automated enforcement of container behavior.

 

AWS CodeBuild

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.

 

Problem Definition

Security and compliance concerns span the lifecycle of application containers. Common concerns include:

Visibility into the container images. You need to verify the software composition information of the container image to determine whether known vulnerabilities associated with any of the software packages and libraries are included in the container image.

Governance of container images is critical because only certain open source packages/libraries, of specific versions, should be included in the container images. You need support for mechanisms for blacklisting all container images that include a certain version of a software package/library, or for only allowing open source software that comes with a specific type of license (such as Apache, MIT, GPL, and so on). You need to be able to address challenges such as:

• Defining the process for image compliance policies at the enterprise, department, and group levels.

• Preventing the images that fail the compliance checks from being deployed in critical environments, such as staging, pre-prod, and production.

Visibility into running container instances is critical, including:

• CPU and memory utilization.

• Security of the build environment.

• All activities (system, network, storage, and application layer) of the application code running in each container instance.

Protection of running container instances that is:

• Zero-touch to the developers (not an SDK-based approach).

• Zero-touch to the DevOps team and doesn’t limit the portability of the containerized application.

• This protection must retain the option to switch to a different container stack or orchestration layer, or even to a different Container as a Service (CaaS).

• And it must be a fully automated solution to SecOps, so that the SecOps team doesn’t have to manually analyze and define detailed blacklist and whitelist policies.

 

Solution Details

In AWS CodeCommit, we have three projects:
●     “Democode” is a simple Java application, with one buildspec to build the app into a Docker container (run by the build-demo-image CodeBuild project), and another to instrument said container (the instrument-image CodeBuild project). The resulting container is stored in the ECR repo javatest as javatest:20180415-layered. This instrumented container is running in the AWS Fargate cluster demo-java-app and can be seen in the Layered Insight runtime console as the javatest application in us-east-1.
●     aws-codebuild-docker-images is a clone of the official aws-codebuild-docker-images repo on GitHub. This CodeCommit project is used by the build-python-builder CodeBuild project to build the Python 3.3.6 CodeBuild image, which is stored in the codebuild-python ECR repo. We then manually instructed the Layered Insight console to instrument the image.
●     scan-java-image contains just a buildspec.yml file. This file is used by the scan-java-image CodeBuild project to instruct Layered Assessment to perform a vulnerability scan of the javatest container image built previously, and then run the scan results through a compliance policy that states there should be no medium vulnerabilities. This build fails — but in this case that is a success: the scan completes successfully, but compliance fails as there are medium-level issues found in the scan.

This build is performed using the instrumented version of the Python 3.3.6 CodeBuild image, so the activity of the processes running within the build are recorded each time within the LI console.

Build container image

Create or use a CodeCommit project with your application. To build the container image and store it in Amazon Elastic Container Registry (Amazon ECR), add a buildspec file to the project and create a CodeBuild project that runs it. A minimal example buildspec is sketched below.
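As a rough sketch only (the account ID, region, and image tag are placeholders rather than values from this walkthrough, and the ECR login command shown is the AWS CLI v1 syntax current at the time of writing), such a buildspec might look like:

version: 0.2
phases:
  pre_build:
    commands:
      # Log in to Amazon ECR so the built image can be pushed
      - $(aws ecr get-login --no-include-email --region <aws-region>)
  build:
    commands:
      # Build the application image from the Dockerfile in this repo
      - docker build -t <account-id>.dkr.ecr.<aws-region>.amazonaws.com/javatest:latest .
  post_build:
    commands:
      # Push the image to the ECR repository
      - docker push <account-id>.dkr.ecr.<aws-region>.amazonaws.com/javatest:latest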

Scan container image

Once the image is built, create a new buildspec in the same project (or a new one) that looks similar to the one below (update the ECR URL as necessary):

version: 0.2
phases:
  pre_build:
    commands:
      - echo Pulling down LI Scan API client scripts
      - git clone https://github.com/LayeredInsight/scan-api-example-python.git
      - echo Setting up LI Scan API client
      - cd scan-api-example-python
      - pip install layint_scan_api
      - pip install -r requirements.txt
  build:
    commands:
      - echo Scanning container started on `date`
      - IMAGEID=$(./li_add_image --name <aws-region>.amazonaws.com/javatest:20180415)
      - ./li_wait_for_scan -v --imageid $IMAGEID
      - ./li_run_image_compliance -v --imageid $IMAGEID --policyid PB15260f1acb6b2aa5b597e9d22feffb538256a01fbb4e5a95

Add the buildspec file to the git repo, push it, and then create a CodeBuild project using the instrumented Python 3.3.6 CodeBuild image at <aws-region>.amazonaws.com/codebuild-python:3.3.6-layered. Set the following environment variables in the CodeBuild project:
●     LI_APPLICATIONNAME – name of the build to display
●     LI_LOCATION – location of the build project to display
●     LI_API_KEY – ApiKey:<key-name>:<api-key>
●     LI_API_HOST – location of the Layered Insight API service

Instrument container image

Next, to instrument the new container image:

  1. In the Layered Insight runtime console, ensure that the ECR registry and credentials are defined (click the Setup icon and the ‘+’ sign on the top right of the screen to add a new container registry). Note the name given to the registry in the console, as this needs to be referenced in the li_add_image command in the script below.
  2. Next, add a new buildspec (with a new name) to the CodeCommit project, such as the one shown below. This code will download the Layered Insight runtime client, and use it to instruct the Layered Insight service to instrument the image that was just built:
    version: 0.2
    phases:
      pre_build:
        commands:
          - echo Pulling down LI API Runtime client scripts
          - git clone https://github.com/LayeredInsight/runtime-api-example-python
          - echo Setting up LI API client
          - cd runtime-api-example-python
          - pip install layint-runtime-api
          - pip install -r requirements.txt
      build:
        commands:
          - echo Instrumentation started on `date`
          - ./li_add_image --registry "Javatest ECR" --name IMAGE_NAME:TAG --description "IMAGE DESCRIPTION" --policy "Default Policy" --instrument --wait --verbose
  3. Commit and push the new buildspec file.
  4. Going back to CodeBuild, create a new project, with the same CodeCommit repo, but this time select the new buildspec file. Use a Python 3.3.6 builder – either the AWS or LI Instrumented version.
  5. Click Continue
  6. Click Save
  7. Run the build, again on the master branch.
  8. If everything runs successfully, a new image should appear in the ECR registry with a -layered suffix. This is the instrumented image.

Run instrumented container image

When the instrumented container is run — in ECS, Fargate, or elsewhere — it logs data back to the Layered Insight runtime console. Its appearance in the console can be modified by setting the LI_APPLICATIONNAME and LI_LOCATION environment variables when running the container.

Conclusion

In this post, we provided the steps needed to embed governance and runtime security into your build pipelines running on AWS CodeBuild using Layered Insight.

 

 

 

Easier way to control access to AWS regions using IAM policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/easier-way-to-control-access-to-aws-regions-using-iam-policies/

We made it easier for you to comply with regulatory standards by controlling access to AWS Regions using IAM policies. For example, if your company requires users to create resources in a specific AWS region, you can now add a new condition to the IAM policies you attach to your IAM principal (user or role) to enforce this for all AWS services. In this post, I review conditions in policies, introduce the new condition, and review a policy example to demonstrate how you can control access across multiple AWS services to a specific region.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element that lets you specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.
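For illustration only (this fragment isn’t part of the example policy later in this post), a service-specific condition on an EC2 action could be written as:

"Condition": {
    "StringEquals": {
        "ec2:InstanceType": "t2.micro"
    }
}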

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

AWS:RequestedRegion condition key

The new global condition key, aws:RequestedRegion, supports all actions across all AWS services. You can use any string operator and specify any AWS region for its value.

Condition key: aws:RequestedRegion
Description: Allows you to specify the region to which the IAM principal (user or role) can make API calls
Operator(s): All string operators (for example, StringEquals)
Value: Any AWS region (for example, us-east-1)

I’ll now demonstrate the use of the new global condition key.

Example: Policy with region-level control

Let’s say a group of software developers in my organization is working on a project using Amazon EC2 and Amazon RDS. The project requires a web server running on an EC2 instance using Amazon Linux and a MySQL database instance in RDS. The developers also want to test AWS Lambda, an event-driven compute service, to retrieve data from the MySQL DB instance in RDS for future use.

My organization requires all the AWS resources to remain in the Frankfurt, eu-central-1, region. To make sure this project follows these guidelines, I create a single IAM policy for all the AWS services that this group is going to use and apply the new global condition key aws:RequestedRegion for all the services. This way I can ensure that any new EC2 instances launched or any database instances created using RDS are in Frankfurt. This policy also ensures that any Lambda functions this group creates for testing are also in the Frankfurt region.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "rds:Describe*",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:GetRole",
                "iam:ListInstanceProfiles",
                "iam:AttachRolePolicy",
                "lambda:GetAccountSettings"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "lambda:CreateFunction",
                "lambda:InvokeFunction"
            ],
            "Resource": "*",
      "Condition": {"StringEquals": {"aws:RequestedRegion": "eu-central-1"}}

        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::account-id:role/*"
        }
    ]
}

The first statement in the above example contains all the read-only actions that let my developers use the console for EC2, RDS, and Lambda. The permissions for IAM-related actions are required to launch EC2 instances with a role, enable enhanced monitoring in RDS, and for AWS Lambda to assume the IAM execution role to execute the Lambda function. I’ve combined all the read-only actions into a single statement for simplicity. The second statement is where I give write access to my developers for the three services and restrict the write access to the Frankfurt region using the aws:RequestedRegion condition key. You can also list multiple AWS regions with the new condition key if your developers are allowed to create resources in multiple regions. The third statement grants permissions for the IAM action iam:PassRole required by AWS Lambda. For more information on allowing users to create a Lambda function, see Using Identity-Based Policies for AWS Lambda.
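As a brief sketch, allowing two regions only requires turning the condition value into a list; the second region below is purely illustrative, not part of this scenario:

"Condition": {
    "StringEquals": {
        "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
    }
}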

Summary

You can now use the aws:RequestedRegion global condition key in your IAM policies to specify the regions in which the IAM principal (user or role) can make API calls. This capability makes it easier for you to restrict the AWS regions your IAM principals can use, helping you comply with regulatory standards and improve account security. For more information about this global condition key and policy examples using aws:RequestedRegion, see the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Ransomware Update: Viruses Targeting Business IT Servers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ransomware-update-viruses-targeting-business-it-servers/

Ransomware warning message on computer

As ransomware attacks have grown in number in recent months, the tactics and attack vectors also have evolved. While the primary method of attack used to be to target individual computer users within organizations with phishing emails and infected attachments, we’re increasingly seeing attacks that target weaknesses in businesses’ IT infrastructure.

How Ransomware Attacks Typically Work

In our previous posts on ransomware, we described the common vehicles used by hackers to infect organizations with ransomware viruses. Most often, downloaders distribute trojan horses through malicious downloads and spam emails. The emails contain a variety of file attachments, which if opened, will download and run one of the many ransomware variants. Once a user’s computer is infected with a malicious downloader, it will retrieve additional malware, which frequently includes crypto-ransomware. After the files have been encrypted, a ransom payment is demanded of the victim in order to decrypt the files.

What’s Changed With the Latest Ransomware Attacks?

In 2016, a customized ransomware strain called SamSam began attacking the servers in primarily health care institutions. SamSam, unlike more conventional ransomware, is not delivered through downloads or phishing emails. Instead, the attackers behind SamSam use tools to identify unpatched servers running Red Hat’s JBoss enterprise products. Once the attackers have successfully gained entry into one of these servers by exploiting vulnerabilities in JBoss, they use other freely available tools and scripts to collect credentials and gather information on networked computers. Then they deploy their ransomware to encrypt files on these systems before demanding a ransom. Gaining entry to an organization through its IT center rather than its endpoints makes this approach scalable and especially unsettling.

SamSam’s methodology is to scour the Internet searching for accessible and vulnerable JBoss application servers, especially ones used by hospitals. It’s not unlike a burglar rattling doorknobs in a neighborhood to find unlocked homes. When SamSam finds an unlocked home (unpatched server), the software infiltrates the system. It is then free to spread across the company’s network by stealing passwords. As it transverses the network and systems, it encrypts files, preventing access until the victims pay the hackers a ransom, typically between $10,000 and $15,000. The low ransom amount has encouraged some victimized organizations to pay the ransom rather than incur the downtime required to wipe and reinitialize their IT systems.

The success of SamSam is due to its effectiveness rather than its sophistication. SamSam can enter and transverse a network without human intervention. Some organizations are learning too late that securing internet-facing services in their data center from attack is just as important as securing endpoints.

The typical steps in a SamSam ransomware attack are:

  1. Attackers gain access to a vulnerable server: attackers exploit vulnerable software or weak/stolen credentials.
  2. Attack spreads via remote access tools: attackers harvest credentials, create SOCKS proxies to tunnel traffic, and abuse RDP to install SamSam on more computers in the network.
  3. Ransomware payload deployed: attackers run batch scripts to execute ransomware on compromised machines.
  4. Ransom demand delivered, requiring payment to decrypt files: demand amounts vary from victim to victim. Relatively low ransom amounts appear to be designed to encourage quick payment decisions.

What all the organizations successfully exploited by SamSam have in common is that they were running unpatched servers that made them vulnerable to SamSam. Some organizations had their endpoints and servers backed up, while others did not. Some of those without backups they could use to recover their systems chose to pay the ransom money.

Timeline of SamSam History and Exploits

Since its appearance in 2016, SamSam has been in the news with many successful incursions into healthcare, business, and government institutions.

March 2016
SamSam appears

SamSam campaign targets vulnerable JBoss servers
Attackers hone in on healthcare organizations specifically, as they’re more likely to have unpatched JBoss machines.

April 2016
SamSam finds new targets

SamSam begins targeting schools and government.
After initial success targeting healthcare, attackers branch out to other sectors.

April 2017
New tactics include RDP

Attackers shift to targeting organizations with exposed RDP connections, and maintain focus on healthcare.
An attack on Erie County Medical Center costs the hospital $10 million over three months of recovery.
Erie County Medical Center attacked by SamSam ransomware virus

January 2018
Municipalities attacked

• Attack on Municipality of Farmington, NM.
• Attack on Hancock Health.
Hancock Regional Hospital notice following SamSam attack
• Attack on Adams Memorial Hospital
• Attack on Allscripts (Electronic Health Records), which includes 180,000 physicians, 2,500 hospitals, and 7.2 million patients’ health records.

February 2018
Attack volume increases

• Attack on Davidson County, NC.
• Attack on Colorado Department of Transportation.
SamSam virus notification

March 2018
SamSam shuts down Atlanta

• Second attack on Colorado Department of Transportation.
• City of Atlanta suffers a devastating attack by SamSam.
The attack has far-reaching impacts — crippling the court system, keeping residents from paying their water bills, limiting vital communications like sewer infrastructure requests, and pushing the Atlanta Police Department to file paper reports.
Atlanta Ransomware outage alert
• SamSam campaign nets $325,000 in 4 weeks.
Infections spike as attackers launch new campaigns. Healthcare and government organizations are once again the primary targets.

How to Defend Against SamSam and Other Ransomware Attacks

The best way to respond to a ransomware attack is to avoid having one in the first place. If you are attacked, making sure your valuable data is backed up and unreachable by ransomware infection will ensure that your downtime and data loss will be minimal or none if you ever suffer an attack.

In our previous post, How to Recover From Ransomware, we listed the ten ways to protect your organization from ransomware.

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. Cybersecurity professionals view data backup and recovery (74% in a recent survey) by far as the most effective solution to respond to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as disconnected external storage drives or the cloud, which prevents them from being accessed by the ransomware.
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to patch early and patch often to close known vulnerabilities in operating systems, server software, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and human engineering aimed at turning victims into abettors.

Please Tell Us About Your Experiences with Ransomware

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please tell us of your experiences in the comments.

The post Ransomware Update: Viruses Targeting Business IT Servers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-only media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly rollback and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.JS to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version. This function invokes the new Lambda function and checks the results. If you’re satisfied with the tests, instruct CodeDeploy to proceed with the rollout via an API call to: codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.
The preTrafficHook function invokes the new version of returnS3Buckets, validates its response, and reports the result back to CodeDeploy:

'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permission to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permission to invoke the specific version of the returnS3Buckets function under test.
  • This function has traffic shifting disabled by setting Enabled: false in its DeploymentPreference.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with its default naming convention: [stack name]-[logical resource ID]-[unique ID]. The function is named with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions whose names start with that prefix.
  • The Timeout attribute is set high enough to allow the validation tests to complete.
  • An environment variable injects the ARN of the newly deployed version of the returnS3Buckets function, so the hook knows which specific version to invoke and validate. (A sketch of this definition follows the list.)

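As a point of reference, a simplified sketch of the hook’s definition might look like the following. The handler, runtime, timeout value, and the scoping of the CodeDeploy permission are illustrative; check the template in the GitHub repository for the real definition.

preTrafficHook:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: CodeDeployHook_preTrafficHook    # prefix required by the CodeDeploy service role
    Handler: preTrafficHook.handler                # illustrative handler name
    Runtime: nodejs8.10                            # illustrative runtime
    Timeout: 25                                    # illustrative; allow enough time for the validation tests
    DeploymentPreference:
      Enabled: false                               # no traffic shifting for the hook itself
    Policies:
      - Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: codedeploy:PutLifecycleEventHookExecutionStatus
            Resource: '*'                          # scope this down to your deployment group in practice
          - Effect: Allow
            Action: lambda:InvokeFunction
            Resource: !Ref returnS3Buckets.Version # only the specific version under test
    Environment:
      Variables:
        NewVersion: !Ref returnS3Buckets.Version   # ARN of the newly published version
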
Deploy the function

Your SAM template is all set and the code is written, so you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI; the equivalent AWS CLI commands are aws cloudformation package and aws cloudformation deploy.

First, package the function. This command uploads your deployment artifacts to the S3 bucket and produces a CloudFormation-deployable template file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1.

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no activity yet, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses.

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.
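
Conceptually, the key statement in that managed policy looks something like the following sketch (this is not the exact policy document, just the relevant permission):

# Allows CodeDeploy to invoke hook functions that use the required name prefix
Effect: Allow
Action:
  - lambda:InvokeFunction
Resource:
  - arn:aws:lambda:*:*:function:CodeDeployHook_*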

An API called safeDeployStack has also been set up. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway invokes the returnS3Buckets function, which returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console.

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics.

The function should progress successfully, invoke version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute).

Verify the traffic shift

During the deployment, verify that the traffic shift has started by running the test periodically. As the deployment shifts toward the new version, a larger percentage of the responses return 9 (the buckets that start with “a”) instead of 51 (all buckets).

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2.

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:
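
For example, using the invoke URL that API Gateway shows for the prod stage (substitute your own API ID and Region):

curl https://[your-api-id].execute-api.[your-region].amazonaws.com/prod/test

The response body should now show the count of buckets whose names begin with “a” (9 in this walkthrough).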

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.