
Top four ways to improve your Security Hub security score

Post Syndicated from Priyank Ghedia original https://aws.amazon.com/blogs/security/top-four-ways-to-improve-your-security-hub-security-score/

AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks across your Amazon Web Services (AWS) accounts and AWS Regions, aggregates alerts, and enables automated remediation. Security Hub is designed to simplify and streamline the management of security-related data from various AWS services and third-party tools. It provides a holistic view of your organization’s security state that you can use to prioritize and respond to security alerts efficiently.

Security Hub assigns a security score to your environment, which is calculated based on passed and failed controls. A control is a safeguard or countermeasure prescribed for an information system or an organization that’s designed to protect the confidentiality, integrity, and availability of the system and to meet a set of defined security requirements. You can use the security score as a mechanism to baseline the accounts. The score is displayed as a percentage rounded up or down to the nearest whole number.

In this blog post, we review the top four mechanisms that you can use to improve your security score, review the five controls in Security Hub that most often fail, and provide recommendations on how to remediate them. This can help you reduce the number of failed controls, thus improving your security score for the accounts.

What is the security score?

Security scores represent the proportion of passed controls to enabled controls. The score is displayed as a percentage rounded to the nearest whole number. It’s a measure of how well your AWS accounts are aligned with security best practices and compliance standards. The security score is dynamic and changes based on the evolving state of your AWS environment. As you address and remediate findings associated with controls, your security score can improve. Similarly, changes in your environment or the introduction of new Security Hub findings will affect the score.

Each check is a point-in-time evaluation of a rule against a single resource that results in a compliance status of PASSED, FAILED, WARNING, or NOT_AVAILABLE. A control is considered passed when the compliance status of every underlying check for its resources is PASSED, or when the FAILED checks have a workflow status of SUPPRESSED. You can view the security score through the Security Hub console summary page—as shown in Figure 1—to quickly gain insights into your security posture. The dashboard provides visual representations and details of specific findings contributing to the score. For more information about how scores are calculated, see Determining security scores.

Figure 1: Security Hub dashboard

How can you improve the security score?

You can improve your security score in four ways:

  • Remediating failed controls: After the resources responsible for failed checks in a control are configured with compliant settings and the check is repeated, Security Hub marks the compliance status of the checks as PASSED and the workflow status as RESOLVED. This increases the number of passed controls, thus improving the score.
  • Suppressing findings associated with failed controls: When calculating the control status, Security Hub ignores findings in the ARCHIVED state as well as findings with a workflow status of SUPPRESSED, which in turn affects security scores. So if you suppress all failed findings for a control, the control status becomes passed.

    If you determine that a Security Hub finding for a resource is an accepted risk, you can manually set the workflow status of the finding to SUPPRESSED from the Security Hub console or by using the BatchUpdateFindings API (see the CLI sketch after this list). Suppression doesn’t stop new findings from being generated, but you can set up an automation rule to suppress all future new and updated findings that meet the filtering criteria.

  • Disabling controls that aren’t relevant: Security Hub provides flexibility by allowing administrators to customize and configure security controls. This includes the ability to disable specific controls or adjust settings to help align with organizational security policies. When a control is disabled, security checks are no longer performed and no additional findings are generated. Existing findings are set to ARCHIVED and the control is excluded from the security score calculations.

    Use Security Hub central configuration with the Security Hub delegated administrator (DA) account to centrally manage Security Hub controls and standards and to view your Security Hub configuration throughout your organization from a single place. You can also deploy these settings to organizational units (OUs).

    Use central configuration in Security Hub to tailor the security controls to help align with your organization’s specific requirements. You can fine-tune your security controls, focus on relevant issues, and improve the accuracy and relevance of your security score. The post Introducing new central configuration capabilities in AWS Security Hub provides an overview of central configuration and its benefits.

    Suppression should be used when you want to tune control findings from specific resources whereas controls should be disabled only when the control is no longer relevant for your AWS environment.

  • Customize parameter values to fine-tune controls: Some Security Hub controls use parameters that affect how the control is evaluated. Typically, these controls are evaluated against the default parameter values that Security Hub defines. However, for a subset of these controls, you can customize the parameter values. When you customize a parameter value for a control, Security Hub starts evaluating the control against the value that you specify. If the resource underlying the control satisfies the custom value, Security Hub generates a PASSED finding.
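
As an example of the suppression mechanism described above, the following is a minimal AWS CLI sketch that sets the workflow status of a single finding to SUPPRESSED with the BatchUpdateFindings API. The finding ID, note text, and Region are placeholders; substitute the values from the finding you want to suppress.

aws securityhub batch-update-findings \
  --finding-identifiers '[{"Id":"<finding-id>","ProductArn":"arn:aws:securityhub:us-east-1::product/aws/securityhub"}]' \
  --workflow Status=SUPPRESSED \
  --note '{"Text":"Accepted risk - documented exception","UpdatedBy":"security-team"}'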

We will use these mechanisms to address the most commonly failed controls in the following sections.

Identifying the most commonly failed controls in Security Hub

You can use the AWS Management Console to identify the most commonly failed controls across your accounts in AWS Organizations:

  1. Sign in to the delegated administrator account and open the Security Hub console.
  2. On the navigation pane, choose Controls.

Here, you will see the status of your controls sorted by the severity of the failed controls. You will also see the associated number of failed checks with the failed controls in the Failed checks column on this page. A check is performed for each resource. If a column says 85 out of 124 for a control, it means 85 resources out of 124 failed the check for that control. You can sort this column in descending order to identify failed controls that have the most resources as shown in Figure 2.

Figure 2: Security Hub control status page
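
If you prefer to identify commonly failed controls programmatically, the following AWS CLI sketch pulls active, failed, unsuppressed control findings from the delegated administrator account and counts them by control ID on the client side. It’s an illustrative approximation of the console view rather than an exact equivalent, and large environments can take a while to paginate.

# Count failed, active, unsuppressed findings per security control ID
aws securityhub get-findings \
  --filters '{
    "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}]
  }' \
  --query 'Findings[].Compliance.SecurityControlId' \
  --output text | tr "\t" "\n" | sort | uniq -c | sort -rn | head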

Addressing the most commonly failed controls

In this section, we address remediation strategies for widely used Security Hub controls that have Critical and High severity and a high failure rate among AWS customers. We review five such controls and provide recommended best practices, default settings for the resource type at deployment, guardrails, and compensating controls where applicable.

AutoScaling.3: Auto Scaling group launch configuration

An Auto Scaling group is an Amazon EC2 Auto Scaling resource that automatically adjusts the number of Amazon Elastic Compute Cloud (Amazon EC2) instances in a fleet based on user-defined policies, making sure that the desired number of instances is available to handle varying levels of application demand. A launch configuration is a blueprint that defines the configuration of the EC2 instances to be launched by the Auto Scaling group. The AutoScaling.3 control checks whether Instance Metadata Service Version 2 (IMDSv2) is enabled on the instances launched by EC2 Auto Scaling groups using launch configurations. The control fails if the Instance Metadata Service (IMDS) version isn’t included in the launch configuration, or if both Instance Metadata Service Version 1 (IMDSv1) and IMDSv2 are included. AutoScaling.3 aligns with best practice SEC06-BP02 Reduce attack surface of the Well-Architected Framework.

The IMDS is a service on Amazon EC2 that provides metadata about EC2 instances, such as instance ID, public IP address, AWS Identity and Access Management (IAM) role information, and user data such as scripts during launch. IMDS also provides credentials for the IAM role attached to the EC2 instance, which can be used by threat actors for privilege escalation. The existing instance metadata service (IMDSv1) is fully secure, and AWS will continue to support it. If your organization strategy involves using IMDSv1, then consider disabling AutoScaling.3 and EC2.8 Security Hub controls. EC2.8 is a similar control, but checks the IMDS configuration for each EC2 instance instead of the launch configuration.

IMDSv2 adds protection for four types of vulnerabilities that could be used to access the IMDS, including misconfigured or open web application firewalls, misconfigured or open reverse proxies, unpatched server-side request forgery (SSRF) vulnerabilities, and misconfigured or open layer 3 firewalls and network address translation. It does so by requiring the use of a session token obtained with a PUT request when requesting instance metadata and using a default response hop limit (TTL) of 1 so the token cannot travel outside the EC2 instance. For more information on protections added by IMDSv2, see Add defense in depth against open firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the EC2 Instance Metadata Service.

The AutoScaling.3 control creates a failed check finding for every Amazon EC2 launch configuration that is out of compliance. An Auto Scaling group is associated with one launch configuration at a time, and you cannot modify a launch configuration after you create it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration with IMDSv2 enabled and then delete the old launch configuration. After you delete the launch configuration that’s out of compliance, Security Hub will automatically update the finding state to ARCHIVED. It’s recommended to use Amazon EC2 launch templates, which are the successor to launch configurations, because you cannot create launch configurations with new EC2 instance types released after December 31, 2022. See Migrate your Auto Scaling groups to launch templates for more information.
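
As a minimal sketch of what a compliant replacement might look like, the following AWS CLI command creates a launch template that requires IMDSv2; the launch template name is illustrative. In practice, you would also carry over the AMI ID, instance type, and other settings from the launch configuration you’re replacing, and then update the Auto Scaling group to use the new launch template.

aws ec2 create-launch-template \
  --launch-template-name imdsv2-required-template \
  --launch-template-data '{"MetadataOptions":{"HttpTokens":"required","HttpPutResponseHopLimit":1}}'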

Amazon has taken a series of steps to make IMDSv2 the default. For example, Amazon Linux 2023 uses IMDSv2 by default for launches. You can also set the default instance metadata version at the account level to IMDSv2 for each Region. When an instance is launched, the instance metadata version is automatically set to the account level value. If you’re using the account-level setting to require the use of IMDSv2 outside of launch configuration, then consider using the central Security Hub configuration to disable AutoScaling.3 for these accounts. See the Sample Security Hub central configuration policy section for an example policy.
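
For example, the following AWS CLI sketch sets the account-level default in a single Region so that new instance launches require IMDSv2; run it in each Region where you launch instances.

aws ec2 modify-instance-metadata-defaults \
  --http-tokens required \
  --region us-east-1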

EC2.18: Security group configuration

AWS security groups act as virtual stateful firewalls for your EC2 instances to control inbound and outbound traffic and should follow the principle of least privilege. In the Well-Architected Framework security pillar recommendation SEC05-BP01 Create network layers, it’s best practice to not use overly permissive or unrestricted (0.0.0.0/0) security groups because they expose resources to misuse and abuse. By default, the EC2.18 control checks whether a security group permits unrestricted incoming TCP traffic on any port except for the allowlisted ports 80 and 443. It also checks if unrestricted UDP traffic is allowed on a port. For example, the check will fail if your security group has an inbound rule with unrestricted traffic to port 22. This control allows custom control parameters that can be used to edit the list of authorized ports for which unrestricted traffic is allowed. If you don’t expect any security groups in your organization to have unrestricted access on any port, then you can edit the control parameters and remove all ports from the allowlist. You can use a central configuration policy as shown in Sample Security Hub central configuration policy to update the parameter across multiple accounts and Regions. Alternatively, you can add authorized ports to the list of ports you want to allowlist for the check to pass.

EC2.18 checks the rules in the security groups in accounts, whether the security groups are in use or not. You can use AWS Firewall Manager to identify and delete unused security groups in your organization using usage audit security group policies. Deleting unused security groups that have failed the checks will change the finding state of associated findings to ARCHIVED and exclude them from security score calculation. Deleting unused resources also aligns with SUS02-BP03 of the sustainability pillar of the Well-Architected Framework. You can create a Firewall Manager usage audit security group policy through the Firewall Manager console using the following steps:

To configure Firewall Manager:

  1. Sign in to the Firewall Manager administrator account and open the Firewall Manager console.
  2. In the navigation pane, select Security policies.
  3. Choose Create policy.
  4. On Choose policy type and Region:
    1. For Region, select the AWS Region the policy is meant for.
    2. For Policy type, select Security group.
    3. For Security group policy type, select Auditing and cleanup of unused and redundant security groups.
    4. Choose Next.
  5. On Describe policy:
    1. Enter a Policy name and description.
    2. For Policy rules, select Security groups within this policy scope must be used by at least one resource.
    3. You can optionally specify how many minutes a security group can exist unused before it’s considered noncompliant, up to 525,600 minutes (365 days). You can use this setting to allow yourself time to associate new security groups with resources.
    4. For Policy action, we recommend starting by selecting Identify resources that don’t comply with the policy rules, but don’t auto remediate. This allows you to assess the effects of your new policy before you apply it. When you’re satisfied that the changes are what you want, edit the policy and change the policy action by selecting Auto remediate any noncompliant resources.
    5. Choose Next.
  6. On Define policy scope:
    1. For AWS accounts this policy applies to, select one of the three options as appropriate.
    2. For Resource type, select Security Group.
    3. For Resources, you can narrow the scope of the policy using tagging, by either including or excluding resources with the tags that you specify. You can use inclusion or exclusion, but not both.
    4. Choose Next.
  7. Review the policy settings to be sure they’re what you want, and then choose Create policy.

Firewall Manager is a Regional service, so these policies must be created in each Region in which you have resources.

You can also set up guardrails for security groups using Firewall Manager policies to remediate new or updated security groups that allow unrestricted access. You can create a Firewall Manager content audit security group policy through the Firewall Manager console:

To create a Firewall Manager security group policy:

  1. Sign in to the Firewall Manager administrator account.
  2. Open the Firewall Manager console.
  3. In the navigation pane, select Security policies.
  4. Choose Create policy.
  5. On Choose policy type and Region:
    1. For Region, select a Region.
    2. For Policy type, select Security group.
    3. For Security group policy type, select Auditing and enforcement of security group rules.
    4. Choose Next.
  6. On Describe policy:
    1. Enter a Policy name and description.
    2. For Policy rule options, select configure managed audit policy rules.
    3. Configure the following options under Policy rules.
      1. For the Security group rules to audit, select Inbound rules from the drop down.
      2. Select Audit overly permissive security group rules.
      3. Select Rule allows all traffic.
    4. For Policy action, we recommend starting by selecting Identify resources that don’t comply with the policy rules, but don’t auto remediate. This allows you to assess the effects of your new policy before you apply it. When you’re satisfied that the changes are what you want, edit the policy and change the policy action by selecting Auto remediate any noncompliant resources.
    5. Choose Next.
  7. On Define policy scope:
    1. For AWS accounts this policy applies to, select one of the three options as appropriate.
    2. For Resource type, select Security Group.
    3. For Resources, you can narrow the scope of the policy using tagging, by either including or excluding resources with the tags that you specify. You can use inclusion or exclusion, but not both.
    4. Choose Next.
  8. Review the policy settings to be sure they’re what you want, and then choose Create policy.

For use cases such as a bastion host where you might have unrestricted inbound access to port 22 (SSH), EC2.18 will fail. A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the internet. In this scenario, you might want to suppress findings associated with the bastion host security groups instead of disabling the control. You can create a Security Hub automation rule in the Security Hub delegated administrator account based on a tag or resource ID to set the workflow status of future findings to SUPPRESSED. Note that an automation rule applies only in the Region in which it’s created. To apply a rule in multiple Regions, the delegated administrator must create the rule in each Region.

To create an automation rule:

  1. Sign in to the delegated administrator account and open the Security Hub console.
  2. In the navigation pane, select Automations, and then choose Create rule.
  3. Enter a Rule Name and Rule Description.
  4. For Rule Type, select Create custom rule.
  5. In the Rule section, provide a unique rule name and a description for your rule.
  6. For Criteria, use the Key, Operator, and Value drop-down menus to select your rule criteria. Use the following fields in the criteria section:
    1. Add key ProductName with operator Equals and enter the value Security Hub.
    2. Add key WorkflowStatus with operator Equals and enter the value NEW.
    3. Add key ComplianceSecurityControlId with operator Equals and enter the value EC2.18.
    4. Add key ResourceId with operator Equals and enter the Amazon Resource Name (ARN) of the bastion host security group as the value.
  7. For Automated action:
    1. Choose the drop down under Workflow Status and select SUPPRESSED.
    2. Under Note, enter text such as EC2.18 exception.
  8. For Rule status, select Enabled.
  9. Choose Create rule.

This automation rule will set the workflow status of all future updated and new findings to SUPPRESSED.
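
If you manage automation rules as code, the following AWS CLI sketch creates an equivalent rule programmatically. The rule name, rule order, account ID, and security group ARN are placeholders for illustration; adjust them to your environment.

aws securityhub create-automation-rule \
  --rule-name "Suppress EC2.18 for bastion host security group" \
  --rule-order 1 \
  --rule-status ENABLED \
  --description "EC2.18 exception for the bastion host security group" \
  --criteria '{
    "ProductName": [{"Value": "Security Hub", "Comparison": "EQUALS"}],
    "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
    "ComplianceSecurityControlId": [{"Value": "EC2.18", "Comparison": "EQUALS"}],
    "ResourceId": [{"Value": "arn:aws:ec2:us-east-1:111122223333:security-group/sg-1234567890abcdef0", "Comparison": "EQUALS"}]
  }' \
  --actions '[{
    "Type": "FINDING_FIELDS_UPDATE",
    "FindingFieldsUpdate": {
      "Workflow": {"Status": "SUPPRESSED"},
      "Note": {"Text": "EC2.18 exception", "UpdatedBy": "sechub-automation"}
    }
  }]'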

IAM.6: Hardware MFA configuration for the root user

When you first create an AWS account, you begin with a single identity that has complete access to the AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account.

The root user has administrator level access to your AWS accounts, which requires that you apply several layers of security controls to protect this account. In this section, we walk you through:

  • When to apply which best practice to secure the root user, including the root user of the Organizations management account.
  • What to do when the root account isn’t required on your Organizations member accounts and what to do when the root user is required.

We recommend using a layered approach and applying multiple best practices to secure your root account across these scenarios.

AWS root user best practices include recommendations from SEC02-BP01, which recommends multi-factor authentication (MFA) for the root user be enabled. IAM.6 checks whether your AWS account is enabled to use a hardware MFA device to sign in with root user credentials. The control fails if MFA isn’t enabled or if any virtual MFA devices are permitted for signing in with root user credentials. A finding is generated for every account that doesn’t meet compliance. To remediate, see General steps for enabling MFA devices, which describes how to set up and use MFA with a root account. Remember that the root account should be used only when absolutely necessary and is only required for a subset of tasks. As a best practice, for other tasks we recommend signing in to your AWS accounts using federation, which provides temporary access keys by assuming an IAM role instead of using long-lived static credentials.

The Organizations management account deploys universal security guardrails, and you can configure additional services that will affect the member accounts in the organization. You should therefore restrict who can sign in to and administer the root user in your management account, which is why you should apply hardware MFA as an added layer of security.

Note: Beginning on May 16, 2024, AWS requires multi-factor authentication (MFA) for the root user of your Organizations management account when accessing the console.

Many customers manage hundreds of AWS accounts across their organization and managing hardware MFA devices for each root account can be a challenge. While it’s a best practice to use MFA, an alternative approach might be necessary. This includes mapping out and identifying the most critical AWS accounts. This analysis should be done carefully—consider if this is a production environment, what type of data is present, and the overall criticality of the workloads running in that account.

This subset of your most critical AWS accounts should be configured with MFA. For other accounts, consider that in most cases the root account isn’t required and you can disable the use of the root account across the Organizations member accounts using Organizations service control policies (SCP). The following is an example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "*",
      "Resource": "*",
      "Effect": "Deny",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::*:root"
          ]
        }
      }
    }
  ]
}
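
Assuming you save the policy above as deny-root.json, the following AWS CLI sketch shows one way to create the SCP and attach it to an organizational unit; the policy name, file name, and IDs are placeholders.

aws organizations create-policy \
  --name DenyRootUserActions \
  --type SERVICE_CONTROL_POLICY \
  --description "Deny all actions taken by the root user in member accounts" \
  --content file://deny-root.json

aws organizations attach-policy \
  --policy-id p-examplepolicyid \
  --target-id ou-exampleouid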

If you’re using AWS Control Tower, use the disallow actions as a root user guardrail. If you’re using an SCP for organizations or the AWS Control Tower guardrail to restrict root use in member accounts, consider disabling the IAM.6 control in those member accounts. However, do not disable IAM.6 in the management account. See the Sample Security Hub central configuration policy section for an example policy.

If root account use is required within a member account and is confirmed as a valid root user task, then perform the following steps:

  1. Complete the root user account recovery steps.
  2. Temporarily move that member account into a different OU that doesn’t include the root restriction SCP policy, limited to the timeframe required to make the necessary changes.
  3. Sign in using the recovered root user password and make the necessary changes.
  4. After the task is complete, move the account back into its original Organizations OU with the root restricted SCP in place.

When you take this approach, we recommend configuring Amazon CloudWatch to alert on root sign-in activity within AWS CloudTrail. Consider the Monitor IAM root user activity solution in the aws-samples GitHub repository to get started. Alternatively, if Amazon GuardDuty is enabled, it will generate the Policy:IAMUser/RootCredentialUsage finding when the root user is used for a task.
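
As a minimal sketch of that alerting, the following Amazon EventBridge event pattern matches root user console sign-ins recorded through CloudTrail; attach it to a rule with a target of your choice, such as an Amazon SNS topic. Console sign-in events for the global sign-in endpoint are delivered in the US East (N. Virginia) Region, so create the rule there unless you use Regional sign-in endpoints.

{
  "detail-type": ["AWS Console Sign In via CloudTrail"],
  "detail": {
    "userIdentity": {
      "type": ["Root"]
    }
  }
}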

Another consideration and best practice is to make sure that all AWS accounts have updated contact information, including the email attached to the root user. This is important for several reasons. For example, you must have access to the email associated with the root user to reset the root user’s password. See how to update the email address associated with the root user. AWS uses account contact information to notify and communicate with the AWS account administrators on several important topics including security, operations, and billing related information. Consider using an email distribution list to make sure these email addresses are mapped to a common internal mailbox restricted to your cloud or security team. See how to update your AWS primary and secondary account contact details.

EC2.2: Default security groups configuration

Each Amazon Virtual Private Cloud (Amazon VPC) comes with a default security group. We recommend that you create security groups for EC2 instances or groups of instances instead of using the default security group. If you don’t specify a security group when you launch an instance, the service associates the instance with the default security group for the VPC. In addition, you cannot delete the default security group, because it’s the security group assigned to an EC2 instance when no other security group is created or assigned.

The default security group allows outbound and inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. EC2.2 checks whether the default security group of a VPC allows inbound or outbound traffic, and the control fails if the security group allows inbound or outbound traffic. This control doesn’t check whether the default security group is in use. A finding is generated for each default VPC security group that’s out of compliance. The default security group doesn’t adhere to least privilege, and therefore the following steps are recommended. If no EC2 instance is attached to the default security group, delete the inbound and outbound rules of the default security group. However, if you’re not certain that the default security group is in use, use the following AWS Command Line Interface (AWS CLI) command across each account and Region. If the command returns a list of EC2 instance IDs, then the default security group is in use by those instances. If it returns an empty list, then the default security group isn’t used in that account. Use the --region option to change Regions.

aws ec2 describe-instances --filters "Name=instance.group-name,Values=default" --query 'Reservations[].Instances[].InstanceId' --region us-east-1

For these instances, replace the default security group with a new security group using similar rules and work with the owners of those EC2 instances to determine a least privilege security group and ruleset that could be applied. After the instances are moved to the replacement security group, you can remove the inbound and outbound rules of the default security group. You can use an AWS Config rule in each account and Region to remove the inbound and outbound rules of the default security group.

To create a rule with auto remediation:

  1. If you haven’t already, set up service role access for automation. After the role is created, copy the ARN of the service role to use in later steps.
  2. Open the AWS Config console.
  3. In the navigation pane, select Rules.
  4. On the Rules page, choose Add rule.
  5. On the Specify rule type page, enter vpc-default-security-group-closed in the search field.

    Note: This rule checks that the default security group of the VPC doesn’t allow inbound or outbound traffic.

  6. On the Configure rule page:
    1. Enter a name and description.
    2. Add tags as needed.
    3. Choose Next.
  7. Review and then choose Save.
  8. Search for the rule by its name on the rules list page and select the rule.
  9. From the Actions dropdown list, choose Manage remediation.
  10. Choose Auto remediation to automatically remediate noncompliant resources
  11. In the Remediation action dropdown, select AWSConfigRemediation-RemoveVPCDefaultSecurityGroupRules document.
  12. Adjust Rate Limits as needed.
  13. Under the Resource ID Parameter dropdown, select GroupId.
  14. Under Parameter, enter the ARN of the automation service role you copied in step 1.
  15. Choose Save.
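
If you prefer to provision the same rule and remediation programmatically, the following AWS CLI sketch is one way to do it. The automation role ARN and retry settings are illustrative placeholders; the managed rule identifier and remediation document match the console steps above.

aws configservice put-config-rule \
  --config-rule '{
    "ConfigRuleName": "vpc-default-security-group-closed",
    "Source": {"Owner": "AWS", "SourceIdentifier": "VPC_DEFAULT_SECURITY_GROUP_CLOSED"}
  }'

aws configservice put-remediation-configurations \
  --remediation-configurations '[{
    "ConfigRuleName": "vpc-default-security-group-closed",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWSConfigRemediation-RemoveVPCDefaultSecurityGroupRules",
    "Automatic": true,
    "MaximumAutomaticAttempts": 3,
    "RetryAttemptSeconds": 60,
    "Parameters": {
      "AutomationAssumeRole": {"StaticValue": {"Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]}},
      "GroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}}
    }
  }]'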

It’s important to verify that changes and configurations are clearly communicated to all users of an environment. We recommend that you take the opportunity to update your company’s central cloud security requirements and governance guidance and notify users in advance of the pending change.

ECS.5: ECS container access configuration

An Amazon Elastic Container Service (Amazon ECS) task definition is a blueprint for running Docker containers within an ECS cluster. It defines various parameters required for launching containers, such as the Docker image, CPU and memory requirements, networking configuration, container dependencies, environment variables, and data volumes. An ECS task definition is to containers what a launch configuration is to EC2 instances. The ECS.5 control checks that the ECS task definition has read-only access to the mounted root filesystem enabled. This control supports defense in depth because it helps prevent containers from making changes to the container’s root file system, helps prevent privilege escalation if a container is compromised, and can improve security and stability. This control fails if the readonlyRootFilesystem parameter doesn’t exist or is set to false in the ECS task definition JSON.

If you’re using the console to create the task definition, then you must select the read-only box for the root file system parameter in the console, as shown in Figure 3. If you are using JSON for the task definition, then the readonlyRootFilesystem parameter must be set to true and supplied with the container definition, or updated, in order for this check to pass. This control creates a failed check finding for every ECS task definition that is out of compliance.

Figure 3: Using the ECS console to set readonlyRootFilesystem to true

Follow the steps in the remediation section of the control user guide to fix the resources identified by the control. Consider using infrastructure as code (IaC) tools such as AWS CloudFormation to define your task definitions as code, with the read-only root filesystem set to true, to help prevent accidental misconfigurations. If you use continuous integration and delivery (CI/CD) to create your container task definitions, then consider adding a check that looks for the existence of the readonlyRootFilesystem parameter in the task definition and that it’s set to true.
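
As a minimal AWS CloudFormation sketch of that approach, the following task definition sets the read-only root file system on its single container; the family name, container name, image, and memory value are placeholders.

Resources:
  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: app-task
      ContainerDefinitions:
        - Name: app
          Image: nginx:latest        # placeholder image
          Memory: 512
          Essential: true
          ReadonlyRootFilesystem: true   # satisfies ECS.5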

If this is expected behavior for certain task definitions, you can use Security Hub automation rules to suppress the findings by matching on the ComplianceSecurityControlID and ResourceId filters in the criteria section.

To create the automation rule:

  1. Sign in to the delegated administrator account and open the Security Hub console.
  2. In the navigation pane, select Automations.
  3. Choose Create rule. For Rule Type, select Create custom rule.
  4. Enter a Rule Name and Rule Description.
  5. In the Rule section, enter a unique rule name and a description for your rule.
  6. For Criteria, use the Key, Operator, and Value drop-down menus to specify your rule criteria. Use the following fields in the criteria section:
    1. Add key ProductName with operator Equals and enter the value Security Hub.
    2. Add key WorkflowStatus with operator Equals and enter the value NEW.
    3. Add key ComplianceSecurityControlId with operator Equals and enter the value ECS.5.
    4. Add key ResourceId with operator Equals and enter the ARN of the ECS task definition as the value.
  7. For Automated action:
    1. Choose the drop-down under Workflow Status and select SUPPRESSED.
    2. Under Note, enter a description such as ECS.5 exception.
  8. For Rule status, select Enabled.
  9. Choose Create rule.

Sample Security Hub central configuration policy

In this section, we cover a sample policy for the controls reviewed in this post using central configuration. To use central configuration, you must integrate Security Hub with Organizations and designate a home Region. The home Region is also your Security Hub aggregation Region, which receives findings, insights, and other data from linked Regions. If you use the Security Hub console, these prerequisites are included in the opt-in workflow for central configuration. Remember that an account or OU can only be associated with one configuration policy at a given time, so that there are no conflicting configurations. The policy should also provide complete specifications of the settings applied to that account. Review the policy considerations document to understand how central configuration policies work, and follow the steps in Start using central configuration to get started.

If you want to disable controls and update parameters as described in this post, then you must create two policies in the Security Hub delegated administrator account home Region. One policy applies to the management account and another policy applies to the member accounts.

First, create a policy to disable IAM.6 and AutoScaling.3 and to update the ports for the EC2.18 control to identify security groups with unrestricted access on those ports. Apply this policy to all member accounts. Use the Exclude organization units or accounts section to enter the account ID of the AWS management account.

To create a policy to disable IAM.6 and AutoScaling.3 and update the ports:

  1. Open the Security Hub console in the Security Hub delegated administrator account home Region.
  2. In the navigation pane, select Configuration and then the Policies tab. Then, choose Create policy. If you already have an existing policy that applies to all member accounts, then select the policy and choose Edit.
    1. For Controls, select Disable specific controls.
    2. For Controls to disable, select IAM.6 and AutoScaling.3.
    3. Select Customize controls parameters.
    4. From the Select a Control dropdown, select EC2.18.
      1. Edit the cell under List of authorized TCP ports, and add ports that are allowlisted for unrestricted access. If no ports should be allowlisted for unrestricted access, then delete the text in the cell.
    5. For Accounts, select All accounts.
    6. Choose Exclude organizational units or accounts and enter the account ID of the management account.
    7. For Policy details, enter a policy name and description.
    8. Choose Next.
  3. On the Review and apply page, review your configuration policy details. Choose Create policy and apply.

Create another policy in the Security Hub delegated administrator account home Region to disable AutoScaling.3 and update the ports for the EC2.18 control to fail the check for security groups with unrestricted access on any port. Apply this policy to the management account. Use the Specific accounts option for the Accounts section and then the Enter organization unit or accounts tab to enter the account ID of the management account.

To disable AutoScaling.3 and update the ports:

  1. Open the AWS Security Hub console in the Security Hub delegated administrator account home Region.
  2. In the navigation pane, select Configuration and the Policies tab.
  3. Choose Create policy. If you already have an existing policy that applies to the management account only, then select the policy and choose Edit.
    1. For Controls, choose Disable specific controls.
    2. For Controls to disable, select AutoScaling.3.
    3. Select Customize controls parameters.
    4. From the Select a Control dropdown, select EC2.18.
      1. Edit the cell under List of authorized TCP ports and add ports that are allowlisted for unrestricted access. If no ports should be allowlisted for unrestricted access, then delete the text in the cell.
    5. For Accounts, select Specific accounts.
    6. Select the Enter Organization units or accounts tab and enter the Account ID of the management account.
    7. For Policy details, enter a policy name and description.
    8. Choose Next.
  4. On the Review and apply page, review your configuration policy details. Choose Create policy and apply.

Conclusion

In this post, we reviewed the importance of the Security Hub security score and the four methods that you can use to improve your score. The methods include remediation of noncompliant resources, managing controls using Security Hub central configuration, suppressing findings using Security Hub automation rules, and using custom parameters to customize controls. You saw ways to address the five most commonly failed controls across Security Hub customers, including remediation strategies and guardrails for each of these controls.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Priyank Ghedia
Priyank is a Senior Solutions Architect focused on threat detection and incident response. Priyank helps customers meet their security visibility and response objectives by building architectures using AWS security services and tools. Before AWS, he spent eight years advising customers on global networking and security operations.

Megan O’Neil
Megan is a Principal Security Solutions Architect for AWS. Megan and her team enable AWS customers to implement sophisticated, scalable, and secure solutions that solve their business challenges.

Securing AI Development in the Cloud: Navigating the Risks and Opportunities

Post Syndicated from Rapid7 original https://blog.rapid7.com/2024/06/05/securing-ai-development-in-the-cloud-navigating-the-risks-and-opportunities/

AI-TRiSM – Trust, Risk and Security Management in the Age of AI

Co-authored by Lara Sunday and Pojan Shahrivar

As artificial intelligence (AI) and machine learning (ML) technologies continue to advance and proliferate, organizations across industries are investing heavily in these transformative capabilities. According to Gartner, by 2027, spending on AI software will grow to $297.9 billion at a compound annual growth rate of 19.1%. Generative AI (GenAI) software spend will rise from 8% of AI software in 2023 to 35% by 2027.

With the promise of enhanced efficiency, personalization, and innovation, organizations are increasingly turning to cloud environments to develop and deploy these powerful AI and ML technologies. However, this rapid innovation also introduces new security risks and challenges that must be addressed proactively to protect valuable data, intellectual property, and maintain the trust of customers and stakeholders.

Benefits of Cloud Environments for AI Development

Cloud platforms offer unparalleled scalability, allowing organizations to easily scale their computing resources up or down to meet the demanding requirements of training and deploying complex AI models.

“The ability to spin up and down resources on-demand has been a game-changer for our AI development efforts,” says Stuart Millar, Principal AI Engineer at Rapid7. “We can quickly provision the necessary compute power during peak training periods, then scale back down to optimize costs when those resources are no longer needed.”

Cloud environments also provide a cost-effective way to develop AI models, with usage-based pricing models that avoid large upfront investments in hardware and infrastructure. Additionally, major cloud providers offer access to cutting-edge AI hardware and pre-built tools and services, such as Amazon SageMaker, Azure Machine Learning, and Google Cloud AI Platform, which can accelerate development and deployment cycles.

Challenges and Risks of Cloud-Based AI Development

While the cloud offers numerous advantages for AI development, it also introduces unique challenges that organizations must navigate. Limited visibility into complex data flows and model updates can create blind spots for security teams, leaving them unable to effectively monitor for potential threats or anomalies.

In its AI Threat Landscape Report, HiddenLayer highlighted that 98% of the companies surveyed identified elements of their AI models as crucial to their business success, and 77% identified breaches of their AI in the past year. Additionally, multi-cloud and hybrid deployments bring monitoring, governance, and reporting challenges, making it difficult to assess AI/ML risk in context across different cloud environments.

New Attack Vectors and Risk Types

Developing AI in the cloud also exposes organizations to new attack vectors and risk types that traditional security tools may not be equipped to detect or mitigate. Some examples include:

Prompt Injection (LLM01): Imagine a large language model used for generating marketing copy. An attacker could craft a special prompt that tricks the model into generating harmful or offensive content, damaging the company’s brand and reputation.

Training Data Poisoning (LLM03, ML02): Adversaries can tamper with training data to compromise the integrity and reliability of cloud-based AI models. In the case of an AI model used for image recognition in a security surveillance system, poisoned training data containing mislabeled images could cause the model to generate incorrect classifications, potentially missing critical threats.

Model Theft (LLM10, ML05): Unauthorized access to proprietary AI models deployed in the cloud poses risks to intellectual property and competitive advantage. If a competitor were to steal a model trained on a company’s sensitive data, they could potentially replicate its functionality and gain valuable insights.

Supply Chain Vulnerabilities (LLM05, ML06): Compromised libraries, datasets, or services used in cloud AI development pipelines can lead to widespread security breaches. A malicious actor might introduce a vulnerability into a widely used open-source library for AI, which could then be exploited to gain access to AI models deployed by multiple organizations.

Developing Best Practices for Securing AI Development

To address these challenges and risks, organizations need to develop and implement best practices and standards tailored to their specific business needs, striking the right balance between enabling innovation and introducing risk.

While guidelines like NCSC Secure AI System Development and The Open Standard for Responsible AI provide a valuable starting point, organizations must also develop their own customized best practices that align with their unique business requirements, risk appetite, and AI/ML use cases. For instance, a financial institution developing AI models for fraud detection might prioritize best practices around data governance and model explainability to ensure compliance with regulations and maintain transparency in decision-making processes.

Key considerations when developing these best practices include:

  • Ensuring secure data handling and governance throughout the AI lifecycle
  • Implementing robust access controls and identity management for AI/ML resources
  • Validating and monitoring AI models for potential biases, vulnerabilities, or anomalies
  • Establishing incident response and remediation processes for AI-specific threats
  • Maintaining transparency and explainability to understand and audit AI model behavior

Rapid7’s Approach to Securing AI Development

“At Rapid7, our InsightCloudSec solution offers real-time visibility into AI/ML resources running across major cloud providers, allowing security teams to continuously monitor for potential risks or misconfigurations,” says Aniket Menon, VP, Product Management. “Visibility is the foundation for effective security in any environment, and that’s especially true in the complex world of AI development. Without a clear view into your AI/ML assets and activities, you’re essentially operating blind, leaving your organization vulnerable to a range of threats.”

Here at Rapid7 our AI TRiSM (Trust, Risk, and Security Management) framework empowers our teams. The framework provides us with confidence not only in our operations but also in driving innovation. In their recent blog outlining the company’s AI principles, Laura Ellis and Sabeen Malik shared how Rapid7 tackles and addresses AI challenges. Centering on transparency, fairness, safety, security, privacy, and accountability, these principles are not just guidelines; they are integral to how Rapid7 builds, deploys, and manages AI systems.

Security and compliance are two key InsightCloudSec capabilities. Compliance Packs are out-of-the-box collections of related Insights focused on industry requirements and standards for all of your resources. Compliance packs may focus on security, costs, governance, or combinations of these across a variety of frameworks, such as HIPAA, PCI DSS, and GDPR.

Last year, Rapid7 launched the Rapid7 AI/ML Security Best Practices compliance pack, which allows for real-time and continuous visibility into AI/ML resources running across your clouds, with support for GenAI services across AWS, Azure, and GCP. To empower you to assess this data in the context of your organizational requirements and priorities, you can then automatically prioritize AI/ML-related risk with Layered Context based on exploitability and potential business impact.

You can also leverage Identity Analysis in InsightCloudSec to collect and present the actions executed by a given user or role within a certain time period. These logged actions are collected and analyzed, providing you with a view across your organization of who can access AI/ML resources and automatically rightsize in accordance with the least privilege access (LPA) concept. This enables you to strategically inform your policies moving forward. Native automation allows you to then act on your assessments to alert on compliance drift, remediate AI/ML risk, and enact prevention mechanisms.

Rapid7’s Continued Dedication to AI Innovation

As an inaugural signer of the CISA Secure by Design Pledge, and through our partnership with Queen’s University Belfast Centre for Secure Information Technologies (CSIT), Rapid7 remains dedicated to collaborating with industry leaders and academic institutions to stay ahead of emerging threats and develop cutting-edge solutions for securing AI development.

As the adoption of AI and ML capabilities continues to accelerate, it’s imperative that organizations have the knowledge and tools to make informed decisions and build with confidence. By implementing robust best practices and leveraging advanced security tools like InsightCloudSec, organizations can harness the power of AI while mitigating the associated risks and ensuring their valuable data and intellectual property remain protected.

To learn more about how Rapid7 can help your organization develop and implement best practices for securing AI development, visit our website to request a demo.


Gartner, Forecast Analysis: Artificial Intelligence Software, 2023-2027, Worldwide, Alys Woodward, et al, 07 November 2023

The art of possible: Three themes from RSA Conference 2024

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/the-art-of-possible-three-themes-from-rsa-conference-2024/

RSA Conference 2024 drew 650 speakers, 600 exhibitors, and thousands of security practitioners from across the globe to the Moscone Center in San Francisco, California from May 6 through 9.

The keynote lineup was diverse, with 33 presentations featuring speakers ranging from WarGames actor Matthew Broderick, to public and private-sector luminaries such as Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly, U.S. Secretary of State Antony Blinken, security technologist Bruce Schneier, and cryptography experts Tal Rabin, Whitfield Diffie, and Adi Shamir.

Topics aligned with this year’s conference theme, “The art of possible,” and focused on actions we can take to revolutionize technology through innovation, while fortifying our defenses against an evolving threat landscape.

This post highlights three themes that caught our attention: artificial intelligence (AI) security, the Secure by Design approach to building products and services, and Chief Information Security Officer (CISO) collaboration.

AI security

Organizations in all industries have started building generative AI applications using large language models (LLMs) and other foundation models (FMs) to enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. So it’s not surprising that AI dominated conversations. Over 100 sessions touched on the topic, and the desire of attendees to understand AI technology and learn how to balance its risks and opportunities was clear.

“Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system.” — CISA

FMs and the applications built around them are often used with highly sensitive business data such as personal data, compliance data, operational data, and financial information to optimize the model’s output. As we explore the advantages of generative AI, protecting highly sensitive data and investments is a top priority. However, many organizations aren’t paying enough attention to security.

A joint generative AI security report released by Amazon Web Services (AWS) and the IBM Institute for Business Value during the conference found that 82% of business leaders view secure and trustworthy AI as essential for their operations, but only 24% are actively securing generative AI models and embedding security processes in AI development. In fact, nearly 70% say innovation takes precedence over security, despite concerns over threats and vulnerabilities (detailed in Figure 1).

Figure 1: Generative AI adoption concerns, Source: IBM Security

Because data and model weights—the numerical values models learn and adjust as they train—are incredibly valuable, organizations need them to stay protected, secure, and private, whether that means restricting access from an organization’s own administrators, customers, or cloud service provider, or protecting data from vulnerabilities in software running in the organization’s own environment.

There is no silver AI-security bullet, but as the report points out, there are proactive steps you can take to start protecting your organization and leveraging AI technology to improve your security posture:

  1. Establish a governance, risk, and compliance (GRC) foundation. Trust in gen AI starts with new security governance models (Figure 2) that integrate and embed GRC capabilities into your AI initiatives, and include policies, processes, and controls that are aligned with your business objectives.

    Figure 2: Updating governance, risk, and compliance models, Source: IBM Security

    In the RSA Conference session AI: Law, Policy, and Common Sense Suggestions to Stay Out of Trouble, digital commerce and gaming attorney Behnam Dayanim highlighted ethical, policy, and legal considerations—including AI-specific regulations—as well as governance structures such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) that can help maximize a successful implementation and minimize potential risk.

  2. Strengthen your security culture. When we think of securing AI, it’s natural to focus on technical measures that can help protect the business. But organizations are made up of people—not technology. Educating employees at all levels of the organization can help avoid preventable harms such as prompt-based risks and unapproved tool use, and foster a resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

    “You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS
  3. Engage with partners. Developing and securing AI solutions requires resources and skills that many organizations lack. Partners can provide you with comprehensive security support—whether that’s informing and advising you about generative AI, or augmenting your delivery and support capabilities. This can help make your engineers and your security controls more effective.

    While many organizations purchase security products or solutions with embedded generative AI capabilities, nearly two-thirds, as detailed in Figure 3, report that their generative AI security capabilities come through some type of partner.

    Figure 3: More than 90% of security gen AI capabilities are coming from third-party products or partners, Source: IBM Security

    Tens of thousands of customers are using AWS, for example, to experiment and move transformative generative AI applications into production. AWS provides AI-powered tools and services, a Generative AI Innovation Center program, and an extensive network of AWS partners that have demonstrated expertise delivering machine learning (ML) and generative AI solutions. These resources can support your teams with hands-on help developing solutions mapped to your requirements, and a broader collection of knowledge they can use to help you make the nuanced decisions required for effective security.

View the joint report and AWS generative AI security resources for additional guidance.

Secure by Design

Building secure software was a popular and related focus at the conference. Insecure design is ranked as the number four critical web application security concern on the Open Web Application Security Project (OWASP) Top 10.

The concept known as Secure by Design is gaining importance in the effort to mitigate vulnerabilities early, minimize risks, and recognize security as a core business requirement. Secure by Design builds off of security models such as Zero Trust, and aims to reduce the burden of cybersecurity and break the cycle of constantly creating and applying updates by developing products that are foundationally secure.

More than 60 technology companies—including AWS—signed CISA’s Secure by Design Pledge during RSA Conference as part of a collaborative push to put security first when designing products and services.

The pledge demonstrates a commitment to making measurable progress towards seven goals within a year:

  • Broaden the use of multi-factor authentication (MFA)
  • Reduce default passwords
  • Enable a significant reduction in the prevalence of one or more vulnerability classes
  • Increase the installation of security patches by customers
  • Publish a vulnerability disclosure policy (VDP)
  • Demonstrate transparency in vulnerability reporting
  • Strengthen the ability of customers to gather evidence of cybersecurity intrusions affecting products

“From day one, we have pioneered secure by design and secure by default practices in the cloud, so AWS is designed to be the most secure place for customers to run their workloads. We are committed to continuing to help organizations around the world elevate their security posture, and we look forward to collaborating with CISA and other stakeholders to further grow and promote security by design and default practices.” — Chris Betz, CISO at AWS

The need for security by design applies to AI like any other software system. To protect users and data, we need to build security into ML and AI with a Secure by Design approach that considers these technologies to be part of a larger software system, and weaves security into the AI pipeline.

Since models tend to have very high privileges and access to data, integrating an AI bill of materials (AI/ML BOM) and Cryptography Bill of Materials (CBOM) into BOM processes can help you catalog security-relevant information, and gain visibility into model components and data sources. Additionally, frameworks and standards such as the AI RMF 1.0, the HITRUST AI Assurance Program, and ISO/IEC 42001 can facilitate the incorporation of trustworthiness considerations into the design, development, and use of AI systems.

CISO collaboration

In the RSA Conference keynote session CISO Confidential: What Separates The Best From The Rest, Trellix CEO Bryan Palma and CISO Harold Rivas noted that there are approximately 32,000 global CISOs today—4 times more than 10 years ago. The challenges they face include staffing shortages, liability concerns, and a rapidly evolving threat landscape. According to research conducted by the Information Systems Security Association (ISSA), nearly half of organizations (46%) report that their cybersecurity team is understaffed, and more than 80% of CISOs recently surveyed by Trellix have experienced an increase in cybersecurity threats over the past six months. When asked what would most improve their organizations’ abilities to defend against these threats, their top answer was industry peers sharing insights and best practices.

Building trusted relationships with peers and technology partners can help you gain the knowledge you need to effectively communicate the story of risk to your board of directors, keep up with technology, and build success as a CISO.

AWS CISO Circles provide a forum for cybersecurity executives from organizations of all sizes and industries to share their challenges, insights, and best practices. CISOs come together in locations around the world to discuss the biggest security topics of the moment. With NDAs in place and the Chatham House Rule in effect, security leaders can feel free to speak their minds, ask questions, and get feedback from peers through candid conversations facilitated by AWS Security leaders.

“When it comes to security, community unlocks possibilities. CISO Circles give us an opportunity to deeply lean into CISOs’ concerns, and the topics that resonate with them. Chatham House Rule gives security leaders the confidence they need to speak openly and honestly with each other, and build a global community of knowledge-sharing and support.” — Clarke Rodgers, Director of Enterprise Strategy at AWS

At RSA Conference, CISO Circle attendees discussed the challenges of adopting generative AI. When asked whether CISOs or the business own generative AI risk for the organization, the consensus was that security can help with policies and recommendations, but the business should own the risk and decisions about how and when to use the technology. Some attendees noted that they took initial responsibility for generative AI risk, before transitioning ownership to an advisory board or committee comprised of leaders from their HR, legal, IT, finance, privacy, and compliance and ethics teams over time. Several CISOs expressed the belief that quickly taking ownership of generative AI risk before shepherding it to the right owner gave them a valuable opportunity to earn trust with their boards and executive peers, and to demonstrate business leadership during a time of uncertainty.

Embrace the art of possible

There are many more RSA Conference highlights on a wide range of additional topics, including post-quantum cryptography developments, identity and access management, data perimeters, threat modeling, cybersecurity budgets, and cyber insurance trends. If there’s one key takeaway, it’s that we should never underestimate what is possible from threat actors or defenders. By harnessing AI’s potential while addressing its risks, building foundationally secure products and services, and developing meaningful collaboration, we can collectively strengthen security and establish cyber resilience.

Join us to learn more about cloud security in the age of generative AI at AWS re:Inforce 2024 June 10–12 in Pennsylvania. Register today with the code SECBLOfnakb to receive a limited time $150 USD discount, while supplies last.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Danielle Ruderman

Danielle Ruderman

Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.

Securing the Next Level: Automated Cloud Defense in Game Development with InsightCloudSec

Post Syndicated from Miklos Csecsi original https://blog.rapid7.com/2024/03/07/securing-the-next-level-automated-cloud-defense-in-game-development-with-insightcloudsec/

Securing the Next Level: Automated Cloud Defense in Game Development with InsightCloudSec

Imagine the following scenario: You’re about to enjoy a strategic duel on chess.com or dive into an intense battle in Fortnite, but as you log in, you find your hard-earned achievements, ranks, and reputation have vanished into thin air. This is not just a hypothetical scenario but a real possibility in today’s cloud gaming landscape, where a single security breach can undo years of dedication and achievement.

Cloud gaming, powered by giants like AWS, is transforming the gaming industry, offering unparalleled accessibility and dynamic gaming experiences. Yet, with this technological leap forward comes an increase in cyber threats. The gaming world has already witnessed significant security breaches, such as the GTA5 code theft and Activision’s consistent data challenges, highlighting the lurking dangers in this digital arena.

In such a scenario, securing cloud-based games isn’t just an additional feature; it’s an absolute necessity. As we examine the intricate world of cloud gaming, the role of comprehensive security solutions becomes increasingly vital. In the subsequent sections, we will explore how Rapid7’s InsightCloudSec can be instrumental in securing cloud infrastructure and CI/CD processes in game development, thereby safeguarding the integrity and continuity of our virtual gaming experiences.

Challenges in Cloud-Based Game Development

Picture this: You’re a game developer, immersed in creating the next big title in cloud gaming. Your team is buzzing with creativity, coding, and testing. But then, out of the blue, you’re hit by a cyberattack, much like the one that rocked CD Projekt Red in 2021. Imagine the chaos – months of hard work (e.g. Cyberpunk 2077 or The Witcher 3) locked up by ransomware, with all sorts of confidential data floating in the wrong hands. This scenario is far from fiction in today’s digital gaming landscape.

What Does This Kind of Attack Really Mean for a Game Development Team?

The Network Weak Spot: It’s like leaving the back door open while you focus on the front; hackers can sneak in through network gaps we never knew existed. That’s what might have happened with CD Projekt Red. A more fortified network could have been their digital moat.

When Data Gets Held Hostage: It’s one thing to secure your castle, but what about safeguarding the treasures inside? The CD Projekt Red incident showed us how vital it is to keep our game codes and internal documents under lock and key, digitally speaking.

A Safety Net Missing: Imagine if CD Projekt Red had a robust backup system. Even after the attack, they could have bounced back quicker, minimizing the damage. It’s like having a safety net when you’re walking a tightrope. You hope you won’t need it, but you’ll be glad it’s there if you do.

This is where a solution like Rapid7’s InsightCloudSec comes into play. It’s not just about building higher walls; it’s about smarter, more responsive defense systems. Think of it as having a digital watchdog that’s always on guard, sniffing out threats, and barking alarms at the first sign of trouble.

With tools that watch over your cloud infrastructure, monitor every digital move in real time, and keep your compliance game strong, you’re not just creating games; you’re also playing the ultimate game of digital security – and winning.

In the realm of cloud-based game development, mastering AWS services’ security nuances transcends mere technical skill – it’s akin to painting a masterpiece. Let’s embark on a journey through the essential AWS services like EC2, S3, Lambda, CloudFront, and RDS, with a keen focus on their security features – our guardians in the digital expanse.

Consider Amazon EC2 as the infrastructure's backbone, hosting the very servers that breathe life into games. Here, Security Groups act as discerning gatekeepers, meticulously managing who gets in and out. They're not just gatekeepers but stateful ones: return traffic for connections they've already allowed flows back automatically, ensuring a seamless yet secure flow of traffic.
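As a minimal sketch of that gatekeeping, assuming Python with boto3 and placeholder values for the VPC, port, and client network range, a security group for the game fleet might admit only the game's own traffic:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group for the game servers (VPC ID is a placeholder).
response = ec2.create_security_group(
    GroupName="game-servers",
    Description="Ingress restricted to the game port only",
    VpcId="vpc-0123456789abcdef0",
)
group_id = response["GroupId"]

# Allow only the game's UDP port from a known client range;
# every other inbound connection is implicitly denied.
ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[
        {
            "IpProtocol": "udp",
            "FromPort": 7777,
            "ToPort": 7777,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Game clients"}],
        }
    ],
)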

Amazon S3 stands as our digital vault, safeguarding data with precision-crafted bucket policies. These policies aren’t just rules; they’re declarations of trust, dictating who can glimpse or alter the stored treasures. History is littered with tales of those who faltered, so precision here is paramount.
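One way to keep that vault sealed, sketched here with boto3 and a placeholder bucket name, is to switch on all four S3 Block Public Access settings for the bucket that holds builds and player data:

import boto3

s3 = boto3.client("s3")

# Block every form of public access on the bucket that stores
# game builds and player data (bucket name is a placeholder).
s3.put_public_access_block(
    Bucket="example-game-assets",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)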

Lambda functions emerge as the silent virtuosos of serverless architecture, empowering game backends with their scalable might. Yet, their power is wielded judiciously, guided by the principle of least privilege through meticulously assigned roles and permissions, minimizing the shadow of vulnerability.
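A least-privilege role for such a function might look like the following sketch, which attaches an inline policy that permits reads and writes to a single DynamoDB table and nothing else; the role name, table, and account ID are illustrative placeholders:

import json
import boto3

iam = boto3.client("iam")

# Inline policy scoped to one DynamoDB table (role name, table, and
# account ID are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/PlayerProfiles",
        }
    ],
}

iam.put_role_policy(
    RoleName="game-backend-lambda-role",
    PolicyName="player-profiles-least-privilege",
    PolicyDocument=json.dumps(policy),
)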

Amazon CloudFront, our swift courier, ensures game content flies across the globe securely and at breakneck speed. Coupled with AWS Shield (Advanced), it stands as a bulwark against DDoS onslaughts, guaranteeing that game delivery remains both rapid and impregnable.

Amazon RDS, the fortress for player data, automates the mundane yet crucial tasks – backups, patches, scaling – freeing developers to craft experiences. It whispers secrets only to those meant to hear, guarding data with robust encryption, both at rest and in transit.
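As a hedged sketch of what that looks like in practice — the identifier, engine, class, and sizes below are assumptions to adapt — a player-data instance can be launched with encryption at rest and automated backups from the start:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Launch a player-data database with encryption at rest and automated
# backups enabled (identifier, class, and sizes are placeholders).
rds.create_db_instance(
    DBInstanceIdentifier="player-data-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="gameadmin",
    ManageMasterUserPassword=True,   # hand the master credential to Secrets Manager
    StorageEncrypted=True,           # encrypt storage with the default KMS key
    BackupRetentionPeriod=7,         # keep daily automated backups for a week
)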

Visibility and vigilance form the bedrock of our security ethos. With tools like AWS CloudTrail and CloudWatch, our gaze extends across every corner of our domain, ever watchful for anomalies, ready to act with precision and alacrity.
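To make that vigilance concrete, here is a small sketch — the log group, namespace, and SNS topic names are placeholders — that turns CloudTrail's record of unauthorized API calls into a CloudWatch alarm:

import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Surface unauthorized API calls recorded by CloudTrail as a metric
# (log group, namespace, and SNS topic are placeholders).
logs.put_metric_filter(
    logGroupName="CloudTrail/game-dev-account",
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[
        {
            "metricName": "UnauthorizedAPICalls",
            "metricNamespace": "GameDevSecurity",
            "metricValue": "1",
        }
    ],
)

# Raise an alarm as soon as a single unauthorized call appears.
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="GameDevSecurity",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)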

Encryption serves as our silent sentinel, a protective veil over data, whether nestled in S3’s embrace or traversing the vastness to and from EC2 and RDS. It’s our unwavering shield against the curious and the malevolent alike.

In weaving the security measures of these AWS services into the fabric of game development, we engage not in mere procedure but in the creation of a secure tapestry that envelops every facet of the development journey. In the vibrant, ever-evolving landscape of game creation, fortifying our cloud infrastructure with a security-first mindset is not just a technical endeavor – it’s a strategic masterpiece, ensuring our games are not only a source of joy but bastions of privacy and security in the cloud.

Automated Cloud Security with InsightCloudSec

When it comes to deploying a game in the cloud, understanding and implementing automated security is paramount. This is where Rapid7’s InsightCloudSec takes center stage, revolutionizing how game developers secure their cloud environments with a focus on automation and real-time monitoring.

Data Harvesting Strategies

InsightCloudSec differentiates itself through its innovative approach to data collection and analysis, employing two primary methods: API harvesting and Event Driven Harvesting (EDH). Initially, InsightCloudSec utilizes the API method, where it directly calls cloud provider APIs to gather essential platform information. This process enables InsightCloudSec to populate its platform with critical data, which is then unified into a cohesive dataset. For example, disparate storage solutions from AWS, Azure, and GCP are consolidated under a generic “Storage” category, while compute instances are unified as “Instances.” This normalization allows for the creation of universal compliance packs that can be applied across multiple cloud environments, enhancing the platform’s efficiency and coverage.

However, the real game-changer is Rapid7’s implementation of EDH. Unlike the traditional API pull method, EDH leverages serverless functions within the customer’s cloud environment to ingest security event data and configuration changes in real-time. This data is then pushed to the InsightCloudSec platform, significantly reducing costs and increasing the speed of data acquisition. For AWS environments, this means event information can be updated in near real-time, within 60 seconds, and within 2-3 minutes for Azure and GCP. This rapid update capability is a stark contrast to the hourly or daily updates provided by other cloud security platforms, setting InsightCloudSec apart as a leader in real-time cloud security monitoring.
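Rapid7's EDH implementation is proprietary, but the push-rather-than-poll pattern it describes can be illustrated with a hypothetical forwarder: an EventBridge rule routes configuration-change events to a small Lambda function that posts them to a collector endpoint. The endpoint, header, and event shape below are assumptions for illustration only, not the actual InsightCloudSec interface.

# Hypothetical event-driven forwarder: EventBridge invokes this Lambda with a
# configuration-change event, and the function pushes it to a collector
# endpoint instead of waiting for a periodic API poll. The URL and header
# are placeholders, not the real InsightCloudSec ingest API.
import json
import os
import urllib.request

INGEST_URL = os.environ.get("INGEST_URL", "https://collector.example.com/events")
API_TOKEN = os.environ.get("API_TOKEN", "placeholder-token")

def handler(event, context):
    payload = json.dumps(event).encode("utf-8")
    request = urllib.request.Request(
        INGEST_URL,
        data=payload,
        headers={"Content-Type": "application/json", "X-Api-Key": API_TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return {"status": response.status}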

Automated Remediation with InsightCloudSec Bots

The integration of near-real-time event information through Event Driven Harvesting (EDH) with InsightCloudSec's advanced bot automation features equips developers with a formidable toolset for safeguarding cloud environments. This unique combination not only flags vulnerable configurations but also facilitates automatic remediation within minutes, a critical capability for maintaining a pristine cloud ecosystem. InsightCloudSec's bots go beyond mere detection; they proactively manage misconfigurations and vulnerabilities across virtual machines and containers, ensuring the cloud space is both secure and compliant.

The versatility of these bots is remarkable. Developers have the flexibility to define the scope of the bot’s actions, allowing changes to be applied across one or multiple accounts. This granular control ensures that automated security measures are aligned with the specific needs and architecture of the cloud environment.

Moreover, the timing of these interventions can be finely tuned. Whether responding to a set schedule or reacting to specific events – such as the creation, modification, or deletion of resources – the bots are adept at addressing security concerns at the most opportune moments. This responsiveness is especially beneficial in dynamic cloud environments where changes are frequent and the security landscape is constantly evolving.

The actions undertaken by InsightCloudSec’s bots are diverse and impactful. According to the extensive list of sample bots provided by Rapid7, these automated guardians can, for example:

  • Automatically tag resources lacking proper identification, ensuring that all elements within the cloud are categorized and easily manageable
  • Enforce compliance by identifying and rectifying resources that do not adhere to established security policies, such as unencrypted databases or improperly configured networks
  • Remediate exposed resources by adjusting security group settings to prevent unauthorized access, a crucial step in safeguarding sensitive data
  • Monitor and manage excessive permissions, scaling back unnecessary access rights to adhere to the principle of least privilege, thereby reducing the risk of internal and external threats
  • And much more…

This automation, powered by InsightCloudSec, transforms cloud security from a reactive task to a proactive, streamlined process.

By harnessing the power of EDH for real-time data harvesting and leveraging the sophisticated capabilities of bots for immediate action, developers can ensure that their cloud environments are not just reactively protected but are also preemptively fortified against potential vulnerabilities and misconfigurations. This shift towards automated, intelligent cloud security management empowers developers to focus on innovation and development, confident in the knowledge that their infrastructure is secure, compliant, and optimized for the challenges of modern cloud computing.

Infrastructure as Code (IaC) Enhanced: Introducing mimInsightCloudSec

In the dynamic arena of cloud security, particularly in the bustling sphere of game development, the wisdom of “an ounce of prevention is worth a pound of cure” holds unprecedented significance. This is where the role of Infrastructure as Code (IaC) becomes pivotal, and Rapid7’s innovative tool, mimInsightCloudSec, elevates this approach to new heights.

mimInsightCloudSec, a cutting-edge component of the InsightCloudSec platform, is specifically designed to integrate seamlessly into any development pipeline, whether you prefer working with executable binaries or thrive in a containerized ecosystem. Its versatility allows it to be a perfect fit for various deployment strategies, making it an indispensable tool for developers aiming to embed security directly into their infrastructure deployment process.

The primary goal of mimInsightCloudSec is to identify vulnerabilities before the infrastructure is even created, thus embodying the proactive stance on security. This foresight is crucial in the realm of game development, where the stakes are high, and the digital landscape is constantly shifting. By catching vulnerabilities at this nascent stage, developers can ensure that their games are built on a foundation of security, devoid of the common pitfalls that could jeopardize their work in the future.

Figure 2: Shift Left – Infrastructure as Code (IaC) Security

Upon completion of its analysis, mimInsightCloudSec presents its findings in a variety of formats suitable for any team’s needs, including HTML, SARIF, and XML. This flexibility ensures that the results are not just comprehensive but also accessible, allowing teams to swiftly understand and address any identified issues. Moreover, these results are pushed to the InsightCloudSec platform, where they contribute to a broader overview of the security posture, offering actionable insights into potential misconfigurations.

But the capabilities of the InsightCloudSec platform extend even further. Within this sophisticated environment, developers can craft custom configurations, tailoring the security checks to fit the unique requirements of their projects. This feature is particularly valuable for teams looking to go beyond standard security measures, aiming instead for a level of infrastructure hardening that is both rigorous and bespoke. These custom configurations empower developers to establish a static level of security that is robust, nuanced, and perfectly aligned with the specific needs of their game-development projects.

By leveraging mimInsightCloudSec within the InsightCloudSec ecosystem, game developers not only can anticipate and mitigate vulnerabilities before they manifest but also refine their cloud infrastructure with precision-tailored security measures. This proactive and customized approach ensures that the gaming experiences they craft are not only immersive and engaging but also built on a secure, resilient digital foundation.

Figure 3: Misconfigurations and recommended remediations in the InsightCloudSec platform

In summary, Rapid7’s InsightCloudSec offers a comprehensive and automated approach to cloud security, crucial for the dynamic environment of game development. By leveraging both API harvesting and innovative Event Driven Harvesting – along with robust support for Infrastructure as Code – InsightCloudSec ensures that game developers can focus on what they do best: creating engaging and immersive gaming experiences with the knowledge that their cloud infrastructure is secure, compliant, and optimized for performance.

In a forthcoming blog post, we’ll explore the unique security challenges that arise when operating a game in the cloud. We’ll also demonstrate how InsightCloudSec can offer automated solutions to effortlessly maintain a robust security posture.

How to develop an Amazon Security Lake POC

Post Syndicated from Anna McAbee original https://aws.amazon.com/blogs/security/how-to-develop-an-amazon-security-lake-poc/

You can use Amazon Security Lake to simplify log data collection and retention for Amazon Web Services (AWS) and non-AWS data sources. To get the most out of your implementation, you need proper planning.

In this post, we will show you how to plan and implement a proof of concept (POC) for Security Lake to help you determine the functionality and value of Security Lake in your environment, so that your team can confidently design and implement in production. We will walk you through the following steps:

  1. Understand the functionality and value of Security Lake
  2. Determine success criteria for the POC
  3. Define your Security Lake configuration
  4. Prepare for deployment
  5. Enable Security Lake
  6. Validate deployment

Understand the functionality of Security Lake

Figure 1 summarizes the main features of Security Lake and the context of how to use it:

Figure 1: Overview of Security Lake functionality

Figure 1: Overview of Security Lake functionality

As shown in the figure, Security Lake ingests and normalizes logs from data sources such as AWS services, AWS Partner sources, and custom sources. Security Lake also manages the lifecycle, orchestration, and subscribers. Subscribers can be AWS services, such as Amazon Athena, or AWS Partner subscribers.

There are four primary functions that Security Lake provides:

  • Centralize visibility to your data from AWS environments, SaaS providers, on-premises, and other cloud data sources — You can collect log sources from AWS services such as AWS CloudTrail management events, Amazon Simple Storage Service (Amazon S3) data events, AWS Lambda data events, Amazon Route 53 Resolver logs, VPC Flow Logs, and AWS Security Hub findings, in addition to log sources from on-premises, other cloud services, SaaS applications, and custom sources. Security Lake automatically aggregates the security data across AWS Regions and accounts.
  • Normalize your security data to an open standard — Security Lake normalizes log sources in a common schema, the Open Cybersecurity Schema Framework (OCSF), and stores them in compressed Apache Parquet files.
  • Use your preferred analytics tools to analyze your security data — You can use AWS tools, such as Athena and Amazon OpenSearch Service, or you can utilize external security tools to analyze the data in Security Lake.
  • Optimize and manage your security data for more efficient storage and query — Security Lake manages the lifecycle of your data with customizable retention settings with automated storage tiering to help provide more cost-effective storage.

Determine success criteria

By establishing success criteria, you can assess whether Security Lake has helped address the challenges that you are facing. Some example success criteria include:

  • I need to centrally set up and store AWS logs across my organization in AWS Organizations for multiple log sources.
  • I need to more efficiently collect VPC Flow Logs in my organization and analyze them in my security information and event management (SIEM) solution.
  • I want to use OpenSearch Service to replace my on-premises SIEM.
  • I want to collect AWS log sources and custom sources for machine learning with Amazon SageMaker.
  • I need to establish a dashboard in Amazon QuickSight to visualize my Security Hub findings and custom log source data.

Review your success criteria to make sure that your goals are realistic given your timeframe and potential constraints that are specific to your organization. For example, do you have full control over the creation of AWS services that are deployed in an organization? Do you have resources that can dedicate time to implement and test? Is this time convenient for relevant stakeholders to evaluate the service?

The timeframe of your POC will depend on your answers to these questions.

Important: Security Lake has a 15-day free trial per account that you use from the time that you enable Security Lake. This is the best way to estimate the costs for each Region throughout the trial, which is an important consideration when you configure your POC.

Define your Security Lake configuration

After you establish your success criteria, you should define your desired Security Lake configuration. Some important decisions include the following:

  • Determine AWS log sources — Decide which AWS log sources to collect. For information about the available options, see Collecting data from AWS services.
  • Determine third-party log sources — Decide if you want to include non-AWS service logs as sources in your POC. For more information about your options, see Third-party integrations with Security Lake; the integrations listed as “Source” can send logs to Security Lake.

    Note: You can add third-party integrations after the POC or in a second phase of the POC. Pre-planning will be required to make sure that you can get these set up during the 15-day free trial. Third-party integrations usually take more time to set up than AWS service logs.

  • Select a delegated administrator – Identify which account will serve as the delegated administrator. Make sure that you have the appropriate permissions from the organization admin account to identify and enable the account that will be your Security Lake delegated administrator. This account will be the location for the S3 buckets with your security data and where you centrally configure Security Lake. The AWS Security Reference Architecture (AWS SRA) recommends that you use the AWS logging account for this purpose. In addition, make sure to review Important considerations for delegated Security Lake administrators.
  • Select accounts in scope — Define which accounts to collect data from. To get the most realistic estimate of the cost of Security Lake, enable all accounts across your organization during the free trial.
  • Determine analytics tool — Determine if you want to use native AWS analytics tools, such as Athena and OpenSearch Service, or an existing SIEM, where the SIEM is a subscriber to Security Lake.
  • Define log retention and Regions — Define your log retention requirements and Regional restrictions or considerations.

Prepare for deployment

After you determine your success criteria and your Security Lake configuration, you should have an idea of your stakeholders, desired state, and timeframe. Now you need to prepare for deployment. In this step, you should complete as much as possible before you deploy Security Lake. The following are some steps to take:

  • Create a project plan and timeline so that everyone involved understands what success looks like and what the scope and timeline are.
  • Define the relevant stakeholders and consumers of the Security Lake data. Some common stakeholders include security operations center (SOC) analysts, incident responders, security engineers, cloud engineers, finance, and others.
  • Define who is responsible, accountable, consulted, and informed during the deployment. Make sure that team members understand their roles.
  • Make sure that you have access in your management account to designate the delegated administrator. For further details, see IAM permissions required to designate the delegated administrator.
  • Consider other technical prerequisites that you need to accomplish. For example, if you need roles in addition to what Security Lake creates for custom extract, transform, and load (ETL) pipelines for custom sources, can you work with the team in charge of that process before the POC?

Enable Security Lake

The next step is to enable Security Lake in your environment and configure your sources and subscribers.

  1. Deploy Security Lake across the Regions, accounts, and AWS log sources that you previously defined.
  2. Configure custom sources that are in scope for your POC.
  3. Configure analytics tools in scope for your POC.

Validate deployment

The final step is to confirm that you have configured Security Lake and additional components, validate that everything is working as intended, and evaluate the solution against your success criteria.

  • Validate log collection — Verify that you are collecting the log sources that you configured. To do this, check the S3 buckets in the delegated administrator account for the logs.
  • Validate analytics tool — Verify that you can analyze the log sources in your analytics tool of choice. If you don’t want to configure additional analytics tooling, you can use Athena, which is configured when you set up Security Lake. For sample Athena queries, see Amazon Security Lake Example Queries on GitHub and Security Lake queries in the documentation. A minimal query sketch follows this list.
  • Obtain a cost estimate — In the Security Lake console, you can review a usage page to verify that the cost of Security Lake in your environment aligns with your expectations and budgets.
  • Assess success criteria — Determine if you achieved the success criteria that you defined at the beginning of the project.
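The following boto3 sketch shows what that quick Athena validation can look like. Security Lake generates Region-specific Glue database and table names, so the database, table, and results bucket below are assumptions to replace with the names from your own deployment.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Database, table, and output location are assumptions; check the names that
# Security Lake created in your Glue data catalog.
DATABASE = "amazon_security_lake_glue_db_us_east_1"
QUERY = (
    "SELECT time, api.operation, actor.user.uid "
    "FROM amazon_security_lake_table_us_east_1_cloud_trail_mgmt_2_0 LIMIT 10"
)

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/security-lake-poc/"},
)
query_id = execution["QueryExecutionId"]

# Wait for the query to finish, then print a few rows to confirm that logs
# are flowing into the data lake.
state = "RUNNING"
while state in ("QUEUED", "RUNNING"):
    time.sleep(2)
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[:5]:
        print(row["Data"])
else:
    print(f"Query finished with state {state}")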

Next steps

Next steps will largely depend on whether you decide to move forward with Security Lake.

  • Determine if you have the approval and budget to use Security Lake.
  • Expand to other data sources that can help you provide more security outcomes for your business.
  • Configure S3 lifecycle policies to efficiently store logs long term based on your requirements.
  • Let other teams know that they can subscribe to Security Lake to use the log data for their own purposes. For example, a development team that gets access to CloudTrail through Security Lake can analyze the logs to understand the permissions needed for an application.

Conclusion

In this blog post, we showed you how to plan and implement a Security Lake POC. You learned how to do so through phases, including defining success criteria, configuring Security Lake, and validating that Security Lake meets your business needs.

This guide will help you, as a customer, run a successful proof of value (POV) with Security Lake by assessing its value and the factors to consider when deciding whether to implement its current features.

Further resources

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Anna McAbee

Anna McAbee

Anna is a Security Specialist Solutions Architect focused on threat detection and incident response at AWS. Before AWS, she worked as an AWS customer in financial services on both the offensive and defensive sides of security. Outside of work, Anna enjoys cheering on the Florida Gators football team, wine tasting, and traveling the world.

Author

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Marc Luescher

Marc Luescher

Marc is a Senior Solutions Architect helping enterprise customers be successful, focusing strongly on threat detection, incident response, and data protection. His background is in networking, security, and observability. Previously, he worked in technical architecture and security hands-on positions within the healthcare sector as an AWS customer. Outside of work, Marc enjoys his 3 dogs, 4 cats, and 20+ chickens.

Expanded Coverage and New Attack Path Visualizations Help Security Teams Prioritize Cloud Risk and Understand Blast Radius

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/12/19/expanded-coverage-and-new-attack-path-visualizations-help-security-teams-prioritize-cloud-risk-and-understand-blast-radius/

Expanded Coverage and New Attack Path Visualizations Help Security Teams Prioritize Cloud Risk and Understand Blast Radius

Cloud environments differ in a number of ways from more traditional on-prem environments. From the immense scale and compounding complexity to the rate of change, the cloud creates a host of challenges for security teams to navigate and grapple with. By definition, anything running in the cloud has the potential to be made publicly available, either directly or indirectly. The interconnected nature of these environments is such that when one account, resource, or service is compromised, it can be fairly easy for a bad actor to move laterally across your environment and/or grant themselves the permissions to wreak havoc. These avenues for lateral movement or privilege escalation are often referred to as attack paths.

Having a solution in place that can clearly and dynamically detect and depict these attack paths is critical to helping teams not only understand where risks exist across their environment but arguably more importantly how they are most likely to be exploited and what that means for an organization – particularly with respect to protecting high-value assets.

Detect and Remediate Attack Paths With InsightCloudSec

Attack Path Analysis in InsightCloudSec enables Rapid7 customers to see their cloud environments from the perspective of an attacker. It visualizes the various ways an attacker could gain access, move between resources, and compromise the cloud environment. Attack Paths are high fidelity signals in our risk prioritization model that focuses on identifying toxic combinations that lead to real business impact.

Since Rapid7 initially launched Attack Path Analysis, we’ve continued to roll out incremental updates to the feature, primarily in the form of expanded attack path coverage across each of the major cloud service providers (CSPs). In our most recent InsightCloudSec release (12.12.2023), we’ve continued this momentum, announcing additional attack paths as well as some exciting updates around how we visualize risk across paths and the potential blast radius should a compromised resource within an existing attack path be exploited. In this post, we’ll dive into an example of one of our recently added attack paths for Microsoft Azure along with a bit more detail about the new risk visualizations. So with that, let’s jump right in.

Expanding Coverage With New Attack Paths

First, on the coverage side of things, we’ve added seven new paths in recent releases across AWS and Azure. Our AWS coverage was extended to support ECS across all of our AWS Attack Paths, and we also introduced three new Azure Attack Paths. In the interest of brevity, we won’t cover each of them, but we do have an ever-developing list of supported attack paths you can access here on the docs page. As an example, however, let’s dive into one of the new paths we released for Azure, which identifies the presence of attack paths targeting publicly exposed instances that also have attached privileged roles.


This type of attack path is concerning for a couple of reasons: First and foremost, an attacker could use the publicly exposed instance as an inroad to your cloud environment due to the fact that it’s publicly accessible, gaining access to sensitive data on the resource itself or accessing data the resource in question has indirect access to. Secondly, since the attached role is capable of escalating privileges, an attacker could then leverage the resource to assign themselves admin permissions which could in turn be used to open up new attack vectors.

Because this could have wide-reaching ramifications should it be exploited, we’ve assigned this a critical severity. That means we’ll want to work to resolve this as fast as possible any time this path shows up across our cloud environments, maybe even automating the process of closing down public access or adjusting the resource permissions to limit the potential for lateral movement or privilege escalation. Speaking of paths with widespread impact should they be exploited, that brings me to some other exciting updates we’ve rolled out to Attack Path Analysis.

Clearly Visualizing Risk Severity and Potential Blast Radius

As I mentioned earlier, along with expanded coverage, we’ve also updated Attack Path Analysis to make it clearer for users where your riskiest assets lie across a given attack path and to clearly show the potential blast radius of an exploitation.

To make it easier to understand the overall riskiness of an attack path and where its choke points are, we’ve added a new security view that visualizes the risk of each resource along a given path. This new view makes it very easy for security teams to immediately understand which specific resources present the highest risk and where they should be focusing their remediation efforts to block potential attackers in their tracks.


In addition to this new security-focused view, we’ve also extended Attack Path Analysis to show a potential blast radius by displaying a graph-based topology map that helps clearly outline the various ways resources across your environment – and specifically within an attack path – interconnect with one another.

This topology map not only makes it easier for security teams to quickly hone in on what needs their attention first during an investigation, but also where a bad actor could move next. Additionally, this view helps security teams and leaders in communicating risk across the organization, particularly when engaging with non-technical stakeholders that find it difficult to understand why exactly a compromised resource presents a potentially larger risk to the business.

We will continue to expand on our existing Attack Path Analysis capabilities in the future, so be sure to keep an eye out for additional paths being added in the coming months as well as a continued effort to enable security teams to more quickly analyze cloud risk with the context needed to effectively detect, communicate, prioritize, and respond.

NIST SP 800-53 Rev. 5 Updates: What You Need to Know About The Most Recent Patch Release (5.1.1)

Post Syndicated from Lara Sunday original https://blog.rapid7.com/2023/12/14/nist-sp-800-53-rev-5-updates-what-you-need-to-know-about-the-most-recent-patch-release-5-1-1/

NIST SP 800-53 Rev. 5 Updates: What You Need to Know About The Most Recent Patch Release (5.1.1)

On November 6th, the National Institute of Standards and Technology (NIST) issued an update to SP 800-53, a NIST-curated catalog of controls that organizations can implement to effectively manage security and privacy risk. In this blog we’ll cover the new and updated controls within patch release 5.1.1, as well as review how Rapid7 InsightCloudSec helps security teams implement and continuously enforce them across their organizations. Let’s dive right in.

Updates to NIST SP 800-53 Compliance Pack: What You Need to Know About Revision 5.1.1

Unlike the large revision that occurred a few years back when Revision 5 was released – which brought with it nearly 270 control updates in aggregate – this update doesn’t have quite such far-reaching implications. That said, there are a few changes to be aware of. Release 5.1.1 added one new control with three supporting control enhancements, along with some minor grammar and formatting changes to other existing controls. Organizations are not mandated to implement the new control and can defer implementation until SP 800-53 Release 6.0.0 is issued; however, there is no defined timeline for when 6.0.0 will be released.

While there is no mandate at this time, the team here at Rapid7 generally advises our customers to adopt new patch releases immediately to ensure alignment with the most up-to-date best practices and that your team is covered for emerging attack vectors. In this case, we recommend adopting 5.1.1 primarily to ensure you’re effectively implementing encryption and authentication controls across your environment.

The newly-added control is Identification and Authentication (or IA-13) which states that organizations should “Employ identity providers and authorization servers to manage user, device, and non-person entity (NPE) identities, attributes, and access rights supporting authentication and authorization decisions.”

IA-13 has been broken down by NIST into three supporting control enhancements:

  • IA-13 (01) – Cryptographic keys that protect access tokens are generated, managed, and protected from disclosure and misuse.
  • IA-13 (02) – The source and integrity of identity assertions and access tokens are verified before granting access to system and information resources.
  • IA-13 (03) – Assertions and access tokens are continuously refreshed, time-restricted, audience-restrained and revoked when necessary and after a defined period of non-use.

So, what does all that mean? Put simply, organizations should implement controls to effectively track and manage user and system entity permissions to ensure only authorized users are permitted access to corporate systems or data. This includes the proper use of encryption, hygiene and lifecycle management for access tokens.

This is, of course, a much needed and community-requested addition that speaks to the growing awareness and criticality of implementing checks and guardrails to mitigate identity-related risk. A key component of this equation is implementing a solution that can help you detect areas of your cloud environment that haven’t fully implemented these controls. This can be a particularly challenging thing to manage in a cloud environment, given its democratized nature, the sheer volume of identities and permissions that need to be managed and the ease with which improper allocation of permissions and privileges can occur.

Implement and Continuously Enforce NIST SP 800-53 Rev. 5 with InsightCloudSec

InsightCloudSec allows security teams to establish and continuously measure compliance against organizational policies, whether they’re based on service provider best practices, a common industry framework, or a custom pack tailored to specific business needs.

A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework, or industry or provider best practices. The platform comes out of the box with 40+ compliance packs, including a dedicated pack for NIST SP 800-53 Rev. 5.1.1, which now provides an additional 14 insights that align to the newly-added IA-13.

The dedicated pack provides 367 Insights checking against 128 NIST SP 800-53 Rev. 5.1.1 requirements that assess your multi-cloud environment for compliance with the controls outlined by NIST. With extensive support for various resource types across all major cloud service providers (CSPs), security teams can confidently implement and continuously enforce compliance with SP 800-53 Rev. 5.1.1.


InsightCloudSec continuously assesses your entire multi-cloud environment for compliance with one or more compliance packs and detects noncompliant resources within minutes after they are created or an unapproved change is made. If you so choose, you can make use of the platform’s native, no-code automation to contact the resource owner, or even remediate the issue—either via deletion or by adjusting the configuration or permissions—without any human intervention.

For more information about how to use InsightCloudSec to implement and enforce compliance standards like those outlined in NIST SP 800-53 Rev. 5.1.1, be sure to check out the docs page! For more on our cloud identity and access management capabilities, we’ve got some additional information on that here.

Rapid7 Takes Next Step in AI Innovation with New AI-Powered Threat Detections

Post Syndicated from Laura Ellis original https://blog.rapid7.com/2023/11/29/rapid7-takes-next-step-in-ai-innovation-with-new-ai-powered-threat-detections/

Rapid7 Takes Next Step in AI Innovation with New AI-Powered Threat Detections

Digital transformation has created immense opportunity to generate new revenue streams, better engage with customers and drive operational efficiency. A decades-long transition to cloud as the de-facto delivery model of choice has delivered undeniable value to the business landscape. But any change in operating model brings new challenges too.

The speed, scale, and complexity of modern IT environments result in security teams being tasked with analyzing mountains of data to keep pace with the ever-expanding threat landscape. This dynamic puts security analysts on their heels, constantly reacting to incoming threat signals from tools that weren’t purpose-built for hybrid environments, creating coverage gaps and a need to swivel-chair between a multitude of point solutions. Making matters worse? Attackers have increasingly looked to weaponize AI technologies to launch sophisticated attacks, benefiting from increased scale, easy access to AI-generated malware packages, and more effective social engineering and phishing using generative AI.

To combat these challenges, we need to equip our security teams with modern solutions that leverage AI to cut through the noise and boost the signals that matter. Our AI algorithms alleviate alert and action fatigue by delivering visibility across your IT environment and intelligently prioritizing the most important risk signals.

Rapid7 Has Been an AI Innovator for Decades

There has been a groundswell of development and corresponding interest in generative AI, particularly in the last few years as mainstream adoption of large language models (LLMs) grows. Most notably, OpenAI’s ChatGPT has brought AI to the forefront of people’s minds. This buzz has resulted in a number of vendors in the security space launching their own intelligent assistants and working to incorporate AI/ML into their respective solutions to keep pace.

From our perspective, this is great news and a huge step forward in the data and AI space. Rapid7 is also accelerating investment, but we’re certainly not starting from scratch. In fact, Rapid7 was a pioneer in AI development for security use cases, starting all the way back in our earliest days with our VM Expert System in the early 2000s.


Built on decades of risk analysis and continuously trained by our expert SOC team, Rapid7 AI enables your team to focus on what matters most by proactively shrinking your attack surface and intelligently re-balancing the scales between signal and noise.

With visibility across your hybrid attack surface, the Insight Platform enables proactive prevention, leveraging a proprietary AI-based detection engine to spot threats faster than ever and automatically prioritize the signals that matter most based on likelihood of exploitation and potential business impact. Based on learnings from your own environment and security operations over time, the platform will intelligently recommend updates to detection rule settings in an effort to reduce excess noise and eliminate false positives.

By integrating our AI capabilities into the Rapid7 platform, customers benefit from:

  • World-class threat efficacy, with AI-driven detection of anomalous activity. With a vast amount of legitimate activity occurring across customer environments, our AI algorithms validate whether activity is actually malicious, allowing teams to spot unknown threats faster than ever.
  • Help to cut through the noise by identifying the signals that matter most. Our AI algorithms automatically prioritize risks and threats, intelligently suppressing benign alerts to eliminate the noise so analysts can focus on what matters most.
  • The confidence that they’re taking action on AI-generated insights they can trust, built on decades of risk and threat analysis and trained by a team of recognized innovators in AI-driven security.

Recent Innovations in AI-driven Threat Detection

We’ve recently announced two new AI/ML-powered threat detection capabilities aimed at enabling teams to detect unknown threats across a customer’s  environment faster than ever before without introducing excess noise.

  • Cloud Anomaly Detection
    Cloud Anomaly Detection is an AI-powered, agentless detection capability designed to detect and prioritize anomalous activity within an organization’s own cloud environment. The proprietary AI engine goes beyond simply detecting suspicious behavior; it automatically suppresses benign signals to reduce noise, eliminate false positives, and enable teams to focus on investigating highly-probable active threats. For more information on Cloud Anomaly Detection, check out the launch blog here.
  • Intelligent Kerberoasting Detection
    We’ve expanded existing AI-driven detections for attack types such as data exfiltration, phishing and credential theft to include intelligent detection, validation and prioritization of Kerberoasting attacks. The platform goes beyond traditional tactics for detecting Kerberoasting by applying a deep understanding of typical user activity across the organization. With this context, SOC teams can respond with confidence knowing the signals they are receiving are actually indicative of a Kerberoasting attack.

Rapid7 continues to explore and invest in ways we can leverage AI/ML to better-equip our customers to defend their organizations against the ever-expanding threat landscape. Keep an eye out in the near-future for additional innovations to come out in this space.

For now, be sure to stop by the Rapid7 booth (#1270) at AWS re:Invent, where we’ll be showcasing Cloud Anomaly Detection, and talk to us about how your team is thinking about utilizing AI.

Updates to Layered Context Enable Teams to Quickly Understand Which Risk Signals Are Most Pressing

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/11/28/updates-to-layered-context-enable-teams-to-quickly-understand-which-risk-signals-are-most-pressing/

Updates to Layered Context Enable Teams to Quickly Understand Which Risk Signals Are Most Pressing

Layered Context introduced a consolidated view of all of the security risks that InsightCloudSec collects from the various layers of a cloud environment. This enabled our customers to go from visibility into individual security risks on a resource to understanding all of the risks that affect that resource and its overall risk.

For example: let’s take a cloud resource that has a port left open to the public.

With this level of detail, it is pretty challenging to identify the risk level, because we don’t know enough about the resource in question, or even whether it was supposed to be opened to the public. It’s not that this isn’t risky; we just need to know more to evaluate just how risky it is. As we add more context, we start to get a clearer picture: the environment the resource is running in, whether it is connected to a business-critical application, whether it has any known vulnerabilities, whether there are identities with elevated permissions associated with the resource, and so on.


By layering together all of this context, customers are able to effectively understand the actual risk associated with each and every one of their resources – in real-time. This is of course helpful information to have in one consolidated view, but even still it can be difficult to sift through potentially thousands of resources and prioritize the work that needs to be done to secure each one. To that end, we are excited to introduce a new risk score in Layered Context, which analyzes all the signals and context we know about a given cloud resource and automatically assigns a score and a severity, making it easy for our customers to understand the riskiest resources they should focus on.

Prioritizing Risk By Focusing on Toxic Combinations

Much like Layered Context itself, the new risk score combines a variety of risk signals, assigning a higher risk score to resources that suffer from toxic combinations, or multiple risk vectors that compound to present an increased likelihood or impact of compromise.

The risk score takes into account:

  • Business Criticality, with an understanding of what applications the resource is associated with such as a crown-jewel or revenue generating app
  • Public Accessibility, both from a network perspective as well as via user permissions (more on that in a second)
  • Potential Attack Paths, to understand how a bad actor could move laterally across your inter-connected environment
  • Identity-related risk, including excessive and/or unused permissions and privileges
  • Misconfigurations, including whether or not the resource is in compliance with organizational standards
  • Threats to factor in any malicious behavior that has been detected
  • And of course, Vulnerabilities, using Rapid7’s Active Risk model which consumes data on exploitability and active exploitation in the wild

By identifying these toxic combinations, we can ensure the riskiest resources are given the highest priority. Each resource is assigned a score and a severity, making it easy for our customers to see where the riskiest resources exist in their environment and where to focus.

A Clear Understanding of How We Calculate Risk

Alongside our risk score, we are introducing a new view that breaks down all of the reasons why a resource has been scored the way it has. It gives an overview of the most important information our customers need to know and clearly summarizes the factors that influenced the risk scoring, reducing the time required to understand why a resource is risky so that security teams can focus on remediating the risks.


A Bit More on How we Determine Public Accessibility

As mentioned previously, the basis of much of our risk calculation in cloud resources stems from a simple question: “is this resource publicly accessible?” This is a critical detail in determining relative risk, but can be very difficult to ascertain given the complex and ephemeral nature of cloud environments. To address this, we’ve invested significant time and effort to ensure we’re assessing public accessibility as accurately as possible but also explaining why we’ve determined it that way, so it’s much easier to take remediation action. This determination can easily be viewed on a per resource basis from the Layered Context page.

We have lots of exciting releases coming up in the next few months. Alongside risk scoring, we are also extending our Attack Path Analysis feature to show the blast radius of an attack with improved topology visualizations. This will give our customers visibility not only into how an attacker could exploit a given resource but also into the potential for lateral movement between interconnected resources. Additionally, we’ll be updating the way we validate and show proof of public accessibility. Should a resource be publicly accessible, you will be able to easily view the proof details, which will show exactly which combination of configurations is making the resource publicly accessible.

The new risk scoring capabilities in Layered Context will be on display at AWS re:Invent next week. Be sure to stop by booth #1270 to see them in action!

How to use the BatchGetSecretsValue API to improve your client-side applications with AWS Secrets Manager

Post Syndicated from Brendan Paul original https://aws.amazon.com/blogs/security/how-to-use-the-batchgetsecretsvalue-api-to-improve-your-client-side-applications-with-aws-secrets-manager/

AWS Secrets Manager is a service that helps you manage, retrieve, and rotate database credentials, application credentials, OAuth tokens, API keys, and other secrets throughout their lifecycles. You can use Secrets Manager to help remove hard-coded credentials in application source code. Storing the credentials in Secrets Manager helps avoid unintended or inadvertent access by anyone who can inspect your application’s source code, configuration, or components. You can replace hard-coded credentials with a runtime call to the Secrets Manager service to retrieve credentials dynamically when you need them.

In this blog post, we introduce a new Secrets Manager API call, BatchGetSecretValue, and walk you through how you can use it to retrieve multiple Secrets Manager secrets.

New API — BatchGetSecretValue

Previously, if you had an application that used Secrets Manager and needed to retrieve multiple secrets, you had to write custom code to first identify the list of needed secrets by making a ListSecrets call, and then call GetSecretValue on each individual secret. Now, you don’t need to run ListSecrets and loop. The new BatchGetSecretValue API reduces code complexity when retrieving secrets, reduces latency by running bulk retrievals, and reduces the risk of reaching Secrets Manager service quotas.
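
For illustration, here’s a minimal sketch that contrasts the two approaches; the tag key and value used in the filter are placeholders, and error handling is omitted:

import boto3

client = boto3.client("secretsmanager")

# Placeholder tag filter; adjust to your own tagging scheme.
filters = [
    {"Key": "tag-key", "Values": ["app"]},
    {"Key": "tag-value", "Values": ["app1"]},
]

# Before: list the matching secrets, then fetch each value individually.
secrets = []
paginator = client.get_paginator("list_secrets")
for page in paginator.paginate(Filters=filters):
    for entry in page["SecretList"]:
        secrets.append(client.get_secret_value(SecretId=entry["ARN"]))

# After: a single call returns the values of every matching secret.
response = client.batch_get_secret_value(Filters=filters)
secrets = response["SecretValues"]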

Security considerations

Though you can use this feature to retrieve multiple secrets in one API call, the access controls for Secrets Manager secrets remain unchanged. This means AWS Identity and Access Management (IAM) principals need the same permissions as if they were to retrieve each of the secrets individually. If secrets are retrieved using filters, principals must have both permissions for list-secrets and get-secret-value on secrets that are applicable. This helps protect secret metadata from inadvertently being exposed. Resource policies on secrets serve as another access control mechanism, and AWS principals must be explicitly granted permissions to access individual secrets if they’re accessing secrets from a different AWS account (see Cross-account access for more information). Later in this post, we provide some examples of how you can restrict permissions of this API call through an IAM policy or a resource policy.

Solution overview

In the following sections, you will configure an AWS Lambda function to use the BatchGetSecretValue API to retrieve multiple secrets at once. You will also implement attribute-based access control (ABAC) for Secrets Manager secrets and demonstrate the access control mechanisms of Secrets Manager. If you follow along with this example, you will incur costs for the Secrets Manager secrets that you create and for the Lambda function invocations that are made. See the Secrets Manager Pricing and Lambda Pricing pages for more details.

Prerequisites

To follow along with this walk-through, you need:

  1. Five resources that require an application secret to interact with, such as databases or third-party API keys.
  2. Access to an IAM principal that can:
    • Create Secrets Manager secrets through the AWS Command Line Interface (AWS CLI) or AWS Management Console.
    • Create an IAM role to be used as a Lambda execution role.
    • Create a Lambda function.

Step 1: Create secrets

First, create multiple secrets with the same resource tag key-value pair using the AWS CLI. The resource tag will be used for ABAC. These secrets might look different depending on the resources that you decide to use in your environment. You can also manually create these secrets in the Secrets Manager console if you prefer.

Run the following commands in the AWS CLI, replacing the secret-string values with the credentials of the resources that you will be accessing:

  1.  
    aws secretsmanager create-secret --name MyTestSecret1 --description "My first test secret created with the CLI for resource 1." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-1\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  2.  
    aws secretsmanager create-secret --name MyTestSecret2 --description "My second test secret created with the CLI for resource 2." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-2\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  3.  
    aws secretsmanager create-secret --name MyTestSecret3 --description "My third test secret created with the CLI for resource 3." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-3\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  4.  
    aws secretsmanager create-secret --name MyTestSecret4 --description "My fourth test secret created with the CLI for resource 4." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-4\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  5.  
    aws secretsmanager create-secret --name MyTestSecret5 --description "My fifth test secret created with the CLI for resource 5." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-5\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"

Next, create a secret with a different resource tag value for the app key, but the same environment key-value pair. This will allow you to demonstrate that the BatchGetSecretValue call will fail when an IAM principal doesn’t have permissions to retrieve and list the secrets in a given filter.

Create a secret with a different tag, replacing the secret-string values with credentials of the resources that you will be accessing.

  1.  
    aws secretsmanager create-secret --name MyTestSecret6 --description "My test secret created with the CLI." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-6\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app2\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"

Step 2: Create an execution role for your Lambda function

In this example, create a Lambda execution role that only has permissions to retrieve secrets that are tagged with the app:app1 resource tag.

Create the policy to attach to the role

  1. Navigate to the IAM console.
  2. Select Policies from the navigation pane.
  3. Choose Create policy in the top right corner of the console.
  4. In Specify Permissions, select JSON to switch to the JSON editor view.
  5. Copy and paste the following policy into the JSON text editor.
    {
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Sid": "Statement1",
    			"Effect": "Allow",
    			"Action": [
    				"secretsmanager:ListSecretVersionIds",
    				"secretsmanager:GetSecretValue",
    				"secretsmanager:GetResourcePolicy",
    				"secretsmanager:DescribeSecret"
    			],
    			"Resource": [
    				"*"
    			],
    			"Condition": {
    				"StringNotEquals": {
    					"aws:ResourceTag/app": [
    						"${aws:PrincipalTag/app}"
    					]
    				}
    			}
    		},
    		{
    			"Sid": "Statement2",
    			"Effect": "Allow",
    			"Action": [
    				"secretsmanager:ListSecrets"
    			],
    			"Resource": ["*"]
    		}
    	]
    }

  6. Choose Next.
  7. Enter LambdaABACPolicy for the name.
  8. Choose Create policy.

Create the IAM role and attach the policy

  1. Select Roles from the navigation pane.
  2. Choose Create role.
  3. Under Trusted entity type, leave AWS service selected.
  4. Select the dropdown menu under Service or use case and select Lambda.
  5. Choose Next.
  6. Select the checkbox next to the LambdaABACPolicy policy you just created and choose Next.
  7. Enter a name for the role.
  8. Select Add tags and add a tag with app as the key and app1 as the value.
  9. Choose Create Role.

Step 3: Create a Lambda function to access secrets

  1. Navigate to the Lambda console.
  2. Choose Create Function.
  3. Enter a name for your function.
  4. Select the Python 3.10 runtime.
  5. Select Change default execution role, choose Use an existing role, and select the execution role that you just created.
  6. Choose Create Function.
    Figure 1: Create a Lambda function to access secrets

  7. In the Code tab, copy and paste the following code:
    import json
    import boto3
    from botocore.exceptions import ClientError

    session = boto3.session.Session()
    # Create a Secrets Manager client
    client = session.client(
        service_name='secretsmanager'
    )


    def lambda_handler(event, context):

        # Retrieve every secret that matches the tag key and tag value passed in the event
        application_secrets = client.batch_get_secret_value(Filters=[
            {
                'Key': 'tag-key',
                'Values': [event["TagKey"]]
            },
            {
                'Key': 'tag-value',
                'Values': [event["TagValue"]]
            }
        ])
    
    
        ### RESOURCE 1 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 1")
            resource_1_secret = application_secrets["SecretValues"][0]
            ## IMPLEMENT RESOURCE CONNECTION HERE
    
            print("SUCCESFULLY CONNECTED TO RESOURCE 1")
        
        except Exception as e:
            print("Failed to connect to resource 1")
            return e
    
        ### RESOURCE 2 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 2")
            resource_2_secret = application_secrets["SecretValues"][1]
            ## IMPLEMENT RESOURCE CONNECTION HERE
            
            print("SUCCESFULLY CONNECTED TO RESOURCE 2")
        
        except Exception as e:
            print("Failed to connect to resource 2",)
            return e
    
        
        ### RESOURCE 3 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 3")
            resource_3_secret = application_secrets["SecretValues"][2]
            ## IMPLEMENT RESOURCE CONNECTION HERE
            
            print("SUCCESFULLY CONNECTED TO DB 3")
            
        except Exception as e:
            print("Failed to connect to resource 3")
            return e 
    
        ### RESOURCE 4 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 4")
            resource_4_secret = application_secrets["SecretValues"][3]
            ## IMPLEMENT RESOURCE CONNECTION HERE
            
            print("SUCCESFULLY CONNECTED TO RESOURCE 4")
            
        except Exception as e:
            print("Failed to connect to resource 4")
            return e
    
        ### RESOURCE 5 CONNECTION ###
        try:
            print("TESTING ACCESS TO RESOURCE 5")
            resource_5_secret = application_secrets["SecretValues"][4]
            ## IMPLEMENT RESOURCE CONNECTION HERE
            
            print("SUCCESFULLY CONNECTED TO RESOURCE 5")
            
        except Exception as e:
            print("Failed to connect to resource 5")
            return e
        
        return {
            'statusCode': 200,
            'body': json.dumps('Successfully Completed all Connections!')
        }

  8. You need to configure connections to the resources that you’re using for this example. To keep the example flexible, the code doesn’t create database or resource connections; add your own connection code after each “## IMPLEMENT RESOURCE CONNECTION HERE” comment (a hypothetical example for a MySQL-backed resource follows this procedure).
  9. Choose Deploy.
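
The following sketch shows one way a connection block could look if resource 1 were, for example, a MySQL database. The pymysql dependency, host name, and database name are assumptions for illustration; only the user and password keys come from the secrets created in Step 1.

import json
import pymysql  # assumed to be packaged with the function (for example, as a Lambda layer)

def connect_resource_1(secret_entry, host="resource-1.example.internal", database="appdb"):
    # Each entry in SecretValues carries the secret string created in Step 1.
    credentials = json.loads(secret_entry["SecretString"])
    return pymysql.connect(
        host=host,                        # placeholder host name
        user=credentials["user"],
        password=credentials["password"],
        database=database,                # placeholder database name
        connect_timeout=5,
    )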

Step 4: Configure the test event to initiate your Lambda function

  1. Above the code source, choose Test and then Configure test event.
  2. In the Event JSON, replace the JSON with the following:
    {
      "TagKey": "app",
      "TagValue": "app1"
    }

  3. Enter a Name for your event.
  4. Choose Save.

Step 5: Invoke the Lambda function

  1. Invoke the Lambda function by choosing Test. (If you prefer to invoke the function programmatically, see the sketch that follows this step.)
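
If you invoke the function from a script instead of the console, a minimal sketch looks like the following (the function name is a placeholder):

import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="my-batch-get-secrets-function",  # placeholder function name
    Payload=json.dumps({"TagKey": "app", "TagValue": "app1"}).encode(),
)
print(json.loads(response["Payload"].read()))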

Step 6: Review the function output

  1. Review the response and function logs to see the new feature in action. Your function logs should show successful connections to the five resources that you specified earlier, as shown in Figure 2.
    Figure 2: Review the function output

Step 7: Test a different input to validate IAM controls

  1. In the Event JSON window, replace the JSON with the following:
    {
      "TagKey": "environment",
      "TagValue": "production"
    }

  2. You should now see an error message from Secrets Manager in the logs similar to the following:
    User: arn:aws:iam::123456789012:user/JohnDoe is not authorized to perform: 
    secretsmanager:GetSecretValue because no resource-based policy allows the secretsmanager:GetSecretValue action

As you can see, you were able to retrieve the appropriate secrets based on the resource tag. You will also note that when the Lambda function tried to retrieve secrets for a resource tag that it didn’t have access to, Secrets Manager denied the request.

How to restrict use of BatchGetSecretValue for certain IAM principals

When dealing with sensitive resources such as secrets, it’s recommended that you adhere to the principle of least privilege. Service control policies, IAM policies, and resource policies can help you do this. The following three example policies illustrate different approaches:

Policy 1: IAM ABAC policy for Secrets Manager

This policy denies requests to get a secret if the principal doesn’t share the same project tag as the secret that the principal is trying to retrieve. Note that the effectiveness of this policy is dependent on correctly applied resource tags and principal tags. If you want to take a deeper dive into ABAC with Secrets Manager, see Scale your authorization needs for Secrets Manager using ABAC with IAM Identity Center.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:BatchGetSecretValue"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceTag/project": [
            "${aws:PrincipalTag/project}"
          ]
        }
      }
    }
  ]
}

Policy 2: Deny BatchGetSecretValue calls unless from a privileged role

This policy example denies the ability to use the BatchGetSecretValue API unless it’s called by a privileged workload role.

"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "Statement1",
			"Effect": "Deny",
			"Action": [
				"secretsmanager:BatchGetSecretValue",
			],
			"Resource": [
				"arn:aws:secretsmanager:us-west-2:12345678910:secret:testsecret"
			],
			"Condition": {
				"StringNotLike": {
					"aws:PrincipalArn": [
						"arn:aws:iam::123456789011:role/prod-workload-role"
					]
				}
			}
		}]
}

Policy 3: Restrict actions to specified principals

Finally, let’s take a look at an example resource policy from our data perimeters policy examples. This resource policy restricts Secrets Manager actions to the principals that are in the organization that this secret is a part of, except for AWS service accounts.

{
    "Version": "2012-10-17",
    "Statement": 
	[
        {
            "Sid": "EnforceIdentityPerimeter",
            "Effect": "Deny",
            "Principal": 
			{
                "AWS": "*"
            },
            "Action": "secretsmanager:*",
            "Resource": "*",
            "Condition": 
			{
                "StringNotEqualsIfExists": 
				{
                    "aws:PrincipalOrgID": "<my-org-id>"
                },
                "BoolIfExists": 
				{
                    "aws:PrincipalIsAWSService": "false"
                }
            }
        }
     ]
}

Conclusion

In this blog post, we introduced the BatchGetSecretValue API, which you can use to improve operational excellence and performance efficiency and to reduce costs when using Secrets Manager. We looked at how you can use the API call in a Lambda function to retrieve multiple secrets that have the same resource tag, and we showed an example of an IAM policy to restrict access to this API.

To learn more about Secrets Manager, see the AWS Secrets Manager documentation or the AWS Security Blog.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Brendan Paul

Brendan is a Senior Solutions Architect at Amazon Web Services supporting media and entertainment companies. He has a passion for data protection and has been working at AWS since 2019. In 2024, he will start to pursue his Master’s Degree in Data Science at UC Berkeley. In his free time, he enjoys watching sports and running.

2023 Canadian Centre for Cyber Security Assessment Summary report available with 20 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2023-canadian-centre-for-cyber-security-assessment-summary-report-available-with-20-additional-services/

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of current and new AWS services and features. We are pleased to announce the availability of the 2023 Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS. With this assessment, a total of 150 AWS services and features are assessed in the Canada (Central) Region, including 20 additional AWS services and features. The assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment, including the 20 newly added services and features, is available on the Services in Scope page.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.  

The CCCS cloud service provider information technology security assessment process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach, Annex 3 – Security control catalogue). As of November 2023, 150 AWS services in the Canada (Central) Region have been assessed by CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including Medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is an Audit Lead for Canada. He has experience leading audits, attestations, certifications, and assessments across the Americas. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, and utilities industries.

Manage Enterprise Risk at Scale with a Unified, Holistic Approach

Post Syndicated from KC Higgins original https://blog.rapid7.com/2023/11/16/manage-enterprise-risk-at-scale-with-a-unified-holistic-approach/

The rapid pace of technological change and the attendant rise of cyber threats in both speed and number leave most organizations at a disadvantage.

Historically, many firms faced this challenge simply by purchasing more technology in the hopes that the latest threat protection software would keep their data safe. But those days have come to an end. Not only have budgets come under increased scrutiny, but the sheer number of tools in most environments has become a handicap as well: Tools don’t always work well together and the expertise required to manage them remains in short supply. According to some analysts, the current complexity and diversity of tech environments also hampers visibility into vulnerability risks, at least in part because data must be obtained from disparate systems or laboriously exported into spreadsheets and data analytics platforms to fine-tune and understand relevant risks.

Organizations looking for a unified perspective of risk across their cloud and on-prem environments can meet cyber threats with speed and success by prioritizing risk, eliminating repetitive manual work, maintaining complete risk visibility, and consolidating point solutions.

Not All Threats Are Created Equal

You’ll hear companies claim to stop all threats everywhere all the time, but such claims are neither true nor practical – anyone who follows the news even casually knows that the threats keep coming. The key is to understand which threats pose the largest risks and mitigate those first. Tools like Rapid7’s InsightVM analyze enterprise-wide asset and vulnerability data to identify the actions that will have the largest impact on risk reduction in a given organization. Instead of thousand-page lists of individual patches to apply, organizations can make informed, up-to-the-minute decisions on how to allocate resources for maximum risk reduction.

InsightVM also offers live dashboards that update whenever new data is discovered, allowing teams to track the attack surface and risk as they change. Dashboard views can even be customized for different technical teams or stakeholders as organizational perimeters expand into the cloud and beyond. Other approaches, such as cloud risk management, allow organizations to manage, prioritize, and act on risks within the large scale of modern multi-cloud environments and on-prem footprints by helping them understand the potential impact of a particular risk and its likelihood of exploitation.

You Can’t Protect What You Can’t See

In addition to trying to tackle every imaginable risk, maintaining maximum visibility into attack-surface risk is the only way organizations can hope to minimize security gaps while managing the many containers, cloud services, and virtual devices that are often spun up and down without direct involvement from the security team.

While InsightVM integrates directly with dynamic infrastructure to give full visibility into the risks posed by these assets, solutions like Executive Risk View provide complete visibility into hybrid-environment risk by ingesting data with purpose-built collection mechanisms, regardless of whether workloads are running on-premises or in the cloud.

Executive Risk View also aggregates and normalizes disparate risk assessments from on-premises and cloud environments for a unified, interactive dashboard that brings clarity to discovered vulnerabilities and the risks each represents so that security teams can prioritize remediation actions and share insights cross-functionally. Insight into how vulnerabilities translate into business risk – and which of them are most likely to be targeted by attackers – means teams can quickly and effectively address the risks that pose the most significant danger.

Simplify the Stack

Prioritization, automation, and visibility are all foundational elements of unified risk protection, but if organizations rely on multiple vendors to manage them, they will continue to lose efficiencies and battle tool proliferation. Cloud Risk Complete offers all of these solutions from one comprehensive platform and single subscription model. This means organizations can secure hybrid environments from development to production; detect and address risk across endpoints, cloud workloads, and traditional infrastructure; and perform dynamic application security testing to remediate application risk – all with a single subscription.

Learn more about the ways Rapid7 can help increase your security posture.

Be Empathetic and Hug Your CISO More!

Post Syndicated from Owen Holland original https://blog.rapid7.com/2023/11/10/be-empathetic-and-hug-your-ciso-more/

In the rapidly evolving landscape of cloud computing, the adoption of multi-cloud environments has become a prevailing trend. Organizations increasingly turn to multiple cloud providers to harness diverse features, prevent vendor lock-in, and optimize costs. The multi-cloud approach offers unparalleled agility, scalability, and flexibility, but it has its complexities, and CISOs need your support.

In the final episode of the Cloud Security Webinar Series, Rapid7’s Chief Security Officer Jaya Baloo and other experts share their thoughts on the cloud strategies to support security leaders as they move into 2024 and beyond.

These webinars can now be viewed on-demand, giving security professionals greater insight into how to safeguard their cloud environments and set themselves up for success. A summary of the key discussion points is provided below.

Nurturing Comprehension and Collaboration

Multi-cloud environments present a complex tapestry woven with equal parts opportunity and complexity. Governance, security, and cost optimization are paramount concerns, often exacerbated by the absence of centralized visibility, with the threat of misconfigurations and potential compliance issues looming in the background.

So, in the face of these challenges, collaborative unity among security teams becomes not just a nicety but a necessity. It is through the sharing of knowledge and experiences that the security community effectively grapples with these evolving challenges.

Striving for Collective Success

There are several simple strategies security teams can adopt to support a more robust defense:

  1. Centralized visibility: Embrace cloud management tools that unveil a comprehensive view of the multi-cloud landscape. This provides a single pane of glass for security teams to gain comprehensive insights into their digital assets, compliance status, and ongoing security threats, and in doing so fosters collaboration and unity.
  2. Automation: Leveraging automation is key to efficiently managing multi-cloud landscapes. Automate asset discovery, security policy enforcement, and threat response. Automation not only streamlines these processes but also reduces the risk of human error.
  3. Security governance framework: Develop a comprehensive security governance framework that encompasses all aspects of multi-cloud security, including identity and access management, data protection, and threat detection. This framework should be flexible enough to accommodate the nuances of each cloud platform.
  4. Resource optimization: Regularly evaluate resource utilization across different cloud providers. Ensure that resources are allocated efficiently to minimize costs. Implement scaling and resource allocation strategies to adapt to changing workload requirements.
  5. Enhanced staff training: Invest in the skills and knowledge of security and IT teams, along with opportunities for cross-training and knowledge sharing.

As organizations continue to embrace multi-cloud environments, mastering the complexities of diverse cloud platforms is crucial for enhanced security, governance, and cost optimization. By gaining a deep understanding of the multi-cloud landscape, addressing key challenges head-on, and implementing efficient management strategies, security professionals can navigate the intricate web of multi-cloud and ensure seamless operations in the cloud-native era.

Cultivating Unity for a More Resilient Future

The evolving nature of cybersecurity demands organizations stand together to share experiences, strategies, and best practices. By cultivating unity and empathy across the security community and the wider business, organizations can collectively navigate the shifting threat landscape more easily.

Ultimately, uniting the cybersecurity community is not merely a virtue but an imperative. To find out more, watch the on-demand cloud security series now.

Aggregating, searching, and visualizing log data from distributed sources with Amazon Athena and Amazon QuickSight

Post Syndicated from Pratima Singh original https://aws.amazon.com/blogs/security/aggregating-searching-and-visualizing-log-data-from-distributed-sources-with-amazon-athena-and-amazon-quicksight/

Customers using Amazon Web Services (AWS) can use a range of native and third-party tools to build workloads based on their specific use cases. Logs and metrics are foundational components in building effective insights into the health of your IT environment. In a distributed and agile AWS environment, customers need a centralized and holistic solution to visualize the health and security posture of their infrastructure.

You can effectively categorize the members of the teams involved using the following roles:

  1. Executive stakeholder: Owns and operates with their support staff and has total financial and risk accountability.
  2. Data custodian: Aggregates related data sources while managing cost, access, and compliance.
  3. Operator or analyst: Uses security tooling to monitor, assess, and respond to related events such as service disruptions.

In this blog post, we focus on the data custodian role. We show you how you can visualize metrics and logs centrally with Amazon QuickSight irrespective of the service or tool generating them. We use Amazon Simple Storage Service (Amazon S3) for storage, AWS Glue for cataloguing, and Amazon Athena for querying the data and creating structured query language (SQL) views for QuickSight to consume.

Target architecture

This post guides you towards building a target architecture in line with the AWS Well-Architected Framework. The tiered and multi-account target architecture, shown in Figure 1, uses account-level isolation to separate responsibilities across the various roles identified above and makes access management more defined and specific to those roles. The workload accounts generate the telemetry around the applications and infrastructure. The data custodian account is where the data lake is deployed and collects the telemetry. The operator account is where the queries and visualizations are created.

Throughout the post, we mention AWS services that reduce the operational overhead in one or more stages of the architecture.

Figure 1: Data visualization architecture

Ingestion

Irrespective of the technology choices, applications and infrastructure configurations should generate metrics and logs that report on resource health and security. The format of the logs depends on which tool and which part of the stack is generating the logs. For example, the format of log data generated by application code can capture bespoke and additional metadata deemed useful from a workload perspective as compared to access logs generated by proxies or load balancers. For more information on types of logs and effective logging strategies, see Logging strategies for security incident response.

Amazon S3 is a scalable, highly available, durable, and secure object storage that you will use as the storage layer. To build a solution that captures events agnostic of the source, you must forward data as a stream to the S3 bucket. Based on the architecture, there are multiple tools you can use to capture and stream data into S3 buckets. Some tools support integration with S3 and directly stream data to S3. Resources like servers and virtual machines need forwarding agents such as Amazon Kinesis Agent, Amazon CloudWatch agent, or Fluent Bit.

Amazon Kinesis Data Streams provides a scalable data streaming environment. Using on-demand capacity mode eliminates the need for capacity provisioning and capacity management for streaming workloads. For log data and metric collection, you should use on-demand capacity mode, because log data generation can be unpredictable depending on the requests that are being handled by the environment. Amazon Kinesis Data Firehose can convert the format of your input data from JSON to Apache Parquet before storing the data in Amazon S3. Parquet is naturally compressed, and using Parquet native partitioning and compression allows for faster queries compared to JSON formatted objects.
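
As a minimal sketch of this pattern (the stream name and the log record are placeholders), creating an on-demand stream and forwarding a single log event with Python might look like the following. In practice, agents such as Fluent Bit or the CloudWatch agent do the forwarding continuously on your behalf.

import json
import boto3

kinesis = boto3.client("kinesis")

# Create a stream in on-demand capacity mode, so no shard planning is needed.
kinesis.create_stream(
    StreamName="central-log-stream",  # placeholder stream name
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# Forward a single example log event to the stream.
kinesis.put_record(
    StreamName="central-log-stream",
    Data=json.dumps({"source": "app-server-01", "level": "ERROR", "message": "example event"}).encode(),
    PartitionKey="app-server-01",
)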

Scalable data lake

Use AWS Lake Formation to build, secure, and manage the data lake that stores log and metric data in S3 buckets. We recommend using tag-based access control and named resources to share data across accounts to build visualizations. Data custodians should configure access for relevant datasets to the operators, who can use Athena to perform complex queries and build compelling data visualizations with QuickSight, as shown in Figure 2. For cross-account permissions, see Use Amazon Athena and Amazon QuickSight in a cross-account environment. You can also use Amazon DataZone to build additional governance and share data at scale within your organization. Note that the data lake is different from and separate from the Log Archive bucket and account described in Organizing Your AWS Environment Using Multiple Accounts.

Figure 2: Account structure
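
As an illustrative sketch of the tag-based sharing described above (the principal ARN, LF-tag key, and tag values are assumptions), a data custodian could grant query access on tagged tables to an operator role like this:

import boto3

lakeformation = boto3.client("lakeformation")

# Grant SELECT and DESCRIBE on every table carrying the LF-tag team=security-operators
# to the operator account's analyst role (the ARN is a placeholder).
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::222222222222:role/operator-analyst"},
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "team", "TagValues": ["security-operators"]}],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
)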

Amazon Security Lake

Amazon Security Lake is a fully managed security data lake service. You can use Security Lake to automatically centralize security data from AWS environments, SaaS providers, on-premises, and third-party sources into a purpose-built data lake that’s stored in your AWS account. Using Security Lake reduces the operational effort involved in building a scalable data lake, as the service automates the configuration and orchestration for the data lake with Lake Formation. Security Lake automatically transforms logs into a standard schema—the Open Cybersecurity Schema Framework (OCSF) — and parses them into a standard directory structure, which allows for faster queries. For more information, see How to visualize Amazon Security Lake findings with Amazon QuickSight.

Querying and visualization

Figure 3: Data sharing overview

After you’ve configured cross-account permissions, you can use Athena as the data source to create a dataset in QuickSight, as shown in Figure 3. You start by signing up for a QuickSight subscription. There are multiple ways to sign in to QuickSight; this post uses AWS Identity and Access Management (IAM) for access. To use QuickSight with Athena and Lake Formation, you first must authorize connections through Lake Formation. After permissions are in place, you can add datasets. You should verify that you’re using QuickSight in the same AWS Region as the Region where Lake Formation is sharing the data. You can do this by checking the Region in the QuickSight URL.

You can start with basic queries and visualizations as described in Query logs in S3 with Athena and Create a QuickSight visualization. Depending on the nature and origin of the logs and metrics that you want to query, you can use the examples published in Running SQL queries using Amazon Athena. To build custom analytics, you can create views with Athena. Views in Athena are logical tables that you can use to query a subset of data. Views help you to hide complexity and minimize maintenance when querying large tables. Use views as a source for new datasets to build specific health analytics and dashboards.
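
For example, a minimal sketch of creating a reusable view from Python with Athena (the database, table, column names, and results bucket are placeholders) could look like the following; the resulting view can then be used as the source for a QuickSight dataset:

import boto3

athena = boto3.client("athena")

# Define a view that narrows a large raw log table down to error events only.
create_view = """
CREATE OR REPLACE VIEW central_logs.error_events AS
SELECT source, level, message, event_time
FROM central_logs.raw_events
WHERE level = 'ERROR'
"""

athena.start_query_execution(
    QueryString=create_view,
    QueryExecutionContext={"Database": "central_logs"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},  # placeholder bucket
)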

You can also use Amazon QuickSight Q to get started on your analytics journey. Powered by machine learning, Q uses natural language processing to provide insights into the datasets. After the dataset is configured, you can use Q to give you suggestions for questions to ask about the data. Q understands business language and generates results based on relevant phrases detected in the questions. For more information, see Working with Amazon QuickSight Q topics.

Conclusion

Logs and metrics offer insights into the health of your applications and infrastructure. It’s essential to build visibility into the health of your IT environment so that you can understand what good health looks like and identify outliers in your data. These outliers can be used to identify thresholds and feed into your incident response workflow to help identify security issues. This post helps you build out a scalable centralized visualization environment irrespective of the source of log and metric data.

This post is part 1 of a series that helps you dive deeper into the security analytics use case. In part 2, How to visualize Amazon Security Lake findings with Amazon QuickSight, you will learn how you can use Security Lake to reduce the operational overhead involved in building a scalable data lake and centralizing log data from SaaS providers, on-premises, AWS, and third-party sources into a purpose-built data lake. You will also learn how you can integrate Athena with Security Lake and create visualizations with QuickSight of the data and events captured by Security Lake.

Part 3, How to share security telemetry per Organizational Unit using Amazon Security Lake and AWS Lake Formation, dives deeper into how you can query security posture using AWS Security Hub findings integrated with Security Lake. You will also use the capabilities of Athena and QuickSight to visualize security posture in a distributed environment.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Pratima Singh

Pratima is a Security Specialist Solutions Architect with Amazon Web Services based out of Sydney, Australia. She is a security enthusiast who enjoys helping customers find innovative solutions to complex business challenges. Outside of work, Pratima enjoys going on long drives and spending time with her family at the beach.

Cloud Webinar Series Part 1: Commanding Cloud Strategies

Post Syndicated from Owen Holland original https://blog.rapid7.com/2023/10/17/cloud-webinar-series-part-1-commanding-cloud-strategies/

Over the past decade, cloud computing has evolved into a cornerstone of modern business operations. Its flexibility, scalability, and efficiency have reshaped industries and brought unprecedented opportunities.

However, this transformation has come with challenges—most notably those associated with cloud security. Our new cloud security webinar series will explore the dynamic landscape of cloud security, unveiling key trends, pinpointing critical challenges, and providing actionable insights tailored to security professionals.

In Commanding Cloud Strategies, the first webinar of the series, Rapid7’s Chief Security Officer Jaya Baloo and other experts will share their thoughts on the cloud challenges that security leaders face and offer insights on how to overcome them.

Please register for the first episode of our Cloud Security Series here to find out what our security experts think are the top strategies for addressing these challenges and considerations.

Armed with the knowledge and insights provided in part one, security professionals will be better equipped to safeguard their cloud environments and data assets in the modern digital landscape.

To learn more, check out the webinar abstract below.

Commanding Cloud Strategies Webinar Abstract

In the ever-evolving world of cloud security, staying ahead of the curve is paramount. Over the past ten years, several trends have emerged, shaping how organizations safeguard their digital assets.

The shift towards a shared responsibility model, greater emphasis on automation and orchestration, and a growing focus on identity and access management (IAM) are among the defining trends.

Cloud Security Challenges

  • Data Privacy and Compliance: Ensuring data protection and regulatory compliance within cloud environments is a persistent challenge. As data becomes more mobile and diverse, maintaining compliance becomes increasingly complex.
  • Evolving Threat Landscape: The threat landscape is in constant flux, with cyberattacks targeting cloud infrastructure and applications growing in sophistication. Security professionals must adapt to this ever-changing landscape to keep their organizations safe.

Considerations in Cloud Security

  • Scalable Security Architecture: Large enterprises must design security architectures that are both scalable and flexible to adapt to evolving cloud infrastructure and workload needs. The ability to scale security measures efficiently is crucial.
  • Identity and Access Management (IAM): Given the intricate web of user roles and permissions in large organizations, effective IAM is essential. Organizations should prioritize IAM solutions that streamline access while maintaining security.

Understanding Risk

Understanding cybersecurity risk is at the heart of cloud security. Effective risk assessment and mitigation involve evaluating internal and external tactics that could compromise an organization’s digital assets and information security. Our security experts will delve into this critical domain’s core challenges and considerations in the session.

Challenges in Understanding Risk

  • Complexity of Cloud Ecosystems: Successful organizations often operate intricate cloud ecosystems with numerous interconnected services and platforms. Navigating this complexity while assessing risk can be daunting.
  • Lack of Skilled Cybersecurity Personnel: The need for more skilled cybersecurity professionals capable of analyzing and managing cloud security risks is a widespread challenge. Organizations must find and retain the right talent to stay secure.

Considerations for Understanding Risk

  • Risk Assessment and Prioritization: Organizations should prioritize the identification and assessment of cloud security risks based on their potential impact and likelihood. Effective risk assessment tools and threat modelling can help in this regard.
  • Continuous Monitoring and Response: Establishing a robust, real-time monitoring system is essential. It allows organizations to continuously assess cloud environments for security incidents and respond promptly to emerging threats. Integrating Security Information and Event Management (SIEM) and DevSecOps practices can enhance this capability.

Threat Intelligence

In cloud security, threat intelligence is pivotal in staying one step ahead of potential threats and vulnerabilities. Effective threat intelligence involves collecting, analyzing, and disseminating timely information to protect cloud environments and data assets proactively.

Challenges in Threat Intelligence

  • Data Overload and False Positives: Organizations generate vast amounts of security data, including threat intelligence feeds. Managing this data can lead to data overload and false positives, causing alert fatigue.
  • Integration and Compatibility: Integrating threat intelligence feeds into existing security infrastructure can be complex, as different sources may use varying formats and standards.

Considerations in Threat Intelligence

  • Customization and Contextualization: To make threat intelligence actionable, organizations should customize it to their specific cloud environments, industry, and business context. Tailored alerting rules and threat-hunting workflows can enhance effectiveness.
  • Sharing and Collaboration: Collaborating with industry peers, Information Sharing and Analysis Centers (ISACs), and government agencies for threat intelligence sharing can provide valuable insights into emerging threats specific to the industry.

Security Capabilities

Cloud security capabilities encompass the ability to comprehend evolving risks, establish benchmark standards, and take immediate, informed actions to safeguard cloud environments and data assets effectively. The final topic in the webinar will explore the core challenges and considerations in building robust security capabilities.

Challenges in Security Capabilities

  • Resource Allocation and Prioritization: Allocating resources effectively across vast cloud environments can be challenging, leading to difficulties prioritizing security efforts and ensuring critical areas receive the necessary attention and investment.
  • Complexity of Hybrid and Multi-Cloud Environments: Managing security capabilities becomes particularly challenging when organizations operate in hybrid or multi-cloud environments. Ensuring consistent security practices and policies across different platforms and providers requires specialized expertise.

Considerations in Security Capabilities

  • Integrated Security Ecosystem: Organizations should strive to create an integrated security ecosystem that combines various security tools, technologies, and processes to provide a comprehensive view of their cloud environment.
  • Scalability and Elasticity: Cloud security capabilities should be designed to scale and adapt to the organization’s evolving cloud infrastructure and workloads. This includes automated resource scaling and continuous security testing.

Now available: Building a scalable vulnerability management program on AWS

Post Syndicated from Anna McAbee original https://aws.amazon.com/blogs/security/now-available-how-to-build-a-scalable-vulnerability-management-program-on-aws/

Vulnerability findings in a cloud environment can come from a variety of tools and scans depending on the underlying technology you’re using. Without processes in place to handle these findings, they can begin to mount, often leading to thousands to tens of thousands of findings in a short amount of time. We’re excited to announce the Building a scalable vulnerability management program on AWS guide, which includes how you can build a structured vulnerability management program, operationalize tooling, and scale your processes to handle a large number of findings from diverse sources.

Building a scalable vulnerability management program on AWS focuses on the fundamentals of building a cloud vulnerability management program, including traditional software and network vulnerabilities and cloud configuration risks. The guide covers how to build a successful and scalable vulnerability management program on AWS through preparation, enabling and configuring tools, triaging findings, and reporting.

Targeted outcomes

This guide can help you and your organization with the following:

  • Develop policies to streamline vulnerability management and maintain accountability.
  • Establish mechanisms to extend the responsibility of security to your application teams.
  • Configure relevant AWS services according to best practices for scalable vulnerability management.
  • Identify patterns for routing security findings to support a shared responsibility model.
  • Establish mechanisms to report on and iterate on your vulnerability management program.
  • Improve security finding visibility and help improve overall security posture.

Using the new guide

We encourage you to read the entire guide before taking action or building a list of changes to implement. After you read the guide, assess your current state compared to the action items and check off the items that you’ve already completed in the Next steps table. This will help you assess the current state of your AWS vulnerability management program. Then, plan short-term and long-term roadmaps based on your gaps, desired state, resources, and business needs. Building a cloud vulnerability management program often involves iteration, so you should prioritize key items and regularly revisit your backlog to keep up with technology changes and your business requirements.

Further information

For more information and to get started, see the Building a scalable vulnerability management program on AWS guide.

We greatly value feedback and contributions from our community. To share your thoughts and insights about the guide, your experience using it, and what you want to see in future versions, select Provide feedback at the bottom of any page in the guide and complete the form.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anna McAbee

Anna is a Security Specialist Solutions Architect focused on threat detection and incident response at AWS. Before AWS, she worked as an AWS customer in financial services on both the offensive and defensive sides of security. Outside of work, Anna enjoys cheering on the Florida Gators football team, wine tasting, and traveling the world.

Megan O’Neil

Megan is a Principal Security Specialist Solutions Architect focused on Threat Detection and Incident Response. Megan and her team enable AWS customers to implement sophisticated, scalable, and secure solutions that solve their business challenges. Outside of work, Megan loves to explore Colorado, including mountain biking, skiing, and hiking.

A Look at Our Development Process of the Cloud Resource Enrichment API

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/09/07/a-look-at-our-development-process-of-the-cloud-resource-enrichment-api/

In today’s ever-evolving cybersecurity landscape, detecting and responding to cyber threats is paramount for organizations in cloud environments. At the same time, investigating cyber threat alerts can be arduous due to the time-consuming and complex process of data collection. To tackle this pain point, Rapid7 developed a new Cloud Resource Enrichment API that streamlines data retrieval from various cloud resources. The API empowers security analysts to swiftly respond to cyber threats and improve incident response time.

Identifying the Need for a Unified API

Protecting cloud resources from cyber attacks is a growing challenge. Security analysts must grapple with gathering relevant data spread across multiple systems and APIs, leading to incident response inefficiencies. Presented with this challenge, we recognized a pressing need for a unified API that collects all relevant data types related to a cloud resource during a cyber threat action. This API streamlines data access, enabling analysts to piece together a comprehensive view of incidents rapidly, enhancing cybersecurity operations.

Defining the Vision and Scope

Our development team worked closely with security analysts to tailor the API’s functionalities to meet real-world needs. Defining the API’s scope involved meticulous prioritization of features, striking the right balance between usability and data abundance. By involving analysts from the outset, we laid a solid foundation for the API’s success.

The Development Journey

Adopting agile methodologies, our team iteratively developed the API, adapting and fine-tuning as we progressed. The iterative development process played a vital role in ensuring the API’s success. By breaking down the project into smaller, manageable tasks, we could focus on specific features, implement them efficiently, and gather feedback from early prototypes. With a comprehensive design phase, we defined the API’s architecture and capabilities based on insights from security analysts. Regular meetings and feedback gathering facilitated continuous improvements, streamlining the data retrieval process.

The API utilizes RESTful API design principles for data integration and communication between cloud systems. It collects the following types of data:

  • Harvested cloud resource properties (image, IP, network interfaces, region, cloud organization and account, security groups, and much, much more)
  • Permissions data (permissions on the resource, permissions of the resource)
  • Security insights (risks, misconfigurations, vulnerabilities)
  • Security alerts (“threat finding”)
  • First-level cloud-related resources
  • Application context (tagging made by the client in the cloud environment)

Each data type required collaboration with a different team responsible for collecting and processing that data, resulting in a feature that involved developers from six different teams! Regular meetings and continuous communication with the development teams and the product manager allowed us to incorporate suggestions and make iterative improvements to the API’s design and functionality.

Conclusion

The development journey of our Cloud Resource Enrichment API has been both challenging and rewarding. With a user-centric approach, we have crafted a powerful tool that empowers security teams to respond effectively to cyber threats. As we continue to enhance the API, we remain committed to fortifying organizations’ cyber defenses and elevating incident response capabilities. Together, we can better equip security analysts to face the ever-changing cyber war with confidence.

Why Your AWS Cloud Container Needs Client-Side Security

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/08/24/why-your-aws-cloud-container-needs-client-side-security/

With increasingly complicated network infrastructure and organizations needing to deploy applications across various environments, cloud containers are necessary for companies to stay agile and innovative. Containers are packages of software that hold all of the necessary components for an app to run in any environment. One of the biggest benefits of cloud containers? They virtualize an operating system, enabling users to access from private data centers, public clouds, and even laptops.

According to recent research by Faction, 92% of organizations have a multi-cloud strategy in place or are in the process of adopting one. In addition to the ubiquity of cloud computing, there are a variety of cloud container providers, including Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Nearly 80% of all containers on the cloud, however, run on AWS, which is known for its security, reliability, and scalability.

When it comes to cloud container security, AWS works on a shared responsibility model. This means that security and compliance is shared between AWS and the client. AWS protects the infrastructure running the services offered in the cloud — the hardware, software, networking, and facilities.

Unfortunately, many AWS users stop here. They believe that the security provided by AWS is sufficient to protect their cloud containers. While it is true that the level of customer responsibility for security differs depending on the AWS product, each product does require the customer to assume some level of security responsibility.

To avoid this mistake, let’s examine why your AWS cloud container needs additional client-side security and how Rapid7 can help.

Top reasons why your AWS container needs client-side security

Visibility and monitoring

Some of the same qualities that make containers ideal for agility and innovation also create difficulty in visibility and monitoring. Cloud containers are ephemeral, which means they’re easy to establish and destroy. This is convenient for quickly moving workloads and applications, but it also makes it difficult to track changes. Many AWS containers share memory and CPU resources with a variety of hosts (physical and cloud) in your ecosystem. Consequently, monitoring resource consumption and assessing container performance and application health can be difficult — after all, how can you know how much memory is being utilized by the container or the physical host?

Traditional monitoring tools and solutions also fail to collect the necessary metrics or provide the crucial insights needed for monitoring and troubleshooting container health and performance. While AWS offers protection for the cloud container structure, visualizing and monitoring what happens within the container is the responsibility of your organization.

Alert contextualization and remediation

As your company grows and you scale your cloud infrastructure, your DevOps teams will continue to create containers. For example, Google runs everything in containers and launches an epic number of containers (several billion per week!) to keep up with their developer and client needs. While you might not be launching quite as many containers, it’s still easy to lose track of them all. Organizations utilize alerts to keep track of container performance and health to resolve problems quickly. While alerting policies differ, most companies use metric- or log-based alerting.

It can be overwhelming to manage and remediate all of your organization’s container alerts. Not only do these alerts need to be routed to the proper developer or resource owner, but they also need to be remediated quickly to ensure the security and continued good performance of the container.

Cybersecurity standards

While AWS provides security for your foundational services in containerized applications — computing, storage, databases, and networking — it’s your responsibility to develop sufficient security protocols to protect your data, applications, operating system, and firewall. In the same way that your organization follows external cybersecurity standards for security and compliance across the rest of your digital ecosystem, it’s best to align your client-side AWS container security with a well-known industry framework.

Adopting a standardized cybersecurity framework works in concert with AWS's security measures by providing guidelines and best practices, keeping your organization from applying security controls haphazardly and leaving coverage gaps.

How Rapid7 can help with AWS container security

Now that you know why your organization needs client-side security, here’s how Rapid7 can help.

  • Visibility and monitoring: Rapid7’s InsightCloudSec continuously scans your cloud’s infrastructure, orchestration platforms, and workloads to provide a real-time assessment of health, performance, and risk. With the ability to scan containers in less than 60 seconds, your team will be able to quickly and accurately track changes in your containers and view the data in a single, convenient platform, perfect for collaborating across teams and quickly remediating issues.
  • Alert contextualization and remediation: Client-side security measures are key to processing and remediating system alerts in your AWS containers, but this can't be done manually at scale; automation is essential for alert contextualization and remediation. InsightCloudSec integrates with AWS services like Amazon GuardDuty to analyze logs for malicious activity (a sketch of pulling high-severity GuardDuty findings follows this list). The tool also integrates with your larger enterprise security systems to automate the remediation of critical risks in real time — often within 60 seconds.
  • Cybersecurity standards: While aligning your cloud containers with an industry-standard cybersecurity framework is a necessity, it’s often a struggle. Maintaining security and compliance requirements requires specialized knowledge and expertise. With record staff shortages, this often falls by the wayside. InsightCloudSec automates cloud compliance for well-known industry standards like the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework (CSF) with out-of-the-box policies that map back to specific NIST directives.
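For context on what the GuardDuty side of that integration surfaces, here's a minimal boto3 sketch that lists high-severity findings from the account's detector. It illustrates the underlying AWS API only, not InsightCloudSec's own integration, and it assumes GuardDuty is already enabled in the Region.

```python
import boto3

guardduty = boto3.client("guardduty")

# Assumes GuardDuty is enabled; an account normally has one detector per Region.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Fetch findings with severity >= 7 (GuardDuty's "high" range).
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
)["FindingIds"]

for finding in guardduty.get_findings(
    DetectorId=detector_id, FindingIds=finding_ids[:50]
)["Findings"]:
    print(finding["Severity"], finding["Type"], finding["Title"])
```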

Secure your container (and its contents)

AWS's shared responsibility model of security helps relieve operational burdens for organizations operating cloud containers. AWS clients don't have to worry about the infrastructure security of their cloud containers. The contents of those containers, however, are the owner's responsibility and require additional security considerations.

Client-side security is necessary for proper monitoring and visibility, reduction in alert fatigue and real-time troubleshooting, and the application of external cybersecurity frameworks. The right tools, like Rapid7’s InsightCloudSec, can provide crucial support in each of these areas and beyond, filling crucial expertise and staffing gaps on your team and empowering your organization to confidently (and securely) utilize cloud containers.

Want to learn more about AWS container security? Download Fortify Your Containerized Apps With Rapid7 on AWS.

New InsightCloudSec Compliance Pack for CIS AWS Benchmark 2.0.0

Post Syndicated from James Alaniz original https://blog.rapid7.com/2023/08/01/new-insightcloudsec-compliance-pack-for-cis-aws-benchmark-2-0-0/


The Center for Internet Security (CIS) recently released version two of their AWS Benchmark. CIS AWS Benchmark 2.0.0 brings two new recommendations and eliminates one from the previous version. The update also includes some minor formatting changes to certain recommendation descriptions.

In this post, we’ll talk a little bit about the “why” behind these changes. We’ll also look at using InsightCloudSec’s new, out-of-the-box compliance pack to implement and enforce the benchmark’s recommendations.

What’s new, what’s changed, and why

Version 2.0.0 of the CIS AWS Benchmark included two new recommendations:

  • Ensure access to AWSCloudShellFullAccess is restricted
    An important addition from CIS, this recommendation focuses on restricting access to the AWSCloudShellFullAccess policy, which presents a potential path for data exfiltration by malicious cloud admins who are granted full permissions to the service. AWS documentation describes how to create a more restrictive IAM policy that denies file transfer permissions (see the sketch after this list).
  • Ensure that EC2 Metadata Service only allows IMDSv2
    Use IMDSv2 so that your EC2 instances aren't left susceptible to Server-Side Request Forgery (SSRF) attacks, a well-known weakness of IMDSv1 (also shown in the sketch after this list).
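As a hedged sketch of remediating both new recommendations with boto3, the snippet below creates a customer managed policy that denies the CloudShell file-transfer actions and then enforces IMDSv2 on an existing instance. The policy name and instance ID are placeholders; adapt the deny statement, and how you attach it, to your own IAM strategy.

```python
import json

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Deny the CloudShell file-transfer actions that enable data exfiltration,
# per the AWS guidance referenced in the CIS recommendation.
deny_cloudshell_file_transfer = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudshell:GetFileDownloadUrls",
                "cloudshell:GetFileUploadUrls",
            ],
            "Resource": "*",
        }
    ],
}
iam.create_policy(
    PolicyName="DenyCloudShellFileTransfer",  # hypothetical policy name
    PolicyDocument=json.dumps(deny_cloudshell_file_transfer),
)

# Require IMDSv2 session tokens on an existing instance (placeholder ID).
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",
    HttpTokens="required",
    HttpEndpoint="enabled",
)
```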

The update also removed one recommendation from the previous version:

  • Ensure all S3 buckets employ encryption-at-rest
    This recommendation was removed because, as of January 2023, AWS encrypts all new S3 objects by default. It's important to note that the change applies to new object uploads; objects stored before it took effect aren't retroactively encrypted. So, if you've got buckets that have been kicking around for a while, make sure their contents are encrypted at rest and that default encryption can't be inadvertently weakened at some point down the line (a quick audit sketch follows this list).
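As a minimal audit sketch under those assumptions, the boto3 snippet below flags any bucket with no default encryption configuration at all. It checks bucket-level configuration only; it does not verify whether individual objects uploaded before the January 2023 change are encrypted.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Report each bucket's default encryption setting, if any.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_bucket_encryption(Bucket=name)
        rule = config["ServerSideEncryptionConfiguration"]["Rules"][0]
        algorithm = rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
        print(f"{name}: default encryption enabled ({algorithm})")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption configured")
        else:
            raise
```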

Along with these changes, CIS also made a few minor changes related to the wording in some of the benchmark titles and descriptions.

How does ICS help me implement this in my environment?

The CIS AWS Benchmark 2.0.0 pack is available within InsightCloudSec right out of the box, making it easy for teams to scan their AWS environments for compliance against the recommendations and controls outlined in the benchmark. If you're not yet using InsightCloudSec, be sure to check out the docs pages here, which will guide you through getting started with the platform.

Once you're up and running, scoping your compliance assessment to a specific pack is as easy as four clicks. First, from the Compliance Summary page, select your desired benchmark. In this case, of course, that's CIS AWS Benchmark 2.0.0.


From there, we can select the specific cloud or clouds we want to scan.


And because we've got our badging and tagging strategies in order (right…….RIGHT?!), we can home in even further. For this example, let's focus on the production environment.


You'll also get trending insights that show how your organization as a whole is doing, as well as how specific teams and accounts are faring, and whether you're seeing improvement over time.


Finally, if you’ve got a number of cloud accounts and/or clusters running across your environment, you can even scope down to that level. In this example, we’ll select all.


Once you've got your filters set, apply them to get real-time insight into how well your organization is adhering to the CIS AWS Benchmark. As with any pack, you can see your current overall compliance score along with a breakdown of the risk level associated with each instance of non-compliance.


So as you can see, it's fairly simple to assess your cloud environment for compliance with the CIS AWS Benchmark using a cloud security tool like InsightCloudSec. If you're just starting your cloud security journey or aren't sure where to begin, an out-of-the-box compliance pack is a great foundation to build on.

In fact, Rapid7 recently partnered with AWS to help organizations in that very situation. Using a combination of market-leading technology and hands-on expertise, our AWS Cloud Risk Assessment provides a point-in-time understanding of your entire AWS cloud footprint and its security posture.

During the assessment, our experts will inspect your cloud environment for more than 100 distinct risks and misconfigurations, including publicly exposed resources, lack of encryption, and user accounts not utilizing multi-factor authentication. At the end of this assessment, your team will receive an executive-level report aligned to the AWS Foundational Security Best Practices, participate in a read-out call, and discuss next steps for executing your cloud risk mitigation program alongside experts from Rapid7 and our services partners.
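One of those checks, user accounts without multi-factor authentication, is easy to illustrate with a short boto3 sketch like the one below. It's a simplified stand-in for the assessment's tooling, not what Rapid7 actually runs, and it only inspects IAM user MFA devices.

```python
import boto3

iam = boto3.client("iam")

# List IAM users that have no MFA device registered.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"{user['UserName']} has no MFA device registered")
```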

If you’re interested, be sure to reach out about the AWS Cloud Risk Assessment with this quick request form!

Managing Risk Across Hybrid Environments with Executive Risk View

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/07/18/managing-risk-across-hybrid-environments-with-executive-risk-view/


Over the last decade or so, organizations of all shapes and sizes across all industries have been going through a seismic shift in the way they engage with their customers and deliver their solutions to the market. These new delivery models are often underpinned by cloud services, which can change the composition of an organization’s IT environment drastically.

As part of this digital transformation, and in turn cloud adoption, many administrators have moved from maintaining a few hundred or so physical servers in their on-premises environment to running thousands and thousands of cloud instances spread across hundreds of cloud accounts—which are much more complex and ephemeral in nature.

The Modern Attack Surface is Expanding

Whether the impetus for this transformation is an attempt to maintain or gain a competitive advantage, or a result of mergers and acquisitions, security teams are forced to play catch-up to harden a rapidly expanding attack surface. This expanding attack surface means that security teams need to evolve the scope and approach of their vulnerability management programs, and because they're already playing catch-up, these teams are often asked to adapt their programs on the fly.

Making matters worse, many of the tools and processes used by teams to manage and secure those workloads aren’t able to keep up with the pace of innovation. Plus, many organizations have given DevOps teams self-service access to the underlying infrastructure that their teams need to innovate quickly, making it even more difficult for the security team to keep up with the ever-changing environment.

Adapting Your Vulnerability Management Program to the Cloud Requires a Different Approach

Assessing and reducing risk across on-premises and cloud environments can be complex and cumbersome, often requiring significant time and manual effort to aggregate, analyze and prioritize a plethora of risk signals. Practitioners are often forced to context switch between multiple tools and exert manual effort to normalize data and translate security findings into meaningful risk metrics that the business can understand. As a result, many teams struggle with blind spots resulting from gaps in data, or too much noise being surfaced without the ability to effectively prioritize remediation efforts and drive accountability across the organization. To effectively manage risk across complex and dynamic hybrid environments, security teams must adapt their programs and take a fundamentally different approach.


As is the case with traditional on-premises environments, you need to first achieve and maintain full visibility of your environment. You also need to keep track of how the environment changes over time, and how that change impacts your risk posture. Doing this in an ephemeral environment can be tricky, because in the cloud things can (and will) change on a minute-to-minute basis. Traditional agent-based vulnerability management tools are too cumbersome to manage and simply won't scale in the way modern environments require. Agentless solutions deliver the real-time visibility and change management capabilities that today's cloud and hybrid environments demand.

Once you establish real-time and continuous visibility, you need to assess your environment to understand your organization's current risk posture. You're also going to need a way to prioritize risk effectively, making sure your teams focus on the most pressing and impactful issues based on exploitability as well as potential impact to your business and customers (a toy prioritization example follows).
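To make that prioritization idea concrete, here's a deliberately naive Python sketch that weights a CVSS base score by exploit availability and asset criticality. The finding IDs and numbers are made up, and this is not Rapid7's scoring model; real solutions layer in far more context, such as asset exposure and threat intelligence.

```python
def naive_risk_score(cvss_base, exploit_available, asset_criticality):
    """Toy prioritization: scale a CVSS base score (0-10) by whether an
    exploit is available and how critical the asset is (0.0-1.0)."""
    exploit_weight = 1.5 if exploit_available else 1.0
    return round(cvss_base * exploit_weight * asset_criticality, 1)


# Hypothetical findings, sorted so the most pressing issue comes first.
findings = [
    {"id": "FINDING-001", "cvss": 9.8, "exploit": True, "criticality": 1.0},
    {"id": "FINDING-002", "cvss": 7.5, "exploit": False, "criticality": 0.4},
]
for f in sorted(
    findings,
    key=lambda f: naive_risk_score(f["cvss"], f["exploit"], f["criticality"]),
    reverse=True,
):
    print(f["id"], naive_risk_score(f["cvss"], f["exploit"], f["criticality"]))
```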

Finally, once you've gotten to a point where you can identify which risk signals need your attention first, you'll want to remediate them as quickly and comprehensively as possible. When you're operating at the speed of the cloud, this means you're likely going to be relying on some form of automation, whether that's automating repetitive processes or having a security solution take action to remediate vulnerabilities on your behalf. Of course, you'll need to measure and track progress throughout this process, and you'll need a way to communicate the progress you and your team are making to improve your risk posture, with trending analysis over time.

So, as you can see, it's not that the "what" security teams need to do is significantly different, but the "how" they go about it has to change, because traditional approaches just won't work. The challenge is that this isn't an either/or scenario. Organizations operating in a hybrid environment need to adapt their programs to manage and report on risk in on-premises and cloud environments simultaneously and holistically. If not, security leaders will struggle to make informed decisions on how to effectively plan their budgets and allocate resources to ensure that cloud migration doesn't have a negative impact on the organization's risk posture.

Manage Risk in Hybrid Environments with Executive Risk View

Executive Risk View, now generally available in Rapid7’s Cloud Risk Complete offering, provides security leaders with the comprehensive visibility and context needed to track total risk across both cloud and on-premises assets to better understand organizational risk posture and trends.


With Executive Risk View, customers can:

  • Achieve a complete view of risk across their hybrid environments to effectively communicate risk across the organization and track progress.
  • Establish a consistent definition of risk across their organization, aggregating insights and normalizing scores from on-premises and cloud assessments.
  • Take a data-driven approach to decision-making and capacity planning, and drive accountability for risk reduction across the entire business.

Sounds too good to be true? You can see it in action for yourself in a new guided product tour we recently posted on our website! In addition to taking the tour, you can find more information on Executive Risk View in the docs page.