Tag Archives: siem

How to generate security findings to help your security team with incident response simulations

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/how-to-generate-security-findings-to-help-your-security-team-with-incident-response-simulations/

Continually reviewing your organization’s incident response capabilities can be challenging without a mechanism to create security findings with actual Amazon Web Services (AWS) resources within your AWS estate. As prescribed within the AWS Security Incident Response whitepaper, it’s important to periodically review your incident response capabilities to make sure your security team is continually maturing internal processes and assessing capabilities within AWS. Generating sample security findings is useful for understanding the finding format so that you can enrich a finding with additional metadata or create and prioritize detections within your security information and event management (SIEM) solution. However, if you want to conduct an end-to-end incident response simulation that includes the creation of real detections, sample findings might not create actionable detections that start your incident response process, because of alerting suppressions you might have configured or imaginary metadata (such as synthetic Amazon Elastic Compute Cloud (Amazon EC2) instance IDs) that might confuse your remediation tooling.

In this post, we walk through how to deploy a solution that provisions resources to generate simulated security findings for actual provisioned resources within your AWS account. Generating simulated security findings in your AWS account gives your security team an opportunity to validate their cyber capabilities, investigation workflows and playbooks, and escalation paths across teams, and to exercise any response automation currently in place.

Important: It’s strongly recommended that the solution be deployed in an isolated AWS account with no additional workloads or sensitive data. No resources deployed within the solution should be used for any purpose outside of generating security findings for incident response simulations. Although the security findings are non-destructive to existing resources, generating them should still be done in isolation. For any AWS solution deployed within your AWS environment, your security team should review the resources and configurations within the code.

Conducting incident response simulations

Before deploying the solution, it’s important to know what your goal is and what type of simulation to conduct. If you’re primarily curious about the format of the findings that Amazon GuardDuty creates, you should generate sample findings with GuardDuty. At the time of this writing, Amazon Inspector doesn’t generate sample findings.

If you want to validate your incident response playbooks, make sure you have playbooks for the security findings the solution generates. If those playbooks don’t exist, it might be a good idea to start with a high-level tabletop exercise to identify which playbooks you need to create.

Because you’re running this sample in an AWS account with no workloads, it’s recommended to run the sample solution as a purple team exercise. Purple team exercises should be periodically run to support training for new analysts, validate existing playbooks, and identify areas of improvement to reduce the mean time to respond or identify areas where processes can be optimized with automation.

Now that you have a good understanding of the different simulation types, you can create security findings in an isolated AWS account.

Prerequisites

  1. [Recommended] A separate AWS account containing no customer data or running workloads
  2. GuardDuty must be enabled, along with GuardDuty Kubernetes Protection
  3. Amazon Inspector must be enabled
  4. [Optional] AWS Security Hub can be enabled to show a consolidated view of security findings generated by GuardDuty and Inspector

Solution architecture

The architecture of the solution can be found in Figure 1.

Figure 1: Sample solution architecture diagram


  1. A user specifies the type of security findings to generate by passing an AWS CloudFormation parameter.
  2. An Amazon Simple Notification Service (Amazon SNS) topic is created to subscribe to findings for notifications. Subscribed users are notified of the finding through the deployed SNS topic.
  3. Upon user selection of the CloudFormation parameter, EC2 instances are provisioned to run commands to generate security findings.

    Note: If the parameter inspector is provided during deployment, then only one EC2 instance is deployed. If the parameter guardduty is provided during deployment, then two EC2 instances are deployed.

  4. For Amazon Inspector findings:
    1. The Amazon EC2 user data creates a .txt file listing vulnerable images, pulls the Docker images down from the open source Vulhub collection, and creates an Amazon Elastic Container Registry (Amazon ECR) repository to hold the vulnerable images.
    2. The EC2 user data tags the images and pushes them to the ECR repository, which results in Amazon Inspector findings being generated.
    3. An Amazon EventBridge cron-style trigger rule, inspector_remediation_ecr, invokes an AWS Lambda function.
    4. The Lambda function, ecr_cleanup_function, cleans up the vulnerable images in the deployed Amazon ECR repository based on applied tags and sends a notification to the Amazon SNS topic.

      Note: The ecr_cleanup_function Lambda function is also invoked as a custom resource to clean up vulnerable images during deployment. If there are issues with cleanup, the EventBridge rule continually attempts to clean up vulnerable images.

  5. For GuardDuty, the following actions are taken and resources are deployed:
    1. An AWS Identity and Access Management (IAM) user named guardduty-demo-user is created with an IAM access key that is INACTIVE.
    2. An AWS Systems Manager parameter stores the IAM access key for guardduty-demo-user.
    3. An AWS Secrets Manager secret stores the inactive IAM secret access key for guardduty-demo-user.
    4. An Amazon DynamoDB table is created, and the table name is stored in a Systems Manager parameter to be referenced within the EC2 user data.
    5. An Amazon Simple Storage Service (Amazon S3) bucket is created, and the bucket name is stored in a Systems Manager parameter to be referenced within the EC2 user data.
    6. A Lambda function adds a threat list to GuardDuty that includes the IP addresses of the EC2 instances deployed as part of the sample.
    7. EC2 user data generates GuardDuty findings for the following:
      1. Amazon Elastic Kubernetes Service (Amazon EKS)
        1. Installs eksctl from GitHub.
        2. Creates an EC2 key pair.
        3. Creates an EKS cluster (dependent on availability zone capacity).
        4. Updates EKS cluster configuration to make a dashboard public.
      2. DynamoDB
        1. Adds an item to the DynamoDB table for Joshua Tree.
      3. EC2
        1. Creates an AWS CloudTrail trail named guardduty-demo-trail-<GUID> and subsequently deletes the same trail. The <GUID> is randomly generated by using the shell’s $RANDOM variable.
        2. Runs a port scan against 172.31.37.171 (an RFC 1918 private IP address) and the private IP address of the EKS Deployment EC2 instance provisioned as part of the sample. Port scans are primarily used by bad actors to search for potential vulnerabilities. The targets of the port scans are internal IP addresses, and the scan traffic doesn’t leave the sample VPC.
        3. Curls domains that are associated with bitcoin activity, command and control, and other known threats.
      4. Amazon S3
        1. Disables Block Public Access and server access logging for the S3 bucket provisioned as part of the solution.
      5. IAM
        1. Deletes the existing account password policy and creates a new password policy with a minimum length of six characters.
  6. The following Amazon EventBridge rules are created:
    1. guardduty_remediation_eks_rule – When a GuardDuty finding for EKS is created, a Lambda function attempts to delete the EKS resources. Subscribed users are notified of the finding through the deployed SNS topic.
    2. guardduty_remediation_credexfil_rule – When a GuardDuty finding for InstanceCredentialExfiltration is created, a Lambda function is used to revoke the IAM role’s temporary security credentials and AWS permissions. Subscribed users are notified of the finding through the deployed SNS topic.
    3. guardduty_respond_IAMUser_rule – When a GuardDuty finding for IAM is created, subscribed users are notified through the deployed SNS topic. There is no remediation activity triggered by this rule.
    4. Guardduty_notify_S3_rule – When a GuardDuty finding for Amazon S3 is created, subscribed users are notified through the deployed Amazon SNS topic. This rule doesn’t invoke any remediation activity.
  7. The following Lambda functions are created:
    1. guardduty_iam_remediation_function – This function revokes active sessions and sends a notification to the SNS topic.
    2. eks_cleanup_function – This function deletes the EKS resources in the EKS CloudFormation template.

      Note: When you attempt to delete the overall sample CloudFormation stack, this function runs to delete the EKS CloudFormation template.

  8. An S3 bucket stores the EC2 user data scripts that are run from the EC2 instances.
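The four EventBridge rules above implement a simple routing policy: EKS and credential-exfiltration findings trigger remediation Lambda functions, while IAM and S3 findings only notify subscribers. The routing decision can be sketched as follows (this is an illustration of the policy, not the solution's actual code; the finding-type strings are real GuardDuty type names, but the action labels are placeholders):

```python
# Route a GuardDuty finding type to an action, mirroring the four
# EventBridge rules described above (illustrative sketch only).
def route_finding(finding_type: str) -> str:
    # Check credential exfiltration first: its type string also contains
    # "IAMUser", so order matters here.
    if "InstanceCredentialExfiltration" in finding_type:
        return "revoke-credentials"   # guardduty_remediation_credexfil_rule
    if "Kubernetes" in finding_type:
        return "remediate-eks"        # guardduty_remediation_eks_rule
    if "IAMUser" in finding_type:
        return "notify-only"          # guardduty_respond_IAMUser_rule
    if ":S3/" in finding_type:
        return "notify-only"          # guardduty_notify_S3_rule
    return "unhandled"
```

All four paths also publish to the deployed SNS topic; the difference is whether a remediation Lambda function runs first.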

Solution deployment

You can deploy the SecurityFindingGeneratorStack solution by using either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

Option 1: Deploy the solution with AWS CloudFormation using the console

Use the console to sign in to your chosen AWS account and then choose the Launch Stack button to open the AWS CloudFormation console pre-loaded with the template for this solution. It takes approximately 10 minutes for the CloudFormation stack to complete.

Launch Stack

Option 2: Deploy the solution by using the AWS CDK

You can find the latest code for the SecurityFindingGeneratorStack solution in the SecurityFindingGeneratorStack GitHub repository, where you can also contribute to the sample code. For instructions and more information on using the AWS Cloud Development Kit (AWS CDK), see Get Started with AWS CDK.

To deploy the solution by using the AWS CDK

  1. Navigate to the project’s root folder and build the app by using the following commands:
    npm install -g aws-cdk
    npm install

  2. Run the following command in your terminal while authenticated in your separate deployment AWS account to bootstrap your environment. Be sure to replace <INSERT_AWS_ACCOUNT> with your account number and replace <INSERT_REGION> with the AWS Region that you want the solution deployed to.
    cdk bootstrap aws://<INSERT_AWS_ACCOUNT>/<INSERT_REGION>

  3. Deploy the stack to generate findings based on a specific parameter that is passed. The following parameters are available:
    1. inspector
    2. guardduty
    cdk deploy SecurityFindingGeneratorStack --parameters securityserviceuserdata=inspector

Reviewing security findings

After the solution successfully deploys, security findings should start appearing in your AWS account’s GuardDuty console within a couple of minutes.

Amazon GuardDuty findings

In order to create a diverse set of GuardDuty findings, the solution uses Amazon EC2 user data to run scripts. Those scripts can be found in the sample repository. You can also review and change scripts as needed to fit your use case or to remove specific actions if you don’t want specific resources to be altered or security findings to be generated.
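As one example, the CloudTrail activity described in the architecture (create a trail named guardduty-demo-trail-<GUID>, then delete it) can be reproduced with two AWS CLI calls. The sketch below only builds the command strings so that the naming convention is visible; it is an assumption of how the repository's script behaves, and the bucket name is a placeholder:

```python
import random


def demo_trail_commands(bucket_name: str) -> list[str]:
    """Return the AWS CLI calls that create and then delete a demo trail.

    Mirrors the user-data behavior described above; the actual script in
    the repository may differ.
    """
    # bash's $RANDOM yields 0..32767, hence the range used here
    trail = f"guardduty-demo-trail-{random.randint(0, 32767)}"
    return [
        f"aws cloudtrail create-trail --name {trail} --s3-bucket-name {bucket_name}",
        f"aws cloudtrail delete-trail --name {trail}",
    ]
```

Creating and immediately deleting a trail is the kind of logging-tampering behavior that GuardDuty flags, which is why the script performs it.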

A comprehensive list of active GuardDuty finding types, with details for each finding, can be found in the Amazon GuardDuty user guide. This solution performs activities that cause a range of these GuardDuty finding types to be generated.

To generate the EKS security findings, the EKS Deployment EC2 instance runs eksctl commands that deploy CloudFormation templates. If the EKS cluster doesn’t deploy, it might be because of capacity constraints in a specific Availability Zone. If this occurs, manually delete the failed EKS CloudFormation templates.

If you want to create the EKS cluster and security findings manually, you can do the following:

  1. Sign in to the Amazon EC2 console.
  2. Connect to the EKS Deployment EC2 instance by using an IAM role that has access to start a session through Systems Manager. After connecting as the ssm-user, run the following commands in the Session Manager session:
    1. sudo chmod 744 /home/ec2-user/guardduty-script.sh
    2. sudo chown ec2-user /home/ec2-user/guardduty-script.sh
    3. sudo /home/ec2-user/guardduty-script.sh

It’s important that your security analysts have an incident response playbook. If playbooks don’t exist, you can refer to the GuardDuty remediation recommendations or AWS sample incident response playbooks to get started building playbooks.

Amazon Inspector findings

The findings for Amazon Inspector are generated by using the open source Vulhub collection, which provides pre-built vulnerable Docker environments. The solution pulls these images and pushes them into Amazon ECR.

The Amazon Inspector findings that are created vary depending on what exists within the open source library at deployment time; you can review the generated findings in the Amazon Inspector console.

For Amazon Inspector findings, you can refer to parts 1 and 2 of Automate vulnerability management and remediation in AWS using Amazon Inspector and AWS Systems Manager.

Clean up

If you deployed the security finding generator solution by using the Launch Stack button in the console or the CloudFormation template security_finding_generator_cfn, do the following to clean up:

  1. In the CloudFormation console for the account and Region where you deployed the solution, choose the SecurityFindingGeneratorStack stack.
  2. Choose the option to Delete the stack.

If you deployed the solution by using the AWS CDK, run the command cdk destroy.

Important: The solution uses eksctl to provision EKS resources, which deploys additional CloudFormation templates. There are custom resources within the solution that will attempt to delete the provisioned CloudFormation templates for EKS. If there are any issues, you should verify and manually delete the following CloudFormation templates:

  • eksctl-GuardDuty-Finding-Demo-cluster
  • eksctl-GuardDuty-Finding-Demo-addon-iamserviceaccount-kube-system-aws-node
  • eksctl-GuardDuty-Finding-Demo-nodegroup-ng-<GUID>

Conclusion

In this blog post, I showed you how to deploy a solution that provisions resources in an AWS account to generate security findings. This solution provides a technical framework for conducting periodic simulations within your AWS environment. By working with real, rather than sample, security findings, your security teams can interact with actual resources and validate existing incident response processes. Having a repeatable mechanism to create security findings also gives your security team the opportunity to develop and test automated incident response capabilities in your AWS environment.

AWS has multiple services to assist with increasing your organization’s security posture. Security Hub provides native integration with AWS security services as well as partner services. From Security Hub, you can also implement automation to respond to findings using custom actions as seen in Use Security Hub custom actions to remediate S3 resources based on Amazon Macie discovery results. In part two of a two-part series, you can learn how to use Amazon Detective to investigate security findings in EKS clusters. Amazon Security Lake automatically normalizes and centralizes your data from AWS services such as Security Hub, AWS CloudTrail, VPC Flow Logs, and Amazon Route 53, as well as custom sources to provide a mechanism for comprehensive analysis and visualizations.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Incident Response re:Post or contact AWS Support.

Author

Jonathan Nguyen

Jonathan is a Principal Security Architect at AWS. His background is in AWS security with a focus on threat detection and incident response. He helps enterprise customers develop a comprehensive AWS security strategy and deploy security solutions at scale, and trains customers on AWS security best practices.

Log Explorer: monitor security events without third-party storage

Post Syndicated from Jen Sells original https://blog.cloudflare.com/log-explorer


Today, we are excited to announce beta availability of Log Explorer, which allows you to investigate your HTTP and Security Event logs directly from the Cloudflare Dashboard. Log Explorer is an extension of Security Analytics, giving you the ability to review related raw logs. You can analyze, investigate, and monitor for security attacks natively within the Cloudflare Dashboard, reducing time to resolution and overall cost of ownership by eliminating the need to forward logs to third party security analysis tools.

Background

Security Analytics enables you to analyze all of your HTTP traffic in one place, giving you the security lens you need to identify and act upon what matters most: potentially malicious traffic that has not been mitigated. Security Analytics includes built-in views such as top statistics and in-context quick filters on an intuitive page layout that enables rapid exploration and validation.

In order to power our rich analytics dashboards with fast query performance, we implemented data sampling using Adaptive Bit Rate (ABR) analytics. This is a great fit for providing high level aggregate views of the data. However, we received feedback from many Security Analytics power users that sometimes they need access to a more granular view of the data — they need logs.

Logs provide critical visibility into the operations of today’s computer systems. Engineers and SOC analysts rely on logs every day to troubleshoot issues, identify and investigate security incidents, and tune the performance, reliability, and security of their applications and infrastructure. Traditional metrics or monitoring solutions provide aggregated or statistical data that can be used to identify trends. Metrics are wonderful at identifying THAT an issue happened, but lack the detailed events to help engineers uncover WHY it happened. Engineers and SOC Analysts rely on raw log data to answer questions such as:

  • What is causing this increase in 403 errors?
  • What data was accessed by this IP address?
  • What was the user experience of this particular user’s session?

Traditionally, these engineers and analysts would stand up a collection of monitoring tools to capture logs and get this visibility. With more organizations using multiple clouds, or a hybrid environment with both cloud and on-premises tools and architecture, it is crucial to have a unified platform to regain visibility into this increasingly complex environment. As more and more companies move toward a cloud native architecture, we see Cloudflare’s connectivity cloud as an integral part of their performance and security strategy.

Log Explorer provides a lower cost option for storing and exploring log data within Cloudflare. Until today, we have offered the ability to export logs to expensive third party tools, and now with Log Explorer, you can quickly and easily explore your log data without leaving the Cloudflare Dashboard.

Log Explorer Features

Whether you’re a SOC Engineer investigating potential incidents, or a Compliance Officer with specific log retention requirements, Log Explorer has you covered. It stores your Cloudflare logs for an uncapped and customizable period of time, making them accessible natively within the Cloudflare Dashboard. The supported features include:

  • Searching through your HTTP Request or Security Event logs
  • Filtering based on any field and a number of standard operators
  • Switching between basic filter mode or SQL query interface
  • Selecting fields to display
  • Viewing log events in tabular format
  • Finding the HTTP request records associated with a Ray ID

Narrow in on unmitigated traffic

As a SOC analyst, your job is to monitor and respond to threats and incidents within your organization’s network. Using Security Analytics, and now with Log Explorer, you can identify anomalies and conduct a forensic investigation all in one place.

Let’s walk through an example to see this in action:

On the Security Analytics dashboard, you can see in the Insights panel that there is some traffic that has been tagged as a likely attack, but not mitigated.

Clicking the filter button narrows in on these requests for further investigation.

In the sampled logs view, you can see that most of these requests are coming from a common client IP address.

You can also see that Cloudflare has flagged all of these requests as bot traffic. With this information, you can craft a WAF rule to either block all traffic from this IP address, or block all traffic with a bot score lower than 10.

Let’s say that the Compliance Team would like to gather documentation on the scope and impact of this attack. We can dig further into the logs during this time period to see everything that this attacker attempted to access.

First, we can use Log Explorer to query HTTP requests from the suspect IP address during the time range of the spike seen in Security Analytics.

We can also review whether the attacker was able to exfiltrate data by adding the OriginResponseBytes field and updating the query to show requests with OriginResponseBytes > 0. The results show that no data was exfiltrated.
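The exfiltration check above can be expressed through Log Explorer's SQL interface. A sketch of the kind of query involved, assuming the HTTP requests dataset's standard Logpush field names (ClientIP, EdgeStartTimestamp, OriginResponseBytes); the table name http_requests is illustrative:

```python
def exfil_query(client_ip: str, start: str, end: str, limit: int = 100) -> str:
    """Build a Log Explorer-style SQL query surfacing responses that
    returned origin bytes to a suspect IP (field names are assumptions)."""
    return (
        "SELECT EdgeStartTimestamp, ClientRequestPath, OriginResponseBytes "
        "FROM http_requests "
        f"WHERE ClientIP = '{client_ip}' "
        f"AND EdgeStartTimestamp BETWEEN '{start}' AND '{end}' "
        "AND OriginResponseBytes > 0 "
        f"LIMIT {limit}"
    )
```

An empty result set for this query is what supports the conclusion that no data was exfiltrated.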

Find and investigate false positives

With access to the full logs via Log Explorer, you can now perform a search to find specific requests.

A 403 error occurs when a user’s request to a particular site is blocked. Cloudflare’s security products use things like IP reputation and WAF attack scores based on ML technologies in order to assess whether a given HTTP request is malicious. This is extremely effective, but sometimes requests are mistakenly flagged as malicious and blocked.

In these situations, we can now use Log Explorer to identify these requests and why they were blocked, and then adjust the relevant WAF rules accordingly.

Or, if you are interested in tracking down a specific request by Ray ID, an identifier given to every request that goes through Cloudflare, you can do that via Log Explorer with one query.

Note that the LIMIT clause is included in the query by default, but it has no impact on Ray ID queries: a Ray ID is unique, so only one record is returned when you use the RayID filter field.
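A Ray ID lookup reduces to a single equality filter in the SQL interface. A sketch, where the table name http_requests is an illustrative assumption and RayID is the dataset's field name:

```python
def rayid_query(ray_id: str) -> str:
    """Build a Log Explorer-style SQL lookup for one request by Ray ID.

    The default LIMIT is harmless here: Ray IDs are unique, so at most
    one record matches.
    """
    return f"SELECT * FROM http_requests WHERE RayID = '{ray_id}' LIMIT 100"
```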

How we built Log Explorer

With Log Explorer, we have built a long-term, append-only log storage platform on top of Cloudflare R2. Log Explorer leverages the Delta Lake protocol, an open-source storage framework for building highly performant, ACID-compliant databases atop a cloud object store. In other words, Log Explorer combines a large and cost-effective storage system – Cloudflare R2 – with the benefits of strong consistency and high performance. Additionally, Log Explorer gives you a SQL interface to your Cloudflare logs.

Each Log Explorer dataset is stored on a per-customer level, just like Cloudflare D1, so that your data isn’t placed with that of other customers. In the future, this single-tenant storage model will give you the flexibility to create your own retention policies and decide in which regions you want to store your data.

Under the hood, the datasets for each customer are stored as Delta tables in R2 buckets. A Delta table is a storage format that organizes Apache Parquet objects into directories using Hive’s partitioning naming convention. Crucially, Delta tables pair these storage objects with an append-only, checkpointed transaction log. This design allows Log Explorer to support multiple writers with optimistic concurrency.

Many of the products Cloudflare builds are a direct result of the challenges our own team is looking to address. Log Explorer is a perfect example of this culture of dogfooding. Optimistic concurrent writes require atomic updates in the underlying object store, and as a result of our needs, R2 added a PutIfAbsent operation with strong consistency. Thanks, R2! The atomic operation sets Log Explorer apart from Delta Lake solutions based on Amazon Web Services’ S3, which incur the operational burden of using an external store for synchronizing writes.
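The optimistic-concurrency scheme described above can be illustrated with a toy object store: each writer tries to claim the next transaction-log entry with an atomic put-if-absent, and a loser simply retries at the next version. A minimal sketch (not Cloudflare's implementation; the `_delta_log` path follows the Delta Lake naming convention):

```python
class ToyObjectStore:
    """In-memory stand-in for an object store with atomic put-if-absent."""
    def __init__(self):
        self._objects = {}

    def put_if_absent(self, key: str, value: bytes) -> bool:
        if key in self._objects:
            return False  # another writer already committed this version
        self._objects[key] = value
        return True


def commit(store: ToyObjectStore, version: int, entry: bytes) -> int:
    """Append a transaction-log entry, retrying at higher versions on conflict."""
    while not store.put_if_absent(f"_delta_log/{version:020d}.json", entry):
        version += 1  # optimistic retry: someone else won this version slot
    return version
```

Two writers racing for version 0 both succeed, but at different versions; neither overwrites the other, which is exactly the property R2's PutIfAbsent provides.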

Log Explorer is written in the Rust programming language using open-source libraries, such as delta-rs, a native Rust implementation of the Delta Lake protocol, and Apache Arrow DataFusion, a very fast, extensible query engine. At Cloudflare, Rust has emerged as a popular choice for new product development due to its safety and performance benefits.

What’s next

We know that application security logs are only part of the puzzle in understanding what’s going on in your environment. Stay tuned for future developments including tighter, more seamless integration between Analytics and Log Explorer, the addition of more datasets including Zero Trust logs, the ability to define custom retention periods, and integrated custom alerting.

Please use the feedback link to let us know how Log Explorer is working for you and what else would help make your job easier.

How to get it

We’d love to hear from you! Let us know if you are interested in joining our Beta program by completing this form and a member of our team will contact you.

Pricing will be finalized prior to a General Availability (GA) launch.

Tune in for more news, announcements and thought-provoking discussions! Don’t miss the full Security Week hub page.

Enhancing security analysis with Cloudflare Zero Trust logs and Elastic SIEM

Post Syndicated from Corey Mahan original https://blog.cloudflare.com/enhancing-security-analysis-with-cloudflare-zero-trust-logs-and-elastic-siem


Today, we are thrilled to announce new Cloudflare Zero Trust dashboards on Elastic. Shared customers using Elastic can now use these pre-built dashboards to store, search, and analyze their Zero Trust logs.

When organizations look to adopt a Zero Trust architecture, there are many components to get right. If products are configured incorrectly, used maliciously, or security is somehow breached during the process, it can open your organization to underlying security risks without the ability to get insight from your data quickly and efficiently.

As a Cloudflare technology partner, Elastic helps Cloudflare customers find what they need faster, while keeping applications running smoothly and protecting against cyber threats. “I’m pleased to share our collaboration with Cloudflare, making it even easier to deploy log and analytics dashboards. This partnership combines Elastic’s open approach with Cloudflare’s practical solutions, offering straightforward tools for enterprise search, observability, and security deployment,” explained Mark Dodds, Chief Revenue Officer at Elastic.

Value of Zero Trust logs in Elastic

With this joint solution, we’ve made it easy for customers to seamlessly forward their Zero Trust logs to Elastic via Logpush jobs. This can be achieved directly via a RESTful API or through an intermediary storage solution such as AWS S3 or Google Cloud Storage. Additionally, Cloudflare’s integration with Elastic has been improved to encompass all categories of Zero Trust logs generated by Cloudflare.

Here are some highlights of what the integration offers:

  • Comprehensive Visibility: Integrating Cloudflare Logpush into Elastic provides organizations with a real-time, comprehensive view of events related to Zero Trust. This enables a detailed understanding of who is accessing resources and applications, from where, and at what times. Enhanced visibility helps detect anomalous behavior and potential security threats more effectively, allowing for early response and mitigation.
  • Field Normalization: By unifying data from Zero Trust logs in Elastic, it’s possible to apply consistent field normalization not only for Zero Trust logs but also for other sources. This simplifies the process of search and analysis, as data is presented in a uniform format. Normalization also facilitates the creation of alerts and the identification of patterns of malicious or unusual activity.
  • Efficient Search and Analysis: Elastic provides powerful data search and analysis capabilities. Having Zero Trust logs in Elastic enables quick and precise searching for specific information. This is crucial for investigating security incidents, understanding workflows, and making informed decisions.
  • Correlation and Threat Detection: By combining Zero Trust data with other security events and data, Elastic enables deeper and more effective correlation. This is essential for detecting threats that might go unnoticed when analyzing each data source separately. Correlation aids in pattern identification and the detection of sophisticated attacks.
  • Prebuilt Dashboards: The integration provides out-of-the-box dashboards offering a quick start to visualizing key metrics and patterns. These dashboards help security teams visualize the security landscape in a clear and concise manner. The integration not only provides the advantage of prebuilt dashboards designed for Zero Trust datasets but also empowers users to curate their own visualizations.
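To make the field-normalization point concrete, here is a sketch of mapping a Cloudflare Zero Trust log record onto Elastic Common Schema (ECS) field names, which is roughly what the integration's ingest pipelines do. The source field names (Datetime, SourceIP, and so on) are assumptions for illustration, not a definitive schema:

```python
def to_ecs(record: dict) -> dict:
    """Map a (hypothetical) Cloudflare Zero Trust log record to ECS-style fields.

    Missing source fields map to None rather than raising, so partial
    records still normalize.
    """
    return {
        "@timestamp": record.get("Datetime"),
        "source.ip": record.get("SourceIP"),
        "destination.ip": record.get("DestinationIP"),
        "event.action": record.get("Action"),
        "user.email": record.get("Email"),
    }
```

Once every dataset lands in the same shape, a single query or alert rule can span Gateway, Access, and CASB events alike.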

What’s new on the dashboards

One of the main assets of the integration is the out-of-the-box dashboards tailored specifically for each type of Zero Trust log. Let’s explore some of these dashboards in more detail to find out how they can help us in terms of visibility.

Gateway HTTP

This dashboard focuses on HTTP traffic and allows for monitoring and analyzing HTTP requests passing through Cloudflare’s Secure Web Gateway.

Here, patterns of traffic can be identified, potential threats detected, and a better understanding gained of how resources are being used within the network.

Every visualization on the dashboard is interactive: the whole dashboard adapts to the filters you enable, and filters can be pinned across dashboards for pivoting. For instance, clicking one of the sections of the donut chart showing the different actions automatically applies a filter on that value and orients the whole dashboard around it.

CASB

Offering a different perspective, the CASB (Cloud Access Security Broker) dashboard provides visibility into the cloud applications your users access. Its visualizations are designed to detect threats effectively, helping with risk management and regulatory compliance.

These examples illustrate how dashboards in the integration between Cloudflare and Elastic offer practical and effective data visualization for Zero Trust. They enable us to make data-driven decisions, identify behavioral patterns, and proactively respond to threats. By providing relevant information in a visual and accessible manner, these dashboards strengthen security posture and allow for more efficient risk management in the Zero Trust environment.

How to get started

Setup and deployment is simple. Use the Cloudflare dashboard or API to create Logpush jobs with all fields enabled for each dataset you’d like to ingest on Elastic. There are eight account-scoped datasets available to use today (Access Requests, Audit logs, CASB findings, Gateway logs including DNS, Network, HTTP; Zero Trust Session Logs) that can be ingested into Elastic.

Set up Logpush jobs to your Elastic destination via one of the following methods:

  • HTTP Endpoint mode – Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent.
  • AWS S3 polling mode – Cloudflare writes data to S3 and Elastic Agent polls the S3 bucket by listing its contents and reading new files.
  • AWS S3 SQS mode – Cloudflare writes data to S3, S3 pushes a new object notification to SQS, Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple Agents can be used in this mode.
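As a rough sketch of the first method, the job-creation call can be scripted against Cloudflare's account-scoped Logpush API. The account ID, API token, and destination URL below are placeholders, and the exact dataset names should be confirmed against Cloudflare's API documentation before use:

```python
# Sketch: create an account-scoped Logpush job that pushes Gateway HTTP logs
# to an Elastic Agent HTTP endpoint. Account ID, token, and destination URL
# are placeholders you must supply.
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_logpush_job(dataset: str, destination: str) -> dict:
    """Build the JSON body for a Logpush job-creation request."""
    return {
        "name": f"elastic-{dataset}",
        "dataset": dataset,               # e.g. "gateway_http"
        "destination_conf": destination,  # the Elastic Agent HTTP endpoint
        "enabled": True,
    }

def create_logpush_job(account_id: str, token: str, job: dict) -> dict:
    """POST the job to the account-scoped Logpush endpoint."""
    req = urllib.request.Request(
        f"{API_BASE}/accounts/{account_id}/logpush/jobs",
        data=json.dumps(job).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

job = build_logpush_job("gateway_http", "https://elastic-agent.example.com:8080")
# create_logpush_job("<account-id>", "<api-token>", job)  # needs real credentials
```

The same builder can be reused for each of the eight datasets; only the `dataset` value changes per job.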

Enabling the integration in Elastic

  1. In Kibana, go to Management > Integrations.
  2. In the integrations search bar, type Cloudflare Logpush.
  3. Click the Cloudflare Logpush integration from the search results.
  4. Click the Add Cloudflare Logpush button to add the integration.
  5. Enable the integration with the HTTP Endpoint, AWS S3 input, or GCS input.
  6. Under the AWS S3 input, there are two types of inputs: using an AWS S3 bucket or using SQS.
  7. Configure Cloudflare to send logs to the Elastic Agent.

What’s next

As organizations increasingly adopt a Zero Trust architecture, understanding your organization’s security posture is paramount. These dashboards provide the tools necessary to build a robust security strategy centered around visibility, early detection, and effective threat response. By unifying data, normalizing fields, facilitating search, and enabling the creation of custom dashboards, this integration becomes a valuable asset for any cybersecurity team aiming to strengthen its security posture.

We’re looking forward to continuing to connect Cloudflare customers with our community of technology partners, to help in the adoption of a Zero Trust architecture.

Explore this new integration today.

Alerting Rules!: InsightIDR Raises the Bar for Visibility and Coverage

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/07/06/alerting-rules-insightidr-raises-the-bar-for-visibility-and-coverage/


By George Schneider, Information Security Manager at Listrak

I’ve worked in cybersecurity for over two decades, so I’ve seen plenty of platforms come and go—some even crash and burn. But Rapid7, specifically InsightIDR, has consistently performed above expectations. In fact, InsightIDR has become an essential resource for maintaining my company’s cybersecurity posture.

Alerting Rules!

Back in the early days, a SIEM didn’t come with a bunch of standardized alerting rules. We had to write all of our own rules to actually find what we were looking for. Today, instead of spending six hours a day hunting for threats, InsightIDR does a lot of the work for the practitioner. Now, we spend a maximum of one hour a day responding to alerts.

In addition to saving time, the out-of-the-box rules are very effective; they find things that our other security products can’t detect. This is a key reason I’ve been 100% happy with Rapid7. As a user, I just know it’s functional. It’s clear that InsightIDR is designed by and for users—there’s no fluff, and the kinks are already ironed out. Not only am I saving time and company resources, the solution is a joy to use.

Source Coverage

When scouting SIEM options, we wanted a platform that could ingest a lot of different log sources. Rapid7 covered all of the elements we use in the big platforms and various security appliances we have—and some in the cloud too. InsightIDR can ingest logs from all sources and correlate them (a key to any high-functioning SIEM) on day one.

Trust the Process

I can honestly say this is the first time I’ve ever used a product that adds new features and functionality every single quarter. It’s not just a new pretty interface either, Rapid7 consistently adds capabilities that move the product forward.

What’s also wonderful is that Rapid7 listens to customers, especially their feedback. Not to toot my own horn, but they’ve even released a handful of feature requests that I submitted over the years. So I can say with absolute sincerity that these improvements actually benefit SOC teams. They make us better at detecting the stuff that we’re most concerned about.

Visibility and Coverage, Thanks, Insight Agent!

If you’re not familiar with Insight Agent, it’s time to get acquainted. Insight Agent is critical for running forensics on a machine. If I have a machine that gets flagged for something through an automated alert, I can quickly jump in without delay because of the Insight Agent. I get lots of worthwhile information that helps me consistently finish investigations in a timely manner. I know in pretty short order whether an alert is nefarious or just a false positive.

And this is all built into the Rapid7 platform—it doesn’t require customization or installations to get up and running. You truly have a single pane of glass to do all of this, and it’s somehow super intuitive as well. Using the endpoint agent, I don’t have to switch over to something else to do additional work. It’s all right there.

“Customer support at Rapid7 is outstanding. It’s the gold standard that I now use to evaluate all other customer support.”

Thinking Outside the Pane

I also have to give a shout out to the Rapid7 community. The community at discuss.rapid7.com/ and the support I get from our Rapid7 account team cannot be overlooked. When I have a question about how to use something, my first step is to visit Discuss to see if somebody else has already posted some information about it—often saving me valuable time. If that doesn’t answer my question, the customer support at Rapid7 is outstanding. It’s the gold standard that I now use to evaluate all other customer support.

The Bottom Line

My bottom line? I love this product (and the people). To say it’s useful is an understatement. I would never recommend a product that I didn’t think was outstanding. I firmly believe in Rapid7 InsightIDR and experience how useful it is every day. So does my team.

To learn more about InsightIDR, our industry-leading cloud-native SIEM solution, watch this on-demand demo.

What’s New in InsightIDR: Q4 2022 in Review

Post Syndicated from Dina Durutlic original https://blog.rapid7.com/2023/01/17/whats-new-in-insightidr-q4-2022-in-review/


As we continue to empower security teams with the freedom to focus on what matters most, Q4 focused on investments and releases that contributed to that vision. With InsightIDR, Rapid7’s cloud-native SIEM and XDR solution, teams have the scale, comprehensive contextual coverage, and expertly vetted detections they need to thwart threats early in the attack chain.

This 2022 Q4 recap post offers a closer look at the recent investments and releases we’ve made over the past quarter. Here are some of the highlights:

Easy to create and manage log search, dashboards, and reports

You spoke, we listened! At our customers’ request, you can now create tables with multiple columns, allowing teams to see all their data in one view. For example, simply add a query with a “where” clause and select a table display, followed by the columns you want displayed.

Additionally, teams can reduce groupby search results with the having() clause. Customers can filter out what data is returned from groupby results with the option to layer in existing analytics function support (e.g. count, unique, max).
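The post doesn't show the query syntax itself, but conceptually `having()` filters the grouped results rather than the raw events, much like SQL's HAVING. A rough Python analogy (not InsightIDR's actual query language):

```python
# Rough analogy: group log events by a key, then keep only the groups whose
# aggregate passes a threshold -- what having() does to groupby results.
from collections import Counter

events = [
    {"user": "alice", "result": "FAILED_LOGIN"},
    {"user": "alice", "result": "FAILED_LOGIN"},
    {"user": "alice", "result": "FAILED_LOGIN"},
    {"user": "bob", "result": "FAILED_LOGIN"},
]

# "groupby(user)" with a count aggregate
counts = Counter(e["user"] for e in events if e["result"] == "FAILED_LOGIN")

# "having(count >= 3)": filter the grouped results, not the raw events
suspicious = {user: n for user, n in counts.items() if n >= 3}
print(suspicious)  # {'alice': 3}
```

The key point is the order of operations: the aggregate (count, unique, max) is computed per group first, and `having()` then prunes groups from the result set.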


Accelerated time to value

The InsightIDR Onboarding Progress Tracker, available to customers during their 90-day onboarding period, is a self-serve, centralized checklist of onboarding tasks with step-by-step guidance, completion statuses, and context on the “what” and “why” of each task.

No longer onboarding? No problem! We made the progress tracker available beyond the 90-day onboarding period so customers can evaluate setup progress and ensure InsightIDR is operating at full capacity to effectively detect, investigate, and respond to threats.


Visibility across your modern environment

For those who leverage Palo Alto Cortex, you can now configure Palo Alto Cortex Data Lake to send activity to InsightIDR, including syslog-encrypted Web Proxy, Firewall, Ingress Authentication, and more. Similarly, for customers leveraging Zscaler, you can now configure the Zscaler Log Streaming Service (LSS) to receive and parse user activity and audit logs from Zscaler Private Access.

For teams who do not have the bandwidth to set up and manage multiple event sources pertaining to Cisco Meraki, we have added support to ingest Cisco Meraki events through the Cisco Meraki API. This will enable you to deploy and add new event sources with less management.


Customers can now bring data from their Government Community Cloud (GCC) and GCC High environments when setting up the Office365 event source to ensure security standards are met when processing US Government data.

Stay tuned!

We’re always working on new product enhancements and functionality to ensure your team can stay ahead of potential threats and malicious activity. Keep an eye on the Rapid7 blog and the InsightIDR release notes to keep up to date with the latest detection and response releases at Rapid7.

Three recurring Security Hub usage patterns and how to deploy them

Post Syndicated from Tim Holm original https://aws.amazon.com/blogs/security/three-recurring-security-hub-usage-patterns-and-how-to-deploy-them/

As Amazon Web Services (AWS) Security Solutions Architects, we get to talk to customers of all sizes and industries about how they want to improve their security posture and get visibility into their AWS resources. This blog post identifies the top three most commonly used Security Hub usage patterns and describes how you can use these to improve your strategy for identifying and managing findings.

Customers have told us they want to provide security and compliance visibility to the application owners in an AWS account or to the teams that use the account; others want a single-pane-of-glass view for their security teams; and other customers want to centralize everything into a security information and event management (SIEM) system, most often due to being in a hybrid scenario.

Security Hub was launched as a posture management service that performs security checks, aggregates alerts, and enables automated remediation. Security Hub ingests findings from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, AWS Firewall Manager, and AWS Health, and also from third-party services. It can be integrated with AWS Organizations to provide a single dashboard where you can view findings across your organization.

Security Hub findings are normalized into the AWS Security Findings Format (ASFF) so that users can review them in a standardized format. This reduces the need for time-consuming data conversion efforts and allows for flexible and consistent filtering of findings based on the attributes provided in the finding, as well as the use of customizable responsive actions. Partners who have integrations with Security Hub also send their findings to AWS using the ASFF to allow for consistent attribute definition and enforced criticality ratings, meaning that findings in Security Hub have a measurable rating. This helps to simplify the complexity of managing multiple findings from different providers.

Overview of the usage patterns

In this section, we outline the objectives for each usage pattern, list the typical stakeholders we have seen these patterns support, and discuss the value of deploying each one.

Usage pattern 1: Dashboard for application owners

Use Security Hub to provide visibility to application workload owners regarding the security and compliance posture of their AWS resources.

The application owner is often responsible for the security and compliance posture of the resources they have deployed in AWS. In our experience, however, it is common for large enterprises to have a separate team responsible for defining security-related privileges and to not grant application owners the ability to modify configuration settings on the AWS account that is designated as the centralized Security Hub administration account. We’ll walk through how you can enable read-only access so that application owners can use Security Hub to see the overall security posture of their AWS resources.

Stakeholders: Developers and cloud teams that are responsible for the security posture of their AWS resources. These individuals are often required to resolve security events and non-compliance findings that are captured with Security Hub.

Value adds for customers: Some organizations we have worked with put the onus on workload owners to own their security findings, because they have a better understanding of the nuances of the engineering, the business needs, and the overall risk that the security findings represent. This usage pattern gives the application owners clear visibility into the security and compliance status of their workloads in the AWS accounts so that they can define appropriate mitigation actions with consideration for their business needs and risk.

Usage pattern 2: A single pane of glass for security professionals

Use Security Hub as a single pane of glass to view, triage, and take action on AWS security and compliance findings across accounts and AWS Regions.

Security Hub generates findings by running continuous, automated security checks based on supported industry standards. Additionally, Security Hub integrates with other AWS services to collect and correlate findings and uses over 60 partner integrations to simplify and prioritize findings. With these features, security professionals can use Security Hub to manage findings across their AWS landscape.

Stakeholders: Security operations, incident responders, and threat hunters who are responsible for monitoring compliance, as well as security events.

Value adds for customers: This pattern benefits customers who don’t have a SIEM but who are looking for a centralized model of security operations. By using Security Hub and aggregating findings across Regions into a single Security Hub dashboard, they get oversight of their AWS resources without the cost and complexity of managing a SIEM.

Usage pattern 3: Centralized routing to a SIEM solution

Use AWS Security Hub as a single aggregation point for security and compliance findings across AWS accounts and Regions, and route those findings in a normalized format to a centralized SIEM or log management tool.

Customers who have an existing SIEM capability and complex environments typically deploy this usage pattern. By using Security Hub, these customers gather security and compliance-related findings across the workloads in all their accounts, ingest those into their SIEM, and investigate findings or take response and remediation actions directly within their SIEM console. This mechanism also enables customers to define use cases for threat detection and analysis in a single environment, providing a holistic view of their risk.

Stakeholders: Security operations teams, incident responders, and threat hunters. This pattern supports a centralized model of security operations, where the responsibilities for monitoring and identifying both non-compliance with defined practices and security events fall within a single team in the organization.

Value adds for customers: When Security Hub aggregates the findings from workloads across accounts and Regions in a single place, those findings are normalized by using the ASFF. This means that findings are already normalized under a single format when they are sent to the SIEM. This enables faster analytics, use case definition, and dashboarding, because analysts don’t have to create multi-tiered use cases for different finding structures across vendors and services.

The ASFF also enables streamlined response through security orchestration, automation, and response (SOAR) tools or AWS native orchestration tools such as Amazon EventBridge. With the ASFF, you can easily parse and filter events based on an attribute and customize automation.
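Because every ASFF finding carries the same attribute structure, a filter can be written once and applied to findings from any provider. A minimal sketch (the field names `Severity.Label`, `Title`, and `Resources` are part of the published ASFF; the sample findings themselves are fabricated placeholders):

```python
# Sketch: filter normalized ASFF findings by severity before routing them to
# automation. Field names follow the AWS Security Finding Format (ASFF).
sample_findings = [
    {"Title": "S3 bucket allows public read",
     "Severity": {"Label": "CRITICAL"},
     "Resources": [{"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::example-bucket"}]},
    {"Title": "CloudTrail log file validation disabled",
     "Severity": {"Label": "LOW"},
     "Resources": [{"Type": "AwsCloudTrailTrail", "Id": "arn:aws:cloudtrail:example"}]},
]

def actionable(findings, labels=("CRITICAL", "HIGH")):
    """Keep only findings whose severity label warrants a response."""
    return [f for f in findings if f["Severity"]["Label"] in labels]

for f in actionable(sample_findings):
    print(f["Title"], "->", f["Resources"][0]["Id"])
```

The same one-line predicate works whether the finding came from GuardDuty, Inspector, or a partner product, which is exactly the normalization benefit described above.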

Overall, this usage pattern helps to improve the typical key performance indicators (KPIs) the SecOps function is measured against, such as Mean Time to Detect (MTTD) or Mean Time to Respond (MTTR) in the AWS environment.

Setting up each usage pattern

In this section, we’ll go over the steps for setting up each usage pattern.

Usage pattern 1: Dashboard for application owners

Use the following steps to set up a Security Hub dashboard for an account owner, where the owner can view and take action on security findings.

Prerequisites for pattern 1

This solution has the following prerequisites:

  1. Enable AWS Security Hub to check your environment against security industry standards and best practices.
  2. Next, enable the AWS service integrations for all accounts and Regions as desired. For more information, refer to Enabling all features in your organization.

Set up read-only permissions for the AWS application owner

The following steps are commonly performed by the security team or those responsible for creating AWS Identity and Access Management (IAM) policies.

  • Assign the AWS managed permission policy AWSSecurityHubReadOnlyAccess to the principal who will be assuming the role. Figure 1 shows an image of the permission statement.
    Figure 1: Assign permissions


  • (Optional) Create custom insights in Security Hub. Using custom insights can provide a view of areas of interest for an application owner; however, creating a new insights view is not allowed unless the following additional set of permissions is granted to the application owner’s role or user.

    {
      "Effect": "Allow",
      "Action": [
        "securityhub:UpdateInsight",
        "securityhub:DeleteInsight",
        "securityhub:CreateInsight"
      ],
      "Resource": "*"
    }

Pattern 1 walkthrough: View the application owner’s security findings

After the read-only IAM policy has been created and applied, the application owner can access Security Hub to view the dashboard, which provides the application owner with a view of the overall security posture of their AWS resources. In this section, we’ll walk through the steps that the application owner can take to quickly view and assess the compliance and security of their AWS resources.

To view the application owner’s dashboard in Security Hub

  1. Sign into the AWS Management Console and navigate to the AWS Security Hub service page. You will be presented with a summary of the findings. Then, depending on the security standards that are enabled, you will be presented with a view similar to the one shown in Figure 2.
    Figure 2: Summary of aggregated Security Hub standard score


    Security Hub generates its own findings by running automated and continuous checks against the rules in a set of supported security standards. On the Summary page, the Security standards card displays the security scores for each enabled standard. It also displays a consolidated security score that represents the proportion of passed controls to enabled controls across the enabled standards.

  2. Choose the hyperlink of a security standard to get an additional summarized view, as shown in Figure 3.
    Figure 3: Security Hub standards summarized view


  3. As you choose the hyperlinks for the specific findings, you will get additional details, along with recommended remediation instructions to take.
    Figure 4: Example of finding details view


  4. In the left menu of the Security Hub console, choose Findings to see the findings ranked according to severity. Choose the link text of the finding title to drill into the details and view additional information on possible remediation actions.
    Figure 5: Findings example


  5. In the left menu of the Security Hub console, choose Insights. You will be presented with a collection of related findings. Security Hub provides several managed insights to get you started with assessing your security posture. As shown in Figure 6, you can quickly see if your Amazon Simple Storage Service (Amazon S3) buckets have public write or read permissions. This is just one example of managed insights that help you quickly identify risks.
    Figure 6: Insights view


  6. You can create custom insights to track issues and resources that are specific to your environment. Note that creating custom insights requires IAM permissions, as described earlier in the Prerequisites for Pattern 1 section. Use the following steps to create a custom insight for compliance status.

    To create a custom insight, use the Group By filter and select how you want your insights to be grouped together:

    1. In the left menu of the Security Hub console, choose Insights, and then choose Create insight in the upper right corner.
    2. By default, there will be filters included in the filter bar. Put the cursor in the filter bar, choose Group By, choose Compliance Status, and then choose Apply.
      Figure 7: Creating a custom insight


    3. For Insight name, enter a relevant name for your insight, and then choose Create insight. Your custom insight will be created.

In this scenario, you learned how application owners can quickly assess the resources in an AWS account and get details about security risks and recommended remediation steps. For a more hands-on walkthrough that covers how to use Security Hub, consider spending 2–3 hours going through this AWS Security Hub workshop.

Usage pattern 2: A single pane of glass for security professionals

To use Security Hub as a centralized source of security insight, we recommend that you choose to accept security data from the available integrated AWS services and third-party products that generate findings. Check the lists of available integrations often, because AWS continues to release new services that integrate with Security Hub. Figure 8 shows the Integrations page in Security Hub, where you can find information on how to accept findings from the many integrations that are available.
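Besides the console's Integrations page, accepting findings from a product can be scripted with the Security Hub API's EnableImportFindingsForProduct action. The product ARN below is a placeholder; you would copy the real one from the Integrations page or from `describe_products()`:

```python
# Sketch: programmatically accept findings from an integrated product.
# The product ARN is a placeholder -- use the real ARN from the
# Security Hub Integrations page or describe_products().
def import_request(product_arn: str) -> dict:
    """Parameters for securityhub.enable_import_findings_for_product()."""
    return {"ProductArn": product_arn}

params = import_request(
    "arn:aws:securityhub:us-east-1::product/example-partner/example-product")
# import boto3  # requires AWS credentials
# boto3.client("securityhub").enable_import_findings_for_product(**params)
print(params["ProductArn"])
```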

Figure 8: Security Hub integrations page


Solution architecture and workflow for pattern 2

As Figure 9 shows, you can visualize Security Hub as the centralized security dashboard. Here, Security Hub can act as both the consumer and issuer of findings. Additionally, if you have security findings you want sent to Security Hub that aren’t provided by an AWS Partner or AWS service, you can create a custom provider to get the central visibility you need.

Figure 9: Security Hub findings flow


Because Security Hub is integrated with many AWS services and partner solutions, customers get improved security visibility across their AWS landscape. With the integration of Amazon Detective, it’s convenient for security analysts to use Security Hub as the centralized starting point for incident triage. Amazon Detective is a security investigation service that can be used to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities by collecting log data from AWS resources. To learn how to get started with Amazon Detective, we recommend watching this video.

Programmatically remediate high-volume workflows

Security teams increasingly rely on monitoring and automation to scale and keep up with the demands of their business. Using Security Hub, customers can configure automatic responses to findings based upon preconfigured rules. Security Hub gives you the option to create your own automated response and remediation solution or use the AWS provided solution, Security Hub Automated Response and Remediation (SHARR). SHARR is an extensible solution that provides predefined response and remediation actions (playbooks) based on industry compliance standards and best practices for security threats. For step-by-step instructions for setting up SHARR, refer to this blog post.
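Automation of this kind typically hangs off an EventBridge rule. The `source` and `detail-type` values below are the documented ones for imported Security Hub findings; the severity filter and rule name are illustrative choices, and the `put_rule` call is commented because it requires AWS credentials:

```python
# Sketch: an EventBridge event pattern that matches imported Security Hub
# findings of HIGH or CRITICAL severity, suitable for events.put_rule().
import json

event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]}
        }
    },
}

# With boto3 (requires credentials):
# import boto3
# events = boto3.client("events")
# events.put_rule(Name="securityhub-high-critical",
#                 EventPattern=json.dumps(event_pattern))
print(json.dumps(event_pattern, indent=2))
```

A rule like this can target a Lambda function, an SSM automation document, or a SOAR webhook, which is the mechanism SHARR-style playbooks build on.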

Routing to alerting and ticketing systems

For incidents you cannot or do not want to automatically remediate, either because the incident happened in an account with a production workload or because a change control process must be followed, routing to an incident management environment may be necessary. The primary goal of incident response is reducing the time to resolution for critical incidents. Customers who use alerting or incident management systems can integrate Security Hub to reduce the time it takes to resolve incidents. ServiceNow ITSM, Slack, and PagerDuty are examples of products that integrate with Security Hub. This allows for workflow processing, escalations, and notifications as required.

Additionally, Incident Manager, a capability of AWS Systems Manager, also provides response plans, an escalation path, runbook automation, and active collaboration to recover from incidents. By using runbooks, customers can set up and run automation to recover from incidents. This blog post walks through setting up runbooks.

Usage pattern 3: Centralized routing to a SIEM solution

Here, we will describe how to use Splunk as an AWS Partner SIEM solution. However, note that there are many other SIEM partners available in the marketplace; the instructions to route findings to those partners’ platforms will be available in their documentation.

Solution architecture and workflow for pattern 3

Figure 10: Security Hub findings ingestion to Splunk


Figure 10 shows the use of a Security Hub delegated administrator that aggregates findings across multiple accounts and Regions, as well as other AWS services such as GuardDuty, Amazon Macie, and Inspector. These findings are then sent to Splunk through a combination of Amazon EventBridge, AWS Lambda, and Amazon Kinesis Data Firehose.
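The Lambda stage of this pipeline can be sketched as follows. The event shape (ASFF findings under `event["detail"]["findings"]`) matches the EventBridge "Security Hub Findings - Imported" event; the delivery stream name is a placeholder, and the Firehose call is commented because it needs AWS credentials at runtime:

```python
# Sketch of the Lambda step in the pipeline: pull the ASFF findings out of the
# EventBridge event and forward each as a newline-delimited JSON record to a
# Kinesis Data Firehose delivery stream (the stream name is a placeholder).
import json

STREAM_NAME = "securityhub-to-splunk"  # hypothetical delivery stream

def findings_to_records(event: dict) -> list:
    """Convert an EventBridge Security Hub event into Firehose records."""
    findings = event.get("detail", {}).get("findings", [])
    return [{"Data": (json.dumps(f) + "\n").encode()} for f in findings]

def handler(event, context):
    records = findings_to_records(event)
    if not records:
        return {"delivered": 0}
    # import boto3  # requires AWS credentials at runtime
    # boto3.client("firehose").put_record_batch(
    #     DeliveryStreamName=STREAM_NAME, Records=records)
    return {"delivered": len(records)}

sample_event = {"detail": {"findings": [{"Id": "finding-1"}, {"Id": "finding-2"}]}}
print(handler(sample_event, None))  # {'delivered': 2}
```

Newline-delimited JSON is used here because Splunk's HEC and most indexers treat each line as one event; Firehose then batches and delivers the records to the configured destination.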

Prerequisites for pattern 3

This solution has the following prerequisites:

  • Enable Security Hub in your accounts, with one account defined as the delegated admin for other accounts within AWS Organizations, and enable cross-Region aggregation.
  • Set up a third-party SIEM solution; you can visit AWS Marketplace for a list of our SIEM partners. For this walkthrough, we will be using Splunk, with the Security Hub app in Splunk and an HTTP Event Collector (HEC) with indexer acknowledgment configured.
  • Generate and deploy a CloudFormation template from Splunk’s automation, provided by Project Trumpet.
  • Enable cross-Region aggregation. This action can only be performed from within the delegated administrator account, or from within a standalone account that is not controlled by a delegated administrator. The aggregation Region must be a Region that is enabled by default.

Pattern 3 walkthrough: Set up centralized routing to a SIEM

To get started, first designate a Security Hub delegated administrator and configure cross-Region aggregation. Then you can configure integration with Splunk.

To designate a delegated administrator and configure cross-Region aggregation

  1. Follow the steps in Designating a Security Hub administrator account to configure the delegated administrator for Security Hub.
  2. Perform these steps to configure cross-Region aggregation:
    1. Sign in to the account to which you delegated Security Hub administration, and in the console, navigate to the Security Hub dashboard in your desired aggregation Region. You must have the correct permissions to access Security Hub and make this change.
    2. Choose Settings, choose Regions, and then choose Configure finding aggregation.
    3. Select the radio button that displays the Region you are currently in, and then choose Save.
    4. You will then be presented with all available Regions in which you can aggregate findings. Select the Regions you want to be part of the aggregation. You also have the option to automatically link future Regions that Security Hub becomes enabled in.
    5. Choose Save.

You have now enabled multi-Region aggregation. Navigate back to the dashboard, where findings will start to be replicated into a single view. The time it takes to replicate the findings from the Regions will vary. We recommend waiting 24 hours for the findings to be replicated into your aggregation Region.

To configure integration with Splunk

Note: These actions require that you have appropriate permissions to deploy a CloudFormation template.

  1. Navigate to https://splunktrumpet.github.io/ and enter your HEC details: the endpoint URL and HEC token. Leave Automatically generate the required HTTP Event Collector tokens on your Splunk environment unselected.
  2. Under AWS data source configuration, select only AWS CloudWatch Events, with the Security Hub findings – Imported filter applied.
  3. Download the CloudFormation template to your local machine.
  4. Sign in to the AWS Management Console in the account and Region where your Security Hub delegated administrator and Region aggregation are configured.
  5. Navigate to the CloudFormation console and choose Create stack.
  6. Choose Template is ready, and then choose Upload a template file. Upload the CloudFormation template you previously downloaded from the Splunk Trumpet page.
  7. In the CloudFormation console, on the Specify Details page, enter a name for the stack. Keep all the default settings, and then choose Next.
  8. Keep all the default settings for the stack options, and then choose Next to review.
  9. On the review page, scroll to the bottom of the page. Select the check box under the Capabilities section, next to the acknowledgment that AWS CloudFormation might create IAM resources with custom names.

    The CloudFormation template will take approximately 15–20 minutes to complete.

Test the solution for pattern 3

If you have GuardDuty enabled in your account, you can generate sample findings. Security Hub will ingest these findings and invoke the EventBridge rule to push them into Splunk. Alternatively, you can wait for findings to be generated from the periodic checks that are performed by Security Hub. Figure 11 shows an example of findings displayed in the Security Hub dashboard in Splunk.
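Generating the sample findings can also be scripted with boto3's GuardDuty client. The detector ID is a placeholder, the finding type shown is one example from GuardDuty's documented types, and the API call itself is commented because it requires real credentials:

```python
# Sketch: generate GuardDuty sample findings so the EventBridge rule fires
# and Security Hub pushes them through to Splunk.
def sample_findings_request(detector_id: str) -> dict:
    """Build the parameters for guardduty.create_sample_findings()."""
    return {
        "DetectorId": detector_id,
        "FindingTypes": ["Recon:EC2/PortProbeUnprotectedPort"],
    }

params = sample_findings_request("<detector-id>")
# import boto3  # requires AWS credentials
# boto3.client("guardduty").create_sample_findings(**params)
print(params["FindingTypes"])
```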

Figure 11: Example of the Security Hub dashboard in Splunk


Conclusion

AWS Security Hub provides multiple ways to quickly assess and prioritize your security alerts and security posture. In this post, you learned about three different usage patterns that we have seen our customers implement to take advantage of the benefits and integrations offered by Security Hub. Note that these usage patterns are not mutually exclusive and can be used together as needed.

To extend these solutions further, you can enrich Security Hub metadata with additional context by using tags, as described in this post. Configure Security Hub to ingest findings from a variety of AWS Partners to provide additional visibility and context to the overall status of your security posture. To start your 30-day free trial of Security Hub, visit AWS Security Hub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, please start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Tim Holm


Tim is a Principal Solutions Architect at AWS. He joined AWS in 2018 and is deeply passionate about security, cloud services, and building innovative solutions that solve complex business problems.

Danny Cortegaca


Danny is a Senior Security Specialist at AWS. He joined AWS in 2021 and specializes in security architecture, threat modelling, and driving risk-focused conversations.

[The Lost Bots] S02E05: The real magic in the Magic Quadrant

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/10/19/the-lost-bots-s02e05-the-real-magic-in-the-magic-quadrant/


In this episode, we discuss the best use of market research reports, like Magic Quadrants and Waves. If you’re in the market for a new cybersecurity solution, do you just pick a Leader and call it a day?

“Consult the MQ only after you’ve identified two vendors that would be a perfect security solution for you,” say our hosts Jeffrey Gardner, Detection and Response Practice Advisor and Stephen Davis, Lead D&R Sales Technical Advisor. When you have two that meet or exceed the requirements? “I’ll be honest, I might not care about the MQ placement,” says Davis.

Do not under any circumstances leave before the jazz hands bit: they do gather themselves and talk about how outcomes have to run the show, first and always.

Check back with us in November for our next installment of The Lost Bots!

Additional reading:

We’re Challenging Convention. Rapid7 Recognized in the 2022 Gartner® Magic Quadrant™ for SIEM.

Post Syndicated from Meaghan Donlon original https://blog.rapid7.com/2022/10/13/rapid7-recognized-in-the-2022-gartner-magic-quadrant-for-siem/

We're Challenging Convention. Rapid7 Recognized in the 2022 Gartner® Magic Quadrant™ for SIEM.

As the attack surface sprawls, under-resourced security teams have inherent disadvantages. Rapid7 InsightIDR enables resource constrained security teams to achieve sophisticated detection and response, with greater efficiency and efficacy. As a Challenger in the 2022 Gartner Magic Quadrant for SIEM, we’re proud to represent the huge number of security teams out there today that don’t have time to do it all, but are asked to do it anyway. Our goal is to keep your organization safe by finding and eliminating threats faster and more reliably.


Gartner® Peer Insights™

Rapid7 maximizes your most precious resource: time

We are grateful to have a diverse collective of customers and partners around the world, of varying size and industry focus. These smart, agile, maturing teams want to advance their detection and response programs, but their organizations and the threats they face are moving faster than their capacity is growing. The constant that unites all of these teams: they never have enough time. Yet, we feel that despite a well-documented, industry-crushing skills gap, far too many traditional SIEMs and detection products continue to introduce additional noise and complexity for these teams. The result is long days, weekend work, far too many missed dinners / concerts / games, and (scariest of all) missed threats.

The best way to achieve successful detection and response is through a pragmatic and efficient approach. Threats are still a threat—whether or not you’ve had time to set up your complex traditional SIEM or the myriad of point detection solutions around it. Attackers don’t care if you’re ready. In fact, they’re counting on you not to be. Security teams need time and access to expertise to close this gap.

That’s where we believe Rapid7 can help.


Gartner® Peer Insights™

Time-to-value and efficiency at every step

From inception, the guiding principle of InsightIDR has been to deliver sophisticated detection and response in a more efficient and effective way. Here's how:

  • A cloud-native foundation, SaaS delivery, and software-based collectors means it is faster to deploy, removes hardware burdens that bog teams down, and accelerates the time to actually get insights.
  • Intuitive interfaces, pre-built dashboards and reports, and a robust detections library means that teams are able to activate even the most junior analysts to deliver advanced analysis and threat detections right away.
  • And highly correlated investigation timelines, response recommendations (vetted by Rapid7’s MDR team), and pre-built automation workflows help you with one of the hardest parts of your job: responding to threats before significant damage occurs.

In short, we offer a SIEM that maturing teams can get real value from. Over the last seven years, we've struck a balance of adding a multitude of capabilities while never compromising our core tenet and commitment: improving your productivity and delivering a better detection and response experience.


Gartner® Peer Insights™

High-fidelity, expertly vetted detections

Leveraging a diverse mix of threat intelligence—including unique intel from Rapid7’s renowned open-source projects—the Rapid7 Threat Intelligence and Detections Engineering (TIDE) team curates emergent threat content from all corners of the threat landscape. Our TIDE team is constantly manicuring a library of both known and unknown threats to capture even the most evasive attacks. With this always-up-to-date library and native UEBA, EDR, NDR, deception technology, and cloud TDIR, InsightIDR customers can be confident that the entirety of their attack surface is covered. And because our global MDR team is leveraging the same threat library, you can be certain that alerts will be low noise, highly reliable, and primed for analysts to take action.


Gartner® Peer Insights™

The future of detection & response

We believe that as the threat and attack landscape changes at a rapid pace, the approaches to unifying data, detecting, and responding need to change with it. Reducing the noise and accelerating response outcomes is critical for security success – regardless of your security maturity. We also believe this is why Gartner has named us a Challenger in the Magic Quadrant for SIEM – and we will continue to challenge the traditional as we focus on building the right outcomes for our customers. Find a complimentary copy of the 2022 Gartner Magic Quadrant for SIEM here.

Just a few of those outcomes we are driving toward in the future:

  • More frictionless access to expertise to ensure analysts always know how to respond and can execute more quickly
  • Deepening our breadth of detections and endpoint coverage for modern, dynamic environments, so customers can continue to leverage InsightIDR as their single source of truth for detection and response
  • Making sure our MSSP partners and their customers are optimized to succeed by providing a more turnkey experience that enables these partners to tap into the scale and efficiency of InsightIDR

We are excited to share more on these initiatives soon. Thank you to our customers and partners for continuing to share your insights, ideas, pains, and future plans. You continue to fuel our innovation and validate that we are on the right track in addressing the needs of maturing security teams.

Get the full report

Download now

GARTNER and Magic Quadrant are registered trademarks and service marks, and PEER INSIGHTS is a trademark and service mark, of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner Peer Insights content consists of the opinions of individual end users based on their own experiences, and should not be construed as statements of fact, nor do they represent the views of Gartner or its affiliates. Gartner does not endorse any vendor, product or service depicted in this content nor makes any warranties, expressed or implied, with respect to this content, about its accuracy or completeness, including any warranties of merchantability or fitness for a particular purpose.

How to Deploy a SIEM That Actually Works

Post Syndicated from Robert Holzer original https://blog.rapid7.com/2022/09/27/how-to-deploy-a-siem-that-actually-works/

I deployed my SIEM in days, not months – here’s how you can too


As an IT administrator at a highly digitized manufacturing company, I spent many sleepless nights with no visibility into the activity and security of our environment before deploying a security information and event management (SIEM) solution. At the company I work for, Schlotterer Sonnenschutz Systeme GmbH, we have a lot of manufacturing machines that rely on internet access and external companies that remotely connect to our company’s environment – and I couldn’t see any of it happening. One of my biggest priorities was to source and implement state-of-the-art security solutions – beginning with a SIEM tool.

I asked colleagues and partners in the IT sector about their experience with deploying and leveraging SIEM technology. The majority of the feedback I received was that deploying a SIEM was a lengthy and difficult process. Then, once stood up, SIEMs were often missing information or difficult to pull actionable data from.

The feedback did not instill much confidence – particularly as this would be the first time I personally had deployed a SIEM. I was prepared for a long deployment road ahead, with the risk of shelfware looming over us. However, to my surprise, after identifying Rapid7’s InsightIDR as our chosen solution, the process was manageable and efficient, and we began receiving value just days after deployment. Rapid7 is clearly an outlier in this space: able to deliver an intuitive and accelerated onboarding experience while still driving actionable insights and sophisticated security results.

3 key steps for successful SIEM deployment

Based on my experience, our team identified three critical steps that must be taken in order to have a successful SIEM deployment:

  1. Identifying core event sources and assets you intend to onboard before deployment
  2. Collecting and correlating relevant and actionable security telemetry to form a holistic and accurate view of your environment while driving reliable early threat detection (not noise)
  3. Putting data to work in your SIEM so you can begin visualizing and analyzing to validate the success of your deployment

1. Identify core event sources and assets to onboard

Before deploying a SIEM, gather as much information as possible about your environment so you can easily begin the deployment process. Rapid7 provided easy-to-understand help documentation all throughout our deployment process in order to set us up for success. The instructions were highly detailed and easy to understand, making the setup quick and painless. Additionally, they provide a wide selection of pre-built event sources out-of-the-box, simplifying my experience. Within hours, I had all the information I needed in front of me.

Based on Rapid7’s recommendations, we set up what is referred to as the six core event sources:

  • Active Directory (AD)
  • Lightweight Directory Access Protocol (LDAP) server
  • Dynamic Host Configuration Protocol (DHCP) event logs
  • Domain Name System (DNS)
  • Virtual Private Network (VPN)
  • Firewall

Creating these event sources will get the most information flowing through your SIEM. If your solution has user and entity behavior analytics (UEBA) capabilities like InsightIDR, getting all the data in quickly begins the baselining process so you can identify anomalies and potential user and insider threats down the road.
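InsightIDR's actual baselining models aren't visible to me, but the idea is easy to sketch. The following is a simplified, hypothetical illustration (not Rapid7's algorithm): build a per-user baseline from historical activity counts and flag days that deviate sharply from it.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Two weeks of daily authentication counts for one user
baseline = [21, 19, 23, 20, 22, 18, 24, 20, 21, 22, 19, 23, 20, 21]
print(is_anomalous(baseline, 22))   # → False: a typical day
print(is_anomalous(baseline, 400))  # → True: a spike worth investigating
```

Real UEBA baselines cover far more signals (logon times, locations, asset activity), but the principle is the same: the sooner data starts flowing, the sooner the baseline is trustworthy.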

2. Collecting and correlating relevant and actionable security telemetry

When deployed and configured properly, a good SIEM will unify your security telemetry into a single cohesive picture. When done ineffectively, a SIEM can create an endless maze of noise and alerts. Striking a balance of ingesting the right security telemetry and threat intelligence to drive meaningful, actionable threat detections is critical to effective detection, investigation, and response. A great solution harmonizes otherwise disparate sources to give a cohesive view of the environment and malicious activity.

InsightIDR came with a native endpoint agent, network sensor, and a host of integrations to make this process much easier. To provide some context, at Schlotterer Sonnenschutz Systeme GmbH, we have a large number of mobile devices, laptops, surface devices, and other endpoints that exist outside the company. The combined Insight Network Sensor and Insight Agent monitor our environment beyond the physical borders of our IT for complete visibility across offices, remote employees, virtual devices, and more.

Personally, when it comes to installing any agent, I prefer to take a step-by-step approach to reduce any potential negative effects the agent might have on endpoints. With InsightIDR, I easily deployed the Insight Agent on my own computer; then, I pushed it to an additional group of computers. The Rapid7 Agent’s lightweight software deployment is easy on our infrastructure. It took me no time to deploy it confidently to all our endpoints.

With data effectively ingested, we prepared to turn our attention to threat detection. Traditional SIEMs we had explored left much of the detection content creation to us to configure and manage – significantly swelling the scope of deployment and day-to-day operations. However, Rapid7 comes with an expansive managed library of curated detections out of the box – eliminating the need for upfront customizing and configuring and giving us coverage immediately. The Rapid7 detections are vetted by their in-house MDR SOC, which means they don’t create too much noise, and I had to do little to no tuning so that they aligned with my environment.

3. Putting your data to work in your SIEM

For our resource-constrained team, ensuring that we had relevant dashboarding and reports to track critical systems and activity across our network, and to support audits and regulatory requirements, was always a big focus. From talking to my peers, we were wary of building dashboards that would require our team to take on complicated query writing to create sophisticated visuals and reports. The prebuilt dashboards included within InsightIDR were again a huge time-saver for our team and helped us mobilize around sophisticated security reporting out of the gate. For example, I am using InsightIDR's Active Directory Admin Actions dashboard to identify:

  • What accounts were created in the past 24 hours?
  • What accounts were deleted in the past 24 hours?
  • What accounts changed their password?
  • Who was added as a domain administrator?

Because the dashboards are already built into the system, it takes me just a few minutes to see the information I need to see and export that data to an interactive HTML report I can provide to my stakeholders. When deploying your own SIEM, I recommend really digging into the visualization options, seeing what it will take to build your own cards, and exploring any available prebuilt content to understand how long it may take you to begin seeing actionable data.  
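The dashboard's internals aren't exposed, but the four questions above map onto well-known Windows Security event IDs, which makes the underlying grouping easy to picture. A minimal sketch (the event IDs are standard Windows values; the data shape is hypothetical):

```python
from collections import defaultdict

# Windows Security event IDs behind the four questions above
ADMIN_ACTIONS = {
    4720: "account created",
    4726: "account deleted",
    4724: "password reset",
    4728: "added to security-enabled global group (e.g. Domain Admins)",
}

def summarize(events):
    """Group the last 24 hours of events by admin action."""
    summary = defaultdict(list)
    for event in events:
        action = ADMIN_ACTIONS.get(event["event_id"])
        if action:
            summary[action].append(event["target_user"])
    return dict(summary)

events = [
    {"event_id": 4720, "target_user": "svc-backup"},
    {"event_id": 4728, "target_user": "jdoe"},
    {"event_id": 4624, "target_user": "jdoe"},  # ordinary logon, not surfaced
]
print(summarize(events))
```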

I now have knowledge about my environment. I know what happens. I know for sure that if there is anything malicious or suspicious in my environment, Rapid7, the Insight Agent, or any of the sources we have integrated to InsightIDR will catch it, and I can take action right away.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Rapid7 Makes Security Compliance Complexity a Thing of the Past With InsightIDR

Post Syndicated from KJ McCann original https://blog.rapid7.com/2022/08/30/rapid7-makes-security-compliance-complexity-a-thing-of-the-past-with-insightidr/

Rapid7 Makes Security Compliance Complexity a Thing of the Past With InsightIDR

As a unified SIEM and XDR solution, InsightIDR gives organizations the tools they need to drive an elevated and efficient compliance program.

Cybersecurity standards and compliance are mission-critical for every organization, regardless of size. Apart from the direct losses resulting from a data breach, non-compliant companies could face hefty fees, loss of business, and even jail time under growing regulations. However, managing and maintaining compliance, preparing for audits, and building necessary reports can be a full-time job, which might not be in the budget. For already-lean teams, compliance can also distract from more critical security priorities like monitoring threats, early threat detection, and accelerated response – exposing organizations to greater risk.

An efficient compliance strategy reduces risk, ensures that your team is always audit-ready, and – most importantly – drives focus on more critical security work. With InsightIDR, security practitioners can quickly meet their compliance and regulatory requirements while accelerating their overall detection and response program.

Here are three ways InsightIDR has been built to elevate and simplify your compliance processes.

1. Powerful log management capabilities for full environment visibility and compliance readiness

Complete environment visibility and security log collection are critical for compliance purposes, as well as for providing a foundation of effective monitoring and threat detection. Enterprises need to monitor user activity, behavior, and application access across their entire environment — from the cloud to on-premises services. The adoption of cloud services continues to increase, creating even more potential access points for teams to keep up with.

InsightIDR’s strong log management capabilities provide full visibility into these potential threats, as well as enable robust compliance reporting by:

  • Centralizing and aggregating all security-relevant events, making them available for use in monitoring, alerting, investigation, and ad hoc searching
  • Providing the ability to search for data quickly, create data models and pivots, save searches and pivots as reports, configure alerts, and create dashboards
  • Retaining all log data for 13 months for all InsightIDR customers, enabling the correlation of data over time and meeting compliance mandates
  • Automatically mapping data to compliance controls, allowing analysts to create comprehensive dashboards and reports with just a few clicks

To take it a step further, InsightIDR’s intuitive user interface streamlines searches while eliminating the need for IT administrators to master a search language. The out-of-the-box correlation searches can be invoked in real time or scheduled to run regularly at a specific time should the need arise for compliance audits and reporting, updated dashboards, and more.

2. Predefined compliance reports and dashboards to keep you organized and consistent

Pre-built compliance content in InsightIDR enables teams to create robust reports without investing countless hours manually building and correlating data to provide information on the organization’s compliance posture. With the pre-built reports and dashboards, you can:

  • Automatically map data to compliance controls
  • Save filters and searches, then duplicate them across dashboards
  • Create, share, and customize reports right from the dashboard
  • Make reports available in multiple formats like PDF or interactive HTML files

InsightIDR’s library of pre-built dashboards makes it easier than ever to visualize your data within the context of common frameworks. Entire dashboards created by our Rapid7 experts can be set up in just a few clicks. Our dashboards cover a variety of key compliance frameworks like PCI, ISO 27001, HIPAA, and more.

Rapid7 Makes Security Compliance Complexity a Thing of the Past With InsightIDR

3. Unified and correlated data points to provide meaningful insights

With strong log management capabilities providing a foundation for your security posture, the ability to correlate the resulting data and look for unusual behavior, system anomalies, and other indicators of a security incident is key. This information is used not only for real-time event notification but also for compliance audits and reporting, performance dashboards, historical trend analysis, and post-hoc incident forensics.

Privileged users are often the targets of attacks, and when compromised, they typically do the most damage. That’s why it’s critical to extend monitoring to these users. In fact, because of the risk involved, privileged user monitoring is a common requirement for compliance reporting in many regulated industries.

InsightIDR provides a constantly curated library of detections that span user behavior analytics, endpoints, file integrity monitoring, network traffic analysis, and cloud threat detection and response – supported by our own native endpoint agent, network sensor, and collection software. User authentications, locational data, and asset activity are baselined to identify anomalous privilege escalations, lateral movement, and compromised credentials. Customers can also connect their existing Privileged Access Management tools (like CyberArk Vault or Varonis DatAdvantage) to get a more unified view of privileged user monitoring with a single interface.

Meet compliance standards while accelerating your detection and response

We know compliance is not the only thing a security operations center (SOC) has to worry about. InsightIDR can ensure that your most critical compliance requirements are met quickly and confidently. Once you have an efficient compliance process, the team will be able to focus their time and effort on staying ahead of emergent threats and remediating attacks quickly, reducing risk to the business.

What could you do with the time back?

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Simplify SIEM Optimization With InsightIDR

Post Syndicated from Margaret Wei original https://blog.rapid7.com/2022/07/22/simplify-siem-optimization-with-insightidr/

Two key ways InsightIDR helps customers tailor reporting, detection, and response — without any headaches


For far too many years, security teams have accepted that with a SIEM comes compromise. You could have highly tailored and custom rule sets, but it meant endless amounts of tuning and configuration to create and manage them. You could have pre-built content, but that meant rigidity and noise. You could have all the dashboard bells and whistles, but that meant finding the unicorn that knew how to navigate them. Too many defenders have carried this slog, accepting this traditional SIEM reality as “it is what it is.” No more!

It’s possible to have it all — an intuitive interface and sophisticated tuning and customization

With InsightIDR, Rapid7’s leading SIEM and XDR, you can have the best of both worlds — an easy-to-use tool that’s also incredibly sophisticated. InsightIDR makes it easy and intuitive to tune your detections (without heavy script-writing or configuration required). When it comes to viewing your environment’s data and sharing key metrics, our Dashboard Library and reports are readily available and highly customizable for your unique needs.

Filter out the noise with fine-tuned alerts

Every time an analyst creates an alert it takes work. At Rapid7, we want to save you time and advance your security posture — which is where our Detections Library comes in. Curated and managed by our MDR SOC team, you can rest assured that you’ll only be alerted to behaviors that are worthy of human review so that you can make the most out of your limited time and focus on the threats that really matter.

While we focus on creating a curated, high-fidelity library of detections, we know each environment has its unique challenges — which is why our attacker behavior analytics (ABA) detections are robustly tuneable. You can also get more granular with your tuning and take the following actions:

  • Create custom alerts when your organization calls for niche detections.
  • Customize UBA detections so you're in control of which are turned on, aligning your alerting with your environment.
  • Modify ABA detections by changing the rule action, modifying its priority, and adding exceptions to the rule.
  • Stay on top of potential noise with Relative Activity, a new score for ABA detection rules that analyzes and identifies detection rules that might cause frequent investigations or notable events if switched on, as well as determines which rules may benefit from tuning, either by changing the Rule Action or adding exceptions.

Customize dashboards and reports to best suit your team

With InsightIDR, teams have access to over 45 (and counting) dashboards out of the box — from compliance dashboards for frameworks like HIPAA or ISO to Active Directory Admin Activity — to help your team focus on driving faster decision-making.

Analysts can also leverage this pre-built content as a springboard for customizing their own reports. InsightIDR provides multiple query modes and methods for creating data visualizations — so whether you are more comfortable with loose keyword search, working in our intuitive query language, or simply clicking on charts to narrow down results — every analyst can operate as an expert, regardless of their prior SIEM experience.

Easily edit dashboard card properties

InsightIDR also makes it easy to share findings and important metrics with anyone in your organization — send an interactive HTML or PDF report of any dashboard with the click of a button.

Create HTML reports in InsightIDR

Check out the other ways InsightIDR can help drive successful detection and response for your team here.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

[The Lost Bots] Season 2, Episode 1: SIEM Deployment in 10 Minutes

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/06/30/the-lost-bots-season-2-episode-1-siem-deployment-in-10-minutes/


Welcome back to The Lost Bots! In the first installment of Season 2, Rapid7 Detection and Response (D&R) Practice Advisor Jeffrey Gardner and his new co-host Stephen Davis, Lead D&R Sales Technical Advisor, give us their five pillars of success for deploying a security information and event management (SIEM) solution. They tell us which pillars are their favorites and how security practitioners — including our hosts themselves — sometimes misstep in these areas.

Watch below for a rundown of how to successfully deploy a SIEM, all in a cool 10 minutes. (Fair warning: Your actual SIEM deployment might take slightly longer than it takes to watch this episode.)


Throughout Season 2, Jeffrey and Stephen will talk through some of the biggest topics and most pressing questions in D&R and cybersecurity, both one-on-one and with guests. We’ll be publishing new episodes on the last Thursday of every month. See you in July!

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Additional reading:

The Average SIEM Deployment Takes 6 Months. Don’t Be Average.

Post Syndicated from Margaret Wei original https://blog.rapid7.com/2022/06/02/the-average-siem-deployment-takes-6-months-dont-be-average/


If you’re part of the huge growth in demand for cloud-based SIEM (Security Information and Event Management), claim your copy of the new Gartner® Report: “How to Deploy a SIEM Solution Successfully.”

Depending on what SIEM you choose, and how you approach the process, getting to operational and effective can take days, or months, or a lot longer.

Here are the Gartner report’s key findings:

  1. “Ineffective security information and event management (SIEM) deployments occur when requirements and use cases are not aligned with the organization’s risks and risk tolerance.”
  2. “Clients deploying SIEM solutions continue to take an unstructured approach when deciding which event and data sources to onboard, with the goal of getting every source in from the beginning. This leads to long and complex implementations, cost overruns, and higher probabilities of stalled or failed implementations.”
  3. “SIEM buyers struggle to choose between on-premises, cloud, or hybrid deployments due to the complexities created by the various environments that need to be monitored, e.g., on-premises, SaaS, cloud infrastructure and platform services (CIPS), remote workers.”

SIEM centralizes and visualizes your security data to help you identify anomalies in your environment. But nearly all SIEMs require you to do a ton of customizing and configuration. Nearly all disappoint with their detections. And nearly all will exhaust you with false-positive alerts… every hour of every day… until analysts start ignoring alerts, which will surely doom you someday.

Now, here’s what we think

Rapid7 began building InsightIDR nearly a decade ago. While the threat landscape keeps changing, our mission never has: to empower you to find and extinguish evil earlier, faster, easier.

InsightIDR has never been a traditional SIEM. You should consider it if:

Fast deployment is a priority to you. InsightIDR leads the SIEM market in deployment times. With SaaS delivery and a native cloud foundation, customers can be deployed and operational in days and weeks – not months and years.

Time-to-value and tangible ROI matter to your leadership team. InsightIDR combines the best of next-gen SIEM with native extended detection and response (XDR). Get highly correlated UEBA, EDR, NDR, and Cloud detections alongside your critical security logs and policy monitoring, compliance dashboards, and reporting in a single pane of glass.

Your team is tired of false positives. InsightIDR’s expertly vetted detection library provides holistic threat coverage across your entire attack surface. An emphasis on high-fidelity, low-noise detections ensures that all alerts are relevant and ready for action.

You’re ready to accelerate your security posture. InsightIDR empowers teams to up-level their security and achieve sophisticated outcomes – without the complexity of traditional SIEMs. Embedded security orchestration and automation (SOAR) capabilities give you enviable security operations center (SOC) automation and enable even new analysts to respond like experts.

Don’t forget your copy of the new Gartner® Report: “How to Deploy a SIEM Solution Successfully.”

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Gartner, How to Deploy a SIEM Solution Successfully, Andrew Davies, Mitchell Schneider, Toby Bussa, Kelly Kavanagh, 7 July 2021

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Integrating Network Analytics Logs with your SIEM dashboard

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/network-analytics-logs/


We’re excited to announce the availability of Network Analytics Logs. Magic Transit, Magic Firewall, Magic WAN, and Spectrum customers on the Enterprise plan can feed packet samples directly into storage services, network monitoring tools such as Kentik, or their Security Information Event Management (SIEM) systems such as Splunk to gain near real-time visibility into network traffic and DDoS attacks.

What’s included in the logs

Once you create a Network Analytics Logs job, Cloudflare will continuously push logs of packet samples directly to the HTTP endpoint of your choice, including WebSockets. The logs arrive in JSON format, which makes them easy to parse, transform, and aggregate. The logs include packet samples of traffic dropped and passed by the following systems:

  1. Network-layer DDoS Protection Ruleset
  2. Advanced TCP Protection
  3. Magic Firewall

Note that not all mitigation systems are applicable to all Cloudflare services. Below is a table describing which mitigation service is applicable to which Cloudflare service:

Mitigation System                     | Magic Transit | Magic WAN | Spectrum
Network-layer DDoS Protection Ruleset | ✔             |           | ✔
Advanced TCP Protection               | ✔             |           |
Magic Firewall                        | ✔             | ✔         |

Packets are processed by the mitigation systems in the order outlined above. Therefore, a packet that passed all three systems may produce three packet samples, one from each system. This can be very insightful when troubleshooting and wanting to understand where in the stack a packet was dropped. To avoid overcounting the total passed traffic, Magic Transit users should only take into consideration the passed packets from the last mitigation system, Magic Firewall.
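
To make the overcounting point concrete, here is a minimal Python sketch. It assumes the packet samples have already been parsed from JSON; the system identifiers other than magic-firewall are illustrative placeholders, not confirmed field values.

```python
# Hypothetical samples produced by ONE packet that passed all three
# mitigation systems (one sample record per system).
samples = [
    {"MitigationSystem": "dosd", "Outcome": "pass", "SampleInterval": 100},
    {"MitigationSystem": "flowtrackd", "Outcome": "pass", "SampleInterval": 100},
    {"MitigationSystem": "magic-firewall", "Outcome": "pass", "SampleInterval": 100},
]

# Naive total: counts the same packet three times.
naive = sum(s["SampleInterval"] for s in samples if s["Outcome"] == "pass")

# For Magic Transit, count passed traffic only from the last system.
passed = sum(
    s["SampleInterval"]  # each 1-in-N sample represents ~N packets
    for s in samples
    if s["Outcome"] == "pass" and s["MitigationSystem"] == "magic-firewall"
)
# naive == 300, passed == 100
```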

An example of a packet sample log:

{"AttackCampaignID":"","AttackID":"","ColoName":"bkk06","Datetime":1652295571783000000,"DestinationASN":13335,"Direction":"ingress","IPDestinationAddress":"(redacted)","IPDestinationSubnet":"/24","IPProtocol":17,"IPSourceAddress":"(redacted)","IPSourceSubnet":"/24","MitigationReason":"","MitigationScope":"","MitigationSystem":"magic-firewall","Outcome":"pass","ProtocolState":"","RuleID":"(redacted)","RulesetID":"(redacted)","RulesetOverrideID":"","SampleInterval":100,"SourceASN":38794,"Verdict":"drop"}

All the available log fields are documented here: https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/
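
Because the logs are plain JSON, parsing them is straightforward. The sketch below works on an abbreviated copy of the sample line above; the assumption that Datetime is Unix nanoseconds matches the sample value, but verify it against the log-fields reference.

```python
import json
from datetime import datetime, timezone

# Abbreviated version of the sample log line above (redacted fields omitted).
line = ('{"ColoName":"bkk06","Datetime":1652295571783000000,"IPProtocol":17,'
        '"MitigationSystem":"magic-firewall","Outcome":"pass",'
        '"SampleInterval":100,"Verdict":"drop"}')

rec = json.loads(line)

# Datetime appears to be nanoseconds since the Unix epoch.
ts = datetime.fromtimestamp(rec["Datetime"] / 1_000_000_000, tz=timezone.utc)

# With 1-in-100 sampling, each record represents ~SampleInterval packets.
estimated_packets = rec["SampleInterval"]
```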

Setting up the logs

In this walkthrough, we will demonstrate how to feed the Network Analytics Logs into Splunk via Postman. At this time, it is only possible to set up Network Analytics Logs via API. Setting up the logs requires three main steps:

  1. Create a Cloudflare API token.
  2. Create a Splunk Cloud HTTP Event Collector (HEC) token.
  3. Create and enable a Cloudflare Logpush job.

Let’s get started!

1) Create a Cloudflare API token

  1. Log in to your Cloudflare account and navigate to My Profile.
  2. On the left-hand side, in the collapsing navigation menu, click API Tokens.
  3. Click Create Token and then, under Custom token, click Get started.
  4. Give your custom token a name, and select an Account-scoped permission to edit Logs. You can scope the token to a single account, a subset of your accounts, or all of them.
  5. At the bottom, click Continue to summary, and then Create Token.
  6. Copy and save your token. You can also test your token with the provided snippet in Terminal.

When you’re using an API token, you don’t need to provide your email address as part of the API credentials.

Read more about creating an API token on the Cloudflare Developers website: https://developers.cloudflare.com/api/tokens/create/
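
As a quick sanity check of token-based authentication, the sketch below builds (but does not send) a request to Cloudflare's token verification endpoint; the token value is a placeholder.

```python
import urllib.request

api_token = "EXAMPLE_TOKEN"  # placeholder; use the token you copied in step 6

# With an API token, only the Authorization header is required -- no
# X-Auth-Email / X-Auth-Key pair as with legacy API keys.
req = urllib.request.Request(
    "https://api.cloudflare.com/client/v4/user/tokens/verify",
    headers={"Authorization": f"Bearer {api_token}"},
)
```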

2) Create a Splunk token for an HTTP Event Collector

In this walkthrough, we’re using a Splunk Cloud free trial, but you can use almost any service that can accept logs over HTTPS. In some cases, if you’re using an on-premises SIEM solution, you may need to allowlist Cloudflare IP addresses in your firewall to receive the logs.

  1. Create a Splunk Cloud account. I created a trial account for the purpose of this blog.
  2. In the Splunk Cloud dashboard, go to Settings > Data inputs.
  3. Next to HTTP Event Collector, click Add new.
  4. Follow the steps to create a token.
  5. Copy your token and your allocated Splunk hostname and save both for later.

Read more about using Splunk with Cloudflare Logpush on the Cloudflare Developers website: https://developers.cloudflare.com/logs/get-started/enable-destinations/splunk/

Read more about creating an HTTP Event Collector token on Splunk’s website: https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector

3) Create a Cloudflare Logpush job

Creating and enabling a job is straightforward: it requires only a single API call to Cloudflare.

To send the API calls I used Postman, which is a user-friendly API client that was recommended to me by a colleague. It allows you to save and customize API calls. You can also use Terminal/CMD or any other API client/script of your choice.

One thing to note is that Network Analytics Logs are account-scoped. The API endpoint is therefore a tad different from what you would normally use for zone-scoped datasets such as HTTP request logs and DNS logs.

This is the endpoint for creating an account-scoped Logpush job:

https://api.cloudflare.com/client/v4/accounts/{account-id}/logpush/jobs

Your account identifier is a unique, 32-character string of numbers and letters that identifies your account. If you’re not sure what your account identifier is, log in to Cloudflare, select the appropriate account, and copy the string at the end of the URL.

https://dash.cloudflare.com/{account-id}

Then, set up a new request in Postman (or any other API client/CLI tool).

To successfully create a Logpush job, you’ll need the HTTP method, URL, Authorization token, and request body (data). The request body must include a destination configuration (destination_conf), the specified dataset (network_analytics_logs, in our case), and the token (your Splunk token).

Method:

POST

URL:

https://api.cloudflare.com/client/v4/accounts/{account-id}/logpush/jobs

Authorization: Define a Bearer authorization in the Authorization tab, or add it to the header, and add your Cloudflare API token.

Body: Select Raw > JSON

{
  "destination_conf": "{your-unique-splunk-configuration}",
  "dataset": "network_analytics_logs",
  "token": "{your-splunk-hec-token}",
  "enabled": true
}

If you’re using Splunk Cloud, then your unique configuration has the following format:

{your-unique-splunk-configuration}=splunk://{your-splunk-hostname}.splunkcloud.com:8088/services/collector/raw?channel={channel-id}&header_Authorization=Splunk%20{your-splunk-hec-token}&insecure-skip-verify=false

Definition of the variables:

{your-splunk-hostname} = Your allocated Splunk Cloud hostname.

{channel-id} = A unique ID that you choose to assign.

{your-splunk-hec-token} = The token that you generated for your Splunk HEC.

Note that you should have a valid SSL/TLS certificate on your Splunk instance to support an encrypted connection.
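
Assembling destination_conf by hand is error-prone because the Splunk token must be URL-encoded ("Splunk <token>" contains a space). A Python sketch, with placeholder hostname, channel ID, and token values:

```python
from urllib.parse import quote

splunk_hostname = "prd-x-xxxxx"                      # placeholder allocated hostname
channel_id = "11111111-2222-3333-4444-555555555555"  # any unique ID you choose
splunk_hec_token = "abc123"                          # placeholder HEC token

destination_conf = (
    f"splunk://{splunk_hostname}.splunkcloud.com:8088/services/collector/raw"
    f"?channel={channel_id}"
    f"&header_Authorization={quote('Splunk ' + splunk_hec_token)}"  # space -> %20
    "&insecure-skip-verify=false"
)

# Request body for the Logpush job creation call.
job_body = {
    "destination_conf": destination_conf,
    "dataset": "network_analytics_logs",
    "token": splunk_hec_token,
    "enabled": True,
}
```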

After you’ve done that, you can create a GET request to the same URL (no request body needed) to verify that the job was created and is enabled.

The response should be similar to the following:

{
    "errors": [],
    "messages": [],
    "result": {
        "id": {job-id},
        "dataset": "network_analytics_logs",
        "frequency": "high",
        "kind": "",
        "enabled": true,
        "name": null,
        "logpull_options": null,
        "destination_conf": "{your-unique-splunk-configuration}",
        "last_complete": null,
        "last_error": null,
        "error_message": null
    },
    "success": true
}

Shortly after, you should start receiving logs to your Splunk HEC.

Read more about enabling Logpush on the Cloudflare Developers website: https://developers.cloudflare.com/logs/reference/logpush-api-configuration/examples/example-logpush-curl/
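
To check the job programmatically, parse the GET response and inspect the success and enabled flags. A sketch using a response body shaped like the example above (the job ID is made up):

```python
import json

# Hypothetical response body from the GET verification request.
response_text = """
{
  "errors": [],
  "messages": [],
  "result": {
    "id": 123456,
    "dataset": "network_analytics_logs",
    "enabled": true,
    "last_error": null
  },
  "success": true
}
"""

body = json.loads(response_text)
assert body["success"], body["errors"]  # fail loudly if the API reported errors
job = body["result"]
status = "enabled" if job["enabled"] else "disabled"
```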

Reduce costs with R2 storage

Depending on the amount of logs that you read and write, the cost of third party cloud storage can skyrocket — forcing you to decide between managing a tight budget and being able to properly investigate networking and security issues. However, we believe that you shouldn’t have to make those trade-offs. With R2’s low costs, we’re making this decision easier for our customers. Instead of feeding logs to a third party, you can reap the cost benefits of storing them in R2.

To learn more about the R2 features and pricing, check out the full blog post. To enable R2, contact your account team.

Cloudflare logs for maximum visibility

Cloudflare Enterprise customers have access to detailed logs of the metadata generated by our products. These logs are helpful for troubleshooting, identifying network and configuration adjustments, and generating reports, especially when combined with logs from other sources, such as your servers, firewalls, routers, and other appliances.

Network Analytics Logs joins Cloudflare’s family of products on Logpush: DNS logs, Firewall events, HTTP requests, NEL reports, Spectrum events, Audit logs, Gateway DNS, Gateway HTTP, and Gateway Network.

Not using Cloudflare yet? Start now with our Free and Pro plans to protect your websites against DDoS attacks, or contact us for comprehensive DDoS protection and firewall-as-a-service for your entire network.

SIEM and XDR: What’s Converging, What’s Not

Post Syndicated from Amy Hunt original https://blog.rapid7.com/2022/03/23/siem-and-xdr-whats-converging-whats-not/

Let’s start with the conclusion: Security information and event management (SIEM) isn’t going anywhere anytime soon.

Today, most security analysts are using their SIEMs for detection and response, making it the core tool within the security operations center (SOC). SIEM aggregates and monitors critical security telemetry, enables companies to monitor and detect threats specific to their environment and policy violations, and addresses key regulatory and compliance use cases. It has served – and will continue to serve – very important, specific purposes in the security technology stack.

Where SIEMs have traditionally struggled is in keeping pace with the threat landscape. It expands and changes daily. Very, very few security teams have the resources to consume all the relevant threat intelligence, then create the rules and configure the detections necessary to find them.

Rapid7’s SIEM, InsightIDR, is the exception, designed with a detections-first approach.

InsightIDR leverages internal and external threat intelligence, encompassing your entire attack surface. Our detection library includes threat intelligence from Rapid7’s open-source community, advanced attack surface mapping, and proprietary machine learning. Detections are curated and constantly fine-tuned by our expert Threat Intelligence and Detections Engineering team.

InsightIDR is the only SIEM that can actually do extended detection and response (XDR). And we can’t help but think all the XDR buzz is the security industry’s way of letting you know that, yes, detection and response performance is still lacking.

A cloud SIEM can provide a strong XDR foundation — agile, tailored, adaptable, and elastic

A cloud SIEM approach gives you an elastic data lake that lets you collect and process telemetry across the environment. And the core benefits of SIEM are yours: log retention, fast and flexible search, reporting, and the ability to fine-tune and customize policy violations or other rules specifically for your environment or organization. Cloud SIEM with user and entity behavior analytics (UEBA) and correlation capabilities can already achieve XDR, tying disparate data sources together to normalize, correlate/attribute, and analyze.

Of course, some customers that purchased traditional SIEM for detection and response haven’t been able to get those outcomes. They don’t have a next-generation SIEM that supports big data and real-time event analysis. Perhaps machine learning and behavioral analytics aren’t there yet.

Or maybe the SIEM has security teams drowning in alerts, ignoring too many of them. Detection and response is really hard — and it really is a symphony — especially as the environment continues to sprawl and resources remain scarce.

XDR aims to solve the challenges of the SIEM tool for effective detection and response to targeted attacks and includes behavior analysis, threat intelligence, behavior profiling, recommendations, and automation. The foundation is everything.

When we introduced InsightIDR some time ago, some criticized it as trying to do “too much”

It turns out we were doing XDR.

Today, our highly manicured detections library is expertly vetted by our global Rapid7 Managed Detection and Response (MDR) SOC, where we also get emergent threat coverage. It’s single-platform, integrated with raw threat intel from Rapid7’s open-source communities (Metasploit, Heisenberg, Sonar, Velociraptor) and strengthened signal-to-noise following our acquisition of IntSights external threat intelligence.

Call it what you like

SIEM and XDR are described as “alternatives,” “complementary,” and also barreling toward one another destined to collide. We’ve read how one is dead and the other is the future. (Must it always be this way?)

No matter what you call it, focus on the outcomes, not the acronyms. It’s easy to get lost in the buzz, but the best products for your business will be those that address your top priorities.

How to use AWS Security Hub and Amazon OpenSearch Service for SIEM

Post Syndicated from Ely Kahn original https://aws.amazon.com/blogs/security/how-to-use-aws-security-hub-and-amazon-opensearch-service-for-siem/

AWS Security Hub provides you with a consolidated view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and current AWS security recommendations. Although Security Hub has some similarities to security information and event management (SIEM) tools, it is not designed as a standalone SIEM replacement. For example, Security Hub only ingests AWS-related security findings and does not directly ingest higher volume event logs, such as AWS CloudTrail logs. If you have use cases to consolidate AWS findings with other types of findings from on-premises or other non-AWS workloads, or if you need to ingest higher volume event logs, we recommend that you use Security Hub in conjunction with a SIEM tool.

There are also other benefits to using Security Hub and a SIEM tool together. These include being able to store findings for longer periods of time than Security Hub retains them, aggregating findings across multiple administrator accounts, and further correlating Security Hub findings with each other and with other log sources. In this blog post, we will show you how you can use Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) as a SIEM and integrate Security Hub with it to accomplish these three use cases. Amazon OpenSearch Service is a fully managed service that makes it easier to deploy, manage, and scale OpenSearch and Kibana. OpenSearch is a distributed, RESTful search and analytics engine that is capable of addressing a growing number of use cases. You can expand OpenSearch Service by integrating it with other AWS services, such as Amazon Kinesis Data Streams or Amazon Kinesis Data Firehose, by using traditional agents like Beats and Logstash for log ingestion, and by using Kibana for data visualization. Although OpenSearch Service is not an out-of-the-box SIEM tool, with some customization, you can use it for SIEM use cases.

Security Hub plus SIEM use cases

By enabling Security Hub within your AWS Organizations account structure, you immediately start receiving the benefits of viewing all of your security findings from across various AWS and partner services on a single screen. Some organizations want to go a step further and use Security Hub in conjunction with a SIEM tool for the following reasons:

  • Correlate Security Hub findings with each other and other log sources – This is the most popular reason customers choose to implement this solution. If you have various log sources outside of Security Hub findings (such as application logs, database logs, partner logs, and security tooling logs), then it makes sense to consolidate these log sources into a single SIEM solution. Then you can view both your Security Hub findings and miscellaneous logs in the same place and create alerts based on interesting correlations.
  • Store findings for longer than 90 days after the last update date – Some organizations want or need to store Security Hub findings for longer than 90 days after the last update date. They may want to do this for historical investigation, or for audit and compliance needs. Either way, this solution offers you the ability to store Security Hub findings in a private Amazon Simple Storage Service (Amazon S3) bucket, which is then consumed by Amazon OpenSearch Service.
  • Aggregate findings across multiple administrator accounts – Security Hub has a feature customers can use to designate an administrator account if they have enabled Security Hub in multiple accounts. A Security Hub administrator account can view data from and manage configuration for its member accounts. This allows customers to view and manage all their findings from multiple member accounts in one place. Sometimes customers have multiple Security Hub administrator accounts, because they have multiple organizations in AWS Organizations. In this situation, you can use this solution to consolidate all of the Security Hub administrator accounts into a single OpenSearch Service with Kibana SIEM implementation to have a single view across your environments. This related blog post walks through this use case in more detail, and shows how to centralize Security Hub findings across multiple AWS Regions and administrators. However, this blog post takes this approach further by introducing OpenSearch Service with Kibana to the use case, for a full SIEM experience.

Solution architecture

Figure 1: SIEM implementation on Amazon OpenSearch Service

The solution represented in Figure 1 shows the flexibility of integrations that are possible when you create a SIEM by using Amazon OpenSearch Service. The solution allows you to aggregate findings across multiple accounts, store findings in an S3 bucket indefinitely, and correlate multiple AWS and non-AWS services in one place for visualization. This post focuses on Security Hub’s integration with the solution, but the following AWS services are also able to integrate:

Each of these services has its own dedicated dashboard within the OpenSearch SIEM solution. This makes it possible for customers to view findings and data that are relevant to each service that the SIEM tool is ingesting. OpenSearch Service also allows the customer to create aggregated dashboards, consolidating multiple services within a single dashboard, if needed.

Prerequisites

We recommend that you enable Security Hub and AWS Config across all of your accounts and Regions. For more information about how to do this, see the documentation for Security Hub and AWS Config. We also recommend that you use Security Hub and AWS Config integration with AWS Organizations to simplify the setup and automatically enable these services in all current and future accounts in your organization.

Launch the solution

To launch this solution within your environment, you can either deploy the provided AWS CloudFormation template, or follow the steps presented later in this post to customize the deployment to support integrations with non-AWS services and multi-organization deployments, or to launch within your existing OpenSearch Service environment.

To launch the solution, follow the instructions for SIEM on Amazon OpenSearch Service on GitHub.

Use the solution

Before you start using the solution, we’ll show you how this solution appears in the Security Hub dashboard, as shown in Figure 2. Navigate here by following Step 3 from the GitHub README.

Figure 2: Pre-built dashboards within solution

The Security Hub dashboard highlights all major components of the service within an OpenSearch Service dashboard environment. This includes supporting all of the service integrations that are available within Security Hub (such as GuardDuty, AWS Identity and Access Management (IAM) Access Analyzer, Amazon Inspector, Amazon Macie, and AWS Systems Manager Patch Manager). The dashboard displays both findings and security standards, and you can filter by AWS account, finding type, security standard, or service integration. Figure 3 shows an overview of the visual dashboard experience when you deploy the solution.

Figure 3: Dashboard preview

Use case 1: Correlate Security Hub findings with each other and other log sources and create alerts

This solution uses OpenSearch Service and Kibana to allow you to search through both Security Hub findings and logs from any other AWS and non-AWS systems. You can then create alerts within Kibana based on interesting correlations between Security Hub and any other logged events. Although Security Hub supports ingesting a vast number of integrations and findings, it cannot create correlation rules like a SIEM tool can. However, you can create such rules using SIEM on OpenSearch Service. It’s important to take a closer look when multiple AWS security services generate findings for a single resource, because this potentially indicates elevated risk or multiple risk vectors. Depending on your environment, the initial number of findings in Security Hub may be high, so you may need to prioritize which findings require immediate action. Security Hub natively gives you the ability to filter findings by resource, account, severity, and many other details.

As part of the findings, you can send notifications through alerts that are generated by SIEM on OpenSearch Service in several ways: through Amazon Simple Notification Service (Amazon SNS), by consuming the messages in an appropriate tool or by configuring recipient email addresses; through Amazon Chime; through Slack (using AWS Chatbot); or through a custom webhook to your organization’s ticketing system. You can then respond to these new security incident-oriented findings through ticketing, chat, or incident management systems.

Solution overview for use case 1

Figure 4: Solution overview diagram

Figure 4 gives an overview of the solution for use case 1. This solution requires that you have Security Hub and GuardDuty enabled in your AWS account. Logs from AWS services, including Security Hub, are ingested into an S3 bucket, and then automatically extracted, transformed, and loaded (ETL) into the SIEM system running on OpenSearch Service by an AWS Lambda function. After capturing the logs, you will be able to visualize them on the dashboard and analyze correlations across multiple logs. Within the SIEM on OpenSearch Service solution, you will create a rule to detect failures, such as CloudTrail authentication failures, in the logs. Then, you will configure the solution to publish alerts to Amazon SNS and send emails when logs match the rules.

Implement the solution for use case 1

You will now set up this workflow to alert you by email when logs in OpenSearch match certain rules that you create.

Step 1: Create and visualize findings in OpenSearch Dashboards

Security Hub and other AWS services export findings to Amazon S3 in a centralized log bucket. You can ingest logs from CloudTrail, VPC Flow Logs, and GuardDuty, which are often used in AWS security analytics. In this step, you import simulated security incident data in OpenSearch Dashboards, and use the dashboard to visualize the data in the logs.

To navigate OpenSearch Dashboards

  1. Generate pseudo-security incidents. You can simulate the results by generating sample findings in GuardDuty.
  2. In OpenSearch Dashboards, go to the Discover screen. The Discover screen is divided into three major sections: Search bar, index/display field list, and time-series display, as shown in Figure 5.
    Figure 5: OpenSearch Dashboards

  3. In OpenSearch Dashboards, select log-aws-securityhub-*, log-aws-vpcflowlogs-*, log-aws-cloudtrail-*, or any other index pattern, and add event.module to the display field. event.module indicates where the log originated from. If you are collecting other threat information through Security Hub, @log-type is Security Hub, and event.module indicates the integrated service the log originated from (for example, Amazon Inspector or Amazon Macie). After you have added event.module, filter on the desired Security Hub integrated service (for example, Amazon Inspector) to display. When testing the environment covered in this blog post outside a production context, you can use Kinesis Data Generator to generate sample user traffic. Other tools are also available.
  4. Select the following on the dashboard to see the visualized information:
    • CloudTrail Summary
    • VpcFlowLogs Summary
    • GuardDuty Summary
    • All – Threat Hunting

Step 2: Configure alerts to match log criteria

Next, you will configure alerts to match log criteria. First you need to set the destination for alerts, and then set what to monitor.

To configure alerts

  1. In OpenSearch Dashboards, in the left menu, choose Alerting.
  2. To add the details of SNS, on the Destinations tab, choose Add destinations, and enter the following parameters:
    • Name: aes-siem-alert-destination
    • Type: Amazon SNS
    • SNS Alert: arn:aws:sns:<AWS-REGION>:<111111111111>:aes-siem-alert
      • Replace <111111111111> with your AWS account ID
      • Replace <AWS-REGION> with the Region you are using, for example, eu-west-1
    • IAM Role ARN: arn:aws:iam::<111111111111>:role/aes-siem-sns-role
      • Replace <111111111111> with your AWS account ID
  3. Choose Create to complete setting the alert destination.
    Figure 6: Edit alert destination

  4. In OpenSearch Dashboards, in the left menu, select Alerting. You will now set what to monitor. Here you will monitor CloudTrail authentication failures. There are two normalized log time fields: @timestamp (the log occurrence time) and event.ingested (the SIEM reception time). Use event.ingested for logs with a large time lag from occurrence to reception. You can specify flexible conditions by selecting Define using extraction query for the filter definition.
  5. On the Monitors tab, choose Create monitor.
  6. Enter the following parameters. If there is no description, use the default value.
    • Name: Authentication failed
    • Method of definition: Define using extraction query
    • Indices: log-aws-cloudtrail-* (manual input, not pull-down)
    • Define extraction query: Enter the following query.
      {
        "query": {
          "bool": {
            "filter": [
              {"term": {"eventSource": "signin.amazonaws.com"}},
              {"term": {"event.outcome": "failure"}},
              {"range": {
                "event.ingested": {
                  "from": "{{period_end}}||-20m",
                  "to": "{{period_end}}"
                }
              }}
            ]
          }
        }
      }
      

  7. Enter the following remaining parameters of the monitor:
    • Frequency: By interval
    • Monitor schedule: Every 3 minutes
  8. Choose Create to create the monitor.
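
To see what the monitor actually evaluates, note that OpenSearch substitutes {{period_end}} with the evaluation time at each run, and "||-20m" is date math meaning "20 minutes earlier". A sketch of that substitution, with an illustrative epoch-milliseconds timestamp:

```python
import json

# The extraction query from the monitor definition (as a raw template).
query_template = '''{
  "query": {
    "bool": {
      "filter": [
        {"term": {"eventSource": "signin.amazonaws.com"}},
        {"term": {"event.outcome": "failure"}},
        {"range": {"event.ingested": {
          "from": "{{period_end}}||-20m",
          "to": "{{period_end}}"
        }}}
      ]
    }
  }
}'''

# OpenSearch fills {{period_end}} in at run time; the value here is made up.
rendered = query_template.replace("{{period_end}}", "1652295571000")
query = json.loads(rendered)  # valid JSON after substitution
```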

Step 3: Set up trigger to send email via Amazon SNS

Now you will set the alert firing condition, known as the trigger. This is the setting for alerting when the monitored conditions (Monitors) are met. By default, the alert will be triggered if the number of hits is greater than 0. In this step, you will not change the condition, only give it a name.

To set up the trigger

  1. Select Create trigger and for Trigger name, enter Authentication failed trigger.
  2. Scroll down to Configure actions.
    Figure 7: Create trigger

  3. Set what the trigger should do (action). In this case, you want to publish to SNS. Set the following parameters for the body of the email:
    • Action name: Authentication failed action
    • Destination: Choose aes-siem-alert-destination – (Amazon SNS)
    • Message subject: (SIEM) Auth failure alert
    • Action throttling: Select Enable action throttling, and set throttle action to only trigger every 10 minutes.
    • Message: Copy and paste the following message into the text box. After pasting, choose Send test message at the bottom right of the screen to confirm that you can receive the test email.

      Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.

      Trigger: {{ctx.trigger.name}}

      Severity: {{ctx.trigger.severity}}

      @timestamp: {{ctx.results.0.hits.hits.0._source.@timestamp}}

      event.action: {{ctx.results.0.hits.hits.0._source.event.action}}

      error.message: {{ctx.results.0.hits.hits.0._source.error.message}}

      count: {{ctx.results.0.hits.total.value}}

      source.ip: {{ctx.results.0.hits.hits.0._source.source.ip}}

      source.geo.country_name: {{ctx.results.0.hits.hits.0._source.source.geo.country_name}}

    Figure 8: Configure actions

  4. You will receive an alert email in a few minutes. You can check the occurrence status, including the history, by the following method:
    1. In OpenSearch Dashboards, on the left menu, choose Alerting.
    2. On the Monitors tab, choose Authentication failed.
    3. You can check the status of the alert in the History pane.
    Figure 9: Email alert

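
The alert message uses Mustache-style {{ctx.*}} variables that OpenSearch alerting resolves when the trigger fires. A toy rendering of that substitution, with made-up sample values:

```python
import re

# Minimal stand-in for the substitution OpenSearch performs itself;
# template and context values are illustrative.
template = ("Monitor {{ctx.monitor.name}} just entered alert status. "
            "Severity: {{ctx.trigger.severity}}")
ctx = {
    "ctx.monitor.name": "Authentication failed",
    "ctx.trigger.severity": "1",
}

# Replace each {{...}} placeholder with its value from the context.
rendered = re.sub(r"\{\{(.*?)\}\}", lambda m: str(ctx.get(m.group(1), "")), template)
# rendered == "Monitor Authentication failed just entered alert status. Severity: 1"
```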

Use case 1 shows you how to correlate various Security Hub findings through this OpenSearch Service SIEM solution. However, you can take the solution a step further and build more complex correlation checks by following the procedure in the blog post Correlate security findings with AWS Security Hub and Amazon EventBridge. This information can then be ingested into this OpenSearch Service SIEM solution for viewing on a single screen.

Use case 2: Store findings for longer than 90 days after last update date

Security Hub has a maximum storage time of 90 days for events, but your organization might require data storage beyond that period, with flexibility to specify a custom retention period to meet your needs. The SIEM on Amazon OpenSearch Service solution creates a centralized S3 bucket where findings from Security Hub and various other services are collected and stored, and this bucket can be configured to store data as long as you require. The S3 bucket can persist data indefinitely, or you can create an S3 object lifecycle policy to set a custom retention timeframe. Lifecycle policies allow you to either transition objects between S3 storage classes or delete objects after a specified period. Alternatively, you can use S3 Intelligent-Tiering to allow the Amazon S3 service to move data between tiers, based on user access patterns.

Either lifecycle policies or S3 Intelligent-Tiering will allow you to optimize costs for data that is stored in S3, to keep data for archive or backup purposes when it is no longer available in Security Hub or OpenSearch Service. Within the solution, this centralized bucket is called aes-siem-xxxxxxxx-log and is configured to store data for OpenSearch Service to consume indefinitely. The Amazon S3 User Guide has instructions for configuring an S3 lifecycle policy that is explicitly defined by the user on the centralized bucket. Or you can follow the instructions for configuring intelligent tiering to allow the S3 service to manage which tier data is stored in automatically. After data is archived, you can use Amazon Athena to query the S3 bucket for historical information that has been removed from OpenSearch Service, because this S3 bucket acts as a centralized security event repository.
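
As one illustration of an explicitly defined lifecycle policy, the sketch below transitions objects to S3 Glacier after 90 days and expires them after two years. The prefix and time periods are illustrative, and the boto3 call is shown commented out because it requires account credentials.

```python
# Illustrative lifecycle configuration for the centralized log bucket.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "siem-findings-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "AWSLogs/"},  # assumed log prefix
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 730},  # delete after ~2 years
        }
    ]
}

# To apply it (requires credentials and your real bucket name):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="aes-siem-xxxxxxxx-log",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```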

Use case 3: Aggregate findings across multiple administrator accounts

There are cases where you might have multiple Security Hub administrator accounts within one or more organizations. For these use cases, you can consolidate findings from these administrator accounts into a single S3 bucket for centralized storage, archive, backup, and querying. This lets you build a single SIEM on OpenSearch Service and minimize the number of monitoring tools you need. To do this, you can use S3 replication to automatically copy findings to a centralized S3 bucket. You can follow this detailed walkthrough on how to set up the correct bucket permissions to allow replication between the accounts. You can also follow this related blog post to centralize cross-Region Security Hub findings in a single S3 bucket, if cross-Region replication is appropriate for your security needs. With cross-account S3 replication set up for Security Hub archived event data, you can import data from the centralized S3 bucket into OpenSearch Service by using the Lambda function within the solution in this blog post. The Lambda function automatically normalizes and enriches the log data and imports it into OpenSearch Service, so you only need to land the data in the S3 bucket.
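As a hedged sketch of the replication setup described above, the configuration each administrator account would attach to its findings bucket might look like the following. The role ARN, bucket ARN, and account ID are placeholders, and both source and destination buckets must have versioning enabled for S3 replication to work.

```python
def build_replication_config(role_arn, dest_bucket_arn, dest_account_id):
    """Replication rule that copies every new object to the centralized
    bucket in another account, handing object ownership to that account."""
    return {
        "Role": role_arn,  # IAM role that S3 assumes to replicate objects
        "Rules": [
            {
                "ID": "replicate-findings-to-central-bucket",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": dest_bucket_arn,
                    "Account": dest_account_id,
                    # Make the central account the owner of replicated objects
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    }

config = build_replication_config(
    role_arn="arn:aws:iam::111111111111:role/s3-findings-replication",
    dest_bucket_arn="arn:aws:s3:::central-siem-findings",
    dest_account_id="222222222222",
)
# Apply with: boto3.client("s3").put_bucket_replication(
#     Bucket="<source findings bucket>", ReplicationConfiguration=config)
```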

Conclusion

In this blog post, we showed how you can use Security Hub with a SIEM to store findings for longer than 90 days, aggregate findings across multiple administrator accounts, and correlate Security Hub findings with each other and other log sources. We used the solution to walk through building the SIEM and explained how Security Hub could be used within that solution to add greater flexibility. This post describes one solution to create your own SIEM using OpenSearch Service; however, we also recommend that you read the blog post Visualize AWS Security Hub Findings using Analytics and Business Intelligence Tools, in order to see a different method of consolidating and visualizing insights from Security Hub.

To learn more, you can also try out this solution through the new SIEM on Amazon OpenSearch Service workshop.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, please start a new thread on the Security Hub forum or contact AWS Support.

 


Ely Kahn


Ely Kahn is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Anthony Pasquariello


Anthony Pasquariello is a Senior Solutions Architect at AWS based in New York City. He specializes in modernization and security for our advanced enterprise customers. Anthony enjoys writing and speaking about all things cloud. He’s pursuing an MBA, and received his MS and BS in Electrical & Computer Engineering.

Aashmeet Kalra


Aashmeet Kalra is a Principal Solutions Architect working in the Global and Strategic team at AWS in San Francisco. Aashmeet has over 17 years of experience designing and developing innovative solutions for customers globally. She specializes in advanced analytics, machine learning and builder/developer experience.

Grant Joslyn


Grant Joslyn is a solutions architect for the US state and local government public sector team at Amazon Web Services (AWS). He specializes in end user compute and cloud automation. He provides technical and architectural guidance to customers building secure solutions on AWS. He is a subject matter expert and thought leader for strategic initiatives that help customers embrace DevOps practices.

Akihiro Nakajima


Akihiro Nakajima is a Senior Solutions Architect, Security Specialist at Amazon Web Services Japan. He has more than 20 years of experience in security, specifically focused on incident analysis and response, threat hunting, and digital forensics. He leads development of open-source software, “SIEM on Amazon OpenSearch Service”.

This CISO Isn’t Real, but His Problems Sure Are

Post Syndicated from Amy Hunt original https://blog.rapid7.com/2022/02/22/this-ciso-isnt-real-but-his-problems-sure-are/


In 2021, data breaches soared past 2020 levels. This year, it’s expected to be worse. The odds are stacked against this poor guy (and you) now – but a unified extended detection and response (XDR) and SIEM restacks them in your favor.

Take a few minutes to check out this CISO’s day, and you’ll see how.

Go to this resource-rich page for smart, fast information, and a few minutes of fun too. Don’t miss it.


Still here on this page reading? Fine, let’s talk about you.

Most CISOs like adrenaline, but c’mon

Cybersecurity isn’t for the fragile foam flowers among us, people who require shade and soft breezes. A little chaos is fun. Adrenaline and cortisol? They give you heightened physical and mental capacity. But it becomes problematic when it doesn’t stop, when you don’t remember your last 40-hour week, or when weekends and holidays are wrecked.

Work-life balance programs are funny, right?

A lot of your co-workers may be happy, but life in the SOC is its own thing. CISOs average about two years in their jobs. And 40% admit job stress has affected their relationships with their partners and/or children.

Many of your peers agree: Unified SIEM and XDR changes everything

A whopping 88% of Rapid7 customers say their detection and response has improved since they started using InsightIDR. And 93% say our unified SIEM and XDR has helped them level up and advance security programs.

You have the power to change your day. See how this guy did.

More products, more partners, and a new look for Cloudflare Logs

Post Syndicated from Bharat Nallan Chakravarthy original https://blog.cloudflare.com/logpush-ui-update/


We are excited to announce a new look and new capabilities for Cloudflare Logs! Customers on our Enterprise plan can now configure Logpush for Firewall Events and Network Error Logs Reports directly from the dashboard. Additionally, it’s easier to send Logs directly to our analytics partners Microsoft Azure Sentinel, Splunk, Sumo Logic, and Datadog. This blog post discusses how customers use Cloudflare Logs, how we’ve made it easier to consume logs, and tours the new user interface.

New data sets for insight into more products

Cloudflare Logs are almost as old as Cloudflare itself, but we have a few big improvements: new datasets and new destinations.

Cloudflare has a large number of products, and nearly all of them can generate Logs in different data sets. We have “HTTP Request” Logs, or one log line for every L7 HTTP request that we handle (whether cached or not). We also provide connection Logs for Spectrum, our proxy for any TCP or UDP based application. Gateway, part of our Cloudflare for Teams suite, can provide Logs for HTTP and DNS traffic.

Today, we are introducing two new data sets:

Firewall Events gives insight into malicious traffic handled by Cloudflare. It provides detailed information about everything our WAF does. For example, Firewall Events shows whether a request was blocked outright or whether we issued a CAPTCHA challenge.  About a year ago we introduced the ability to send Firewall Events directly to your SIEM; starting today, I’m thrilled to share that you can enable this directly from the dashboard!

Network Error Logging (NEL) Reports provides information about clients that can’t reach our network. To enable NEL Reports for your zone and start seeing where clients are having issues reaching our network, reach out to your account manager.

Take your Logs anywhere with an S3-compatible API

To start using logs, you need to store them first. Cloudflare has long supported AWS, Azure, and Google Cloud as storage destinations. But we know that customers use a huge variety of storage infrastructure, which could be hosted on-premise or with one of our Bandwidth Alliance partners.

Starting today, we support any storage destination with an S3-compatible API, including providers such as Backblaze B2 Cloud Storage. And best of all, it’s super easy to get data into these locations using our new UI!
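Once Logpush is delivering to an S3-compatible bucket, reading the logs back out with a standard S3 SDK only requires overriding the endpoint URL. A quick sketch in Python (the endpoint and credentials here are placeholders, not real values):

```python
def s3_client_kwargs(endpoint_url, access_key_id, secret_access_key):
    """Arguments for boto3.client() targeting an S3-compatible store."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,  # the provider's S3-compatible endpoint
        "aws_access_key_id": access_key_id,
        "aws_secret_access_key": secret_access_key,
    }

kwargs = s3_client_kwargs(
    "https://s3.us-west-000.backblazeb2.com",  # placeholder endpoint
    "EXAMPLE_KEY_ID",
    "EXAMPLE_APP_KEY",
)
# import boto3
# client = boto3.client(**kwargs)  # then client.list_buckets(), get_object(), ...
```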

“As always, we love that our partnership with Cloudflare allows us to seamlessly offer customers our easy, plug and play storage solution, Backblaze B2 Cloud Storage. Even better is that, as founding members of the Bandwidth Alliance, we can do it all with free egress.”
Nilay Patel, Co-founder and VP of Solutions Engineering and Sales, Backblaze.

Push Cloudflare Logs directly to our analytics partners

While many customers like to store Logs themselves, we’ve also heard that many customers want to get Logs into their analytics provider directly — without going through another layer. Getting high volume log data out of object storage and into an analytics provider can require building and maintaining a costly, time-consuming, and fragile integration.

Because of this, we now provide direct integrations with four analytics platforms: Microsoft Azure Sentinel, Sumo Logic, Splunk, and Datadog. And starting today, you can push Logs directly into Sumo Logic, Splunk and Datadog from the UI! Customers can add Cloudflare to Azure Sentinel using the Azure Marketplace.

“Organizations are in a state of digital transformation on a journey to the cloud. Most of our customers deploy services in multiple clouds and have legacy systems on premise. Splunk provides visibility across all of this, and more importantly, with SOAR we can automate remediation. We are excited about the Cloudflare partnership, and adding their data into Splunk drives the outcomes customers need to modernize their security operations.”
Jane Wong, Vice President, Product Management, Security at Splunk

“Securing enterprise IT environments can be challenging – from devices, to users, to apps, to data centers on-premises or in the cloud. In today’s environment of increasingly sophisticated cyber-attacks, our mutual customers rely on Microsoft Azure Sentinel for a comprehensive view of their enterprise.  Azure Sentinel enables SecOps teams to collect data at cloud scale and empowers them with AI and ML to find the real threats in those signals, reducing alert fatigue by as much as 90%. By integrating directly with Cloudflare Logs we are making it easier and faster for customers to get complete visibility across their entire stack.”
Sarah Fender, Partner Group Program Manager, Azure Sentinel at Microsoft

“As a long time Cloudflare partner we’ve worked together to help joint customers analyze events and trends from their websites and applications to provide end-to-end visibility to improve digital experiences. We’re excited to expand our partnership as part of the Cloudflare Analytics Ecosystem to provide comprehensive real-time insights for both observability and the security of mission-critical applications and services with our Cloud SIEM solution.”
John Coyle, Vice President of Business Development for Sumo Logic

“Knowing that applications perform as well in the real world as they do in the datacenter is critical to ensuring great digital experiences. Combining Cloudflare Logs with Datadog telemetry about application performance in a single pane of glass ensures teams will have a holistic view of their application delivery.”
Michael Gerstenhaber, Sr. Director of Product, Datadog

Why Cloudflare Logs?

Cloudflare’s mission is to help build a better Internet. We do that by providing a massive global network that protects and accelerates our customers’ infrastructure. Because traffic flows across our network before reaching our customers, it means we have a unique vantage point into that traffic. In many cases, we have visibility that our customers don’t have — whether we’re telling them about the performance of our cache, the malicious HTTP requests we drop at our edge, a spike in L3 data flows, the performance of their origin, or the CPU used by their serverless applications.

To provide this ability, we have analytics throughout our dashboard to help customers understand their network traffic, firewall, cache, load balancer, and much more. We also provide alerts that can tell customers when they see an increase in errors or spike in DDoS activity.

But some customers want more than what we currently provide with our analytics products. Many of our enterprise customers use SIEMs like Splunk and Sumo Logic or cloud monitoring tools like Datadog. These products can extend the capabilities of Cloudflare by showcasing Cloudflare data in the context of customers’ other infrastructure and providing advanced functionality on this data.

To understand how this works, consider a typical L7 DDoS attack against one of our customers.  Very commonly, an attack like this might originate from a small number of IP addresses and a customer might choose to block the source IPs completely. After blocking the IP addresses, customers may want to:

  • Search through their Logs to see all past instances of activity from those IP addresses
  • Search through Logs from all their other applications and infrastructure to see all activity from those IP addresses
  • Understand exactly what the attacker was trying to do by looking at the request payload blocked in our WAF (securely encrypted thanks to HPKE!)
  • Set an alert for similar activity, to be notified if something similar happens again

All these are made possible using SIEMs and infrastructure monitoring tools. For example, our customer NOV uses Splunk to “monitor our network and applications by alerting us to various anomalies and high-fidelity incidents”.

“One of the most valuable sources of data is Cloudflare,” said John McLeod, Chief Information Security Officer at NOV. “It provides visibility into network and application attacks. With this integration, it will be easier to get Cloudflare Logs into Splunk, saving my team time and money.”
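As a sketch of the first two steps above, here is how a search for blocked IP addresses could run over archived HTTP request logs. Cloudflare Logs are delivered as NDJSON (one JSON object per line); the ClientIP and ClientRequestURI field names follow the HTTP requests dataset, but confirm them against the fields you actually export.

```python
import json

BLOCKED_IPS = {"203.0.113.10", "203.0.113.11"}  # example attacker IPs

def matching_events(ndjson_lines, blocked_ips):
    """Yield parsed log events whose client IP is in the blocked set."""
    for line in ndjson_lines:
        event = json.loads(line)
        if event.get("ClientIP") in blocked_ips:
            yield event

# Two fabricated log lines standing in for a file pulled from log storage
sample = [
    '{"ClientIP": "203.0.113.10", "ClientRequestURI": "/wp-login.php"}',
    '{"ClientIP": "198.51.100.7", "ClientRequestURI": "/"}',
]
hits = list(matching_events(sample, BLOCKED_IPS))
print(len(hits))  # -> 1
```

A SIEM runs the same kind of filter at scale, across every log source at once, which is what makes the cross-application search in the second step practical.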

A new UI for our growing product base

With so many new data sets and new destinations, we realized that our existing user interface was not good enough. We went back to the drawing board to design a more intuitive user experience to help you quickly and easily set up Logpush.

You can still set up Logpush in the same place in the dashboard, in the Analytics > Logs tab.


The new UI first prompts users to select the data set to push. Here you’ll also notice that we’ve added support for Firewall Events and NEL Reports!


After configuring details like which fields to push, customers can then select where the Logs are going. Here you can see the new destinations: S3-compatible storage, Sumo Logic, Datadog, and Splunk.


Coming soon

Of course, we’re not done yet! We have more Cloudflare products in the pipeline and more destinations planned where customers can send their Logs. Additionally, we’re working on adding more flexibility to our logging pipeline so that customers can configure Logpush for an entire account, or filter Logs to send only error codes, for example.

Ultimately, we want to make working with Cloudflare Logs as useful as possible — on Cloudflare itself! We’re working to help customers solve their performance and security challenges with data at massive scale. If that sounds interesting, please join us! We’re hiring Systems Engineers for the Data team.

How to Combat Alert Fatigue With Cloud-Based SIEM Tools

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/22/how-to-combat-alert-fatigue-with-cloud-based-siem-tools/

How to Combat Alert Fatigue With Cloud-Based SIEM Tools

Today’s security teams are facing more complexity than ever before. IT environments are changing and expanding rapidly, resulting in proliferating data as organizations adopt more tools to stay on top of their sprawling environments. And with an abundance of tools comes an abundance of alerts, leading to inevitable alert fatigue for security operations teams. Research by Enterprise Strategy Group determined that 40% of organizations use 10 to 25 separate security tools, and 30% use 26 to 50. That means thousands (or tens of thousands!) of alerts daily, depending on the organization’s size.

Fortunately, there’s a way to get the visibility your team needs and streamline alerts: leveraging a cloud-based SIEM. Here are a few key ways a cloud-based SIEM can help combat alert fatigue to accelerate threat detection and response.

Access all of your critical security data in one place

Traditional SIEMs focus primarily on log management and are centered around compliance instead of giving you a full picture of your network. The rigidity of these outdated solutions is the opposite of what today’s agile teams need. A cloud SIEM can unify diverse data sets across on-premises, remote, and cloud environments, to provide security operations teams with the holistic visibility they need in one place, eliminating the need to jump in and out of multiple tools (and the thousands of alerts that they produce).

With modern cloud SIEMs like Rapid7’s InsightIDR, you can collect more than just logs from across your environment, ingesting data that includes user activity, cloud services, endpoints, and network traffic—all in a single solution. With your data in one place, cloud SIEMs deliver meaningful context and prioritization to help you avoid an abundance of alerts.

Cut through the noise to detect attacks early in the attack chain

By analyzing all of your data together, a cloud SIEM uses machine learning to better recognize patterns in your environment to understand what’s normal and what’s a potential threat. The result? More fine-tuned detections so your team is only alerted when there are real signs of a threat.

Instead of bogging you down with false positives, cloud SIEMs provide contextual, actionable alerts. InsightIDR offers customers high-quality, out-of-the-box alerts created and curated by our expert analysts based on real threats—so you can stop attacks early in the attack chain instead of sifting through a mountain of data and worthless alerts.

Accelerate response with automation

With automation, you can reduce alert fatigue and further improve your SOC’s efficiency. By leveraging a cloud SIEM that has built-in automation, or that integrates with a security orchestration, automation, and response (SOAR) tool, your SOC can offload a significant amount of its workload and free up analysts to focus on what matters most, all while still improving security posture.

A cloud SIEM with expert-driven detections and built-in automation enables security teams to respond to and remediate attacks in a fraction of the time it takes to manually investigate thousands of alerts. InsightIDR integrates seamlessly with InsightConnect, Rapid7’s security orchestration, automation, and response (SOAR) tool, to reduce alert fatigue, automate containment, and improve investigation handling.

With holistic network visibility and advanced analysis, cloud-based SIEM tools provide teams with high context alerts and correlation to fight alert fatigue and accelerate incident detection and response. Learn more about how InsightIDR can help eliminate alert fatigue and more by checking out our outcomes pages.
