Tag Archives: siem

Export logs from Cloudflare Gateway with Logpush

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/export-logs-from-cloudflare-gateway-with-logpush/

Export logs from Cloudflare Gateway with Logpush

Like many people, I have spent a lot more time at home in the last several weeks. I use the free version of Cloudflare Gateway, part of Cloudflare for Teams, to secure the Internet-connected devices on my WiFi network. In the last week, Gateway has processed about 114,000 DNS queries from those devices and blocked nearly 100 as potential security risks.

I can search those requests in the Cloudflare for Teams UI. The logs capture the hostname requested, the time of the request, and Gateway’s decision to allow or block. This works fine for one-off investigations into a block, but does not help if I want to analyze the data more thoroughly. The last thing I want to do is click through hundreds or thousands of pages.

That problem is even more difficult for organizations attempting to keep hundreds or thousands of users and their devices secure. Whether they secure roaming devices with DoH or a static IP address, or keep users safe as they return to offices, deployments at that scale need a better option for auditing tens or hundreds of millions of queries each week.

Starting today, you can configure the automatic export of logs from Cloudflare Gateway to third-party storage destinations or security information and event management (SIEM) tools. Once exported, your team can analyze and audit the data as needed. The feature builds on the same robust Cloudflare Logpush Service that powers data export from Cloudflare’s infrastructure products.

Cloudflare Gateway

Cloudflare Gateway is one-half of Cloudflare for Teams, Cloudflare’s platform for securing users, devices, and data. With Cloudflare for Teams, our global network becomes your team’s network, replacing on-premise appliances and security subscriptions with a single solution delivered closer to your users – wherever they work.

Export logs from Cloudflare Gateway with Logpush

As part of that platform, Cloudflare Gateway blocks threats on the public Internet from becoming incidents inside your organization. Gateway’s first release added DNS security filtering and content blocking to the world’s fastest DNS resolver, Cloudflare’s 1.1.1.1.

Deployment takes less than 5 minutes. Teams can secure entire office networks and segment traffic reports by location. For distributed organizations, Gateway can be deployed via MDM on networks that support IPv6 or using a dedicated IPv4 address as part of a Cloudflare Enterprise account.

With secure DNS filtering, administrators can click a single button to block known threats, like sources of malware or phishing sites. Policies can be extended to block specific categories, like gambling sites or social media. When users request a filtered site, Gateway stops the DNS query from resolving and prevents the device from connecting to a malicious destination or hostname with blocked material.
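
One quick way to see a block in action from a device that uses Gateway as its resolver is with dig; in a typical default configuration Gateway answers a blocked A query with 0.0.0.0 instead of the real record. The hostname below is a stand-in for anything your policy blocks:

$ dig +short A malware.blocked.example
0.0.0.0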

Cloudflare Logpush

The average user makes about 5,000 DNS queries each day. For an organization with 1,000 employees, that produces 5M rows of data daily. That data includes regular Internet traffic, but also potential trends like targeted phishing campaigns or the use of cloud storage tools that are not approved by your IT organization.

The Cloudflare for Teams UI presents some summary views of that data, but each organization has different needs for audit, retention, or analysis. The best way to let you investigate the data in any way you need is to give you all of it. However, the volume of data and how often you might need to review it mean that API calls or CSV downloads are not suitable. A real logging pipeline is required.

Cloudflare Logpush solves that challenge. Cloudflare’s Logpush Service exports the data captured by Cloudflare’s network to storage destinations that you control. Rather than requiring your team to build a system to call Cloudflare APIs and pull data, Logpush routinely exports data with fields that you configure.

Cloudflare’s data team built the Logpush pipeline to make it easy to integrate with popular storage providers. Logpush supports AWS S3, Google Cloud Storage, Sumo Logic, and Microsoft Azure out of the box. Administrators can choose a storage provider, validate they own the destination, and configure exports of logs that will send deltas every five minutes from that point onward.
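
If you prefer to script the setup instead of using the dashboard, the same kind of job can be created through the Logpush API. The sketch below is illustrative only: it assumes an account-scoped job for the Gateway DNS dataset, an S3 bucket you have already validated with an ownership challenge, an API token with Logs edit permission exported as CLOUDFLARE_API_TOKEN, and placeholder names throughout.

$ curl -sX POST "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/logpush/jobs" \
    -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "gateway-dns-to-s3",
          "dataset": "gateway_dns",
          "destination_conf": "s3://my-gateway-logs/dns?region=us-east-1",
          "ownership_challenge": "'"$OWNERSHIP_CHALLENGE"'",
          "enabled": true
        }'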

How it works

When enabled, you can navigate to a new section of the Logs component in the Cloudflare for Teams UI, titled “Logpush”. Once there, you’ll be able to choose which fields you want to export from Cloudflare Gateway and the storage destination.

Export logs from Cloudflare Gateway with Logpush

The Logpush wizard will walk you through validating that you own the destination and configuring how you want folders to be structured. When saved, Logpush will send updated logs every five minutes to that destination. You can configure multiple destinations and monitor for any issues by returning to this section of the Cloudflare for Teams UI.

Export logs from Cloudflare Gateway with Logpush

What’s next?

Cloudflare’s Logpush Service is only available to customers on a contract plan. If you are interested in upgrading, please let us know. All Cloudflare for Teams plans include 30 days of data that can be searched in the UI.

Cloudflare Access, the other half of Cloudflare for Teams, also supports granular log export. You can configure Logpush for Access in the Cloudflare dashboard that houses Infrastructure features like the WAF and CDN. We plan to migrate that configuration to this UI in the near future.

Stream Firewall Events directly to your SIEM

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/stream-firewall-events-directly-to-your-siem/

Stream Firewall Events directly to your SIEM

The highest trafficked sites using Cloudflare receive billions of requests per day. But only about 5% of those requests typically trigger security rules, whether they be “managed” rules such as our WAF and DDoS protections, or custom rules such as those configured by customers using our powerful Firewall Rules and Rate Limiting engines.

When an enforcement action is taken on a request, interrupting the flow of malicious traffic, a Firewall Event is logged with detail about the request, including which rule triggered us to take action and what action we took, e.g., challenged or blocked outright.

Previously, if you wanted to ingest all of these events into your SIEM or logging platform, you had to take the whole firehose of requests—good and bad—and then filter them client side. If you’re paying by the log line or scaling your own storage solution, this cost can add up quickly. And if you have a security team monitoring logs, they’re being sent a lot of extraneous data to sift through before determining what needs their attention most.

As of today, customers using Cloudflare Logs can create Logpush jobs that send only Firewall Events. These events arrive much faster than our existing HTTP request logs: they are typically delivered to your logging platform within 60 seconds of sending the response to the client.

In this post we’ll show you how to use Terraform and Sumo Logic, an analytics integration partner, to get this logging set up live in just a few minutes.

Process overview

The steps below take you through the process of configuring Cloudflare Logs to push security events directly to your logging platform. For purposes of this tutorial, we’ve chosen Sumo Logic as our log destination, but you’re free to use any of our analytics partners, or any logging platform that can read from cloud storage such as AWS S3, Azure Blob Storage, or Google Cloud Storage.

To configure Sumo Logic and Cloudflare we make use of Terraform, a popular Infrastructure-as-Code tool from HashiCorp. If you’re new to Terraform, see Getting started with Terraform and Cloudflare for a guided walkthrough with best practice recommendations such as how to version and store your configuration in git for easy rollback.

Once the infrastructure is in place, you’ll send a malicious request towards your site to trigger the Cloudflare Web Application Firewall, and watch as the Firewall Events generated by that request show up in Sumo Logic about a minute later.

Stream Firewall Events directly to your SIEM

Prerequisites

Install Terraform and Go

First you’ll need to install Terraform. See our Developer Docs for instructions.

Next you’ll need to install Go. The easiest way on macOS to do so is with Homebrew:

$ brew install golang
$ export GOPATH=$HOME/go
$ mkdir $GOPATH

Go is required because the Sumo Logic Terraform Provider is a “community” plugin, which means it has to be built and installed manually rather than automatically through the Terraform Registry, as will happen later for the Cloudflare Terraform Provider.

Install the Sumo Logic Terraform Provider Module

The official installation instructions for installing the Sumo Logic provider can be found on their GitHub Project page, but here are my notes:

$ mkdir -p $GOPATH/src/github.com/terraform-providers && cd $_
$ git clone https://github.com/SumoLogic/sumologic-terraform-provider.git
$ cd sumologic-terraform-provider
$ make install

Prepare Sumo Logic to receive Cloudflare Logs

Install Sumo Logic livetail utility

While not strictly necessary, the livetail tool from Sumo Logic makes it easy to grab the Cloudflare Logs challenge token we’ll need in a minute, and also to view the fruits of your labor: seeing a Firewall Event appear in Sumo Logic shortly after the malicious request hit the edge.

On macOS:

$ brew cask install livetail
...
==> Verifying SHA-256 checksum for Cask 'livetail'.
==> Installing Cask livetail
==> Linking Binary 'livetail' to '/usr/local/bin/livetail'.
🍺  livetail was successfully installed!

Generate Sumo Logic Access Key

This step assumes you already have a Sumo Logic account. If not, you can sign up for a free trial here.

  1. Browse to https://service.$ENV.sumologic.com/ui/#/security/access-keys where $ENV should be replaced by the environment you chose on signup.
  2. Click the “+ Add Access Key” button, give it a name, and click “Create Key”
  3. In the next step you’ll save the Access ID and Access Key that are provided as environment variables, so don’t close this modal until you do.

Generate Cloudflare Scoped API Token

  1. Log in to the Cloudflare Dashboard
  2. Click on the profile icon in the top-right corner and then select “My Profile”
  3. Select “API Tokens” from the nav bar and click “Create Token”
  4. Click the “Get started” button next to the “Create Custom Token” label

On the Create Custom Token screen:

  1. Provide a token name, e.g., “Logpush – Firewall Events”
  2. Under Permissions, change Account to Zone, and then select Logs and Edit, respectively, in the two drop-downs to the right
  3. Optionally, change Zone Resources and IP Address Filtering to restrict access for this token to specific zones or from specific IPs

Click “Continue to summary” and then “Create token” on the next screen. Save the token somewhere secure, e.g., your password manager, as it’ll be needed in just a minute.
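
Optionally, you can sanity-check the new token against Cloudflare’s token verification endpoint before wiring it into Terraform; an active token returns a result with "status": "active":

$ curl -s "https://api.cloudflare.com/client/v4/user/tokens/verify" \
    -H "Authorization: Bearer <your scoped cloudflare API token>"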

Set environment variables

Rather than add sensitive credentials to source files (that may get submitted to your source code repository), we’ll set environment variables and have the Terraform modules read from them.

$ export CLOUDFLARE_API_TOKEN="<your scoped cloudflare API token>"
$ export CF_ZONE_ID="<tag of zone you wish to send logs for>"
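
The Zone ID is shown in the API section of your zone’s Overview page in the Cloudflare dashboard. If your token also has Zone:Read permission (the minimal token created above does not, so treat this as an optional variation), you can look it up via the API instead; example.com is a placeholder for your zone name:

$ curl -s -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
    "https://api.cloudflare.com/client/v4/zones?name=example.com" | jq -r '.result[0].id'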

We’ll also need your Sumo Logic environment, Access ID, and Access Key:

$ export SUMOLOGIC_ENVIRONMENT="eu"
$ export SUMOLOGIC_ACCESSID="<access id from previous step>"
$ export SUMOLOGIC_ACCESSKEY="<access key from previous step>"

Create the Sumo Logic Collector and HTTP Source

We’ll create a directory to store our Terraform project in and build it up as we go:

$ mkdir -p ~/src/fwevents && cd $_

Then we’ll create the Collector and HTTP source that will store and provide Firewall Events logs to Sumo Logic:

$ cat <<'EOF' | tee main.tf
##################
### SUMO LOGIC ###
##################
provider "sumologic" {
    environment = var.sumo_environment
    access_id = var.sumo_access_id
}

resource "sumologic_collector" "collector" {
    name = "CloudflareLogCollector"
    timezone = "Etc/UTC"
}

resource "sumologic_http_source" "http_source" {
    name = "firewall-events-source"
    collector_id = sumologic_collector.collector.id
    timezone = "Etc/UTC"
}
EOF

Then we’ll create a variables file so Terraform has credentials to communicate with Sumo Logic:

$ cat <<EOF | tee variables.tf
##################
### SUMO LOGIC ###
##################
variable "sumo_environment" {
    default = "$SUMOLOGIC_ENVIRONMENT"
}

variable "sumo_access_id" {
    default = "$SUMOLOGIC_ACCESSID"
}
EOF

With our Sumo Logic configuration set, we’ll initialize Terraform with terraform init and then preview what changes Terraform is going to make by running terraform plan:

$ terraform init

Initializing the backend...

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # sumologic_collector.collector will be created
  + resource "sumologic_collector" "collector" {
      + destroy        = true
      + id             = (known after apply)
      + lookup_by_name = false
      + name           = "CloudflareLogCollector"
      + timezone       = "Etc/UTC"
    }

  # sumologic_http_source.http_source will be created
  + resource "sumologic_http_source" "http_source" {
      + automatic_date_parsing       = true
      + collector_id                 = (known after apply)
      + cutoff_timestamp             = 0
      + destroy                      = true
      + force_timezone               = false
      + id                           = (known after apply)
      + lookup_by_name               = false
      + message_per_request          = false
      + multiline_processing_enabled = true
      + name                         = "firewall-events-source"
      + timezone                     = "Etc/UTC"
      + url                          = (known after apply)
      + use_autoline_matching        = true
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Assuming everything looks good, let’s execute the plan:

$ terraform apply -auto-approve
sumologic_collector.collector: Creating...
sumologic_collector.collector: Creation complete after 3s [id=108448215]
sumologic_http_source.http_source: Creating...
sumologic_http_source.http_source: Creation complete after 0s [id=150364538]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Success! At this point you could log into the Sumo Logic web interface and confirm that your Collector and HTTP Source were created successfully.

Create a Cloudflare Logpush Job

Before Cloudflare starts sending logs to your collector, you need to demonstrate the ability to read from it. This validation step prevents accidental (or intentional) misconfigurations from overrunning your logs.

Tail the Sumo Logic Collector and await the challenge token

In a new shell window—you should keep the current one with your environment variables set for use with Terraform—we’ll start tailing Sumo Logic for events sent from the firewall-events-source HTTP source.

The first time that you run livetail you’ll need to specify your Sumo Logic Environment, Access ID and Access Key, but these values will be stored in the working directory for subsequent runs:

$ livetail _source=firewall-events-source
### Welcome to Sumo Logic Live Tail Command Line Interface ###
1 US1
2 US2
3 EU
4 AU
5 DE
6 FED
7 JP
8 CA
Please select Sumo Logic environment: 
See http://help.sumologic.com/Send_Data/Collector_Management_API/Sumo_Logic_Endpoints to choose the correct environment. 3
### Authenticating ###
Please enter your Access ID: <access id>
Please enter your Access Key <access key>
### Starting Live Tail session ###

Request and receive challenge token

Before requesting a challenge token, we need to figure out where Cloudflare should send logs.

We do this by asking Terraform for the receiver URL of the recently created HTTP source. Note that we modify the URL returned slightly as Cloudflare Logs expects sumo:// rather than https://.

$ export SUMO_RECEIVER_URL=$(terraform state show sumologic_http_source.http_source | grep url | awk '{print $3}' | sed -e 's/https:/sumo:/; s/"//g')

$ echo $SUMO_RECEIVER_URL
sumo://endpoint1.collection.eu.sumologic.com/receiver/v1/http/<redacted>

With URL in hand, we can now request the token.

$ curl -sXPOST -H "Content-Type: application/json" -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -d '{"destination_conf":"'''$SUMO_RECEIVER_URL'''"}' https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/ownership

{"errors":[],"messages":[],"result":{"filename":"ownership-challenge-bb2912e0.txt","message":"","valid":true},"success":true}

Back in the other window where your livetail is running you should see something like this:

{"content":"eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4R0NNIiwidHlwIjoiSldUIn0..WQhkW_EfxVy8p0BQ.oO6YEvfYFMHCTEd6D8MbmyjJqcrASDLRvHFTbZ5yUTMqBf1oniPNzo9Mn3ZzgTdayKg_jk0Gg-mBpdeqNI8LJFtUzzgTGU-aN1-haQlzmHVksEQdqawX7EZu2yiePT5QVk8RUsMRgloa76WANQbKghx1yivTZ3TGj8WquZELgnsiiQSvHqdFjAsiUJ0g73L962rDMJPG91cHuDqgfXWwSUqPsjVk88pmvGEEH4AMdKIol0EOc-7JIAWFBhcqmnv0uAXVOH5uXHHe_YNZ8PNLfYZXkw1xQlVDwH52wRC93ohIxg.pHAeaOGC8ALwLOXqxpXJgQ","filename":"ownership-challenge-bb2912e0.txt"}

Copy the content value from above into an environment variable, as you’ll need it in a minute to create the job:

$ export LOGPUSH_CHALLENGE_TOKEN="<content value>"
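
If you saved that livetail JSON line to a file (challenge.json here is just a hypothetical filename), jq can extract the value for you:

$ export LOGPUSH_CHALLENGE_TOKEN=$(jq -r .content challenge.json)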

Create the Logpush job using the challenge token

With challenge token in hand, we’ll use Terraform to create the job.

First you’ll want to choose the log fields that should be sent to Sumo Logic. You can enumerate the list by querying the dataset:

$ curl -sXGET -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/datasets/firewall_events/fields | jq .
{
  "errors": [],
  "messages": [],
  "result": {
    "Action": "string; the code of the first-class action the Cloudflare Firewall took on this request",
    "ClientASN": "int; the ASN number of the visitor",
    "ClientASNDescription": "string; the ASN of the visitor as string",
    "ClientCountryName": "string; country from which request originated",
    "ClientIP": "string; the visitor's IP address (IPv4 or IPv6)",
    "ClientIPClass": "string; the classification of the visitor's IP address, possible values are: unknown | clean | badHost | searchEngine | whitelist | greylist | monitoringService | securityScanner | noRecord | scan | backupService | mobilePlatform | tor",
    "ClientRefererHost": "string; the referer host",
    "ClientRefererPath": "string; the referer path requested by visitor",
    "ClientRefererQuery": "string; the referer query-string was requested by the visitor",
    "ClientRefererScheme": "string; the referer url scheme requested by the visitor",
    "ClientRequestHTTPHost": "string; the HTTP hostname requested by the visitor",
    "ClientRequestHTTPMethodName": "string; the HTTP method used by the visitor",
    "ClientRequestHTTPProtocol": "string; the version of HTTP protocol requested by the visitor",
    "ClientRequestPath": "string; the path requested by visitor",
    "ClientRequestQuery": "string; the query-string was requested by the visitor",
    "ClientRequestScheme": "string; the url scheme requested by the visitor",
    "Datetime": "int or string; the date and time the event occurred at the edge",
    "EdgeColoName": "string; the airport code of the Cloudflare datacenter that served this request",
    "EdgeResponseStatus": "int; HTTP response status code returned to browser",
    "Kind": "string; the kind of event, currently only possible values are: firewall",
    "MatchIndex": "int; rules match index in the chain",
    "Metadata": "object; additional product-specific information. Metadata is organized in key:value pairs. Key and Value formats can vary by Cloudflare security product and can change over time",
    "OriginResponseStatus": "int; HTTP origin response status code returned to browser",
    "OriginatorRayName": "string; the RayId of the request that issued the challenge/jschallenge",
    "RayName": "string; the RayId of the request",
    "RuleId": "string; the Cloudflare security product-specific RuleId triggered by this request",
    "Source": "string; the Cloudflare security product triggered by this request",
    "UserAgent": "string; visitor's user-agent string"
  },
  "success": true
}
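
If you only want the field names without their descriptions, pipe the same response through jq:

$ curl -sXGET -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/datasets/firewall_events/fields | jq -r '.result | keys[]'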

Then you’ll append your Cloudflare configuration to the main.tf file:

$ cat <<EOF | tee -a main.tf

##################
### CLOUDFLARE ###
##################
provider "cloudflare" {
  version = "~> 2.0"
}

resource "cloudflare_logpush_job" "firewall_events_job" {
  name = "fwevents-logpush-job"
  zone_id = var.cf_zone_id
  enabled = true
  dataset = "firewall_events"
  logpull_options = "fields=RayName,Source,RuleId,Action,EdgeResponseStatus,Datetime,EdgeColoName,ClientIP,ClientCountryName,ClientASNDescription,UserAgent,ClientRequestHTTPMethodName,ClientRequestHTTPHost,ClientRequestPath&timestamps=rfc3339"
  destination_conf = replace(sumologic_http_source.http_source.url,"https:","sumo:")
  ownership_challenge = "$LOGPUSH_CHALLENGE_TOKEN"
}
EOF

And add to the variables.tf file:

$ cat <<EOF | tee -a variables.tf

##################
### CLOUDFLARE ###
##################
variable "cf_zone_id" {
  default = "$CF_ZONE_ID"
}
EOF

Next we re-run terraform init to install the latest Cloudflare Terraform Provider Module. You’ll need to make sure you have at least version 2.6.0 as this is the version in which we added Logpush job support:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "cloudflare" (terraform-providers/cloudflare) 2.6.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

With the updated provider installed, we check out the plan and then apply:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

sumologic_collector.collector: Refreshing state... [id=108448215]
sumologic_http_source.http_source: Refreshing state... [id=150364538]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cloudflare_logpush_job.firewall_events_job will be created
  + resource "cloudflare_logpush_job" "firewall_events_job" {
      + dataset             = "firewall_events"
      + destination_conf    = "sumo://endpoint1.collection.eu.sumologic.com/receiver/v1/http/(redacted)"
      + enabled             = true
      + id                  = (known after apply)
      + logpull_options     = "fields=RayName,Source,RuleId,Action,EdgeResponseStatus,Datetime,EdgeColoName,ClientIP,ClientCountryName,ClientASNDescription,UserAgent,ClientRequestHTTPMethodName,ClientRequestHTTPHost,ClientRequestPath&timestamps=rfc3339"
      + name                = "fwevents-logpush-job"
      + ownership_challenge = "(redacted)"
      + zone_id             = "(redacted)"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
$ terraform apply --auto-approve
sumologic_collector.collector: Refreshing state... [id=108448215]
sumologic_http_source.http_source: Refreshing state... [id=150364538]
cloudflare_logpush_job.firewall_events_job: Creating...
cloudflare_logpush_job.firewall_events_job: Creation complete after 3s [id=13746]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Success! Last step is to test your setup.

Testing your setup by sending a malicious request

The following step assumes that you have the Cloudflare WAF turned on. Alternatively, you can create a Firewall Rule to match your request and generate a Firewall Event that way.

First make sure that livetail is running as described earlier:

$ livetail "_source=firewall-events-source"
### Authenticating ###
### Starting Live Tail session ###

Then, in a browser, make the following request: https://example.com/<script>alert()</script>. You should see the following returned:

Stream Firewall Events directly to your SIEM
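
If you prefer the command line to a browser, the same test can be sent with curl (example.com stands in for your own zone); a request blocked by the WAF typically comes back with an HTTP 403 from the edge:

$ curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/<script>alert()</script>"
403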

And a few moments later in livetail:

{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"958052","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"958051","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973300","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973307","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973331","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"981176","Action":"drop","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}

Note that for this one malicious request Cloudflare Logs actually sent 6 separate Firewall Events to Sumo Logic. The reason for this is that this specific request triggered a variety of different Managed Rules: #958051, 958052, 973300, 973307, 973331, and 981176.

Seeing it all in action

Here’s a demo of launching livetail, making a malicious request in a browser, and then seeing the result sent from the Cloudflare Logpush job:

Stream Firewall Events directly to your SIEM

Visualizing Amazon GuardDuty findings

Post Syndicated from Mike Fortuna original https://aws.amazon.com/blogs/security/visualizing-amazon-guardduty-findings/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help protect your AWS accounts and workloads. Enable GuardDuty and it begins monitoring for:

  • Anomalous API activity
  • Potentially unauthorized deployments and compromised instances
  • Reconnaissance by attackers.

GuardDuty analyzes and processes VPC flow log, AWS CloudTrail event log, and DNS log data sources. You don’t need to manually manage these data sources because the data is automatically leveraged and analyzed when you activate GuardDuty. For example, GuardDuty consumes VPC Flow Log events directly from the VPC Flow Logs feature through an independent and duplicative stream of flow logs. As a result, you don’t incur any operational burden on existing workloads.

GuardDuty helps find potential threats in your AWS environment by producing security findings that you can view in the GuardDuty console or consume through Amazon CloudWatch Events, which is a service that makes alerts actionable and easier to integrate into existing event management and workflow systems. One common question we hear from customers is “how do I visualize these findings to generate meaningful insights?” In this post, we’re going to show you how to create a dashboard that includes visualizations like this:
 

Figure 1: Example visualization

You’ll learn how you can use AWS services to create a pipeline for your GuardDuty findings so you can log and visualize them. The services include Amazon CloudWatch Events, Amazon Kinesis Data Firehose, Amazon Elasticsearch Service (with its built-in Kibana plugin), Amazon S3, Amazon SNS, and Amazon Cognito.

Architecture

The architectural diagram below illustrates the pipeline we’ll create.
 

Figure 2: Architectural diagram

We’ll walk through the data flow to explain the architecture and highlight the additional customizations available to you.

  1. Amazon GuardDuty is enabled in an account and begins monitoring CloudTrail logs, VPC flow logs, and DNS query logs. If a threat is detected, GuardDuty forwards a finding to CloudWatch Events. For a newly generated finding, GuardDuty sends a notification based on its CloudWatch event within 5 minutes of the finding. CloudWatch Events allows you to send upstream notifications to various services filtered on your configured event patterns. We’ll configure an event pattern that only forwards events coming from the GuardDuty service (see the CLI sketch after this list).
  2. We define two targets in our CloudWatch Event Rule. The first target is a Kinesis Firehose stream for delivery into an Elasticsearch domain and an S3 bucket. The second target is an SNS Topic for Email/SMS notification of findings. We’ll send all findings to our targets; however, you can filter and format the findings you send by using a Lambda function (or by event pattern matching with a CloudWatch Event Rule). For example, you could send only high-severity alarms (that is, findings with detail.severity > 7).
  3. The Firehose stream delivers findings to Amazon Elasticsearch, which provides visualization and analysis for our event findings. The stream also delivers findings to an S3 bucket. The S3 bucket is used for long term archiving. This data can augment your data lake and you can use services such as Amazon Athena to perform advanced analytics.
  4. We’ll search, explore, and visualize the GuardDuty findings using Kibana and the Elasticsearch query Domain Specific Language (DSL) to gain valuable insights. Amazon Elasticsearch has a built-in Kibana plugin to visualize the data and perform operational analyses.
  5. To provide a simplified and secure authentication method, we provide user authentication to Kibana with Amazon Cognito User Pools. This method provides improved security over traditional IP whitelists or proxy infrastructure.
  6. Our second CloudWatch Event target is SNS, which has subscribed email endpoint(s) that allow your operations teams to receive email (or SMS messages) when a new GuardDuty Event is received.
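
As referenced in step 1, here is a minimal sketch of creating that CloudWatch Events rule with the AWS CLI. The rule name is arbitrary and the pattern simply matches every event GuardDuty emits in the region; targets such as the Firehose stream and SNS topic are attached separately.

# Hypothetical rule name; the pattern forwards all GuardDuty findings.
aws events put-rule \
    --name guardduty-findings \
    --event-pattern '{"source":["aws.guardduty"]}'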

If you would like to centralize your findings from multiple regions into a single S3 bucket, you can adapt this pipeline. You would deploy the frontend of the pipeline by configuring Kinesis Firehose in the remote regions to point to the S3 bucket in the centralized region. You can leverage prefixes in the Kinesis Firehose configuration to identify the source region. For example, you would configure a prefix of us-west-1 for events originating from the us-west-1 region. Analytic queries from tools such as Athena can then selectively target the desired region.

Deployment Steps

This CloudFormation template will install the pipeline and components required for GuardDuty visualization:
 
Select button to launch stack

When you start the stack creation process you will be prompted for the following information:

  • Stack name — This is the name of the stack you will create
  • EmailAddress — This email address is used to create a username in Cognito and a subscriber to the SNS topic.
  • ESDomainName — This will be the name given to the Elasticsearch Domain.
  • IndexName — This will be the Index created by Firehose to load data into Elasticsearch.

 

Figure 3: The "Create stack" interface

Figure 3: The “Create stack” interface

Once the infrastructure is installed, you’ll follow two main steps, each of which is described in detail later:

  1. Add Cognito authentication to Kibana, which is hosted on the Elasticsearch domain. At the time of writing, this can’t be done natively in CloudFormation. We’ll also confirm the SNS subscription so we can start to receive GuardDuty Findings via email.
  2. Configure Kibana with the index, the appropriate scripted fields, and the dashboard to provide the visualizations. We’ll also enable GuardDuty to start monitoring your account and send sample findings to test the pipeline.

Step 1: Enable Cognito authentication in Kibana

To enable user authentication to your dashboards hosted in Kibana, you need to enable the integration from the Elasticsearch domain that was created by the CloudFormation template.

  1. Open the AWS Console and select the Cognito service. Select Manage User Pools to access the User Pool that was created in CloudFormation. Select the user pool beginning with the name VisualizeGuardDutyUserPool and, under the App Integration menu item, select Domain name.
     
    Figure 4: The “Domain name” interface

  2. You need to create a unique domain prefix to allow Kibana to authenticate using Cognito. Enter a unique domain prefix (it can only contain lowercase letters, numbers, and hyphens). After entering the prefix, select the Check Availability button to ensure it’s available in the region. If it’s available, select the Save Changes button.
  3. From the AWS Console, select the CloudFormation service.
  4. Select the template you created for your pipeline, select the Outputs tab, and then, under Value, copy the value of ESCognitoRole. You’ll use this role when you enable Cognito authentication of Elasticsearch.
     
    Figure 5: The “Outputs” tab and the “ESCognitoRole” key

  5. Next, browse to the Elasticsearch Service, select the domain you created from the CloudFormation template, and select the Configure cluster button:
     
    Figure 6: The “Configure cluster” button

  6. Under the Kibana authentication section, select the Enable Amazon Cognito for authentication checkbox. You’ll be presented with several fields you need to configure, including: Cognito User Pool (the name of the user pool should start with VisualizeGuardDutyUserPool), Cognito Identity Pool (the name of the identity pool should start with VisualizeGuardDutyIDPool), and IAM Role Name (this was copied in step 4 earlier). A Cognito User Pool is a user directory in Amazon Cognito; we use it to create a user account that provides authentication to Kibana. Amazon Cognito Identity Pools (federated identities) enable you to create unique identities for your users and federate them with identity providers; the Cognito Identity Pool in our case is used to provide federated access to Kibana. After you provide values for these fields, select the Submit button.
     
    Figure 7: The “Kibana authentication” interface

  7. The cluster reconfiguration will take several minutes to complete processing. When you see Domain status as Active, you can proceed.
  8. Finally, confirm the subscription email you received from SNS. Look for an email from: AWS Notifications <[email protected]>, open the message and select Confirm subscription to allow SNS to send you email when the SNS Topic receives a notification for new GuardDuty findings.

Step 2: Set up the Kibana dashboard and enable GuardDuty

Now, you can set up the Kibana dashboard with custom visualizations.

  1. Open the CloudFormation service page and select the stack you created earlier.
  2. Under the Outputs section, copy the Kibana URL.
     
    Figure 8: Copy the Kibana URL

  3. Paste the Kibana URL in a new browser window.
  4. Check your email client. You should have an email containing the temporary password from Cognito. Copy the temporary password and use it to log in to Cognito. If you haven’t received the email, check your email junk folder. You can also create additional users in the Cognito User Pool that was created from the CloudFormation Stack to provide additional users Kibana access.
     
    Figure 9: Example email with temporary password

  5. At the login prompt, enter the email address and password for the Cognito user the CloudFormation template created. A prompt to change your password will appear. Change your password to proceed. The Cognito User Pool requires: upper case letters, lower case letters, special characters, and numbers with a minimum length of 8 characters.
  6. It’s time to add mapping information to your index to instruct Kibana that some of the fields are delivered as geopoints. This allows these fields to be properly visualized with a Coordinate Map. Select Dev Tools in the menu on the left side:
  7.  

    Figure 10: Select “Dev tools”

  8. Paste the following API call in the text box to provide the appropriate mappings for the networkConnectionAction & portProbeAction geolocation fields. This calls the Elasticsearch API and updates the geolocation mapping for the above fields:
    
    PUT _template/gdt
    {
      "template": "gdt*",
      "settings": {},
      "mappings": {
        "_default_": {
          "properties": {
            "detail.service.action.portProbeAction.portProbeDetails.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            },
            "detail.service.action.networkConnectionAction.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            }        
          }
        }
      }
    }
    

  9. After you paste the API call, be sure to remove any whitespace after the ending brace, then select the green arrow to execute it. You should receive a message that the call was successful.
     
    Figure 11: Paste the API call

  10. Next, enable GuardDuty and send sample findings so you can create the Kibana Dashboard with data present. Find the GuardDuty service in the AWS Console and select the Get started button.
  11. From the Welcome to GuardDuty page, select the Enable GuardDuty button.
  12. Next, send some sample events. From the GuardDuty service, select the Settings menu on the left-hand menu, and then select Generate sample findings as shown here:
     
    Figure 12: The “Generate sample findings” button

  13. Optionally, if you want to test with real GuardDuty findings, you can leverage the Amazon GuardDuty Tester. This AWS CloudFormation template creates an isolated environment with a bastion host, a tester EC2 instance, and two target EC2 instances to simulate five types of common attacks that GuardDuty is built to detect and notify you with generated findings. Once deployed, you would use the tester EC2 instance to execute a shell script to generate GuardDuty findings. Additional detail about this option can be found in the GuardDuty documentation.
  14. On the Kibana landing page, in the menu on the left side, create the Index by selecting Management.
  15. On the Management page, select Index Patterns.
  16. On the Create index pattern page, under Index patterns, enter gdt-* (if you used a different IndexName in the CloudFormation template, use that here), and then select Next Step.

    Note: It takes several minutes for the GuardDuty findings to generate a CloudWatch Event, work through the pipeline, and create the index in Elasticsearch. If the index doesn’t appear initially, please wait a few minutes and try again.

     

    Figure 13: The “Create index pattern” page

  17. Under Time Filter field name, select time from the drop-down list, and then select Create index pattern.
     
    Figure 14: The “Time Filter field name” list

Create scripted fields

With the Index defined, we will now create two scripted fields that your dashboard visualizations will use.

Define the severity level

  1. Select the Index you just created, and then select scripted fields.
     
    Figure 15: The “scripted fields” tab

  2. Select Add Scripted Field, and enter the following information:
    • Name — sevLevel
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — copy and paste this script into the text-entry field:
      
      if (doc['detail.severity'].value < 3.9) {
          return "Low";
      }
      else if (doc['detail.severity'].value < 6.9) {
          return "Medium";
      }
      return "High";
      

  3. After entering the information, select Create Field.

The sevLevel field provides a value-to-level mapping as defined by GuardDuty Severity Levels. This allows you to visualize the severity levels in a more user-friendly format (High, Medium, and Low) instead of a cryptic numerical value. To generate sevLevel, we used Kibana painless scripting, which allows custom field creation.

Define the attack type

  1. Now create a second scripted field for typeCategory. The typeCategory field extracts the finding attack type. Enter the following information:
    • Name — typeCategory
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — Copy and paste this script into the text-entry field:
      
      def path = doc['detail.type.keyword'].value;
      if (path != null) {
          int firstColon = path.indexOf(":");
          if (firstColon > 0) {
              return path.substring(0, firstColon);
          }
      }
      return "";
      

  2. After entering the information, select Create Field.

The typeCategory field is used to define the broad category “attack type.” The source field (detail.type.keyword) provides a lot of detailed information (for example: Recon:EC2/PortProbeUnprotectedPort), but we want to visualize the category of “attack type” in the high-level dashboard (that is, only Recon). We can still visualize on a more granular level, if necessary.

Create the Kibana Dashboard

  1. Create the Kibana dashboard by importing a JSON file containing its definition. To do this, download the Kibana dashboard and visualizations definition JSON file from here.
  2. Select Management in the menu on the left, and then select Saved Objects. On the right, select Import.
  3. Select the JSON file you downloaded and select Open. This imports the GuardDuty dashboard and visualizations. Select Yes, overwrite all objects.
  4. In the Index Pattern Conflicts section, under New index pattern, select gdt-*, and then select Confirm all changes.

Dashboard in action

  1. Select Dashboard in the menu on the left.
  2. Select the Guard Duty Summary link.

Your GuardDuty Dashboard will look like this:
 

Figure 16: The GuardDuty dashboard with callouts

The dashboard provides the following visualizations:

  1. This filter allows you to filter sample findings from real findings. If you generate sample findings from the GuardDuty AWS console, this filter allows you to remove the sample findings from the dashboard.
  2. The GuardDuty — Affected Instances chart shows which EC2 instances have associated findings. This visualization allows you to filter specific instances from display by selecting them in the graphic.
  3. The Guard Duty — Threat Type chart allows you to filter on the general attack type (inner circle) as well as the specific attack type (outer circle).
  4. The Guard Duty — Events Per Day graph allows you to visualize and filter on a specific time or date to show findings for that specific time, as well as search for temporal patterns in findings.
  5. GuardDuty — Top10 Findings provides a list of the top 10 findings by count.
  6. GuardDuty — Total Events provides the total number of events based on the criteria chosen. This value will change based on the filters defined.
  7. The GuardDuty — Heatmap — Port Probe Source Countries visualizes the countries where port probes are issued from. This is a Coordinate Map visualization that allows you to see the source and volume of the port probes targeting your instances.
  8. The GuardDuty — Network Connection Source Countries visualizes where brute force attacks are coming from. This is a Region Map visualization that allows you to highlight the country the brute force attacks are sourced from.
  9. GuardDuty — Severity Levels is a pie chart that shows findings by severity levels (High, Med, Low), and you can filter by a specific level (that is, only show high-severity findings). This visualization uses the scripted field we created earlier for simplified visualization.
  10. The All-GuardDuty table includes the raw findings for all events. This provides complete raw event detail and the ability to filter at very granular levels.

In a previous blog, we saw how you can create a Kibana dashboard to visualize your network security posture by visualizing your VPC flow logs. This GuardDuty dashboard augments that dashboard. You can use a single Elasticsearch cluster to host both of these dashboards, in addition to other data sources you want to analyze and report on.

Conclusion

We’ve outlined an approach to rapidly build a pipeline to help you archive, analyze, and visualize your GuardDuty findings for rapid insight and actionable intelligence. You can extend this solution in a number of ways, including:

  • Modifying the alert email sent with a structured message (instead of raw JSON)
  • Adding additional visualizations, such as heatmaps or time series charts
  • Extending the solution across AWS accounts or regions

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

Michael Fortuna

Michael is a Solutions Architect in AWS supporting enterprise customers and their journey to the cloud. Prior to his work on AWS and cloud technologies, Michael’s areas of focus included software-defined networking, security, collaboration, and virtualization technologies. He’s very excited to work as an SA because it allows him to dive deep on technology while helping customers.

Ravi Sakaria

Ravi is a Senior Solutions Architect at AWS based in New York. He works with enterprise customers as they transform their business and journey to the cloud. He enjoys the culture of innovation at Amazon because it’s similar to his prior experiences building startup companies. Outside of work, Ravi enjoys spending time with his family, cooking, and watching the New Jersey Devils.

How to Manage Amazon GuardDuty Security Findings Across Multiple Accounts

Post Syndicated from Tom Stickle original https://aws.amazon.com/blogs/security/how-to-manage-amazon-guardduty-security-findings-across-multiple-accounts/

Introduced at AWS re:Invent 2017, Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. In an AWS Blog post, Jeff Barr shows you how to enable GuardDuty to monitor your AWS resources continuously. That blog post shows how to get started with a single GuardDuty account and provides an overview of the features of the service. Your security team, though, will probably want to use GuardDuty to monitor a group of AWS accounts continuously.

In this post, I demonstrate how to use GuardDuty to monitor a group of AWS accounts and have their findings routed to another AWS account—the master account—that is owned by a security team. The method I demonstrate in this post is especially useful if your security team is responsible for monitoring a group of AWS accounts over which it does not have direct access—known as member accounts. In this solution, I simplify the work needed to enable GuardDuty in member accounts and route their findings by enabling GuardDuty in the master account and inviting member accounts.

Enable GuardDuty in a master account and invite member accounts

To get started, you must enable GuardDuty in the master account, which will receive GuardDuty findings. The master account should be managed by your security team, and it will display the findings from all member accounts. The master account can be reverted later by removing any member accounts you add to it. Adding member accounts is a two-way handshake mechanism to ensure that administrators from both the master and member accounts formally agree to establish the relationship.

To enable GuardDuty in the master account and add member accounts:

  1. Navigate to the GuardDuty console.
  2. In the navigation pane, choose Accounts.
    Screenshot of the Accounts choice in the navigation pane
  3. To designate this account as the GuardDuty master account, start adding member accounts:
    • You can add individual accounts by choosing Add Account, or you can add a list of accounts by choosing Upload List (.csv).
  4. Now, add the account ID and email address of the member account, and choose Add. (If you are uploading a list of accounts, choose Browse, choose the .csv file with the member accounts [one email address and account ID per line], and choose Add accounts.)
    Screenshot of adding an account

For security reasons, AWS checks to make sure each account ID is valid and that the email address you’ve entered for each member account is the one that was used to create the account. If a member account’s account ID and email address do not match, GuardDuty does not send an invitation.
Screenshot showing the Status of Invite

  5. After you add all the member accounts you want to add, you will see them listed in the Member accounts table with a Status of Invite. You don’t have to individually invite each account—you can choose a group of accounts and when you choose to invite one account in the group, all accounts are invited.
  6. When you choose Invite for each member account:
    1. AWS checks to make sure the account ID is valid and the email address provided is the email address of the member account.
    2. AWS sends an email to the member account email address with a link to the GuardDuty console, where the member account owner can accept the invitation. You can add a customized message from your security team. Account owners who receive the invitation must sign in to their AWS account to accept the invitation. The service also sends an invitation through the AWS Personal Health Dashboard in case the member email address is not monitored. This invitation appears in the member account under the AWS Personal Health Dashboard alert bell on the AWS Management Console.
    3. A pending-invitation indicator is shown on the GuardDuty console of the member account, as shown in the following screenshot.
      Screenshot showing the pending-invitation indicator

When the invitation is sent by email, it is sent to the account owner of the GuardDuty member account.
Screenshot of the invitation sent by email

The account owner can click the link in the email invitation or the AWS Personal Health Dashboard message, or the account owner can sign in to their account and navigate to the GuardDuty console. In all cases, the member account displays the pending invitation in the member account’s GuardDuty console with instructions for accepting the invitation. The GuardDuty console walks the account owner through accepting the invitation, including enabling GuardDuty if it is not already enabled.

If you prefer to work in the AWS CLI, you can enable GuardDuty and accept the invitation. To do this, call CreateDetector to enable GuardDuty, and then call AcceptInvitation, which serves the same purpose as accepting the invitation in the GuardDuty console.
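
Here is a minimal sketch of those two calls with the AWS CLI, run from the member account; the IDs are placeholders, and list-invitations returns the master account ID and invitation ID you need:

# Enable GuardDuty in the member account, then accept the pending invitation.
aws guardduty create-detector --enable
aws guardduty list-invitations
aws guardduty accept-invitation \
    --detector-id <detector-id> \
    --master-id 111111111111 \
    --invitation-id <invitation-id>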

  7. After the member account owner accepts the invitation, the Status in the master account is changed to Monitored. The status helps you track the status of each AWS account that you invite.
    Screenshot showing the Status change to Monitored

You have enabled GuardDuty on the member account, and all findings will be forwarded to the master account. You can now monitor the findings about GuardDuty member accounts from the GuardDuty console in the master account.

The member account owner can see GuardDuty findings by default and can control all aspects of the experience in the member account with AWS Identity and Access Management (IAM) permissions. Users with the appropriate permissions can end the multi-account relationship at any time by toggling the Accept button on the Accounts page. Note that ending the relationship changes the Status of the account to Resigned and also triggers a security finding on the side of the master account so that the security team knows the member account is no longer linked to the master account.

Working with GuardDuty findings

Most security teams have ticketing systems, chat operations, security information event management (SIEM) systems, or other security automation systems to which they would like to push GuardDuty findings. For this purpose, GuardDuty sends all findings as JSON-based messages through Amazon CloudWatch Events, a scalable service to which you can subscribe and to which AWS services can stream system events. To access these events, navigate to the CloudWatch Events console and create a rule that subscribes to the GuardDuty-related findings. You then can assign a target such as Amazon Kinesis Data Firehose that can place the findings in a number of services such as Amazon S3. The following screenshot is of the CloudWatch Events console, where I have a rule that pulls all events from GuardDuty and pushes them to a preconfigured AWS Lambda function.
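
A hedged CLI sketch of that wiring is below; the rule name and ARNs are placeholders for resources you have already created, and CloudWatch Events needs an IAM role it can assume to write into a Firehose target:

# Match all GuardDuty findings and hand them to an existing Firehose delivery stream.
aws events put-rule \
    --name guardduty-all-findings \
    --event-pattern '{"source":["aws.guardduty"]}'
aws events put-targets \
    --rule guardduty-all-findings \
    --targets 'Id=firehose,Arn=arn:aws:firehose:us-east-1:111111111111:deliverystream/guardduty-findings,RoleArn=arn:aws:iam::111111111111:role/cwe-to-firehose'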

Screenshot of a CloudWatch Events rule

The following example is a subset of GuardDuty findings that includes relevant context and information about the nature of a threat that was detected. In this example, the instanceId, i-00bb62b69b7004a4c, is performing Secure Shell (SSH) brute-force attacks against IP address 172.16.0.28. From a Lambda function, you can access any of the following fields such as the title of the finding and its description, and send those directly to your ticketing system.

Example GuardDuty findings
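
If you want to inspect the same fields ad hoc without a Lambda function, the GuardDuty CLI can pull them directly; this is an illustrative sketch and the finding ID is a placeholder:

# Find the detector in the current region, list recent finding IDs, then fetch details.
DETECTOR_ID=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
aws guardduty list-findings --detector-id "$DETECTOR_ID" --max-results 5
aws guardduty get-findings --detector-id "$DETECTOR_ID" --finding-ids <finding-id> \
    --query 'Findings[].{Title:Title,Severity:Severity,Type:Type}'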

You can use other AWS services to build custom analytics and visualizations of your security findings. For example, you can connect Kinesis Data Firehose to CloudWatch Events and write events to an S3 bucket in a standard format, which can be encrypted with AWS Key Management Service and then compressed. You also can use Amazon QuickSight to build ad hoc dashboards by using AWS Glue and Amazon Athena. Similarly, you can place the data from Kinesis Data Firehose in Amazon Elasticsearch Service, with which you can use tools such as Kibana to build your own visualizations and dashboards.

Like most other AWS services, GuardDuty is a regional service. This means that when you enable GuardDuty in an AWS Region, all findings are generated and delivered in that region. If you are regulated by a compliance regime, this is often an important requirement to ensure that security findings remain in a specific jurisdiction. Because customers have let us know they would prefer to be able to enable GuardDuty globally and have all findings aggregated in one place, we intend to give the choice of regional or global isolation as we evolve this new service.

Summary

In this blog post, I have demonstrated how to use GuardDuty to monitor a group of GuardDuty member accounts and aggregate security findings in a central master GuardDuty account. You can use this solution whether or not you have direct control over the member accounts.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about using GuardDuty, start a thread in the GuardDuty forum or contact AWS Support.

-Tom

OSSIM Download – Open Source SIEM Tools & Software

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/10/ossim-download-open-source-siem-tools-software/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

OSSIM Download – Open Source SIEM Tools & Software

OSSIM is a popular open source Security Information and Event Management (SIEM) product, providing event collection, normalization, and correlation.

OSSIM stands for Open Source Security Information Management. It was launched in 2003 by security engineers in response to the lack of available open source products, and it was created specifically to address the reality many security professionals face: a SIEM, whether it is open source or commercial, is virtually useless without the basic security controls necessary for security visibility.

Read the rest of OSSIM Download – Open Source SIEM Tools & Software now! Only available at Darknet.