
Kill Chains: Part 3 → What’s Next

Post syndicated from Jeffrey Gardner. Original post: https://blog.rapid7.com/2021/06/25/kill-chains-part-3-whats-next/


Life, the Universe, and Kill Chains

As the final entry in this blog series, we want to quickly recap what we have previously discussed and also look into the possible future of kill chains. If you haven’t already done so, please make sure to read the previous two entries in this series: Kill Chains: Part 1 → Strategic and operational value, and Kill Chains: Part 2 → Strategic and tactical use cases.

Fun with Graphs

In an effort to save time (and your sanity), I’ve created the following graph to illustrate the differences between the kill chains we’ve discussed:

[Graph: side-by-side comparison of the Lockheed Martin Cyber Kill Chain, MITRE ATT&CK, and the Unified Kill Chain]

What’s the bottom line? To paraphrase a line from the film The Gentlemen, “for (almost) every use case there is a kill chain, and for every kill chain a strategy.” Focused on malware defense or security awareness? The Cyber Kill Chain is worth a look. Need to assess your operational capabilities? MITRE ATT&CK. Looking to accurately model the behavior of attackers? Unified Kill Chain is “the way” (#mandalorian).

The Future

The kill chains of today (Lockheed Martin Cyber Kill Chain, MITRE ATT&CK, Unified Kill Chain) can trace their origins to a model first proposed by the military in the late 1990s known as F2T2EA (find, fix, track, target, engage, and assess). However, as we all know, attackers and their attacks evolve over time, and the rate at which they are evolving continues to accelerate. Since our kill chains evolved from military strategy, it only makes sense to look at what’s happened in military strategy since the 90s to get a glimpse of where the evolution of the cyber kill chains may be heading.

A newer model used by special operators is F3EAD (find, fix, finish, exploit, analyze, and disseminate). Let’s take a quick look at how this applies to cyber operations:

  • Find: Ask “who, what, where, when, why” when looking at an event
  • Fix: Verify what was discovered in the previous phase (true positive / false positive)
  • Finish: Use the information from the previous 2 phases to determine a course of action and degrade/eliminate the threat
  • Exploit: Identify IOCs using information from the previous phases
  • Analyze: Fuse your self-generated intelligence with third-party sources to identify any additional anomalous activity occurring in the environment
  • Disseminate: Distribute the results of the previous phases within the Security Operations Center (SOC) and to additional key stakeholders

One thing missing from the F3EAD model when applied to cyber operations is the inclusion of automation, aka Security Orchestration, Automation, and Response (SOAR). The efficiency gained through automation can greatly increase the speed at which the finish, exploit, and analyze phases are completed. The first two phases, find and fix, are ones I believe still require the human touch due to the “fuzzy” (aka contextual) nature of events occurring within an organization.
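To make the SOAR piece a little more concrete, below is a minimal sketch (written in Python) of what automating the exploit, analyze, and disseminate phases could look like. Everything here is a hypothetical stand-in: the Finding class, the toy intel feed, and the print-based dissemination are illustrations only, where a real playbook would pull the alert from your SIEM, enrich indicators against your threat intelligence platform, and push results to a ticketing or chat tool.

    import re
    from dataclasses import dataclass, field

    # Sketch of a SOAR-style playbook covering the exploit, analyze, and
    # disseminate phases of F3EAD. All data sources and destinations are
    # hypothetical stand-ins for real SIEM/intel/ticketing integrations.

    @dataclass
    class Finding:
        alert_id: str
        raw_text: str
        iocs: set = field(default_factory=set)
        intel_matches: list = field(default_factory=list)

    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    SHA256 = re.compile(r"\b[a-fA-F0-9]{64}\b")

    def exploit(finding: Finding) -> None:
        """Exploit: pull indicators of compromise out of the verified event."""
        finding.iocs.update(IPV4.findall(finding.raw_text))
        finding.iocs.update(SHA256.findall(finding.raw_text))

    def analyze(finding: Finding, intel_feed: dict) -> None:
        """Analyze: fuse self-generated IOCs with third-party intelligence."""
        for ioc in finding.iocs:
            if ioc in intel_feed:
                finding.intel_matches.append((ioc, intel_feed[ioc]))

    def disseminate(finding: Finding) -> None:
        """Disseminate: share results with the SOC and other stakeholders.
        Printing stands in for opening a ticket or posting to chat."""
        print(f"[{finding.alert_id}] extracted IOCs: {sorted(finding.iocs)}")
        for ioc, context in finding.intel_matches:
            print(f"  known-bad indicator {ioc}: {context}")

    if __name__ == "__main__":
        # Invented alert text and a toy threat intel feed, for illustration only.
        alert = Finding("ALRT-001", "Outbound beacon to 203.0.113.7 from host finance-ws-12")
        feed = {"203.0.113.7": "C2 infrastructure reported by a commercial feed"}
        exploit(alert)
        analyze(alert, feed)
        disseminate(alert)

Note that the find and fix phases are deliberately absent from the sketch; per the point above, that triage and verification work is where human judgment still earns its keep.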

The TL;DR of the above? The future of kill chains must include the fusion of intelligence and automation without removing the human element from the equation. Until the equivalent of Skynet is invented, i.e., a truly sentient artificial intelligence capable of thinking in abstract ways, the “gut feeling” an analyst or incident responder gets when examining data will continue to be an advantage for us regular humans. Pairing this with the unmatched efficiency and speed gained by utilizing SOAR = winning!

The Verdict

Kill chains represent a comprehensive way to think about and visualize cyber attacks. Being able to communicate using a common lexicon (i.e., the terms and concepts in a kill chain) is critical to helping all levels of your organization understand the importance of security. However, I fear another fracturing of our lexicon will occur as more and newer versions of kill chains are introduced. Additionally, there appears to be an overreliance on only detecting and preventing the Tactics, Techniques, and Procedures (TTPs) found within these frameworks. Attackers have proven to be incredibly creative and endlessly resourceful, so their TTPs are going to change and evolve in ways we cannot yet imagine. This doesn’t mean we should discount the importance of kill chains in our toolkit, but they should remain just that: one tool in the kit, not the gold standard by which we judge the effectiveness of the security programs we have created.

——————

Jeffrey Gardner, Practice Advisor for Detection and Response at Rapid7, recently presented a deep dive into all things kill chain. In it, he discusses how these methodologies can help your security organization cut down on threats and drastically reduce breach response times. You can also read the previous entries in this series for a general overview of kill chains and the specific frameworks we’ve discussed.

Watch the webcast now

Go back and read Part 1 → Strategic and operational value, or Part 2 → Strategic and tactical use cases

Attack Surface Analysis Part 3: Red and Purple Teaming

Post syndicated from Jeffrey Gardner. Original post: https://blog.rapid7.com/2021/06/22/attack-surface-analysis-part-3-red-and-purple-teaming/



This is the third and final installment in our 2021 series around attack surface analysis. In Part 1, I described vulnerability assessment and discussed its value and challenges. Part 2 explored the why and how of conducting penetration testing and gave some tips on what to look for when planning an engagement. In this installment, I’ll detail the final two analysis techniques: red and purple teaming.

Previously, we rather generically defined a red team engagement as a capabilities assessment. Time to get a little more specific with our terminology, with a better definition once again courtesy of NIST:

“A [red team is a] group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment.”

(Source: https://csrc.nist.gov/glossary/term/Red_Team)

If you’re scratching your head about now thinking “well, that sounds awfully similar to a pentest,” I’ve put together the following table to really illustrate the differences:

[Table: key differences between a penetration test and a red team engagement]

Additionally, like the various methodologies available for pentesting, red teams have different options in how they perform their engagements. The most common methodology that many of you have no doubt heard of is the MITRE ATT&CK framework, but there are others out there. Each of the options below has a different focus, whether it be red teaming for financial services or threat intel-based red teaming, so there is a flavor available to meet your needs:

  1. TIBER-EU—Threat Intelligence-Based Ethical Red Teaming Framework
  2. CBEST—Framework originating in the UK
  3. iCAST—Intelligence-Led Cyber Attack Simulation Testing
  4. FEER—Financial Entities Ethical Red Teaming
  5. AASE—Adversarial Attack Simulation Exercises
  6. NATO—CCDCOE red team framework

You may be thinking, “There’s no way I can stand up an internal red team, and I don’t have the budget for a professional engagement, but I would really like to test my blue team. How can I do this on my own!?” Well, you don’t have to go it alone! There are plenty of open source tools available to help you take that first step. While the following tools are nowhere near as capable or extensive as a human-led team, they do give a number of useful insights into potential weaknesses in your detection and response capabilities (and for a sense of just how small that first step can be, there’s a minimal sketch after the list):

  1. APTSimulator—Batch script for Windows that makes it look as if a system were compromised
  2. Atomic Red Team—Detection tests mapped to the MITRE ATT&CK framework
  3. AutoTTP—Automated Tactics, Techniques & Procedures
  4. Blue Team Training Toolkit (BT3)—Software for defensive security training
  5. Caldera—Automated adversary emulation system by MITRE that performs post-compromise adversarial behavior within Windows networks
  6. DumpsterFire—Cross-platform tool for building repeatable, time-delayed, distributed security events
  7. Metta—Information security preparedness tool
  8. Network Flight Simulator—Utility used to generate malicious network traffic and help teams to evaluate network-based controls and overall visibility
  9. Red Team Automation (RTA)—Framework of scripts designed to allow blue teams to test their capabilities, modeled after MITRE ATT&CK
  10. RedHunt-OS—Virtual machine loaded with a number of tools designed for adversary emulation and threat hunting
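If you want a feel for just how small that first step can be, here is a bare-bones sketch in the same spirit as the tools above, though nowhere near as thorough. It generates two benign but suspicious-looking events and then tells you what to hunt for; the domain and command are arbitrary examples chosen here for illustration and are not taken from any of the projects listed.

    import datetime
    import socket
    import subprocess

    # Tiny, do-it-yourself detection test: generate benign but suspicious-looking
    # events, then manually verify whether your monitoring noticed them.
    # The domain and command below are made up for illustration.

    def simulate_dga_lookup() -> str:
        """Issue a DNS query for a DGA-looking domain (it will not resolve)."""
        domain = "xk3v9q2lwz8r.example"
        try:
            socket.gethostbyname(domain)
        except socket.gaierror:
            pass  # resolution failure is expected; the query itself is the artifact
        return domain

    def simulate_recon_command() -> str:
        """Run a harmless command often seen during hands-on-keyboard recon."""
        subprocess.run(["whoami"], capture_output=True, check=False)
        return "whoami"

    if __name__ == "__main__":
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        domain = simulate_dga_lookup()
        command = simulate_recon_command()
        print(f"{stamp} generated test events:")
        print(f"  DNS query for {domain} (look for it in DNS or proxy logs)")
        print(f"  execution of '{command}' (look for it in endpoint telemetry)")
        print("Now check whether your SIEM or EDR surfaced either event.")

Even a script this small exercises the loop the tools above automate at scale: generate an event, then confirm your detection and response pipeline actually saw it.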

Lastly, before we head into a description of purple teaming, I want to reiterate what we’ve discussed thus far. The goal of a red team engagement is not just discovering gaps in the detection and response capabilities of an organization. The purpose is to discover the blue team’s weaknesses in terms of processes, coordination, communication, etc., with the list of detection gaps being a byproduct of the engagement itself.

Purple Teaming

While the name may give away the upcoming discussion (red team + blue team = purple team), the purpose of the purple team is to enhance information sharing between both teams, not to replace or combine either team into a new entity. The division of labor, and what each side should be sharing, looks like this (with a small sketch of that shared record after the list):

  • Red Team = Tests an organization’s defensive processes, coordination, etc.
  • Blue Team = Understands attacker TTPs and designs defenses accordingly
  • Purple Team = Ensures both teams are cooperating
  • Red teams should share TTPs with the blue team
  • Blue teams should share knowledge of defensive actions with the red team
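As a concrete (and deliberately tiny) example of what that sharing can look like in practice, here is a sketch of a shared record: red documents which ATT&CK techniques it exercised and whether they succeeded, blue documents whether a detection fired, and the purple team review surfaces the gaps. The technique IDs are real ATT&CK identifiers, but the results and rule names are invented for illustration.

    # Shared purple team record: red team results on one side, blue team
    # detection outcomes on the other, and a review that surfaces the gaps.
    # Technique IDs are real MITRE ATT&CK identifiers; the results and rule
    # names are invented for illustration.

    red_team_runs = {
        "T1059.001": {"description": "PowerShell execution", "succeeded": True},
        "T1003.001": {"description": "LSASS memory access", "succeeded": True},
        "T1566.001": {"description": "Spearphishing attachment", "succeeded": False},
    }

    blue_team_detections = {
        "T1059.001": {"alert_fired": True, "rule": "Suspicious PowerShell command line"},
        "T1003.001": {"alert_fired": False, "rule": None},
    }

    def purple_team_review(red: dict, blue: dict) -> list:
        """Return techniques that succeeded for red but produced no alert for blue."""
        gaps = []
        for technique, run in red.items():
            detection = blue.get(technique, {"alert_fired": False})
            if run["succeeded"] and not detection["alert_fired"]:
                gaps.append(f"{technique} ({run['description']}): no detection fired")
        return gaps

    if __name__ == "__main__":
        for gap in purple_team_review(red_team_runs, blue_team_detections):
            print("GAP:", gap)

In practice this record usually lives in a spreadsheet, wiki, or purpose-built tool rather than a script, but the principle is the same: both teams write to it, and both teams read from it.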

Realistically, if both of your teams are already doing this, then congratulations! You have a functional purple team. However, if you’re like me and are a fan of more form and structure, check out the illustration below:

[Diagram: Atomic Purple Team lifecycle]

(Source: https://github.com/DefensiveOrigins/AtomicPurpleTeam)

Seems pretty simple, right? In theory it is, but in practice it gets a little more difficult (though probably not in the way you’re thinking). The biggest hurdle to effective purple teaming is helping the blue and red teams overcome the competitiveness that exists between them. Team Blue doesn’t want to give away how they catch bad guys, and Team Red doesn’t want to give away the secrets of the dark arts. By breaking down those walls, you can show Blue that they become better defenders by understanding how Red operates, and show Red that they can enhance their effectiveness by expanding their knowledge of defensive operations in partnership with Blue. In this way, the teams will actually want to work together (and dogs and cats will start living together, MASS HYSTERIA).

I hope the information above is helpful as you determine which analysis strategy makes sense for you! Check out the other posts in this series for more information on additional analysis techniques to take your program to the next level:

Part 1: Vulnerability Scanning
Part 2: Penetration Testing