Tag Archives: Supply Chain Attacks

polyfill.io now available on cdnjs: reduce your supply chain risk

Post Syndicated from Sven Sauleau original https://blog.cloudflare.com/polyfill-io-now-available-on-cdnjs-reduce-your-supply-chain-risk


Polyfill.io is a popular JavaScript library that smooths over differences between older browser versions. These differences often take up substantial development time.

It does this by adding support for modern functions (via polyfilling), ultimately letting developers work against a uniform environment, which simplifies development. The tool has historically been loaded by linking to the endpoint provided under the domain polyfill.io.

In the interest of providing developers with additional options to use polyfill, today we are launching an alternative endpoint under cdnjs. You can replace links to polyfill.io “as is” with our new endpoint. You will then rely on the same service and reputation that cdnjs has built over the years for your polyfill needs.

Our interest in creating an alternative endpoint was also sparked by concerns raised by the community and main contributors following the transition of the polyfill.io domain to a new provider (Funnull).

The concern is that any website embedding a link to the original polyfill.io domain will now be relying on Funnull to maintain and secure the underlying project to avoid the risk of a supply chain attack. Such an attack occurs when the underlying third party is compromised or alters the code being served to end users in nefarious ways, compromising, in turn, every website that uses the tool.

Supply chain attacks, in the context of web applications, are a growing concern for security teams, and they are what led us to build a client-side security product to detect and mitigate these attack vectors: Page Shield.

Irrespective of the scenario described above, this is a timely reminder of the complexities and risks tied to modern web applications. For us as maintainers and contributors of cdnjs, currently used by more than 12% of all sites, this reinforces our commitment to help keep the Internet safe.

polyfill.io on cdnjs

The full polyfill.io implementation has been deployed at the following URL:

https://cdnjs.cloudflare.com/polyfill/

The underlying bundle links are:

For minified: https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js
For unminified: https://cdnjs.cloudflare.com/polyfill/v3/polyfill.js

Usage and deployment are intended to be identical to the original polyfill.io site. As a developer, you should be able to simply “replace” the old link with the new cdnjs-hosted link without observing any side effects, besides a possible improvement in performance and reliability.

If you don’t have access to the underlying website code, but your website is behind Cloudflare, replacing the links is even easier, as you can deploy a Cloudflare Worker to update the links for you:

export interface Env {}

export default {
    async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
        ctx.passThroughOnException();

        const response = await fetch(request);

        if ((response.headers.get('content-type') || '').includes('text/html')) {
            const rewriter = new HTMLRewriter()
                .on('link', {
                    element(element) {
                        const rel = element.getAttribute('rel');
                        if (rel === 'preconnect') {
                            const href = new URL(element.getAttribute('href') || '', request.url);

                            if (href.hostname === 'polyfill.io') {
                                href.hostname = 'cdnjs.cloudflare.com';
                                element.setAttribute('href', href.toString());
                            }
                        }
                    },
                })

                .on('script', {
                    element(element) {
                        if (element.hasAttribute('src')) {
                            const src = new URL(element.getAttribute('src') || '', request.url);
                            if (src.hostname === 'polyfill.io') {
                                src.hostname = 'cdnjs.cloudflare.com';
                                src.pathname = '/polyfill' + src.pathname;

                                element.setAttribute('src', src.toString());
                            }
                        }
                    },
                });

            return rewriter.transform(response);
        } else {
            return response;
        }
    },
};

Instructions on how to deploy a worker can be found on our developer documentation.

You can also test the Worker on your website before deploying it; you can find instructions in a previous blog post.

Implemented with Rust on Cloudflare Workers

We were happy to discover that polyfill.io is a Rust project. As you might know, Rust has been a first-class citizen on Cloudflare Workers from the start.

The polyfill.io service was hosted on Fastly and used their Rust library. We forked the project to add compatibility with Cloudflare Workers, and plan to make the fork publicly accessible in the near future.

Worker

The https://cdnjs.cloudflare.com/polyfill/[...].js endpoints are also implemented in a Cloudflare Worker that wraps our Polyfill.io fork. The wrapper uses Cloudflare’s Rust API and looks like the following:

#[event(fetch)]
async fn main(req: Request, env: Env, ctx: Context) -> Result<Response> {
    let metrics = {...};

    let polyfill_store = get_d1(&req, &env)?;
    // metrics handle is assumed cheaply cloneable (e.g. Arc-backed), since
    // it is used again below after being handed to the service environment
    let polyfill_env = Arc::new(service::Env { polyfill_store, metrics: metrics.clone() });

    // Run the polyfill.io entrypoint
    let res = service::handle_request(req, polyfill_env).await;

    let status_code = if let Ok(res) = &res {
        res.status_code()
    } else {
        500
    };
    metrics
        .requests
        .with_label_values(&[&status_code.to_string()])
        .inc();

    ctx.wait_until(async move {
        if let Err(err) = metrics.report_metrics().await {
            console_error!("failed to report metrics: {err}");
        }
    });

    res
}

The wrapper only sets up our internal metrics and logging tools, so we can monitor uptime and performance of the underlying logic while calling the Polyfill.io entrypoint.

Storage for the Polyfill files

All the polyfill files are stored in a key-value store powered by Cloudflare D1. This allows us to fetch as many polyfill files as we need with a single SQL query, as opposed to the original implementation doing one KV get() per file.

For performance, we have one Cloudflare D1 instance per region and the SQL queries are routed to the nearest database.
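
As a rough illustration of the approach (the polyfills table and its columns are hypothetical, not Cloudflare’s actual schema), a single query can return any number of polyfill sources at once, which you could prototype against your own D1 database with Wrangler:

# Hypothetical sketch: fetch several polyfill sources in one round trip
# instead of issuing one KV get() per file.
wrangler d1 execute polyfill-db --command \
  "SELECT name, source FROM polyfills WHERE name IN ('fetch', 'Promise', 'Object.entries');"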

cdnjs for your JavaScript libraries

cdnjs hosts over 6,000 JavaScript libraries as of today. We are always looking for ways to improve the service and provide new content. We listen to community feedback and welcome suggestions on our community forum or on the cdnjs GitHub repository.

Page Shield is also available to all paid plans. Log in to turn it on with a single click to increase visibility and security for your third party assets.

3 Takeaways From the 2022 Verizon Data Breach Investigations Report

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/05/31/3-takeaways-from-the-2022-verizon-data-breach-investigations-report/

3 Takeaways From the 2022 Verizon Data Breach Investigations Report

Sometimes, data surprises you. When it does, it can force you to rethink your assumptions and second-guess the way you look at the world. But other times, data can reaffirm your assumptions, giving you hard proof they’re the right ones — and providing increased motivation to act decisively based on that outlook.

The 2022 edition of Verizon’s Data Breach Investigations Report (DBIR), which looks at data from cybersecurity incidents that occurred in 2021, is a perfect example of this latter scenario. This year’s DBIR rings many of the same bells that have been resounding in the ears of security pros worldwide for the past 12 to 18 months — particularly, the threat of ransomware and the increasing relevance of complex supply chain attacks.

Here are our three big takeaways from the 2022 DBIR, and why we think they should have defenders doubling down on the big cybersecurity priorities of the current moment.

1. Ransomware’s rise is reaffirmed

In 2021, it was hard to find a cybersecurity headline that didn’t somehow pertain to ransomware. It impacted some 80% of businesses last year and threatened some of the institutions most critical to our society, from primary and secondary schools to hospitals.

This year’s DBIR confirms that ransomware is the critical threat that security pros and laypeople alike believe it to be. Ransomware-related breaches increased by 13% in 2021, the study found — that’s a greater increase than we saw in the past 5 years combined. In fact, nearly 50% of all system intrusion incidents — i.e., those involving a series of steps by which attackers infiltrate a company’s network or other systems — involved ransomware last year.

While the threat has massively increased, the top methods of ransomware delivery remain the ones we’re all familiar with: desktop sharing software, which accounted for 40% of incidents, and email at 35%, according to Verizon’s data. The growing ransomware threat may seem overwhelming, but the most important steps organizations can take to prevent these attacks remain the fundamentals: educating end users on how to spot phishing attempts and maintain security best practices, and equipping infosec teams with the tools needed to detect and respond to suspicious activity.

2. Attackers are eyeing the supply chain

In 2021 and 2022, we’ve been using the term “supply chain” more than we ever thought we would. COVID-induced disruptions in the flow of commodities and goods caused lumber prices to skyrocket and automakers to run short on microchips.

But security pros have had a slightly different sense of the term on their minds: the software supply chain. Breaches from Kaseya to SolarWinds — not to mention the Log4j vulnerability — reminded us all that vendors’ systems are just as likely a vector of attack as our own.

Unfortunately, Verizon’s Data Breach Investigations Report indicates these incidents are not isolated events — the software supply chain is, in fact, a major avenue of exploitation by attackers. Indeed, 62% of cyberattacks that follow the system intrusion pattern began with the threat actors exploiting vulnerabilities in a partner’s systems, the study found.

Put another way: If you were targeted with a system intrusion attack last year, it was almost twice as likely that it began on a partner’s network than on your own.

While supply chain attacks still account for just under 10% of overall cybersecurity incidents, according to the Verizon data, the study authors point out that this vector continues to account for a considerable slice of all incidents each year. That means it’s critical for companies to keep an eye on both their own and their vendors’ security posture. This could include:

  • Demanding visibility into the components behind software vendors’ applications
  • Staying consistent with regular patching updates
  • Acting quickly to remediate and emergency-patch when the next major vulnerability that could affect high numbers of web applications rears its head

3. Mind the app

Between Log4Shell and Spring4Shell, the past 6 months have jolted developers and security pros alike to the realization that their web apps might contain vulnerable code. This proliferation of new avenues of exploitation is particularly concerning given just how commonly attackers target web apps.

Compromising a web application was far and away the top cyberattack vector in 2021, accounting for roughly 70% of security incidents, according to Verizon’s latest DBIR. Meanwhile, web servers themselves were the most commonly exploited asset type — they were involved in nearly 60% of documented breaches.

More than 80% of attacks targeting web apps involved the use of stolen credentials, emphasizing the importance of user awareness and strong authentication protocols at the endpoint level. That said, 30% of basic web application attacks did involve some form of exploited vulnerability — a percentage that should be cause for concern.

“While this 30% may not seem like an extremely high number, the targeting of mail servers using exploits has increased dramatically since last year, when it accounted for only 3% of the breaches,” the authors of the Verizon DBIR wrote.

That means vulnerability exploits accounted for a 10 times greater proportion of web application attacks in 2021 than they did in 2020, reinforcing the importance of being able to quickly and efficiently test your applications for the most common types of vulnerabilities that hackers take advantage of.

Stay the course

For those who’ve been tuned into the current cybersecurity landscape, the key themes of the 2022 Verizon DBIR will likely feel familiar — and with so many major breaches and vulnerabilities that claimed the industry’s attention in 2021, it would be surprising if there were any major curveballs we missed. But the key takeaways from the DBIR remain as critical as ever: Ransomware is a top-priority threat, software supply chains need greater security controls, and web applications remain a key attack vector.

If your go-forward cybersecurity plan reflects these trends, that means you’re on the right track. Now is the time to stick to that plan and ensure you have tools and tactics in place that let you focus on the alerts and vulnerabilities that matter most.


2022 Planning: Designing Effective Strategies to Manage Supply Chain Risk

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2021/10/22/2022-planning-designing-effective-strategies-to-manage-supply-chain-risk/

2022 Planning: Designing Effective Strategies to Manage Supply Chain Risk

Supply chains are on everyone’s mind right now — from consumer-tech bottlenecks to talks of holiday-season toy shortages. Meanwhile, cyberattacks targeting elements of the supply chain have become increasingly common and impactful — making this area of security a top priority as organizations ensure their digital defense plans are ready for 2022.

Here’s the thing, though: Supply chains are enormously complex, and securing all endpoints in your partner ecosystem can be a herculean challenge.

On Thursday, October 21, 2 members of Rapid7’s Research team — Erick Galinkin, Principal Artificial Intelligence Researcher, and Bob Rudis, Chief Security Data Scientist — sat down to get the perspectives of 2 industry panelists: Loren Morgan, VP of Global IT Operations, Infrastructure and Delivery at Owens & Minor; and Dan Walsh, CISO at VillageMD. They discussed the dynamics of supply chain security, how they think about vendor risk, and what they’re doing to tackle these challenges at their organizations.



Head to our 2022 Planning series page for more – full replay available soon!

What is supply chain risk, anyway?

The conversation kicked off with a foundational question: What do we mean when we talk about supply chain risk? The answer here is particularly important, given how sprawling and multivariate modern-day supply chains have become.

Dan defined the concept as “the risk inherent in the way we deliver business results.” For example, you might be working with a solutions provider whose software relies on open-source libraries, which could introduce vulnerabilities. The impact can be particularly high when a vendor your organization relies on in a strategic, business-critical capacity experiences a security issue.

Bob noted that the nature of supply chain risk hasn’t fundamentally changed in the past decade-plus — what’s different today is the scale of the problem. That includes not only the size of supply chains themselves but also the magnitude of the risks, as attacks increase in frequency and scope.

For Loren, acknowledging and acting on these growing risks means asking a central question: How are our partners investing in their own defenses? And further, how can we get visibility into the actions our vendors are taking to counteract their vulnerabilities?

Dropping the SBOM

Erick pointed out that one of the more practical ways of achieving visibility with technology vendors is the software bill of materials (SBOM). An SBOM is a list of all the libraries, dependencies, third-party modules, and other components that a provider brings into their software product.

“It’s like an ingredient list on a package of food,” Dan said. Because of the level of detail it provides, an SBOM can offer much greater insight into vulnerabilities than a compliance certification like SOC2 would.

“Ultimately, from our vendors, what we’re looking for is trust,” Dan noted. The visibility an SBOM provides can go a long way toward achieving that trust.

But not all vendors will jump at the request to produce an SBOM. And how do you know the SBOM you receive is accurate and complete? The cloud complicates the picture considerably, too.

“A SaaSBOM is a lot trickier,” Erick noted. With fully cloud-based applications, verifying what’s in an SBOM becomes a much tougher task. And cloud misconfigurations have become an increasingly prominent source of vulnerabilities — especially as today’s end users are leveraging an array of easy-to-use SaaS tools and browser extensions, multiplying the potential points of risk.

Dan suggested that in the future, the industry might move to an ABOM — a highly memorable shorthand for “application bill of materials” — which would include all source code, infrastructure, and other key components that make an application tick. This would help provide a deeper level of visibility and trust when evaluating the risks inherent in the ever-growing lists of applications that enterprises rely on in today’s cloud-first technology ecosystem.

Taking action

So, what key concepts and practices should you implement as you put together a 2022 cybersecurity plan that factors in supply chain risk? Here are a few suggestions our panel discussed.

  • Invest in talent: “Find somebody who’s been there, done that,” Loren urged. Having experienced people on board who can stand up a third-party risk assessment program and handle everything it entails — from interviewing vendors to reviewing SBOMs and other artifacts — can help make this complex task more manageable.
  • Tailor scrutiny by vendor: Not all third parties carry the same level of risk, primarily because of the type of data they access. Accordingly, your vetting process should reflect the vendor you’re evaluating and the specific level of risk associated with them. This will save time and energy when evaluating partners who don’t introduce as much risk and ensure the higher-risk vendors get the appropriate level of scrutiny. In Dan’s work at VillageMD, for example, private health information (PHI) is the most critical type of data that needs the highest security, so vendors handling PHI need to be more rigorously vetted.
  • Think about your internal supply chain: As Bob pointed out, virtually all organizations today are doing some amount of development — whether they’re a full-on software provider or simply building their own website. That means we’re all susceptible to introducing the same kinds of vulnerabilities that our vendors might, impacting not just our own security but our customers’ as well. For example, what happens if a developer introduces a vulnerable component into your product’s source code? Or what if your DevOps team introduced a misconfiguration? Does your security operations team have a clear way to know that? Be sure to put guardrails in place by establishing a foundational software development life cycle (SDLC) process for all areas where you’re doing development.
  • Identify your no-go’s: Each of our panelists also had a few things they considered make-or-break when it comes to vendor assessments — requests that, if not met, would sink any conversation with a potential partner. For Bob, it was a vendor’s ability to supply a penetration test with complete findings. Loren echoed this, and also said he insists that partners share their data handling processes. For Dan, it was the right to audit the vendor and their software annually. Identify what these no-go’s are for your organization, and build them into vendor conversations and contracts.

Ultimately, holding your vendors accountable is the most important step you can take in the effort to build a secure supply chain.

“It’s incumbent on consumers to hold their vendors’ feet to the fire and say, ‘How are you doing this?'” Erick commented. Demand real data and clear documentation rather than vague responses. When we do this for our own organizations, we make each other safer by demanding more of vendors and raising the bar for security across the supply chain.

Stay tuned for the next 2 installments in our 2022 Planning webcast series! Next up, we’ll be discussing the path to effective cybersecurity maturity and how to factor that journey into your 2022 cybersecurity program. Sign up today!

Security at Scale in the Open-Source Supply Chain

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/09/08/security-at-scale-in-the-open-source-supply-chain/

Security at Scale in the Open-Source Supply Chain

“We’ve all heard of paying it forward, but this is ridiculous!” That’s probably what most of us think when one of our partners or vendors inadvertently leaves an open door into our shared supply-chain network, letting an attacker enter at any time. Well, we probably think in slightly more expletive-laden terms, but nonetheless, no organization or company wants to be the focal point of blame from a multitude of (formerly) trusting partners or vendors.

Open-source software (OSS) is particularly susceptible to these vulnerabilities. OSS is simultaneously incredible and incredibly vulnerable. In fact, so many risks can result from structuring operations largely on OSS that vendors may not prioritize patching a vulnerability even once their security team is alerted. And can we blame them? They want to continue operations and feed the bottom line, not put a pause on operations to forever chase vulnerabilities and patch them one by one. But that leaves all of their supply-chain partners open to exploitation. What to do?

The supply-chain scene

Throughout a 12-month timeframe spanning 2019-2020, attacks aimed at OSS increased 430%, according to a study by Sonatype. It’s not quite as simple as “gain access to one, gain access to all,” but if a bad actor is properly motivated, this is exactly what can happen. In terms of motivation, supply-chain attackers can fall into 2 groups:

  • Bandwagoners: Attackers falling into this group will often wait for public disclosure of supply-chain vulnerabilities.
  • Ahead-of-the-curvers: Attackers falling into this group will actively hunt for and exploit vulnerabilities, saddling the unfortunate organization with malware and threatening its entire supply chain.

To add to attackers’ advantage, the same Sonatype study also found that a shockingly low percentage of security organizations learn of new open-source vulnerabilities in the short term after they’re disclosed. Sure, everyone’s busy and has their priorities. But that ethos persists while these vulnerabilities are being exploited. Perhaps the project was shipped on time, but malicious code was simultaneously being injected somewhere along the line. Then, instead of continuing with forward progress, remediation becomes the name of the game.

According to the Sonatype report, there were more than a trillion open-source component and container download requests in 2020 alone. The most important aspects to consider then are the security history of your component(s) and how dependents along your supply chain are using them. Obviously, this can be overwhelming to think about, but with researchers increasingly focused on remediation at scale, the future of supply-chain security is starting to look brighter.

Learn more about open-source security + win some cash!

Submit to the 2021 Velociraptor Contributor Competition

Securing at scale

Instead of the one-by-one approach to patching, security professionals need to start thinking about securing entire classes of vulnerabilities. It’s true that there is no current catch-all mechanism for such efficient action. But researchers can begin to work together to create methodologies that enable security organizations to better prioritize vulnerability risk management (VRM) instead of filing each one away to patch at a later date.

Of course, preventive security measures — inclusive of our shift-left culture — can help to mitigate the need to scale such remediation actions; the fact remains, though, that bad actors will always find a way. Therefore, until there are effective ways to eliminate large swaths of vulnerabilities at once, there is a growing need for teams to adhere to current best practices and measures like:

  • Dedicating time and resources to help ensure code is secure all along the chain
  • Thinking holistically about the security of open-source code with regard to the CI/CD lifecycle and the entire stack
  • Being willing to pitch in and develop coordinated, industry-wide efforts to improve the security of OSS at scale
  • Educating outside stakeholders on just how interdependent supply-chain-linked organizations are

As supply-chain attackers refine their methods to target ever-larger companies, the pressure is on developers to refine their understanding of how each and every contributor on a team can expose the organization and its partners along the chain, as The Linux Foundation points out. However, is this too much to put on the shoulders of DevOps? Shifting left to a DevSecOps culture is great and all, but teams are now being asked to think in the context of securing an entire supply chain’s worth of output.

This is why the industry at large must continue the push for research into new ways to eliminate entire classes of vulnerabilities. That’s a seismic shift left that will only help developers — and really, everyone — put more energy into things other than security.

Monitoring mindfully

While a proliferation of OSS components — as advantageous as they are for collaboration at scale — can make a supply chain vulnerable, the power of one open-source community can help monitor another open-source community. Velociraptor by Rapid7 is an open-source digital forensics and incident response (DFIR) platform.

This powerful DFIR tool thrives in loaded conditions. It can quickly scale incident response and monitoring and help security organizations to better prioritize remediation — actions well-suited to address the scale of modern supply-chain attacks. How quickly organizations choose to respond to incidents or vulnerabilities is, of course, up to them.

Supply chain security is ever-evolving

If one link in the chain is attacked via a long-languishing vulnerability whose risk has become increasingly hard to manage, it almost goes without saying that the affected company’s partners and vendors will immediately lose confidence in it, because the entire chain is now at risk. The public’s confidence will likely follow.

There are any number of preventive measures an interdependent security organization can implement. However, the need for further research into scaling security for whole classes of vulnerabilities comes at a crucial time, as global supply-chain attacks of all shapes and sizes occur more frequently.

Want to contribute to a more secure open-source future?

Submit to the 2021 Velociraptor Contributor Competition

Securing the Supply Chain: Lessons Learned from the Codecov Compromise

Post Syndicated from Justin Pagano original https://blog.rapid7.com/2021/07/09/securing-the-supply-chain-lessons-learned-from-the-codecov-compromise/

Securing the Supply Chain: Lessons Learned from the Codecov Compromise

Supply chain attacks are all the rage these days. While they’re not a new part of the threat landscape, they are growing in popularity among more sophisticated threat actors, and they can create significant system-wide disruption, expense, and loss of confidence across multiple organizations, sectors, or regions. The compromise of Codecov’s Bash Uploader script is one of the latest such attacks. While much is still unknown about the full impact of this incident on organizations around the world, it’s been another wake-up call that cybersecurity problems are getting more complex by the day.

This blog post is meant to provide the security community with defensive knowledge and techniques to protect against supply chain attacks involving continuous integration (CI) systems, such as Jenkins and Bamboo, and version control systems, such as GitHub and GitLab. It covers prevention techniques — for software suppliers and consumers — as well as detection and response techniques in the form of a playbook.

It has been co-developed by our Information Security, Security Research, and Managed Detection & Response teams. We believe one of the best ways for organizations to close their security achievement gap and outpace attackers is by openly sharing knowledge about ever-evolving security best practices.

Defending CI systems and source code repositories from similar supply chain attacks

Below are some of the security best practices defenders can use to prevent, detect, and respond to incidents like the Codecov compromise.


Figure 1: High-level overview of known Codecov supply chain compromise stages

Prevention techniques

Provide and perform integrity checks for executable code

If you’re a software consumer

Use collision-resistant checksum hashes, such as SHA-256 or SHA-512, provided by your vendor to validate all executable files or code they provide. Likewise, verify the digital signatures on all executable files or code they provide.

If either of these integrity checks fails, notify your vendor ASAP, as this could be a sign of compromised code.
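
As a minimal shell sketch of both checks (file names, the expected hash, and the signature file are placeholders for whatever your vendor actually publishes):

# Validate the vendor-published SHA-256 checksum; "<expected-sha256>" is
# the value obtained out-of-band from the vendor.
echo "<expected-sha256>  vendor-tool.sh" | sha256sum -c -

# Verify the vendor's detached signature, assuming their public signing
# key has already been imported into your keyring.
gpg --verify vendor-tool.sh.asc vendor-tool.sh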

If you’re a software supplier

Provide collision-resistant hashes, such as SHA-256 or SHA-512, and store checksums out-of-band from their corresponding files (i.e., make it so that an attacker has to successfully carry out two attacks to compromise your code: one against the system hosting your checksum data and another against your content delivery systems). Provide users with easy-to-use instructions, including sample code, for performing checksum validation.

Additionally, digitally sign all executable code using tamper-resistant code signing frameworks such as in-toto and secure software update frameworks such as The Update Framework (TUF) (see DataDog’s blog post about using these tools for reference). Simply signing code with a private key is insufficient since attackers have demonstrated ways to compromise static signing keys stored on servers to forge authentic digital signatures.

Relevant for the following Codecov compromise attack stages:

  • Customers’ CI jobs dynamically load Bash Uploader
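
On the supplier side, a minimal sketch of that guidance (artifact paths and the bucket name are hypothetical) is to generate checksums at release time and publish them through separate infrastructure from the files themselves:

# Generate checksums for all release artifacts.
sha256sum dist/*.js > checksums.sha256

# Publish the checksum file out-of-band from the CDN serving the files,
# so an attacker must compromise two systems to forge both.
aws s3 cp checksums.sha256 s3://example-release-metadata/v1.2.3/checksums.sha256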

Version control third-party software components

Store and load local copies of third-party components in a version control system to track changes over time. Only update them after comparing code differences between versions, performing checksum validation, and authenticating digital signatures.

Relevant for the following Codecov compromise attack stages:

  • Bash Uploader script modified and replaced in GCS
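
A sketch of that vendoring workflow for a script dependency like the Bash Uploader (the vendor/ path is illustrative):

# Fetch a local copy of the third-party script into your own repository.
curl -fsSL https://codecov.io/bash -o vendor/codecov-uploader.sh

# Review exactly what changed since the copy you last committed.
git diff vendor/codecov-uploader.sh

# Commit only after the diff review, checksum validation, and signature
# check all pass.
git add vendor/codecov-uploader.sh
git commit -m "Update vendored Codecov uploader after review"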

Implement egress filtering

Identify trusted internet-accessible systems and apply host-based or network-based firewall rules to allow egress network traffic only to those trusted systems. Use specific IP addresses and fully qualified domain names whenever possible, and fall back to IP ranges, subdomains, or domains only when necessary.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated
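
As an illustrative sketch with iptables (203.0.113.10 is a placeholder for one of your trusted destinations):

# Permit egress only to an explicitly trusted host over HTTPS,
iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 443 -j ACCEPT
# allow replies on already-established connections,
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# and drop all other outbound traffic.
iptables -A OUTPUT -j DROP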

Implement IP address safelisting

While zero-trust networking (ZTN) has cast doubt on the effectiveness of network perimeter security controls such as IP address safelisting, they are still one of the easiest and most effective ways to mitigate attacks targeting internet-routable systems. IP address safelisting is especially useful for protecting service account access to systems when ZTN controls like hardware-backed device authentication certificates aren’t feasible to implement.

Popular source code repository services, such as GitHub, provide this functionality, although it may require you to host your own server or, if using their cloud hosted option, have multiple organizations in place to host your private repositories separately from your public repositories.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
  • Bash Uploader script modified and replaced in GCS

Apply least privilege permissions for CI jobs using job-specific credentials

For any credentials a CI job uses, provide a credential for that specific job (i.e., do not reuse a single credential across multiple CI jobs). Only provision each credential with the permissions needed for the CI job to execute successfully: no more, no less. This will shrink the blast radius of a credential compromise.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos

Use encrypted secrets management for safe credential storage

If you absolutely cannot avoid storing credentials in source code, use cryptographic tooling such as AWS KMS and the AWS Encryption SDK to encrypt credentials before storing them in source code. Otherwise, store them in a secrets management solution, such as Vault, AWS Secrets Manager, or GitHub Actions Encrypted Secrets (if you’re using GitHub Actions as your CI service, that is).

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
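
For example, a minimal sketch using the AWS CLI (the key alias and file names are hypothetical):

# Encrypt a credential with a KMS key before it is ever committed;
# decrypting it later requires kms:Decrypt permission on the same key.
aws kms encrypt \
  --key-id alias/ci-secrets \
  --plaintext fileb://api-token.txt \
  --query CiphertextBlob \
  --output text > api-token.enc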

Block plaintext secrets from code commits

Implement pre-commit hooks with tools like git-secrets to detect and block plaintext credentials before they’re committed to your repositories.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
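
A quick sketch of wiring that up with git-secrets (the custom pattern below is only an example, matching the shape of a GitHub personal access token):

# Install the git-secrets hooks into the current repository.
git secrets --install

# Register the built-in AWS credential patterns, plus a custom pattern.
git secrets --register-aws
git secrets --add 'ghp_[A-Za-z0-9]{36}'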

Use automated frequent service account credential rotation

Rotate credentials that are used programmatically (e.g. service account passwords, keys, tokens, etc.) to ensure that they’re made unusable at some point in the future if they’re exposed or obtained by an attacker.

If you’re able to automate credential rotation, rotate them as frequently as hourly. Also, create two “credential rotator” credentials that can both rotate all service account credentials and rotate each other. This ensures that the credential that is used to rotate other credentials is also short lived.

Relevant for the following Codecov compromise attack stages:

  • Creds used to access source code repos
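
As a rough sketch of one rotation cycle using the AWS CLI (the user, secret, and key names are hypothetical):

# Look up the service account's current access key.
old_key_id=$(aws iam list-access-keys --user-name ci-service \
  --query 'AccessKeyMetadata[0].AccessKeyId' --output text)

# Create a replacement key and store it where CI jobs read credentials.
new_key=$(aws iam create-access-key --user-name ci-service)
aws secretsmanager put-secret-value \
  --secret-id ci-service-credentials \
  --secret-string "$new_key"

# Retire the old key so any exfiltrated copy stops working.
aws iam delete-access-key --user-name ci-service --access-key-id "$old_key_id"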

Detection techniques

While we strongly advocate for adopting multiple layers of prevention controls to make it harder for attackers to compromise software supply chains, we also recognize that prevention controls are imperfect by themselves. Having multiple layers of detection controls is essential for catching suspicious or malicious activity that you can’t (or in some cases shouldn’t) have prevention controls for.

Identify Dependencies

You’ll need these in place to create detection rules and investigate suspicious activity:

  1. Process execution logs, including full command line data, or CI job output logs
  2. Network logs (firewall, network flow, etc.), including source and destination IP address
  3. Authentication logs (on-premise and cloud-based applications), including source IP and identity/account name
  4. Activity audit logs (on-premise and cloud-based applications), including source IP and identity/account name
  5. Indicators of compromise (IOCs), including IPs, commands, file hashes, etc.

Ingress from atypical IP addresses or regions

Whether or not you’re able to implement IP address safelisting for accessing certain systems/environments, use an IP address safelist to detect when atypical IP addresses are accessing critical systems that should only be accessed by trusted IPs.

Relevant for the following Codecov compromise attack stages:

  • Bash Uploader script modified and replaced in GCS
  • Creds used to access source code repos

Egress to atypical IP addresses or regions

Whether or not you’re able to implement egress filtering for certain systems/environments, use an IP address safelist to detect when atypical IP addresses are being connected to.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated

Environment variables being passed to network connectivity processes

It’s unusual for a system’s local environment variables to be exported and passed into processes used to communicate over a network (curl, wget, nc, etc.), regardless of the IP address or domain being connected to.

Relevant for the following Codecov compromise attack stages:

  • Environment variables, including creds, exfiltrated
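
A rough hunting sketch over collected process command-line logs (the log path is illustrative, and the patterns echo the style of the published Codecov IOC):

# Flag command lines that feed environment contents to a network client.
grep -E '(curl|wget|nc) .*\$\(env\)' /var/log/process_cmdlines.log
grep -E '(curl|wget|nc) .*\b(printenv|env)\b' /var/log/process_cmdlines.log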

Response techniques

The response techniques outlined below are, in some cases, described in the context of the IOCs that were published by Codecov. Dependencies identified in “Detection techniques” above are also dependencies for response steps outlined below.

Data exfiltration response steps: CI servers

Identify and contain affected systems and data

  1. Search CI systems’ process logs, job output, and job configuration files to identify usage of compromised third-party components (in regex form). This will identify potentially affected CI systems that have been using the third-party component that is in scope, and is useful for getting a full inventory of potentially affected systems and examining any local logs that might not be in your SIEM.

curl (-s )?https://codecov.io/bash

2. Search for known IOC IP addresses in regex form (based on RegExr community pattern)

(.*79\.135\.72\.34|178\.62\.86\.114|104\.248\.94\.23|185\.211\.156\.78|91\.194\.227\.*|5\.189\.73\.*|218\.92\.0\.247|122\.228\.19\.79|106\.107\.253\.89|185\.71\.67\.56|45\.146\.164\.164|118\.24\.150\.193|37\.203\.243\.207|185\.27\.192\.99\.*)

3. Search for known IOC command line pattern(s)

curl -sm 0.5 -d "$(git remote -v)

4. Create forensic image of affected system(s) identified in steps 1 – 3

5. Network quarantine and/or power off affected system(s)

6. Replace affected system(s) with last known good backup, image snapshot, or clean rebuild

7. Analyze forensic image and historical CI system process, job output, and/or network traffic data to identify potentially exposed sensitive data, such as credentials

Search for malicious usage of potentially exposed credentials

  1. Search authentication and activity audit logs for IP address IOCs
  2. Search authentication and activity audit logs for potentially compromised account events originating from IP addresses outside of organization’s known IP addresses
  3. This could potentially uncover new IP address IOCs

Unauthorized access response steps: source code repositories

Clone full historical source code repository content

Note: This content is based on git-based version control systems

Version control systems such as git coincidentally provide forensics-grade information by virtue of tracking all changes over time. In order to be able to fully search all data from a given repository, certain git commands must be run in sequence.

  1. Set git config to get full commit history for all references (branches, tags), including pull requests, and clone repositories that need to be analyzed (*nix shell script)

git config --global remote.origin.fetch '+refs/pull/*:refs/remotes/origin/pull/*'
# Space-delimited list of repos to clone
declare -a repos=("repo1" "repo2" "repo3")
git_url="https://myGitServer.biz/myGitOrg"
# Loop through each repo and clone it locally
for r in "${repos[@]}"; do
    echo "Cloning $git_url/$r"
    git clone "$git_url/$r"
done

2. In the same directory where repositories were cloned from the step above, export full git commit history in text format for each repository. List git committers at top of each file in case they need to be contacted to gather context (*nix shell script)

git fetch --all
# Note: "*(/)" is a zsh-only glob that matches directories
for dir in *(/) ; do
    (rm -f $dir.commits.txt
    cd $dir
    git fetch --all
    echo "******COMMITTERS FOR THIS REPO********" >> ../$dir.commits.txt
    git shortlog -s -n >> ../$dir.commits.txt
    echo "**************************************" >> ../$dir.commits.txt
    git log --all --decorate --oneline --graph -p >> ../$dir.commits.txt
    cd ..)
done

a. Note: the below steps can be done with tools such as Atom, Visual Studio Code, and Sublime Text and extensions/plugins you can install in them.

If performing manual reviews of these commit history text files, create copies of those files and use the regex below to find and replace git’s log graph formatting that prepends each line of text

^(\|\s*)*(\+|-|\\|/\||\*\|*)*\s*

b. Then, sort the text in ascending or descending order and de-duplicate/unique-ify it. This will make it easier to manually parse.
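
For example (file names are illustrative):

sort repo1.commits.cleaned.txt | uniq > repo1.commits.deduped.txt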

Search for binary files and content in repositories

Exporting commit history to text files does not export data from any binary files (e.g. ZIP files, XLSX files, etc.). In order to thoroughly analyze source code repository content, binary files need to be identified and reviewed.

  1. Find binary files in folder containing all cloned git repositories based on file extension (*nix shell script)

find -E . -regex '.*\.(jpg|png|pdf|doc|docx|xls|xlsx|zip|7z|swf|atom|mp4|mkv|exe|ppt|pptx|vsd|rar|tiff|tar|rmd|md)'

2. Find binary files in folder containing all cloned git repositories based on MIME type (*nix shell script)

find . -type f -print0 | xargs -0 file --mime-type | grep -e image/jpg -e image/png -e application/pdf -e application/msword -e application/vnd.openxmlformats-officedocument.wordprocessingml.document -e application/vnd.ms-excel -e application/vnd.openxmlformats-officedocument.spreadsheetml.sheet -e application/zip -e application/x-7z-compressed -e application/x-shockwave-flash -e video/mp4 -e application/vnd.ms-powerpoint -e application/vnd.openxmlformats-officedocument.presentationml.presentation -e application/vnd.visio -e application/vnd.rar -e image/tiff -e application/x-tar

3. Find encoded binary content in commit history text files and other text-based files

grep -ir "url(data:" | cut -d\) -f1
grep -ir "base64" | cut -d\" -f1

Search for plaintext credentials: passwords, API keys, tokens, certificate private keys, etc.

  1. Search commit history text files for known credential patterns using tools such as TruffleHog and GitLeaks
  2. Search the binary file contents identified in “Search for binary files and content in repositories” for credentials

Search logs for malicious access to discovered credentials

  1. Follow the steps from “Data exfiltration response steps: CI servers” (searching for malicious usage of potentially exposed credentials) using logs from systems associated with credentials discovered in “Search for plaintext credentials”

New findings about attacker behavior from Project Sonar

We are fortunate to have a tremendous amount of data at our fingertips thanks to Project Sonar, which conducts internet-wide surveys across more than 70 different services and protocols to gain insights into global exposure to common vulnerabilities. We analyzed data from Project Sonar to see if we could gain any additional context about the IP address IOCs associated with Codecov’s Bash Uploader script compromise. What we found was interesting, to say the least:

  • The threat actor set up the first exfiltration server (178.62.86[.]114) on or about February 1, 2021
  • Historical DNS records for remotly[.]ru and seasonver[.]ru have pointed, and continue to point, to this server
  • The threat actor configured a simple HTTP redirect on the exfiltration server to about.codecov.io to avoid detection
    { "http_code": 301, "http_body": "", "server": "nginx", "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google" }
  • The redirect was removed from the exfiltration server on or before February 22, 2021, presumably after the server owner detected these changes
  • The threat actor set up new infrastructure (104.248.94[.]23) that more closely mirrored Codecov’s GCP setup as their new exfiltration server on or about March 7, 2021
    { "http_code": 301, "http_body": "", "server": "envoy", "alt-svc": "clear", "location": "http://about.codecov.io/", "via": "1.1 google" }
  • The new exfiltration server was last seen on April 1, 2021

We hope the content in this blog will help defenders prevent, detect, and respond to these types of supply chain attacks going forward.