Tag Archives: Supply Chain Security

Unlocking security updates for transitive dependencies with npm

Post Syndicated from Bryan Dragon original https://github.blog/2023-01-19-unlocking-security-updates-for-transitive-dependencies-with-npm/

Dependabot helps developers secure their software with automated security updates: when a security advisory is published that affects a project dependency, Dependabot will try to submit a pull request that updates the vulnerable dependency to a safe version if one is available. Of course, there’s no rule that says a security vulnerability will only affect direct dependencies—dependencies at any level of a project’s dependency graph could become vulnerable.

Until recently, Dependabot did not address vulnerabilities in transitive dependencies, that is, in dependencies sitting one or more levels below a project’s direct dependencies. Developers would encounter an error message in the GitHub UI and would have to manually update the chain of ancestor dependencies leading to the vulnerable dependency to bring it to a safe version.

Screenshot of the warning a user sees when Dependabot cannot update a vulnerable transitive dependency because of conflicting version requirements in the dependencies above it. The message reads, "Dependabot cannot update minimist to a non-vulnerable version."

Internally, this would show up as a failed background job due to an update-not-possible error—and we would see a lot of these errors.

Understanding the challenge

Dependabot offers two strategies for updating dependencies: scheduled version updates and security updates. With version updates, the explicit goal is to keep project dependencies updated to the latest available version, and Dependabot can be configured to widen or increase a version requirement so that it accommodates the latest version. With security updates, Dependabot tries to make the most conservative update that removes the vulnerability while respecting version requirements. In this post we’ll be looking at security updates.

As an example, let’s say we have a repository with security updates enabled that contains an npm project with a single dependency on react-scripts@^4.0.3.

Not all package managers handle version requirements in the same way, so let’s quickly refresh. A version requirement like ^4.0.3 (a “caret range”) in npm permits updates to versions that don’t change the leftmost nonzero element in the MAJOR.MINOR.PATCH semver version number. The version requirement ^4.0.3, then, can be understood as allowing versions greater than or equal to 4.0.3 and less than 5.0.0.
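To make the range arithmetic concrete, here is a minimal illustration (not from the original post) using the semver package, the library npm relies on for version resolution; the version numbers are arbitrary examples.

const semver = require('semver')

semver.satisfies('4.0.3', '^4.0.3')    // true: the lower bound itself is allowed
semver.satisfies('4.9.9', '^4.0.3')    // true: minor and patch updates stay in range
semver.satisfies('5.0.0', '^4.0.3')    // false: 5.0.0 changes the leftmost nonzero element
semver.satisfies('0.10.5', '^0.10.0')  // true: for 0.x versions, only patch-level updates are allowed
semver.satisfies('0.11.0', '^0.10.0')  // false: 0.11.0 changes the leftmost nonzero element (the 10)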

On March 18, 2022, a high-severity security advisory was published for node-forge, a popular npm package that provides tools for writing cryptographic and network-heavy applications. The advisory impacts versions earlier than 1.3.0, the patched version released the day before the advisory was published.

While we don’t have a direct dependency on node-forge, if we zoom in on our project’s dependency tree we can see that we do indirectly depend on a vulnerable version:

react-scripts@^4.0.3             4.0.3
  - webpack-dev-server@3.11.1    3.11.1
    - selfsigned@^1.10.7         1.10.14
      - node-forge@^0.10.0       0.10.0

In order to resolve the vulnerability, we need to bring node-forge from 0.10.0 to 1.3.0, but a sequence of conflicting ancestor dependencies prevents us from doing so:

  • 4.0.3 is the latest version of react-scripts permitted by our project
  • 3.11.1 is the only version of webpack-dev-server permitted by react-scripts@4.0.3
  • 1.10.14 is the latest version of selfsigned permitted by webpack-dev-server@3.11.1
  • 0.10.0 is the latest version of node-forge permitted by selfsigned@1.10.14

This is the point at which the security update would fail with an update-not-possible error. The challenge is in finding the version of selfsigned that permits node-forge@1.3.0, the version of webpack-dev-server that permits that version of selfsigned, and so on up the chain of ancestor dependencies until we reach react-scripts.

How we chose npm

When we set out to reduce the rate of update-not-possible errors, the first thing we did was pull data from our data warehouse in order to identify the greatest opportunities for impact.

JavaScript is the most popular ecosystem that Dependabot supports, both by Dependabot enablement and by update volume. In fact, more than 80% of the security updates that Dependabot performs are for npm and Yarn projects. Given their popularity, improving security update outcomes for JavaScript projects promised the greatest potential for impact, so we focused our investigation there.

npm and Yarn both include an operation that audits a project’s dependencies for known security vulnerabilities, but currently only npm can natively go a step further and make the updates needed to resolve the vulnerabilities it finds.

After a successful engineering spike to assess the feasibility of integrating with npm’s audit functionality, we set about productionizing the approach.

Tapping into npm audit

When you run the npm audit command, npm collects your project’s dependencies, makes a bulk request to the configured npm registry for all security advisories affecting them, and then prepares an audit report. The report lists each vulnerable dependency, the dependency that requires it, the advisories affecting it, and whether a fix is possible—in other words, almost everything Dependabot should need to resolve a vulnerable transitive dependency.

node-forge  <=1.2.1
Severity: high
Open Redirect in node-forge - https://github.com/advisories/GHSA-8fr3-hfg3-gpgp
Prototype Pollution in node-forge debug API. - https://github.com/advisories/GHSA-5rrq-pxf6-6jx5
Improper Verification of Cryptographic Signature in node-forge - https://github.com/advisories/GHSA-cfm4-qjh2-4765
URL parsing in node-forge could lead to undesired behavior. - https://github.com/advisories/GHSA-gf8q-jrpm-jvxq
fix available via `npm audit fix --force`
Will install react-scripts@5.0.1, which is a breaking change
node_modules/node-forge
  selfsigned  1.1.1 - 1.10.14
  Depends on vulnerable versions of node-forge
  node_modules/selfsigned

There were two ways in which we had to supplement npm audit to meet our requirements:

  1. The audit report doesn’t include the chain of dependencies linking a vulnerable transitive dependency, which a developer may not recognize, to a direct dependency, which a developer should recognize. The last step in a security update job is creating a pull request that removes the vulnerability and we wanted to include some context that lets developers know how changes relate to their project’s direct dependencies.
  2. Dependabot performs security updates for one vulnerable dependency at a time. (Updating one dependency at a time keeps diffs to a minimum and reduces the likelihood of introducing breaking changes.) npm audit and npm audit fix, however, operate on all project dependencies, which means Dependabot wouldn’t be able to tell which of the resulting updates were necessary for the dependency it’s concerned with.

Fortunately, there’s a JavaScript API for accessing the audit functionality underlying the npm audit and npm audit fix commands via Arborist, the component npm uses to manage dependency trees. Since Dependabot is a Ruby application, we wrote a helper script that uses the Arborist.audit() API and can be invoked in a subprocess from Ruby. The script takes as input a vulnerable dependency and a list of security advisories affecting it and returns as output the updates necessary to remove the vulnerabilities as reported by npm.

To meet our first requirement, the script uses the audit results from Arborist.audit() to perform a depth-first traversal of the project’s dependency tree, starting with direct dependencies. This top-down, recursive approach allows us to maintain the chain of dependencies linking the vulnerable dependency to its top-level ancestor(s) (which we’ll want to mention later when creating a pull request), and its worst-case time complexity is linear in the total number of dependencies.

function buildDependencyChains(auditReport, name) {
  const helper = (node, chain, visited) => {
    if (!node) {
      return []
    }
    if (visited.has(node.name)) {
      // We've already seen this node; end path.
      return []
    }
    if (auditReport.has(node.name)) {
      const vuln = auditReport.get(node.name)
      if (vuln.isVulnerable(node)) {
        return [{ fixAvailable: vuln.fixAvailable, nodes: [node, ...chain.nodes] }]
      } else if (node.name == name) {
        // This is a non-vulnerable version of the advisory dependency; end path.
        return []
      }
    }
    if (!node.edgesOut.size) {
      // This is a leaf node that is unaffected by the vuln; end path.
      return []
    }
    return [...node.edgesOut.values()].reduce((chains, { to }) => {
      // Only prepend current node to chain/visited if it's not the project root.
      const newChain = node.isProjectRoot ? chain : { nodes: [node, ...chain.nodes] }
      const newVisited = node.isProjectRoot ? visited : new Set([node.name, ...visited])
      return chains.concat(helper(to, newChain, newVisited))
    }, [])
  }
  return helper(auditReport.tree, { nodes: [] }, new Set())
}

To meet our second requirement of operating on one vulnerable dependency at a time, the script takes advantage of the fact that the Arborist constructor accepts a custom audit registry URL to be used when requesting bulk advisory data. We initialize a mock audit registry server using nock that returns only the list of advisories (in the expected format) for the dependency that was passed into the script and we tell the Arborist instance to use it.

const arb = new Arborist({
  auditRegistry: 'http://localhost:9999',
  // ...
})

const scope = nock('http://localhost:9999')
  .persist()
  .post('/-/npm/v1/security/advisories/bulk')
  .reply(200, convertAdvisoriesToRegistryBulkFormat(advisories))
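Putting the pieces together, a condensed sketch of how the helper script might be wired up looks roughly like this. The function name, input shape, and JSON output are illustrative assumptions rather than Dependabot’s actual interface; buildDependencyChains and convertAdvisoriesToRegistryBulkFormat refer to the helpers shown above.

const Arborist = require('@npmcli/arborist')
const nock = require('nock')

async function auditSingleDependency({ projectPath, name, advisories }) {
  // Serve only the advisories for the dependency we care about.
  nock('http://localhost:9999')
    .persist()
    .post('/-/npm/v1/security/advisories/bulk')
    .reply(200, convertAdvisoriesToRegistryBulkFormat(advisories))

  const arb = new Arborist({
    path: projectPath,
    auditRegistry: 'http://localhost:9999',
  })

  // Audit the dependency tree; no fixes are applied at this point.
  const auditReport = await arb.audit()

  // Link the vulnerable dependency back to its top-level ancestor(s).
  const chains = buildDependencyChains(auditReport, name)

  // Emit JSON on stdout for the Ruby side to parse.
  process.stdout.write(JSON.stringify({ name, chains }))
}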

We see both of these use cases—linking a vulnerable dependency to its top-level ancestor and conducting an audit for a single package or a particular set of vulnerabilities—as opportunities to extend Arborist and we’re working on integrating them upstream.

Back in the Ruby code, we parse and verify the audit results emitted by the helper script, accounting for scenarios such as a dependency being downgraded or removed in order to fix a vulnerability, and we incorporate the updates recommended by npm into the remainder of the security update job.

With a viable update path in hand, Dependabot is able to make the necessary updates to remove the vulnerability and submit a pull request that tells the developer about the transitive dependency and its top-level ancestor.

Screenshot of an open pull request that tells the developer about the transitive dependency and its top-level ancestor. The pull request is titled "Bump node-forge and react-scripts" and has a message from Dependabot that reads, "Merging this pull request will resolve 6 Dependabot alerts on node-forge including a high severity alert."

Caveats

When npm audit decides that a vulnerability can only be fixed by changing major versions, it requires use of the force option with npm audit fix. When the force option is used, npm will update to the latest version of a package, even if it means jumping several major versions. This breaks with Dependabot’s previous security update behavior. It also achieves our goal: to unlock conflicting dependencies in order to bring the vulnerable dependency to an unaffected version. Of course, you should still always review the changelog for breaking changes when jumping minor or major versions of a package.

Impact

We rolled out support for transitive security updates with npm in September 2022. Now, having a full quarter of data with the changes in place, we’re able to measure the impact: between Q1 2022 and Q4 2022 we saw a 42% reduction in update-not-possible errors for security updates on JavaScript projects. 🎉

If you have Dependabot security updates enabled on your npm projects, there’s nothing extra for you to do—you’re already benefiting from this improvement.

Looking ahead

I hope this post illustrates some of the considerations and trade-offs that are necessary when making improvements to an established system like Dependabot. We prefer to leverage the native functionality provided by package managers whenever possible, but as package managers come in all shapes and sizes, the approach may vary substantially from one ecosystem to the next.

We hope other package managers will introduce functionality similar to npm audit and npm audit fix that Dependabot can integrate with and we look forward to extending support for transitive security updates to those ecosystems as they do.

One developer’s journey bringing Dependabot to GitHub Enterprise Server

Post Syndicated from Landon Grindheim original https://github.blog/2022-06-07-one-developers-journey-bringing-dependabot-to-github-enterprise-server/

If you’re like me, you’re still excited by last week’s news that Dependabot is generally available on GitHub Enterprise Server (GHES). Developers using GHES can now let Dependabot secure their dependencies and keep them up-to-date. You know who would have loved that? Me at my last job.

Before joining GitHub, I spent five years working on teams that relied on GHES to host our code. As a GHES user, I really, really wanted Dependabot. Here’s why.

🤕 Dependencies

One constant pain point for my previous teams was staying on top of dependencies. Creating a Rails project with rails new results in an app with 74 dependencies, Django apps start with 88 dependencies, and a project initialized with Create React App will have 1,432 dependencies!

Unfortunately, security vulnerabilities happen, and they can expose your customers to existential risk, so it’s important they are handled as soon as they’re published.

As I’m most familiar with the Ruby ecosystem, I’ll use Nokogiri, a gem for parsing XML and HTML, to illustrate the process of manually resolving a vulnerability. Nokogiri has been a dependency of every Rails app I’ve maintained. It’s also seen seven vulnerabilities since 2019. To fix these manually, we’ve had to:

  • Clone `my_rails_app`
  • Track down and parse the Nokogiri release notes
  • Patch Nokogiri in `my_rails_app` to a non-vulnerable version
  • Push the changes and open a pull request
  • Wait for CI to pass
  • Get the necessary reviews
  • Deploy, observe, and merge

This is just one of (at least) 74 dependencies in one Rails app. My team maintained 14 Rails apps in our microservices-based architecture, so we needed to repeat the process for each app. A single vulnerability would eat up days of engineering time. That’s just one dependency in one ecosystem. We also worked on apps written in Elixir, Python, JavaScript, and PHP.

If an engineer was patching vulnerabilities, they couldn’t pursue feature work, the thing our customers could actually see. This would, understandably, lead to conversations about which vulnerabilities were most likely to be exploited and which we could tolerate for now.

If we had Dependabot security updates, that process would have started with a pull request. What took an engineer days to complete on their own could have been done before lunch.

We could have invested in keeping all of our dependencies up-to-date. Incremental upgrades are typically easier to perform and pose less risk. They also give bad actors less time to find and exploit vulnerabilities. One of my previous teams was still running Rails 3.2, which was no longer maintained when Rails 6 was released six years later. As support phased out, we had to apply our own security patches to our codebase instead of getting them from the framework. This made upgrading even harder. We spent years trying to get to a supported version, but other product priorities always won out.

If my team had Dependabot version updates, Dependabot would have opened pull requests each time a new version of Rails was released. We’d still need to make changes to ensure our apps were compliant with the new versions, but the changes would be made incrementally, making the lift much lighter. But we didn’t have Dependabot. We had to upgrade manually, and that meant upgrading didn’t happen until it became a P0.

A new home

I joined GitHub in 2021 to work on Dependabot. Being intimately familiar with the challenges Dependabot could help address, I wanted to be part of the solution. Little did I know, the team was just starting the process of bringing Dependabot to GHES. Call it serendipity, a dream come true, or tea leaves arranged just so.

I quickly realized why Dependabot wasn’t already on GHES. GitHub acquired Dependabot in 2019, and it took some time to scale Dependabot to be able to secure GitHub’s millions of repositories. To achieve this, we ported the service’s backend to run on Moda, GitHub’s internal Kubernetes-based platform. The dependency update jobs that result in pull requests were updated to run on lightweight Firecracker VMs, allowing Dependabot to create millions of pull requests in just hours. It was an impressive effort by a small team.

That effort, however, didn’t lend itself to the architecture of GHES, where everything runs on a single server with limited resources. An auto-scaling backend and network of VMs wasn’t an option. Instead, we needed to port Dependabot’s backend to run on Nomad, the container orchestration option on GHES. The jobs running on Firecracker VMs needed to run on our customers’ hardware. Fortunately, organizations can self-host GitHub Actions runners in GHES, so we adapted the update jobs to run on GitHub Actions. We also had to adjust our development processes to support continuous delivery in the cloud and less frequent GHES releases.

The result is that developers relying on GHES now have the option to have their dependencies updated for them. Now, my former teammates can update their dependencies by:

  • Viewing the already opened pull request
  • Reviewing the pull request and the included release notes
  • Deploying, observing, and merging

We’re really proud of that. As for me, I get the immense satisfaction of knowing that I built something that will directly benefit my former teammates. It doesn’t get much better than that!

Guess what? GitHub is hiring. What would you like to make better?

If you’re inspired to work at GitHub, we’d love for you to join us. Check out our Careers page to see all of our current job openings.

  • Dedicated remote-first company with flexible hours
  • Building great products used by tens of millions of people and companies around the world
  • Committed to nurturing a diverse and inclusive workplace
  • And so much more!

What’s Changed for Cybersecurity in Banking and Finance: New Study

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/05/10/whats-changed-for-cybersecurity-in-banking-and-finance-new-study/

Cybersecurity in financial services is a complex picture. Not only has a range of new tech hit the industry in the last 5 years, but compliance requirements introduce another layer of difficulty to the lives of infosec teams in this sector. To add to this picture, the overall cybersecurity landscape has rapidly transformed, with ransomware attacks picking up speed and high-profile vulnerabilities hitting the headlines at an alarming pace.

VMware recently released the 5th annual installment of their Modern Bank Heists report, and the results show a changing landscape for cybersecurity in banking and finance. Here’s a closer look at what CISOs and security leaders in finance said about the security challenges they’re facing — and what they’re doing to solve them.

Destructive threats and ransomware attacks on banks are increasing

The stakes for cybersecurity are higher than ever at financial institutions, as threat actors are increasingly using more vicious tactics. Banks have seen an uptick in destructive cyberattacks — those that delete data, damage hard drives, disrupt network connections, or otherwise leave a trail of digital wreckage in their wake.

63% of financial institutions surveyed in the VMware report said they’ve seen an increase in these destructive attacks targeting their organization — that’s 17% more than said the same in last year’s version of the report.

At the same time, finance hasn’t been spared from the rise in ransomware attacks, which have also become increasingly disruptive. Nearly 3 out of 4 respondents to the survey said they’d been hit by at least one ransomware attack. What’s more, 63% of those ended up paying the ransom.

Supply chain security: No fun in the sun

Like ransomware, island hopping is also on the rise — and while that might sound like something to do on a beach vacation, that’s likely the last thing the phrase brings to mind for security pros at today’s financial institutions.

IT Pro describes island hopping attacks as “the process of undermining a company’s cyber defenses by going after its vulnerable partner network, rather than launching a direct attack.” The source points to the high-profile data breach that rocked big-box retailer Target in 2013. Hackers found an entry point to the company’s data not through its own servers, but those of Fazio Mechanical Services, a third-party vendor.

In the years since the Target breach, supply chain cybersecurity has become an even greater area of focus for security pros across industries, thanks to incidents like the SolarWinds breach and large-scale vulnerabilities like Log4Shell that reveal just how many interdependencies are out there. Now, threats in the software supply chain are becoming more apparent by the day.

VMware’s study found that 60% of security leaders in finance have seen an increase in island hopping attacks — 58% more than said the same last year. The uptick in threats originating from partners’ systems is clearly keeping security officers up at night: 87% said they’re concerned about the security posture of the service providers they rely on.

The proliferation of mobile and web applications associated with the rise of financial technology (fintech) may be exacerbating the problem. VMware notes API attacks are one of the primary methods of island hopping — and they found a whopping 94% of financial-industry security leaders have experienced an API attack through a fintech application, while 58% said they’ve seen an increase in application security incidents overall.

How financial institutions are improving cybersecurity

With attacks growing more dangerous and more frequent, security leaders in finance are doubling down on their efforts to protect their organizations. The majority of companies surveyed in VMware’s study said they planned a 20% to 30% boost to their cybersecurity budget in 2022. But what types of solutions are they investing in with that added cash?

The number 1 security investment for CISOs this year is extended detection and response (XDR), with 24% listing this as their top priority. Closely following were workload security at 22%, mobile security at 21%, threat intelligence at 15%, and managed detection and response (MDR) at 11%. In addition, 51% said they’re investing in threat hunting to help them stay ahead of the attackers.

Today’s threat landscape has grown difficult to navigate — especially when financial institutions are competing for candidates in a tight cybersecurity talent market. In the meantime, the financial industry has only grown more competitive, and the pace of innovation is at an all-time high. Having powerful, flexible tools that can streamline and automate security processes is essential to keep up with change. For banks and finance organizations to attain the level of visibility they need to innovate while keeping their systems protected, these tools are crucial.

How to Strategically Scale Vendor Management and Supply Chain Security

Post Syndicated from AJ Debole original https://blog.rapid7.com/2022/04/26/how-to-strategically-scale-vendor-management-and-supply-chain-security/

This post is co-authored by Collin Huber

Recent security events — particularly the threat actor activity from the Lapsus$ group, Spring4Shell, and various new supply-chain attacks — have the security community on high alert. Security professionals and network defenders around the world are wondering what we can do to make the organizations we serve less likely to be featured in an article as the most recently compromised company.

In this post, we’ll articulate some simple changes we can all make in the near future to provide more impactful security guidance and controls to decrease risk in our environments.

Maintain good cyber hygiene

Here are some basic steps that organizations can take to ensure their security posture is in good health and risks are at a manageable level.

1.  Review privileged user activity for anomalies

Take this opportunity to review logs of privileged user activity. Additionally, review instances of changed passwords, as well as any other unexpected activity. Interview the end user to help determine the authenticity of the change. Take into consideration the types of endpoints used across your network, as well as expected actions or any changes to privileges (e.g. privilege escalation).

2. Enforce use of multifactor authentication

Has multifactor authentication (MFA) deployment stalled at your firm? This is an excellent opportunity to revisit deployment of these initiatives. Use of MFA reduces the potential for compromise in a significant number of instances. There are several options for deployment of MFA. Hardware-based MFA methods, such as FIDO tokens, are typically the strongest, and numerous options offer user-friendly ways to use MFA — for example, from a smartphone. Ensure that employees and third parties are trained not to accept unexpected prompts to approve a connection.

3. Understand vendor risks

Does your acquisition process consider the security posture of the vendor in question? Based on the use case for the vendor and the business need, consider the security controls you require to maintain the integrity of your environment. Additionally, review available security reports to identify security controls to investigate further. If a security incident has occurred, consider the mitigating controls that were missing for that vendor. Depending on the response of that vendor and their ability to implement those security controls, determine if this should influence purchase decisions or contract renewal.      

4. Review monitoring and alerts

Review system logs for other critical systems, including those with high volumes of data. Consider reviewing systems that may not store, process, or transmit sensitive data but could have considerable vulnerabilities. Depending on the characteristics of these systems and their mitigating controls, it may be appropriate to prioritize patching, implement additional mitigating controls, and even consider additional alerting.

Always act as soon as you can. It’s better to activate incident response (IR) plans and stand down later than to wait too long to respond.

Build a more secure supply chain

Risks are inherent in the software supply chain, but there are some strategies that can help you ensure your vendors are as secure as possible. Here are three key concepts to consider implementing.

1. Enumerate edge connection points between internal and vendor environments

Every organization has ingress and egress points with various external applications and service providers. When new services or vendors are procured, access control lists (ACLs) are updated to accommodate the new data streams — which presents an opportunity to record simple commands for shutting those streams down in the event of a vendor compromise.

Early stages of an incident are often daunting, frustrating, and confusing for all parties involved. Empowering information security (IS) and information technology (IT) teams to have these commands ahead of time decreases the guesswork that needs to be done to create them when an event occurs. This frees up resources to perform other critical elements of your IR plan as appropriate.

One of the most critical elements of incident response is containment. Many vendors will immediately disable external connections when an attack is discovered, but relying on an external party to act in the best interest of your organization is a challenging position for any security professional. If your organization has a list of external connections open to the impacted vendor, preparing templates or files with ready-to-paste commands for severing those connections is an easy step in the planning phase of incident response. These commands can be approved for dispatch by senior leadership and immediately put in place to ensure whatever nefarious behavior is occurring on the vendor’s network cannot pass into your environment.

An additional benefit of enumerating and memorializing these commands is that teams can practice or review them during annual updates of the IR plan or during tabletop exercises. If your organization does not have this information prepared right now, you have a great opportunity to collaborate with your IS and IT teams to improve your preparedness for a vendor compromise.

Vendor compromises can result in service outages which may have an operational impact on your organization. When your organization is considering ways to mitigate potential risks associated with outages and other supply chain issues, review your business continuity plan to ensure it has the appropriate coverage and provides right-sized guidance for resiliency. It may not make business sense to have alternatives for every system or process, so memorialize accepted risks in a Plan of Action and Milestones (POAM) and/or your Risk Register to record your rationale and demonstrate due diligence.

2. Maintain a vendor inventory with key POCs and SLAs

Having a centralized repository of vendors with key points of contact (POCs) for the account and service-level agreements (SLAs) relevant to the business relationship is an invaluable asset in the event of a breach or attack. The repository enables rapid communication with the appropriate parties at the vendor to open and maintain a clear line of communication, so you can share updates and get critical questions answered in a timely fashion. Having SLAs related to system downtime and system support is also instrumental to ensure the vendor is furnishing the agreed-upon services as promised.

3. Prepare templates to communicate to customers and other appropriate parties

Finally, set up templates for communications about what your team is doing to protect the environment and answer any high-level questions in the event of a security incident. For these documents, it is best to work with legal departments and senior leadership to ensure the amount of information provided and the manner in which it is disclosed is appropriate.

  • Internal communication: Have a formatted memo ready to address the key elements of what is occurring and keep staff apprised of the situation. You may want to include remarks indicating that an investigation is underway, that your internal environment is being monitored, relevant impacts staff may see, who to contact if external parties have questions, and a reminder of how to report unusual device behavior to your help desk or security team.
  • External communication: A statement for the press regarding the investigation or severity of the breach, as appropriate.
  • Regulatory notices: Work with legal teams to templatize regulatory notifications to ensure the right data is easily provided by technical teams to be shared in an easy-to-update format.

Complex software supply chains introduce a wide range of vulnerabilities into our environments – but with these strategic steps in place, you can limit the impacts of security incidents and keep risk to a minimum in your third-party vendor relationships.

InsightCloudSec Supports the Recently Updated NSA/CISA Kubernetes Hardening Guide

Post Syndicated from Alon Berger original https://blog.rapid7.com/2022/04/14/insightcloudsec-supports-the-recently-updated-nsa-cisa-kubernetes-hardening-guide/

The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) recently updated their Kubernetes Hardening Guide, which was originally published in August 2021.

With the help and feedback received from numerous partners in the cybersecurity community, this guide outlines a strong line of action towards minimizing the chances of potential threats and vulnerabilities within Kubernetes deployments, while adhering to strict compliance requirements and recommendations.

The purpose of the Kubernetes hardening guide

This newly updated guide comes to the aid of multiple teams — including security, DevOps, system administrators, and developers — by focusing on the security challenges associated with setting up, monitoring, and maintaining a Kubernetes cluster. It brings together strategies to help organizations avoid misconfigurations and implement recommended hardening measures by highlighting three main sources of compromise:

  • Supply chain risks: These often occur during the container build cycle or infrastructure acquisition and are more challenging to mitigate.
  • Malicious threat actors: Attackers can exploit vulnerabilities and misconfigurations in components of the Kubernetes architecture, such as the control plane, worker nodes, or containerized applications.
  • Insider threats: These can be administrators, users, or cloud service providers, any of whom may have special access to the organization’s Kubernetes infrastructure.

“This guide focuses on security challenges and suggests hardening strategies for administrators of National Security Systems and critical infrastructure. Although this guide is tailored to National Security Systems and critical infrastructure organizations, NSA and CISA also encourage administrators of federal and state, local, tribal, and territorial (SLTT) government networks to implement the recommendations in this guide,” the authors state.

CIS Benchmarks vs. the Kubernetes Hardening Guide

For many practitioners, the Center for Internet Security (CIS) is the gold standard for security benchmarks; however, their benchmarks are not the only guidance available.

While the CIS is compliance gold, the CIS Benchmarks are very prescriptive and usually offer minimal explanations. In creating their own Kubernetes hardening guidelines, it appears that the NSA and CISA felt there was a need for a higher-level security resource that explained more of the challenges and rationale behind Kubernetes security. In this respect, the two work as perfect complements — you get strategies and rationale with the Kubernetes Hardening Guide and the extremely detailed prescriptive checks and controls enumerated by CIS.

In other words, CIS Benchmarks offer the exact checks you should use, along with recommended settings. The NSA and CISA guide supplements these by explaining challenges and recommendations, why they matter, and detailing how potential attackers look at the attack. In version 1.1, the updates include the latest hardening recommendations necessary to protect and defend against today’s threat actors.

Breaking down the updated guidance

As mentioned, the guide breaks down the Kubernetes threat model into three main sources: supply chain, malicious threat actors, and insider threats. This model reviews threats within the Kubernetes cluster and beyond its boundaries by including underlying infrastructure and surrounding workloads that Kubernetes does not manage.

Via a new compliance pack, InsightCloudSec supports and covers the main sources of compromise for a Kubernetes cluster, as mentioned in the guide. Below are the high-level points of concern, and additional examples of checks and insights, as provided by the InsightCloud Platform:

  • Supply chain: This is where attack vectors are more diverse and hard to tackle. An attacker might manipulate certain elements, services, and other product components. It is crucial to continuously monitor the entire container life cycle, from build to runtime. InsightCloudSec provides security checks to cover the supply chain level, including:

    • Checking that containers are retrieved from known and trusted registries/repositories
    • Checking for container runtime vulnerabilities
  • Kubernetes Pod security: Kubernetes Pods are often used as the attacker’s initial execution point. It is essential to have a strict security policy, in order to prevent or limit the impact of a successful compromise. Examples of relevant checks available in InsightCloudSec include:

    • Non-root containers and “rootless” container engines
      • Reject containers that execute as the root user or allow elevation to root.
      • Check the Kubernetes container configuration for a SecurityContext that specifies runAsUser with a non-zero user ID, or runAsNonRoot: true.
      • Deny container features frequently exploited to break out, such as hostPID, hostIPC, hostNetwork, allowedHostPath.
    • Immutable container file systems
      • Where possible, run containers with immutable file systems.
      • Kubernetes administrators can mount secondary read/write file systems for specific directories where applications require write access.
    • Pod security enforcement
      • Harden applications against exploitation using security services such as SELinux®, AppArmor®, and secure computing mode (seccomp).
    • Protecting Pod service account tokens
      • Disable the secret token from being mounted by using the automountServiceAccountToken: false directive in the Pod’s YAML specification.
  • Network separation and hardening: Monitoring the Kubernetes cluster’s networking is key. It holds the communication among containers, Pods, services, and other external components. These resources are not isolated by default and therefore could lead to lateral movement or privilege escalations if not separated and encrypted properly. InsightCloudSec provides checks to validate that the relevant security policies are in place:

    • Namespaces
      • Use namespaces to separate resources by team, application, or environment. Keep in mind that namespaces alone do not isolate network traffic; Pods and services in different namespaces can still communicate with each other unless additional separation is enforced.
    • Network policies
      • Set up network policies to isolate resources. Pods and services in different namespaces can still communicate with each other unless additional separation is enforced.
    • Resource policies
      • Use resource requirements and limits.
    • Control plane hardening
      • Set up TLS encryption.
      • Configure control plane components to use authenticated, encrypted communications using Transport Layer Security (TLS) certificates.
      • Encrypt etcd at rest, and use a separate TLS certificate for communication.
      • Secure the etcd datastore with authentication and role-based access control (RBAC) policies. Set up TLS certificates to enforce Hypertext Transfer Protocol Secure (HTTPS) communication between the etcd server and API servers. Using a separate certificate authority (CA) for etcd may also be beneficial, as it trusts all certificates issued by the root CA by default.
    • Kubernetes Secrets
      • Place all credentials and sensitive information encrypted in Kubernetes Secrets rather than in configuration files
  • Authentication and authorization: These are probably the primary mechanisms to leverage for restricting access to cluster resources. Several supported configurations, such as RBAC controls, are not enabled by default. InsightCloudSec provides security checks that cover the activity of both users and service accounts, enabling faster detection of any unauthorized behavior:

    • Prohibit the automatic mounting of the service account token by setting automountServiceAccountToken to false.
    • Anonymous requests should be disabled by passing the --anonymous-auth=false option to the API server.
    • Start the API server with the --authorization-mode=RBAC flag. Leaving permissive authorization-mode flags, such as AlwaysAllow, in place allows all authorization requests, effectively disabling authorization and limiting the ability to enforce least privilege for access.
  • Audit logging and threat detection: Kubernetes audit logs are a goldmine for security, capturing attributed activity in the cluster and helping make sure configurations are properly set. The security checks provided by InsightCloudSec ensure that security audit tools are enabled in order to keep track of any suspicious activity:

    • Check that the Kubernetes native audit logging configuration is enabled.
    • Check that seccomp: audit mode is enabled. The seccomp tool is disabled by default but can be used to limit a container’s system call abilities, thereby lowering the kernel’s attack surface. Seccomp can also log what calls are being made by using an audit profile.
  • Upgrading and application security practices: Security is an ongoing process, and it is vital to stay up to date with upgrades, updates, and patches not only in Kubernetes, but also in hypervisors, virtualization software, and other plugins. Furthermore, administrators need to make sure they uninstall old and unused components as well, in order to reduce the attack surface and risk of outdated tools. InsightCloudSec provides the checks required for such scenarios, including:

    • Promptly applying security patches and updates
    • Performing periodic vulnerability scans and penetration tests
    • Uninstalling and deleting unused components from the environment

Stay up to date with InsightCloudSec

Announcements like this catch the attention of the cybersecurity community, who want to take advantage of new functionalities and requirements in order to make sure their business is moving forward safely. However, this can often come with a hint of hesitation, as organizations need to ensure their services and settings are used properly and don’t introduce unintended consequences to their environment.

In order to help our customers to continuously stay aligned with the new guidelines, InsightCloudSec is already geared with a new compliance pack that provides additional coverage and support, based on insights that are introduced in the Kubernetes Hardening Guide.

Want to see InsightCloudSec in action? Check it out today.

GitHub Availability Report: March 2022

Post Syndicated from Jakub Oleksy original https://github.blog/2022-04-06-github-availability-report-march-2022/

In March, we experienced a number of incidents that resulted in significant impact and degraded state of availability to some core GitHub services. This blog post includes a detailed follow-up on a series of incidents that occurred due to degraded database stability, and a distinct incident impacting the Actions service.

Database Stability

Last month, we experienced a number of recurring incidents that impacted the availability of our services. We want to acknowledge the impact this had on our customers, and take this opportunity during our monthly report to provide additional details as a result of further investigations and share what we have learned.

Background

The underlying cause of these issues was resource contention in our mysql1 cluster, which impacted the performance of a large number of our services and features during periods of peak load.

Each of these incidents resulted in a degraded state of availability for write operations on our primary services (including Git, issues, and pull requests). While some read operations were not impacted, any user who performed a write operation that involved our mysql1 cluster was affected, as the database could not handle the load.

After the other services recovered, GitHub Actions queues were saturated. We enabled the queues gradually to catch up in real time, and as a result our status page noted the multi-hour outages. When Actions are delayed, it can also impact CI completion and a host of other functions.

What we learned

These incidents were characterized by a burst in load during peak hours of GitHub traffic. During these bursts, our mysql1 cluster was not able to handle the load generated by traffic on the system and we were forced to fail-over and take other mitigations, as mentioned in the previous post.

Some of these incidents were related to our efforts to improve visibility on the database, but all of them were related to the low amount of headroom we had on our primary database and thus its susceptibility to a few poorly performing queries.

Optimizing for stability

Because of this, even after we mitigated the initial causes of downtime due to poor query performance, we were still running with low headroom and decided to take a proactive approach to managing load by intentionally slowing down services during peak hours. Furthermore, we took a calculated approach to increase capacity on the database by further optimizing queries.

Rather than risk another site outage, we established lower performance alerting thresholds on the database and proactively throttled webhooks and Actions services (the two largest drivers of automated load on the system) as we approached unsafe margins of error on March 14 14:43 UTC. We understood the potential impact to our customers, but decided it would be safer to proactively limit load on the system rather than risk another outage on multiple services.

In the meantime, we implemented a series of optimizations between March 14 and March 28 that drove queries per second on this database down by over 50% and reduced our transaction volume by 70% at peak load times. Through these performance optimizations, we became more confident in our headroom, but given ongoing investigations, we did not want to chance any unwarranted impacts.

Minimizing impact to our users

After the incidents mentioned above, we took steps to make sure we would be in a position, if necessary, to shut down any services driving high peak load. This meant taking maintenance windows for three services starting on March 24. We proactively paused migrations and team synchronization during peak load due to their potential impact.

We also took maintenance windows for GitHub Actions even though we did not actually throttle any actions and no customers were impacted during these windows. We did this in order to proactively notify customers of possible disruption. While it didn’t end up being the case, we knew we would need to throttle GitHub Actions if we saw any significant database degradation during these time windows. While this may have caused uncertainty for some customers, we wanted to prepare them for any potential impact.

Next steps

Immediate changes

In addition to the improvements mentioned above, we have significantly reduced our database performance alerting thresholds so that we are not “running hot” and will be well positioned to take action before customers are impacted.

We have also accelerated work that was already in progress to continue to shard this particular cluster and apply the learnings from this incident to other clusters that already exist outside of mysql1.

Additional technical and organizational initiatives

Due to the nature of this incident, we have also dedicated a team of engineers to study our internal processes and procedures, observability, and change release processes. While we’re still actively revisiting this incident, we feel confident we have mitigated the initial issues and we have the correct alerting and processes in place to ensure this problem is not likely to occur again.

We understand that the Actions service is critical to many of our customers. With new and ongoing investments across architecture and processes, we’ll continue to bring focus specifically to Actions reliability, including more graceful degradations when other GitHub services are experiencing issues, as well as faster recovery times.

March 29 10:26 UTC (lasting 57 minutes)

During an operation to move GitHub Actions and checks data to its own dedicated, sharded database cluster, a misconfiguration on the new database cluster caused the application to encounter errors. Once we reverted our changes, we were able to recover. This incident resulted in the failure or delay of some queued jobs for a period of time. Once mitigation was initiated, jobs that were queued during the incident were run successfully after the issue was resolved.

The Actions and checks data resides in a multi-tenant database cluster. As part of our efforts to improve reliability and scale, we have been working on functionally partitioning the Actions data to its own sharded database cluster. The switch over to the new cluster involves gradually switching over reads and then switching over writes. Immediately after switching the write traffic, we noticed Actions SLOs were breached and initiated a revert back to the old database. After we reverted back to the old database, we saw an immediate improvement in availability.

Upon further investigation, we discovered that update and delete queries were processed correctly on the new cluster, but insert queries were failing because of missing permissions on the new cluster. All changes processed on the new cluster were replicated back to the old cluster before the switch back, ensuring data integrity.

We have paused any attempts for migrations until we fully investigate and apply our learnings. Furthermore, due to the risk associated with these operations, we will no longer be attempting them during peak traffic hours, which occur between 12:00 and 21:00 UTC. From a technical perspective, we’re looking to scrutinize and improve our operational workflows for these database operations. Additionally, we are going to be performing an audit of our configurations and topology across our environment, to ensure we have properly covered them in our testing strategy. As part of these efforts, we uncovered a gap where we need to extend our pre-migration checklist with a step to verify permissions more thoroughly.

In summary

Every month we share an update on GitHub’s availability, including a description of any incidents that may have occurred and an update on how we are evolving our engineering systems and practices in response. Our hope is that by increasing our transparency and sharing what we’ve learned, everyone can gain from our experiences. At GitHub, we take the trust you place in us very seriously, and we hope this is a way for you to help hold us accountable for continuously improving our operational excellence, as well as our product functionality.

To learn more about our efforts to make GitHub more resilient every day, check out the GitHub engineering blog.

Prevent the introduction of known vulnerabilities into your code

Post Syndicated from Courtney Claessens original https://github.blog/2022-04-06-prevent-introduction-known-vulnerabilities-into-your-code/

Understanding your supply chain is critical to maintaining the security of your software. Dependabot already alerts you when vulnerabilities are found in your existing dependencies, but what if you add a new dependency with a vulnerability? With the dependency review action, you can proactively block pull requests that introduce dependencies with known vulnerabilities.

How it works

The GitHub Action automates finding and blocking vulnerabilities that are currently only displayed in the rich diff of a pull request. When you add the dependency review action to your repository, it will scan your pull requests for dependency changes. Then, it will check the GitHub Advisory Database to see if any of the new dependencies have existing vulnerabilities. If they do, the action will raise an error so that you can see which dependency has a vulnerability and implement the fix with the contextual intelligence provided. The action is supported by a new API endpoint that diffs the dependencies between any two revisions.
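As a rough sketch of what querying that endpoint looks like, the dependency graph compare API can also be called directly, for example with Octokit. The owner, repository, and branch names below are placeholders, and the filtering is simplified for illustration.

const { Octokit } = require('@octokit/rest')

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN })

async function findVulnerableAdditions() {
  // Diff the dependency manifests between two revisions (base...head).
  const { data } = await octokit.request(
    'GET /repos/{owner}/{repo}/dependency-graph/compare/{basehead}',
    { owner: 'my-org', repo: 'my-repo', basehead: 'main...my-feature-branch' } // placeholders
  )

  // Keep only newly added dependencies that carry known vulnerabilities.
  return data.filter(
    (change) => change.change_type === 'added' && change.vulnerabilities.length > 0
  )
}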

Demo of dependency review enforcement

The action can be found on GitHub Marketplace and in your repository’s Actions tab under the Security heading. It is available for all public repositories, as well as private repositories with GitHub Advanced Security licensed.

We’re continuously improving the experience

While we’re currently in public beta, we’ll be adding functionality to give you more control over what causes the action to fail, with criteria based on vulnerability severity, license type, or other factors. We’re also improving how failed action runs are surfaced in the UI and increasing flexibility around when it’s executed.

If you have feedback or questions

We’re very keen to hear any and all feedback! Pop into the feedback discussion, and let us know how the new action is working for you, and how you’d like to see it grow.

For more information, visit the action and the documentation.

An Inside Look at CISA’s Supply Chain Task Force

Post Syndicated from Chad Kliewer, MS, CISSP, CCSP original https://blog.rapid7.com/2022/03/14/an-inside-look-at-cisas-supply-chain-task-force/

When one mentions supply chains these days, we tend to think of microchips from China causing delays in automobile manufacturing or toilet paper disappearing from store shelves. Sure, there are some chips in the communications infrastructure, but the cyber supply chain is mostly about virtual things – the ones you can’t actually touch.  

In 2018, the Cybersecurity and Infrastructure Security Agency (CISA) established the Information and Communications Technology (ICT) Supply Chain Risk Management (SCRM) Task Force as a public-private joint effort to build partnerships and enhance ICT supply chain resilience. To date, the Task Force has worked on 7 Executive Orders from the White House that underscore the importance of supply chain resilience in critical infrastructure.

Background

The ICT-SCRM Task Force is made up of members from the following sectors:

  • Information Technology (IT) – Over 40 IT companies, including service providers, hardware, software, and cloud have provided input.
  • Communications – Nearly 25 communications associations and companies are included, with representation from the wireline, wireless, broadband, and broadcast areas.
  • Government – More than 30 government organizations and agencies are represented on the Task Force.

These three sector groups touch nearly every facet of critical infrastructure that businesses and government require. The Task Force is dedicated to identifying threats and developing solutions to enhance resilience by reducing the attack surface of critical infrastructure. This diverse group is poised perfectly to evaluate existing practices and elevate them to new heights by enhancing existing standards and frameworks with up-to-date practical advice.

Working groups

The core of the task force is the working groups. These groups are created and disbanded as needed to address core areas of the cyber supply chain. Some of the working groups have been concentrating on areas like:

  • The legal risks of information sharing
  • Evaluating supply chain threats
  • Identifying criteria for building Qualified Bidder Lists and Qualified Manufacturer Lists
  • The impacts of the COVID-19 pandemic on supply chains
  • Creating a vendor supply chain risk management template

Ongoing efforts

After two years of producing some great resources and rather large reports, the ICT-SCRM Task Force recognized the need to ensure organizations of all sizes can take advantage of the group’s resources, even if they don’t have a dedicated risk management professional at their disposal. This led to the creation of both a Small and Medium Business (SMB) working group, as well as one dedicated to Product Marketing.

The SMB working group chose to review and adapt the Vendor SCRM template for use by small and medium businesses, which shows the template can be a great resource for companies and organizations of all sizes.  

Out of this template, the group described three cyber supply chain scenarios that an SMB (or any size organization, really) could encounter. From that, the group further simplified the process by creating an Excel spreadsheet that SMBs can easily share with their prospective vendors and partners as a tool to evaluate their cybersecurity posture. Most importantly, the document does not promote a checkbox approach to cybersecurity — it allows for partial compliance, with room provided for explanations. It also allows many of the questions to be removed if the prospective partner possesses a SOC 1/SOC 2 certification, thereby eliminating duplication in questions.

What the future holds

At the time of this writing, the Product Marketing and SMB working groups are hard at work making sure everyone, including the smallest businesses, are using the ICT-SCRM Task Force Resources to their fullest potential. Additional workstreams are being developed and will be announced soon, and these will likely include expansion with international partners and additional critical-infrastructure sectors.

For more information, you can visit the CISA ICT-SCRM Task Force website.
