Tag Archives: infrastructure

Industrial Control System Malware Discovered

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/industrial-control-system-malware-discovered.html

The Department of Energy, CISA, the FBI, and the NSA jointly issued an advisory describing a sophisticated piece of malware called Pipedream that’s designed to attack a wide range of industrial control systems. This is clearly from a government, but no attribution is given. There’s also no indication of how the malware was discovered. It seems not to have been used yet.

More information. News article.

White House Warns of Possible Russian Cyberattacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/white-house-warns-of-possible-russian-cyberattacks.html

News:

The White House has issued its starkest warning that Russia may be planning cyberattacks against critical-sector U.S. companies amid the Ukraine invasion.

[…]

Context: The alert comes after Russia has lobbed a series of digital attacks at the Ukrainian government and critical industry sectors. But there’s been no sign so far of major disruptive hacks against U.S. targets even as the government has imposed increasingly harsh sanctions that have battered the Russian economy.

  • The public alert followed classified briefings government officials conducted last week for more than 100 companies in sectors at the highest risk of Russian hacks, Neuberger said. The briefing was prompted by “preparatory activity” by Russian hackers, she said.
  • U.S. analysts have detected scanning of some critical sectors’ computers by Russian government actors and other preparatory work, one U.S. official told my colleague Ellen Nakashima on the condition of anonymity because of the matter’s sensitivity. But whether that is a signal that there will be a cyberattack on a critical system is not clear, Neuberger said.
  • Neuberger declined to name specific industry sectors under threat but said they’re part of critical infrastructure – a government designation that includes industries deemed vital to the economy and national security, including energy, finance, transportation and pipelines.

President Biden’s statement. White House fact sheet. And here’s a video of the extended Q&A with deputy national security adviser Anne Neuberger.

EDITED TO ADD (3/23): Long — three hour — conference call with CISA.

US Critical Infrastructure Companies Will Have to Report When They Are Hacked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/us-critical-infrastructure-companies-will-have-to-report-when-they-are-hacked.html

This will be law soon:

Companies critical to U.S. national interests will now have to report when they’re hacked or they pay ransomware, according to new rules approved by Congress.

[…]

The reporting requirement legislation was approved by the House and the Senate on Thursday and is expected to be signed into law by President Joe Biden soon. It requires any entity that’s considered part of the nation’s critical infrastructure, which includes the finance, transportation and energy sectors, to report any “substantial cyber incident” to the government within three days and any ransomware payment made within 24 hours.

Even better would be if they had to report it to the public.

Cloudflare, CrowdStrike, and Ping Identity launch the Critical Infrastructure Defense Project

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/announcing-critical-infrastructure-defense/

Today, in partnership with CrowdStrike and Ping Identity, Cloudflare is launching the Critical Infrastructure Defense Project (CriticalInfrastructureDefense.org). The Project was born out of conversations with cybersecurity and government experts concerned about potential retaliation to the sanctions that resulted from the Russian invasion of Ukraine.

In particular, there is a fear that critical United States infrastructure will be targeted with cyber attacks. While these attacks may target any industry, the experts we consulted with were particularly concerned about three areas that were often underprepared and could cause significant disruption: hospitals, energy, and water.

To help address that need, Cloudflare, CrowdStrike, and Ping Identity have committed under the Critical Infrastructure Defense Project to offer a broad suite of our products for free for at least the next four months to any United States-based hospital, or energy or water utility. You can learn more at: www.CriticalInfrastructureDefense.org.

We are not powerless against hackers. Organizations that have adopted a Zero Trust approach to security have been successful at mitigating even determined attacks. There are three core components to any Zero Trust security approach: 1) Network Security, 2) Endpoint Security, and 3) Identity.

Cloudflare, CrowdStrike, and Ping Identity are three of the leading Zero Trust security companies securing each of these components. Cloudflare’s Zero Trust network security offers a broad set of services that organizations can easily implement to ensure their connections are protected no matter where users access the network. CrowdStrike provides a broad set of endpoint security services to ensure that laptops, phones, and servers are not compromised. And Ping Identity provides identity solutions, including multi-factor authentication, that are foundational to any organization’s security posture.

Each of us is great at what we do on our own. Together, we provide an integrated solution that is unrivaled and proven to stand up to even the most sophisticated nation state cyber attacks.

And this is what we think is required, because the current threat is significantly higher than what we have seen since any of our companies was founded. We all built our companies relying on the nation’s infrastructure, and we believe it is incumbent on us to provide our technology in order to protect that infrastructure when it is threatened. For this period of heightened risk, we are all providing our services at no cost to organizations in these most vulnerable sectors.

We’ve also worked together to ensure our products function in harmony and are easy to implement. We don’t want short-staffed IT teams, long requisition processes, or limited budgets to stand in the way of getting the protection that’s needed in place immediately. We’ve taken a cue from hospitals to triage the risks through a recommended list showing organizations that may be short of IT staff how they can proceed: suggesting what they should prioritize over the next day, over the next week, and over the next month.

You can download the recommended security triage program here. We know that not every organization will be able to implement every recommendation. But every step you get through on the list will help your organization be incrementally better prepared for whatever is to come.

Our teams are also committed to working directly with organizations in these sectors to make onboarding as quick and painless as possible. We will onboard customers under this project with the same level of service as if they were our largest paying customers. We believe it is our duty to help ensure that the nation’s critical infrastructure remains online and available through this challenging time.

We anticipate that, based on what we learn over the days ahead, the Critical Infrastructure Defense Project may expand to additional sectors and countries. We hope the predictions of retaliatory cyberattacks don’t come true. But, if they do, we know our solutions can mitigate the risk, and we stand ready to fully deploy them to protect our most critical infrastructure.

Ransomware Attacks against Water Treatment Plants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/ransomware-attacks-against-water-treatment-plants.html

According to a report from CISA last week, there were three ransomware attacks against water treatment plants in the past year.

WWS Sector cyber intrusions from 2019 to early 2021 include:

  • In August 2021, malicious cyber actors used Ghost variant ransomware against a California-based WWS facility. The ransomware variant had been in the system for about a month and was discovered when three supervisory control and data acquisition (SCADA) servers displayed a ransomware message.
  • In July 2021, cyber actors used remote access to introduce ZuCaNo ransomware onto a Maine-based WWS facility’s wastewater SCADA computer. The treatment system was run manually until the SCADA computer was restored using local control and more frequent operator rounds.
  • In March 2021, cyber actors used an unknown ransomware variant against a Nevada-based WWS facility. The ransomware affected the victim’s SCADA system and backup systems. The SCADA system provides visibility and monitoring but is not a full industrial control system (ICS).

Suing Infrastructure Companies for Copyright Violations

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/suing-infrastructure-companies-for-copyright-violations.html

It’s a matter of going after those with deep pockets. From Wired:

Cloudflare was sued in November 2018 by Mon Cheri Bridals and Maggie Sottero Designs, two wedding dress manufacturers and sellers that alleged Cloudflare was guilty of contributory copyright infringement because it didn’t terminate services for websites that infringed on the dressmakers’ copyrighted designs….

[Judge] Chhabria noted that the dressmakers have been harmed “by the proliferation of counterfeit retailers that sell knock-off dresses using the plaintiffs’ copyrighted images” and that they have “gone after the infringers in a range of actions, but to no avail — every time a website is successfully shut down, a new one takes its place.” Chhabria continued, “In an effort to more effectively stamp out infringement, the plaintiffs now go after a service common to many of the infringers: Cloudflare. The plaintiffs claim that Cloudflare contributes to the underlying copyright infringement by providing infringers with caching, content delivery, and security services. Because a reasonable jury could not — at least on this record — conclude that Cloudflare materially contributes to the underlying copyright infringement, the plaintiffs’ motion for summary judgment is denied and Cloudflare’s motion for summary judgment is granted.”

I was an expert witness for Cloudflare in this case, basically explaining to the court how the service works.

Netflix Drive

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/netflix-drive-a607538c3055

A file and folder interface for Netflix Cloud Services

Written by Vikram Krishnamurthy, Kishore Kasi, Abhishek Kapatkar, Tejas Chopra, Prudhviraj Karumanchi, Kelsey Francis, Shailesh Birari

In this post, we introduce Netflix Drive, a Cloud drive for media assets, and provide a high-level overview of some of its features and interfaces. We intend this to be the first in a series of posts covering Netflix Drive. In future posts, we will do an architectural deep dive into the components of Netflix Drive.

Netflix, and particularly its Studio applications (and Studio in the Cloud), produces petabytes of data backed by billions of media assets. Artists and workflows may be globally distributed and work on different projects, and each of these projects produces content that forms part of the large corpus of assets.

Here is an example of globally distributed production where several artists and workflows work in conjunction to create and share assets for one or many projects.

Fig 1: Globally distributed production with artists working on different assets from different parts of the world

In some workflows, artists may want to view only a subset of assets from this large dataset, for example those pertaining to a specific project. These artists may also want to create personal workspaces and generate intermediate assets in them. To support such use cases, access control at user-workspace and project-workspace granularity is extremely important for presenting a globally consistent view of pertinent data to these artists.

Netflix Drive aims to solve this problem of exposing different namespaces and attaching appropriate access control to help build a scalable, performant, globally distributed platform for storing and retrieving pertinent assets.

Netflix Drive is envisioned as a Cloud Drive for Studio and Media applications, and lends itself to being a generic paved-path solution for all content at Netflix.

It exposes a file/folder interface for applications to save their data and an API interface for control operations. Netflix Drive relies on a data store that serves as the persistent storage layer for assets, and a metadata store that provides the mapping from the file system hierarchy to the data store entities. The major pieces, as shown in Fig. 2, are the file system interface, the API interface, and the metadata and data stores. We will delve into these in the following sections.

Fig 2: Netflix Drive components

File interface for Netflix Drive

Creative applications such as Nuke, Maya, and Adobe Photoshop store and retrieve content using files and folders. Netflix Drive relies on FUSE (Filesystem in Userspace) to provide a POSIX files and folders interface to such applications. A FUSE-based POSIX interface provides flexibility in feature customization and deployment configuration, as well as a standard and seamless file/folder interface. A similar user-space abstraction is available for Windows (WinFSP) and MacOS (MacFUSE).

The operations that originate from user, application, and system actions on files and folders translate to a well-defined set of function and system calls, which are forwarded by the Linux Virtual File System layer (or a pass-through/filter driver on Windows) to the FUSE layer in user space. The resulting metadata and data operations are implemented by the appropriate metadata and data adapters in Netflix Drive.

Fig 3: POSIX interface of Netflix Drive

The POSIX files and folders interface for Netflix Drive is designed as a layered system, with the FUSE implementation hooks forming the top layer. This layer provides entry points for all of the relevant VFS calls. Below FUSE, Netflix Drive contains an abstraction layer that allows different metadata and data stores to be plugged into the architecture by having their corresponding adapters implement the interface. We will discuss this layered architecture further in the sections below.
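
To make the layering concrete, here is a minimal sketch in Python using the fusepy library. The MetadataAdapter and DataAdapter interfaces, and all field names, are hypothetical stand-ins for the pluggable adapters described above; they are not the actual Netflix Drive implementation.

```python
import errno
import stat
from abc import ABC, abstractmethod

from fuse import FUSE, FuseOSError, Operations  # fusepy: pip install fusepy


class MetadataAdapter(ABC):
    """Maps file system paths to metadata store entities (hypothetical interface)."""

    @abstractmethod
    def lookup(self, path: str) -> dict | None: ...

    @abstractmethod
    def list_children(self, path: str) -> list[str]: ...


class DataAdapter(ABC):
    """Reads the bytes backing a file from the data store (hypothetical interface)."""

    @abstractmethod
    def read(self, data_id: str, offset: int, size: int) -> bytes: ...


class DriveFS(Operations):
    """Top layer: FUSE hooks that delegate to whichever adapters were plugged in."""

    def __init__(self, metadata: MetadataAdapter, data: DataAdapter):
        self.metadata = metadata
        self.data = data

    def getattr(self, path, fh=None):
        node = self.metadata.lookup(path)
        if node is None:
            raise FuseOSError(errno.ENOENT)
        mode = (stat.S_IFDIR | 0o755) if node["is_dir"] else (stat.S_IFREG | 0o644)
        return {"st_mode": mode, "st_size": node.get("size", 0), "st_nlink": 1}

    def readdir(self, path, fh):
        return [".", ".."] + self.metadata.list_children(path)

    def read(self, path, size, offset, fh):
        node = self.metadata.lookup(path)
        return self.data.read(node["data_id"], offset, size)


# Usage, once concrete adapters have been selected via the bootstrap manifest:
#   FUSE(DriveFS(metadata_adapter, data_adapter), "/mnt/nfdrive", foreground=True)
```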

API Interface for Netflix Drive

Along with the file interface, which acts as the hub of all abstractions, Netflix Drive also exposes API and Polled Task interfaces that allow applications and workflow tools to trigger control operations in Netflix Drive.

For example, applications can explicitly use REST endpoints to publish files stored in Netflix Drive to the cloud, and later use a REST endpoint to retrieve a subset of the published files from the cloud. The API interface can also be used to track the transfers of large files and allows other applications to be built on top of Netflix Drive.
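
As an illustration only, here is how an application might drive such a control API from Python. The endpoint paths, port, and payload fields below are assumptions made for the sketch; the actual Netflix Drive REST interface is not public.

```python
import requests

# Assumed local control endpoint and routes; illustrative only.
DRIVE_API = "http://localhost:9000/api/v1"
NAMESPACE = "show_x/artist_workspace"

# Ask the local Netflix Drive instance to publish a file to the cloud.
resp = requests.post(
    f"{DRIVE_API}/publish",
    json={"namespace": NAMESPACE, "path": "shots/sh010/comp_v003.exr"},
    timeout=30,
)
transfer_id = resp.json()["transferId"]

# Track the transfer of the (possibly large) file, then list the published assets.
status = requests.get(f"{DRIVE_API}/transfers/{transfer_id}", timeout=30).json()
published = requests.get(
    f"{DRIVE_API}/files",
    params={"namespace": NAMESPACE, "state": "published"},
    timeout=30,
).json()
print(status, len(published))
```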

Fig 4: Control interface of Netflix Drive

The Polled Task interface allows studio and media workflow orchestrators to post or dispatch tasks to Netflix Drive instances on disparate workstations or containers. This allows Netflix Drive to be bootstrapped with an empty namespace when the workstation comes up and to dynamically project a specific set of assets relevant to the artists’ work sessions or workflow stages. Further, these assets can be projected into a namespace of the artist’s or application’s choosing.

Alternatively, workstations or containers can be launched with the assets of interest prefetched at startup. This allows artists and applications to obtain a workstation that already contains the relevant files, and optionally to add and delete asset trees during the work session. For example, artists perform transformative work on files and use Netflix Drive to store and fetch intermediate results, as well as the final copy, which can be transformed back into a media asset.
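
A rough sketch of the Polled Task pattern, with an assumed orchestrator endpoint and task schema (neither reflects the real interface), might look like this:

```python
import time

import requests

# Assumed orchestrator endpoint and task schema; illustrative only.
ORCHESTRATOR = "https://workflow.example.com/tasks"
INSTANCE_ID = "workstation-1234"


def project_assets(namespace: str, assets: list[str]) -> None:
    """Placeholder for projecting/prefetching the given assets into a namespace."""
    print(f"projecting {len(assets)} assets into {namespace}")


while True:
    tasks = requests.get(ORCHESTRATOR, params={"instance": INSTANCE_ID}, timeout=30).json()
    for task in tasks:
        if task["type"] == "PROJECT_ASSETS":
            project_assets(task["namespace"], task["assets"])
    time.sleep(60)  # poll interval between task fetches
```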

Bootstrapping Netflix Drive

Given the two different modes in which applications can interact with Netflix Drive, let us now discuss how Netflix Drive is bootstrapped.

On startup, Netflix Drive expects a manifest that contains information about the data store, metadata store, and credentials (tied to a user login) to form an instance of namespace hierarchy. A Netflix Drive mount point may contain multiple Netflix Drive namespaces.

A dynamic instance allows Netflix Drive to show a user-selected and user-accessible subset of data from a large corpus of assets. A user instance allows it to act like a Cloud Drive, where users can work on content that is automatically synced to the cloud in the background. On restart on a new machine, the same files and folders will be prefetched from the cloud. We will cover the different namespaces of Netflix Drive in more detail in a subsequent blog post.

Here is an example of a typical bootstrap manifest file.

A sample manifest file, showing how Netflix Drive can work with different metadata stores (such as Redis or CockroachDB) and data stores (such as Ceph or S3) and tie them together to provide the persistence layer for assets.
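
Since the manifest image is not reproduced here, the following sketch reconstructs the kind of information it contains from the description above. The schema and field names are illustrative assumptions, not the real manifest format.

```python
import json

# Hypothetical manifest contents; field names and values are illustrative only.
manifest = {
    "mount": "/mnt/nfdrive",
    "user": "artist@example.com",  # credentials tied to a user login
    "namespaces": [
        {
            "name": "show_x_workspace",
            "metadata_store": {"type": "cockroachdb", "endpoint": "metadata.example.com:26257"},
            "data_store": {"type": "s3", "bucket": "nf-drive-assets", "region": "us-west-2"},
        },
        {
            "name": "scratch",
            "metadata_store": {"type": "redis", "endpoint": "localhost:6379"},
            "data_store": {"type": "ceph", "pool": "drive-scratch"},
        },
    ],
}

print(json.dumps(manifest, indent=2))
```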

The manifest is a persistent artifact that gives a user workstation its Netflix Drive personality. It survives instance failures and allows the same stateful interface to be recreated on any newly deployed instance.

Metadata and Data Store Abstractions

In order to allow a variety of different metadata stores and data stores to be easily plugged into the architecture, Netflix Drive exposes abstract interfaces for both metadata and data stores. Here is a high-level diagram explaining the different layers of abstraction in Netflix Drive.

Fig 5: Layered architecture of Netflix Drive

Metadata Store Characteristics

Each file in Netflix Drive has one or more metadata nodes, each corresponding to a different version of the file. The file system hierarchy is modeled as a tree in the metadata store, where the root node is the top-level folder for the application.

Each metadata node contains several attributes, such as the checksum of the file, the location of the data, user permissions to access the data, and file metadata such as size and modification time. A metadata node may also provide support for extended attributes, which can be used to model ACLs, symbolic links, or other expressive file system constructs.
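
As an illustration, the attributes listed above could be modeled roughly as follows; the field names are assumptions for the sketch, not the actual metadata store schema.

```python
from dataclasses import dataclass, field


@dataclass
class MetadataNode:
    path: str                     # position in the file system tree
    version: int                  # a file may have one node per version
    checksum: str                 # content checksum of this version
    data_location: str            # id of the backing data store object(s)
    owner: str                    # user who may access the data
    permissions: int              # POSIX-style mode bits
    size: int                     # file metadata: size in bytes
    mtime: float                  # file metadata: modification time (epoch seconds)
    xattrs: dict[str, str] = field(default_factory=dict)  # extended attributes (ACLs, symlinks, ...)
```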

The metadata store may also expose the concept of workspaces, where each user or application can have several workspaces and can share workspaces with other users and applications. These are higher-level constructs that are very useful to Studio applications.

Data Store Characteristics

Netflix Drive relies on a data store that allows streaming bytes into files/objects persisted on the storage media. The data store should expose APIs that allow Netflix Drive to perform I/O operations. The transfer mechanism for transport of bytes is a function of the data store.

In its first manifestation, Netflix Drive uses an object store (such as Amazon S3) as the data store. In order to expose file store-like properties, some changes were needed on top of the object store. Each file can be stored as one or more objects. For Studio applications, file sizes may exceed the maximum object size of the cloud storage, so the data store service must be able to store multiple parts of a file as separate objects. It is the responsibility of the data store service to tie these objects to a single file and inform the metadata store of the single unique id for these object parts. The data store service internally implements the chunking of a file into several parts, encryption of the content, and lifecycle management of the data.
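
A minimal sketch of the chunking responsibility described above might look like this, with put_object standing in for an object store write; the chunk size, naming scheme, and helper function are illustrative assumptions.

```python
import uuid

CHUNK_SIZE = 64 * 1024 * 1024  # assumed maximum object size per part


def put_object(key: str, data: bytes) -> None:
    """Placeholder for an object store write (e.g. an S3 PUT)."""
    ...


def store_file(path: str) -> str:
    """Upload a file as one or more objects; return the single id recorded in the metadata store."""
    file_id = uuid.uuid4().hex
    with open(path, "rb") as f:
        part = 0
        while chunk := f.read(CHUNK_SIZE):
            put_object(f"{file_id}/part-{part:05d}", chunk)
            part += 1
    return file_id
```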

Multi-tiered architecture

Netflix Drive allows multiple data stores to be a part of the same installation via its bootstrap manifest.

Fig 6: Multiple data stores of Netflix Drive

Some studio applications such as encoding and transcoding have different I/O characteristics than a typical cloud drive.

Most of the data produced by these applications is ephemeral in nature and is read often initially. The final encoded copy needs to be persisted, and the ephemeral data can be deleted. To serve such applications, Netflix Drive can persist the ephemeral data in storage tiers that are closer to the application, which allows lower read latencies and better economics for read requests, since cloud storage reads incur an egress cost. Finally, once the encoded copy is prepared, it can be persisted by Netflix Drive to a persistent storage tier in the cloud. A data store may also choose to archive some subset of its content in cheaper storage alternatives.
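
As a toy illustration of this tiering policy (the tier names and the policy itself are assumptions, not Netflix Drive’s actual logic):

```python
def choose_tier(is_final_copy: bool) -> str:
    """Pick a storage tier for a write, following the policy sketched above."""
    if is_final_copy:
        return "cloud-persistent"  # durable cloud tier; reads incur egress cost
    return "local-tier"            # closer to the application; cheap, low-latency reads


assert choose_tier(is_final_copy=True) == "cloud-persistent"
assert choose_tier(is_final_copy=False) == "local-tier"
```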

Security

Studio applications require strict adherence to security models in which only users or applications with specific permissions are allowed to access specific assets. Security is one of the cornerstones of the Netflix Drive design. Netflix Drive’s dynamic namespace design allows an artist or workflow to access only a small subset of the assets based on the workspace information and access control, which is one of the benefits of using Netflix Drive in Studio workflows. Netflix Drive encapsulates the authentication and authorization models in its metadata store. These are translated into POSIX ACLs in Netflix Drive. In the future, Netflix Drive can allow more expressive ACLs by leveraging the extended attributes associated with the metadata nodes corresponding to an asset.

Netflix Drive is currently being used by several Studio teams as the paved-path solution for working with assets and is integrated with several media suite applications. As of today, Netflix Drive can be installed on CentOS, MacOS and Windows. In future blog posts, we will cover implementation details, learnings, performance analysis of Netflix Drive, and some of the applications and workflows built on top of Netflix Drive.

If you are passionate about building Storage and Infrastructure solutions for the Netflix Data Platform, we are always looking for talented engineers and managers. Please check out our job listings.



GPS Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/gps-vulnerabilities.html

Really good op-ed in the New York Times about how vulnerable the GPS system is to interference, spoofing, and jamming — and potential alternatives.

The 2018 National Defense Authorization Act included funding for the Departments of Defense, Homeland Security and Transportation to jointly conduct demonstrations of various alternatives to GPS, which were concluded last March. Eleven potential systems were tested, including eLoran, a low-frequency, high-power timing and navigation system transmitted from terrestrial towers at Coast Guard facilities throughout the United States.

“China, Russia, Iran, South Korea and Saudi Arabia all have eLoran systems because they don’t want to be as vulnerable as we are to disruptions of signals from space,” said Dana Goward, the president of the Resilient Navigation and Timing Foundation, a nonprofit that advocates for the implementation of an eLoran backup for GPS.

Also under consideration by federal authorities are timing systems delivered via fiber optic network and satellite systems in a lower orbit than GPS, which therefore have a stronger signal, making them harder to hack. A report on the technologies was submitted to Congress last week.

GPS is a piece of our critical infrastructure that is essential to a lot of the rest of our critical infrastructure. It needs to be more secure.

Attack against Florida Water Treatment Facility

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/attack-against-florida-water-treatment-facility.html

A water treatment plant in Oldsmar, Florida, was attacked last Friday. The attacker took control of one of the systems, and increased the amount of sodium hydroxide — that’s lye — by a factor of 100. This could have been fatal to people living downstream, if an alert operator hadn’t noticed the change and reversed it.

We don’t know who is behind this attack. Despite its similarities to a Russian attack on a Ukrainian power plant in 2015, my bet is that it’s a disgruntled insider: either a current or former employee. It just doesn’t make sense for Russia to be behind this.

ArsTechnica is reporting on the poor cybersecurity at the plant:

The Florida water treatment facility whose computer system experienced a potentially hazardous computer breach last week used an unsupported version of Windows with no firewall and shared the same TeamViewer password among its employees, government officials have reported.

Brian Krebs points out that the fact that we know about this attack is what’s rare:

Spend a few minutes searching Twitter, Reddit or any number of other social media sites and you’ll find countless examples of researchers posting proof of being able to access so-called “human-machine interfaces” — basically web pages designed to interact remotely with various complex systems, such as those that monitor and/or control things like power, water, sewage and manufacturing plants.

And yet, there have been precious few known incidents of malicious hackers abusing this access to disrupt these complex systems. That is, until this past Monday, when Florida county sheriff Bob Gualtieri held a remarkably clear-headed and fact-filled news conference about an attempt to poison the water supply of Oldsmar, a town of around 15,000 not far from Tampa.

Including Hackers in NATO Wargames

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/including-hackers-in-nato-wargames.html

This essay makes the point that actual computer hackers would be a useful addition to NATO wargames:

The international information security community is filled with smart people who are not in a military structure, many of whom would be excited to pose as independent actors in any upcoming wargames. Including them would increase the reality of the game and the skills of the soldiers building and training on these networks. Hackers and cyberwar experts would demonstrate how industrial control systems such as power supply for refrigeration and temperature monitoring in vaccine production facilities are critical infrastructure; they’re easy targets and should be among NATO’s priorities at the moment.

Diversity of thought leads to better solutions. We in the information security community strongly support the involvement of acknowledged nonmilitary experts in the development and testing of future cyberwar scenarios. We are confident that independent experts, many of whom see sharing their skills as public service, would view participation in these cybergames as a challenge and an honor.

More on the Security of the 2020 US Election

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/more-on-the-security-of-the-2020-us-election.html

Last week I signed on to two joint letters about the security of the 2020 election. The first was as one of 59 election security experts, basically saying that while the election seems to have been both secure and accurate (voter suppression notwithstanding), we still need to work to secure our election systems:

We are aware of alarming assertions being made that the 2020 election was “rigged” by exploiting technical vulnerabilities. However, in every case of which we are aware, these claims either have been unsubstantiated or are technically incoherent. To our collective knowledge, no credible evidence has been put forth that supports a conclusion that the 2020 election outcome in any state has been altered through technical compromise.

That said, it is imperative that the US continue working to bolster the security of elections against sophisticated adversaries. At a minimum, all states should employ election security practices and mechanisms recommended by experts to increase assurance in election outcomes, such as post-election risk-limiting audits.

The New York Times wrote about the letter.

The second was a more general call for election security measures in the US:

Obviously elections themselves are partisan. But the machinery of them should not be. And the transparent assessment of potential problems or the assessment of allegations of security failure — even when they could affect the outcome of an election — must be free of partisan pressures. Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election.

These pile on to the November 12 statement from the Cybersecurity and Infrastructure Security Agency (CISA) and the other agencies of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee. While I’m not sure how they have enough comparative data to claim that “the November 3rd election was the most secure in American history,” they are certainly credible in saying that “there is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.”

We have a long way to go to secure our election systems from hacking. Details of what to do are known. Getting rid of touch-screen voting machines is important, but baseless claims of fraud don’t help.