Tag Archives: reports

Nation-State Espionage Campaigns against Middle East Defense Contractors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/nation-state_es.html

Report on espionage attacks using LinkedIn as a vector for malware, with details and screenshots. They talk about “several hints suggesting a possible link” to the Lazarus group (aka North Korea), but that’s by no means definite.

As part of the initial compromise phase, the Operation In(ter)ception attackers had created fake LinkedIn accounts posing as HR representatives of well-known companies in the aerospace and defense industries. In our investigation, we’ve seen profiles impersonating Collins Aerospace (formerly Rockwell Collins) and General Dynamics, both major US corporations in the field.

Detailed report.

New Hacking-for-Hire Company in India

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/new_hacking-for.html

Citizen Lab has a new report on Dark Basin, a large hacking-for-hire company in India.

Key Findings:

  • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.
  • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.

  • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.

  • We link Dark Basin with high confidence to an Indian company, BellTroX InfoTech Services, and related entities.

  • Citizen Lab has notified hundreds of targeted individuals and institutions and, where possible, provided them with assistance in tracking and identifying the campaign. At the request of several targets, Citizen Lab shared information about their targeting with the US Department of Justice (DOJ). We are in the process of notifying additional targets.

BellTroX InfoTech Services has assisted clients in spying on over 10,000 email accounts around the world, including accounts of politicians, investors, journalists and activists.

News article. Boing Boing post.

Theft of CIA’s "Vault 7" Hacking Tools Due to Its Own Lousy Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/theft_of_cias_v.html

The Washington Post is reporting on an internal CIA report about its “Vault 7” security breach:

The breach — allegedly committed by a CIA employee — was discovered a year after it happened, when the information was published by WikiLeaks, in March 2017. The anti-secrecy group dubbed the release “Vault 7,” and U.S. officials have said it was the biggest unauthorized disclosure of classified information in the CIA’s history, causing the agency to shut down some intelligence operations and alerting foreign adversaries to the spy agency’s techniques.

The October 2017 report by the CIA’s WikiLeaks Task Force, several pages of which were missing or redacted, portrays an agency more concerned with bulking up its cyber arsenal than keeping those tools secure. Security procedures were “woefully lax” within the special unit that designed and built the tools, the report said.

Without the WikiLeaks disclosure, the CIA might never have known the tools had been stolen, according to the report. “Had the data been stolen for the benefit of a state adversary and not published, we might still be unaware of the loss,” the task force concluded.

The task force report was provided to The Washington Post by the office of Sen. Ron Wyden (D-Ore.), a member of the Senate Intelligence Committee, who has pressed for stronger cybersecurity in the intelligence community. He obtained the redacted, incomplete copy from the Justice Department.

It’s all still up on WikiLeaks.

The DoD Isn’t Fixing Its Security Problems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/the_dod_isnt_fi.html

The DoD has produced several reports outlining what’s wrong and what needs to be fixed. It’s not fixing the problems:

GAO looked at three DoD-designed initiatives to see whether the Pentagon is following through on its own goals. In a majority of cases, DoD has not completed the cybersecurity training and awareness tasks it set out to. The status of various efforts is simply unknown because no one has tracked their progress. While an assessment of “cybersecurity hygiene” like this doesn’t directly analyze a network’s hardware and software vulnerabilities, it does underscore the need for people who use digital systems to interact with them in secure ways. Especially when those people work on national defense.

[…]

The report focuses on three ongoing DoD cybersecurity hygiene initiatives. The 2015 Cybersecurity Culture and Compliance Initiative outlined 11 education-related goals for 2016; the GAO found that the Pentagon completed only four of them. Similarly, the 2015 Cyber Discipline plan outlined 17 goals related to detecting and eliminating preventable vulnerabilities from DoD’s networks by the end of 2018. GAO found that DoD has met only six of those. Four are still pending, and the status of the seven others is unknown, because no one at DoD has kept track of the progress.

GAO repeatedly identified lack of status updates and accountability as core issues within DoD’s cybersecurity awareness and education efforts. It was unclear in many cases who had completed which training modules. There were even DoD departments lacking information on which users should have their network access revoked for failure to complete trainings.

The report.

Bug Bounty Programs Are Being Used to Buy Silence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/bug_bounty_prog.html

Investigative report on how commercial bug-bounty programs like HackerOne, Bugcrowd, and SynAck are being used to silence researchers:

Used properly, bug bounty platforms connect security researchers with organizations wanting extra scrutiny. In exchange for reporting a security flaw, the researcher receives payment (a bounty) as a thank you for doing the right thing. However, CSO’s investigation shows that the bug bounty platforms have turned bug reporting and disclosure on its head, what multiple expert sources, including HackerOne’s former chief policy officer, Katie Moussouris, call a “perversion.”

[…]

Silence is the commodity the market appears to be demanding, and the bug bounty platforms have pivoted to sell what willing buyers want to pay for.

“Bug bounties are best when transparent and open. The more you try to close them down and place NDAs on them, the less effective they are, the more they become about marketing rather than security,” Robert Graham of Errata Security tells CSO.

Leitschuh, the Zoom bug finder, agrees. “This is part of the problem with the bug bounty platforms as they are right now. They aren’t holding companies to a 90-day disclosure deadline,” he says. “A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence.”

The bug bounty platforms’ NDAs prohibit even mentioning the existence of a private bug bounty. Tweeting something like “Company X has a private bounty program over at Bugcrowd” would be enough to get a hacker kicked off their platform.

The carrot for researcher silence is the money — bounties can range from a few hundred to tens of thousands of dollars — but the stick to enforce silence is “safe harbor,” an organization’s public promise not to sue or criminally prosecute a security researcher attempting to report a bug in good faith.

More on Law Enforcement Backdoor Demands

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/more_on_law_enf.html

The Carnegie Endowment for International Peace and Princeton University’s Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the “going dark” debate. They have released their report: “Moving the Encryption Policy Conversation Forward.”

The main contribution seems to be that attempts to backdoor devices like smartphones shouldn’t also backdoor communications systems:

Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

I don’t believe that backdoor access to encrypted data at rest offers “an achievable balance of risk vs. benefit” either, but I agree that the two aspects should be treated independently.

EDITED TO ADD (9/12): This report does an important job moving the debate forward. It advises that policymakers break the issues into component parts. Instead of talking about restricting all encryption, it separates encrypted data at rest (storage) from encrypted data in motion (communication). It advises that policymakers pick the problems they have some chance of solving, and not demand systems that put everyone in danger. For example: no key escrow, and no use of software updates to break into devices.

Data in motion poses challenges that are not present for data at rest. For example, modern cryptographic protocols for data in motion use a separate “session key” for each message, unrelated to the private/public key pairs used to initiate communication, to preserve the message’s secrecy independent of other messages (consistent with a concept known as “forward secrecy”). While there are potential techniques for recording, escrowing, or otherwise allowing access to these session keys, by their nature, each would break forward secrecy and related concepts and would create a massive target for criminal and foreign intelligence adversaries. Any technical steps to simplify the collection or tracking of session keys, such as linking keys to other keys or storing keys after they are used, would represent a fundamental weakening of all the communications.
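The forward-secrecy property the report describes can be illustrated with a toy hash ratchet (an illustration only, not any specific deployed protocol): each message is encrypted under a fresh session key derived from a ratcheting chain key, and the previous chain key is discarded, so compromising the current state does not reveal past message keys.

```python
import hashlib
import hmac

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key and the next chain key.
    The caller then discards the old chain key, which is what
    gives forward secrecy: old keys cannot be recomputed."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain

# Simulate three messages: each gets its own session key.
chain = hashlib.sha256(b"shared secret from initial handshake").digest()
keys = []
for _ in range(3):
    mk, chain = ratchet(chain)  # old chain key is overwritten (deleted)
    keys.append(mk)

# All session keys differ; knowing only the *current* chain key
# does not let an attacker recompute earlier message keys.
assert len(set(keys)) == 3
```

This is exactly the property an escrowed or linkable session key would destroy: retaining old keys, or deriving them from a long-term key, makes every past message recoverable from a single compromise.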

These are all big steps forward given who signed on to the report. Not just the usual suspects, but also Jim Baker — former general counsel of the FBI — and Chris Inglis, former deputy director of the NSA.

Computers and Video Surveillance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/computers_and_video.html

It used to be that surveillance cameras were passive. Maybe they just recorded, and no one looked at the video unless they needed to. Maybe a bored guard watched a dozen different screens, scanning for something interesting. In either case, the video was only stored for a few days because storage was expensive.

Increasingly, none of that is true. Recent developments in video analytics — fueled by artificial intelligence techniques like machine learning — enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human issues like distraction, fatigue, training, or needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

An ACLU report published Thursday called “The Dawn of Robot Surveillance” says AI-aided video surveillance “won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated ‘video analytics’ technologies threaten to fundamentally change the nature of surveillance.”

Let’s take the technologies one at a time. First: video analytics. Computers are getting better at recognizing what’s going on in a video. Detecting when a person or vehicle enters a forbidden area is easy. Modern systems can alarm when someone is walking in the wrong direction — going in through an exit-only corridor, for example. They can count people or cars. They can detect when luggage is left unattended, or when previously unattended luggage is picked up and removed. They can detect when someone is loitering in an area, is lying down, or is running. Increasingly, they can detect particular actions by people. Amazon’s cashier-less stores rely on video analytics to figure out when someone picks an item off a shelf and doesn’t put it back.
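Rule-based checks like forbidden-zone entry and wrong-direction movement are simple once a tracker emits object positions; the hard computer-vision work happens upstream. A minimal sketch of the rule layer, with invented coordinates and function names (no real vision system involved):

```python
from typing import Iterable, List, Tuple

Point = Tuple[float, float]

def entered_zone(track: Iterable[Point],
                 zone: Tuple[float, float, float, float]) -> bool:
    """True if any tracked position falls inside the forbidden
    rectangle (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = zone
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in track)

def wrong_direction(track: List[Point]) -> bool:
    """Flag movement against the allowed flow. Here the corridor is
    exit-only in the +x direction, so net -x motion is suspicious."""
    dxs = [b[0] - a[0] for a, b in zip(track, track[1:])]
    return sum(dxs) < 0

# A person moving in the -x direction, into a zone spanning x in [0, 5].
track = [(10.0, 5.0), (8.0, 5.0), (6.0, 5.0), (4.0, 5.0)]
print(entered_zone(track, (0.0, 0.0, 5.0, 10.0)))  # True: entered forbidden area
print(wrong_direction(track))                      # True: against exit-only flow
```

The point of the sketch is that once positions are machine-readable, each new rule (loitering, lying down, abandoned luggage) is a few lines of logic, which is why these capabilities accumulate so quickly.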

More than identifying actions, video analytics allow computers to understand what’s going on in a video: They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Those same Amazon in-store cameras can analyze customer sentiment. Other systems can describe what’s happening in a video scene.

Computers can also identify the people in those videos. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we give to Facebook and other social media sites, and the photos governments collect in the process of issuing ID cards and driver's licenses. The technology already exists to automatically identify everyone a camera “sees” in real time. Even without video identification, we can be identified by the unique information continuously broadcast by the smartphones we carry with us everywhere, or by our laptops or Bluetooth-connected devices. Police have been tracking phones for years, and this practice can now be combined with video analytics.

Once a monitoring system identifies people, their data can be combined with other data, either collected or purchased: from cell phone records, GPS surveillance history, purchasing data, and so on. Social media companies like Facebook have spent years learning about our personalities and beliefs by what we post, comment on, and “like.” This is “data inference,” and when combined with video it offers a powerful window into people’s behaviors and motivations.

Camera resolution is also improving. Gigapixel cameras are so good that they can capture individual faces and identify license plates in photos taken miles away. “Wide-area surveillance” cameras can be mounted on airplanes and drones, and can operate continuously. On the ground, cameras can be hidden in street lights and other regular objects. In space, satellite cameras have also dramatically improved.

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention — or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.

This is bad, because ubiquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives — when the surveillance system gets it wrong — will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. A recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to deliberately and intelligently make decisions about it.

Some jurisdictions are starting to notice. Last month, San Francisco became the first city to ban facial recognition technology by police and other government agencies. Similar bans are being considered in Somerville, MA, and Oakland, CA. These are exceptions, and limited to the more liberal areas of the country.

We often believe that technological change is inevitable, and that there’s nothing we can do to stop it — or even to steer it. That’s simply not true. We’re led to believe this because we don’t often see it, understand it, or have a say in how or when it is deployed. The problem is that technologies of cameras, resolution, machine learning, and artificial intelligence are complex and specialized.

Laws like what was just passed in San Francisco won’t stop the development of these technologies, but they’re not intended to. They’re intended as pauses, so our policy making can catch up with technology. As a general rule, the US government tends to ignore technologies as they’re being developed and deployed, so as not to stifle innovation. But as the rate of technological change increases, so do the unanticipated effects on our lives. Just as we’ve been surprised by the threats to democracy caused by surveillance capitalism, AI-enabled video surveillance will have similar surprising effects. Maybe a pause in our headlong deployment of these technologies will allow us the time to discuss what kind of society we want to live in, and then enact rules to bring that kind of society about.

This essay previously appeared on Vice Motherboard.

Video Surveillance by Computer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/video_surveilla.html

The ACLU’s Jay Stanley has just published a fantastic report: “The Dawn of Robot Surveillance” (blog post here). Basically, it lays out a future of ubiquitous video cameras watched by increasingly sophisticated video analytics software, and discusses the potential harms to society.

I’m not going to excerpt a piece, because you really need to read the whole thing.

The Human Cost of Cyberattacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/the_human_cost_.html

The International Committee of the Red Cross has just published a report: “The Potential Human Cost of Cyber-Operations.” It’s the result of an “ICRC Expert Meeting” from last year, but was published this week.

Here’s a shorter blog post if you don’t want to read the whole thing. And commentary by one of the authors.

China’s AI Strategy and its Security Implications

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/chinas_ai_strat.html

Gregory C. Allen at the Center for a New American Security has a new report with some interesting analysis and insights into China’s AI strategy, spanning its commercial, government, and military dimensions. There are numerous security — and national security — implications.

Hacking Construction Cranes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/hacking_constru.html

Construction cranes are vulnerable to hacking:

In our research and vulnerability discoveries, we found that weaknesses in the controllers can be (easily) taken advantage of to move full-sized machines such as cranes used in construction sites and factories. In the different attack classes that we’ve outlined, we were able to perform the attacks quickly and even switch on the controlled machine despite an operator’s having issued an emergency stop (e-stop).

The core of the problem lies in how, instead of depending on wireless, standard technologies, these industrial remote controllers rely on proprietary RF protocols, which are decades old and are primarily focused on safety at the expense of security. It wasn’t until the arrival of Industry 4.0, as well as the continuing adoption of the industrial internet of things (IIoT), that industries began to acknowledge the pressing need for security.
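The gap the researchers describe is the absence of basic cryptographic protection: the radio commands carry no authentication or freshness, so captured packets can simply be replayed, e-stop or not. A minimal sketch of what such protocols lack, using an HMAC over the command plus a monotonically increasing counter (the key, frame layout, and command names here are invented for illustration, not taken from any real controller):

```python
import hashlib
import hmac
import struct

KEY = b"per-device secret provisioned at pairing"  # illustrative

def make_frame(counter: int, command: bytes) -> bytes:
    """Authenticated frame: counter || command || truncated HMAC tag."""
    body = struct.pack(">I", counter) + command
    tag = hmac.new(KEY, body, hashlib.sha256).digest()[:8]
    return body + tag

class Receiver:
    """Rejects forged frames (bad tag) and replayed frames (stale counter)."""
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, frame: bytes) -> bool:
        body, tag = frame[:-8], frame[-8:]
        expected = hmac.new(KEY, body, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            return False  # tampered with or forged
        counter = struct.unpack(">I", body[:4])[0]
        if counter <= self.last_counter:
            return False  # replay: counter is not fresh
        self.last_counter = counter
        return True

rx = Receiver()
frame = make_frame(1, b"HOIST_UP")
print(rx.accept(frame))  # True: fresh, authentic command
print(rx.accept(frame))  # False: verbatim replay is rejected
```

Even this toy defeats the record-and-replay attack class the report describes; the actual controllers studied accepted replayed frames verbatim because nothing in the frame was keyed or fresh.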

News article. Report.

Congressional Report on the 2017 Equifax Data Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/congressional_r.html

The US House of Representatives Committee on Oversight and Government Reform has just released a comprehensive report on the 2017 Equifax hack. It’s a great piece of writing, with a detailed timeline, root cause analysis, and lessons learned. Lance Spitzner also commented on this.

Here is my testimony before the House Subcommittee on Digital Commerce and Consumer Protection last November.

How to Punish Cybercriminals

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/how_to_punish_c.html

Interesting policy paper by Third Way: “To Catch a Hacker: Toward a comprehensive strategy to identify, pursue, and punish malicious cyber actors”:

In this paper, we argue that the United States currently lacks a comprehensive overarching strategic approach to identify, stop and punish cyberattackers. We show that:

  • There is a burgeoning cybercrime wave: A rising and often unseen crime wave is mushrooming in America. There are approximately 300,000 reported malicious cyber incidents per year, including up to 194,000 that could credibly be called individual or system-wide breaches or attempted breaches. This is likely a vast undercount since many victims don’t report break-ins to begin with. Attacks cost the US economy anywhere from $57 billion to $109 billion annually and these costs are increasing.
  • There is a stunning cyber enforcement gap: Our analysis of publicly available data shows that cybercriminals can operate with near impunity compared to their real-world counterparts. We estimate that cyber enforcement efforts are so scattered that less than 1% of malicious cyber incidents see an enforcement action taken against the attackers.

  • There is no comprehensive US cyber enforcement strategy aimed at the human attacker: Despite the recent release of a National Cyber Strategy, the United States still lacks a comprehensive strategic approach to how it identifies, pursues, and punishes malicious human cyberattackers and the organizations and countries often behind them. We believe that the United States is as far from this human attacker strategy as the nation was toward a strategic approach to countering terrorism in the weeks and months before 9/11.

In order to close the cyber enforcement gap, we argue for a comprehensive enforcement strategy that makes a fundamental rebalance in US cybersecurity policies: from a heavy focus on building better cyber defenses against intrusion to also waging a more robust effort at going after human attackers. We call for ten US policy actions that could form the contours of a comprehensive enforcement strategy to better identify, pursue and bring to justice malicious cyber actors that include building up law enforcement, enhancing diplomatic efforts, and developing a measurable strategic plan to do so.

Security Vulnerabilities in US Weapons Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/security_vulner_17.html

The US Government Accountability Office just published a new report: “Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities” (summary here). The upshot won’t be a surprise to any of my regular readers: they’re vulnerable.

From the summary:

Automation and connectivity are fundamental enablers of DOD’s modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.

In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.

It is definitely easier, and cheaper, to ignore the problem or pretend it isn’t a big deal. But that’s probably a mistake in the long run.

New Report on Police Digital Forensics Techniques

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/new_report_on_p.html

According to a new CSIS report, “going dark” is not the most pressing problem facing law enforcement in the age of digital data:

Over the past year, we conducted a series of interviews with federal, state, and local law enforcement officials, attorneys, service providers, and civil society groups. We also commissioned a survey of law enforcement officers from across the country to better understand the full range of difficulties they are facing in accessing and using digital evidence in their cases. Survey results indicate that accessing data from service providers — much of which is not encrypted — is the biggest problem that law enforcement currently faces in leveraging digital evidence.

This is a problem that has not received adequate attention or resources to date. An array of federal and state training centers, crime labs, and other efforts have arisen to help fill the gaps, but they are able to fill only a fraction of the need. And there is no central entity responsible for monitoring these efforts, taking stock of the demand, and providing the assistance needed. The key federal entity with an explicit mission to assist state and local law enforcement with their digital evidence needs — the National Domestic Communications Assistance Center (NDCAC) — has a budget of $11.4 million, spread among several different programs designed to distribute knowledge about service providers’ policies and products, develop and share technical tools, and train law enforcement on new services and technologies, among other initiatives.

From a news article:

In addition to bemoaning the lack of guidance and help from tech companies — a quarter of survey respondents said their top issue was convincing companies to hand over suspects’ data — law enforcement officials also reported receiving barely any digital evidence training. Local police said they’d received only 10 hours of training in the past 12 months; state police received 13 and federal officials received 16. A plurality of respondents said they only received annual training. Only 16 percent said their organizations scheduled training sessions at least twice per year.

This is a point that Susan Landau has repeatedly made, and also one I make in my new book. The FBI needs technical expertise, not backdoors.

Here’s the report.

Department of Commerce Report on the Botnet Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/department_of_c.html

Last month, the US Department of Commerce released a report on the threat of botnets and what to do about it. I note that it explicitly said that the IoT makes the threat worse, and that the solutions are largely economic.

The Departments determined that the opportunities and challenges in working toward dramatically reducing threats from automated, distributed attacks can be summarized in six principal themes.

  1. Automated, distributed attacks are a global problem. The majority of the compromised devices in recent noteworthy botnets have been geographically located outside the United States. To increase the resilience of the Internet and communications ecosystem against these threats, many of which originate outside the United States, we must continue to work closely with international partners.
  2. Effective tools exist, but are not widely used. While there remains room for improvement, the tools, processes, and practices required to significantly enhance the resilience of the Internet and communications ecosystem are widely available, and are routinely applied in selected market sectors. However, they are not part of common practices for product development and deployment in many other sectors for a variety of reasons, including (but not limited to) lack of awareness, cost avoidance, insufficient technical expertise, and lack of market incentives.

  3. Products should be secured during all stages of the lifecycle. Devices that are vulnerable at time of deployment, lack facilities to patch vulnerabilities after discovery, or remain in service after vendor support ends make assembling automated, distributed threats far too easy.

  4. Awareness and education are needed. Home users and some enterprise customers are often unaware of the role their devices could play in a botnet attack and may not fully understand the merits of available technical controls. Product developers, manufacturers, and infrastructure operators often lack the knowledge and skills necessary to deploy tools, processes, and practices that would make the ecosystem more resilient.

  5. Market incentives should be more effectively aligned. Market incentives do not currently appear to align with the goal of “dramatically reducing threats perpetrated by automated and distributed attacks.” Product developers, manufacturers, and vendors are motivated to minimize cost and time to market, rather than to build in security or offer efficient security updates. Market incentives must be realigned to promote a better balance between security and convenience when developing products.

  6. Automated, distributed attacks are an ecosystem-wide challenge. No single stakeholder community can address the problem in isolation.

[…]

The Departments identified five complementary and mutually supportive goals that, if realized, would dramatically reduce the threat of automated, distributed attacks and improve the resilience and redundancy of the ecosystem. A list of suggested actions for key stakeholders reinforces each goal. The goals are:

  • Goal 1: Identify a clear pathway toward an adaptable, sustainable, and secure technology marketplace.
  • Goal 2: Promote innovation in the infrastructure for dynamic adaptation to evolving threats.
  • Goal 3: Promote innovation at the edge of the network to prevent, detect, and mitigate automated, distributed attacks.
  • Goal 4: Promote and support coalitions between the security, infrastructure, and operational technology communities domestically and around the world.
  • Goal 5: Increase awareness and education across the ecosystem.