Tag Archives: hacking

AIs as Computer Hackers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/ais-as-computer-hackers.html

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies.

In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie-Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.

There was a traditional human–team capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn’t come in last in every category all of the time.

I figured it was only a matter of time. It would be the same story we’ve seen in so many other areas of AI: the games of chess and go, X-ray and disease diagnostics, writing fake news. AIs would improve every year because all of the core technologies are continually improving. Humans would largely stay the same because we remain humans even as our tools improve. Eventually, the AIs would routinely beat the humans. I guessed that it would take about a decade.

But now, five years later, I have no idea if that prediction is still on track. Inexplicably, DARPA never repeated the event. Research on the individual components of the software vulnerability lifecycle does continue. There’s an enormous amount of work being done on automatic vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which machine learning systems excel, if they can only be taught how to recognize a vulnerability. There is also work on automatic vulnerability exploitation and lots on automatic update and patching. Still, there is something uniquely powerful about a competition that puts all of the components together and tests them against others.

To see that in action, you have to go to China. Since 2017, China has held at least seven of these competitions—called Robot Hacking Games—many with multiple qualifying rounds. The first included one team each from the United States, Russia, and Ukraine. The rest have been Chinese only: teams from Chinese universities, teams from companies like Baidu and Tencent, teams from the military. Rules seem to vary. Sometimes human–AI hybrid teams compete.

Details of these events are few. They’re Chinese language only, which naturally limits what the West knows about them. I didn’t even know they existed until Dakota Cary, a research analyst at the Center for Security and Emerging Technology and a Chinese speaker, wrote a report about them a few months ago. And they’re increasingly hosted by the People’s Liberation Army, which presumably controls how much detail becomes public.

Some things we can infer. In 2016, none of the Cyber Grand Challenge teams used modern machine learning techniques. Certainly most of the Robot Hacking Games entrants are using them today. And the competitions encourage collaboration as well as competition between the teams. Presumably that accelerates advances in the field.

None of this is to say that real robot hackers are poised to attack us today, but I wish I could predict with some certainty when that day will come. In 2018, I wrote about how AI could change the attack/defense balance in cybersecurity. I said that it is impossible to know which side would benefit more but predicted that the technologies would benefit the defense more, at least in the short term. I wrote: “Defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.”

Unfortunately, it’s the People’s Liberation Army and not DARPA that will be the first to learn if I am right or wrong and how soon it matters.

This essay originally appeared in the January/February 2022 issue of IEEE Security & Privacy.

Kevin Mitnick Hacked California Law in 1983

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/kevin-mitnick-hacked-california-law-in-1983.html

Early in his career, Kevin Mitnick successfully hacked California law. He told me the story when he heard about my new book; he partially recounts it in his 2012 book, Ghost in the Wires.

The setup is that he has just discovered that there’s a warrant out for his arrest by the California Youth Authority, and he’s trying to figure out if there’s any way out of it.

As soon as I was settled, I looked in the Yellow Pages for the nearest law school, and spent the next few days and evenings there poring over the Welfare and Institutions Code, but without much hope.

Still, hey, “Where there’s a will…” I found a provision that said that for a nonviolent crime, the jurisdiction of the Juvenile Court expired either when the defendant turned twenty-one or two years after the commitment date, whichever occurred later. For me, that would mean two years from February 1983, when I had been sentenced to the three years and eight months.

Scratch, scratch. A little arithmetic told me that this would occur in about four months. I thought, What if I just disappear until their jurisdiction ends?

The law school was Southwestern Law School in Los Angeles. This was a lot of manual research—no search engines in those days. He researched the relevant statutes and the case law that interpreted them, and made copies of everything to hand to his attorney.

I called my attorney to try out the idea on him. His response sounded testy: “You’re absolutely wrong. It’s a fundamental principle of law that if a defendant disappears when there’s a warrant out for him, the time limit is tolled until he’s found, even if it’s years later.”

And he added, “You have to stop playing lawyer. I’m the lawyer. Let me do my job.”

I pleaded with him to look into it, which annoyed him, but he finally agreed. When I called back two days later, he had talked to my Parole Officer, Melvin Boyer, the compassionate guy who had gotten me transferred out of the dangerous jungle at LA County Jail. Boyer had told him, “Kevin is right. If he disappears until February 1985, there’ll be nothing we can do. At that point the warrant will expire, and he’ll be off the hook.”

So he moved to Northern California and lived under an assumed name for four months.

What’s interesting to me is how he approaches legal code in the same way a hacker approaches computer code: poring over the details, looking for a bug—a mistake—leading to an exploitable vulnerability. And this was in the days before you could do any research online; he spent days in the law school library.

This is exactly the sort of thing I am writing about in A Hacker’s Mind. Legal code isn’t the same as computer code, but it’s a series of rules with inputs and outputs. And just like computer code, legal code has bugs. And some of those bugs are also vulnerabilities. And some of those vulnerabilities can be exploited—just as Mitnick learned.

Mitnick was a hacker. His attorney was not.

US Cyber Command Operations During the 2022 Midterm Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/us-cyber-command-operations-during-the-2022-midterm-elections.html

The head of both US Cyber Command and the NSA, Gen. Paul Nakasone, broadly discussed that first organization’s offensive cyber operations during the runup to the 2022 midterm elections. He didn’t name names, of course:

“We did conduct operations persistently to make sure that our foreign adversaries couldn’t utilize infrastructure to impact us,” said Nakasone. “We understood how foreign adversaries utilize infrastructure throughout the world. We had that mapped pretty well. And we wanted to make sure that we took it down at key times.”

Nakasone noted that Cybercom’s national mission force, aided by NSA, followed a “campaign plan” to deprive the hackers of their tools and networks. “Rest assured,” he said. “We were doing operations well before the midterms began, and we were doing operations likely on the day of the midterms.” And they continued until the elections were certified, he said.

We know Cybercom did similar things in 2018 and 2020, and presumably will again in two years.

No-Fly List Exposed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/no-fly-list-exposed.html

I can’t remember the last time I thought about the US no-fly list: the list of people so dangerous they should never be allowed to fly on an airplane, yet so innocent that we can’t arrest them. Back when I thought about it a lot, I realized that the TSA’s practice of giving it to every airline meant that it was not well protected, and it certainly ended up in the hands of every major government that wanted it.

The list is back in the news today, having been left exposed on an insecure airline computer. (The airline is CommuteAir, a company so obscure that I’ve never heard of it before.)

This is, of course, the problem with having to give a copy of your secret list to lots of people.

The FBI Identified a Tor User

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/the-fbi-identified-a-tor-user.html

No details, though:

According to the complaint against him, Al-Azhari allegedly visited a dark web site that hosts “unofficial propaganda and photographs related to ISIS” multiple times on May 14, 2019. In virtue of being a dark web site—that is, one hosted on the Tor anonymity network—it should have been difficult for the site owner or a third party to determine the real IP address of any of the site’s visitors.

Yet, that’s exactly what the FBI did. It found Al-Azhari allegedly visited the site from an IP address associated with Al-Azhari’s grandmother’s house in Riverside, California. The FBI also found what specific pages Al-Azhari visited, including a section on donating Bitcoin; another focused on military operations conducted by ISIS fighters in Iraq, Syria, and Nigeria; and another page that provided links to material from ISIS’s media arm. Without the FBI deploying some form of surveillance technique, or Al-Azhari using another method to visit the site which exposed their IP address, this should not have been possible.

There are lots of ways to de-anonymize Tor users. Someone at the NSA gave a presentation on this ten years ago. (I wrote about it for the Guardian in 2013, an essay that reads so dated in light of what we’ve learned since then.) It’s unlikely that the FBI uses the same sorts of broad surveillance techniques that the NSA does, but it’s certainly possible that the NSA did the surveillance and passed the information to the FBI.

Hacking Trespass Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/12/hacking-trespass-law.html

This article talks about public land in the US that is completely surrounded by private land, which in some cases makes it inaccessible to the public. But there’s a hack:

Some hunters have long believed, however, that the publicly owned parcels on Elk Mountain can be legally reached using a practice called corner-crossing.

Corner-crossing can be visualized in terms of a checkerboard. Ever since the Westward Expansion, much of the Western United States has been divided into alternating squares of public and private land. Corner-crossers, like checker pieces, literally step from one public square to another in diagonal fashion, avoiding trespassing charges. The practice is neither legal nor illegal. Most states discourage it, but none ban it.

It’s an interesting ambiguity in the law: does a checker piece trespass on the white squares when it moves diagonally over the black squares? But, of course, the legal battle isn’t really about that. It’s about the rights of property owners vs. the rights of those who wish to walk on this otherwise-inaccessible public land.

This particular hack will be adjudicated in court. State court, I think, which means the answer might be different in different states. It’s not an example I discuss in my new book, but it’s similar to many I do discuss. It’s the act of adjudicating hacks that allows systems to evolve.

Sirius XM Software Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/12/sirius-xm-software-vulnerability.html

This is new:

Newly revealed research shows that a number of major car brands, including Honda, Nissan, Infiniti, and Acura, were affected by a previously undisclosed security bug that would have allowed a savvy hacker to hijack vehicles and steal user data. According to researchers, the bug was in the car’s Sirius XM telematics infrastructure and would have allowed a hacker to remotely locate a vehicle, unlock and start it, flash the lights, honk the horn, pop the trunk, and access sensitive customer info like the owner’s name, phone number, address, and vehicle details.

Cars are just computers with four wheels and an engine. It’s no surprise that the software is vulnerable, and that everything is connected.

Hacking Automobile Keyless Entry Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/hacking-automobile-keyless-entry-systems.html

Suspected members of a European car-theft ring have been arrested:

The criminals targeted vehicles with keyless entry and start systems, exploiting the technology to get into the car and drive away.

As a result of a coordinated action carried out on 10 October in the three countries involved, 31 suspects were arrested. A total of 22 locations were searched, and over EUR 1 098 500 in criminal assets seized.

The criminals targeted keyless vehicles from two French car manufacturers. A fraudulent tool, marketed as an automotive diagnostic solution, was used to replace the original software of the vehicles, allowing the doors to be opened and the ignition to be started without the actual key fob.

Among those arrested are the software’s developers, its resellers, and the car thieves who used the tool to steal vehicles.

The article doesn’t say how the hacking tool got installed into cars. Were there crooked auto mechanics, dealers, or something else?

Massive Data Breach at Uber

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/09/massive-data-breach-at-uber.html

It’s big:

The breach appeared to have compromised many of Uber’s internal systems, and a person claiming responsibility for the hack sent images of email, cloud storage and code repositories to cybersecurity researchers and The New York Times.

“They pretty much have full access to Uber,” said Sam Curry, a security engineer at Yuga Labs who corresponded with the person who claimed to be responsible for the breach. “This is a total compromise, from what it looks like.”

It looks like a pretty basic phishing attack; someone gave the hacker their login credentials. And because Uber has lousy internal security, lots of people have access to everything. So once a hacker gains a foothold, they have access to everything.

This is the same thing that Mudge accuses Twitter of: too many employees have broad access within the company’s network.

More details. Slashdot thread.

EDITED TO ADD (9/20): More details.

Relay Attack against Teslas

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/09/relay-attack-against-teslas.html

Nice work:

Radio relay attacks are technically complicated to execute, but conceptually easy to understand: attackers simply extend the range of your existing key using what is essentially a high-tech walkie-talkie. One thief stands near you while you’re in the grocery store, intercepting your key’s transmitted signal with a radio transceiver. Another stands near your car, with another transceiver, taking the signal from their friend and passing it on to the car. Since the car and the key can now talk, through the thieves’ range extenders, the car has no reason to suspect the key isn’t inside—and fires right up.

But Tesla’s credit card keys, like many digital keys stored in cell phones, don’t work via radio. Instead, they rely on a different protocol called Near Field Communication or NFC. Those keys had previously been seen as more secure, since their range is so limited and their handshakes with cars are more complex.

Now, researchers seem to have cracked the code. By reverse-engineering the communications between a Tesla Model Y and its credit card key, they were able to properly execute a range-extending relay attack against the crossover. While this specific use case focuses on Tesla, it’s a proof of concept—NFC handshakes can, and eventually will, be reverse-engineered.
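
Conceptually, the relay is nothing more than faithful forwarding. Here is a rough sketch of that idea (my illustration, not the researchers’ tooling), using ordinary file descriptors as stand-ins for the two transceivers: whatever arrives from the key side is passed to the car side and vice versa, so the challenge-response completes as if the key were standing next to the car.

```c
/* Conceptual sketch of a relay: forward bytes between the "key" side and the
 * "car" side without understanding them. The two file descriptors are
 * hypothetical stand-ins for the attackers' transceivers. */
#include <sys/select.h>
#include <unistd.h>

static void relay(int key_fd, int car_fd)
{
    char buf[4096];
    int maxfd = (key_fd > car_fd ? key_fd : car_fd) + 1;

    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(key_fd, &readable);
        FD_SET(car_fd, &readable);
        if (select(maxfd, &readable, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(key_fd, &readable)) {              /* key -> car */
            ssize_t n = read(key_fd, buf, sizeof(buf));
            if (n <= 0 || write(car_fd, buf, (size_t)n) < 0)
                break;
        }
        if (FD_ISSET(car_fd, &readable)) {              /* car -> key */
            ssize_t n = read(car_fd, buf, sizeof(buf));
            if (n <= 0 || write(key_fd, buf, (size_t)n) < 0)
                break;
        }
    }
}
```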

25 Years of Nmap: Happy Scan-iversary!

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/09/01/25-years-of-nmap-happy-scan-iversary/

I didn’t know it then, but on September 1, 1997, my life changed. That was the day that Fyodor’s Nmap was first released to the world, courtesy of the venerable Phrack magazine. (By the way, check out our recent podcast with Fyodor himself if you haven’t yet.) At the time, I had just started my legitimate IT career, but boy oh boy, I was in the thick of it when it came to hackery hijinks. I won’t admit to any crimes or anything in this, my now-very-legitimate company’s blog post, but let me tell you: 1997 was a truly magical time for the nascent field of what would eventually become known as information security.

At the risk of making this sound like a “kids-these-days/back-in-my-day” kind of blog post, let me just say that if you wanted to probe and profile computers — yes, even computers you owned, legitimately — your choices were simultaneously limited and practically unbounded. In order to conduct network scanning, you had a bunch of tools available to you, all of which worked a little differently, ranging from “completely broken” to “kind of okay for some users.” People who were into this sort of thing generally got frustrated with the tooling floating around and wrote their own, which meant that their tools tended to only work for them, since these projects were heavily dependent on that one person’s local operating system configuration.

Nmap changed all that.

Early infosec’s magic moment

From the outset, Nmap was a simple tool that literally fit in a magazine article about network scanning tactics and tricks. It was two files of about 2,100 lines of code, and unlike many hacker tools of the day, it actually compiled for me on the first try.

Most importantly, Fyodor’s code style was weirdly easy to read, even for a non-programmer hacker hobbyist like myself (I didn’t get my first “real” IT job until 1998, but I did spend quite a bit of time in university computer labs for… reasons).

A snippet of the original code published in Phrack 51

Smack in the middle, you can see elements like `send_tcp_raw()` (pictured above) that directly reflected the language of the TCP standard, RFC 793, so the code was generally accessible to both hobbyists and professionals who were motivated to figure out how this TCP/IP stuff really worked.
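
To make that concrete, here is a minimal sketch (mine today, not Fyodor’s 1997 code) of the kind of structure a `send_tcp_raw()`-style routine fills in: a TCP header whose field names mirror RFC 793, set up as a bare SYN probe. The checksum computation and raw-socket plumbing are left out.

```c
/* Illustrative only: a TCP header laid out with RFC 793's field names. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

struct tcp_header {
    uint16_t source_port;       /* "Source Port" */
    uint16_t destination_port;  /* "Destination Port" */
    uint32_t sequence_number;   /* "Sequence Number" */
    uint32_t ack_number;        /* "Acknowledgment Number" */
    uint8_t  data_offset;       /* header length in 32-bit words, high nibble */
    uint8_t  flags;             /* URG/ACK/PSH/RST/SYN/FIN control bits */
    uint16_t window;            /* "Window" */
    uint16_t checksum;          /* "Checksum" */
    uint16_t urgent_pointer;    /* "Urgent Pointer" */
};

/* Fill in a bare SYN probe aimed at dport; a real scanner would compute the
 * checksum over an IP pseudo-header and hand the bytes to a raw socket. */
static void build_syn_probe(struct tcp_header *tcp, uint16_t sport,
                            uint16_t dport, uint32_t seq)
{
    memset(tcp, 0, sizeof(*tcp));
    tcp->source_port      = htons(sport);
    tcp->destination_port = htons(dport);
    tcp->sequence_number  = htonl(seq);
    tcp->data_offset      = (sizeof(*tcp) / 4) << 4;  /* 5 words, no options */
    tcp->flags            = 0x02;                      /* SYN */
    tcp->window           = htons(1024);
}
```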

Incidentally, other projects were popping off at the time — l0phtcrack (a proprietary utility for recovering passwords) was released a few months before, and Nessus (a little open-source vulnerability scanner) was released a few months after, so there was definitely something in the ether during this 12-month period. Hacker tooling was transforming into infosec tooling, which meant more “luser n00bs,” like myself, could get themselves enmeshed in, and enamored of, the occult magicks of internet technology. Nmap, at least for me, stood out as a true oracle to the weird ways of packet crafting and network sleight-of-hand you could use in fun, unexpected ways to learn about the world.

Happy Scan-iversary, Nmap. Thanks for the cool career.

High-School Graduation Prank Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/high-school-graduation-prank-hack.html

This is a fun story, detailing the hack a group of high school students perpetrated against an Illinois school district, hacking 500 screens across a bunch of schools.

During the process, the group broke into the school’s IT systems; repurposed software used to monitor students’ computers; discovered a new vulnerability (and reported it); wrote their own scripts; secretly tested their system at night; and managed to avoid detection in the school’s network. Many of the techniques were not sophisticated, but they were pretty much all illegal.

It has a happy ending: no one was prosecuted.

A spokesperson for the D214 school district tells WIRED they can confirm the events in Duong’s blog post happened. They say the district does not condone hacking and the “incident highlights the importance of the extensive cybersecurity learning opportunities the District offers to students.”

“The District views this incident as a penetration test, and the students involved presented the data in a professional manner,” the spokesperson says, adding that its tech team has made changes to avoid anything similar happening again in the future.

The school also invited the students to a debrief, asking them to explain what they had done. “We were kind of scared at the idea of doing the debrief because we have to join a Zoom call, potentially with personally identifiable information,” Duong says. Eventually, he decided to use his real name, while other members created anonymous accounts. During the call, Duong says, they talked through the hack and he provided more details on ways the school could secure its system.

EDITED TO ADD (9/13): Here’s Minh Duong’s Defcon slides. You can see the table of contents of their report on page 59, and the school’s response on page 60.

Signal Phone Numbers Exposed in Twilio Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/signal-phone-numbers-exposed-in-twilio-hack.html

Twilio was hacked earlier this month, and the phone numbers of 1,900 Signal users were exposed:

Here’s what our users need to know:

  • All users can rest assured that their message history, contact lists, profile information, whom they’d blocked, and other personal data remain private and secure and were not affected.
  • For about 1,900 users, an attacker could have attempted to re-register their number to another device or learned that their number was registered to Signal. This attack has since been shut down by Twilio. 1,900 users is a very small percentage of Signal’s total users, meaning that most were not affected.

We are notifying these 1,900 users directly, and prompting them to re-register Signal on their devices.

If you were not notified, don’t worry about it. But it does bring up the old question: Why does Signal require a phone number to use? It doesn’t have to be that way.

USB “Rubber Ducky” Attack Tool

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/usb-rubber-ducky-attack-tool.html

The USB Rubber Ducky is getting better and better.

Already, previous versions of the Rubber Ducky could carry out attacks like creating a fake Windows pop-up box to harvest a user’s login credentials or causing Chrome to send all saved passwords to an attacker’s webserver. But these attacks had to be carefully crafted for specific operating systems and software versions and lacked the flexibility to work across platforms.

The newest Rubber Ducky aims to overcome these limitations. It ships with a major upgrade to the DuckyScript programming language, which is used to create the commands that the Rubber Ducky will enter into a target machine. While previous versions were mostly limited to writing keystroke sequences, DuckyScript 3.0 is a feature-rich language, letting users write functions, store variables, and use logic flow controls (i.e., if this… then that).

That means, for example, the new Ducky can run a test to see if it’s plugged into a Windows or Mac machine and conditionally execute code appropriate to each one or disable itself if it has been connected to the wrong target. It also can generate pseudorandom numbers and use them to add variable delay between keystrokes for a more human effect.

Perhaps most impressively, it can steal data from a target machine by encoding it in binary format and transmitting it through the signals meant to tell a keyboard when the CapsLock or NumLock LEDs should light up. With this method, an attacker could plug it in for a few seconds, tell someone, “Sorry, I guess that USB drive is broken,” and take it back with all their passwords saved.
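
The article doesn’t spell out the encoding, but the general idea is easy to sketch. What follows is a hypothetical illustration (the press_* functions are stand-ins, not the real Rubber Ducky firmware API): each bit of the stolen data determines which lock key gets pressed, and a listener watching the host’s LED output reports reassembles the bytes on the other end.

```c
/* Hypothetical sketch of an LED side channel; not the actual DuckyScript payload. */
#include <stddef.h>
#include <stdint.h>

void press_capslock(void);  /* stand-in: host answers with a CapsLock LED report */
void press_numlock(void);   /* stand-in: host answers with a NumLock LED report  */

/* Send each bit of the data, most significant first: CapsLock for 1, NumLock for 0. */
static void exfiltrate_via_leds(const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        for (int bit = 7; bit >= 0; bit--) {
            if ((data[i] >> bit) & 1)
                press_capslock();
            else
                press_numlock();
        }
    }
}
```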

Russia Creates Malware False-Flag App

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/07/russia-creates-malware-false-flag-app.html

The Russian hacking group Turla released an Android app that seems to aid Ukrainian hackers in their attacks against Russian networks. It’s actually malware, and provides information back to the Russians:

The hackers pretended to be a “community of free people around the world who are fighting russia’s aggression”—much like the IT Army. But the app they developed was actually malware. The hackers called it CyberAzov, in reference to the Azov Regiment or Battalion, a far-right group that has become part of Ukraine’s national guard. To add more credibility to the ruse they hosted the app on a domain “spoofing” the Azov Regiment: cyberazov[.]com.

[…]

The app actually didn’t DDoS anything, but was designed to map out and figure out who would want to use such an app to attack Russian websites, according to Huntley.

[…]

Google said the fake app wasn’t hosted on the Play Store, and that the number of installs “was miniscule.”

Details from Google’s Threat Analysis Group here.

NSO Group’s Pegasus Spyware Used against Thailand Pro-Democracy Activists and Leaders

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/07/nso-groups-pegasus-spyware-used-against-thailand-pro-democracy-activists-and-leaders.html

Yet another basic human rights violation, courtesy of NSO Group: Citizen Lab has the details:

Key Findings

  • We discovered an extensive espionage campaign targeting Thai pro-democracy protesters, and activists calling for reforms to the monarchy.
  • We forensically confirmed that at least 30 individuals were infected with NSO Group’s Pegasus spyware.
  • The observed infections took place between October 2020 and November 2021.
  • The ongoing investigation was triggered by notifications sent by Apple to Thai civil society members in November 2021. Following the notification, multiple recipients made contact with civil society groups, including the Citizen Lab.
  • The report describes the results of an ensuing collaborative investigation by the Citizen Lab, and Thai NGOs iLaw, and DigitalReach.
  • A sample of the victims was independently analyzed by Amnesty International’s Security Lab which confirms the methodology used to determine Pegasus infections.

[…]

NSO Group has denied any wrongdoing and maintains that its products are to be used “in a legal manner and according to court orders and the local law of each country.” This justification is problematic, given the presence of local laws that infringe on international human rights standards and the lack of judicial oversight, transparency, and accountability in governmental surveillance, which could result in abuses of power. In Thailand, for example, Section 112 of the Criminal Code (also known as the lèse-majesté law), which criminalizes defamation, insults, and threats to the Thai royal family, has been criticized for being “fundamentally incompatible with the right to freedom of expression,” while the amended Computer Crime Act opens the door to potential rights violations, as it “gives overly broad powers to the government to restrict free speech [and] enforce surveillance and censorship.” Both laws have been used in concert to prosecute lawyers and activists, some of whom were targeted with Pegasus.

More details. News articles.

A few months ago, Ronan Farrow wrote a really good article on NSO Group and its problems. The company was itself hacked in 2021.

L3Harris Corporation was looking to buy NSO Group, but dropped its bid after the Biden administration expressed concerns. The US government blacklisted NSO Group last year, and the company is even more toxic than it was as a result—and a mess internally.

In another story, the nephew of the jailed Hotel Rwanda dissident was also hacked by Pegasus.

EDITED TO ADD (7/28): The House Intelligence Committee held hearings on what to do about this rogue industry. It’s important to remember that while NSO Group gets all the heat, there are many other companies that do the same thing.

John Scott-Railton at the hearing:

If NSO Group goes bankrupt tomorrow, there are other companies, perhaps seeded with U.S. venture capital, that will attempt to step in to fill the gap. As long as U.S. investors see the mercenary spyware industry as a growth market, the U.S. financial sector is poised to turbocharge the problem and set fire to our collective cybersecurity and privacy.