Tag Archives: scams

Details of a Phone Scam

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/details-of-a-phone-scam.html

A first-person account of someone who fell for a scam that started with a fake Amazon service rep and ended with a fake CIA agent, and who lost $50,000 in cash. And this is not a naive or stupid person.

The details are fascinating. And if you think it couldn’t happen to you, think again. Given the right set of circumstances, it can.

It happened to Cory Doctorow.

EDITED TO ADD (2/23): More scams, these involving timeshares.

How .tk Became a TLD for Scammers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/how-tk-became-a-tld-for-scammers.html

Sad story of Tokelau, and how its top-level domain “became the unwitting host to the dark underworld by providing a never-ending supply of domain names that could be weaponized against internet users. Scammers began using .tk websites to do everything from harvesting passwords and payment information to displaying pop-up ads or delivering malware.”

LLMs and Phishing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance….” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams.

So why were scammers still sending such obviously dubious emails? In 2012, researcher Cormac Herley offered an answer: It weeded out all but the most gullible. A smart scammer doesn’t want to waste their time with people who reply and then realize it’s a scam when asked to wire money. By using an obvious scam email, the scammer can focus on the most potentially profitable people. It takes time and effort to engage in the back-and-forth communications that nudge marks, step by step, from interlocutor to trusted acquaintance to pauper.

Long-running financial scams are now known as pig butchering: growing the potential mark up until their ultimate and sudden demise. Such scams, which require gaining trust and infiltrating a target’s personal finances, take weeks or even months of personal time and repeated interactions. It’s a high-stakes, low-probability game that the scammer is playing.

Here is where LLMs will make a difference. Much has been written about the unreliability of OpenAI’s GPT models and those like them: They “hallucinate” frequently, making up things about the world and confidently spouting nonsense. For entertainment, this is fine, but for most practical uses it’s a problem. It is, however, not a bug but a feature when it comes to scams: LLMs’ ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions. AI chatbot scams can ensnare more people, because the pool of victims who will fall for a more subtle and flexible scammer—one that has been trained on everything ever written online—is much larger than the pool of those who believe the king of Nigeria wants to give them a billion dollars.

Personal computers are powerful enough today that they can run compact LLMs. After Facebook’s new model, LLaMA, was leaked online, developers tuned it to run fast and cheaply on powerful laptops. Numerous other open-source LLMs are under development, with a community of thousands of engineers and scientists.

A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting along their path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.

This is a change in both scope and scale. LLMs will change the scam pipeline, making scams more profitable than ever. We don’t know how to live in a world with a billion, or 10 billion, scammers that never sleep.

There will also be a change in the sophistication of these attacks. This is due not only to AI advances, but to the business model of the internet—surveillance capitalism—which produces troves of data about all of us, available for purchase from data brokers. Targeted attacks against individuals, whether for phishing or data collection or scams, were once only within the reach of nation-states. Combine the digital dossiers that data brokers have on all of us with LLMs, and you have a tool tailor-made for personalized scams.

Companies like OpenAI attempt to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI’s designers. ChatGPT, and then Bing Chat, and then GPT-4 were all jailbroken within minutes of their release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalized, and the community of users pulls the LLM open through the chinks in its armor. And the technology is advancing too fast for anyone, even its designers, to fully understand how these models work.

This is all an old story, though: It reminds us that many of the bad uses of AI are a reflection of humanity more than they are a reflection of AI technology itself. Scams are nothing new: simply the intent, and then the action, of one person tricking another for personal gain. And the use of others as minions to accomplish scams is sadly nothing new or uncommon: For example, organized crime in Asia currently kidnaps or indentures thousands in scam sweatshops. Is it better that organized crime will no longer see the need to exploit and physically abuse people to run their scam operations, or worse that they and many others will be able to scale up scams to an unprecedented level?

Defense can and will catch up, but before it does, our signal-to-noise ratio is going to drop dramatically.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

Complex Impersonation Story

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/complex-impersonation-story.html

This is a story of one piece of what is probably a complex employment scam. Basically, real programmers are having their resumes copied and co-opted by scammers, who apply for jobs (or, I suppose, get recruited from various job sites) and then hire other people with Western looks and language skills to impersonate those programmers in Zoom job interviews. Presumably, sometimes the scammers get hired and…I suppose…collect paychecks for a while until they get found out and fired. But that requires a bunch of banking fraud as well, so I don’t know.

EDITED TO ADD (10/11): Brian Krebs writes about fake LinkedIn profiles, which is probably another facet of this fraud system. Someone needs to unravel all of the threads.

Man-in-the-Middle Phishing Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/man-in-the-middle-phishing-attack.html

Here’s a phishing campaign that uses a man-in-the-middle attack to defeat multi-factor authentication:

Microsoft observed a campaign that inserted an attacker-controlled proxy site between the account users and the work server they attempted to log into. When the user entered a password into the proxy site, the proxy site sent it to the real server and then relayed the real server’s response back to the user. Once the authentication was completed, the threat actor stole the session cookie the legitimate site sent, so the user doesn’t need to be reauthenticated at every new page visited. The campaign began with a phishing email with an HTML attachment leading to the proxy server.
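The reason cookie theft defeats multi-factor authentication is that most sites verify the password and second factor only once, at login; every later request is authorized by the session cookie alone. Here is a minimal sketch of that trust model (all names and values are hypothetical, not Microsoft’s description):

```python
import secrets

SESSIONS = {}  # session token -> username

def login(username, password, otp_code):
    """Server side: the full MFA check happens only once, at login."""
    if password == "correct-password" and otp_code == "123456":
        token = secrets.token_hex(16)
        SESSIONS[token] = username
        return token  # sent to the browser as a session cookie
    return None

def fetch_page(session_cookie):
    """Every later request is authorized by the cookie alone --
    no password, no second factor."""
    user = SESSIONS.get(session_cookie)
    return f"private data for {user}" if user else "access denied"

# The victim logs in through the attacker's proxy, which relays the real
# password and OTP to the real server...
cookie = login("victim", "correct-password", "123456")

# ...and the attacker, who captured the cookie in transit, simply replays it.
print(fetch_page(cookie))  # the attacker sees the victim's data
```

This is why phishing-resistant schemes bind authentication to the site’s origin rather than relying on a replayable bearer token.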

$23 Million YouTube Royalties Scam

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/23-million-youtube-royalties-scam.html

Scammers were able to convince YouTube that other people’s music was their own. They successfully stole $23 million before they were caught.

No one knows how common this scam is, or how much money in total is being stolen this way. Presumably this is not an uncommon fraud.

While the size of the heist and the breadth of the scheme may be very unique, it’s certainly a situation that many YouTube content creators have faced before. YouTube’s Content ID system, meant to help creators, has been weaponized by bad faith actors in order to make money off content that isn’t theirs. While some false claims are just mistakes caused by automated systems, the MediaMuv case is a perfect example of how fraudsters are also purposefully taking advantage of digital copyright rules.

YouTube attempts to be cautious with who it provides CMS and Content ID tool access because of how powerful these systems are. As a result, independent creators and artists cannot check for these false copyright claims nor do they have the power to directly act on them. They need to go through a digital rights management company that does have access. And it seems like thieves are doing the same, falsifying documents to gain access to these YouTube tools through these third parties that are “trusted” with these tools by YouTube.

Hackers Using Fake Police Data Requests against Tech Companies

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/hackers-using-fake-police-data-requests-against-tech-companies.html

Brian Krebs has a detailed post about hackers using fake police data requests to trick companies into handing over data.

Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name.

But in certain circumstances — such as a case involving imminent harm or death — an investigating authority may make what’s known as an Emergency Data Request (EDR), which largely bypasses any official review and does not require the requestor to supply any court-approved documents.

It is now clear that some hackers have figured out there is no quick and easy way for a company that receives one of these EDRs to know whether it is legitimate. Using their illicit access to police email systems, the hackers will send a fake EDR along with an attestation that innocent people will likely suffer greatly or die unless the requested data is provided immediately.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: failing to immediately comply with an EDR — and potentially having someone’s blood on their hands — or possibly leaking a customer record to the wrong person.

Another article claims that both Apple and Facebook (or Meta, or whatever they want to be called now) fell for this scam.

We allude to this kind of risk in our 2015 “Keys Under Doormats” paper:

Third, exceptional access would create concentrated targets that could attract bad actors. Security credentials that unlock the data would have to be retained by the platform provider, law enforcement agencies, or some other trusted third party. If law enforcement’s keys guaranteed access to everything, an attacker who gained access to these keys would enjoy the same privilege. Moreover, law enforcement’s stated need for rapid access to data would make it impractical to store keys offline or split keys among multiple keyholders, as security engineers would normally do with extremely high-value credentials.

The “credentials” are even more insecure than we could have imagined: access to an email address. And the data, of course, isn’t very secure. But imagine how this kind of thing could be abused with a law enforcement encryption backdoor.
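As an aside, the split-key control the excerpt mentions is easy to sketch: in a 2-of-2 XOR split, each share is uniformly random on its own, so an attacker who compromises one keyholder learns nothing, and “rapid access” requires assembling every share. A minimal illustration (the key is hypothetical, and this is not a real escrow design):

```python
import secrets

def split(key: bytes):
    """2-of-2 XOR secret split: either share alone is uniformly random."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """Both keyholders must cooperate to reconstruct the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = b"high-value escrow key"
s1, s2 = split(key)
assert combine(s1, s2) == key   # both shares together recover the key
assert s1 != key and s2 != key  # either share alone is useless
```

The point of the paper’s argument is exactly that law enforcement’s demand for instant access rules out this kind of friction.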

Fraud on Zelle

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/fraud-on-zelle.html

Zelle is rife with fraud:

Zelle’s immediacy has also made it a favorite of fraudsters. Other types of bank transfers or transactions involving payment cards typically take at least a day to clear. But once crooks scare or trick victims into handing over money via Zelle, they can siphon away thousands of dollars in seconds. There’s no way for customers — and in many cases, the banks themselves — to retrieve the money.

[…]

It’s not clear who is legally liable for such losses. Banks say that returning money to defrauded customers is not their responsibility, since the federal law covering electronic transfers — known in the industry as Regulation E — requires them to cover only “unauthorized” transactions, and the fairly common scam that Mr. Faunce fell prey to tricks people into making the transfers themselves. Victims say because they were duped into sending the money, the transaction is unauthorized. Regulatory guidance has so far been murky.

When swindled customers, already upset to find themselves on the hook, search for other means of redress, many are enraged to find out that Zelle is owned and operated by banks.

[…]

The Zelle network is operated by Early Warning Services, a company created and owned by seven banks: Bank of America, Capital One, JPMorgan Chase, PNC, Truist, U.S. Bank and Wells Fargo. Early Warning, based in Scottsdale, Ariz., manages the system’s technical infrastructure. But the 1,425 banks and credit unions that use Zelle can customize the app and add their own security settings.

Stealing Bicycles by Swapping QR Codes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/stealing-bicycles-by-swapping-qr-codes.html

This is a clever hack against those bike-rental kiosks:

They’re stealing Citi Bikes by switching the QR scan codes on two bicycles near each other at a docking station, then waiting for an unsuspecting cyclist to try to unlock a bike with his or her smartphone app.

The app doesn’t work for the rider but does free up the nearby Citi Bike with the switched code, where a thief is waiting, jumps on the bicycle and rides off.

Presumably they’re using a camera, a printer, and stickers to swap the codes on the bikes. And presumably the victim is charged for not returning the stolen bicycle.

This story is from last year, but I hadn’t seen it before. There’s a video of one theft at the link.

Wire Fraud Scam Upgraded with Bitcoin

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/11/wire-fraud-scam-upgraded-with-bitcoin.html

The FBI has issued a bulletin describing a bitcoin variant of a wire fraud scam:

As the agency describes it, the scammer will contact their victim and somehow convince them that they need to send money, either with promises of love, further riches, or by impersonating an actual institution like a bank or utility company. After the mark is convinced, the scammer will have them get cash (sometimes out of investment or retirement accounts), and head to an ATM that sells cryptocurrencies and supports reading QR codes. Once the victim’s there, they’ll scan a QR code that the scammer sent them, which will tell the machine to send any crypto purchased to the scammer’s address. Just like that, the victim loses their money, and the scammer has successfully exploited them.

[…]

The “upgrade” (as it were) for scammers with the crypto ATM method is two-fold: it can be less friction than sending a wire transfer, and at the end the scammer has cryptocurrency instead of fiat. With wire transfers, you have to fill out a form, and you may give that form to an actual person (who could potentially vibe check you). Using the ATM method, there’s less time to reflect on the fact that you’re about to send money to a stranger. And, if you’re a criminal trying to get your hands on Bitcoin, you won’t have to teach your targets how to buy coins on the internet and transfer them to another wallet — they probably already know how to use an ATM and scan a QR code.

Friday Squid Blogging: Squid Game Cryptocurrency Was a Scam

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/11/friday-squid-blogging-squid-game-cryptocurrency-was-a-scam.html

The Squid Game cryptocurrency was a complete scam:

The SQUID cryptocurrency peaked at a price of $2,861 before plummeting to $0 around 5:40 a.m. ET, according to the website CoinMarketCap. This kind of theft, commonly called a “rug pull” by crypto investors, happens when the creators of the crypto quickly cash out their coins for real money, draining the liquidity pool from the exchange.

I don’t know why anyone would trust an investment — any investment — that you could buy but not sell.
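The “rug pull” mechanic described in the quote can be illustrated with a toy constant-product liquidity pool (the x·y = k design used by many decentralized exchanges). The numbers below are made up for illustration, not SQUID’s actual reserves:

```python
# Toy constant-product (x * y = k) pool: the token's "price" is just the
# ratio of the two reserves, and selling tokens into the pool drains the
# cash side.

class Pool:
    def __init__(self, tokens: float, cash: float):
        self.tokens = tokens  # scam-coin reserve
        self.cash = cash      # real-money reserve

    def price(self) -> float:
        return self.cash / self.tokens if self.tokens else 0.0

    def sell_tokens(self, amount: float) -> float:
        """Sell `amount` tokens into the pool; returns cash paid out,
        keeping tokens * cash constant."""
        k = self.tokens * self.cash
        self.tokens += amount
        payout = self.cash - k / self.tokens
        self.cash -= payout
        return payout

pool = Pool(tokens=1_000.0, cash=1_000_000.0)
print(pool.price())         # $1,000 per token while the pool is funded

# The creators dump their enormous pre-mined stash, draining the cash side:
pool.sell_tokens(1_000_000.0)
print(pool.price())         # effectively $0 -- nothing left to pay later sellers
```

Once the cash reserve is gone, remaining holders can’t sell at any meaningful price, which is what the price chart’s drop to $0 reflects.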

Wired story.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Using Fake Student Accounts to Shill Brands

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/11/using-fake-student-accounts-to-shill-brands.html

It turns out that it’s surprisingly easy to create a fake Harvard student and get a harvard.edu email account. Scammers are using that prestigious domain name to shill brands:

Basically, it appears that anyone with $300 to spare can — or could, depending on whether Harvard successfully shuts down the practice — advertise nearly anything they wanted on Harvard.edu, in posts that borrow the university’s domain and prestige while making no mention of the fact that in reality they constitute paid advertising….

A Harvard spokesperson said that the university is working to crack down on the fake students and other scammers that have gained access to its site. They also said that the scammers were creating the fake accounts by signing up for online classes and then using the email address that process provided to infiltrate the university’s various blogging platforms.

Textbook Rental Scam

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/textbook-rental-scam.html

Here’s a story of someone who, with three compatriots, rented textbooks from Amazon and then sold them instead of returning them. They used gift cards and prepaid credit cards to buy the books, so there was no available balance when Amazon tried to charge them the buyout price for non-returned books. They also used various aliases and other tricks to bypass Amazon’s fifteen-book limit. In all, they stole 14,000 textbooks worth over $1.5 million.

The article doesn’t link to the indictment, so I don’t know how they were discovered.

Detecting Phishing Emails

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/detecting-phishing-emails.html

Research paper: Rick Wash, “How Experts Detect Phishing Scam Emails”:

Abstract: Phishing scam emails are emails that pretend to be something they are not in order to get the recipient of the email to undertake some action they normally would not. While technical protections against phishing reduce the number of phishing emails received, they are not perfect and phishing remains one of the largest sources of security risk in technology and communication systems. To better understand the cognitive process that end users can use to identify phishing messages, I interviewed 21 IT experts about instances where they successfully identified emails as phishing in their own inboxes. IT experts naturally follow a three-stage process for identifying phishing emails. In the first stage, the email recipient tries to make sense of the email, and understand how it relates to other things in their life. As they do this, they notice discrepancies: little things that are “off” about the email. As the recipient notices more discrepancies, they feel a need for an alternative explanation for the email. At some point, some feature of the email — usually, the presence of a link requesting an action — triggers them to recognize that phishing is a possible alternative explanation. At this point, they become suspicious (stage two) and investigate the email by looking for technical details that can conclusively identify the email as phishing. Once they find such information, then they move to stage three and deal with the email by deleting it or reporting it. I discuss ways this process can fail, and implications for improving training of end users about phishing.
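One concrete “stage two” check the experts in the study rely on is inspecting a link for discrepancies between what it displays and where it actually points. A toy heuristic along those lines (the regex, helper names, and example are illustrative, not from the paper):

```python
import re
from urllib.parse import urlparse

# Flag HTML links whose visible text shows one domain but whose href
# points at another -- a classic conclusive phishing signal.

LINK_RE = re.compile(r'<a\s+href="([^"]+)"\s*>([^<]+)</a>', re.IGNORECASE)

def domain(url: str) -> str:
    """Extract a lowercased host, tolerating bare domains like 'a.com'."""
    netloc = urlparse(url if "//" in url else "//" + url).netloc
    return netloc.lower().removeprefix("www.")

def suspicious_links(html: str):
    """Return (href, text) pairs where the displayed domain differs
    from the link's real target."""
    flagged = []
    for href, text in LINK_RE.findall(html):
        text = text.strip()
        # Only compare when the visible text itself looks like a domain.
        if " " in text or "." not in text:
            continue
        if domain(text) != domain(href):
            flagged.append((href, text))
    return flagged

email_body = '<a href="http://evil.example.net/login">www.mybank.com</a>'
print(suspicious_links(email_body))  # flags the mismatched link
```

Real phishing defenses are far more involved, but this is the kind of technical detail that, per the paper, lets an expert move from suspicion to a conclusive identification.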