MERDA – A Framework For Countering Disinformation

Post Syndicated from Bozho original https://techblog.bozho.net/merda-a-framework-for-countering-disinformation/

Yesterday, at a conference about disinformation, I jokingly coined the acronym MERDA (Monitor, Educate, React, Disrupt, Adapt) for countering disinformation. Now I’ll put the pretentious label “framework” on it and describe what I mean. While this may not seem like a very technical topic, fit for a tech blog, it in fact has many technical aspects, because disinformation today is spread through technical means (social networks, anonymous websites, messengers). The “Disrupt” part, in particular, is quite technical.

Monitor – in order to tackle disinformation narratives, we need to monitor them. This includes media monitoring tools (including social media monitoring) and building reports on rising narratives that may be disinformation campaigns. Such tools rely heavily on scraping online content and on consuming APIs where they exist and are accessible. Notably, Facebook removed much of its API access to content, which makes it harder to monitor for trends. It has to be noted that this doesn’t mean monitoring individuals – it’s just about trends, keywords and phrases – sometimes known in advance, sometimes not (e.g. a tool can look for very popular tweets, extract the key phrases from them, and then search for those). Governments can list their “named entities” (ministers, the prime minister, ministries, parties, etc.) and keep track of narratives, keywords and phrases relating to them.
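To make the trend-detection idea concrete, here’s a minimal sketch (my hypothetical example, not an actual tool). It assumes posts have already been collected by a scraper or API client and uses naive n-gram extraction rather than a proper NLP pipeline; the point is the pattern – spotting phrases whose daily frequency suddenly exceeds their historical baseline, without tracking any individual:

```python
from collections import Counter
import re

# Tiny stopword list for illustration; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "this", "are", "was"}

def key_phrases(text, n=2):
    """Extract candidate key phrases (word n-grams), skipping n-grams
    that consist entirely of stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    grams = zip(*(words[i:] for i in range(n)))
    return [" ".join(g) for g in grams if not all(w in STOPWORDS for w in g)]

def rising_phrases(todays_posts, baseline_counts, min_count=10, factor=5.0):
    """Flag phrases whose frequency today far exceeds their baseline.
    `baseline_counts` would come from a rolling window of previous days."""
    today = Counter(p for post in todays_posts for p in key_phrases(post))
    candidates = [
        (phrase, count)
        for phrase, count in today.items()
        if count >= min_count and count > factor * (baseline_counts.get(phrase, 0) + 1)
    ]
    return sorted(candidates, key=lambda pc: -pc[1])
```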

Educate – media literacy, including social media literacy, is a skill. Knowing that “Your page will be disabled if you don’t click here” is a scam is a skill. Being able to recognize logical fallacies and propaganda techniques is also a skill, and it needs to be taught. Ultimately, the best defense against disinformation is a well informed and prepared public.

React – public institutions need to know how and when to react to certain narratives. It helps if they know about them (through monitoring), but they also need so-called “strategic communications” capabilities in order to respond adequately to disinformation about current events: debunking, pre-bunking and giving the official angle (note that I’m not saying the official angle is always right – it sometimes isn’t, which is why it has to be supported by credible evidence).

Disrupt – this is the hard part: how to disrupt disinformation campaigns; how to identify and disable troll farms, which engage in coordinated inauthentic behavior – sharing, liking, commenting, cross-posting in groups – to create artificial buzz around a topic. Facebook is, I think, quite bad at that, which is why I have proposed local legislation that requires following certain guidelines for identifying troll farms (groups of fake accounts). Then we need a mechanism to take them down that takes freedom of speech into account – i.e. the possibility that someone is not, in fact, a troll, but merely a misled observer. Fortunately, the Digital Services Act provides for out-of-court appeals against moderation decisions.
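To illustrate one deliberately crude signal of coordination (a sketch of mine, not how Facebook or anyone else actually does it), assume we have a feed of (account, URL, timestamp) posting events. Pairs of accounts that repeatedly share the same links within minutes of each other are worth a closer look:

```python
from collections import defaultdict

def suspicious_pairs(posts, window_seconds=300, min_cooccurrences=5):
    """posts: iterable of (account_id, url, unix_timestamp) tuples.
    Returns account pairs that repeatedly post the same URL within a
    short time window -- one crude signal of possible coordination."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((ts, account))

    pair_counts = defaultdict(int)
    for events in by_url.values():
        events.sort()  # order by timestamp
        for i, (ts_i, acc_i) in enumerate(events):
            for ts_j, acc_j in events[i + 1:]:
                if ts_j - ts_i > window_seconds:
                    break  # sorted, so later events are even further away
                if acc_i != acc_j:
                    pair_counts[tuple(sorted((acc_i, acc_j)))] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= min_cooccurrences}
```

Precisely because such signals produce false positives – organic fans also share the same links quickly – any takedown mechanism built on top of them needs the appeal route mentioned above.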

The “disrupt” part is not just about troll farms – it’s about fake websites as well. Tracking linked websites, identifying the flow of narratives through them and trying to find their ultimate owners is a hard and quite technical task. We know that there are thousands of such anonymous websites that repost disinformation narratives in various languages – but taking down a website requires good legal grounds. “I don’t like their articles” is not a good reason.
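Here’s a hedged sketch of how one might start mapping such repost networks: treat each article body as a set of word shingles and flag cross-domain near-duplicates. (A real system would use MinHash or embeddings to scale, and translation-aware matching for cross-language reposts.)

```python
import re
from itertools import combinations

def shingles(text, k=5):
    """Word k-grams of a normalized article body."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(sa, sb):
    """Jaccard similarity of two shingle sets."""
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def likely_reposts(articles, threshold=0.6):
    """articles: list of (domain, text). Returns cross-domain pairs whose
    bodies are near-duplicates -- candidate edges in a repost network."""
    shingled = [(domain, shingles(text)) for domain, text in articles]
    pairs = []
    for (d1, s1), (d2, s2) in combinations(shingled, 2):
        if d1 == d2:
            continue
        score = jaccard(s1, s2)
        if score >= threshold:
            pairs.append((d1, d2, round(score, 2)))
    return pairs
```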

The “disrupt” part also needs to tackle ad networks – some obscure ad networks are the way disinformation websites get financial support. They usually advertise not-so-legal products. Stopping the inflow of money is one way to reduce disinformation.

Adapt – threat actors in the disinformation space (usually nation-states like Russia) are dynamic, and they change their tactics, techniques and procedures (TTPs). Institutions that are trying to reduce the harm of disinformation also need to be adaptable and to constantly look for the new ways in which false or misleading information is being pushed through.

Tackling disinformation is walking on thin ice. A wrong step may be seen as curbing free speech. But if we analyze patterns and techniques rather than content itself, we are mostly on the safe side – it doesn’t matter what an article says if it’s shared by 100 fake accounts and the website is supported by ads for illegal drugs that use deepfakes of famous physicians.

And it’s a complicated technical task – I’ve seen companies claiming they identify troll farms, rings of fake news websites, etc., but I haven’t seen any tool that’s good enough. And MERDA… is the situation we are in: active, coordinated exploitation of misleading and incorrect information for political and geopolitical purposes.

Political Disinformation and AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/political-disinformation-and-ai.html

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the US presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the US and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert, I believe it’s a tool uniquely suited to Internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.

Election season will soon be in full swing in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June, and the US in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the UK don’t have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan, Indonesia, India, and many African countries. Russia cares about the UK, Poland, Germany, and the EU in general. Everyone cares about the United States.

And that’s only considering the largest players. Every US national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the price of producing and distributing propaganda, bringing that capability within the budget of many more countries.

A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the US. They talked about their expectations regarding election interference in 2024. They expected the usual players—Russia, China, and Iran—and a significant new one: “domestic actors.” That is a direct result of this reduced cost.

Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups, and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal, and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.

Disinformation is an arms race. Both attackers and defenders have improved, and the world of social media itself has changed. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is a newer platform that is controlled by China and more suitable for short, provocative videos—ones that AI makes much easier to produce. And the current crop of generative AIs is being connected to tools that will make content distribution easier as well.

Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says—or amplifies—something political. These persona bots, as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.

That’s just one scenario. The military officers in Russia, China, and elsewhere in charge of election interference are likely to have their best people thinking of others. And their tactics are likely to be much more sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before rolling them out at scale. When that happens, it’s important to be able to fingerprint these tactics. Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now.
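As a purely illustrative sketch of what “fingerprinting and cataloging” could mean in practice (the features, values, and threshold below are invented for the example, not drawn from any real catalog), an observed campaign might be encoded as a feature vector and matched against previously cataloged ones:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-campaign features: posting cadence, share of newly
# created accounts, fraction of near-duplicate text, cross-platform spread.
CATALOG = {
    "campaign_A": [0.9, 0.7, 0.2, 0.1],
    "campaign_B": [0.1, 0.2, 0.8, 0.9],
}

def closest_known(fingerprint, catalog=CATALOG, threshold=0.85):
    """Match a newly observed campaign against cataloged fingerprints;
    returns (campaign_id, score), or (None, best_score) if nothing matches."""
    name, vec = max(catalog.items(), key=lambda kv: cosine(fingerprint, kv[1]))
    score = cosine(fingerprint, vec)
    return (name, score) if score >= threshold else (None, score)
```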

In the computer security world, researchers recognize that sharing methods of attack and their effectiveness is the only way to build strong defensive systems. The same kind of thinking also applies to these information campaigns: The more that researchers study what techniques are being employed in distant countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016. I believe the US needs to have efforts in place to fingerprint and identify AI-produced propaganda in Taiwan, where a presidential candidate claims a deepfake audio recording has defamed him, and other places. Otherwise, we’re not going to see them when they arrive here. Unfortunately, researchers are instead being targeted and harassed.

Maybe this will all turn out okay. There have been some important democratic elections in the generative AI era with no significant disinformation issues: primaries in Argentina, first-round elections in Ecuador, and national elections in Thailand, Turkey, Spain, and Greece. But the sooner we know what to expect, the better we can deal with what comes.

This essay previously appeared in The Conversation.

Undermining Democracy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/undermining-democracy.html

Last Thursday, Rudy Giuliani, a Trump campaign lawyer, alleged a widespread voting conspiracy involving Venezuela, Cuba, and China. Another lawyer, Sidney Powell, argued that Mr. Trump won in a landslide, that the entire election in swing states should be overturned, and that the legislatures should make sure that the electors are selected for the president.

The Republican National Committee swung in to support her false claim that Mr. Trump won in a landslide, while Michigan election officials have tried to stop the certification of the vote.

It is wildly unlikely that their efforts can block Joe Biden from becoming president. But they may still do lasting damage to American democracy for a shocking reason: the moves have come from trusted insiders.

American democracy’s vulnerability to disinformation has been very much in the news since the Russian disinformation campaign in 2016. The fear is that outsiders, whether they be foreign or domestic actors, will undermine our system by swaying popular opinion and election results.

This is half right. American democracy is an information system, in which the information isn’t bits and bytes but citizens’ beliefs. When people’s faith in the democratic system is undermined, democracy stops working. But as information security specialists know, outsider attacks are hard. Russian trolls, who don’t really understand how American politics works, have actually had a difficult time subverting it.

The time to really worry is when insiders go bad. And that is precisely what is happening in the wake of the 2020 presidential election. In traditional information systems, the insiders are the people who have both detailed knowledge and high level access, allowing them to bypass security measures and more effectively subvert systems. In democracy, the insiders aren’t just the officials who manage voting but also the politicians who shape what people believe about politics. For four years, Donald Trump has been trying to dismantle our shared beliefs about democracy. And now, his fellow Republicans are helping him.

Democracy works when we all expect that votes will be fairly counted, and defeated candidates leave office. As the democratic theorist Adam Przeworski puts it, democracy is “a system in which parties lose elections.” These beliefs can break down when political insiders make bogus claims about general fraud, trying to cling to power when the election has gone against them.

It’s obvious how these kinds of claims damage Republican voters’ commitment to democracy. They will think that elections are rigged by the other side and will not accept the judgment of voters when it goes against their preferred candidate. Their belief that the Biden administration is illegitimate will justify all sorts of measures to prevent it from functioning.

It’s less obvious that these strategies affect Democratic voters’ faith in democracy, too. Democrats are paying attention to Republicans’ efforts to stop the votes of Democratic voters, and especially Black Democratic voters, from being counted. They, too, are likely to have less trust in elections going forward, and with good reason. They will expect that Republicans will try to rig the system against them. Mr. Trump is having a hard time winning unfairly, because he has lost in several states. But what if Mr. Biden’s margin of victory depended only on one state? What if something like that happens in the next election?

The real fear is that this will lead to a spiral of distrust and destruction. Republicans, who are increasingly committed to the notion that the Democrats are committing pervasive fraud, will do everything that they can to win power and to cling to power when they can get it. Democrats, seeing what Republicans are doing, will try to entrench themselves in turn. They suspect that if the Republicans really win power, they will not ever give it back. The claims of Republicans like Senator Mike Lee of Utah that America is not really a democracy might become a self-fulfilling prophecy.

More likely, this spiral will not directly lead to the death of American democracy. The U.S. federal system of government is complex and hard for any one actor or coalition to dominate completely. But it may turn American democracy into an unworkable confrontation between two hostile camps, each unwilling to make any concession to its adversary.

We know how to make voting itself more open and more secure; the literature is filled with vital and important suggestions. The more difficult problem is this. How do you shift the collective belief among Republicans that elections are rigged?

Political science suggests that partisans are more likely to be persuaded by fellow partisans, like Brad Raffensperger, the Republican secretary of state in Georgia, who said that election fraud wasn’t a big problem. But this would only be effective if other well-known Republicans supported him.

Public outrage, alternatively, can sometimes force officials to back down, as when people crowded in to denounce the Michigan Republican election officials who were trying to deny certification of their votes.

The fundamental problem, however, is Republican insiders who have convinced themselves that to keep and hold power, they need to trash the shared beliefs that hold American democracy together.

They may have long-term worries about the consequences, but they’re unlikely to do anything about those worries in the near-term unless voters, wealthy donors or others whom they depend on make them pay short-term costs.

This essay was written with Henry Farrell, and previously appeared in the New York Times.

2020 Was a Secure Election

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/2020-was-a-secure-election.html

Over at Lawfare: “2020 Is An Election Security Success Story (So Far).”

What’s more, the voting itself was remarkably smooth. It was only a few months ago that professionals and analysts who monitor election administration were alarmed at how badly unprepared the country was for voting during a pandemic. Some of the primaries were disasters. There were not clear rules in many states for voting by mail or sufficient opportunities for voting early. There was an acute shortage of poll workers. Yet the United States saw unprecedented turnout over the last few weeks. Many states handled voting by mail and early voting impressively and huge numbers of volunteers turned up to work the polls. Large amounts of litigation before the election clarified the rules in every state. And for all the president’s griping about the counting of votes, it has been orderly and apparently without significant incident. The result was that, in the midst of a pandemic that has killed 230,000 Americans, record numbers of Americans voted — and voted by mail — and those votes are almost all counted at this stage.

On the cybersecurity front, there is even more good news. Most significantly, there was no serious effort to target voting infrastructure. After voting concluded, the director of the Cybersecurity and Infrastructure Security Agency (CISA), Chris Krebs, released a statement, saying that “after millions of Americans voted, we have no evidence any foreign adversary was capable of preventing Americans from voting or changing vote tallies.” Krebs pledged to “remain vigilant for any attempts by foreign actors to target or disrupt the ongoing vote counting and final certification of results,” and no reports have emerged of threats to tabulation and certification processes.

A good summary.