All posts by Bruce Schneier

On the Catastrophic Risk of AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.

I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said.

In my talk at the RSA Conference last month, I talked about the power level of our species becoming too great for our systems of governance. Talking about those systems, I said:

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

That was what I was thinking about when I agreed to sign on to the statement: “Pandemics, nuclear weapons, AI—yeah, I would put those three in the same bucket. Surely we can spend the same effort on AI risk as we do on future pandemics. That’s a really low bar.” Clearly I should have focused on the word “extinction,” and not the relative comparisons.

Seth Lazar, Jeremy Howard, and Arvind Narayanan wrote:

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there­—ne that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

I agree with that, and with their follow up:

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom.

This is what I wrote in Click Here to Kill Everybody (2018):

I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks arealready prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.

Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.

I do think we should worry about catastrophic AI and robotics risk. It’s the fact that they affect the world in a direct, physical manner—and that they’re vulnerable to class breaks.

(Other things to read: David Chapman is good on scary AI. And Kieran Healy is good on the statement.)

Okay, enough. I should also learn not to sign on to group statements.

Chinese Hacking of US Critical Infrastructure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/chinese-hacking-of-us-critical-infrastructure.html

Everyone is writing about an interagency and international report on Chinese hacking of US critical infrastructure.

Lots of interesting details about how the group, called Volt Typhoon, accesses target networks and evades detection.

Brute-Forcing a Fingerprint Reader

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/brute-forcing-a-fingerprint-reader.html

It’s neither hard nor expensive:

Unlike password authentication, which requires a direct match between what is inputted and what’s stored in a database, fingerprint authentication determines a match using a reference threshold. As a result, a successful fingerprint brute-force attack requires only that an inputted image provides an acceptable approximation of an image in the fingerprint database. BrutePrint manipulates the false acceptance rate (FAR) to increase the threshold so fewer approximate images are accepted.

BrutePrint acts as an adversary in the middle between the fingerprint sensor and the trusted execution environment and exploits vulnerabilities that allow for unlimited guesses.

In a BrutePrint attack, the adversary removes the back cover of the device and attaches the $15 circuit board that has the fingerprint database loaded in the flash storage. The adversary then must convert the database into a fingerprint dictionary that’s formatted to work with the specific sensor used by the targeted phone. The process uses a neural-style transfer when converting the database into the usable dictionary. This process increases the chances of a match.

With the fingerprint dictionary in place, the adversary device is now in a position to input each entry into the targeted phone. Normally, a protection known as attempt limiting effectively locks a phone after a set number of failed login attempts are reached. BrutePrint can fully bypass this limit in the eight tested Android models, meaning the adversary device can try an infinite number of guesses. (On the two iPhones, the attack can expand the number of guesses to 15, three times higher than the five permitted.)

The bypasses result from exploiting what the researchers said are two zero-day vulnerabilities in the smartphone fingerprint authentication framework of virtually all smartphones. The vulnerabilities—­one known as CAMF (cancel-after-match fail) and the other MAL (match-after-lock)—result from logic bugs in the authentication framework. CAMF exploits invalidate the checksum of transmitted fingerprint data, and MAL exploits infer matching results through side-channel attacks.

Depending on the model, the attack takes between 40 minutes and 14 hours.

Also:

The ability of BrutePrint to successfully hijack fingerprints stored on Android devices but not iPhones is the result of one simple design difference: iOS encrypts the data, and Android does not.

Other news articles. Research paper.

Expeditionary Cyberspace Operations

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/expeditionary-cyberspace-operations.html

Cyberspace operations now officially has a physical dimension, meaning that the United States has official military doctrine about cyberattacks that also involve an actual human gaining physical access to a piece of computing infrastructure.

A revised version of Joint Publication 3-12 Cyberspace Operations—published in December 2022 and while unclassified, is only available to those with DoD common access cards, according to a Joint Staff spokesperson—officially provides a definition for “expeditionary cyberspace operations,” which are “[c]yberspace operations that require the deployment of cyberspace forces within the physical domains.”

[…]

“Developing access to targets in or through cyberspace follows a process that can often take significant time. In some cases, remote access is not possible or preferable, and close proximity may be required, using expeditionary [cyber operations],” the joint publication states. “Such operations are key to addressing the challenge of closed networks and other systems that are virtually isolated. Expeditionary CO are often more regionally and tactically focused and can include units of the CMF or special operations forces … If direct access to the target is unavailable or undesired, sometimes a similar or partial effect can be created by indirect access using a related target that has higher-order effects on the desired target.”

[…]

“Allowing them to support [combatant commands] in this way permits faster adaptation to rapidly changing needs and allows threats that initially manifest only in one [area of responsibility] to be mitigated globally in near real time. Likewise, while synchronizing CO missions related to achieving [combatant commander] objectives, some cyberspace capabilities that support this activity may need to be forward-deployed; used in multiple AORs simultaneously; or, for speed in time-critical situations, made available via reachback,” it states. “This might involve augmentation or deployment of cyberspace capabilities to forces already forward or require expeditionary CO by deployment of a fully equipped team of personnel and capabilities.”

On the Poisoning of LLMs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/on-the-poisoning-of-llms.html

Interesting essay on the poisoning of LLMs—ChatGPT in particular:

Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.

They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.

Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.

Indiana, Iowa, and Tennessee Pass Comprehensive Privacy Laws

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/indiana-iowa-and-tennessee-pass-comprehensive-privacy-laws.html

It’s been a big month for US data privacy. Indiana, Iowa, and Tennessee all passed state privacy laws, bringing the total number of states with a privacy law up to eight. No private right of action in any of those, which means it’s up to the states to enforce the laws.

Credible Handwriting Machine

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/credible-handwriting-machine.html

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation ­ something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as g-code for a pen plotter. But instead, Devadeth created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.

Watch the video.

My guess is that this is another detection/detection avoidance arms race.

Google Is Not Deleting Old YouTube Videos

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/google-is-not-deleting-old-youtube-videos.html

Google has backtracked on its plan to delete inactive YouTube videos—at least for now. Of course, it could change its mind anytime it wants.

It would be nice if this would get people to think about the vulnerabilities inherent in letting a for-profit monopoly decide what of human creativity is worth saving.

Friday Squid Blogging: Peruvian Squid-Fishing Regulation Drives Chinese Fleets Away

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/friday-squid-blogging-peruvian-squid-fishing-regulation-drives-chinese-fleets-away.html

A Peruvian oversight law has the opposite effect:

Peru in 2020 began requiring any foreign fishing boat entering its ports to use a vessel monitoring system allowing its activities to be tracked in real time 24 hours a day. The equipment, which tracks a vessel’s geographic position and fishing activity through a proprietary satellite communication system, sought to provide authorities with visibility into several hundred Chinese squid vessels that every year amass off the west coast of South America.

[…]

Instead of increasing oversight, the new Peruvian regulations appear to have driven Chinese ships away from the country’s ports—and kept crews made up of impoverished Filipinos and Indonesians at sea for longer periods, exposing them to abuse, according to new research published by Peruvian fishing consultancy Artisonal.

Two things to note here. One is that the Peruvian law was easy to hack, which China promptly did. The second is that no nation-state has the proper regulatory footprint to manage the world’s oceans. These are global issues, and need global solutions. Of course, our current society is terrible at global solutions—to anything.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Microsoft Secure Boot Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/microsoft-secure-boot-bug.html

Microsoft is currently patching a zero-day Secure-Boot bug.

The BlackLotus bootkit is the first-known real-world malware that can bypass Secure Boot protections, allowing for the execution of malicious code before your PC begins loading Windows and its many security protections. Secure Boot has been enabled by default for over a decade on most Windows PCs sold by companies like Dell, Lenovo, HP, Acer, and others. PCs running Windows 11 must have it enabled to meet the software’s system requirements.

Microsoft says that the vulnerability can be exploited by an attacker with either physical access to a system or administrator rights on a system. It can affect physical PCs and virtual machines with Secure Boot enabled.

That’s important. This is a nasty vulnerability, but it takes some work to exploit it.

The problem with the patch is that it breaks backwards compatibility: “…once the fixes have been enabled, your PC will no longer be able to boot from older bootable media that doesn’t include the fixes.”

And:

Not wanting to suddenly render any users’ systems unbootable, Microsoft will be rolling the update out in phases over the next few months. The initial version of the patch requires substantial user intervention to enable—you first need to install May’s security updates, then use a five-step process to manually apply and verify a pair of “revocation files” that update your system’s hidden EFI boot partition and your registry. These will make it so that older, vulnerable versions of the bootloader will no longer be trusted by PCs.

A second update will follow in July that won’t enable the patch by default but will make it easier to enable. A third update in “first quarter 2024” will enable the fix by default and render older boot media unbootable on all patched Windows PCs. Microsoft says it is “looking for opportunities to accelerate this schedule,” though it’s unclear what that would entail.

So it’ll be almost a year before this is completely fixed.

Micro-Star International Signing Key Stolen

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/micro-star-international-signing-key-stolen.html

Micro-Star International—aka MSI—had its UEFI signing key stolen last month.

This raises the possibility that the leaked key could push out updates that would infect a computer’s most nether regions without triggering a warning. To make matters worse, Matrosov said, MSI doesn’t have an automated patching process the way Dell, HP, and many larger hardware makers do. Consequently, MSI doesn’t provide the same kind of key revocation capabilities.

Delivering a signed payload isn’t as easy as all that. “Gaining the kind of control required to compromise a software build system is generally a non-trivial event that requires a great deal of skill and possibly some luck.” But it just got a whole lot easier.

Ted Chiang on the Risks of AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/ted-chiang-on-the-risks-of-ai.html

Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans­—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

EDITED TO ADD: Ted Chiang’s previous essay, “ChatGPT Is a Blurry JPEG of the Web” is also worth reading.

Building Trustworthy AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/building-trustworthy-ai.html

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4—one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.

Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.

Today’s AIs aren’t up for the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.

The transition from awe and eager utilization to suspicion to disillusionment is a well worn one in the technology sector. Twenty years ago, Google’s search engine rapidly rose to monopolistic dominance because of its transformative information retrieval capability. Over time, the company’s dependence on revenue from search advertising led them to degrade that capability. Today, many observers look forward to the death of the search paradigm entirely. Amazon has walked the same path, from honest marketplace to one riddled with lousy products whose vendors have paid to have the company show them to you. We can do better than this. If each of us are going to have an AI assistant helping us with essential activities daily and even advocating on our behalf, we each need to know that it has our interests in mind. Building trustworthy AI will require systemic change.

First, a trustworthy AI system must be controllable by the user. That means that the model should be able to run on a user’s owned electronic devices (perhaps in a simplified form) or within a cloud service that they control. It should show the user how it responds to them, such as when it makes queries to search the web or external services, when it directs other software to do things like sending an email on a user’s behalf, or modifies the user’s prompts to better express what the company that made it thinks the user wants. It should be able to explain its reasoning to users and cite its sources. These requirements are all well within the technical capabilities of AI systems.

Furthermore, users should be in control of the data used to train and fine-tune the AI system. When modern LLMs are built, they are first trained on massive, generic corpora of textual data typically sourced from across the Internet. Many systems go a step further by fine-tuning on more specific datasets purpose built for a narrow application, such as speaking in the language of a medical doctor, or mimicking the manner and style of their individual user. In the near future, corporate AIs will be routinely fed your data, probably without your awareness or your consent. Any trustworthy AI system should transparently allow users to control what data it uses.

Many of us would welcome an AI-assisted writing application fine tuned with knowledge of which edits we have accepted in the past and which we did not. We would be more skeptical of a chatbot knowledgeable about which of their search results led to purchases and which did not.

You should also be informed of what an AI system can do on your behalf. Can it access other apps on your phone, and the data stored with them? Can it retrieve information from external sources, mixing your inputs with details from other places you may or may not trust? Can it send a message in your name (hopefully based on your input)? Weighing these types of risks and benefits will become an inherent part of our daily lives as AI-assistive tools become integrated with everything we do.

Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage by them.

In the world’s first few months of widespread use of models like ChatGPT, we’ve learned a lot about how AI creates risks for users. Everyone has heard by now that LLMs “hallucinate,” meaning that they make up “facts” in their outputs, because their predictive text generation systems are not constrained to fact check their own emanations. Many users learned in March that information they submit as prompts to systems like ChatGPT may not be kept private after a bug revealed users’ chats. Your chat histories are stored in systems that may be insecure.

Researchers have found numerous clever ways to trick chatbots into breaking their safety controls; these work largely because many of the “rules” applied to these systems are soft, like instructions given to a person, rather than hard, like coded limitations on a product’s functions. It’s as if we are trying to keep AI safe by asking it nicely to drive carefully, a hopeful instruction, rather than taking away its keys and placing definite constraints on its abilities.

These risks will grow as companies grant chatbot systems more capabilities. OpenAI is providing developers wide access to build tools on top of GPT: tools that give their AI systems access to your email, to your personal account information on websites, and to computer code. While OpenAI is applying safety protocols to these integrations, it’s not hard to imagine those being relaxed in a drive to make the tools more useful. It seems likewise inevitable that other companies will come along with less bashful strategies for securing AI market share.

Just like with any human, building trust with an AI will be hard won through interaction over time. We will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions. Building trust in that way is only possible if these systems are transparent about their capabilities, what inputs they use and when they will share them, and whose interests they are evolving to represent.

This essay was written with Nathan Sanders, and previously appeared on Gizmodo.com.

FBI Disables Russian Malware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/fbi-disables-russian-malware.html

Reuters is reporting that the FBI “had identified and disabled malware wielded by Russia’s FSB security service against an undisclosed number of American computers, a move they hoped would deal a death blow to one of Russia’s leading cyber spying programs.”

The headline says that the FBI “sabotaged” the malware, which seems to be wrong.

Presumably we will learn more soon.

EDITED TO ADD: New York Times story.

EDITED TO ADD: Maybe “sabotaged” is the right word. The FBI hacked the malware so that it disabled itself.

Despite the bravado of its developers, Snake is among the most sophisticated pieces of malware ever found, the FBI said. The modular design, custom encryption layers, and high-caliber quality of the code base have made it hard if not impossible for antivirus software to detect. As FBI agents continued to monitor Snake, however, they slowly uncovered some surprising weaknesses. For one, there was a critical cryptographic key with a prime length of just 128 bits, making it vulnerable to factoring attacks that expose the secret key. This weak key was used in Diffie-Hellman key exchanges that allowed each infected machine to have a unique key when communicating with another machine.

PIPEDREAM Malware against Industrial Control Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/pipedream-malware-against-industrial-control-systems.html

Another nation-state malware, Russian in origin:

In the early stages of the war in Ukraine in 2022, PIPEDREAM, a known malware was quietly on the brink of wiping out a handful of critical U.S. electric and liquid natural gas sites. PIPEDREAM is an attack toolkit with unmatched and unprecedented capabilities developed for use against industrial control systems (ICSs).

The malware was built to manipulate the network communication protocols used by programmable logic controllers (PLCs) leveraged by two critical producers of PLCs for ICSs within the critical infrastructure sector, Schneider Electric and OMRON.

CISA advisory. Wired article.

AI Hacking Village at DEF CON This Year

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/ai-hacking-village-at-def-con-this-year.html

At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will all open up their models for attack.

The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training for AI applications. Participants will be given laptops to use to attack the models. Any bugs discovered will be disclosed using industry-standard responsible disclosure practices.

Friday Squid Blogging: “Mediterranean Beef Squid” Hoax

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/friday-squid-blogging-mediterranean-beef-squid-hoax.html

The viral video of the “Mediterranean beef squid”is a hoax.

It’s not even a deep fake; it’s a plastic toy.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.