Tag Archives: ChatGPT

On the Poisoning of LLMs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/on-the-poisoning-of-llms.html

Interesting essay on the poisoning of LLMs—ChatGPT in particular:

Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.

They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.

Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.

Credible Handwriting Machine

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/credible-handwriting-machine.html

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation, something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as G-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.

Watch the video.

My guess is that this is another detection/detection avoidance arms race.

Introducing Cursor: the Cloudflare AI Assistant

Post Syndicated from Ricky Robinett original http://blog.cloudflare.com/introducing-cursor-the-ai-assistant-for-docs/

Today we’re excited to be launching Cursor – our experimental AI assistant, trained to answer questions about Cloudflare’s Developer Platform. This is just the first step in our journey to help developers build in the fastest way possible using AI, so we wanted to take the opportunity to share our vision for a generative developer experience.

Whenever a new, disruptive technology comes along, it’s not instantly clear what the native way to interact with that technology will be.

However, if you’ve played around with Large Language Models (LLMs) such as ChatGPT, it’s easy to get the feeling that this is something that’s going to change the way we work. The question is: how? While this technology already feels super powerful, we’re still in its relatively early days.

Developer Week is all about meeting developers where they are, and this is one of the things that’s going to change just that: where developers are, and how they build code. We’re already seeing the beginnings of a shift in how developers write code, and we’re adapting to it. We wanted to share with you how we’re thinking about it, what’s on the horizon, and some of the large bets to come.

How is AI changing developer experience?

If there’s one big thing we can learn from the exploding success of ChatGPT, it’s the importance of pairing technology with the right interface. GPT-3, the technology powering ChatGPT, has been around for some years now, but mass adoption didn’t arrive until ChatGPT made the technology accessible to everyone.

Since the primary customers of our platform are developers, it’s on us to find the right interfaces to help developers move fast on our platform, and we believe AI can unlock unprecedented developer productivity. And we’re still at the beginning of that journey.

Wave 1: AI generated content

One of the things ChatGPT is exceptionally good at is generating new content and articles. If you’re a bootstrapped developer relations team, your first day playing around with ChatGPT may have felt like hitting the productivity jackpot. With a simple inquiry, ChatGPT can generate, in a few seconds, a tutorial that would otherwise have taken hours, if not days, to write.

This content still needs to be tested — do the code examples work? Does the order make sense? While it might not get everything right, it’s a massive productivity boost, allowing a small team to multiply their content output.

In terms of developer experience, examples and tutorials are crucial for developers, especially as they start out with a new technology, or seek validation on a path they’re exploring.

However, AI-generated content is always going to be limited to, well, how much of it you generated. To compare it to the newspaper, this content is still one-size-fits-all. If, as a developer, you stray ever so slightly off the beaten path (choose a different framework than the one the tutorial suggests, or a different database), you’re still left to put the pieces together, navigating tens of open tabs in order to stitch together your application.

If this content is already being generated by AI, however, why not just go straight to the source, and allow developers to generate their own, personal guides?

Wave 2: Q&A assistants

Since developers love to try out new technologies, it’s no surprise that they are among the early adopters of tools such as ChatGPT. Many developers are already starting to build applications alongside their trusted bard, ChatGPT.

Rather than using generated content, why not just go straight to the source, and ask ChatGPT to generate something that’s tailored specifically for you?

There’s one tiny problem: the information is not always up to date, which is why plugins are going to become a super important way to interact.

But what about someone who’s already in Cloudflare’s docs? Here, you want a native experience where you can ask questions and receive answers. If you have a question, why spend time searching the docs when you can just ask and receive an answer?

Wave 3: generative experiences

In the examples above, you were still relying on switching back and forth between a dedicated AI interface and the problem at hand. In one tab you’re asking questions, while in another, you’re implementing the answers.

But taking things another step further, what if AI just met you where you were? In terms of developer experience, we’re already starting to see this in the authoring phase. Tools like GitHub Copilot help developers generate boilerplate code and tests, allowing developers to focus on more complex tasks like designing architecture and algorithms.

Sometimes, however, the first iteration the AI comes up with might not match what you, the developer, had in mind, which is why we’re starting to experiment with a flow-based generative approach, where you can ask the AI to generate several versions and build out your design with the one that best matches your expectations.

The possibilities are endless, enabling developers to start applications from prompts rather than pre-generated templates.

We’re excited for all the possibilities AI will unlock to make developers more productive than ever, and we’d love to hear from you about how AI is changing the way you build applications.

We’re also excited to share our first steps into the realm of AI-driven developer experience with the release of our first two ChatGPT plugins, and by welcoming a new member of our team: Cursor, our docs AI assistant.

Our first milestone to AI driven UX: AI Assisted Docs

As the first step towards using AI to streamline our developer experience, we’re excited to introduce a new addition to our documentation to help you get answers as quickly as possible.

How to use Cursor

Here’s a sample exchange with Cursor:

[Screenshot: a sample exchange with Cursor]

You’ll notice that when you ask a question, it will respond with two pieces of information: a text-based response answering your question, and links to relevant pages in our documentation that can help you go further.

Here’s what happens when we ask “What video formats does Stream support?”.

If you were looking through our examples, you may not immediately realize that this specific example uses both Workers and R2.

In its current state, you can think of Cursor as an assistant that helps you learn about our products and navigate our documentation in a conversational way. We’re labeling Cursor as experimental because these are the very beginning stages of what we feel a Cloudflare AI assistant could do to help developers. It is helpful, but not perfect. To deal with its lack of perfection, we took the approach of having it do fewer things better. You’ll find there are many things it isn’t good at today.

How we built Cursor

Under the hood, Cursor is powered by Workers, Durable Objects, OpenAI, and the Cloudflare developer docs. It uses the same backend that we’re using to power our ChatGPT Docs plugin, and you can read about that here.

It uses the “Search-Ask” method; stay tuned for more details on how you can build your own.
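
In the meantime, here is a minimal sketch of the “Ask” half of that pattern in a Worker, assuming the relevant documentation snippets have already been retrieved (the “Search” half is sketched in the Docs plugin section later on this page). The model choice, prompt wording, and names are illustrative assumptions, not Cursor’s actual code:

// Hypothetical sketch: answer a question using doc snippets retrieved earlier
async function ask(question: string, snippets: string[], env: { OPENAI_API_KEY: string }): Promise<string> {
  const resp = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo', // assumption; any chat model with enough context works
      messages: [
        {
          role: 'system',
          content:
            'Answer using only the documentation snippets below. ' +
            'If the answer is not in them, say you do not know.\n\n' +
            snippets.join('\n---\n')
        },
        { role: 'user', content: question }
      ]
    })
  })

  const json = (await resp.json()) as { choices: { message: { content: string } }[] }
  return json.choices[0].message.content
}

Grounding the model in retrieved snippets is what lets an assistant like this answer from current documentation, rather than from whatever the model memorized during training.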

A sneak peek into the future

We’re already thinking about the future, and we wanted to give you a small preview of what we think this might look like.

With this type of interface, developers could use a UI to have an AI generate code and then link that code together visually, whether with other code generated by the AI or with code they’ve written themselves. We’ll keep exploring interfaces that we hope will help you all build more efficiently, and we can’t wait to get these new interfaces into your hands.

We need your help

Our hope is to quickly update and iterate on how Cursor works as developers around the world use it. As you’re using it to explore our documentation, join us on Discord to let us know your experience.

Query Cloudflare Radar and our docs using ChatGPT plugins

Post Syndicated from Ricky Robinett original http://blog.cloudflare.com/cloudflare-chatgpt-plugins/

When OpenAI launched ChatGPT plugins in alpha we knew that it opened the door for new possibilities for both Cloudflare users and developers building on Cloudflare. After the launch, our team quickly went to work seeing what we could build, and today we’re very excited to share with you two new Cloudflare ChatGPT plugins – the Cloudflare Radar plugin and the Cloudflare Docs plugin.

The Cloudflare Radar plugin allows you to talk to ChatGPT about real-time Internet patterns powered by Cloudflare Radar.

The Cloudflare Docs plugin allows developers to use ChatGPT to help them write and build Cloudflare applications with the most up-to-date information from our documentation. It also serves as an open source example of how to build a ChatGPT plugin with Cloudflare Workers.

Let’s do a deeper dive into how each of these plugins work and how we built them.

Cloudflare Radar ChatGPT plugin

When ChatGPT introduced plugins, one of their use cases was retrieving real-time data from third-party applications and their APIs and letting users ask relevant questions using natural language.

Cloudflare Radar has lots of data about how people use the Internet, a well-documented public API, an OpenAPI specification, and it’s entirely built on top of Workers, which gives us lots of flexibility for improvements and extensibility. We had all the building blocks to create a ChatGPT plugin quickly. So, that's what we did.

We added an OpenAI manifest endpoint which describes what the plugin does, some branding assets, and an enriched OpenAPI schema to tell ChatGPT how to use our data APIs. The longest part of our work was fine-tuning the schema with good descriptions (written in natural language, obviously) and examples of how to query our endpoints.

Amusingly, the descriptions ended up much improved by the need to explain the API endpoints to ChatGPT. An interesting side effect is that this benefits us humans also.

{
    "/api/v1/http/summary/ip_version": {
        "get": {
            "operationId": "get_SummaryIPVersion",
            "parameters": [
                {
                    "description": "Date range from today minus the number of days or weeks specified in this parameter, if not provided always send 14d in this parameter.",
                    "required": true,
                    "schema": {
                        "type": "string",
                        "example": "14d",
                        "enum": ["14d","1d","2d","7d","28d","12w","24w","52w"]
                    },
                    "name": "dateRange",
                    "in": "query"
                }
            ]
        }
    }
}

Luckily, itty-router-openapi, an easy and compact OpenAPI 3 schema generator and validator for Cloudflare Workers that we built and open-sourced when we launched Radar 2.0, made it really easy for us to add the missing parts.

import { OpenAPIRouter } from '@cloudflare/itty-router-openapi'

const router = OpenAPIRouter({
  aiPlugin: {
    name_for_human: 'Cloudflare Radar API',
    name_for_model: 'cloudflare_radar',
    description_for_human: "Get data insights from Cloudflare's point of view.",
    description_for_model:
      "Plugin for retrieving the data based on Cloudflare Radar's data. Use it whenever a user asks something that might be related to Internet usage, eg. outages, Internet traffic, or Cloudflare Radar's data in particular.",
    contact_email: '[email protected]',
    legal_info_url: 'https://www.cloudflare.com/website-terms/',
    logo_url: 'https://cdn-icons-png.flaticon.com/512/5969/5969044.png',
  },
})

We incorporated our changes into itty-router-openapi, and now it supports the OpenAI manifest and route, and a few other options that make it possible for anyone to build their own ChatGPT plugin on top of Workers too.

The Cloudflare Radar ChatGPT plugin is available to non-free ChatGPT users or anyone on OpenAI’s plugins waitlist. To use it, simply open ChatGPT, go to the Plugin store, and install Cloudflare Radar.

Once installed, you can talk to it and ask questions about our data using natural language.

When you add plugins to your account, ChatGPT will prioritize using their data based on what the language model understands from the human-readable descriptions found in the manifest and OpenAPI schema. If ChatGPT doesn't think your prompt can benefit from what the plugin provides, then it falls back to its standard capabilities.

Another interesting thing about plugins is that they extend ChatGPT's limited knowledge of the world and events after 2021 and can provide fresh insights based on recent data.

Here are a few examples to get you started:

"What is the percentage distribution of traffic per TLS protocol version?"

"What's the HTTP protocol version distribution in Portugal?"

Now that ChatGPT has context, you can add some variants, like switching the country and the date range.

“How about the US in the last six months?”

You can also combine multiple topics (ChatGPT will make multiple API calls behind the scenes and combine the results in the best possible way).

“How do HTTP protocol versions compare with TLS protocol versions?”

Out of ideas? Ask it “What can I ask the Radar plugin?”, or “Give me a random insight”.

Be creative, too; it understands a lot about our data, and we keep improving it. You can also add date or country filters using natural language in your prompts.

Cloudflare Docs ChatGPT plugin

The Cloudflare Docs plugin is a ChatGPT Retrieval Plugin that lets you access the most up-to-date knowledge from our developer documentation using ChatGPT. This means that if you’re using ChatGPT to assist you with building on Cloudflare, the answers you get or the code it generates will be informed by current best practices and the information in our docs. You can set up and run the Cloudflare Docs ChatGPT plugin by following the README in the example repo.

The plugin was built entirely on Workers and uses KV as a vector store. It can also keep its index up-to-date using Cron Triggers, Queues and Durable Objects.

The plugin is a Worker that responds to POST requests from ChatGPT to a /query endpoint. When a query comes in, the Worker converts the query text into an embedding vector via the OpenAI embeddings API and uses this to find, and return, the most relevant document snippets from Cloudflare’s developer documentation.

The way this is achieved is by first converting every document in Cloudflare’s developer documentation on GitHub into embedding vectors (again using OpenAI’s API) and storing them in KV. This storage format allows you to find semantically similar content by doing a similarity search (we use cosine similarity), where two pieces of text that are similar in meaning will result in the two embedding vectors having a high similarity score. Cloudflare’s entire developer documentation compresses to under 5MB when converted to embedding vectors, so fetching these from KV is very quick. We’ve also explored building larger vector stores on Workers, as can be seen in this demo of 1 million vectors stored on Durable Object storage. We’ll be releasing more open source libraries to support these vector store use cases in the near future.
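
As a rough illustration of what that looks like in code, here is a minimal sketch of the search step, assuming the embedding vectors live in KV under a single key as an array of { path, text, vector } records; the binding name, key name, and record shape are illustrative assumptions, not the plugin's actual layout:

// Hypothetical shapes for the sketch
interface Env {
  OPENAI_API_KEY: string
  DOCS_INDEX: KVNamespace
}

interface DocSnippet {
  path: string     // e.g. the docs file the snippet came from
  text: string     // the snippet content
  vector: number[] // its embedding
}

// Cosine similarity: dot(a, b) / (|a| * |b|); higher means closer in meaning
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Turn text into an embedding vector via the OpenAI embeddings API
async function embed(text: string, env: Env): Promise<number[]> {
  const resp = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ model: 'text-embedding-ada-002', input: text })
  })
  const json = (await resp.json()) as { data: { embedding: number[] }[] }
  return json.data[0].embedding
}

// Embed the query, score every stored snippet, and return the top matches
async function search(query: string, topK: number, env: Env): Promise<DocSnippet[]> {
  const queryVector = await embed(query, env)
  const index = (await env.DOCS_INDEX.get<DocSnippet[]>('embeddings', 'json')) ?? []
  return index
    .map(doc => ({ doc, score: cosineSimilarity(queryVector, doc.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(entry => entry.doc)
}

A brute-force scan like this is viable precisely because the whole index is under 5MB; a much larger corpus would call for a dedicated vector store, like the Durable Objects demo mentioned above.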

So ChatGPT will query the plugin when it believes the user’s question is related to Cloudflare’s developer tools, and the plugin will return a list of up-to-date information snippets directly from our documentation. ChatGPT can then decide how to use these snippets to best answer the user’s question.

The plugin also includes a “Scheduler” Worker that can periodically refresh the documentation embedding vectors, so that the information is always up-to-date. This is advantageous because ChatGPT’s own knowledge has a cutoff of September 2021 – so it’s not aware of changes in documentation, or new Cloudflare products.

The Scheduler Worker is triggered by a Cron Trigger, on a schedule you can set (eg, hourly), where it will check which content has changed since it last ran via GitHub’s API. It then sends these document paths in messages to a Queue to be processed. Workers will batch process these messages – for each message, the content is fetched from GitHub, and then turned into embedding vectors via OpenAI’s API. A Durable Object is used to coordinate all the Queue processing so that when all the batches have finished processing, the resulting embedding vectors can be combined and stored in KV, ready for querying by the plugin.
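
Here is a minimal sketch of that pipeline's skeleton, using the standard Workers scheduled and queue handlers; the binding names, message shape, and helper functions are illustrative assumptions, and the Durable Object coordination is only hinted at in comments:

interface Env {
  REFRESH_QUEUE: Queue<{ path: string }>
}

// Hypothetical helpers, stubbed out for the sketch
async function changedDocPaths(): Promise<string[]> {
  // Compare against the last run via GitHub's API and list changed doc files
  return []
}
async function fetchDocFromGitHub(path: string): Promise<string> {
  // Fetch the raw file content for the given path
  return ''
}
async function embedDoc(text: string): Promise<number[]> {
  // Call the OpenAI embeddings API, as in the earlier sketch
  return []
}

export default {
  // Cron Trigger entry point: find changed docs and enqueue them
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    for (const path of await changedDocPaths()) {
      await env.REFRESH_QUEUE.send({ path })
    }
  },

  // Queue consumer: re-embed each changed document, one batch at a time
  async queue(batch: MessageBatch<{ path: string }>, env: Env) {
    for (const msg of batch.messages) {
      const content = await fetchDocFromGitHub(msg.body.path)
      const vector = await embedDoc(content)
      // In the real pipeline, a Durable Object tracks batch completion; once
      // every batch is done, the vectors are combined and written to KV
      msg.ack()
    }
  }
}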

This is a great example of how Workers can be used not only for front-facing HTTP APIs, but also for scheduled batch-processing use cases.

Let us know what you think

We are in a time when technology is constantly changing and evolving, so as you experiment with these new plugins, please let us know what you think. What do you like? What could be better? Since ChatGPT plugins are in alpha, changes to the plugins’ user interface or performance (i.e. latency) may occur. If you build your own plugin, we’d love to see it, and if it’s open source you can submit a pull request on our example repo. You can always find us hanging out in our developer Discord.

Magic in minutes: how to build a ChatGPT plugin with Cloudflare Workers

Post Syndicated from Kristian Freeman original http://blog.cloudflare.com/magic-in-minutes-how-to-build-a-chatgpt-plugin-with-cloudflare-workers/

Today, we're open-sourcing our ChatGPT Plugin Quickstart repository for Cloudflare Workers, designed to help you build awesome and versatile plugins for ChatGPT with ease. If you don’t already know, ChatGPT is a conversational AI model from OpenAI which has an uncanny ability to take chat input and generate human-like text responses.

With the recent addition of ChatGPT plugins, developers can create custom extensions and integrations to make ChatGPT even more powerful. Developers can now provide custom flows for ChatGPT to integrate into its conversational workflow – for instance, the ability to look up products when asking questions about shopping, or retrieving information from an API in order to have up-to-date data when working through a problem.

That's why we're super excited to contribute to the growth of ChatGPT plugins with our new Quickstart template. Our goal is to make it possible to build and deploy a new ChatGPT plugin to production in minutes, so developers can focus on creating incredible conversational experiences tailored to their specific needs.

How it works

Our Quickstart is designed to work seamlessly with Cloudflare Workers. Under the hood, it uses our command-line tool wrangler to create a new project and deploy it to Workers.

When building a ChatGPT plugin, there are three things you need to consider:

  1. The plugin's metadata, which includes the plugin's name, description, and other info
  2. The plugin's schema, which defines the plugin's input and output
  3. The plugin's behavior, which defines how the plugin responds to user input

To handle all of these parts in a simple, easy-to-understand API, we've created the @cloudflare/itty-router-openapi package, which makes it easy to manage your plugin's metadata, schema, and behavior. This package is included in the ChatGPT Plugin Quickstart, so you can get started right away.

To show how the package works, we'll look at two key files in the ChatGPT Plugin Quickstart: index.js and search.js. The index.js file contains the plugin's metadata and schema, while the search.js file contains the plugin's behavior. Let's take a look at each of these files in more detail.

In index.js, we define the plugin's metadata and schema. The metadata includes the plugin's name, description, and version, while the schema defines the plugin's input and output.

The configuration matches the definition required by OpenAI's plugin manifest, and helps ChatGPT understand what your plugin is, and what purpose it serves.

Here's what the index.js file looks like:

import { OpenAPIRouter } from "@cloudflare/itty-router-openapi";
import { GetSearch } from "./search";

export const router = OpenAPIRouter({
  schema: {
    info: {
      title: 'GitHub Repositories Search API',
      description: 'A plugin that allows the user to search for GitHub repositories using ChatGPT',
      version: 'v0.0.1',
    },
  },
  docs_url: '/',
  aiPlugin: {
    name_for_human: 'GitHub Repositories Search',
    name_for_model: 'github_repositories_search',
    description_for_human: "GitHub Repositories Search plugin for ChatGPT.",
    description_for_model: "GitHub Repositories Search plugin for ChatGPT. You can search for GitHub repositories using this plugin.",
    contact_email: '[email protected]',
    legal_info_url: 'http://www.example.com/legal',
    logo_url: 'https://workers.cloudflare.com/resources/logo/logo.svg',
  },
})

router.get('/search', GetSearch)

// 404 for everything else
router.all('*', () => new Response('Not Found.', { status: 404 }))

export default {
  fetch: router.handle
}

In the search.js file, we define the plugin's behavior. This is where we define how the plugin responds to user input. It also defines the plugin's schema, which ChatGPT uses to validate the plugin's input and output.

Importantly, this doesn't just define the implementation of the code. It also automatically generates an OpenAPI schema that helps ChatGPT understand how your code works — for instance, that it takes a parameter "q", that it is of "String" type, and that it can be described as "The query to search for". With the schema defined, the handle function makes any relevant parameters available as function arguments, to implement the logic of the endpoint as you see fit.

Here's what the search.js file looks like:

import { ApiException, OpenAPIRoute, Query, ValidationError } from "@cloudflare/itty-router-openapi";

export class GetSearch extends OpenAPIRoute {
  static schema = {
    tags: ['Search'],
    summary: 'Search repositories by a query parameter',
    parameters: {
      q: Query(String, {
        description: 'The query to search for',
        default: 'cloudflare workers'
      }),
    },
    responses: {
      '200': {
        schema: {
          repos: [
            {
              name: 'itty-router-openapi',
              description: 'OpenAPI 3 schema generator and validator for Cloudflare Workers',
              stars: '80',
              url: 'https://github.com/cloudflare/itty-router-openapi',
            }
          ]
        },
      },
    },
  }

  async handle(request: Request, env, ctx, data: Record<string, any>) {
    const url = `https://api.github.com/search/repositories?q=${data.q}`

    const resp = await fetch(url, {
      headers: {
        'Accept': 'application/vnd.github.v3+json',
        'User-Agent': 'RepoAI - Cloudflare Workers ChatGPT Plugin Example'
      }
    })

    if (!resp.ok) {
      return new Response(await resp.text(), { status: 400 })
    }

    const json = await resp.json()

    // @ts-ignore
    const repos = json.items.map((item: any) => ({
      name: item.name,
      description: item.description,
      stars: item.stargazers_count,
      url: item.html_url
    }))

    return {
      repos: repos
    }
  }
}

The quickstart smooths out the entire development process, so you can focus on crafting custom behaviors, endpoints, and features for your ChatGPT plugins without getting caught up in the nitty-gritty. If you aren't familiar with API schemas, this also means that you can rely on our schema and manifest generation tools to handle the complicated bits, and focus on the implementation to build your plugin.

Besides making development a breeze, it's worth noting that you're also deploying to Workers, which takes advantage of Cloudflare's vast global network. This means your ChatGPT plugins enjoy low-latency access and top-notch performance, no matter where your users are located. By combining the strengths of Cloudflare Workers with the versatility of ChatGPT plugins, you can create conversational AI tools that are not only powerful and scalable but also cost-effective and globally accessible.

Example

To demonstrate the capabilities of our quickstarts, we've created two example ChatGPT plugins. The first, which we reviewed above, connects ChatGPT with the GitHub Repositories Search API. This plugin enables users to search for repositories by simply entering a search term, returning useful information such as the repository's name, description, star count, and URL.

One intriguing aspect of this example is how the plugin can go beyond basic querying. For instance, when asked “What are the most popular JavaScript projects?”, ChatGPT was able to intuitively understand the user’s intent and craft a new query parameter, querying both by the number of stars (measuring popularity) and by the specific programming language (JavaScript), without requiring any explicit prompting. This showcases the power and adaptability of ChatGPT plugins when integrated with external APIs, providing more insightful and context-aware responses.

The second plugin uses the Pirate Weather API to retrieve up-to-date weather information. Remarkably, OpenAI is able to translate the request for a specific location (for instance, “Seattle, Washington”) into longitude and latitude values – which the Pirate Weather API uses for lookups – and make the correct API request, without the user needing to do any additional work.

With our ChatGPT Plugin Quickstarts, you can create custom plugins that connect to any API, database, or other data source, giving you the power to create ChatGPT plugins that are as unique and versatile as your imagination. The possibilities are endless, opening up a whole new world of conversational AI experiences tailored to specific domains and use cases.

Get started today

The ChatGPT Plugin Quickstarts don’t just make development a snap; they also offer seamless deployment and scaling thanks to Cloudflare Workers. With the generous free plan provided by Workers, you can deploy your plugin quickly and scale it as far as needed.

Our ChatGPT Plugin Quickstarts are all about sparking creativity, speeding up development, and empowering developers to create amazing conversational AI experiences. By leveraging Cloudflare Workers' robust infrastructure and our streamlined tooling, you can easily build, deploy, and scale custom ChatGPT plugins, unlocking a world of endless possibilities for conversational AI applications.

Whether you're crafting a virtual assistant, a customer support bot, a language translator, or any other conversational AI tool, our ChatGPT Plugin Quickstarts are a great place to start. We're excited to provide this Quickstart, and would love to see what you build with it. Join us in our Discord community to share what you're working on!

AI to Aid Democracy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/ai-to-aid-democracy.html

There’s good reason to fear that AI systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested AI systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. AI could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an AI not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

Today, state-of-the-art AI systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their AIs and what sort of access we have. They can steer and shape those AIs to conform to their corporate interests. That isn’t the world we want. Instead, we want AI options that are both public goods and directed toward public good.

We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

But a public option AI could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that AI will replace humans in the political debate, only that it can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. AI will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these AI uses are perceived will change over time, and there is still much room for improvement in LLMs—but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and AI could serve a similar role for everyday citizens.

If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

Finally, and most ambitiously, AI could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.

AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.

Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how AI might help people have better conversations and make better decisions, rather than taking advantage of human biases to maximize profits.

This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public AI systems would be political.

Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the AI expertise and capability to oversee this effort.

Instead of releasing half-finished AI systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent AI chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

To build assistive AI for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.

Capturing that kind of user interaction and feedback within a political environment suspicious of both AI and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply AI through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

The next generation of AI experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public AI option.

Even with these approaches, building and fielding a democratic AI option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial AI domination undermines democratic politics—will be much messier and much worse.

This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

EDITED TO ADD: Linux Weekly News discussion.

LLMs and Phishing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance….” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams.

So why were scammers still sending such obviously dubious emails? In 2012, researcher Cormac Herley offered an answer: It weeded out all but the most gullible. A smart scammer doesn’t want to waste their time with people who reply and then realize it’s a scam when asked to wire money. By using an obvious scam email, the scammer can focus on the most potentially profitable people. It takes time and effort to engage in the back-and-forth communications that nudge marks, step by step, from interlocutor to trusted acquaintance to pauper.

Long-running financial scams are now known as pig butchering, growing the potential mark up until their ultimate and sudden demise. Such scams, which require gaining trust and infiltrating a target’s personal finances, take weeks or even months of personal time and repeated interactions. It’s a high-stakes, low-probability game that the scammer is playing.

Here is where LLMs will make a difference. Much has been written about the unreliability of OpenAI’s GPT models and those like them: They “hallucinate” frequently, making up things about the world and confidently spouting nonsense. For entertainment, this is fine, but for most practical uses it’s a problem. It is, however, not a bug but a feature when it comes to scams: LLMs’ ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions. AI chatbot scams can ensnare more people, because the pool of victims who will fall for a more subtle and flexible scammer—one that has been trained on everything ever written online—is much larger than the pool of those who believe the king of Nigeria wants to give them a billion dollars.

Personal computers are powerful enough today that they can run compact LLMs. After Facebook’s new model, LLaMA, was leaked online, developers tuned it to run fast and cheaply on powerful laptops. Numerous other open-source LLMs are under development, with a community of thousands of engineers and scientists.

A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting along their path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.

This is a change in both scope and scale. LLMs will change the scam pipeline, making them more profitable than ever. We don’t know how to live in a world with a billion, or 10 billion, scammers that never sleep.

There will also be a change in the sophistication of these attacks. This is due not only to AI advances, but to the business model of the internet—surveillance capitalism—which produces troves of data about all of us, available for purchase from data brokers. Targeted attacks against individuals, whether for phishing or data collection or scams, were once only within the reach of nation-states. Combine the digital dossiers that data brokers have on all of us with LLMs, and you have a tool tailor-made for personalized scams.

Companies like OpenAI attempt to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI’s designers. ChatGPT, and then Bing Chat, and then GPT-4 were all jailbroken within minutes of their release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalized, and the community of users pulls the LLM open through the chinks in its armor. And the technology is advancing too fast for anyone to fully understand how these models work, even their designers.

This is all an old story, though: it reminds us that many of the bad uses of AI are a reflection of humanity more than they are a reflection of AI technology itself. Scams are nothing new—simply the intent, and then the action, of one person tricking another for personal gain. And the use of others as minions to accomplish scams is sadly nothing new or uncommon: for example, organized crime in Asia currently kidnaps or indentures thousands in scam sweatshops. Is it better that organized crime will no longer see the need to exploit and physically abuse people to run their scam operations, or worse that they and many others will be able to scale up scams to an unprecedented level?

Defense can and will catch up, but before it does, our signal-to-noise ratio is going to drop dramatically.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

Prompt Injection Attacks on Large Language Models

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/03/prompt-injection-attacks-on-large-language-models.html

This is a good survey on prompt injection attacks on large language models (like ChatGPT).

Abstract: We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM.

In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats.

Defending against AI Lobbyists

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/defending-against-ai-lobbyists.html

When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. And while the technology can be regulated, the real solution lies in recognizing that the problem is human actors—and those we can do something about.

Our essay argued that the much heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying.

We speculated that AI-assisted lobbyists could use generative models to write op-eds and regulatory comments supporting a position, identify members of Congress who wield the most influence over pending legislation, use network pattern identification to discover undisclosed or illegal political coordination, or use supervised machine learning to calibrate the optimal contribution needed to sway the vote of a legislative committee member.

These are all examples of what we call AI hacking. Hacks are strategies that follow the rules of a system, but subvert its intent. Currently a human creative process, future AIs could discover, develop, and execute these same strategies.

While some of these activities are the longtime domain of human lobbyists, AI tools applied to the same tasks would have unfair advantages. They can scale their activity effortlessly across every state in the country (human lobbyists tend to focus on a single state), they may uncover patterns and approaches that are unintuitive and unrecognizable to human experts, and they can do so nearly instantaneously, with little chance for human decision makers to keep up.

These factors could make AI hacking of the democratic process fundamentally ungovernable. Any policy response to limit the impact of AI hacking on political systems would be critically vulnerable to subversion or control by an AI hacker. If AI hackers achieve unchecked influence over legislative processes, they could dictate the rules of our society: including the rules that govern AI.

We admit that this seemed far-fetched when we first wrote about it in 2021. But now that the emanations and policy prescriptions of ChatGPT have been given an audience in The New York Times and innumerable other outlets in recent weeks, it’s getting harder to dismiss.

At least one group of researchers is already testing AI techniques to automatically find and advocate for bills that benefit a particular interest. And one Massachusetts representative used ChatGPT to draft legislation regulating AI.

The AI technology of two years ago seems quaint by the standards of ChatGPT. What will the technology of 2025 seem like if we could glimpse it today? To us there is no question that now is the time to act.

First, let’s dispense with the concepts that won’t work. We cannot solely rely on explicit regulation of AI technology development, distribution, or use. Regulation is essential, but it would be vastly insufficient. The rate of AI technology development, and the speed at which AI hackers might discover damaging strategies, already outpaces policy development, enactment, and enforcement.

Moreover, we cannot rely on detection of AI actors. The latest research suggests that AI models trying to classify text samples as human- or AI-generated have limited precision, and are ill equipped to handle real world scenarios. These reactive, defensive techniques will fail because the rate of advancement of the “offensive” generative AI is so astounding.

Additionally, we risk a dragnet that will exclude masses of human constituents who use AI to help them express their thoughts, or machine translation tools to help them communicate. If a written opinion or strategy conforms to the intent of a real person, it should not matter whether they enlisted the help of an AI (or a human assistant) to write it.

Most importantly, we should avoid the classic trap of societies wrenched by the rapid pace of change: privileging the status quo. Slowing down may seem like the natural response to a threat whose primary attribute is speed. Ideas like increasing requirements for human identity verification, aggressive detection regimes for AI-generated messages, and elongation of the legislative or regulatory process would all play into this fallacy. While each of these solutions may have some value independently, they do nothing to make the already powerful actors less powerful.

Finally, it won’t work to try to starve the beast. Large language models like ChatGPT have a voracious appetite for data. They are trained on past examples of the kinds of content that they will be asked to generate in the future. Similarly, an AI system built to hack political systems will rely on data that documents the workings of those systems, such as messages between constituents and legislators, floor speeches, chamber and committee voting results, contribution records, lobbying relationship disclosures, and drafts of and amendments to legislative text. The steady advancement towards the digitization and publication of this information that many jurisdictions have made is positive. The threat of AI hacking should not dampen or slow progress on transparency in public policymaking.

Okay, so what will help?

First, recognize that the true threats here are malicious human actors. Systems like ChatGPT and our still-hypothetical political-strategy AI are still far from artificial general intelligences. They do not think. They do not have free will. They are just tools directed by people, much like lobbyists for hire. And, like lobbyists, they will be available primarily to the richest individuals, groups, and their interests.

However, we can use the same tools that would be effective in controlling human political influence to curb AI hackers. These tools will be familiar to any follower of the last few decades of U.S. political history.

Campaign finance reforms such as contribution limits, particularly when applied to political action committees of all types as well as to candidate operated campaigns, can reduce the dependence of politicians on contributions from private interests. The unfair advantage of a malicious actor using AI lobbying tools is at least somewhat mitigated if a political target’s entire career is not already focused on cultivating a concentrated set of major donors.

Transparency also helps. We can expand mandatory disclosure of contributions and lobbying relationships, with provisions to prevent the obfuscation of the funding source. Self-interested advocacy should be transparently reported whether or not it was AI-assisted. Meanwhile, we should increase penalties for organizations that benefit from AI-assisted impersonation of constituents in political processes, and set a greater expectation of responsibility to avoid “unknowing” use of these tools on their behalf.

Our most important recommendation is less legal and more cultural. Rather than trying to make it harder for AI to participate in the political process, make it easier for humans to do so.

The best way to fight an AI that can lobby for moneyed interests is to help the little guy lobby for theirs. Promote inclusion and engagement in the political process so that organic constituent communications grow alongside the potential growth of AI-directed communications. Encourage direct contact that generates more-than-digital relationships between constituents and their representatives, which will be an enduring way to privilege human stakeholders. Provide paid leave to allow people to vote as well as to testify before their legislature and participate in local town meetings and other civic functions. Provide childcare and accessible facilities at civic functions so that more community members can participate.

The threat of AI hacking our democracy is legitimate and concerning, but its solutions are consistent with our democratic values. Many of the ideas above are good governance reforms already being pushed and fought over at the federal and state level.

We don’t need to reinvent our democracy to save it from AI. We just need to continue the work of building a just and equitable political system. Hopefully ChatGPT will give us all some impetus to do that work faster.

This essay was written with Nathan Sanders, and appeared on the Belfer Center blog.

ChatGPT Is Ingesting Corporate Secrets

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/chatgpt-is-ingesting-corporate-secrets.html

Interesting:

According to internal Slack messages that were leaked to Insider, an Amazon lawyer told workers that they had “already seen instances” of text generated by ChatGPT that “closely” resembled internal company data.

This issue seems to have come to a head recently because Amazon staffers and other tech workers throughout the industry have begun using ChatGPT as a “coding assistant” of sorts to help them write or improve strings of code, the report notes.

[…]

“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”

ChatGPT Hardware a Look at 8x NVIDIA A100 Powering the Tool

Post Syndicated from Patrick Kennedy original https://www.servethehome.com/chatgpt-hardware-a-look-at-8x-nvidia-a100-systems-powering-the-tool-openai-microsoft-azure-supermicro-inspur-asus-dell-gigabyte/

If you have heard about OpenAI's ChatGPT AI inference running on the NVIDIA A100 and want to know what an NVIDIA A100 is, this is for you

The post ChatGPT Hardware a Look at 8x NVIDIA A100 Powering the Tool appeared first on ServeTheHome.

Science News: ChatGPT, Medicine, Biotechnology, and Biodiversity

Post Syndicated from Михаил Ангелов original https://www.toest.bg/nauchni-novini-chatgpt-meditsina-biotehnologii-i-bioraznoobrazie/

Artificial intelligence

Although it has been available for only three months, the ChatGPT chatbot is generating a great deal of excitement. Access to the program was relatively limited at first, but now anyone willing to share their email address and phone number with its developers can take advantage of its capabilities.

Enterprising pupils and students were quick to sense its potential and are using it as a homework-writing assistant. One of the first reports came from Northern Michigan University, where a philosophy professor grew suspicious of an essay that was far better than the rest. After a conversation with the student, it emerged that the essay had been generated by ChatGPT. Worryingly, had the student not confessed, the professor would have been forced to accept the essay, because there was no direct evidence that it was not the student's own work.

This case prompted a discussion in US academic circles about how to deal with the new challenge. A number of secondary schools have blocked access to the program on school networks and devices, but technically literate students easily get around that measure. Universities have therefore opted to change how assignments are structured and graded. Besides cutting back on homework, instructors are looking for new approaches that better assess students' capacity for critical thinking.

There have also been sensational reports that the program has passed a series of exams, including medical licensing and bar exams as well as an MBA exam. Although its work earned only average grades, this raises questions not only about the software's capabilities but also about the structure of those exams and potential changes to them.

Predictably, programs that try to determine whether a given text was generated by ChatGPT have begun to appear. One of them is GPTZero, developed by 22-year-old Princeton student Edward Tian. Similar software from the creators of ChatGPT is now available as well. In this peculiar contest it is very hard for either side to gain a lasting edge, and the balance of power usually tilts toward the "attacker."

The product is developed by OpenAI, a non-profit company whose co-founders include Elon Musk. OpenAI is also behind the artist algorithm DALL·E, which stirred up the art world and sparked discussions about ethics, labor law, and copyright. We are already seeing "co-authorship" between AI products in the form of illustrated books in which the pictures are the work of MidJourney and the text of ChatGPT.

Although some experts do not consider the technology a threat, since it cannot provide citations or deep knowledge on most topics, keep in mind that it is still taking its first steps. Future versions will be trained on even larger datasets and will benefit from advances in new algorithms and hardware. Nor should we overlook the fact that investment in this type of company is only beginning to grow: OpenAI has already announced new funding from Microsoft amounting to several billion dollars.

Medicine

Tumor therapy

This is a field that evolves constantly and often applies methods from the vanguard of scientific progress. One of them is exposing patients to killed tumor cells in order to activate the immune system and attack malignant growths. A new approach has been proposed to improve the therapy's effectiveness.

In it, instead of being killed, the cells are modified in two directions: to secrete substances that kill tumor cells and alter their microenvironment, and at the same time to be more recognizable to the immune system so as to make its job easier. The method exploits tumor cells' tendency to attract one another. Once introduced into the patient, the therapeutic cells (as the authors call them) can travel to where they are needed and act locally.

Because natural immunity is activated, the researchers believe the therapeutic cells could also serve as a vaccine. To that end, a self-destruct mechanism is inserted into them. Once they have trained the immune system to recognize them, they can be deactivated and cleared from the body. The model improves prognosis in so-called humanized mice, which carry cells from various human tissues so that their environment approximates our immune system.

The study is interesting both for the direct applications it may lead to and for its approach: taking a patient's own cells and turning them into a therapeutic agent through several parallel genetic modifications.

Antidepressants

Modern medicine relies on them more and more: between 1998 and 2018 the number of prescriptions in England tripled. Among the most commonly prescribed are the SSRIs (selective serotonin reuptake inhibitors), which increase the amount of serotonin in the brain. In most cases their side effects are not significant, but some can substantially affect a patient's condition.

One of the most common is apathy, which is partly a consequence of how antidepressants work: they reduce the intensity of negative emotions but dampen positive ones as well. A new study by teams from the University of Cambridge and the University of Copenhagen on the effect of SSRIs in healthy volunteers points to a possible cause. Participants were split into two groups and, for an average of 26 days, took either the drug or a placebo. Many parameters were tracked through questionnaires, cognitive tasks, and physiological tests.

No differences were found between the groups in cognitive functions related to memory and emotion, but a test designed to track responses to positive and negative stimuli revealed a blunted response to rewards, which the authors believe is the mechanism that triggers the feeling of apathy. This raises questions about how these drugs are prescribed and opens the door to future research into their mode of action.

The widespread prescription of antidepressants may have other unexpected consequences as well. An Australian team recently showed that the drugs make it easier for bacteria to acquire antibiotic resistance.

Two mechanisms working in concert are described. Under the influence of the drugs, the bacteria (E. coli) begin to release more reactive oxygen species, which damage cellular components. This increases evolutionary pressure and the likelihood of mutations. The bacteria also started producing more of the proteins responsible for pumping antibiotics out of their cells.

That alone is a problem, but the scientists also observed an increased capacity for horizontal gene transfer, which plays a particularly large role in spreading antibiotic resistance among bacteria. This is the passing of genetic material from one organism to another that is not its offspring. In bacteria it most often happens by taking up genetic material from the environment (transformation) or by conjugation, in which a temporary bridge forms between two cells to exchange genetic information.

It does not matter whether you take such medication or not, because the drugs are excreted in urine and accumulate in wastewater systems, from which they can enter the wider water cycle and cause changes in higher aquatic organisms such as fish.

Skepticism about the broad use of antidepressants, and of SSRIs in particular, is growing. It was fueled by a recent and widely discussed meta-analysis that casts doubt on the well-established idea of a link between lowered serotonin levels and depression. The conclusion of this synthesis of 12 earlier studies and meta-analyses is that there is currently no convincing evidence supporting the serotonin hypothesis. One view among the authors is that this new perspective may help patients who have come to believe their condition is due solely to a chemical imbalance and therefore exclude other approaches, such as psychotherapy, from their treatment.

Even so, experts advise patients not to make decisions on their own, but to consult their doctors and monitor the course of their therapy carefully.

New antibiotics

The substance albicidin is produced by Xanthomonas albilineans, a pathogen of sugarcane. In the mid-1980s scientists established that albicidin has antibiotic activity, yet it is still not used in medicine. One reason is an incomplete understanding of how it works.

Drawing on advances in cryo-electron microscopy, which lets us see how molecules interact with one another, a group of scientists from the John Innes Centre in England has established how albicidin wedges itself between the DNA molecule and the enzyme DNA topoisomerase, blocking the enzyme's function. The discovery paves the way for adding the antibiotic to doctors' toolkit, and its novel mode of action makes it especially promising against resistant E. coli and S. aureus.

Biotechnology in animal husbandry

At the end of 2021, a study was published in which CRISPR was used to bias the sex of offspring in mice. It employed a two-component system, one part in the mother and the other in the father, inserted into either the X or the Y chromosome depending on the desired sex of the offspring. When the two parts come together in a zygote, its development halts at a very early stage, a few days after fertilization.

A surprising side effect is a compensation in the size of litters from the gene-edited animals. Mathematically, about 50% fewer pups would be expected, but the experiment observed an average reduction of only 35%. The scientific contribution is significant, but the potential application in animal husbandry is drawing even more interest.

Because egg-laying chicken breeds are unsuitable for meat production, millions of male chicks are killed every year almost immediately after hatching. Solutions to prevent this controversial practice already exist, including in-egg testing, but it is expensive and not widely used.

As an alternative, the Israeli company Huminn Poultry offers the "Golda" hen. The technology has not been published in scientific journals and it is not entirely clear how it works, but from the available information it appears to follow an approach similar to the one used in mice, exploiting a peculiarity of birds: males carry two identical sex chromosomes (ZZ), while females carry different ones (ZW).

The edit is applied only to female chickens. After crossing with a rooster, they lay eggs from which only female chicks can hatch, because eggs with male zygotes are not fertile. Since the Z chromosome comes from a rooster that has not been gene-edited, the resulting hens carry no trace of genetic modification, and their eggs can be sold freely without being subject to regulation.

This is good news for the industry, given the European Commission's proposal to end the culling of male chicks, which follows the bans that took effect early last year in Germany and France.

Biodiversity

The mass extinction of the dinosaurs is a well-known period in our planet's history. Until recently it was thought that there had been four other such events, but another has now come to light, occurring about 550 million years ago in the Ediacaran period. It wiped out about 80% of the species then in existence, which were among the first more complex multicellular organisms.

Unfortunately, it is hard to draw complete conclusions, because those organisms had no hard shells and left few fossils. According to the geological evidence, the oceans lost a significant amount of oxygen, and only organisms adapted to life in such an environment survived. This also revises the notion that we are living through the sixth extinction: the processes currently driven by human activity will most likely be renamed the seventh mass extinction.

And yet nature keeps surprising us with places of exceptional biological diversity and with species that have never been described.

In a Bolivian nature park, 35 fish species that may be new to science have been discovered. The survey ran for four years and documented more than 300 fish species, twice as many as previously known in the park. Among them are representatives of four genera never before recorded in Bolivia, as well as members of poorly represented genera. Beyond being good news for the planet's biodiversity, the discovery of a new species is genuinely thrilling.

2022 brought news of quite a few new plants and animals. Most come from regions of greater biological diversity, but the discoveries are scattered across the globe. One of them is the tropical tree Uvariopsis dicaprio, named after Leonardo DiCaprio and discovered in Cameroon. From the US comes the millipede Nannaria swiftae, named after Taylor Swift. Several scorpion species were described in Italy, and in Montenegro the large slug Limax pseudocinereoniger. An interesting overview with photographs can be found in Discover Wildlife.

It is estimated that there are nearly 9 million species of living organisms on Earth, of which only a little over 1 million have been described. Nature still holds countless surprises, and it is up to us to discover and preserve them.

Header image: micrograph of a corn embryo at 100x magnification. Source: Berkshire Community College Bioscience Image Library / Flickr

Once or twice a month, Михаил Ангелов (a biologist, an agronomist, and our team's favorite nerd) will bring us the most interesting recent news from various fields of science and explain why these achievements matter so much for the world and humanity. Or, at the very least, why they are curious and fun.

AI and Political Lobbying

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/ai-and-political-lobbying.html

Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing.

Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes—not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers.

Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.

When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. AI could provide techniques for that as well.

A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area—say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.
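
To make the targeting idea concrete, here is a toy sketch in Python. Every name and edge in the graph is invented for illustration, and PageRank over a hypothetical co-sponsorship network is just one crude proxy for the "leverage" a real system would estimate from far richer data (committee seats, voting records, donations, and more).

    import networkx as nx

    # Hypothetical co-sponsorship graph: an edge A -> B means that
    # legislator A signed on to a bill sponsored by legislator B.
    G = nx.DiGraph()
    G.add_edges_from([
        ("Rep. A", "Rep. C"), ("Rep. B", "Rep. C"),
        ("Rep. E", "Rep. C"), ("Rep. C", "Rep. D"),
    ])

    # PageRank as a rough influence score: legislators whom many
    # colleagues defer to accumulate rank.
    scores = nx.pagerank(G, alpha=0.85)
    for legislator, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{legislator}: {score:.3f}")  # ordered by this crude proxy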

Once individuals and strategies are identified, an AI chatbot like ChatGPT could craft written messages to be used in letters, comments—anywhere text is useful. Human lobbyists could also target those individuals directly. It’s the combination that’s important: Editorial and social media comments only get you so far, and knowing which legislators to target isn’t itself enough.

This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of AI may be so hard to detect—particularly if it is being used strategically to guide human actors.

The data necessary to train such strategic targeting systems will only grow with time. Open societies generally make their democratic processes a matter of public record, and most legislators are eager—at least, performatively so—to accept and respond to messages that appear to be from their constituents.

Maybe an AI system could uncover which members of Congress have significant sway over leadership but still have low enough public profiles that there is only modest competition for their attention. It could then pinpoint the SuperPAC or public interest group with the greatest impact on that legislator’s public positions. Perhaps it could even calibrate the size of donation needed to influence that organization or direct targeted online advertisements carrying a strategic message to its members. For each policy end, the right audience; and for each audience, the right message at the right time.

What makes the threat of AI-powered lobbyists greater than the threat already posed by the high-priced lobbying firms on K Street is their potential for acceleration. Human lobbyists rely on decades of experience to find strategic solutions to achieve a policy outcome. That expertise is limited, and therefore expensive.

AI could, theoretically, do the same thing much more quickly and cheaply. Speed out of the gate is a huge advantage in an ecosystem in which public opinion and media narratives can become entrenched quickly, as is being nimble enough to shift rapidly in response to chaotic world events.

Moreover, the flexibility of AI could help achieve influence across many policies and jurisdictions simultaneously. Imagine an AI-assisted lobbying firm that can attempt to place legislation in every single bill moving in the US Congress, or even across all state legislatures. Lobbying firms tend to work within one state only, because there are such complex variations in law, procedure and political structure. With AI assistance in navigating these variations, it may become easier to exert power across political boundaries.

Just as teachers will have to change how they give students exams and essay assignments in light of ChatGPT, governments will have to change how they relate to lobbyists.

To be sure, there may also be benefits to this technology in the democracy space; the biggest one is accessibility. Not everyone can afford an experienced lobbyist, but a software interface to an AI system could be made available to anyone. If we’re lucky, maybe this kind of strategy-generating AI could revitalize the democratization of democracy by giving this kind of lobbying power to the powerless.

However, the biggest and most powerful institutions will likely use any AI lobbying techniques most successfully. After all, executing the best lobbying strategy still requires insiders—people who can walk the halls of the legislature—and money. Lobbying isn’t just about giving the right message to the right person at the right time; it’s also about giving money to the right person at the right time. And while an AI chatbot can identify who should be on the receiving end of those campaign contributions, humans will, for the foreseeable future, need to supply the cash. So while it’s impossible to predict what a future filled with AI lobbyists will look like, it will probably make the already influential and powerful even more so.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Edited to Add: After writing this, we discovered that a research group is researching AI and lobbying:

We used autoregressive large language models (LLMs, the same type of model behind the now wildly popular ChatGPT) to systematically conduct the following steps. (The full code is available at this GitHub link: https://github.com/JohnNay/llm-lobbyist.)

  1. Summarize official U.S. Congressional bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
  2. Using either the original official bill summary (if it was not too long), or the summarized version:
    1. Assess whether the bill may be relevant to a company based on a company’s description in its SEC 10K filing.
    2. Provide an explanation for why the bill is relevant or not.
    3. Provide a confidence level to the overall answer.
  3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.

Here is the paper.
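
The three steps the researchers describe map onto a small amount of glue code. The following is a minimal sketch, assuming a placeholder call_llm() wrapper and an invented context-window budget; the authors' real implementation is in the GitHub repository linked in the excerpt above.

    MAX_CHARS = 12_000  # hypothetical stand-in for the model's context window

    def call_llm(prompt: str) -> str:
        """Placeholder: send the prompt to an LLM and return its reply."""
        raise NotImplementedError("wire up your LLM provider here")

    def lobby(bill_summary: str, company_10k_description: str) -> str:
        # Step 1: shrink the official bill summary if it won't fit in context.
        if len(bill_summary) > MAX_CHARS:
            bill_summary = call_llm(
                f"Summarize this Congressional bill summary:\n{bill_summary}"
            )
        # Step 2: assess relevance, with an explanation and a confidence level.
        verdict = call_llm(
            "Is the following bill relevant to the company described below?\n"
            "Answer YES or NO, explain why, and state your confidence.\n"
            f"Bill: {bill_summary}\nCompany: {company_10k_description}"
        )
        # Step 3: if relevant, draft a letter to the bill's sponsor.
        if verdict.strip().upper().startswith("YES"):
            return call_llm(
                "Draft a letter to the sponsor of this bill arguing for "
                "changes that favor the company.\n"
                f"Bill: {bill_summary}\nCompany: {company_10k_description}"
            )
        return verdict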

Threats of Machine-Generated Text

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/threats-of-machine-generated-text.html

With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, directions for future research. It’s a solid grounding amongst all of the hype.

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods

Abstract: Advances in natural language generation (NLG) have resulted in machine generated text that is increasingly difficult to distinguish from human authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability.
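
One detection family the survey covers is statistical: machine-generated text tends to be "too predictable" under a reference language model. Here is a minimal sketch of that idea, assuming the Hugging Face transformers and torch packages and an entirely arbitrary perplexity threshold; the survey's point is precisely that signals this simple have limited precision and are easy to evade.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
        return torch.exp(loss).item()

    def looks_machine_generated(text: str, threshold: float = 40.0) -> bool:
        # Low perplexity means the reference model finds the text very
        # predictable, a weak hint that a model rather than a person wrote it.
        return perplexity(text) < threshold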

ChatGPT-Written Malware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/chatgpt-written-malware.html

I don’t know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild.

…within a few weeks of ChatGPT going live, participants in cybercrime forums—­some with little or no coding experience­—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.

“It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,” company researchers wrote. “However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.”

Last month, one forum participant posted what they claimed was the first script they had written and credited the AI chatbot with providing a “nice [helping] hand to finish the script with a nice scope.”

The Python code combined various cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a key using elliptic curve cryptography and the curve ed25519 for signing files. Another part used a hard-coded password to encrypt system files using the Blowfish and Twofish algorithms. A third used RSA keys and digital signatures, message signing, and the blake2 hash function to compare various files.
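
For readers curious what two of those (entirely standard) primitives look like, here is a generic Python sketch of Ed25519 file signing and blake2 hashing using the pyca/cryptography and hashlib libraries. This is an illustration of the building blocks the report names, not the forum poster's script.

    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_file(path: str) -> tuple[bytes, bytes]:
        """Sign a file's contents; returns (signature, raw public key)."""
        private_key = Ed25519PrivateKey.generate()
        with open(path, "rb") as f:
            data = f.read()
        signature = private_key.sign(data)
        public = private_key.public_key().public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )
        return signature, public

    def blake2_digest(path: str) -> str:
        """blake2b digest of a file, e.g. for comparing files."""
        h = hashlib.blake2b()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()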

Check Point Research report.

ChatGPT-generated code isn’t that good, but it’s a start. And the technology will only get better. What matters here is that it gives less-skilled hackers—script kiddies—new capabilities.