Tag Archives: policy

Changing the industry with CISA’s Secure by Design principles

Post Syndicated from Kristina Galicova original https://blog.cloudflare.com/secure-by-design-principles


The United States Cybersecurity and Infrastructure Security Agency (CISA) and seventeen international partners are helping shape best practices for the technology industry with their ‘Secure by Design’ principles. The aim is to encourage software manufacturers to not only make security an integral part of their products’ development, but to also design products with strong security capabilities that are configured by default.

As a cybersecurity company, Cloudflare considers product security an integral part of its DNA. We strongly believe in CISA’s principles and will continue to uphold them in the work we do. We’re excited to share stories about how Cloudflare has baked secure by design principles into the products we build and into the services we make available to all of our customers.

What do “secure by design” and “secure by default” mean?

Secure by design describes a product where the security is ‘baked in’ rather than ‘bolted on’. Rather than manufacturers addressing security measures reactively, they take actions to mitigate any risk beforehand by building products in a way that reasonably protects against attackers successfully gaining access to them.

Secure by default means products are built so that the necessary security configurations come enabled by default, without additional charges.

CISA outlines the following three software product security principles:

  • Take ownership of customer security outcomes
  • Embrace radical transparency and accountability
  • Lead from the top

In its documentation, CISA provides comprehensive guidance on how to achieve its principles and what security measures a manufacturer should follow. Adhering to these guidelines not only enhances customers’ security and boosts the developer’s brand reputation, but also reduces long-term maintenance and patching costs for manufacturers.

Why does it matter?

Technology undeniably plays a significant role in our lives, automating numerous everyday tasks. The world’s dependence on technology and Internet-connected devices has significantly increased in the last few years, in large part due to Covid-19. During the outbreak, individuals and companies moved online as they complied with the public health measures that limited physical interactions.

While Internet connectivity makes our lives easier, bringing opportunities for online learning and remote work, it also creates an opportunity for attackers to benefit from such activities. Without proper safeguards, sensitive data such as user information, financial records, and login credentials can all be compromised and used for malicious activities.

System vulnerabilities can also impact entire industries and economies. In 2023, hackers from North Korea were suspected of being responsible for over 20% of crypto losses, exploiting software vulnerabilities and stealing more than $300 million from individuals and companies around the world.

Despite the potentially devastating consequences of insecure software, too many vendors place the onus of security on their customers — a fact that CISA underscores in their guidelines. While a level of care from customers is expected, the majority of risks should be handled by manufacturers and their products. Only then can we have more secure and trustworthy online interactions. The ‘Secure by Design’ principles are essential to bridge that gap and change the industry.

How does Cloudflare support secure by design principles?

Taking ownership of customer security outcomes

CISA explains that in order to take ownership of customer security outcomes, software manufacturers should invest in product security efforts that include application hardening, application features, and application default settings. At Cloudflare, we always keep these product security efforts top of mind, and a few examples are shared below.

Application hardening

At Cloudflare, our developers follow a defined software development life cycle (SDLC) management process with checkpoints from our security team. We proactively address known vulnerabilities before they can be exploited and fix any exploited vulnerabilities for all of our customers. For example, we are committed to memory-safe programming languages and use them where possible. Back in 2021, Cloudflare rewrote the Cloudflare WAF from Lua into memory-safe Rust. More recently, Cloudflare introduced a new in-house HTTP proxy named Pingora, which likewise moved us from memory-unsafe C to memory-safe Rust. Both of these projects were large undertakings that would not have been possible without executive support from our technical leadership team.

Zero Trust Security

By default, we align with CISA’s Zero Trust Maturity Model through the use of Cloudflare’s Zero Trust Security suite of services, to prevent unauthorized access to Cloudflare data, development resources, and other services. We minimize trust assumptions and require strict identity verification for every person and device trying to access any Cloudflare resources, whether self-hosted or in the cloud.
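In pseudocode terms, the heart of a zero trust model is a per-request access decision. The sketch below is purely illustrative (not Cloudflare’s implementation): the policy table, group names, and posture checks are invented for the example, but it shows the core idea that no request is trusted implicitly.

```python
# Illustrative zero-trust access check (not Cloudflare's implementation).
# Every request is evaluated on identity, device posture, and policy;
# nothing is trusted just because it comes from inside the network.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    groups: set[str]          # identity-provider group memberships
    mfa_verified: bool        # strict identity verification
    device_compliant: bool    # device posture check

# Hypothetical policy: resource -> groups allowed to reach it
POLICY = {
    "wiki": {"engineering", "support"},
    "deploy": {"engineering"},
}

def allow(req: Request, resource: str) -> bool:
    """Grant access only when every condition holds, on every request."""
    allowed_groups = POLICY.get(resource, set())
    return (
        req.mfa_verified
        and req.device_compliant
        and bool(req.groups & allowed_groups)
    )

assert allow(Request("ana", {"engineering"}, True, True), "deploy")
assert not allow(Request("bob", {"support"}, True, False), "deploy")
```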

At Cloudflare, we believe that Zero Trust Security is a must-have security architecture in today’s environment, where cybersecurity attacks are rampant and hybrid work environments are the new normal. To help protect small businesses today, we offer a Zero Trust plan, free of charge for up to 50 users, that provides the essential security controls needed to keep employees and apps protected online.

Application features

We not only provide users with many essential security tools for free, but we have helped push the entire industry to provide better security features by default since our early days.

Back in 2014, during Cloudflare’s birthday week, we announced that we were making encryption free for all our customers by introducing Universal SSL. Then in 2015, we went one step further and provided full encryption of all data from the browser to the origin, for free. Now, the rest of the industry has followed our lead and encryption by default has become the standard for Internet applications.

During Cloudflare’s seventh Birthday Week in 2017, we were incredibly proud to announce unmetered DDoS mitigation. The service absorbs and mitigates large-scale DDoS attacks without charging customers for the excess bandwidth consumed during an attack. With that announcement, we eliminated the industry standard of ‘surge pricing’ for DDoS attacks.

In 2021, we announced a protocol called MIGP (“Might I Get Pwned”) that allows users to check whether their credentials have been compromised without exposing any unnecessary information in the process. Aside from a bucket ID derived from a prefix of the hash of your email, your credentials stay on your device and are never sent (even encrypted) over the Internet. Before that, using credential checking services could turn out to be a vulnerability in itself, leaking sensitive information while you are checking whether or not your credentials have been compromised.
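A minimal sketch of that bucket-lookup step may help, assuming SHA-256 and a 5-hex-character prefix (the real MIGP parameters differ, and the sketch omits the additional encryption the full protocol applies to bucket entries):

```python
# Sketch of MIGP-style credential checking (illustrative only).
# Only a short hash prefix leaves the device; the full credential
# comparison happens locally against the downloaded bucket.
import hashlib

PREFIX_CHARS = 5  # assumed bucket-prefix length for illustration

def bucket_id(email: str) -> str:
    """The only value sent to the server: a prefix of the email's hash."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()[:PREFIX_CHARS]

def record_hash(email: str, password: str) -> str:
    """Hash of the full credential pair, computed and kept on-device."""
    return hashlib.sha256(f"{email}:{password}".encode()).hexdigest()

def is_compromised(email: str, password: str, bucket: set[str]) -> bool:
    """Check locally whether the credential appears in the fetched bucket."""
    return record_hash(email, password) in bucket

# Usage: download the bucket named bucket_id(email) from the server,
# then run is_compromised() entirely on the client.
```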

A year later, in 2022, Cloudflare again disrupted the industry when we announced WAF (Web Application Firewall) Managed Rulesets free of charge for all Cloudflare plans. WAF is a service responsible for protecting web applications from malicious attacks. Such attacks have a major impact across the Internet regardless of the size of an organization. By making WAF free, we are making the Internet safer for everyone.

Finally, at the end of 2023, we were excited to help lead the industry by making post-quantum cryptography available free of charge to all of our customers irrespective of plan levels.

Application default settings

To further protect our customers, we ensure our default settings provide a robust security posture right from the start. Once users are comfortable, they can change and configure any settings the way they prefer. For example, Cloudflare automatically deploys the Free Cloudflare Managed Ruleset to any new Cloudflare zone. The managed ruleset includes Log4j rules, Shellshock rules, rules matching very common WordPress exploits, and others. Customers are able to disable the ruleset, if necessary, or configure the traffic filter or individual rules. To provide an even more secure-by-default system, we also created the ML-computed WAF Attack Score that uses AI to detect bypasses of existing managed rules and can detect software exploits before they are made public.
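For illustration, overriding a single rule in a deployed managed ruleset looks roughly like the sketch below. This is a hedged example, not authoritative documentation: the endpoint and payload shape follow Cloudflare’s public v4 Rulesets API, but the token and IDs are placeholders and the details should be verified against the current docs.

```python
# Hedged sketch: disabling one rule of a managed WAF ruleset via the
# v4 Rulesets API. <API_TOKEN>, <ZONE_ID>, <MANAGED_RULESET_ID>, and
# <RULE_ID> are placeholders; verify the payload against current docs.
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

payload = {
    "rules": [{
        "action": "execute",
        "expression": "true",  # run the managed ruleset on all traffic
        "action_parameters": {
            "id": "<MANAGED_RULESET_ID>",
            "overrides": {
                "rules": [{"id": "<RULE_ID>", "enabled": False}],
            },
        },
    }]
}

resp = requests.put(
    f"{API}/zones/<ZONE_ID>/rulesets/phases/http_request_firewall_managed/entrypoint",
    headers=HEADERS,
    json=payload,
    timeout=30,
)
resp.raise_for_status()
```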

As another example, all Cloudflare accounts come with unmetered DDoS mitigation by default, protecting applications from many of the Internet’s most common and hard-to-handle attacks.

As yet another example, when customers use our R2 storage, all stored objects are encrypted at rest. Both encryption and decryption are automatic, require no user configuration, and do not impact the performance of R2.
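Conceptually, transparent encryption at rest follows the pattern in this sketch (not R2’s actual implementation): objects are sealed with an authenticated cipher on write and opened on read, with no action required from the caller.

```python
# Conceptual sketch of encryption at rest (not R2's implementation).
# Writes seal the object with AES-256-GCM; reads open it. The caller
# never sees the key, the nonce, or the ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

_key = AESGCM.generate_key(bit_length=256)  # held by the storage layer
_store: dict[str, bytes] = {}

def put_object(name: str, data: bytes) -> None:
    nonce = os.urandom(12)  # fresh nonce per write
    # Binding the object name as associated data ties the ciphertext
    # to its key in the store.
    _store[name] = nonce + AESGCM(_key).encrypt(nonce, data, name.encode())

def get_object(name: str) -> bytes:
    blob = _store[name]
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(_key).decrypt(nonce, ciphertext, name.encode())

put_object("report.txt", b"quarterly numbers")
assert get_object("report.txt") == b"quarterly numbers"
```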

Cloudflare also provides all of our customers with robust audit logs. Audit logs summarize the history of changes made within your Cloudflare account, including account-level actions like logins as well as zone configuration changes. They are captured for both individual users and multi-user organizations, and are available on all plan levels for 18 months.
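Audit logs can also be pulled programmatically. As a hedged example, assuming the v4 audit logs endpoint (the account ID and token are placeholders, and field names should be checked against current docs), fetching recent entries looks roughly like this:

```python
# Hedged sketch: listing recent audit log entries via the v4 API.
# <ACCOUNT_ID> and <API_TOKEN> are placeholders; verify endpoint and
# field names against Cloudflare's current documentation.
import requests

resp = requests.get(
    "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/audit_logs",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    params={"since": "2024-01-01T00:00:00Z", "per_page": 50},
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json().get("result", []):
    # Each entry describes who changed what, and when.
    print(entry.get("when"), entry.get("action", {}).get("type"))
```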

Embracing radical transparency and accountability

To embrace radical transparency and accountability means taking pride in delivering safe and secure products. Transparency and sharing information are crucial for improving and evolving the security industry, fostering an environment where companies learn from each other and make the online world safer. Cloudflare shows transparency in multiple ways, as outlined below.

The Cloudflare blog

On the Cloudflare blog, you can find the latest information about our features and improvements, but also about zero-day attacks that are relevant to the entire industry, like the historic HTTP/2 Rapid Reset attacks detected last year. We are transparent and write about important security incidents, such as the Thanksgiving 2023 security incident, where we go into detail about what happened, why it happened, and the steps we took to resolve it. Since Cloudflare’s inception, we have made a conscious effort to embrace radical transparency about incidents impacting our services, and we continue to embrace this important principle as one of our core values. We hope that the information we share can assist others in enhancing their software practices.

Cloudflare System Status

Cloudflare System Status is a page to inform website owners about the status of Cloudflare services. It provides information about the current status of services and whether they are operating as expected. If there are any ongoing incidents, the status page notes which services were affected, as well as details about the issue. Users can also find information about scheduled maintenance that may affect the availability of some services.

Technical transparency for code integrity

We believe in the importance of using cryptography as a technical means for transparently verifying identity and data integrity. For example, in 2022, we partnered with WhatsApp to provide a system that assures users they are running the correct, untampered code when visiting the web version of the service, by enabling the Code Verify extension to confirm hash integrity automatically. It’s this process, and the fact that it is automated on behalf of the user, that helps provide transparency in a scalable way. If users had to manually fetch, compute, and compare the hashes themselves, detecting tampering would likely only be done by a small fraction of technical users.
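The check itself is conceptually simple, as the sketch below illustrates (this shows the general idea, not the Code Verify extension’s actual code): hash every fetched resource and compare the digests against a published manifest.

```python
# Illustrative sketch of manifest-based code verification (not the
# Code Verify extension's actual code). Any mismatch between a fetched
# resource and its published hash signals tampering.
import hashlib

def verify(fetched: dict[str, bytes], manifest: dict[str, str]) -> bool:
    """True only if every fetched resource matches its published hash."""
    for path, body in fetched.items():
        if manifest.get(path) != hashlib.sha256(body).hexdigest():
            return False  # tampered, or not in the manifest at all
    return True

manifest = {"app.js": hashlib.sha256(b"render(feed)").hexdigest()}
assert verify({"app.js": b"render(feed)"}, manifest)
assert not verify({"app.js": b"exfiltrate(keys)"}, manifest)
```

Automating exactly this fetch-hash-compare loop is what makes the guarantee scale beyond the small fraction of users who would check by hand.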

Transparency report and warrant canaries

We also believe that an essential part of earning and maintaining the trust of our customers is being transparent about the requests we receive from law enforcement and other governmental entities. To this end, Cloudflare publishes semi-annual updates to our Transparency Report on the requests we have received to disclose information about our customers.

An important part of Cloudflare’s transparency report is our warrant canaries. Warrant canaries are a method to implicitly inform users that we have not taken certain actions or received certain requests from government or law enforcement authorities, such as turning over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone. Through these means we are able to let our users know just how private and secure their data is while adhering to orders from law enforcement that prohibit disclosing some of their requests. You can read Cloudflare’s warrant canaries here.

While transparency reports and warrant canaries are not explicitly mentioned in CISA’s secure by design principles, we think they are an important aspect of a technology company being transparent about its practices.

Public bug bounties

We invite you to contribute to our security efforts by participating in our public bug bounty hosted by HackerOne, where you can report Cloudflare vulnerabilities and receive financial compensation in return for your help.

Leading from the top

With this principle, security is deeply rooted in Cloudflare’s business goals. Because security and quality are tightly related, improving a product’s default security also improves the quality of the overall product.

At Cloudflare, our dedication to security is reflected in the company’s structure. Our Chief Security Officer reports directly to our CEO and presents at every board meeting. That keeps board members well informed about the current cybersecurity landscape and emphasizes the importance of the company’s initiatives to improve security.

Additionally, our security engineers are a part of the main R&D organization, with their work being as integral to our products as that of our system engineers. This means that our security engineers can bake security into the SDLC instead of bolting it on as an afterthought.

How can you help?

If you are a software manufacturer, we encourage you to familiarize yourself with CISA’s ‘Secure by Design’ principles and create a plan to implement them in your company.

As an individual, we encourage you to participate in bug bounty programs (such as Cloudflare’s HackerOne public bounty) and promote cybersecurity awareness in your community.

Let’s help build a better Internet together.

All you need to know about the Digital Services Act

Post Syndicated from Petra Arts http://blog.cloudflare.com/author/petra/ original https://blog.cloudflare.com/digital-services-act


February 17th, 2024 marked the entry into force of a landmark piece of European Union (EU) legislation, affecting European users who create and disseminate online content as well as tech companies who act as “intermediaries” on the Internet. I am talking of course about the EU Digital Services Act, or DSA for short. The DSA was first proposed in December 2020, and is meant to update a 20-year-old law called the EU e-commerce Directive, which provides important safeguards and legal certainty for all businesses operating online. The principles of that legal framework, most notably the introduction of EU-wide rules on intermediary liability, are still of major importance today. The DSA is a landmark piece of European legislation because it also sets out, for the first time, enhanced regulatory requirements for (large) digital platforms, thus affecting the entire Internet ecosystem.

At Cloudflare, we are supportive of the longstanding legal frameworks both in Europe and other parts of the world that protect Internet companies from liability for the content that is uploaded or sent through their networks by their users, subscribers or customers. These frameworks are indispensable for the growth of online services, and have been essential to the development of online applications, marketplaces and social networks.

What’s the Digital Services Act all about?

The EU Digital Services Act consists of two main parts: First, the DSA maintains the strong liability protections for intermediary services that have existed in Europe for over 20 years, and modernizes them, including by giving explicit recognition to supporting Internet services. Services that perform important roles in the functioning of the Internet, such as CDNs, reverse proxies, and technical services at the DNS level, were not explicitly mentioned in the EU e-commerce Directive at the time. The DSA, in recital 28, recognizes that those services, along with many others, are part of the fundamental fabric of the Internet and deserve protection against liability for any illegal or infringing content. This marks an important clarification milestone in EU law.

Secondly, the DSA establishes varying degrees of due diligence and transparency obligations for intermediary services that offer services in the EU. The DSA follows a ‘staggered’ or ‘cumulative’ approach to those obligations and the different services it applies to. This ranges from a number of detailed obligations for the largest platforms (so-called “Very Large Online Platforms” or VLOPs, such as the Apple App Store, Facebook, TikTok, and YouTube), down to less extensive but still impactful rules for smaller platforms, hosting services and Internet intermediaries. Importantly for the different service providers impacted, the DSA clearly distinguishes between (technical) intermediary services, “mere” hosting services, and “online platforms”, with the latter category subject to a number of additional obligations under the new law. Online platform services are considered hosting services which store information at the request of the recipients of the service, with the important additional role of also disseminating that information to the public.

This proportionate approach is in line with Cloudflare’s view of the Internet stack and the idea that infrastructure services are distinct from social media and search services that are designed to curate and recommend Internet content. This principle of a targeted, proportionate response to the matter is also embedded in the DSA itself. Recital 27 states that “(…) any requests or orders for [such] involvement should, as a general rule, be directed to the specific provider that has the technical and operational ability to act against specific items of illegal content, so as to prevent and minimise (sic) any possible negative effects on the availability and accessibility of information that is not illegal content”. This is an important provision, as principles of proportionality, freedom of speech, and access to information should be safeguarded at all times when it relates to online content.

What do the new rules mean for Cloudflare?

As a provider of intermediary services, Cloudflare has engaged with European policymakers on the topic of intermediary liability for a number of years. From the start of the legislative process on the proposed DSA in 2020 we have contributed extensively to public consultations, and have shared our views on the proposed DSA with lawmakers in Brussels.

In many ways, the final version of the law reflects our existing practices. We have long taken the position, for example, that our intermediary services should have different rules than our hosting services, as is anticipated under the DSA. We have taken a few additional measures to ensure compliance with DSA requirements. For instance, we’ve announced a new legal representative in the EU and point of contact for the purposes of the DSA.

Cloudflare has strongly believed in transparency reporting for a long time, and we have issued transparency reports twice a year since 2013. We recognize that the DSA includes some new requirements around transparency reporting, some of which match our current reports and processes, and others that do not. We will be revising our transparency reporting beyond our existing documentation to reflect the DSA’s requirements. We have also taken steps to confirm that our limited hosting services comply with DSA requirements.

The EU Digital Services Act, because of its enhanced regulatory requirements for (large) digital platforms, represents a significant change to the Internet ecosystem. Cloudflare nonetheless feels well prepared to address the different requirements that came into force on February 17, 2024, and we look forward to having positive and constructive conversations with relevant European regulators as they start to work on the enforcement of the new law.

Political Milestones for AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/political-milestones-for-ai.html

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades.

Threats of this sort seem urgent and disturbing because they’re salient. We know what to look for, and we can easily imagine their effects.

The truth is, the future will be much more interesting. And even some of the most stupendous potential impacts of AI on politics won’t be all bad. We can draw some fairly straight lines between the current capabilities of AI tools and real-world outcomes that, by the standards of current public understanding, seem truly startling.

With this in mind, we propose six milestones that will herald a new era of democratic politics driven by AI. All feel achievable—perhaps not with today’s technology and levels of AI adoption, but very possibly in the near future.

Good benchmarks should be meaningful, representing significant outcomes that come with real-world consequences. They should be plausible; they must be realistically achievable in the foreseeable future. And they should be observable—we should be able to recognize when they’ve been achieved.

Worries about AI swaying an election will very likely fail the observability test. While the risk of election manipulation through the robotic promotion of a candidate’s or party’s interests is a legitimate threat, elections are massively complex. Just as the debate continues to rage over why and how Donald Trump won the presidency in 2016, we’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.

Thinking further into the future: Could an AI candidate ever be elected to office? In the world of speculative fiction, from The Twilight Zone to Black Mirror, there is growing interest in the possibility of an AI or technologically assisted, otherwise-not-traditionally-eligible candidate winning an election. In an era where deepfaked videos can misrepresent the views and actions of human candidates and human politicians can choose to be represented by AI avatars or even robots, it is certainly possible for an AI candidate to mimic the media presence of a politician. Virtual politicians have received votes in national elections, for example in Russia in 2017. But this doesn’t pass the plausibility test. The voting public and legal establishment are likely to accept more and more automation and assistance supported by AI, but the age of non-human elected officials is far off.

Let’s start with some milestones that are already on the cusp of reality. These are achievements that seem well within the technical scope of existing AI technologies and for which the groundwork has already been laid.

Milestone #1: The acceptance by a legislature or agency of a testimony or comment generated by, and submitted under the name of, an AI.

Arguably, we’ve already seen legislation drafted by AI, albeit under the direction of human users and introduced by human legislators. After some early examples of bills written by AIs were introduced in Massachusetts and the US House of Representatives, many major legislative bodies have had their “first bill written by AI,” “used ChatGPT to generate committee remarks,” or “first floor speech written by AI” events.

Many of these bills and speeches are more stunt than serious, and they have received more criticism than consideration. They are short, have trivial levels of policy substance, or were heavily edited or guided by human legislators (through highly specific prompts to large language model-based AI tools like ChatGPT).

The interesting milestone along these lines will be the acceptance of testimony on legislation, or a comment submitted to an agency, drafted entirely by AI. To be sure, a large fraction of all writing going forward will be assisted by—and will truly benefit from—AI assistive technologies. So to avoid making this milestone trivial, we have to add the second clause: “submitted under the name of the AI.”

What would make this benchmark significant is the submission under the AI’s own name; that is, the acceptance by a governing body of the AI as proffering a legitimate perspective in public debate. Regardless of the public fervor over AI, this one won’t take long. The New York Times has published a letter under the name of ChatGPT (responding to an opinion piece we wrote), and legislators are already turning to AI to write high-profile opening remarks at committee hearings.

Milestone #2: The adoption of the first novel legislative amendment to a bill written by AI.

Moving beyond testimony, there is an immediate pathway for AI-generated policies to become law: microlegislation. This involves making tweaks to existing laws or bills that are tuned to serve some particular interest. It is a natural starting point for AI because it’s tightly scoped, involving small changes guided by a clear directive associated with a well-defined purpose.

By design, microlegislation is often implemented surreptitiously. It may even be filed anonymously within a deluge of other amendments to obscure its intended beneficiary. For that reason, microlegislation can often be bad for society, and it is ripe for exploitation by generative AI that would otherwise be subject to heavy scrutiny from a polity on guard for risks posed by AI.

Milestone #3: AI-generated political messaging outscores campaign consultant recommendations in poll testing.

Some of the most important near-term implications of AI for politics will happen largely behind closed doors. Like everyone else, political campaigners and pollsters will turn to AI to help with their jobs. We’re already seeing campaigners turn to AI-generated images to manufacture social content and pollsters simulate results using AI-generated respondents.

The next step in this evolution is political messaging developed by AI. A mainstay of the campaigner’s toolbox today is the message testing survey, where a few alternate formulations of a position are written down and tested with audiences to see which will generate more attention and a more positive response. Just as an experienced political pollster can anticipate effective messaging strategies pretty well based on observations from past campaigns and their impression of the state of the public debate, so can an AI trained on reams of public discourse, campaign rhetoric, and political reporting.

With these near-term milestones firmly in sight, let’s look further to some truly revolutionary possibilities. While these concepts may have seemed absurd just a year ago, they are increasingly conceivable with either current or near-future technologies.

Milestone #4: AI creates a political party with its own platform, attracting human candidates who win elections.

While an AI is unlikely to be allowed to run for and hold office, it is plausible that one may be able to found a political party. An AI could generate a political platform calculated to attract the interest of some cross-section of the public and, acting independently or through a human intermediary (hired help, like a political consultant or legal firm), could register formally as a political party. It could collect signatures to win a place on ballots and attract human candidates to run for office under its banner.

A big step in this direction has already been taken, via the campaign of the Danish Synthetic Party in 2022. An artist collective in Denmark created an AI chatbot to interact with human members of its community on Discord, exploring political ideology in conversation with them and on the basis of an analysis of historical party platforms in the country. All this happened with earlier generations of general purpose AI, not current systems like ChatGPT. However, the party failed to receive enough signatures to earn a spot on the ballot, and therefore did not win parliamentary representation.

Future AI-led efforts may succeed. One could imagine a generative AI with skills at or beyond the level of today’s leading technologies formulating a set of policy positions targeted to build support among people of a specific demographic, or even an effective consensus platform capable of attracting broad-based support. Particularly in a European-style multiparty system, we can imagine a new party with a strong news hook—an AI at its core—winning attention and votes.

Milestone #5: AI autonomously generates profit and makes political campaign contributions.

Let’s turn next to the essential capability of modern politics: fundraising. “An entity capable of directing contributions to a campaign fund” might be a realpolitik definition of a political actor, and AI is potentially capable of this.

Like a human, an AI could conceivably generate contributions to a political campaign in a variety of ways. It could take a seed investment from a human controlling the AI and invest it to yield a return. It could start a business that generates revenue. There is growing interest and experimentation in auto-hustling: AI agents that set about autonomously growing businesses or otherwise generating profit. While ChatGPT-generated businesses may not yet have taken the world by storm, this possibility is in the same spirit as the algorithmic agents powering modern high-speed trading and so-called autonomous finance capabilities that are already helping to automate business and financial decisions.

Or, like most political entrepreneurs, AI could generate political messaging to convince humans to spend their own money on a defined campaign or cause. The AI would likely need to have some humans in the loop, and register its activities with the government (in the US context, as officers of a 501(c)(4) or political action committee).

Milestone #6: AI achieves a coordinated policy outcome across multiple jurisdictions.

Lastly, we come to the most meaningful of impacts: achieving outcomes in public policy. Even if AI cannot—now or in the future—be said to have its own desires or preferences, it could be programmed by humans to have a goal, such as lowering taxes or relieving a market regulation.

An AI has many of the same tools humans use to achieve these ends. It may advocate, formulating messaging and promoting ideas through digital channels like social media posts and videos. It may lobby, directing ideas and influence to key policymakers, even writing legislation. It may spend; see milestone #5.

The “multiple jurisdictions” piece is key to this milestone. A single law passed may be reasonably attributed to myriad factors: a charismatic champion, a political movement, a change in circumstances. The influence of any one actor, such as an AI, will be more demonstrable if it is successful simultaneously in many different places. And the digital scalability of AI gives it a special advantage in achieving these kinds of coordinated outcomes.

The greatest challenge to most of these milestones is their observability: will we know it when we see it? The first campaign consultant whose ideas lose out to an AI may not be eager to report that fact. Neither will the campaign. Regarding fundraising, it’s hard enough for us to track down the human actors who are responsible for the “dark money” contributions controlling much of modern political finance; will we know if a future dominant force in fundraising for political action committees is an AI?

We’re likely to observe some of these milestones indirectly. At some point, perhaps politicians’ dollars will start migrating en masse to AI-based campaign consultancies and, eventually, we may realize that political movements sweeping across states or countries have been AI-assisted.

While the progression of technology is often unsettling, we need not fear these milestones. A new political platform that wins public support is itself a neutral proposition; it may lead to good or bad policy outcomes. Likewise, a successful policy program may or may not be beneficial to one group of constituents or another.

We think the six milestones outlined here are among the most viable and meaningful upcoming interactions between AI and democracy, but they are hardly the only scenarios to consider. The point is that our AI-driven political future will involve far more than deepfaked campaign ads and manufactured letter-writing campaigns. We should all be thinking more creatively about what comes next and be vigilant in steering our politics toward the best possible ends, no matter their means.

This essay was written with Nathan Sanders, and previously appeared in MIT Technology Review.

The AI Dividend

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/the-ai-dividend.html

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge or consent, without licensing it, and pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable—not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies—those with substantial revenue—would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyrighted material beyond fair use, that’s a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
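Working backward from those figures makes the scale concrete. A back-of-envelope sketch, assuming a US population of roughly 330 million and reading “a couple hundred dollars” as $200:

```python
# Back-of-envelope check of the proposal's numbers. The population
# figure and the $200 reading of "a couple hundred dollars" are
# assumptions for illustration.
FEE_PER_WORD = 0.001          # dollars per AI-generated word (proposal)
POPULATION = 330_000_000      # assumed US population
DIVIDEND_PER_PERSON = 200     # assumed annual payout

fund_total = DIVIDEND_PER_PERSON * POPULATION   # ~$66 billion per year
words_needed = fund_total / FEE_PER_WORD        # ~6.6e13 words per year

print(f"Fund size: ${fund_total / 1e9:.0f}B per year")
print(f"Implied AI output: {words_needed:.1e} words per year")
```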

The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

Our plan would apply to generative AI used in the US. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just as an American company collects VAT for services sold in Europe but not at home, each country can independently manage its AI policy.

Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable, and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks—both everyday and esoteric—posed by AI, and the government may need to develop policies to remedy any harms that arise.

Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides—particularly since this new technology isn’t possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on Politico.com.