
Your guide to AWS Communication Developer Services (CDS) at re:Invent 2022

Post Syndicated from Lauren Cudney original https://aws.amazon.com/blogs/messaging-and-targeting/your-guide-to-amazon-communication-developer-services-cds-at-reinvent-2022/


AWS Communication Developer Services has an exciting lineup at re:Invent this year, with 12 sessions spanning communication solutions through Amazon Pinpoint, Amazon Simple Email Service, and Amazon Chime SDK. With breakouts, workshops, and an in-person customer reception, there will be something for everyone. Here is a detailed guide to help you build your agenda and get the most out of your re:Invent 2022 experience.

AWS guide to CDS CPaaS at re:Invent

In addition to AWS executive keynotes, like Adam Selipsky’s on Tuesday morning, there are several sessions you don’t want to miss:

Topic – CPaaS and AWS Communication Services

Improving multichannel communications (BIZ201) – Breakout session
Wednesday, November 30, 11:30 AM – 12:30 PM PDT
Your communication strategy directly impacts customer satisfaction, revenue, growth, and retention. This session covers how the customer communication landscape is changing and how you can shift your product strategies to impact business outcomes. Learn how AWS Communication Developer Services (CDS) improve customer communication through CPaaS channels like email, SMS, push notifications, telephony, audio, and video. Explore how building a multichannel architecture can shift user habits and increase satisfaction. This session is tailored for chief marketing officers, chief product officers, chief innovation officers, marketing technologists, and communication builders.
Speaker: Siddhartha Rao, GM, Amazon Chime SDK, Amazon Web Services

Topic – Email (Amazon Simple Email Service)

Improve email inbox deliverability with innovation from Amazon SES (BIZ306) – Chalk Talk
Wednesday, November 30, 4:00 – 5:00 PM PDT
This chalk talk demonstrates how to improve email deliverability, leading to increased email ROI. Learn how new features in Amazon SES increase deliverability by suggesting system and configuration changes and then automatically implementing them for more seamless sending. This chalk talk is designed for email data engineers, marketing operations professionals, and marketing engineers.

Topic – Messaging and Orchestration through SMS, Push, In App (Amazon Pinpoint)

Successfully engage customers using Amazon Pinpoint’s SMS channel (BIZ301) – Chalk Talk
Monday, November 28, 1:00 – 2:00 PM PDT
Developing or upgrading an SMS program can be challenging for any enterprise undergoing digital transformation. In this chalk talk, Amazon Pinpoint experts demonstrate how to build a two-way SMS solution and how to monitor global message deliverability. This talk is designed for communication builders, marketing technologists, and IT decision-makers.

Integrate AI and ML into customer engagement (BIZ309) – Chalk Talk
Thursday, December 1, 3:30 – 4:30 PM PDT
In this chalk talk, learn how to use Amazon Lex AI and Amazon Personalize ML to create two-way engagements with customers in Amazon Pinpoint. Amazon Pinpoint is an omni-channel messaging platform on AWS, supporting SMS, push notifications, email, voice, and custom channels. Follow along as Amazon Pinpoint specialists walk through sample architecture and code that uses Amazon Lex AI chatbots to check order status, share updates via the customer’s preferred channel, and personalize responses to the customer, taking customer engagement to the next level.

Topic – Voice and Video (Amazon Chime SDK)

Use AI to live transcribe and translate multiparty video calls – Workshop
Thursday, December 1, 2:00 – 4:00 PM PDT
Harness the power of real-time transcription in web-based voice or video calls through the Amazon Chime SDK. Join this interactive workshop to learn how to build transcription and translation into multiparty video calls requiring additional compliance and language accessibility, such as telehealth appointments, government services, parent-teacher interactions, or financial services. Experts guide attendees in a hands-on demo that combines the power of Amazon Chime SDK, Amazon Transcribe, and Amazon Translate into one simple interface. This workshop is designed for communication developers. You must bring your laptop to participate.

We look forward to seeing you at re:Invent!

Iran’s Digital Surveillance Tools Leaked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/irans-digital-surveillance-tools-leaked.html

It’s Iran’s turn to have its digital surveillance tools leaked:

According to these internal documents, SIAM is a computer system that works behind the scenes of Iranian cellular networks, providing its operators a broad menu of remote commands to alter, disrupt, and monitor how customers use their phones. The tools can slow their data connections to a crawl, break the encryption of phone calls, track the movements of individuals or large groups, and produce detailed metadata summaries of who spoke to whom, when, and where. Such a system could help the government invisibly quash the ongoing protests—or those of tomorrow—an expert who reviewed the SIAM documents told The Intercept.

[…]

SIAM gives the government’s Communications Regulatory Authority—Iran’s telecommunications regulator—turnkey access to the activities and capabilities of the country’s mobile users. “Based on CRA rules and regulations all telecom operators must provide CRA direct access to their system for query customers information and change their services via web service,” reads an English-language document obtained by The Intercept. (Neither the CRA nor Iran’s mission to the United Nations responded to a request for comment.)

Lots of details, and links to the leaked documents, at the Intercept webpage.

Apple Only Commits to Patching Latest OS Version

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/apple-only-commits-to-patching-latest-os-version.html

People have suspected this for a while, but Apple has made it official. It only commits to fully patching the latest version of its OS, even though it claims to support older versions.

From ArsTechnica:

In other words, while Apple will provide security-related updates for older versions of its operating systems, only the most recent upgrades will receive updates for every security problem Apple knows about. Apple currently provides security updates to macOS 11 Big Sur and macOS 12 Monterey alongside the newly released macOS Ventura, and in the past, it has released security updates for older iOS versions for devices that can’t install the latest upgrades.

This confirms something that independent security researchers have been aware of for a while but that Apple hasn’t publicly articulated before. Intego Chief Security Analyst Joshua Long has tracked the CVEs patched by different macOS and iOS updates for years and generally found that bugs patched in the newest OS versions can go months before being patched in older (but still ostensibly “supported”) versions, when they’re patched at all.

Friday Squid Blogging: Chinese Squid Fishing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/friday-squid-blogging-chinese-squid-fishing.html

China claims that it is “engaging in responsible squid fishing”:

Chen Xinjun, dean of the College of Marine Sciences at Shanghai Ocean University, made the remarks in response to recent accusations by foreign reporters and actor Leonardo DiCaprio that China is depleting its own fish stock and that Chinese boats have sailed to other waters to continue deep-sea fishing, particularly near Ecuador, affecting local fish stocks in the South American nation.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Critical Vulnerability in OpenSSL

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/critical-vulnerability-in-open-ssl.html

There are no details yet, but it’s really important that you patch OpenSSL 3.x when the new version comes out on Tuesday.

How bad is “Critical”? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable.

It’s likely to be abused to disclose server memory contents and potentially reveal user details, and could be easily exploited remotely to compromise server private keys or execute code remotely. In other words, pretty much everything you don’t want happening on your production systems.

Slashdot thread.

Building highly resilient applications with on-premises interdependencies using AWS Local Zones

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/building-highly-resilient-applications-with-on-premises-interdependencies-using-aws-local-zones/

This blog post is written by Rachel Rui Liu, Senior Solutions Architect.

AWS Local Zones are a type of infrastructure deployment that places compute, storage, database, and other select AWS services close to large population and industry centers.

Following the successful launch of AWS Local Zones in 16 US cities since 2019, AWS announced in February 2022 plans to launch new AWS Local Zones in 32 metropolitan areas in 26 countries worldwide.

With Local Zones, we’ve seen use cases in two common categories.

The first category of use cases is for workloads that require extremely low latency between end-user devices and workload servers. For example, let’s consider media content creation and real-time multiplayer gaming. For these use cases, deploying the workload to a Local Zone can help achieve latency as low as single-digit milliseconds between end-user devices and the AWS infrastructure, which is ideal for a good end-user experience.

This post focuses on the second category of use cases, which is commonly seen in enterprise hybrid architectures where customers must achieve low latency between AWS infrastructure and existing on-premises data centers. Compared to the first category of use cases, these use cases can tolerate slightly higher latency between the end-user devices and the AWS infrastructure. However, these workloads have dependencies on the on-premises systems, so the lowest possible latency between AWS infrastructure and on-premises data centers is required for better application performance. Here are a few examples of these systems:

  • Financial services sector mainframe workloads hosted on premises serving regional customers.
  • Enterprise Active Directory hosted on premises serving cloud and on-premises workloads.
  • Enterprise applications hosted on premises processing a high volume of locally generated data.

For workloads deployed in AWS, the time taken for each interaction with components still hosted in the on-premises data center is increased by the latency. In turn, this delays responses received by the end-user. The total latency accumulates and results in suboptimal user experiences.

By deploying modernized workloads in Local Zones, you can reduce latency while continuing to access systems hosted in on-premises data centers, thereby reducing the total latency for the end-user. At the same time, you can enjoy the benefits of agility, elasticity, and security offered by AWS, and can apply the same automation, compliance, and security best practices that you are already familiar with from the AWS Regions.

Enterprise workload resiliency with Local Zones

While designing hybrid architectures with Local Zones, resiliency is an important consideration. You want to route traffic to the nearest Local Zone for low latency. However, when disasters happen, it’s critical to fail over to the parent Region automatically.

Let’s look at the details of hybrid architecture design based on real world deployments from different angles to understand how the architecture achieves all of the design goals.

Hybrid architecture with resilient network connectivity

The following diagram shows a high-level overview of a resilient enterprise hybrid architecture with Local Zones, where you have redundant connections between the AWS Region, the Local Zone, and the corporate data center.

resilient network connectivity

Here are a few key points with this network connectivity design:

  1. Use AWS Direct Connect or Site-to-Site VPN to connect the corporate data center and AWS Region.
  2. Use Direct Connect or self-hosted VPN to connect the corporate data center and the Local Zone. This connection will provide dedicated low-latency connectivity between the Local Zone and corporate data center.
  3. Transit Gateway is a regional service. When attaching the VPC to AWS Transit Gateway, you can only add subnets provisioned in the Region. Instances on subnets in the Local Zone can still use Transit Gateway to reach resources in the Region.
  4. For subnets provisioned in the Region, the VPC route table should be configured to route the traffic to the corporate data center via Transit Gateway.
  5. For subnets provisioned in the Local Zone, the VPC route table should be configured to route the traffic to the corporate data center via the self-hosted VPN instance or Direct Connect (a route-configuration sketch follows this list).
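
As a rough illustration of points 4 and 5, the following minimal sketch uses the AWS SDK for JavaScript v3 to add the on-premises route to each route table. The route table IDs, transit gateway ID, VPN instance network interface ID, and CIDR are hypothetical placeholders; in the Direct Connect case, the route target for the Local Zone subnets would differ.

const { EC2Client, CreateRouteCommand } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-east-1' });
const onPremCidr = '10.100.0.0/16'; // hypothetical corporate data center CIDR

async function addOnPremRoutes() {
    // Point 4: subnets in the Region reach the corporate data center via the Transit Gateway.
    await ec2.send(new CreateRouteCommand({
        RouteTableId: 'rtb-0regionexample',
        DestinationCidrBlock: onPremCidr,
        TransitGatewayId: 'tgw-0example'
    }));

    // Point 5: subnets in the Local Zone reach the corporate data center via the
    // self-hosted VPN instance's network interface instead of the Transit Gateway.
    await ec2.send(new CreateRouteCommand({
        RouteTableId: 'rtb-0localzoneexample',
        DestinationCidrBlock: onPremCidr,
        NetworkInterfaceId: 'eni-0vpninstance'
    }));
}

addOnPremRoutes().catch(console.error);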

Hybrid architecture with resilient workload deployment

The next examples show a public and a private facing workload.

To simplify the diagram and focus on application layer architecture, the following diagrams assume that you are using Direct Connect to connect between AWS and the on-premises data center.

Example 1: Resilient public facing workload

With a public facing workload, end-user traffic will be routed to the Local Zone. If the Local Zone is unavailable, then the traffic will be routed to the Region automatically using an Amazon Route 53 failover policy.

public facing workload resiliency
Here are the key design considerations for this architecture:

  1. Deploy the workload in the Local Zone and put the compute layer in an Amazon EC2 Auto Scaling group, so that the application can scale up and down depending on the volume of requests.
  2. Deploy the workload in both the Local Zone and an AWS Region, and put the compute layer into an Auto Scaling group. The regional deployment acts as a pilot light or warm standby with a minimal footprint, but it can scale out when the Local Zone is unavailable.
  3. Two Application Load Balancers (ALBs) are required: one in the Region and one in the Local Zone. Each ALB dispatches traffic to the workload cluster inside the Auto Scaling group local to it.
  4. An internet gateway is required for public facing workloads. When using a Local Zone, there’s no extra configuration needed: define a single internet gateway and attach it to the VPC.

If you want to specify an Elastic IP address as the workload’s public endpoint, note that the Local Zone has a different address pool than the Region and that BYOIP is not supported for Local Zones.

  5. Create a Route 53 DNS record with “Failover” as the routing policy.
  • For the primary record, point it to the alias of the ALB in the Local Zone. This will set Local Zone as the preferred destination for the application traffic which minimizes latency for end-users.
  • For the secondary record, point it to the alias of the ALB in the AWS Region.
  • Enable a health check for the primary record. If the health check for the primary record fails, indicating that the workload deployed in the Local Zone has failed to respond, Route 53 automatically fails over to the secondary record, which points to the workload deployed in the AWS Region (see the sketch after this list).
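
The following minimal sketch shows what these failover records could look like with the AWS SDK for JavaScript v3. The hosted zone ID, record name, ALB DNS names, ALB hosted zone IDs, and health check ID are hypothetical placeholders.

const { Route53Client, ChangeResourceRecordSetsCommand } = require('@aws-sdk/client-route-53');

const route53 = new Route53Client({ region: 'us-east-1' });

// Two alias A records with a Failover routing policy: the primary points to the
// Local Zone ALB (with a health check), the secondary to the ALB in the Region.
const params = {
    HostedZoneId: 'Z0EXAMPLEZONE',
    ChangeBatch: {
        Changes: [
            {
                Action: 'UPSERT',
                ResourceRecordSet: {
                    Name: 'app.example.com',
                    Type: 'A',
                    SetIdentifier: 'local-zone-primary',
                    Failover: 'PRIMARY',
                    HealthCheckId: 'abcd1234-example-healthcheck',
                    AliasTarget: {
                        HostedZoneId: 'Z0EXAMPLEALBLZ', // hosted zone ID of the Local Zone ALB
                        DNSName: 'lz-alb-1234.us-east-1.elb.amazonaws.com',
                        EvaluateTargetHealth: true
                    }
                }
            },
            {
                Action: 'UPSERT',
                ResourceRecordSet: {
                    Name: 'app.example.com',
                    Type: 'A',
                    SetIdentifier: 'region-secondary',
                    Failover: 'SECONDARY',
                    AliasTarget: {
                        HostedZoneId: 'Z0EXAMPLEALBRGN', // hosted zone ID of the Region ALB
                        DNSName: 'region-alb-5678.us-east-1.elb.amazonaws.com',
                        EvaluateTargetHealth: true
                    }
                }
            }
        ]
    }
};

route53.send(new ChangeResourceRecordSetsCommand(params)).catch(console.error);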

Example 2: Resilient private workload

For a private workload that’s only accessible by internal users, a few extra considerations must be made to keep the traffic inside of the trusted private network.

private workload resiliency

The architecture for a resilient private facing workload follows the same steps as the public facing workload, with some key differences:

  1. Instead of using a public hosted zone, create private hosted zones in Route 53 to respond to DNS queries for the workload.
  2. Create the primary and secondary records in Route 53 just like the public workload but referencing the private ALBs.
  3. To allow end-users onto the corporate network (within offices or connected via VPN) to resolve the workload, use the Route 53 Resolver with an inbound endpoint. This allows end-users located on-premises to resolve the records in the private hosted zone. Route 53 Resolver is designed to be integrated with an on-premises DNS server.
  4. No internet gateway is required for hosting the private workload. You might need an internet gateway in the Local Zone for other purposes: for example, to host a self-managed VPN solution to connect the Local Zone with the corporate data center.

Hosting multiple workloads

Customers who host multiple workloads in a single VPC generally must consider how to segregate those workloads. As with workloads in the AWS Region, segregation can be implemented at a subnet or VPC level.

If you want to segregate workloads at the subnet level, you can extend your existing VPC architecture by provisioning extra sets of subnets to the Local Zone.

segregate workloads at subnet level

Although not shown in the diagram, for those of you using a self-hosted VPN to connect the Local Zone with an on-premises data center, the VPN solution can be deployed in a centralized subnet.

You can continue to use security groups, network access control lists (NACLs), and VPC route tables, just as you would in the Region, to segregate the workloads.

If you want to segregate workloads at the VPC level, as many of our customers do, note that within the Region, inter-VPC routing is generally handled by Transit Gateway. However, in this case, it may be undesirable to send traffic to the Region to reach a subnet in another VPC that is also extended to the Local Zone.

segregate workloads at VPC level

Key considerations for this design are as follows:

  1. Direct Connect is deployed to connect the Local Zone with the corporate data center. Therefore, each VPC will have a dedicated Virtual Private Gateway provisioned to allow association with the Direct Connect Gateway.
  2. To enable inter-VPC traffic within the Local Zone, peer the two VPCs together.
  3. Create a VPC route table in VPC A. Add a route for Subnet Y where the destination is the peering link. Assign this route table to Subnet X.
  4. Create a VPC route table in VPC B. Add a route for Subnet X where the destination is the peering link. Assign this route table to Subnet Y.
  5. If necessary, add routes for on-premises networks and the transit gateway to both route tables.

This design allows traffic between subnets X and Y to stay within the Local Zone, thereby avoiding any latency from the Local Zone to the AWS Region while still permitting full connectivity to all other networks.
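
As a minimal sketch of steps 2 through 4 above, the following AWS SDK for JavaScript v3 snippet peers the two VPCs and adds the cross-VPC routes. All resource IDs and CIDRs are hypothetical placeholders, and both VPCs are assumed to be in the same account and Region.

const {
    EC2Client,
    CreateVpcPeeringConnectionCommand,
    AcceptVpcPeeringConnectionCommand,
    CreateRouteCommand
} = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-east-1' });

async function peerAndRoute() {
    // Step 2: peer VPC A and VPC B so that inter-VPC traffic can stay inside the Local Zone.
    const { VpcPeeringConnection } = await ec2.send(new CreateVpcPeeringConnectionCommand({
        VpcId: 'vpc-0aaaaexample',      // VPC A
        PeerVpcId: 'vpc-0bbbbexample'   // VPC B
    }));
    const peeringId = VpcPeeringConnection.VpcPeeringConnectionId;
    await ec2.send(new AcceptVpcPeeringConnectionCommand({ VpcPeeringConnectionId: peeringId }));

    // Step 3: in the route table associated with Subnet X (VPC A), route Subnet Y via the peering link.
    await ec2.send(new CreateRouteCommand({
        RouteTableId: 'rtb-0subnetx',
        DestinationCidrBlock: '10.2.1.0/24',  // Subnet Y CIDR
        VpcPeeringConnectionId: peeringId
    }));

    // Step 4: in the route table associated with Subnet Y (VPC B), route Subnet X via the peering link.
    await ec2.send(new CreateRouteCommand({
        RouteTableId: 'rtb-0subnety',
        DestinationCidrBlock: '10.1.1.0/24',  // Subnet X CIDR
        VpcPeeringConnectionId: peeringId
    }));
}

peerAndRoute().catch(console.error);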

Conclusion

In this post, we summarized the use cases for enterprise hybrid architecture with Local Zones, and showed you:

  • Reference architectures to host workloads in Local Zones with low-latency connectivity to corporate data centers and the resiliency to fail over to the AWS Region automatically.
  • Different design considerations for public and private facing workloads utilizing this hybrid architecture.
  • Segregation and connectivity considerations when extending this hybrid architecture to host multiple workloads.

Hopefully you will be able to follow along with these reference architectures to build and run highly resilient applications with local system interdependencies using Local Zones.

Australia Increases Fines for Massive Data Breaches

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/australia-increases-fines-for-massive-data-breaches.html

After suffering two large, and embarrassing, data breaches in recent weeks, the Australian government increased the fine for serious data breaches from $2.2 million to a minimum of $50 million. (That’s $50 million AUD, or $32 million USD.)

This is a welcome change. The problem is one of incentives, and Australia has now increased the incentive for companies to secure the personal data of their users and customers.

On the Randomness of Automatic Card Shufflers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/on-the-randomness-of-automatic-card-shufflers.html

Many years ago, Matt Blaze and I talked about getting our hands on a casino-grade automatic shuffler and looking for vulnerabilities. We never did it—I remember that we didn’t even try very hard—but this article shows that we probably would have found non-random properties:

…the executives had recently discovered that one of their machines had been hacked by a gang of hustlers. The gang used a hidden video camera to record the workings of the card shuffler through a glass window. The images, transmitted to an accomplice outside in the casino parking lot, were played back in slow motion to figure out the sequence of cards in the deck, which was then communicated back to the gamblers inside. The casino lost millions of dollars before the gang were finally caught.

Stanford mathematician Persi Diaconis found other flaws:

With his collaborator Susan Holmes, a statistician at Stanford, Diaconis travelled to the company’s Las Vegas showroom to examine a prototype of their new machine. The pair soon discovered a flaw. Although the mechanical shuffling action appeared random, the mathematicians noticed that the resulting deck still had rising and falling sequences, which meant that they could make predictions about the card order.

New Scientist article behind a paywall. Slashdot thread.

On the Economy

Post Syndicated from original http://www.gatchev.info/blog/?p=2505

A little while ago I dashed off a comment on a social network on the topic of economics. I then decided it might be worth posting it here as well. Not that I haven't written it before. But some people catch on more slowly, so it has to be repeated to them more often.

Economics is not somebody's idle talk; it is a science, just as physics or genetics is a science. Its laws are just as real and unbreakable as the laws of physics.

(Yes, the place is full of ideologues trying to pass off ideological theories as economic ones. That does not make those theories any more economic or any less ideological, just as the geocentric theory of the solar system did not make the Sun revolve around the Earth. Whoever believes the lie that they are reality rather than ideology does so at their own expense.)

The economic cycle can be described very simply (if only partially): a loop of production and consumption, and a loop of means of exchange. The two "spin" in opposite directions and feed each other.

The production loop: available goods and services are consumed. In the course of that consumption, materials, goods, services, labor, and so on are created and fed into production. Production creates certain benefits (goods, services, and so on) which go toward consumption.

The means-of-exchange loop: to get access to goods and services, consumers give means of exchange (for example, money) in return for them. Producers use those means of exchange to buy the materials, goods, services, labor, and so on that go into production.

The key to the picture is that the production process can create more output than it consumes. Thanks to this, the economy can grow, the population as a whole can get richer, and so on. Whether it actually does depends on a few things. The crucial one is strong competition; without it there is no incentive for efficiency, regardless of the idle talk and the lies on the subject.

For the whole scheme to work and produce a surplus, two conditions are needed. One is maximum freedom of the production process, so that it can be as efficient as possible. The other is redistribution of the means of exchange (the money) so that a sufficient share of it ends up with consumers, so they can buy the goods produced and the process does not stall.

To those suffering from ideology of the brain, the first condition sounds very right-wing. The second sounds to them very left-wing. In reality, the first is not right-wing and the second is not left-wing. They are simply necessities for the economic machine to work, just as pouring in gasoline, supplying air, and removing the exhaust gases are necessities for an internal combustion engine to work.

Cut off any one of them, no matter how ideologically "heretical" it may be, and the engine will stop working, no matter how ideologically "irrefutable" the rest are. I may be in ecstasy over the idea of an internal combustion engine that runs without consuming fuel, and firmly resolved that I would rather die than pour gasoline into it. What will happen: will the engine run without gasoline, or will I turn out to be an obvious idiot? Answer that for yourselves.

The same goes for the machine of the economy. Remove either of these two conditions and it will stop working, no matter how much you like the idea. No matter how convinced you are that the idea is true, irrefutable, great, sacred, and so on. And no matter which and how many great, proven sages have declared that the trick will work. Those who have the sense to maximize both will win evolution's contest. Those who don't will drop out; they will turn out to be, well, you already know what.

That's how things stand. Choose for yourselves which of the two you want to be.

Friday Squid Blogging: The Reproductive Habits of Giant Squid

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/friday-squid-blogging-the-reproductive-habits-of-giant-squid.html

Interesting:

A recent study on giant squid that have washed ashore along the Sea of Japan coast has raised the possibility that the animal has a different reproductive method than many other types of squid.

Almost all squid and octopus species are polygamous, with multiple males passing sperm to a single female. Giant squids were thought to have a similar form of reproduction.

However, a group led by Professor Noritaka Hirohashi, 57, a professor of reproductive biology in the Faculty of Life and Environmental Sciences at Shimane University suspects differently.

They examined 66 sperm “bags” attached to five different locations on the body of a female that washed ashore in Ine Town of Kyoto Prefecture in 2020, and found that all of them were from the same male.

It is rare for a female with sperm attached to be found, and further verification is needed, but the study’s results indicate that giant squid, unlike other squids, may be “monogamous.” That is, females may receive sperm from only one certain male. Hirohashi and his colleagues published their findings in an international scientific journal in July of 2021.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Adversarial ML Attack that Secretly Gives a Language Model a Point of View

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/adversarial-ml-attack-that-secretly-gives-a-language-model-a-point-of-view.html

Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the next. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”

Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization.

Model spinning introduces a “meta-backdoor” into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.

Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims.

To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call “pseudo-words,” and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary’s meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models.

This new attack dovetails with something I’ve been worried about for a while, something Latanya Sweeney has dubbed “persona bots.” This is what I wrote in my upcoming book (to be published in February):

One example of an extension of this technology is the “persona bot,” an AI posing as an individual on social media and other online groups. Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary. Or maybe it might offer its developer’s opinions about a recent election, or racial justice, or any other polarizing subject. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?

These are chatbots on a very small scale. They would participate in small forums around the Internet: hobbyist groups, book groups, whatever. In general they would behave normally, participating in discussions like a person does. But occasionally they would say something partisan or political, depending on the desires of their owners. Because they’re all unique and only occasional, it would be hard for existing bot detection techniques to find them. And because they can be replicated by the millions across social media, they could have a greater effect. They would affect what we think, and—just as importantly—what we think others think. What we will see as robust political discussions would be persona bots arguing with other persona bots.

Attacks like these add another wrinkle to that sort of scenario.

Museum Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/museum-security.html

Interesting interview:

Banks don’t take millions of dollars and put them in plastic bags and hang them on the wall so everybody can walk right up to them. But we do basically the same thing in museums and hang the assets right out on the wall. So it’s our job, then, to either use technology or develop technology that protects the art, to hire honest guards that are trainable and able to meet the challenge and alert and so forth. And we have to keep them alert because it’s the world’s most boring job. It might be great for you to go to a museum and see it for a day, but they stand in that same gallery year after year, and so they get mental fatigue. And so we have to rotate them around and give them responsibilities that keep them stimulated and keep them fresh.

It’s a challenge. But we try to predict the items that might be most vulnerable. Which are not necessarily most valuable; some things have symbolic significance to them. And then we try to predict what the next targets might be and advise our clients that they maybe need to put special security on those items.

Propagating valid mTLS client certificate identity to downstream services using Amazon API Gateway

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/propagating-valid-mtls-client-certificate-identity-to-downstream-services-using-amazon-api-gateway/

This blog was written by Omkar Deshmane, Senior SA, and Anton Aleksandrov, Principal SA, Serverless.

This blog shows how to use Amazon API Gateway with a custom authorizer to process incoming requests, validate the mTLS client certificate, extract the client certificate subject, and propagate it to the downstream application in a base64 encoded HTTP header.

This pattern allows you to terminate mTLS at the edge so downstream applications do not need to perform client certificate validation. With this approach, developers can focus on application logic and offload mTLS certificate management and validation to a dedicated service, such as API Gateway.

Overview

Authentication is one of the core security aspects that you must address when building a cloud application. Successful authentication proves you are who you are claiming to be. There are various common authentication patterns, such as cookie-based authentication, token-based authentication, or, the topic of this blog, certificate-based authentication.

Transport Layer Security (TLS) certificates are at the core of a secure and safe internet. TLS certificates secure the connection between the client and server by encrypting data, ensuring private communication. When using the TLS protocol, the server must prove its identity to the client using a certificate signed by a certificate authority trusted by the client.

Mutual TLS (mTLS) introduces an additional layer of security, in which both the client and server must prove their identities to each other. Developers commonly use mTLS for application-to-application authentication — using digital certificates to represent both client and server apps is a common authentication pattern for building such workflows. We highly recommend decoupling the mTLS implementation from the application business logic so that you do not have to update the application when changing the mTLS configuration. It is a common pattern to implement the mTLS authentication and termination in a network appliance at the edge, such as Amazon API Gateway.

In this solution, we show a pattern of using API Gateway with an authorizer implemented with AWS Lambda to validate the mTLS client certificate, extract the client certificate subject, and propagate it to the downstream application in a base64 encoded HTTP header.

While this blog describes how to implement this pattern for identities extracted from the mTLS client certificates, you can generalize it and apply to propagating information obtained via any other means of authentication.

mTLS Sample Application

This blog includes a sample application implemented using the AWS Serverless Application Model (AWS SAM). It creates a demo environment containing resources like API Gateway, a Lambda authorizer, and an Amazon EC2 instance, which simulates the backend application.

The EC2 instance is used for the backend application to mimic common customer scenarios. You can use any other type of compute, such as Lambda functions or containerized applications with Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), as a backend application layer as well.

The following diagram shows the solution architecture:

Example architecture diagram


  1. Store the client certificate in a trust store in an Amazon S3 bucket.
  2. The client makes a request to the API Gateway endpoint, supplying the client certificate to establish the mTLS session.
  3. API Gateway retrieves the trust store from the S3 bucket. It validates the client certificate, matches the trusted authorities, and terminates the mTLS connection.
  4. API Gateway invokes the Lambda authorizer, providing the request context and the client certificate information.
  5. The Lambda authorizer extracts the client certificate subject. It performs any necessary custom validation, and returns the extracted subject to API Gateway as a part of the authorization context.
  6. API Gateway injects the subject extracted in the previous step into the integration request HTTP header and sends the request to a downstream endpoint.
  7. The backend application receives the request, extracts the injected subject, and uses it with custom business logic.

Prerequisites and deployment

Some resources created as part of this sample architecture deployment have associated costs, both when running and idle. This includes resources like Amazon Virtual Private Cloud (Amazon VPC), VPC NAT Gateway, and EC2 instances. We recommend deleting the deployed stack after exploring the solution to avoid unexpected costs. See the Cleaning Up section for details.

Refer to the project code repository for instructions to deploy the solution using AWS SAM. The deployment provisions multiple resources, taking several minutes to complete.

Following the successful deployment, refer to the RestApiEndpoint variable in the Output section to locate the API Gateway endpoint. Note this value for testing later.

AWS CloudFormation output


Key areas in the sample code

There are two key areas in the sample project code.

In src/authorizer/index.js, the Lambda authorizer code extracts the subject from the client certificate. It returns the value as part of the context object to API Gateway. This allows API Gateway to use this value in the subsequent integration request.

const crypto = require('crypto');

exports.handler = async (event) => {
    console.log ('> handler', JSON.stringify(event, null, 4));

    const clientCertPem = event.requestContext.identity.clientCert.clientCertPem;
    const clientCert = new crypto.X509Certificate(clientCertPem);
    const clientCertSub = clientCert.subject.replaceAll('\n', ',');

    const response = {
        principalId: clientCertSub,
        context: { clientCertSub },
        policyDocument: {
            Version: '2012-10-17',
            Statement: [{
                Action: 'execute-api:Invoke',
                Effect: 'Allow',
                Resource: event.methodArn
            }]
        }
    };

    console.log('Authorizer Response', JSON.stringify(response, null, 4));
    return response;
};

In template.yaml, API Gateway injects the client certificate subject extracted by the Lambda authorizer into the integration request as the X-Client-Cert-Sub HTTP header. X-Client-Cert-Sub is a custom header name; you can choose any other custom header name instead.

SayHelloGetMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: CUSTOM
      AuthorizerId: !Ref CustomAuthorizer
      HttpMethod: GET
      ResourceId: !Ref SayHelloResource
      RestApiId: !Ref RestApi
      Integration:
        Type: HTTP_PROXY
        ConnectionType: VPC_LINK
        ConnectionId: !Ref VpcLink
        IntegrationHttpMethod: GET
        Uri: !Sub 'http://${NetworkLoadBalancer.DNSName}:3000/'
        RequestParameters:
          'integration.request.header.X-Client-Cert-Sub': 'context.authorizer.clientCertSub' 

Testing the example

You create a client key and certificate during the deployment, which are stored in the /certificates directory. Use the curl command to make a request to the REST API endpoint using these files.

curl --cert certificates/client.pem --key certificates/client.key \
<use the RestApiEndpoint found in CloudFormation output>

Example flow diagram

The client request to API Gateway uses mTLS with a client certificate supplied for mutual TLS authentication. API Gateway uses the Lambda authorizer to extract the certificate subject, and inject it into the Integration request.

The HTTP server runs on the EC2 instance, simulating the backend application. It accepts the incoming request and echoes it back, supplying request headers as part of the response body. The HTTP response received from the backend application contains a simple message and a copy of the request headers sent from API Gateway to the backend.
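
As an illustration, here is a minimal sketch of such an echo server in Node.js. It is similar in spirit to what runs on the EC2 instance, but it is not the sample application's actual code; the port matches the :3000 integration URI in template.yaml.

const http = require('http');

// Minimal backend that trusts the X-Client-Cert-Sub header injected by API Gateway
// and echoes the authenticated subject plus the received headers back to the caller.
const server = http.createServer((req, res) => {
    const clientCertSub = req.headers['x-client-cert-sub'] || 'unknown';
    const body = {
        message: `Hello, ${clientCertSub}`,
        receivedHeaders: req.headers
    };
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(body, null, 2));
});

server.listen(3000);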

One of the headers is x-client-cert-sub. Verify that its value matches the Common Name that you provided when generating the client certificate.

Response example


API Gateway validated the mTLS client certificate, used the Lambda authorizer to extract the subject common name from the certificate, and forwarded it to the downstream application.

Cleaning up

Use the sam delete command in the api-gateway-certificate-propagation directory to delete the sample application infrastructure:

sam delete

You can also refer to the project code repository for the clean-up instructions.

Conclusion

This blog shows how to use the API Gateway with a Lambda authorizer for mTLS client certificate validation, custom field extraction, and downstream propagation to backend systems. This pattern allows you to terminate mTLS at the edge so that downstream applications are not responsible for client certificate validation.

For additional documentation, refer to Using API Gateway with Lambda Authorizer. Download the sample code from the project code repository. For more serverless learning resources, visit Serverless Land.

Qatar Spyware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/qatar-spyware.html

Everyone visiting Qatar for the World Cup needs to install spyware on their phone.

Everyone travelling to Qatar during the football World Cup will be asked to download two apps called Ehteraz and Hayya.

Briefly, Ehteraz is a covid-19 tracking app, while Hayya is an official World Cup app used to keep track of match tickets and to access the free Metro in Qatar.

In particular, the covid-19 app Ehteraz asks for access to several rights on your mobile, like access to read, delete or change all content on the phone, as well as access to connect to WiFi and Bluetooth, override other apps and prevent the phone from switching off to sleep mode.

The Ehteraz app, which everyone over 18 coming to Qatar must download, also gets a number of other accesses such as an overview of your exact location, the ability to make direct calls via your phone and the ability to disable your screen lock.

The Hayya app does not ask for as much, but also has a number of critical aspects. Among other things, the app asks for access to share your personal information with almost no restrictions. In addition, the Hayya app provides access to determine the phone’s exact location, prevent the device from going into sleep mode, and view the phone’s network connections.

Despite what the article says, I don’t know how mandatory this actually is. I know people who visited Saudi Arabia when that country had a similarly sketchy app requirement. Some of them just didn’t bother downloading the apps, and were never asked about it at the border.

Hacking Automobile Keyless Entry Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/hacking-automobile-keyless-entry-systems.html

Suspected members of a European car-theft ring have been arrested:

The criminals targeted vehicles with keyless entry and start systems, exploiting the technology to get into the car and drive away.

As a result of a coordinated action carried out on 10 October in the three countries involved, 31 suspects were arrested. A total of 22 locations were searched, and over EUR 1 098 500 in criminal assets seized.

The criminals targeted keyless vehicles from two French car manufacturers. A fraudulent tool, marketed as an automotive diagnostic solution, was used to replace the original software of the vehicles, allowing the doors to be opened and the ignition to be started without the actual key fob.

Among those arrested feature the software developers, its resellers and the car thieves who used this tool to steal vehicles.

The article doesn’t say how the hacking tool got installed into cars. Were there crooked auto mechanics, dealers, or something else?

Adding approval notifications to EC2 Image Builder before sharing AMIs

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/adding-approval-notifications-to-ec2-image-builder-before-sharing-amis-2/

This blog post is written by Glenn Chia Jin Wee, Associate Cloud Architect, and Randall Han, Professional Services.

You may be required to manually validate the Amazon Machine Image (AMI) built from an Amazon Elastic Compute Cloud (Amazon EC2) Image Builder pipeline before sharing it with other AWS accounts or with an AWS organization. Currently, Image Builder provides an end-to-end pipeline that automatically shares AMIs after they’ve been built.

In this post, we will walk through the steps to enable approval notifications before AMIs are shared with other AWS accounts. Image Builder supports automated image testing using test components. The recommended best practice is to automate test steps; however, situations can arise where test steps become either challenging to automate or internal compliance policies mandate that manual checks be conducted prior to distributing images. In such situations, having a manual approval step is useful if you would like to verify the AMI configuration before it is shared with other AWS accounts or an AWS Organization. A manual approval step reduces the potential for sharing an incorrectly configured AMI with other teams, which can lead to downstream issues. This solution sends an email with a link to approve or reject the AMI. Users approve the AMI after they’ve verified that it is built according to specifications. Upon approving the AMI, the solution automatically shares it with the specified AWS accounts.

Overview

Architecture Diagram

  1. In this solution, an Image Builder Pipeline is run that builds a Golden AMI in Account A. After the AMI is built, Image Builder publishes data about the AMI to an Amazon Simple Notification Service (Amazon SNS) topic.
  2. The SNS Topic passes the data to an AWS Lambda function that subscribes to it.
  3. The Lambda function that subscribes to this topic retrieves the data, formats it, and then starts an SSM Automation, passing it the AMI name and ID (see the sketch after this list).
  4. The first step of the SSM Automation is a manual approval step. The SSM Automation first publishes to an SNS Topic that has an email subscription with the Approver’s email. The approver will receive the email with a URL that they can click to approve the step.
  5. The approval step defines a specific AWS Identity and Access Management (IAM) Role as an approver. This role has the minimum required permissions to approve the manual approval step. After performing manual tests on the Golden AMI, the Approver principal will assume this role.
  6. After assuming this role, the approver will click on the approval link that was sent via email. After approving the step, an AWS Lambda Function is triggered.
  7. This Lambda Function shares the Golden AMI with Account B and sends an email notifying the Target Account Recipients that the AMI has been shared.
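
To make steps 2 and 3 concrete, here is a minimal sketch of a Lambda handler subscribed to the Image Builder SNS topic that extracts the AMI name and ID and starts the SSM Automation. The automation document name, its parameter names, and the exact notification message shape are assumptions for illustration and will differ from the solution's actual code.

const { SSMClient, StartAutomationExecutionCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({});

exports.handler = async (event) => {
    // SNS delivers the Image Builder notification as a JSON string in the message body.
    const message = JSON.parse(event.Records[0].Sns.Message);

    // Assumed message shape: the built AMI is listed under outputResources.amis.
    const ami = message.outputResources.amis[0];

    // Start the approval automation, passing the AMI name and ID as parameters.
    // 'AmiApprovalAutomation' and the parameter names are hypothetical.
    const response = await ssm.send(new StartAutomationExecutionCommand({
        DocumentName: 'AmiApprovalAutomation',
        Parameters: {
            AmiName: [ami.name],
            AmiId: [ami.image]
        }
    }));

    console.log('Started automation execution', response.AutomationExecutionId);
    return response;
};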

Prerequisites

For this walkthrough, you will need the following:

  • Two AWS accounts – one to host the solution resources, and the second which receives the shared Golden AMI.
    • In the account that hosts the solution, prepare an AWS Identity and Access Management (IAM) principal with the sts:AssumeRole permission. This principal must assume the IAM Role that is listed as an approver in the Systems Manager approval step. The ARN of this IAM principal is used in the AWS CloudFormation Approver parameter; this ARN is added to the trust policy of the approval IAM Role.
    • In addition, in the account hosting the solution, ensure that the IAM principal deploying the CloudFormation template has the required permissions to create the resources in the stack.
  • A new Amazon Virtual Private Cloud (Amazon VPC) will be created from the stack. Make sure that you have fewer than five VPCs in the selected Region.

Walkthrough

In this section, we will guide you through the steps required to deploy the Image Builder solution. The solution is deployed with CloudFormation.

In this scenario, we deploy the solution within the approver’s account. The approval email will be sent to a predefined email address for manual approval, before the newly created AMI is shared to target accounts.

The approver first assumes the approval IAM Role and then selects the approval link. This leads to the Systems Manager approval page. Upon approval, an email notification will be sent to the predefined target account email address, notifying the relevant stakeholders that the AMI has been successfully shared.

The high-level steps we will follow are:

  1. In Account A, deploy the provided AWS CloudFormation template. This includes an example Image Builder Pipeline, Amazon SNS topics, Lambda functions, and an SSM Automation Document.
  2. Approve the SNS subscription from your supplied email address.
  3. Run the pipeline from the Amazon EC2 Image Builder Console.
  4. [Optional] To conduct manual tests, launch an Amazon EC2 instance from the built AMI after the pipeline runs.
  5. An email will be sent to you with options to approve or reject the step. Ensure that you have assumed the IAM Role that is the approver before clicking the approval link that leads to the SSM console approval page.
  6. Upon approving the step, an AWS Lambda function shares the AMI with Account B and also sends an email to the target account email recipients, notifying them that the AMI has been shared.
  7. Log in to Account B and verify that the AMI has been shared.

Step 1: Deploy the AWS CloudFormation template

1. The CloudFormation template, template.yaml, that deploys the solution can also be found at this GitHub repository. Follow the instructions at the repository to deploy the stack.

Step 2: Verify your email address

  1. After running the deployment, you will receive an email prompting you to confirm the Subscription at the approver email address. Choose Confirm subscription.

SNS Topic Subscription confirmation email

  2. This leads to the following screen, which shows that your subscription is confirmed.

subscription-confirmation

  3. Repeat the previous 2 steps for the target email address.

Step 3: Run the pipeline from the Image Builder console

  1. In the Image Builder console, under Image pipelines, select the checkbox next to the Pipeline created, choose Actions, and select Run pipeline.

run-image-builder-pipeline

Note: The pipeline takes approximately 20 – 30 minutes to complete.

Step 4: [Optional] Launch an Amazon EC2 instance from the built AMI

If you have a requirement to manually validate the AMI before sharing it with other accounts or with the AWS organization, an approver launches an Amazon EC2 instance from the built AMI and conducts manual tests on the EC2 instance to make sure that it is functional.

  1. In the Amazon EC2 console, under Images, choose AMIs. Validate that the AMI is created.

ami-in-account-a

  2. Follow the AWS documentation on launching an EC2 instance from a custom AMI for steps on how to launch an Amazon EC2 instance from the AMI.

Step 5: Select the approval URL in the email sent

  1. When the pipeline is run successfully, you will receive another email with a URL to approve the AMI.

approval-email

  2. Before clicking on the Approve link, you must assume the IAM Role that is set as an approver for the Systems Manager step.
  3. In the CloudFormation console, choose the stack that was deployed.

cloudformation-stack

4. Choose Outputs and copy the IAM Role name.

ssm-approval-role-output

5. While logged in as the IAM Principal that has permissions to assume the approval IAM Role, follow the instructions at AWS IAM documentation for switching a role using the console to assume the approval role.
On the Switch Role page, paste the name of the IAM Role that you copied in the previous step into the Role field.

Note: This IAM Role was deployed with minimum permissions. Hence, seeing warning messages in the console is expected after assuming this role.

switch-role

6. Now in the approval email, select the Approve URL. This leads to the Systems Manager console. Choose Submit.

approve-console

7. After you approve the manual step, the second automation step runs, which shares the AMI with the target account (a sketch of such a sharing function follows below).

automation-step-success
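
Behind this second automation step, a Lambda function shares the AMI and notifies the target account recipients. Here is a minimal sketch of what such a function could look like, assuming the AMI ID, target account ID, and notification topic ARN arrive as event parameters; these names are hypothetical and differ from the solution's actual code.

const { EC2Client, ModifyImageAttributeCommand } = require('@aws-sdk/client-ec2');
const { SNSClient, PublishCommand } = require('@aws-sdk/client-sns');

const ec2 = new EC2Client({});
const sns = new SNSClient({});

exports.handler = async (event) => {
    // Hypothetical event shape passed in by the SSM Automation step.
    const { amiId, targetAccountId, targetTopicArn } = event;

    // Share the approved AMI with the target account by adding a launch permission.
    await ec2.send(new ModifyImageAttributeCommand({
        ImageId: amiId,
        LaunchPermission: { Add: [{ UserId: targetAccountId }] }
    }));

    // Notify the target account recipients that the AMI is now available.
    await sns.send(new PublishCommand({
        TopicArn: targetTopicArn,
        Subject: 'Golden AMI shared',
        Message: `AMI ${amiId} has been approved and shared with account ${targetAccountId}.`
    }));

    return { status: 'SHARED', amiId };
};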

Step 6: Verify that the AMI is shared to Account B

  1. Log in to Account B.
  2. In the Amazon EC2 console, under Images, choose AMIs. Then, in the dropdown, choose Private images. Validate that the AMI is shared.

verify-ami-in-account-b

  3. Verify that a success email notification was sent to the target account email address provided.

target-email

Clean up

This section provides the necessary information for deleting various resources created as part of this post.

  1. Deregister the AMIs that were created and shared.
    1. Log in to Account A and follow the steps at AWS documentation: Deregister your Linux AMI.
  2. Delete the CloudFormation stack. For instructions, refer to Deleting a stack on the AWS CloudFormation console.

Conclusion

In this post, we explained how to enable approval notifications for an Image Builder pipeline before AMIs are shared with other accounts. This solution can be extended to share with more than one AWS account or even with an AWS organization. With this solution, you will be notified when new golden images are created, allowing you to verify the accuracy of their configuration before sharing them for wider use. This reduces the possibility of sharing AMIs with misconfigurations that the written tests may not have identified.

We invite you to experiment with different AMIs created using Image Builder, and with different Image Builder components. Check out this GitHub repository for various examples that use Image Builder. Also check out this blog on Image builder integrations with EC2 Auto Scaling Instance Refresh. Let us know your questions and findings in the comments, and have fun!

Upcoming Speaking Engagements

Post Syndicated from Schneier.com Webmaster original https://www.schneier.com/blog/archives/2022/10/upcoming-speaking-engagements-24.html

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Regulating DAOs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/regulating-daos.html

In August, the US Treasury’s Office of Foreign Assets Control (OFAC) sanctioned the cryptocurrency platform Tornado Cash, a virtual currency “mixer” designed to make it harder to trace cryptocurrency transactions—and a worldwide favorite money-laundering platform. Americans are now forbidden from using it. According to the US government, Tornado Cash was sanctioned because it allegedly laundered over $7 billion in cryptocurrency, $455 million of which was stolen by a North Korean state-sponsored hacking group.

Tornado Cash is not a traditional company run by human beings, but instead a series of “smart contracts”: self-executing code that exists only as software. Critics argue that prohibiting Americans from using Tornado Cash is a restraint of free speech, pointing to court rulings in the 1990s that established that computer language is a form of language, and that software programs are a form of speech. They also suggest that the Treasury Department has the authority to sanction only humans and not software.

We think that the most useful way to understand the speech issues involved with regulating Tornado Cash and other decentralized autonomous organizations (DAOs) is through an analogy: the golem. There are many versions of the Jewish golem legend, but in most of them, a person-like clay statue comes to life after someone writes the word “truth” in Hebrew on its forehead, and eventually starts doing terrible things. The golem stops only when a rabbi erases one of those letters, turning “truth” into the Hebrew word for “death,” and the golem ceases to function.

The analogy between DAOs and golems is quite precise, and has important consequences for the relationship between free speech and code. Ultimately, just as the golem needed the intervention of a rabbi to stop wreaking havoc on the world, so too do DAOs need to be subject to regulation.

The equivalency of code and free speech was established during the first “crypto wars” of the 1990s, which were about cryptography, not cryptocurrencies. US agencies tried to use export control laws to prevent sophisticated cryptography software from being exported outside the US. Activists and lawyers cleverly showed how code could be transformed into speech and vice versa, turning the source code for a cryptographic product into a printed book and daring US authorities to prevent its export. In 1996, US District Judge Marilyn Hall Patel ruled that computer code is a language, just like German or French, and that coded programs deserve First Amendment protection. That such code is also functional, instructing a computer to do something, was irrelevant to its expressive capabilities, according to Patel’s ruling. However, both a concurring and dissenting opinion argued that computer code also has the “functional purpose of controlling computers and, in that regard, does not command protection under the First Amendment.”

This disagreement highlights the awkward distinction between ordinary language and computer code. Language does not change the world, except insofar as it persuades, informs, or compels other people. Code, however, is a language where words have inherent power. Type the appropriate instructions and the computer will implement them without hesitation, second-guessing, or independence of will. They are like the words inscribed on a golem’s forehead (or the written instructions that, in some versions of the folklore, are placed in its mouth). The golem has no choice, because it is incapable of making choices. The words are code, and the golem is no different from a computer.

Unlike ordinary organizations, DAOs don’t rely on human beings to carry out many of their core functions. Instead, those functions have been translated into a set of instructions that are implemented in software. In the case of Tornado Cash, its code exists as part of Ethereum, a widely used cryptocurrency that can also run arbitrary computer code.

Cryptocurrency zealots thought that DAOs would allow them to place their trust in secure computer code, which would do exactly what they wanted it to do, rather than in fallible human beings who might fail or cheat. Humans could still have input, but under rules that were enshrined in self-running software. The past several years of DAO activity have taught these zealots a series of painful and expensive lessons on the limits of both computer security and incomplete contracts: Software has bugs, and contracts may do weird things under unanticipated circumstances. The combination frequently results in multimillion-dollar frauds and thefts.

Further complicating the matter is that individual DAOs can have very different rules. DAOs were supposed to create truly decentralized services that could never turn into a source of state power and coercion. Today, some DAOs talk a big game about decentralization, but provide power to founders and big investors like Andreessen Horowitz. Others are deliberately set up to frustrate outside control. Indeed, the creators of Tornado Cash explicitly wanted to create a golem-like entity that would be immune from law. In doing so, they were following in a long libertarian tradition.

In 2014, Gavin Wood, one of Ethereum’s core developers, gave a talk on what he called “allegality” of decentralized software services. Wood’s argument was very simple. Companies like PayPal employ real people and real lawyers. That meant that “if they provide a service to you that is deemed wrong or illegal … then they get fucked … maybe [go] to prison.” But cryptocurrencies like Bitcoin “had no operator.” By using software running on blockchains rather than people to run your organization, you could do an end-run around normal, human law. You could create services that “cannot be shut down. Not by a court, not by a police force, not by a nation state.” People would be able to set whatever rules they wanted, regardless of what any government prohibited.

Wood’s speech helped inspire the first DAO (The DAO), and his ideas live on in Tornado Cash. Tornado Cash was designed, in its founder’s words, “to be unstoppable.” The way the protocol is “designed, decentralized and autonomous …[,] there’s nobody in charge.” The people who ran Tornado Cash used a decentralized protocol running on the Ethereum computing platform, which is itself radically decentralized. But they used indelible ink. The protocol was deliberately instructed never to accept an update command.

Other elements of Tornado Cash—­its website, and the GitHub repository where its source code was stored—­have been taken down. But the protocol that actually mixes cryptocurrency is still available through the Ethereum network, even if it doesn’t have a user-friendly front end. Like a golem that has been set in motion, it will just keep on going, taking in, processing, and returning cryptocurrency according to its original instructions.

This gets us to the argument that the US government, by sanctioning a software program, is restraining free speech. Not only is it more complicated than that, but it’s complicated in ways that undercut this argument. OFAC’s actions aren’t aimed against free speech and the publication of source code, as its clarifications have made clear. Researchers are not prohibited from copying, posting, “discussing, teaching about, or including open-source code in written publications, such as textbooks.” GitHub could potentially still host the source code and the project. OFAC’s actions are aimed at preventing persons from using software applications that undercut one of the most basic functions of government: regulating activities that it deems endanger national security.

The question is whether the First Amendment covers golems. When your words are used not to persuade or argue, but to animate a mindless entity that will exist as long as the Ethereum blockchain exists and will carry out your final instructions no matter what, should your golem be immune from legal action?

When Patel issued her famous ruling, she caustically dismissed the argument that “even one drop of ‘direct functionality’” overwhelmed people’s expressive rights. Arguably, the question with Tornado Cash is whether a possibly notional droplet of free speech expressivity can overwhelm the direct functionality of running code, especially code designed to refuse any further human intervention. The Tornado Cash protocol will accept and implement the routine commands described by its protocol: It will still launder cryptocurrency. But the protocol itself is frozen.

We certainly don’t think that the US government should ban DAOs or code running on Ethereum or other blockchains, or demand any universal right of access to their workings. That would be just as sweeping—and wrong—as the general claim that encrypted messaging results in a “lawless space,” or the contrary notion that regulating code is always a prior restraint on free speech. There is wide scope for legitimate disagreement about government regulation of code and its legal authorities over distributed systems.

However, it’s hard not to sympathize with OFAC’s desire to push back against a radical effort to undermine the very idea of government authority. What would happen if the Tornado Cash approach to the law prevailed? That is, what would be the outcome if judges and politicians decided that entities like Tornado Cash could not be regulated, on free speech or any other grounds?

Likely, anyone who wanted to facilitate illegal activities would have a strong incentive to turn their operation into a DAO—and then throw away the key. Ethereum’s programming language is Turing-complete. That means, as Wood argued back in 2014, that one could turn all kinds of organizational rules into software, whether or not they were against the law.

In practice, it wouldn’t be so easy. Turning business principles into running code is hard, and doing it without creating bugs or loopholes is much harder still. Ethereum and other blockchains still have hard limits on computing power. But human ingenuity can accomplish many things when there’s a lot of money at stake.

People have legitimate reasons for seeking anonymity in their financial transactions, but these reasons need to be weighed against other harms to society. As privacy advocate Cory Doctorow wrote recently: “When you combine anonymity with finance—­not the right to speak anonymously, but the right to run an investment fund anonymously—you’re rolling out the red carpet for serial scammers, who can run a scam, get caught, change names, and run it again, incorporating the lessons they learned.”

It’s a mistake to defend DAOs on the grounds that code is free speech. Some code is speech, but not all code is speech. And code can also directly affect the world. DAOs, which are in essence autonomous golems, made from code rather than clay, make this distinction especially stark.

This will become even more important as robots become more capable and prevalent. Robots are even more obviously golems than DAOs are, performing actions in the physical world. Should their code enjoy a safe harbor from the law? What if robots, like DAOs, are designed to obey only their initial instructions, however unlawful­—and refuse all further updates or commands? Assuming that code is free speech and only free speech, and ignoring its functional purpose, will at best tangle the law up in knots.

Tying free speech arguments to the cause of DAOs like Tornado Cash imperils some of the important free speech victories that were won in the past. But the risks for everyone might be even greater if that argument wins. A world where democratic governments are unable to enforce their laws is not a world where civic spaces or civil liberties will thrive.

This essay was written with Henry Farrell, and previously appeared on Lawfare.com.