Tag Archives: Privacy

Announcing the Results of the 1.1.1.1 Public DNS Resolver Privacy Examination

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/

On April 1, 2018, we took a big step toward improving Internet privacy and security with the launch of the 1.1.1.1 public DNS resolver — the Internet’s fastest, privacy-first public DNS resolver. And we really meant privacy first. We were not satisfied with the status quo and believed that secure DNS resolution with transparent privacy practices should be the new normal. So we committed to our public resolver users that we would not retain any personal data about requests made using our 1.1.1.1 resolver. We also built in technical measures to facilitate DNS over HTTPS to help keep your DNS queries secure. We’ve never wanted to know what individuals do on the Internet, and we took technical steps to ensure we can’t know.

We knew there would be skeptics. Many consumers believe that if they aren’t paying for a product, then they are the product. We don’t believe that has to be the case. So we committed to retaining a Big 4 accounting firm to perform an examination of our 1.1.1.1 resolver privacy commitments.

Today we’re excited to announce that the 1.1.1.1 resolver examination has been completed and a copy of the independent accountants’ report can be obtained from our compliance page.

The examination process

We gained a number of observations and lessons from the privacy examination of the 1.1.1.1 resolver. First, we learned that it takes much longer to agree on terms and complete an examination when you ask an accounting firm to perform what we believe is the first-of-its-kind examination of custom privacy commitments for a recursive resolver.

We also observed that privacy by design works. Not that we were surprised — we use privacy by design principles in all our products and services. Because we baked anonymization best practices into the 1.1.1.1 resolver when we built it, we were able to demonstrate that we didn’t have any personal data to sell. More specifically, in accordance with RFC 6235, we decided to truncate the client/source IP at our edge data centers so that we never store in non-volatile storage the full IP address of the 1.1.1.1 resolver user.

We knew that a truncated IP address would be enough to help us understand general Internet trends and where traffic is coming from. We then further improved our privacy-first approach by replacing the truncated IP address with the network number (the ASN) in our internal logs. On top of that, we committed to retaining those anonymized logs for only a limited period of time. It’s the privacy version of belt plus suspenders plus another belt.
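
To make that anonymization concrete, here is a minimal Python sketch of RFC 6235-style truncation (an illustration of the idea only, not our production pipeline):

```python
import ipaddress

def anonymize_client_ip(addr: str) -> str:
    """Zero the last octet (8 bits) of an IPv4 address, or the last
    80 bits of an IPv6 address, so a resolver user's full source IP
    is never written to non-volatile storage."""
    ip = ipaddress.ip_address(addr)
    keep_bits = 24 if ip.version == 4 else 48  # 32-8 and 128-80 bits kept
    net = ipaddress.ip_network(f"{ip}/{keep_bits}", strict=False)
    return str(net.network_address)

print(anonymize_client_ip("203.0.113.57"))                  # 203.0.113.0
print(anonymize_client_ip("2001:db8:85a3::8a2e:370:7334"))  # 2001:db8:85a3::
```

The truncated prefix is coarse enough to preserve anonymity, yet sufficient for the general traffic trends described above.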

Finally, we learned that aligning our examination of the 1.1.1.1 resolver with our SOC 2 report most efficiently demonstrated that we had the appropriate change control procedures and audit logs in place to confirm that our IP truncation logic and limited data retention periods were in effect during the examination period. The 1.1.1.1 resolver examination period of February 1, 2019, through October 31, 2019, was the earliest we could go back to while relying on our SOC 2 report.

Details on the examination

When we launched the 1.1.1.1 resolver, we committed that we would not track what individual users of our 1.1.1.1 resolver are searching for online. The examination validated that our system is configured to achieve what we think is the most important part of this commitment — we never write the querying IP addresses together with the DNS query to disk and therefore have no idea who is making a specific request using the 1.1.1.1 resolver. This means we don’t track which sites any individual visits, and we won’t sell your personal data, ever.

We want to be fully transparent: during the examination we uncovered that our routers randomly capture up to 0.05% of all requests that pass through them, including the querying IP addresses of resolver users. We do this separately from the 1.1.1.1 service for all traffic passing into our network, and we retain such data for a limited period of time for use in network troubleshooting and in mitigating denial of service attacks.

To explain — if a specific IP address is flowing through one of our data centers a large number of times, then it is often associated with malicious requests or a botnet. We need to keep that information to mitigate attacks against our network and to prevent our network from being used as an attack vector itself. This limited subsample of data is not linked up with DNS queries handled by the 1.1.1.1 service and does not have any impact on user privacy.

We also want to acknowledge that when we made our privacy promises about how we would handle non-personally identifiable log data for 1.1.1.1 resolver requests, we made what we now see were some confusing statements about how we would handle those anonymous logs.

For example, we learned that our blog post commitment about the retention of anonymous log data was not written clearly enough: we referred to temporary logs, transactional logs, and permanent logs in ways that could have been better defined. Our 1.1.1.1 resolver privacy FAQs stated that we would not retain transactional logs for more than 24 hours but that some anonymous logs would be retained indefinitely, yet our blog post announcing the public resolver didn’t capture that distinction. You can see a clearer statement about our handling of anonymous logs on our privacy commitments page mentioned below.

With this in mind, we updated and clarified our privacy commitments for the 1.1.1.1 resolver as outlined below. The most critical part of these commitments remains unchanged: We don’t want to know what you do on the Internet — it’s none of our business — and we’ve taken the technical steps to ensure we can’t.

Our 1.1.1.1 public DNS resolver commitments

We have refined our commitments to 1.1.1.1 resolver privacy as part of our examination effort. The nature and intent of our commitments remain consistent with the originals. These updated commitments are the ones the examination covered:

  1. Cloudflare will not sell or share public resolver users’ personal data with third parties or use personal data from the public resolver to target any user with advertisements.
  2. Cloudflare will only retain or use what is being asked, not information that will identify who is asking it. Except for randomly sampled network packets captured from at most 0.05% of all traffic sent to Cloudflare’s network infrastructure, Cloudflare will not retain the source IP from DNS queries to the public resolver in non-volatile storage (more on that below). The randomly sampled packets are solely used for network troubleshooting and DoS mitigation purposes.
  3. A public resolver user’s IP address (referred to as the client or source IP address) will not be stored in non-volatile storage. Cloudflare will anonymize source IP addresses via IP truncation methods (last octet for IPv4 and last 80 bits for IPv6). Cloudflare will delete the truncated IP address within 25 hours.
  4. Cloudflare will retain only the limited transaction and debug log data (“Public Resolver Logs”) for the legitimate operation of our Public Resolver and research purposes, and Cloudflare will delete the Public Resolver Logs within 25 hours.
  5. Cloudflare will not share the Public Resolver Logs with any third parties except for APNIC pursuant to a Research Cooperative Agreement. APNIC will only have limited access to query the anonymized data in the Public Resolver Logs and conduct research related to the operation of the DNS system.

Proving privacy commitments

We created the 1.1.1.1 resolver because we recognized significant privacy problems: ISPs, WiFi networks you connect to, your mobile network provider, and anyone else listening in on the Internet can see every site you visit and every app you use — even if the content is encrypted. Some DNS providers even sell data about your Internet activity or use it to target you with ads. DNS can also be used as a tool of censorship against many of the groups we protect through our Project Galileo.

If you use DNS-over-HTTPS or DNS-over-TLS to our 1.1.1.1 resolver, your DNS lookup request will be sent over a secure channel. This means that if you use the 1.1.1.1 resolver, then in addition to our privacy guarantees, an eavesdropper can’t see your DNS requests. We promise we won’t be looking at what you’re doing.
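
As an illustration, our documented DoH JSON API at cloudflare-dns.com can be queried in a few lines of Python (this sketch assumes the third-party `requests` library is installed):

```python
import requests  # third-party: pip install requests

# Resolve a name over HTTPS: an on-path observer sees only a TLS
# connection to cloudflare-dns.com, never the DNS question itself.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```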

We strongly believe that consumers should expect their service providers to be able to show proof that they are actually abiding by their privacy commitments. If we were able to have our 1.1.1.1 resolver privacy commitments examined by an independent accounting firm, we think other organizations can do the same. We encourage other providers to follow suit and help improve privacy and transparency for Internet users globally. And for our part, we will continue to engage well-respected auditing firms to audit our 1.1.1.1 resolver privacy commitments. We also appreciate the work that Mozilla has undertaken to encourage entities that operate recursive resolvers to adopt data handling practices that protect the privacy of user data.

Details of the 1.1.1.1 resolver privacy examination and our accountant’s opinion can be found on Cloudflare’s Compliance page.

Visit https://developers.cloudflare.com/1.1.1.1/ from any device to get started with the Internet’s fastest, privacy-first DNS service.

PS Cloudflare has traditionally used tomorrow, April 1, to release new products. Two years ago we launched the 1.1.1.1 free, fast, privacy-focused public DNS resolver. One year ago we launched Warp, our way of securing and accelerating mobile Internet access.

And tomorrow?

Then three key changes
One before the weft, also
Safety to the roost

Privacy vs. Surveillance in the Age of COVID-19

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/privacy_vs_surv.html

The trade-offs are changing:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later.

I think the effects of COVID-19 will be more drastic than the effects of the terrorist attacks of 9/11: not only with respect to surveillance, but across many aspects of our society. And while many things that would never be acceptable during normal times are reasonable things to do right now, we need to make sure we can ratchet them back once the current pandemic is over.

Cindy Cohn at EFF wrote:

We know that this virus requires us to take steps that would be unthinkable in normal times. Staying inside, limiting public gatherings, and cooperating with medically needed attempts to track the virus are, when approached properly, reasonable and responsible things to do. But we must be as vigilant as we are thoughtful. We must be sure that measures taken in the name of responding to COVID-19 are, in the language of international human rights law, “necessary and proportionate” to the needs of society in fighting the virus. Above all, we must make sure that these measures end and that the data collected for these purposes is not re-purposed for either governmental or commercial ends.

I worry that in our haste and fear, we will fail to do any of that.

More from EFF.

Facial Recognition for People Wearing Masks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/facial_recognit_3.html

The Chinese facial recognition company Hanwang claims it can recognize people wearing masks:

The company now says its masked facial recognition program has reached 95 percent accuracy in lab tests, and even claims that it is more accurate in real life, where its cameras take multiple photos of a person if the first attempt to identify them fails.

[…]

Counter-intuitively, training facial recognition algorithms to recognize masked faces involves throwing data away. A team at the University of Bradford published a study last year showing they could train a facial recognition program to accurately recognize half-faces by deleting parts of the photos they used to train the software.

When a facial recognition program tries to recognize a person, it takes a photo of the person to be identified, and reduces it down to a bundle, or vector, of numbers that describes the relative positions of features on the face.

[…]

Hanwang’s system works for masked faces by trying to guess what all the faces in its existing database of photographs would look like if they were masked.

Emergency Surveillance During COVID-19 Crisis

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/emergency_surve.html

Israel is using emergency surveillance powers to track people who may have COVID-19, joining China and Iran in using mass surveillance in this way. I believe pressure will increase to leverage existing corporate surveillance infrastructure for these purposes in the US and other countries. With that in mind, the EFF has some good thinking on how to balance public safety with civil liberties:

Thus, any data collection and digital monitoring of potential carriers of COVID-19 should take into consideration and commit to these principles:

  • Privacy intrusions must be necessary and proportionate. A program that collects, en masse, identifiable information about people must be scientifically justified and deemed necessary by public health experts for the purpose of containment. And that data processing must be proportionate to the need. For example, maintenance of 10 years of travel history of all people would not be proportionate to the need to contain a disease like COVID-19, which has a two-week incubation period.

  • Data collection based on science, not bias. Given the global scope of communicable diseases, there is historical precedent for improper government containment efforts driven by bias based on nationality, ethnicity, religion, and race — rather than facts about a particular individual’s actual likelihood of contracting the virus, such as their travel history or contact with potentially infected people. Today, we must ensure that any automated data systems used to contain COVID-19 do not erroneously identify members of specific demographic groups as particularly susceptible to infection.

  • Expiration. As in other major emergencies in the past, there is a hazard that the data surveillance infrastructure we build to contain COVID-19 may long outlive the crisis it was intended to address. The government and its corporate cooperators must roll back any invasive programs created in the name of public health after the crisis has been contained.

  • Transparency. Any government use of “big data” to track virus spread must be clearly and quickly explained to the public. This includes publication of detailed information about the information being gathered, the retention period for the information, the tools used to process that information, the ways these tools guide public health decisions, and whether these tools have had any positive or negative outcomes.

  • Due Process. If the government seeks to limit a person’s rights based on this “big data” surveillance (for example, to quarantine them based on the system’s conclusions about their relationships or travel), then the person must have the opportunity to timely and fairly challenge these conclusions and limits.

How financial institutions can approve AWS services for highly confidential data

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/how-financial-institutions-can-approve-aws-services-for-highly-confidential-data/

As a Principal Solutions Architect within the Worldwide Financial Services industry group, one of the most frequently asked questions I receive is whether a particular AWS service is financial-services-ready. In a regulated industry like financial services, moving to the cloud isn’t a simple lift-and-shift exercise. Instead, financial institutions use a formal service-by-service assessment process, often called whitelisting, to demonstrate how cloud services can help address their regulatory obligations. When this process is not well defined, it can delay efforts to migrate data to the cloud.

In this post, I will provide a framework consisting of five key considerations that financial institutions should focus on to help streamline the whitelisting of cloud services for their most confidential data. I will also outline the key AWS capabilities that can help financial services organizations during this process.

Here are the five key considerations:

  1. Achieving compliance
  2. Data protection
  3. Isolation of compute environments
  4. Automating audits with APIs
  5. Operational access and security

For many of the business and technology leaders that I work with, agility and the ability to innovate quickly are the top drivers for their cloud programs. Financial services institutions migrate to the cloud to help develop personalized digital experiences, break down data silos, develop new products, drive down margins for existing products, and proactively address global risk and compliance requirements. AWS customers who use a wide range of AWS services achieve greater agility as they move through the stages of cloud adoption. Using a wide range of services enables organizations to offload undifferentiated heavy lifting to AWS and focus on their core business and customers.

My goal is to guide financial services institutions as they move their company’s highly confidential data to the cloud — in both production environments and mission-critical workloads. The following considerations will help financial services organizations determine cloud service readiness and achieve success in the cloud.

1. Achieving compliance

For financial institutions that use a whitelisting process, the first step is to establish that the underlying components of the cloud service provider’s (CSP’s) services can meet baseline compliance needs. A key prerequisite to gaining this confidence is to understand the AWS shared responsibility model. Shared responsibility means that the secure functioning of an application on AWS requires action on the part of both the customer and AWS as the CSP. AWS customers are responsible for their security in the cloud. They control and manage the security of their content, applications, systems, and networks. AWS manages security of the cloud, providing and maintaining proper operations of services and features, protecting AWS infrastructure and services, maintaining operational excellence, and meeting relevant legal and regulatory requirements.

In order to establish confidence in the AWS side of the shared responsibility model, customers can regularly review the AWS System and Organization Controls 2 (SOC 2) Type II report prepared by an independent, third-party auditor. The AWS SOC 2 report contains confidential information that can be obtained by customers under an AWS non-disclosure agreement (NDA) through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

Key takeaway: Currently, 116 AWS services are in scope for SOC compliance, which will help organizations streamline their whitelisting process. For more information about which services are in scope, see AWS Services in Scope by Compliance Program.

2. Data protection

Financial institutions use comprehensive data loss prevention strategies to protect confidential information. Customers using AWS data services can employ encryption to mitigate the risk of disclosure, alteration of sensitive information, or unauthorized access. The AWS Key Management Service (AWS KMS) allows customers to manage the lifecycle of encryption keys and control how they are used by their applications and AWS services. Allowing encryption keys to be generated and maintained in the FIPS 140-2 validated hardware security modules (HSMs) in AWS KMS is the best practice and most cost-effective option.

For AWS customers who want added flexibility for key generation and storage, AWS KMS allows them to either import their own key material into AWS KMS and keep a copy in their on-premises HSM, or generate and store keys in dedicated AWS CloudHSM instances under their control. For each of these key material generation and storage options, AWS customers can control all the permissions to use keys from any of their applications or AWS services. In addition, every use of a key or modification to its policy is logged to AWS CloudTrail for auditing purposes. This level of control and audit over key management is one of the tools organizations can use to address regulatory requirements for using encryption as a data privacy mechanism.
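
As a brief illustration, here is a minimal boto3 sketch of encrypting and decrypting a small secret with AWS KMS (the key alias `alias/app-data-key` is a placeholder, not a real resource):

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Encrypt a small secret (the KMS Encrypt API accepts up to 4 KB)
# under a customer-managed key.
ciphertext = kms.encrypt(
    KeyId="alias/app-data-key",  # placeholder alias
    Plaintext=b"highly confidential record",
)["CiphertextBlob"]

# Decryption requires kms:Decrypt permission on the same key; both
# API calls are recorded in AWS CloudTrail for auditing.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"highly confidential record"
```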

All AWS services offer encryption features, and most AWS services that financial institutions use integrate with AWS KMS to give organizations control over their encryption keys used to protect their data in the service. AWS offers customer-controlled key management features in twice as many services as any other CSP.

Financial institutions also encrypt data in transit to ensure that it is accessed only by the intended recipient. Encryption in transit must be considered in several areas, including API calls to AWS service endpoints, encryption of data in transit between AWS service components, and encryption in transit within applications. The first two considerations fall within the AWS scope of the shared responsibility model, whereas the latter is the responsibility of the customer.

All AWS services offer Transport Layer Security (TLS) 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints in selected AWS Regions. These FIPS 140-2 endpoints use a cryptographic library that has been validated under the Federal Information Processing Standards (FIPS) 140-2 standard. For financial institutions that operate workloads on behalf of the US government, using FIPS 140-2 endpoints helps them to meet their compliance requirements.
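
For instance, an SDK client can be pointed at a FIPS endpoint explicitly; the hostname below follows AWS's published FIPS endpoint naming for AWS KMS in us-east-1:

```python
import boto3

# Route KMS API calls through a FIPS 140-2 validated endpoint rather
# than the default regional endpoint.
kms_fips = boto3.client(
    "kms",
    region_name="us-east-1",
    endpoint_url="https://kms-fips.us-east-1.amazonaws.com",
)
```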

To simplify configuring encryption in transit within an application, which falls under the customer’s responsibility, customers can use the AWS Certificate Manager (ACM) service. ACM enables easy provisioning, management, and deployment of x.509 certificates used for TLS to critical application endpoints hosted in AWS. These integrations provide automatic certificate and private key deployment and automated rotation for Amazon CloudFront, Elastic Load Balancing, Amazon API Gateway, AWS CloudFormation, and AWS Elastic Beanstalk. ACM offers both publicly-trusted and private certificate options to meet the trust model requirements of an application. Organizations may also import their existing public or private certificates to ACM to make use of existing public key infrastructure (PKI) investments.
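
A sketch of requesting a DNS-validated public certificate through ACM with boto3 (the domain names are placeholders):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a public TLS certificate; once the DNS validation records
# are in place, ACM issues the certificate and renews it automatically
# for integrated services such as Elastic Load Balancing and CloudFront.
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)
print(response["CertificateArn"])
```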

Key takeaway: AWS KMS allows organizations to manage the lifecycle of encryption keys and control how encryption keys are used for over 50 services. For more information, see AWS Services Integrated with AWS KMS. AWS ACM simplifies the deployment and management of PKI as compared to self-managing in an on-premises environment.

3. Isolation of compute environments

Financial institutions have strict requirements for isolation of compute resources and network traffic control for workloads with highly confidential data. One of the core competencies of AWS as a CSP is to protect and isolate customers’ workloads from each other. Amazon Virtual Private Cloud (Amazon VPC) allows customers to control their AWS environment and keep it separate from other customers’ environments. Amazon VPC enables customers to create a logically separate network enclave within the Amazon Elastic Compute Cloud (Amazon EC2) network to house compute and storage resources. Customers control the private environment, including IP addresses, subnets, network access control lists, security groups, operating system firewalls, route tables, virtual private networks (VPNs), and internet gateways.
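
To illustrate that level of control, the following boto3 sketch carves out a VPC, a private subnet, and a security group that admits only TLS traffic originating inside the VPC (all CIDR ranges are illustrative):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A logically isolated network with one private subnet.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24"
)["Subnet"]["SubnetId"]

# A security group permitting only TLS (port 443) from within the VPC.
sg_id = ec2.create_security_group(
    GroupName="private-tier",
    Description="Allow TLS from inside the VPC only",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)
```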

Amazon VPC provides robust logical isolation of customers’ resources. For example, every packet flow on the network is individually authorized to validate the correct source and destination before it is transmitted and delivered. It is not possible for information to pass between multiple tenants without specifically being authorized by both the transmitting and receiving customers. If a packet is being routed to a destination without a rule that matches it, the packet is dropped. AWS has also developed the AWS Nitro System, a purpose-built hypervisor with associated custom hardware components that allocates central processing unit (CPU) resources for each instance and is designed to protect the security of customers’ data, even from operators of production infrastructure.

For more information about the isolation model for multi-tenant compute services, such as AWS Lambda, see the Security Overview of AWS Lambda whitepaper. When Lambda executes a function on a customer’s behalf, it manages both provisioning and the resources necessary to run code. When a Lambda function is invoked, the data plane allocates an execution environment to that function or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment. Each function runs in one or more dedicated execution environments that are used for the lifetime of the function and are then destroyed. Execution environments run on hardware-virtualized lightweight micro-virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. Execution environments are never shared across functions, and microVMs are never shared across AWS accounts. AWS continues to innovate in the area of hypervisor security, and resource isolation enables our financial services customers to run even the most sensitive workloads in the AWS Cloud with confidence.

Most financial institutions require that traffic stay private whenever possible and not leave the AWS network unless specifically required (for example, in internet-facing workloads). To keep traffic private, customers can use Amazon VPC to carve out an isolated and private portion of the cloud for their organizational needs. A VPC allows customers to define their own virtual networking environments with segmentation based on application tiers.

To connect to regional AWS services outside of the VPC, organizations may use VPC endpoints, which allow private connectivity between resources in the VPC and supported AWS services. Endpoints are managed virtual devices that are highly available, redundant, and scalable. Endpoints enable private connection between a customer’s VPC and AWS services using private IP addresses. With VPC endpoints, Amazon EC2 instances running in private subnets of a VPC have private access to regional resources without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Furthermore, when customers create an endpoint, they can attach a policy that controls the use of the endpoint to access only specific AWS resources, such as specific Amazon Simple Storage Service (Amazon S3) buckets within their AWS account. Similarly, by using resource-based policies, customers can restrict access to their resources to only allow access from VPC endpoints. For example, by using bucket policies, customers can restrict access to a given Amazon S3 bucket only through the endpoint. This ensures that traffic remains private and only flows through the endpoint without traversing public address space.
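
For example, a bucket policy along the following lines denies all S3 access unless a request arrives through a designated VPC endpoint (the bucket name and endpoint ID are placeholders; note that a blanket Deny like this also blocks console access from outside the VPC):

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessViaVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-confidential-bucket",
            "arn:aws:s3:::example-confidential-bucket/*",
        ],
        # Deny any request that did not traverse the named endpoint.
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}
s3.put_bucket_policy(
    Bucket="example-confidential-bucket", Policy=json.dumps(policy)
)
```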

Key takeaway: To help customers keep traffic private, more than 45 AWS services have support for VPC Endpoints.

4. Automating audits with APIs

Visibility into user activities and resource configuration changes is a critical component of IT governance, security, and compliance. On-premises logging solutions require installing agents, setting up configuration files and log servers, and building and maintaining data stores. This complexity can result in poor visibility and fragmented monitoring stacks, which in turn make troubleshooting and resolving issues slower. CloudTrail provides a simple, centralized solution for recording AWS API calls and resource changes in the cloud, which helps alleviate this burden.

CloudTrail provides a history of activity in a customer’s AWS account to help them meet compliance requirements for their internal policies and regulatory standards. CloudTrail helps identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help customers analyze and respond to activity in their AWS account. CloudTrail management events provide insights into the management (control plane) operations performed on resources in an AWS account. For example, customers can log administrative actions, such as creation, deletion, and modification of Amazon EC2 instances. For each event, they receive details such as the AWS account, IAM user role, and IP address of the user that initiated the action as well as time of the action and which resources were affected.

CloudTrail data events provide insights into the resource (data plane) operations performed on or within the resource itself. Data events are often high-volume activities and include operations, such as Amazon S3 object-level APIs, and AWS Lambda function Invoke APIs. For example, customers can log API actions on Amazon S3 objects and receive detailed information, such as the AWS account, IAM user role, IP address of the caller, time of the API call, and other details. Customers can also record activity of their Lambda functions and receive details about Lambda function executions, such as the IAM user or service that made the Invoke API call, when the call was made, and which function was executed.
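
As an illustration, recent management events can be pulled programmatically; this boto3 sketch lists who launched EC2 instances:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up recent RunInstances events; each record carries the caller
# identity, source IP address, and timestamp.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
    ],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```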

To help customers simplify continuous compliance and auditing, AWS uniquely offers the AWS Config service to help them assess, audit, and evaluate the configurations of AWS resources. AWS Config continuously monitors and records AWS resource configurations, and allows customers to automate the evaluation of recorded configurations against internal guidelines. With AWS Config, customers can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories.
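
For example, assuming an AWS Config recorder is already enabled in the account, a single call turns on a managed rule that continuously flags S3 buckets without default server-side encryption:

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# Enable an AWS-managed rule; non-compliant buckets surface in the
# AWS Config dashboard and can trigger notifications.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-sse-enabled",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)
```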

Key takeaway: Over 160 AWS services are integrated with CloudTrail, which helps customers ensure compliance with their internal policies and regulatory standards by providing a history of activity within their AWS account. For more information about how to use CloudTrail with specific AWS services, see AWS Service Topics for CloudTrail in the CloudTrail user guide. For more information on how to enable AWS Config in an environment, see Getting Started with AWS Config.

5. Operational access and security

In our discussions with financial institutions, they’ve told AWS that they are required to have a clear understanding of access to their data. This includes knowing what controls are in place to ensure that unauthorized access does not occur. AWS has implemented layered controls that use preventative and detective measures to ensure that only authorized individuals have access to production environments where customer content resides. For more information about access and security controls, see the AWS SOC 2 report in AWS Artifact.

One of the foundational design principles of AWS security is to keep people away from data to minimize risk. As a result, AWS created an entirely new virtualization platform called the AWS Nitro System. This highly innovative system combines new hardware and software that dramatically increases both performance and security. The AWS Nitro System enables enhanced security with a minimized attack surface because virtualization and security functions are offloaded from the main system board where customer workloads run to dedicated hardware and software. Additionally, the locked-down security model of the AWS Nitro System prohibits all administrative access, including that of Amazon employees, which eliminates the possibility of human error and tampering.

Key takeaway: Review third-party auditor reports (including SOC 2 Type II) available in AWS Artifact, and learn more about the AWS Nitro System.

Conclusion

AWS can help simplify and expedite the whitelisting process for financial services institutions moving to the cloud. Organizations that take advantage of a wide range of AWS services can rely on the security and compliance measures already built into those services to complete whitelisting, maximizing their agility and freeing them to focus on their core business and customers.

After organizations have completed the whitelisting process and determined which cloud services can be used as part of their architecture, the AWS Well-Architected Framework can then be implemented to help build and operate secure, resilient, performant, and cost-effective architectures on AWS.

AWS also has a dedicated team of financial services professionals to help customers navigate a complex regulatory landscape, as well as other resources to guide them in their migration to the cloud – no matter where they are in the process. For more information, see the AWS Financial Services page, or fill out this AWS Financial Services Contact form.

Additional resources

  • AWS Security Documentation
    The security documentation repository shows how to configure AWS services to help meet security and compliance objectives. Cloud security at AWS is the highest priority. AWS customers benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.
  • AWS Compliance Center
    The AWS Compliance Center is an interactive tool that provides customers with country-specific requirements and any special considerations for cloud use in the geographies in which they operate. The AWS Compliance Center has quick links to AWS resources to help with navigating cloud adoption in specific countries, and includes details about the compliance programs that are applicable in these jurisdictions. The AWS Compliance Center covers many countries, and more countries continue to be added as they update their regulatory requirements related to technology use.
  • AWS Well-Architected Framework and AWS Well-Architected Tool
    The AWS Well-Architected Framework helps customers understand the pros and cons of decisions they make while building systems on AWS. The AWS Well-Architected Tool helps customers review the state of their workloads and compares them to the latest AWS architectural best practices. For more information about the AWS Well-Architected Framework and security, see the Security Pillar – AWS Well-Architected Framework whitepaper.

If you have feedback about this blog post, submit comments in the Comments section below.

Author

Ilya Epshteyn

Ilya is a solutions architect with AWS. He helps customers to innovate on the AWS platform by building highly available, scalable, and secure architectures. He enjoys spending time outdoors and building Lego creations with his kids.

More on Crypto AG

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/more_on_crypto_.html

One follow-on to the story of Crypto AG being owned by the CIA: this interview with a Washington Post reporter. The whole thing is worth reading or listening to, but I was struck by these two quotes at the end:

…in South America, for instance, many of the governments that were using Crypto machines were engaged in assassination campaigns. Thousands of people were being disappeared, killed. And I mean, they’re using Crypto machines, which suggests that the United States intelligence had a lot of insight into what was happening. And it’s hard to look back at that history now and see a lot of evidence of the United States going to any real effort to stop it or at least or even expose it.

[…]

To me, the history of the Crypto operation helps to explain how U.S. spy agencies became accustomed to, if not addicted to, global surveillance. This program went on for more than 50 years, monitoring the communications of more than 100 countries. I mean, the United States came to expect that kind of penetration, that kind of global surveillance capability. And as Crypto became less able to deliver it, the United States turned to other ways to replace that. And the Snowden documents tell us a lot about how they did that.

Security of Health Information

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/security_of_hea.html

The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic — when countries struggle to manage not just the outbreak but its social, economic, and political fallout — is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts — many of which had previously been tied to Russia — had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time — and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information — if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.

EDITED TO ADD: On last week’s squid post, there was a big conversation regarding COVID-19. Many of the comments straddled the line between what are and aren’t the core topics. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.

Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.

Thank you for your patience and forbearance on this.

Facebook’s Download-Your-Data Tool Is Incomplete

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/facebooks_downl.html

Privacy International has the details:

Key facts:

  • Despite Facebook’s claims, “Download Your Information” doesn’t provide users with a list of all the advertisers who uploaded a list containing their personal data.
  • As a user this means you can’t exercise your rights under GDPR because you don’t know which companies have uploaded data to Facebook.
  • Information provided about the advertisers is also very limited (just a name and no contact details), preventing users from effectively exercising their rights.
  • The recently announced Off-Facebook Activity feature comes with similar issues, giving little insight into how advertisers collect your personal data and how to prevent such data collection.

When I teach cybersecurity tech and policy at the Harvard Kennedy School, one of the assignments is to download your Facebook and Google data and look at it. Many are surprised at what the companies know about them.

Apple’s Tracking-Prevention Feature in Safari has a Privacy Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/apples_tracking.html

Last month, engineers at Google published details of a very curious privacy bug in Apple’s Safari web browser. Apple’s Intelligent Tracking Prevention, a feature designed to reduce user tracking, has vulnerabilities that themselves allow user tracking. Some details:

ITP detects and blocks tracking on the web. When you visit a few websites that happen to load the same third-party resource, ITP detects the domain hosting the resource as a potential tracker and from then on sanitizes web requests to that domain to limit tracking. Tracker domains are added to Safari’s internal, on-device ITP list. When future third-party requests are made to a domain on the ITP list, Safari will modify them to remove some information it believes may allow tracking the user (such as cookies).

[…]

The details should come as a surprise to everyone because it turns out that ITP could effectively be used for:

  • information leaks: detecting websites visited by the user (web browsing history hijacking, stealing a list of visited sites)
  • tracking the user with ITP, making the mechanism function like a cookie
  • fingerprinting the user: in ways similar to the HSTS fingerprint, but perhaps a bit better

I am sure we all agree that we would not expect a privacy feature meant to protect against tracking to effectively enable tracking, or to accidentally allow any website out there to steal its visitors’ web browsing history. But web architecture is complex, and the consequence is that this is exactly the case.

Apple fixed this vulnerability in December, a month before Google published its findings.

If there’s any lesson here, it’s that privacy is hard — and that privacy engineering is even harder. It’s not that we shouldn’t try, but we should recognize that it’s easy to get it wrong.

New Research on the Adtech Industry

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/new_research_on.html

The Norwegian Consumer Council has published an extensive report about how the adtech industry violates consumer privacy. At the same time, it is filing three legal complaints against six companies in this space. From a Twitter summary:

1. [thread] We are filing legal complaints against six companies based on our research, revealing systematic breaches to privacy, by shadowy #OutOfControl #adtech companies gathering & sharing heaps of personal data. https://forbrukerradet.no/out-of-control/#GDPR… #privacy

2. We observed how ten apps transmitted user data to at least 135 different third parties involved in advertising and/or behavioural profiling, exposing (yet again) a vast network of companies monetizing user data and using it for their own purposes.

3. Dating app @Grindr shared detailed user data with a large number of third parties. Data included the fact that you are using the app (clear indication of sexual orientation), IP address (personal data), Advertising ID, GPS location (very revealing), age, and gender.

From a news article:

The researchers also reported that the OkCupid app sent a user’s ethnicity and answers to personal profile questions — like “Have you used psychedelic drugs?” — to a firm that helps companies tailor marketing messages to users. The Times found that the OkCupid site had recently posted a list of more than 300 advertising and analytics “partners” with which it may share users’ information.

This is really good research exposing the inner workings of a very secretive industry.

Customer Tracking at Ralphs Grocery Store

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/customer_tracki.html

To comply with California’s new data privacy law, companies that collect information on consumers and users are forced to be more transparent about it. Sometimes the results are creepy. Here’s an article about Ralphs, a California supermarket chain owned by Kroger:

…the form proceeds to state that, as part of signing up for a rewards card, Ralphs “may collect” information such as “your level of education, type of employment, information about your health and information about insurance coverage you might carry.”

It says Ralphs may pry into “financial and payment information like your bank account, credit and debit card numbers, and your credit history.”

Wait, it gets even better.

Ralphs says it’s gathering “behavioral information” such as “your purchase and transaction histories” and “geolocation data,” which could mean the specific Ralphs aisles you browse or could mean the places you go when not shopping for groceries, thanks to the tracking capability of your smartphone.

Ralphs also reserves the right to go after “information about what you do online” and says it will make “inferences” about your interests “based on analysis of other information we have collected.”

Other information? This can include files from “consumer research firms” — read: professional data brokers — and “public databases,” such as property records and bankruptcy filings.

The reaction from John Votava, a Ralphs spokesman:

“I can understand why it raises eyebrows,” he said. “We may need to change the wording on the form.”

That’s the company’s solution. Don’t spy on people less, just change the wording so they don’t realize it.

More consumer protection laws will be required.

Google Receives Geofence Warrants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/google_receives.html

Sometimes it’s hard to tell the corporate surveillance operations from the government ones:

Google reportedly has a database called Sensorvault in which it stores location data for millions of devices going back almost a decade.

The article is about geofence warrants, where the police go to companies like Google and ask for information about every device in a particular geographic area at a particular time. In 2013, we learned from Edward Snowden that the NSA does this worldwide. Its program is called CO-TRAVELLER. The NSA claims it stopped doing that in 2014 — probably just stopped doing it in the US — but why should it bother when the government can just get the data from Google?

Both the New York Times and EFF have written about Sensorvault.

Empowering Your Privacy

Post Syndicated from Emily Hancock original https://blog.cloudflare.com/empowering-your-privacy/

Happy Data Privacy Day! At Cloudflare, our mission is to help build a better Internet, and we believe data privacy is core to that mission. But we know words are cheap — even data brokers who sell your personal information will tell you that “privacy is important” to them. So we wanted to take the opportunity on this Data Privacy Day to show you how our commitment to privacy crosses all levels of the work we do at Cloudflare to help make the Internet more private and secure — and therefore better — for everyone.

Privacy on the Internet means different things to different people. Maybe privacy means you get to control your personal data — who can collect it and how it can be used. Or that you have the right to access and delete your personal information. Or maybe it means your online life is protected from government surveillance or from ad trackers and targeted advertising. Maybe you think you should be able to be completely anonymous online. At Cloudflare, we think all these flavors of privacy are equally important, and as we describe in more detail below, we’ve taken steps to address each of these privacy priorities.

Governments don’t necessarily take the same view on what privacy should mean either. Europe has its General Data Protection Regulation (GDPR), under which people have the right to control how their information is used, and the protection of data is a fundamental right under the EU Charter of Fundamental Rights. The United States takes a consumer-centric approach focusing on deceptive use of information, the sale of information, and privacy from unwarranted government surveillance. Brazil’s privacy law is similar to Europe’s, and Canada, New Zealand, Japan, Australia, China, and Singapore (to name a few) have some variation on the theme of a national, comprehensive privacy law.

Rather than viewing privacy of personal data as an ocean of data to be regulated through the lens of any particular government, we think privacy merits a different approach. To begin with, we don’t think there should be an ocean of personal data. We believe in empowering individuals and entities of all sizes with technological tools to reduce the amount of personal data that gets funneled into the data ocean — regardless of whether you live in a country with laws protecting the privacy of your personal data. If we can build tools to help you share less personal data online, then that’s a win for privacy no matter your privacy priorities or country of residence.

Technologies that Enable the Privacy of Personal Data

We’ve said it before — the Internet was not built with privacy and security in mind. But as the Internet has become more essential to daily life and more central to even the most critical corporate and government systems, the world has needed better tools to provide privacy and security for these online functions. When we talk about building a better Internet, for us that means (re)building the Internet with privacy baked in. Since Cloudflare launched in 2010, we’ve released a number of state-of-the-art, privacy-enhancing technologies that can help individuals, businesses, and governments alike:

  • Universal SSL: In 2014, there were 2 million websites that supported encrypted connections. In September of that year we introduced Universal SSL for all of our customers, paying and free, and overnight we made SSL easily available at scale to the millions of websites that use Cloudflare. (The SSL protocol has since been superseded by Transport Layer Security, or TLS, though the older name stuck.) Supporting SSL means that we support encrypting the content of web pages, which had previously been sent as plain text over the Internet. It’s like sending your private, personal information in a locked box instead of on a postcard.
  • Privacy Pass: Cloudflare supports Privacy Pass, which lets users prove they are legitimate visitors across multiple sites without enabling tracking (a toy sketch of the blinded-token idea appears after this list). When people use anonymity services or shared IPs, it is harder for website protection services like Cloudflare to distinguish their requests from bot traffic. To reduce friction for these users — who include some of the most vulnerable users online — Privacy Pass provides them with a way to prove they are legitimate across multiple sites on the Cloudflare network. This is done without revealing their identity, and without exposing Cloudflare customers to additional threats from malicious bots.
  • ESNI: We announced beta support for encrypted Server Name Indication (ESNI) in 2018. Server Name Indication (SNI) was created to allow multiple websites to share a single IP address (something that became necessary with the shortage of IPv4 addresses), but it can reveal which websites users are visiting. As described here, ESNI encrypts the SNI, closing what had been a glaring privacy hole (a minimal sketch of the SNI leak appears after this list).
  • 1.1.1.1 Public DNS Resolver: In 2018, we announced our public privacy-focused resolver, the 1.1.1.1 Public DNS Resolver (which also turned out to be the world’s fastest public DNS resolver). It was our first consumer product, it’s free, and we built it because we believe consumers should be able to browse the Internet without providers in the middle monitoring their activity. Our public DNS resolver service will never store 1.1.1.1 public DNS resolver users’ IP addresses (the source IP address) in non-volatile storage, and we anonymize the source IP addresses of 1.1.1.1 public DNS resolver users before logging any data (a sketch of this kind of log anonymization appears after this list). This way, we have no information about what website a specific user has looked up using the 1.1.1.1 Public DNS Resolver service. We can’t tell who is visiting any given website, and we don’t want to know.
  • DNS over HTTPS (DoH): Using the 1.1.1.1 Public DNS Resolver means that your ISP won’t get your browsing data from acting as your DNS resolver, but it can still observe those requests in transit unless you encrypt that channel. For that reason, we added support for DoH (an example query appears after this list). DNS requests can contain some alarmingly personal data, such as your location, the domains and subdomains you have visited, the time of day requests were submitted, and how long you stayed on certain sites. Encrypting those requests ensures that only the user and the resolver see that information, and that no one in the transit path between them does. In addition to DoH, we’ve partnered with Mozilla to support private web browsing in Firefox. We have also deployed query minimization to ensure that name servers that don’t need to see the full domain name you are requesting simply don’t.
  • 1.1.1.1 Mobile Application with WARP: People are accessing the Internet from their mobile devices more and more, so in 2019 we launched our 1.1.1.1 Mobile Application with WARP. You can enable our mobile application in DNS-only mode to ensure that all of your mobile device’s DNS queries are sent to our 1.1.1.1 Public DNS Resolver using either DNS over HTTPS or DNS over TLS. You can also enable WARP, which includes everything in DNS-only mode and additionally routes traffic from your device through the Cloudflare network via encrypted tunnels. This means that even if you are accessing websites or mobile applications that do not use HTTPS, the content transmitted to and from your device will be encrypted while WARP is enabled, rather than sent as plain text over the Internet.
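
To make the Privacy Pass approach concrete, here is a toy Python sketch of a blind-signature token flow, which captures the general idea behind such tokens: the server signs a token it never sees in the clear, so a later redemption cannot be linked back to its issuance. The production protocol is built on verifiable oblivious pseudorandom functions over elliptic curves, not RSA; the tiny hardcoded key below is purely illustrative and offers no security.

    # Toy blind-signature flow illustrating the idea behind Privacy Pass
    # tokens. NOT the real protocol, and NOT secure: the RSA key is tiny
    # and hardcoded for demonstration only.
    import secrets
    from math import gcd

    n, e, d = 3233, 17, 2753        # toy RSA key (p = 61, q = 53)

    token = 1234                    # client's secret token, must be < n

    # Client picks a random blinding factor coprime to n.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break

    blinded = (token * pow(r, e, n)) % n          # client blinds the token
    blind_sig = pow(blinded, d, n)                # server signs without seeing it
    signature = (blind_sig * pow(r, -1, n)) % n   # client removes the blinding

    # Anyone holding the public key (n, e) can verify the token later,
    # but the server cannot link this redemption to the signing request.
    assert pow(signature, e, n) == token
    print("anonymous token verified:", signature)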
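
The SNI leak that ESNI closes is visible in ordinary TLS code. In the standard-library Python sketch below, the server_hostname argument is copied into the plaintext SNI field of the TLS ClientHello, so a network observer learns the site name even though everything after the handshake is encrypted; the hostname here is illustrative.

    # A plain TLS connection: the server_hostname value is sent in the
    # clear as the SNI field of the ClientHello. ESNI encrypts this one
    # remaining plaintext field.
    import socket
    import ssl

    hostname = "example.com"  # illustrative
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls:
            print("negotiated:", tls.version())
            print("SNI sent in the clear:", hostname)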
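
As a rough illustration of anonymizing source IPs before they are logged, the sketch below zeroes the host bits of an address so the full IP never reaches non-volatile storage. The /24 and /48 prefix lengths and the helper name are illustrative assumptions, not Cloudflare's published configuration.

    # Zero the host bits of an address before it is ever written to a
    # log. The prefix lengths here are illustrative assumptions, not
    # Cloudflare's actual configuration.
    import ipaddress

    def anonymize_ip(addr: str) -> str:
        ip = ipaddress.ip_address(addr)
        prefix = 24 if ip.version == 4 else 48
        network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
        return str(network.network_address)

    print(anonymize_ip("203.0.113.77"))      # -> 203.0.113.0
    print(anonymize_ip("2001:db8::abcd:1"))  # -> 2001:db8::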
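
Finally, here is what a DoH lookup looks like in practice. The sketch below resolves a name through Cloudflare's documented JSON interface at https://cloudflare-dns.com/dns-query (it assumes the third-party requests package is installed); the queried domain is illustrative. An on-path observer sees only an encrypted HTTPS connection to the resolver, not the name being queried.

    # A DoH lookup via Cloudflare's JSON API: the DNS question rides
    # inside an ordinary HTTPS request.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},  # illustrative query
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], "->", answer["data"])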

How We Do Privacy at Cloudflare

The privacy-enhancing technologies we build are public examples of how we put our money where our mouth is when it comes to privacy. We also want to tell you about the ways — some public, some not — we infuse privacy principles at all levels at Cloudflare.

  • Employee Education and Mindset: An understanding of privacy is core to a Cloudflare employee’s experience right from the start. Employees learn about the role privacy and security play in helping to build a better Internet in their first week at Cloudflare. During the comprehensive employee orientation, we stress the role each employee plays in keeping the company and our customers secure. All employees are required to take annual data protection training, which introduces employees to the fundamentals of the Fair Information Practices (FIPs), GDPR and other applicable laws, and we do targeted training for individual teams, depending on their engagement with personal data, throughout the year.
  • Privacy in Product Development: We have built the FIPs and GDPR requirements into product development. Cloudflare employees take privacy-by-design seriously. We develop products and processes with the principles of data minimization, purpose limitation, and data security always front of mind. We have a product development lifecycle that includes performing privacy impact assessments when we may process personal data. We retain personal data we process for as short a time as necessary to provide our services to our customers. We do not cross-track individual Internet users across sites. We don’t sell personal information. We don’t monetize DNS requests. We detect, deter, and deflect bad actors — we’re not in the business of looking at what any one person (or more specifically, browser) is doing when they browse the Internet. That’s not what we’re about.
  • Internal Compliance with Privacy Regulations: Even before Europe’s watershed GDPR went into effect in 2018 and the California Consumer Privacy Act (CCPA) took effect earlier this month, we were focusing on how to implement the privacy principles embodied in regulations globally. A key part of this has been to minimize our collection of personal data and to only use personal data for the purpose for which it was collected. We view the GDPR and CCPA as a codification of many of the steps we were already taking: only collect the personal data you need to provide the service you’re offering; don’t sell personal information; give people the ability to access, correct, or delete their personal information; and give our customers control over the information that, for example, is cached on our content delivery network (CDN), stored in Workers Key Value Store, or captured by our web application firewall (WAF).
  • Security as a Means to Enhance Privacy: We’re a security company, so naturally we view security as a critical element of ensuring data privacy. In addition to the extensive internal security mechanisms we have in place to protect our customers’ data, we have also become certified under industry standards to demonstrate our commitment to data security. We are ISO 27001 and AICPA SOC 2 Type II certified. Cloudflare’s SOC 2 Type II report covers security, confidentiality, and availability controls to protect customer data, and we also maintain a SOC 3 report, the public version covering the same controls. In addition, we comply with our obligations under the EU Directive on Security of Network and Information Systems (NIS).
  • Privacy-focused Response to Government and Third-Party Requests for Information: Our respect for our customers’ privacy applies with equal force to commercial requests and to government or law enforcement requests. Any law enforcement requests that we receive must strictly adhere to the due process of law and be subject to judicial oversight. We believe that U.S. law enforcement requests for the personal data of a non-U.S. person that conflict with the privacy laws of that person’s country of residence (such as the EU GDPR) should be legally challenged. Consistent with both the U.S. CLOUD Act and the proceedings in the Microsoft Ireland case, providers like Cloudflare may ask U.S. courts to quash requests from U.S. law enforcement based on such a conflict. In addition, it is our policy to notify our customers of a subpoena or other legal process requesting their customer or billing information before disclosure of that information, whether the legal process comes from the government or private parties involved in civil litigation, unless legally prohibited. We also publicly report on the types of requests we receive, as well as our responses, in our semi-annual Transparency Report. Finally, we publicly list certain types of actions that Cloudflare has never taken in response to government requests, and we commit that if Cloudflare were asked to do any of the things on this list, we would exhaust all legal remedies in order to protect our customers from what we believe are illegal or unconstitutional requests.
  • Bringing Privacy and Security to Vulnerable Entities (Project Galileo): Since 2014, we have been providing a wide range of security products to important, yet vulnerable, voices on the Internet through Project Galileo. Privacy is essential to the more than 900 organizations receiving free services under the Project, as many face threats from powerful adversaries. These organizations range from humanitarian groups and non-profit organizations to journalism and media sites that are repeatedly flooded with malicious attacks in an attempt to knock them offline.
  • Spreading the Message on What We Think Privacy Should Look Like: It isn’t enough to build tools with privacy in mind; we also feel a responsibility to share best practices we have learned and work with policymakers to help them understand the implications of regulation on complex technologies. For example, Cloudflare has actively supported efforts to develop a framework for US Federal privacy standards, urging policymakers to adopt technology-neutral approaches that allow standards to change and improve as technology does. In Europe, we are engaged in the ongoing discussions on the draft ePrivacy Regulation, which aims to enshrine the important principle of confidentiality of communications and guides companies on cookie usage and direct marketing. We are also actively contributing to the EU debate on the draft eEvidence Regulation, which seeks to facilitate cross-border access to data. We believe this initiative must fully respect the EU Charter of Fundamental Rights and the EU data protection framework.

So What’s Next?

Protecting the privacy of personal data is an ongoing journey. Our approach has never been to check the boxes of compliance and move on. We are continually evaluating how we handle personal data and looking for ways to minimize the amount of personal data we receive. We will continue to be self-critical and examine our own motivations for the technologies we develop. And we will keep working, just as we have for the past ten years, to find new ways to secure privacy and security for our customers and for the Internet as a whole.

Modern Mass Surveillance: Identify, Correlate, Discriminate

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/modern_mass_sur.html

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are — using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge — and almost entirely unregulated — data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are; it’s that they correlate what they know about us to create profiles of who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies — and governments — to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

Clearview AI and Facial Recognition

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/clearview_ai_an.html

The New York Times has a long story about Clearview AI, a small company that scrapes identified photos of people from pretty much everywhere, and then uses unstated magical AI technology to identify people in other photos.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

[…]

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

Another article.

EDITED TO ADD (1/23): Twitter told the company to stop scraping its photos.

Police Surveillance Tools from Special Services Group

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/police_surveill.html

Special Services Group, a company that sells surveillance tools to the FBI, DEA, ICE, and other US government agencies, has had its secret sales brochure published. Motherboard received the brochure as part of a FOIA request to the Irvine Police Department in California.

“The Tombstone Cam is our newest video concealment offering the ability to conduct remote surveillance operations from cemeteries,” one section of the Black Book reads. The device can also capture audio, its battery can last for two days, and “the Tombstone Cam is fully portable and can be easily moved from location to location as necessary,” the brochure adds. Another product is a video and audio capturing device that looks like an alarm clock, suitable for “hotel room stings,” and other cameras are designed to appear like small tree trunks and rocks, the brochure reads.

The “Shop-Vac Covert DVR Recording System” is essentially a camera and 1 TB hard drive hidden inside a vacuum cleaner. “An AC power connector is available for long-term deployments, and DC power options can be connected for mobile deployments also,” the brochure reads. The description doesn’t say whether the vacuum cleaner itself works.

[…]

One of the company’s “Rapid Vehicle Deployment Kits” includes a camera hidden inside a baby car seat. “The system is fully portable, so you are not restricted to the same drop car for each mission,” the description adds.

[…]

The so-called “K-MIC In-mouth Microphone & Speaker Set” is a tiny Bluetooth device that sits on a user’s teeth and allows them to “communicate hands-free in crowded, noisy surroundings” with “near-zero visual indications,” the Black Book adds.

Other products include more traditional surveillance cameras and lenses as well as tools for surreptitiously gaining entry to buildings. The “Phantom RFID Exploitation Toolkit” lets a user clone an access card or fob, and the so-called “Shadow” product can “covertly provide the user with PIN code to an alarm panel,” the brochure reads.

The Motherboard article also reprints the scary emails Motherboard received from Special Services Group when asked for comment. Of course, Motherboard published the information anyway.

Hacking School Surveillance Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/hacking_school_.html

Lance Vick is suggesting that students hack their schools’ surveillance systems.

“This is an ethical minefield that I feel students would be well within their rights to challenge, and if needed, undermine,” he said.

Of course, there are a lot more laws in place against this sort of thing than there were in — say — the 1980s, but it’s still worth thinking about.

EDITED TO ADD (1/2): Another essay on the topic.

ToTok Is an Emirati Spying Tool

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/totok_is_an_emi.html

The smartphone messaging app ToTok is actually an Emirati spying tool:

But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

Apple and Google have removed it from their app stores. If you have it on your phone, delete it now.