Tag Archives: Privacy

re:Invent – New security sessions launching soon

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/reinvent-new-security-sessions-launching-soon/

Where did the last month go? Were you able to catch all of the sessions in the Security, Identity, and Compliance track you hoped to see at AWS re:Invent? If you missed any, don’t worry—you can stream all the sessions released in 2020 via the AWS re:Invent website. Additionally, we’re starting 2021 with all new sessions that you can stream live January 12–15. Here are the new Security, Identity, and Compliance sessions—each session is offered at multiple times, so you can find the time that works best for your location and schedule.

Protecting sensitive data with Amazon Macie and Amazon GuardDuty – SEC210
Himanshu Verma, AWS Speaker

Tuesday, January 12 – 11:00 AM to 11:30 AM PST
Tuesday, January 12 – 7:00 PM to 7:30 PM PST
Wednesday, January 13 – 3:00 AM to 3:30 AM PST

As organizations manage growing volumes of data, identifying and protecting your sensitive data can become increasingly complex, expensive, and time-consuming. In this session, learn how Amazon Macie and Amazon GuardDuty together provide protection for your data stored in Amazon S3. Amazon Macie automates the discovery of sensitive data at scale and lowers the cost of protecting your data. Amazon GuardDuty continuously monitors and profiles S3 data access events and configurations to detect suspicious activities. Come learn about these security services and how to best use them for protecting data in your environment.

BBC: Driving security best practices in a decentralized organization – SEC211
Apurv Awasthi, AWS Speaker
Andrew Carlson, Sr. Software Engineer – BBC

Tuesday, January 12 – 1:15 PM to 1:45 PM PST
Tuesday, January 12 – 9:15 PM to 9:45 PM PST
Wednesday, January 13 – 5:15 AM to 5:45 AM PST

In this session, Andrew Carlson, engineer at BBC, talks about BBC’s journey adopting AWS Secrets Manager for lifecycle management of credentials such as database passwords, API keys, and third-party keys. He provides insight into BBC’s secrets management best practices and how the company drives these at enterprise scale in a decentralized environment that has a highly visible scope of impact.

Get ahead of the curve with DDoS Response Team escalations – SEC321
Fola Bolodeoku, AWS Speaker

Tuesday, January 12 – 3:30 PM to 4:00 PM PST
Tuesday, January 12 – 11:30 PM to 12:00 AM PST
Wednesday, January 13 – 7:30 AM to 8:00 AM PST

This session identifies tools and tricks that you can use to prepare for application security escalations, with lessons learned provided by the AWS DDoS Response Team. You learn how AWS customers have used different AWS offerings to protect their applications, including network access control lists, security groups, and AWS WAF. You also learn how to avoid common misconfigurations and mishaps observed by the DDoS Response Team, and you discover simple yet effective actions that you can take to better protect your applications’ availability and security controls.

Network security for serverless workloads – SEC322
Alex Tomic, AWS Speaker

Thursday, January 14 – 1:30 PM to 2:00 PM PST
Thursday, January 14 – 9:30 PM to 10:00 PM PST
Friday, January 15 – 5:30 AM to 6:00 AM PST

Are you building a serverless application using services like Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Aurora, and Amazon SQS? Would you like to apply enterprise network security to these AWS services? This session covers how network security concepts like encryption, firewalls, and traffic monitoring can be applied to a well-architected AWS serverless architecture.

Building your cloud incident response program – SEC323
Freddy Kasprzykowski, AWS Speaker

Wednesday, January 13 – 9:00 AM to 9:30 AM PST
Wednesday, January 13 – 5:00 PM to 5:30 PM PST
Thursday, January 14 – 1:00 AM to 1:30 AM PST

You’ve configured your detection services and now you’ve received your first alert. This session provides patterns that help you understand what capabilities you need to build and run an effective incident response program in the cloud. It includes a review of some logs to see what they tell you and a discussion of tools to analyze those logs. You learn how to make sure that your team has the right access, how automation can help, and which incident response frameworks can guide you.

Beyond authentication: Guide to secure Amazon Cognito applications – SEC324
Mahmoud Matouk, AWS Speaker

Wednesday, January 13 – 2:15 PM to 2:45 PM PST
Wednesday, January 13 – 10:15 PM to 10:45 PM PST
Thursday, January 14 – 6:15 AM to 6:45 AM PST

Amazon Cognito is a flexible user directory that can meet the needs of a number of customer identity management use cases. Web and mobile applications can integrate with Amazon Cognito in minutes to offer user authentication and get standard tokens to be used in token-based authorization scenarios. This session covers best practices that you can implement in your application to secure and protect tokens. You also learn about new Amazon Cognito features that give you more options to improve the security and availability of your application.

Event-driven data security using Amazon Macie – SEC325
Neha Joshi, AWS Speaker

Thursday, January 14 – 8:00 AM to 8:30 AM PST
Thursday, January 14 – 4:00 PM to 4:30 PM PST
Friday, January 15 – 12:00 AM to 12:30 AM PST

Amazon Macie sensitive data discovery jobs for Amazon S3 buckets help you discover sensitive data such as personally identifiable information (PII), financial information, account credentials, and workload-specific sensitive information. In this session, you learn about an automated approach to discover sensitive information whenever changes are made to the objects in your S3 buckets.
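
If you want to experiment with this pattern ahead of the session, here is a minimal sketch, assuming an Amazon EventBridge rule forwards S3 object-created events to an AWS Lambda function; the event fields, job name, and scoping shown are illustrative placeholders rather than anything prescribed by the session.

# Hypothetical Lambda handler: start a one-time Amazon Macie classification job
# whenever EventBridge delivers an S3 "Object Created" event for a bucket.
# The job naming and scoping here are illustrative only.
import uuid
import boto3

macie = boto3.client("macie2")

def handler(event, context):
    # EventBridge S3 events carry the bucket name in detail.bucket.name
    bucket = event["detail"]["bucket"]["name"]
    account_id = event["account"]

    macie.create_classification_job(
        clientToken=str(uuid.uuid4()),   # idempotency token
        jobType="ONE_TIME",              # scan once, triggered by the change
        name=f"scan-{bucket}-{uuid.uuid4().hex[:8]}",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": [bucket]}
            ]
        },
    )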

Instance containment techniques for effective incident response – SEC327
Jonathon Poling, AWS Speaker

Thursday, January 14 – 10:15 AM to 10:45 AM PST
Thursday, January 14 – 6:15 PM to 6:45 PM PST
Friday, January 15 – 2:15 AM to 2:45 AM PST

In this session, learn about several instance containment and isolation techniques, ranging from simple and effective to more complex and powerful, that leverage native AWS networking services and account configuration techniques. If an incident happens, you may have questions like “How do we isolate the system while preserving all the valuable artifacts?” and “What options do we even have?”. These are valid questions, but there are more important ones to discuss amidst a (possible) incident. Join this session to learn highly effective instance containment techniques in a crawl-walk-run approach that also facilitates preservation and collection of valuable artifacts and intelligence.

Trusted connections for government workloads – SEC402
Brad Dispensa, AWS Speaker

Wednesday, January 13 – 11:15 AM to 11:45 AM PST
Wednesday, January 13 – 7:15 PM to 7:45 PM PST
Thursday, January 14 – 3:15 AM to 3:45 AM PST

Cloud adoption across the public sector is making it easier to provide government workforces with seamless access to applications and data. With this move to the cloud, we also need updated security guidance to ensure public-sector data remain secure. For example, the TIC (Trusted Internet Connections) initiative has been a requirement for US federal agencies for some time. The recent TIC-3 moves from prescriptive guidance to an outcomes-based model. This session walks you through how to leverage AWS features to better protect public-sector data using TIC-3 and the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF). Also, learn how this might map into other geographies.

I look forward to seeing you in these sessions. Please see the re:Invent agenda for more details and to build your schedule.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Eavesdropping on Phone Taps from Voice Assistants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/eavesdropping-on-phone-taps-from-voice-assistants.html

The microphones on voice assistants are very sensitive, and can snoop on all sorts of data:

In Hey Alexa what did I just type? we show that when sitting up to half a meter away, a voice assistant can still hear the taps you make on your phone, even in presence of noise. Modern voice assistants have two to seven microphones, so they can do directional localisation, just as human ears do, but with greater sensitivity. We assess the risk and show that a lot more work is needed to understand the privacy implications of the always-on microphones that are increasingly infesting our work spaces and our homes.

From the paper:

Abstract: Voice assistants are now ubiquitous and listen in on our everyday lives. Ever since they became commercially available, privacy advocates worried that the data they collect can be abused: might private conversations be extracted by third parties? In this paper we show that privacy threats go beyond spoken conversations and include sensitive data typed on nearby smartphones. Using two different smartphones and a tablet we demonstrate that the attacker can extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away. This shows that remote keyboard-inference attacks are not limited to physical keyboards but extend to virtual keyboards too. As our homes become full of always-on microphones, we need to work through the implications.

US Schools Are Buying Cell Phone Unlocking Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/us-schools-are-buying-cell-phone-unlocking-systems.html

Gizmodo is reporting that schools in the US are buying equipment to unlock cell phones from companies like Cellebrite:

Gizmodo has reviewed similar accounting documents from eight school districts, seven of which are in Texas, showing that administrators paid as much as $11,582 for the controversial surveillance technology. Known as mobile device forensic tools (MDFTs), this type of tech is able to siphon text messages, photos, and application data from students’ devices. Together, the districts encompass hundreds of schools, potentially exposing hundreds of thousands of students to invasive cell phone searches.

The eighth district was in Los Angeles.

Mexican Drug Cartels with High-Tech Spyware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/mexican-drug-cartels-with-high-tech-spyware.html

Sophisticated spyware, sold by surveillance tech companies to Mexican government agencies, is ending up in the hands of drug cartels:

As many as 25 private companies — including the Israeli company NSO Group and the Italian firm Hacking Team — have sold surveillance software to Mexican federal and state police forces, but there is little or no regulation of the sector — and no way to control where the spyware ends up, said the officials.

Lots of details in the article. The cyberweapons arms business is immoral in many ways. This is just one of them.

Privacy and Compliance Reading List

Post Syndicated from Val Vesa original https://blog.cloudflare.com/privacy-and-compliance-reading-list/


Privacy matters. Privacy and Compliance are at the heart of Cloudflare’s products and solutions. We are committed to providing built-in data protection and privacy throughout our global network and for every product in our portfolio. This is why we have dedicated a whole week to highlight important aspects of how we are working to make sure privacy will stay at the core of all we do as a business.

In case you missed any of the blog posts this week addressing the topics of Privacy and Compliance, you’ll find a summary below.

Welcome to Privacy & Compliance Week: Reflecting Values at Cloudflare’s Core

We started the week with this introduction by Matthew Prince. The blog post summarizes the early decisions that the founding team made to make sure customer data is kept private, that we do not sell or rent this data to third parties, and why trust is the foundation of our business. > Read the full blog post.

Introducing the Cloudflare Data Localization Suite

Cloudflare’s network is private and compliant by design. Preserving end-user privacy is core to our mission of helping to build a better Internet; we’ve never sold personal data about customers or end-users of our network. We comply with laws like GDPR and maintain certifications such as ISO-27001. In a blog post by John Graham-Cumming, we announced the Data Localization Suite, which helps businesses get the performance and security benefits of Cloudflare’s global network while making it easy to set rules and controls at the edge about where their data is stored and protected. The Data Localization Suite is available now as an add-on for Enterprise customers. > Read the full blog post.

Privacy needs to be built into the Internet

John also reflected upon three phases of the evolution of the Internet: from its invention to the mid-1990s, the race was on for expansion and connectivity. Then, as more devices and networks became interconnected, the focus shifted with the introduction of SSL in 1994 to a second phase where security became paramount. We’re now in the full swing of phase 3, where privacy is more important than ever. > Read the full blog post.

Helping build the next generation of privacy-preserving protocols

The Internet is growing in terms of its capacity and the number of people using it, and evolving in terms of its design and functionality. As a player in the Internet ecosystem, Cloudflare has a responsibility to help the Internet grow in a way that respects and provides value for its users. In this blog post, Nick Sullivan summarizes several announcements on improving Internet protocols with respect to something important to our customers and Internet users worldwide: privacy. These initiatives are focused on: fixing one of the last information leaks in HTTPS through Encrypted Client Hello (ECH), which supersedes Encrypted SNI; making DNS even more private by supporting Oblivious DNS-over-HTTPS (ODoH); and developing a superior protocol for password authentication, OPAQUE, that makes password breaches less likely to occur. > Read the full blog post.

OPAQUE: The Best Passwords Never Leave your Device

Passwords are a problem. They are a problem for reasons that are familiar to most readers. For us at Cloudflare, the problem lies much deeper and broader. Most readers will immediately acknowledge that passwords are hard to remember and manage, especially as password requirements grow increasingly complex. Luckily there are great software packages and browser add-ons to help manage passwords. Unfortunately, the greater underlying problem is beyond the reach of software to solve. Today’s deep-dive blog post by Tatiana Bradley into OPAQUE is one possible answer. OPAQUE is one among many examples of systems that enable a password to be useful without it ever leaving your possession. No one likes passwords, but as long as they’re in use, at least we can ensure they are never given away. > Read the full blog post.

Good-bye ESNI, hello ECH!

In this post Christopher Patton dives into Encrypted Client Hello (ECH), a new extension for TLS that promises to significantly enhance the privacy of this critical Internet protocol. Today, a number of privacy-sensitive parameters of the TLS connection are negotiated in the clear. This leaves a trove of metadata available to network observers, including the endpoints’ identities, how they use the connection, and so on. > Read the full blog post.

Improving DNS Privacy with Oblivious DoH in 1.1.1.1

Tanya Verma and Sudheesh Singanamalla wrote this blog post for our announcement of support for a new proposed DNS standard — co-authored by engineers from Cloudflare, Apple, and Fastly — that separates IP addresses from queries, so that no single entity can see both at the same time. Even better, we’ve made source code available, so anyone can try out ODoH, or run their own ODoH service! > Read the full blog post.

Deprecating the __cfduid cookie

Cloudflare never tracks end-users across sites or sells their personal data. However, we didn’t want there to be any questions about our cookie use, and we don’t want any customer to think they need a cookie banner because of what we do. Therefore we’ve announced that Cloudflare is deprecating the __cfduid cookie. Starting on 10 May 2021, we will stop adding a “Set-Cookie” header on all HTTP responses. The last __cfduid cookies will expire 30 days after that. So why did we use the __cfduid cookie before, and why can we remove it now? Read the full blog post by Sergi Isasi to find out.

Cloudflare’s privacy-first Web Analytics is now available for everyone

In September, we announced that we’re building a new, free Web Analytics product for the whole web. In this blog post by Jon Levine, we’re announcing that anyone can now sign up to use our new Web Analytics — even without changing your DNS settings. In other words, Cloudflare Web Analytics can now be deployed by adding an HTML snippet (in the same way many other popular web analytics tools are) making it easier than ever to use privacy-first tools to understand visitor behavior.

Announcing Workplace Records for Cloudflare for Teams

As businesses worldwide have shifted to remote work, many employees have been working from “home” — wherever that may be. Some employees have taken this opportunity to venture further from where they usually are, sometimes crossing state and national borders. Businesses worldwide pay employment taxes based on where their employees do work. For most businesses and in normal times, where employees do work has been relatively easy to determine: it’s where they come into the office. But 2020 has made everything more complicated, even taxes. In this blog post by Matthew Prince and Sam Rhea, we’re announcing the beta of a new feature for Cloudflare for Teams to help solve this problem: Workplace Records. Cloudflare for Teams uses Access and Gateway logs to provide the state and country from which employees are working. Workplace Records can be used to help finance, legal, and HR departments determine where payroll taxes are due and provide a record to defend those decisions.

Securing the post-quantum world

Quantum computing will change the face of Internet security forever — particularly in the realm of cryptography, which is the way communications and information are secured across channels like the Internet. Cryptography is critical to almost every aspect of modern life, from banking to cellular communications to connected refrigerators and systems that keep subways running on time. This ultra-powerful, highly sophisticated new generation of computing has the potential to unravel decades of work that have been put into developing the cryptographic algorithms and standards we use today. When will a quantum computer be built that is powerful enough to break all modern cryptography? By some estimates, it may take 10 to 15 years. This makes deploying post-quantum cryptography as soon as possible a pressing privacy concern. Cloudflare is taking steps to accelerate this transition. Read the full blog post by Nick Sullivan to find out more.

How to Build a Global Network that Complies with Local Law

Governments around the world have long had an interest in getting access to online records. Sometimes law enforcement is looking for evidence relevant to criminal investigations. Sometimes intelligence agencies are looking to learn more about what foreign governments or actors are doing. And online service providers of all kinds often serve as an access point for those electronic records.

For service providers like Cloudflare, though, those requests can be fraught. The work that law enforcement and other government authorities do is important. At the same time, the data that law enforcement and other government authorities are seeking does not belong to us. By using our services, our customers have put us in a position of trust over that data. Maintaining that trust is fundamental to our business and our values. Alissa Starzak details in her blog post how Cloudflare works to ensure compliance with laws like GDPR, particularly in the face of legal orders that might put us in the difficult position of being required to violate them, even when that means involving the courts.

Encrypting your WAF Payloads with Hybrid Public Key Encryption (HPKE)

The Cloudflare Web Application Firewall (WAF) blocks more than 72B malicious requests per day from reaching our customers’ applications. Typically, our users can easily confirm these requests were not legitimate by checking the URL, the query parameters, or other metadata that Cloudflare provides as part of the security event log in the dashboard. Sometimes, though, users need to review the part of the request that actually matched a rule, and that raw data can be sensitive: request headers may contain cookies, and POST payloads may contain username and password pairs submitted during a login attempt, among other sensitive data.

We recognize that providing clear visibility in any security event is a core feature of a firewall, as this allows users to better fine-tune their rules. To accomplish this, while ensuring end-user privacy, we built encrypted WAF matched payload logging. This feature will log only the specific component of the request the WAF has deemed malicious — and it is encrypted using a customer-provided key to ensure that no Cloudflare employee can examine the data. Michael Tremante goes over this in full detail, explaining how only application owners who also have access to the Cloudflare dashboard as Super Administrators will be able to configure encrypted matched payload logging.

Supporting Jurisdictional Restrictions for Durable Objects

Durable Objects, currently in limited beta, already make it easy for customers to manage state on Cloudflare Workers without worrying about provisioning infrastructure. Greg McKeon announces in this blog post the upcoming launch of Jurisdictional Restrictions for Durable Objects, which ensure that a Durable Object only stores and processes data in a given geographical region. Jurisdictional Restrictions make it easy for developers to build serverless, stateful applications that not only comply with today’s regulations but can handle new and updated policies as new regulations are added. Head over to the blog post to read more and also request an invite to the beta.

I want my Cloudflare TV

We have also had a full week of Cloudflare TV segments focused on privacy and compliance; you can find the full list and more details on our dedicated Privacy Week page.

As always, we welcome your feedback and comments and we stay committed to putting the privacy and safety of your data at the core of everything we do.

Cloudflare’s privacy-first Web Analytics is now available for everyone

Post Syndicated from Jon Levine original https://blog.cloudflare.com/privacy-first-web-analytics/


In September, we announced that we’re building a new, free Web Analytics product for the whole web. Today, I’m excited to announce that anyone can now sign up to use our new Web Analytics — even without changing your DNS settings. In other words, Cloudflare Web Analytics can now be deployed by adding an HTML snippet (in the same way many other popular web analytics tools are) making it easier than ever to use privacy-first tools to understand visitor behavior.

Why does the web need another analytics service?

Popular analytics vendors have business models driven by ad revenue. Using them implies a bargain: they track visitor behavior and create buyer profiles to retarget your visitors with ads; in exchange, you get free analytics.

At Cloudflare, our mission is to help build a better Internet, and part of that is to deliver essential web analytics to everyone with a website, without compromising user privacy. For free. We’ve never been interested in tracking users or selling advertising. We don’t want to know what you do on the Internet — it’s not our business.

Our customers have long relied on Cloudflare’s Analytics because we’re accurate, fast, and privacy-first. In September we released a big upgrade to analytics for our existing customers that made them even more flexible.

However, we know that there are many folks who can’t use our analytics, simply because they’re not able to onboard to use the rest of Cloudflare for Infrastructure — specifically, they’re not able to change their DNS servers. Today, we’re bringing the power of our analytics to the whole web. By adding a simple HTML snippet to your website, you can start measuring your web traffic — similar to other popular analytics vendors.

What can I do with Cloudflare Web Analytics?

We’ve worked hard to make our analytics as powerful and flexible as possible — while still being fast and easy to use.

When measuring analytics about your website, the most common questions are “how much traffic did I get?” and “how many people visited?” We answer this by measuring page views (the total number of times a page was loaded) and visits (the number of times someone landed on a page from another website).

With Cloudflare Web Analytics, it’s easy to switch between measuring page views or visits. Within each view, you can see top pages, countries, device types and referrers.


My favorite thing is the ability to add global filters, and to quickly drill into the most important data with actions like “zoom” and “group by”. Say you publish a new blog post, and you want to see the top sites that send you traffic right after you email your subscribers about it. It’s easy to zoom into the time period right after you sent the email, and group by to see the top pages. Then you can add a filter to just that page — and then finally view top referrers for that page. It’s magic!

Best of all, our analytics is free, and there are no limits on the amount of traffic you can send it. Thanks to our ABR technology, we can serve accurate analytics for websites that get anywhere from one to one billion requests per day.

How does the new Web Analytics work?

Traditionally, Cloudflare Analytics works by measuring traffic at our edge. This has some great benefits; namely, it catches all traffic, even from clients that block JavaScript or don’t load HTML. At the edge, we can also block bots, add protection from our WAF, and measure the performance of your origin server.

The new Web Analytics works like most other measurement tools: by tracking visitors on the client. We’ve long had client-side measuring tools with Browser Insights, but these were only available to orange-cloud users (i.e., customers who proxy their traffic through Cloudflare).

Today, for the first time, anyone can get access to our client-side analytics — even if you don’t use the rest of Cloudflare. Just add our JavaScript snippet to any website, and we can start collecting metrics.

How do I sign up?

We’ve worked hard to make onboarding as simple as possible.

First, enter the name of your website. It’s important to use the domain name that your analytics will be served on — we use this to filter out any unwanted “spam” analytics reports.


(At this time, you can only add analytics from one website to each Cloudflare account. In the coming weeks we’ll add support for multiple analytics properties per account.)

Next, you’ll see a script tag that you can copy onto your website. We recommend adding this just before the closing </body> tag on the pages you want to measure.


And that’s it! After you release your website and start getting visits, you’ll be able to see them in analytics.

What does privacy-first mean?

Being privacy-first means we don’t track individual users for the purposes of serving analytics. We don’t use any client-side state (like cookies or localStorage) for analytics purposes. Cloudflare also doesn’t track users over time via their IP address, User Agent string, or any other immutable attributes for the purposes of displaying analytics — we consider “fingerprinting” even more intrusive than cookies, because users have no way to opt out.

The concept of a “visit” is key to this approach. Rather than count unique IP addresses, which would require storing state about what each visitor does, we can simply count the number of page views that come from a different site. This provides a perfectly usable metric that doesn’t compromise on privacy.
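
As a rough illustration of that counting rule (a toy sketch, not Cloudflare’s actual pipeline), assume each log entry carries the page URL and the referrer; a visit is then simply a page view whose referrer comes from a different hostname:

# Toy illustration: count page views and visits without any per-user state.
# A "visit" is a page view whose referrer is a different site (or no referrer at all).
from urllib.parse import urlparse

def count_metrics(events, own_hostname):
    """events: iterable of (page_url, referrer) pairs taken from request logs."""
    page_views = 0
    visits = 0
    for page_url, referrer in events:
        page_views += 1
        ref_host = urlparse(referrer).hostname if referrer else None
        if ref_host != own_hostname:
            visits += 1
    return page_views, visits

views, visits = count_metrics(
    [("https://example.com/post", "https://news.example/feed"),
     ("https://example.com/about", "https://example.com/post")],
    own_hostname="example.com",
)
print(views, visits)  # 2 page views, 1 visit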


What’s next

This is just the start for our privacy-first Analytics. We’re excited to integrate more closely with the rest of Cloudflare, and give customers even more detailed stats about performance and security (not just traffic). We’re also hoping to make our analytics even more powerful as a standalone product by building support for alerts, real-time updates, and more.

Please let us know if you have any questions or feedback, and happy measuring!

Deprecating the __cfduid cookie

Post Syndicated from Sergi Isasi original https://blog.cloudflare.com/deprecating-cfduid-cookie/


Cloudflare is deprecating the __cfduid cookie. Starting on 10 May 2021, we will stop adding a “Set-Cookie” header on all HTTP responses. The last __cfduid cookies will expire 30 days after that.

We never used the __cfduid cookie for any purpose other than providing critical performance and security services on behalf of our customers. Although, we must admit, calling it something with “uid” in it really made it sound like it was some sort of user ID. It wasn’t. Cloudflare never tracks end users across sites or sells their personal data. However, we didn’t want there to be any questions about our cookie use, and we don’t want any customer to think they need a cookie banner because of what we do.

The primary use of the cookie is for detecting bots on the web. Malicious bots may disrupt a service that has been explicitly requested by an end user (through DDoS attacks) or compromise the security of a user’s account (e.g. through brute force password cracking or credential stuffing, among others). We use many signals to build machine learning models that can detect automated bot traffic. The presence and age of the __cfduid cookie was just one signal in our models. So for our customers who benefit from our bot management products, the __cfduid cookie is a tool that allows them to provide a service explicitly requested by the end user.

The value of the __cfduid cookie is derived from a one-way MD5 hash over the client’s IP address, date/time, user agent, hostname, and referring website — which means we can’t tie a cookie to a specific person. Still, as a privacy-first company, we thought: Can we find a better way to detect bots that doesn’t rely on collecting end user IP addresses?
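
To illustrate the general shape of that kind of one-way derivation (the exact fields, ordering, encoding, and any additional inputs are not described in the post, so treat this purely as a sketch):

# Illustrative only: a one-way MD5 digest over connection metadata, similar in
# spirit to the described __cfduid derivation. The field set and formatting here
# are assumptions, not the actual implementation.
import hashlib
from datetime import datetime, timezone

def derive_cookie_value(ip, user_agent, hostname, referrer):
    material = "|".join([
        ip,
        datetime.now(timezone.utc).isoformat(timespec="hours"),
        user_agent,
        hostname,
        referrer,
    ])
    return hashlib.md5(material.encode()).hexdigest()

print(derive_cookie_value("203.0.113.7", "Mozilla/5.0", "example.com", "https://search.example"))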

For the past few weeks, we’ve been experimenting to see if it’s possible to run our bot detection algorithms without using this cookie. We’ve learned that it will be possible for us to transition away from using this cookie to detect bots. We’re giving notice of deprecation now to give our customers time to transition, while our bot management team works to ensure there’s no decline in quality of our bot detection algorithms after removing this cookie. (Note that some Bot Management customers will still require the use of a different cookie after April 1.)

While this is a small change, we’re excited about any opportunity to make the web simpler, faster, and more private.

Good-bye ESNI, hello ECH!

Post Syndicated from Christopher Patton original https://blog.cloudflare.com/encrypted-client-hello/


Most communication on the modern Internet is encrypted to ensure that its content is intelligible only to the endpoints, i.e., client and server. Encryption, however, requires a key and so the endpoints must agree on an encryption key without revealing the key to would-be attackers. The most widely used cryptographic protocol for this task, called key exchange, is the Transport Layer Security (TLS) handshake.

In this post we’ll dive into Encrypted Client Hello (ECH), a new extension for TLS that promises to significantly enhance the privacy of this critical Internet protocol. Today, a number of privacy-sensitive parameters of the TLS connection are negotiated in the clear. This leaves a trove of metadata available to network observers, including the endpoints’ identities, how they use the connection, and so on.

ECH encrypts the full handshake so that this metadata is kept secret. Crucially, this closes a long-standing privacy leak by protecting the Server Name Indication (SNI) from eavesdroppers on the network. Encrypting the SNI is important because it is the clearest signal of which server a given client is communicating with. However, and perhaps more significantly, ECH also lays the groundwork for adding future security features and performance enhancements to TLS while minimizing their impact on the privacy of end users.

ECH is the product of close collaboration, facilitated by the IETF, between academics and tech industry leaders, including Cloudflare, our friends at Fastly and Mozilla (both of whom are the affiliations of co-authors of the standard), and many others. This feature represents a significant upgrade to the TLS protocol, one that builds on bleeding edge technologies, like DNS-over-HTTPS, that are only now coming into their own. As such, the protocol is not yet ready for Internet-scale deployment. This article is intended as a signpost on the road to full handshake encryption.

Background

The story of TLS is the story of the Internet. As our reliance on the Internet has grown, so the protocol has evolved to address ever-changing operational requirements, use cases, and threat models. The client and server don’t just exchange a key: they negotiate a wide variety of features and parameters: the exact method of key exchange; the encryption algorithm; who is authenticated and how; which application layer protocol to use after the handshake; and much, much more. All of these parameters impact the security properties of the communication channel in one way or another.

SNI is a prime example of a parameter that impacts the channel’s security. The SNI extension is used by the client to indicate to the server the website it wants to reach. This is essential for the modern Internet, as it’s common nowadays for many origin servers to sit behind a single TLS operator. In this setting, the operator uses the SNI to determine who will authenticate the connection: without it, there would be no way of knowing which TLS certificate to present to the client. The problem is that SNI leaks to the network the identity of the origin server the client wants to connect to, potentially allowing eavesdroppers to infer a lot of information about their communication. (Of course, there are other ways for a network observer to identify the origin — the origin’s IP address, for example. But co-locating with other origins on the same IP address makes it much harder to use this metric to determine the origin than it is to simply inspect the SNI.)

Although protecting SNI is the impetus for ECH, it is by no means the only privacy-sensitive handshake parameter that the client and server negotiate. Another is the ALPN extension, which is used to decide which application-layer protocol to use once the TLS connection is established. The client sends the list of applications it supports — whether it’s HTTPS, email, instant messaging, or the myriad other applications that use TLS for transport security — and the server selects one from this list, and sends its selection to the client. By doing so, the client and server leak to the network a clear signal of their capabilities and what the connection might be used for.

Some features are so privacy-sensitive that their inclusion in the handshake is a non-starter. One idea that has been floated is to replace the key exchange at the heart of TLS with password-authenticated key-exchange (PAKE). This would allow password-based authentication to be used alongside (or in lieu of) certificate-based authentication, making TLS more robust and suitable for a wider range of applications. The privacy issue here is analogous to SNI: servers typically associate a unique identifier to each client (e.g., a username or email address) that is used to retrieve the client’s credentials; and the client must, somehow, convey this identity to the server during the course of the handshake. If sent in the clear, then this personally identifiable information would be easily accessible to any network observer.

A necessary ingredient for addressing all of these privacy leaks is handshake encryption, i.e., the encryption of handshake messages in addition to application data. Sounds simple enough, but this solution presents another problem: how do the client and server pick an encryption key if, after all, the handshake is itself a means of exchanging a key? Some parameters must be sent in the clear, of course, so the goal of ECH is to encrypt all handshake parameters except those that are essential to completing the key exchange.

In order to understand ECH and the design decisions underpinning it, it helps to understand a little bit about the history of handshake encryption in TLS.

Handshake encryption in TLS

TLS had no handshake encryption at all prior to the latest version, TLS 1.3. In the wake of the Snowden revelations in 2013, the IETF community began to consider ways of countering the threat that mass surveillance posed to the open Internet. When the process of standardizing TLS 1.3 began in 2014, one of its design goals was to encrypt as much of the handshake as possible. Unfortunately, the final standard falls short of full handshake encryption, and several parameters, including SNI, are still sent in the clear. Let’s take a closer look to see why.

The TLS 1.3 protocol flow is illustrated in Figure 1. Handshake encryption begins as soon as the client and server compute a fresh shared secret. To do this, the client sends a key share in its ClientHello message, and the server responds in its ServerHello with its own key share. Having exchanged these shares, the client and server can derive a shared secret. Each subsequent handshake message is encrypted using the handshake traffic key derived from the shared secret. Application data is encrypted using a different key, called the application traffic key, which is also derived from the shared secret. These derived keys have different security properties: to emphasize this, they are illustrated with different colors.
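
To make the idea of deriving distinct keys from a single shared secret concrete, here is a simplified sketch using X25519 and HKDF from Python’s cryptography library; the derivation labels are invented for illustration, and this is not the real TLS 1.3 key schedule.

# Simplified illustration: derive separate handshake- and application-traffic keys
# from one (EC)DHE shared secret. The labels and steps below are NOT the actual
# TLS 1.3 key schedule; they only show the "one secret, several derived keys" idea.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Client and server each generate a key share (sent in ClientHello / ServerHello).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Both sides compute the same shared secret from their own private key and the peer's share.
shared_secret = client_priv.exchange(server_priv.public_key())

def derive(secret, label):
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=label).derive(secret)

handshake_traffic_key = derive(shared_secret, b"illustrative handshake traffic")
application_traffic_key = derive(shared_secret, b"illustrative application traffic")
assert handshake_traffic_key != application_traffic_key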

The first handshake message that is encrypted is the server’s EncryptedExtensions. The purpose of this message is to protect the server’s sensitive handshake parameters, including the server’s ALPN extension, which contains the application selected from the client’s ALPN list. Key-exchange parameters are sent unencrypted in the ClientHello and ServerHello.

Figure 1: The TLS 1.3 handshake.

All of the client’s handshake parameters, sensitive or not, are sent in the ClientHello. Looking at Figure 1, you might be able to think of ways of reworking the handshake so that some of them can be encrypted, perhaps at the cost of additional latency (i.e., more round trips over the network). However, extensions like SNI create a kind of “chicken-and-egg” problem.

The client doesn’t encrypt anything until it has verified the server’s identity (this is the job of the Certificate and CertificateVerify messages) and the server has confirmed that it knows the shared secret (the job of the Finished message). These measures ensure the key exchange is authenticated, thereby preventing monster-in-the-middle (MITM) attacks in which the adversary impersonates the server to the client in a way that allows it to decrypt messages sent by the client.  Because SNI is needed by the server to select the certificate, it needs to be transmitted before the key exchange is authenticated.

In general, ensuring confidentiality of handshake parameters used for authentication is only possible if the client and server already share an encryption key. But where might this key come from?

Full handshake encryption in the early days of TLS 1.3. Interestingly, full handshake encryption was once proposed as a core feature of TLS 1.3. In early versions of the protocol (draft-10, circa 2015), the server would offer the client a long-lived public key during the handshake, which the client would use for encryption in subsequent handshakes. (This design came from a protocol called OPTLS, which in turn was borrowed from the original QUIC proposal.) Called “0-RTT”, the primary purpose of this mode was to allow the client to begin sending application data prior to completing a handshake. In addition, it would have allowed the client to encrypt its first flight of handshake messages following the ClientHello, including its own EncryptedExtensions, which might have been used to protect the client’s sensitive handshake parameters.

Ultimately this feature was not included in the final standard (RFC 8446, published in 2018), mainly because its usefulness was outweighed by its added complexity. In particular, it does nothing to protect the initial handshake in which the client learns the server’s public key. Parameters that are required for server authentication of the initial handshake, like SNI, would still be transmitted in the clear.

Nevertheless, this scheme is notable as the forerunner of other handshake encryption mechanisms, like ECH, that use public key encryption to protect sensitive ClientHello parameters. The main problem these mechanisms must solve is key distribution.

Before ECH there was (and is!) ESNI

The immediate predecessor of ECH was the Encrypted SNI (ESNI) extension. As its name implies, the goal of ESNI was to provide confidentiality of the SNI. To do so, the client would encrypt its SNI extension under the server’s public key and send the ciphertext to the server. The server would attempt to decrypt the ciphertext using the secret key corresponding to its public key. If decryption were to succeed, then the server would proceed with the connection using the decrypted SNI. Otherwise, it would simply abort the handshake. The high-level flow of this simple protocol is illustrated in Figure 2.

Figure 2: The TLS 1.3 handshake with the ESNI extension. It is identical to the TLS 1.3 handshake, except the SNI extension has been replaced with ESNI.

For key distribution, ESNI relied on another critical protocol: the Domain Name System (DNS). In order to use ESNI to connect to a website, the client would piggyback a request for a TXT record containing the ESNI public key onto its standard A/AAAA queries. For example, to get the key for crypto.dance, the client would request the TXT record of _esni.crypto.dance:

$ dig _esni.crypto.dance TXT +short
"/wGuNThxACQAHQAgXzyda0XSJRQWzDG7lk/r01r1ZQy+MdNxKg/mAqSnt0EAAhMBAQQAAAAAX67XsAAAAABftsCwAAA="

The base64-encoded blob contains an ESNI public key and related parameters such as the encryption algorithm.
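
For the curious, the blob is simply a binary ESNIKeys structure. Below is a small sketch that decodes it and peeks at the leading version field (the remaining fields, such as the key shares and cipher suites, are left unparsed here):

# Decode the base64 TXT record payload and inspect the leading ESNIKeys version field.
# Everything after the version (key shares, cipher suites, padded length, validity
# window, extensions) is deliberately not parsed in this sketch.
import base64

blob = base64.b64decode(
    "/wGuNThxACQAHQAgXzyda0XSJRQWzDG7lk/r01r1ZQy+MdNxKg/mAqSnt0EAAhMBAQQAAAAAX67XsAAAAABftsCwAAA="
)
version = int.from_bytes(blob[:2], "big")
print(f"ESNIKeys version: 0x{version:04x}, total length: {len(blob)} bytes")
# -> ESNIKeys version: 0xff01, ...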

But what’s the point of encrypting SNI if we’re just going to leak the server name to network observers via a plaintext DNS query? Deploying ESNI this way became feasible with the introduction of DNS-over-HTTPS (DoH), which enables encryption of DNS queries to resolvers that provide the DoH service (1.1.1.1 is an example of such a service). Another crucial feature of DoH is that it provides an authenticated channel for transmitting the ESNI public key from the DoH server to the client. This prevents cache-poisoning attacks that originate from the client’s local network: in the absence of DoH, a local attacker could prevent the client from offering the ESNI extension by returning an empty TXT record, or coerce the client into using ESNI with a key it controls.

While ESNI took a significant step forward, it falls short of our goal of achieving full handshake encryption. Apart from being incomplete — it only protects SNI — it is vulnerable to a handful of sophisticated attacks, which, while hard to pull off, point to theoretical weaknesses in the protocol’s design that need to be addressed.

ESNI was deployed by Cloudflare and enabled by Firefox, on an opt-in basis, in 2018, an  experience that laid bare some of the challenges with relying on DNS for key distribution. Cloudflare rotates its ESNI key every hour in order to minimize the collateral damage in case a key ever gets compromised. DNS artifacts are sometimes cached for much longer, the result of which is that there is a decent chance of a client having a stale public key. While Cloudflare’s ESNI service tolerates this to a degree, every key must eventually expire. The question that the ESNI protocol left open is how the client should proceed if decryption fails and it can’t access the current public key, via DNS or otherwise.

Another problem with relying on DNS for key distribution is that several endpoints might be authoritative for the same origin server, but have different capabilities. For example, a request for the A record of “example.com” might return one of two different IP addresses, each operated by a different CDN. The TXT record for “_esni.example.com” would contain the public key for one of these CDNs, but certainly not both. The DNS protocol does not provide a way of atomically tying together resource records that correspond to the same endpoint. In particular, it’s possible for a client to inadvertently offer the ESNI extension to an endpoint that doesn’t support it, causing the handshake to fail. Fixing this problem requires changes to the DNS protocol. (More on this below.)

The future of ESNI. In the next section, we’ll describe the ECH specification and how it addresses the shortcomings of ESNI. Despite its limitations, however, the practical privacy benefit that ESNI provides is significant. Cloudflare intends to continue its support for ESNI until ECH is production-ready.

The ins and outs of ECH

The goal of ECH is to encrypt the entire ClientHello, thereby closing the gap left in TLS 1.3 and ESNI by protecting all privacy-sensitive handshake parameters. Similar to ESNI, the protocol uses a public key, distributed via DNS and obtained using DoH, for encryption during the client’s first flight. But ECH has improvements to key distribution that make the protocol more robust to DNS cache inconsistencies. Whereas the ESNI server aborts the connection if decryption fails, the ECH server attempts to complete the handshake and supply the client with a public key it can use to retry the connection.

But how can the server complete the handshake if it’s unable to decrypt the ClientHello? As illustrated in Figure 3, the ECH protocol actually involves two ClientHello messages: the ClientHelloOuter, which is sent in the clear, as usual; and the ClientHelloInner, which is encrypted and sent as an extension of the ClientHelloOuter. The server completes the handshake with just one of these ClientHellos: if decryption succeeds, then it proceeds with the ClientHelloInner; otherwise, it proceeds with the ClientHelloOuter.

Figure 3: The TLS 1.3 handshake with the ECH extension.

The ClientHelloInner is composed of the handshake parameters the client wants to use for the connection. This includes sensitive values, like the SNI of the origin server it wants to reach (called the backend server in ECH parlance), the ALPN list, and so on. The ClientHelloOuter, while also a fully-fledged ClientHello message, is not used for the intended connection. Instead, the handshake is completed by the ECH service provider itself (called the client-facing server), signaling to the client that its intended destination couldn’t be reached due to decryption failure. In this case, the service provider also sends along the correct ECH public key with which the client can retry the handshake, thereby “correcting” the client’s configuration. (This mechanism is similar to how the server distributed its public key for 0-RTT mode in the early days of TLS 1.3.)

At a minimum, both ClientHellos must contain the handshake parameters that are required for a server-authenticated key-exchange. In particular, while the ClientHelloInner contains the real SNI, the ClientHelloOuter also contains an SNI value, that of the client-facing server, which the client expects to verify in case of ECH decryption failure. If the connection is established using the ClientHelloOuter, then the client is expected to immediately abort the connection and retry the handshake with the public key provided by the server. It’s not necessary that the client specify an ALPN list in the ClientHelloOuter, nor any other extension used to guide post-handshake behavior. All of these parameters are encapsulated by the encrypted ClientHelloInner.
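
The following toy sketch captures only the shape of this idea: encrypt the sensitive inner parameters under the client-facing server’s public key so that only it can recover them. It is not real ECH and it is not HPKE; it simply composes an ephemeral X25519 exchange with HKDF and AES-GCM for illustration, and every label and field in it is made up.

# Toy model of "encrypt the ClientHelloInner to the client-facing server's public key".
# Real ECH delegates this to HPKE, with specific labels, AAD construction, and key
# configuration handling; this sketch only shows the ephemeral-key + KDF + AEAD shape.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kdf(shared, info):
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=info).derive(shared)

# The ECH service provider's long-term key pair (its public half would be published, e.g. via DNS).
server_priv = X25519PrivateKey.generate()
server_pub = server_priv.public_key()

# Client: encrypt the sensitive inner parameters under an ephemeral share and the server's public key.
client_eph = X25519PrivateKey.generate()
key = kdf(client_eph.exchange(server_pub), b"toy-ech")
nonce = os.urandom(12)
inner_hello = b"SNI=backend.example, ALPN=[h2,http/1.1]"
ciphertext = AESGCM(key).encrypt(nonce, inner_hello, b"ClientHelloOuter")  # outer hello as AAD

# Server: recompute the same key from the client's ephemeral public share and decrypt.
server_key = kdf(server_priv.exchange(client_eph.public_key()), b"toy-ech")
assert AESGCM(server_key).decrypt(nonce, ciphertext, b"ClientHelloOuter") == inner_hello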

This design resolves — quite elegantly, I think — most of the challenges for securely deploying handshake encryption encountered by earlier mechanisms. Importantly, the design of ECH was not conceived in a vacuum. The protocol reflects the diverse perspectives of the IETF community, and its development dovetails with other IETF standards that are crucial to the success of ECH.

The first is an important new DNS feature known as the HTTPS resource record type. At a high level, this record type is intended to allow multiple HTTPS endpoints that are authoritative for the same domain name to advertise different capabilities for TLS. This makes it possible to rely on DNS for key distribution, resolving one of the deployment challenges uncovered by the initial ESNI deployment. For a deep dive into this new record type and what it means for the Internet more broadly, check out Alessandro Ghedini’s recent blog post on the subject.

The second is the CFRG’s Hybrid Public Key Encryption (HPKE) standard, which specifies an extensible framework for building public key encryption schemes suitable for a wide variety of applications. In particular, ECH delegates all of the details of its handshake encryption mechanism to HPKE, resulting in a much simpler and easier-to-analyze specification. (Incidentally, HPKE is also one of the main ingredients of Oblivious DNS-over-HTTPS.)

The road ahead

The current ECH specification is the culmination of a multi-year collaboration. At this point, the overall design of the protocol is fairly stable. In fact, the next draft of the specification will be the first to be targeted for interop testing among implementations. Still, there remain a number of details that need to be sorted out. Let’s end this post with a brief overview of the road ahead.

Resistance to traffic analysis

Ultimately, the goal of ECH is to ensure that TLS connections made to different origin servers behind the same ECH service provider are indistinguishable from one another. In other words, when you connect to an origin behind, say, Cloudflare, no one on the network between you and Cloudflare should be able to discern which origin you reached, or which privacy-sensitive handshake parameters you and the origin negotiated. Apart from an immediate privacy boost, this property, if achieved, paves the way for the deployment of new features for TLS without compromising privacy.

Encrypting the ClientHello is an important step towards achieving this goal, but we need to do a bit more. An important attack vector we haven’t discussed yet is traffic analysis. This refers to the collection and analysis of properties of the communication channel that betray part of the ciphertext’s contents, but without cracking the underlying encryption scheme. For example, the length of the encrypted ClientHello might leak enough information about the SNI for the adversary to make an educated guess as to its value (this risk is especially high for domain names that are either particularly short or particularly long). It is therefore crucial that the length of each ciphertext is independent of the values of privacy-sensitive parameters. The current ECH specification provides some mitigations, but their coverage is incomplete. Thus, improving ECH’s resistance to traffic analysis is an important direction for future work.
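
As a small illustration of that requirement (a sketch of the general idea, not the padding scheme the specification actually defines), padding the server name to a fixed maximum length before encryption removes the length signal:

# Illustrative only: pad the server name to a fixed upper bound before encrypting,
# so the resulting ciphertext length no longer depends on the real name's length.
# The actual ECH padding rules live in the specification, not in this sketch.
MAX_NAME_LEN = 255  # assumed fixed bound for this example

def pad_server_name(name: bytes) -> bytes:
    if len(name) > MAX_NAME_LEN:
        raise ValueError("server name too long for the chosen padding bound")
    # Length-prefix the real name, then fill the remainder with zero bytes.
    return len(name).to_bytes(2, "big") + name + b"\x00" * (MAX_NAME_LEN - len(name))

assert len(pad_server_name(b"a.example")) == len(pad_server_name(b"a-very-long-domain-name.example"))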

The spectre of ossification

An important open question for ECH is the impact it will have on network operations.

One of the lessons learned from the deployment of TLS 1.3 is that upgrading a core Internet protocol can trigger unexpected network behavior. Cloudflare was one of the first major TLS operators to deploy TLS 1.3 at scale; when browsers like Firefox and Chrome began to enable it on an experimental basis, they observed a significantly higher rate of connection failures compared to TLS 1.2. The root cause of these failures was network ossification, i.e., the tendency of middleboxes — network appliances between clients and servers that monitor and sometimes intercept traffic — to write software that expects traffic to look and behave a certain way. Changing the protocol before middleboxes had the chance to update their software led to middleboxes trying to parse packets they didn’t recognize, triggering software bugs that, in some instances, caused connections to be dropped completely.

This problem was so widespread that, instead of waiting for network operators to update their software, the design of TLS 1.3 was altered in order to mitigate the impact of network ossification. The ingenious solution was to make TLS 1.3 “look like” another protocol that middleboxes are known to tolerate. Specifically, the wire format and even the contents of handshake messages were made to resemble TLS 1.2. These two protocols aren’t identical, of course — a curious network observer can still distinguish between them — but they look and behave similar enough to ensure that the majority of existing middleboxes don’t treat them differently. Empirically, it was found that this strategy significantly reduced the connection failure rate enough to make deployment of TLS 1.3 viable.

Once again, ECH represents a significant upgrade for TLS for which the spectre of network ossification looms large. The ClientHello contains parameters, like SNI, that have existed in the handshake for a long time, and we don’t yet know what the impact will be of encrypting them. In anticipation of the deployment issues ossification might cause, the ECH protocol has been designed to look as much like a standard TLS 1.3 handshake as possible. The most notable difference is the ECH extension itself: if middleboxes ignore it — as they should, if they are compliant with the TLS 1.3 standard — then the rest of the handshake will look and behave very much as usual.

It remains to be seen whether this strategy will be enough to ensure the wide-scale deployment of ECH. If so, it is notable that this new feature will help to mitigate the impact of future TLS upgrades on network operations. Encrypting the full handshake reduces the risk of ossification since it means there are fewer visible protocol features for software to ossify on. We believe this will be good for the health of the Internet overall.

Conclusion

The old TLS handshake is (unintentionally) leaky. Operational requirements of both the client and server have led to privacy-sensitive parameters, like SNI, being negotiated completely in the clear and available to network observers. The ECH extension aims to close this gap by enabling encryption of the full handshake. This represents a significant upgrade to TLS, one that will help preserve end-user privacy as the protocol continues to evolve.

The ECH standard is a work-in-progress. As this work continues, Cloudflare is committed to doing its part to ensure this important upgrade for TLS reaches Internet-scale deployment.

Privacy needs to be built into the Internet

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/internet-privacy/

The first phase of the Internet lasted until the early 1990s. During that time it was created and debugged, and grew globally. Its growth was not hampered by concerns about data security or privacy. Until the 1990s the race was for connectivity.

Connectivity meant that people could get online and use the Internet wherever they were. Because the "inter" in Internet implied interoperability, the network was able to grow rapidly using a variety of technologies. Think dialup modems using ordinary phone lines, cable modems sending the Internet over coax originally designed for television, Ethernet, and, later, fibre optic connections and WiFi.

By the 1990s, the Internet was being used widely and for uses far beyond its academic origins. Early web pioneers, like Netscape, realized that the potential for e-commerce was gigantic but would be held back if people couldn’t have confidence in the security of online transactions.

Thus, with the introduction of SSL in 1994, the Internet moved to a second phase where security became paramount. Securing the web, and the Internet more generally, helped create the dotcom rush and the secure, online world we live in today. But this security was misunderstood by some as providing guarantees about privacy, which it did not.

People feel safe going online to shop, read the news, look up ailments and search for a life partner because cryptography prevents an eavesdropper from seeing what they are doing, and provides a guarantee that a website is who it claims to be. But it does not provide any privacy guarantee. The website you are visiting knows, at the very least, the IP address of your Internet connection.

And even with encryption, a well-placed eavesdropper can learn at least the names of the websites you are visiting, because that information leaks from protocols that weren't designed to preserve privacy.
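
To make the leak concrete, here is a small, self-contained sketch (Node.js, loopback only) in which a throwaway TCP listener plays the role of an on-path observer. It captures the first bytes a TLS client sends, the ClientHello, and checks that the requested hostname appears in them as plain text; the hostname is a placeholder:

```ts
import * as net from "node:net";
import * as tls from "node:tls";

const hostname = "private-site.example"; // placeholder name we pretend to visit

// A throwaway TCP server standing in for an on-path observer: it simply
// captures the first bytes the TLS client sends, which is the ClientHello.
const observer = net.createServer((conn) => {
  conn.once("data", (clientHello) => {
    const visible = clientHello.includes(Buffer.from(hostname, "ascii"));
    console.log(`Hostname visible in cleartext ClientHello: ${visible}`); // true
    conn.destroy();
    observer.close();
  });
});

observer.listen(0, "127.0.0.1", () => {
  const { port } = observer.address() as net.AddressInfo;
  // Start a TLS handshake toward the "observer". The handshake will fail,
  // but the SNI extension carrying the hostname has already been sent.
  const client = tls.connect({ host: "127.0.0.1", port, servername: hostname });
  client.on("error", () => { /* expected: the observer never completes the handshake */ });
});
```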

People who aim to remain anonymous on the Internet therefore turn to technologies like Tor or VPNs. But remaining anonymous from a website you shop from or an airline’s online booking site doesn’t make any sense. In those instances, the company you are dealing with will know who you are because you tell them your home address, name, passport number etc. You want them to know.

That makes privacy a nuanced thing: you want to remain anonymous to an eavesdropper but make sure a retailer knows where you live.

The connectivity phase of the Internet made it possible for you to connect to a computer anywhere in the world just as easily as one in your own city. The security phase of the Internet solved the problem of giving you confidence to hand over information to an airline or a retailer. Combining these two phases resulted in an Internet you can trust to transmit your data, but one that gives you little control over where that data ultimately ends up.

Phase 3

A French citizen could just as easily buy goods from a Spanish website as from a North American one. In both cases, the retailer would know the French name and address where the purchases were to be delivered. This creates a conundrum for a privacy-conscious citizen. The Internet created an amazing global platform for commerce, news and information (how easy it is for the French citizen to stay in contact with family in Cote d’Ivoire and even read the local news there from afar).

And while they shopped, an eavesdropper (such as an ISP, a coffee shop owner or an intelligence agency) could tell which website the French citizen was visiting.

The Internet also meant that your information and mine is dispersed across the world. Different countries have different rules about how that data is to be stored and shared, and countries and regions have data-sharing agreements to allow cross-border transfer of private information about citizens.

Concerns about eavesdropping and where data ends up have created the world we are living in today where privacy concerns are coming to the forefront, especially in Europe but in many other countries as well.

In addition, the economics and flexibility of SaaS and cloud applications meant that it made sense to transfer data to a limited number of large data centers (which are sometimes confusingly called regions) where data from people all over the world can be processed. And, by and large, that was the world of the Internet: universal connectivity, widespread security, and data sharing through cross-border agreements.

This apparent utopia got snowed on by the leaking of secret documents describing the relationship between the US NSA (and its Five Eyes partners) and large Internet companies, and revealing that intelligence agencies were scooping up data from choke points on the Internet. Those revelations brought to the public's attention the fact that their data could, in some cases, be accessed by foreign intelligence agencies.

Quite quickly those large data centers in far-flung countries looked like a bad idea, and governments and citizens started to demand control of data. This is the third phase of the Internet: privacy joins universal connectivity and security as a core requirement.

But what is control over data or privacy? Different governments have different ideas and different requirements, which can differ for different data sets. Some countries are convinced that the only way to control data is to keep it inside their countries, where they believe they can control who gets access to it. Other countries believe that they can address the risks by putting restrictions in place to prevent certain governments or companies from getting access to data. And the regulatory challenges are only getting more complicated.

This will be an enormous challenge for companies that have built a business on aggregating citizens’ information in order to target advertising, but it is also a challenge for anyone offering an Internet service. Just as companies have had to face the scourge of DDoS attacks and hacking, and have had to stay up to date with the latest in encryption technology, they will fundamentally have to store and process their customers’ data in different countries in different ways.

The European Union, in particular, has pushed a comprehensive approach to data privacy. Although the EU has had data protection principles in place since 1995, the implementation of the EU’s General Data Protection Regulation (GDPR) in 2018 has generated a new era of privacy online. GDPR imposes limitations on how the personal data of EU residents can be collected, stored, deleted, modified and otherwise processed.

Among the GDPR’s requirements are provisions on how EU personal data should be protected if that personal data leaves the EU. Although the US and the EU worked together to develop a set of voluntary commitments to make it easier for companies to transfer data between the two countries, that framework — the Privacy Shield — was invalidated this past summer. As a result, companies are grappling with how they can transfer data outside the EU, consistent with GDPR requirements. Recommendations recently issued by the European Data Protection Board (EDPB), which require data exporters to assess the law in third countries, determine whether that law adequately protects privacy, and if necessary, obtain guarantees of additional safeguards from data importers, have only added to companies’ concerns.

This anxiety over whether there are controls over data adequate to address the concerns of European regulators has prompted many of our customers to explore whether it is possible to prevent data subject to the GDPR from leaving the EU at all.

Gone are the days when all the world’s data could be processed in a massive data center regardless of its provenance.

One reaction to this change could be a retreat into every country building its own online email services, HR systems, e-commerce providers, and more. This would be a massive wasted effort. There are economies of scale if the same service can be used by Germans, Peruvians, Indonesians, Australians…

The answer to this privacy challenge is the same as the answer to the connectivity and security phases of the Internet: build it! We need to build a privacy-respecting Internet and give companies the tools to easily build privacy-respecting applications.

This week we'll be talking about new tools from Cloudflare that make building privacy-respecting applications easy by allowing companies to situate their users' data in the countries and regions of their choosing. And we'll be talking about new protocols that build privacy into the very structure of the Internet. We'll also give an update on the latest quantum-resistant algorithms that help keep private data private today and into the far future.

We’ll show how it’s possible to run a massive DNS resolver service like 1.1.1.1 and preserve users’ privacy through a clever new protocol. We’ll look at how to make passwords that can’t be leaked. And we’ll give everyone the power to get web analytics without tracking people.

Welcome to Phase 3 of the Internet: always on, always secure, always private.

Introducing the Cloudflare Data Localization Suite

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/introducing-the-cloudflare-data-localization-suite/

Today we’re excited to announce the Cloudflare Data Localization Suite, which helps businesses get the performance and security benefits of Cloudflare’s global network, while making it easy to set rules and controls at the edge about where their data is stored and protected.

The Data Localization Suite is available now as an add-on for Enterprise customers.

Cloudflare’s network is private and compliant by design. Preserving end-user privacy is core to our mission of helping to build a better Internet; we’ve never sold personal data about customers or end users of our network. We comply with laws like GDPR and maintain certifications such as ISO-27001.

Today, we’re announcing tools that make it simple for our customers to build the same rigor into their own applications. In this post, I’ll explain the different types of data that we process and how the Data Localization Suite keeps this data local.

We’ll also talk about how Cloudflare makes it possible to build applications that comply with data locality laws, while remaining fast, secure and scalable.

Why keep data local?

Cloudflare's customers increasingly want, or face legal requirements for, data locality: they want to control the geographic location where their data is handled. Many categories of data that our customers process (including healthcare, legal, or financial data) may be subject to obligations that specify the data be stored or processed in a specific location. The preference or requirement for data localization is growing across jurisdictions such as the EU, India, and Brazil; over time, we expect more customers in more places will be expected to keep data local.

Although "data locality" sounds like a simple concept, our conversations with Cloudflare customers make clear that they face a number of unique challenges in trying to move toward this goal. The availability of information on their Internet properties will remain global (they don't want to limit access to their websites to local jurisdictions), but they want to make sure data stays local. Variously, they are trying to figure out:

  • How do I build local requirements into my global online operations?
  • How do I make sure unencrypted traffic is only available locally?
  • How do I make sure personal data is handled according to localization obligations?
  • How do I make sure my applications only store data in certain locations?

The Cloudflare Data Localization Suite attempts to respond to these questions.

Until now, customers who wanted to localize their data had to choose to restrict their application to one data center, or to one cloud provider’s region. This is a fragile approach, fraught with performance, reliability, and security challenges. Cloudflare is creating a new paradigm: customers should be able to get the performance and security benefits of our global network, while effortlessly keeping their data local.

Encryption is the backbone of privacy

Before we go into data locality, we should discuss encryption. Privacy isn’t possible without strong encryption; otherwise, anyone could snoop your customers’ data, regardless of where it’s stored.

Data is often described as being “in transit” and “at rest”. It’s critically important that both are encrypted. Data “in transit” refers to just that—data while it’s moving about on the wire, whether a local network or the public Internet. “At rest” generally means stored on a disk somewhere, whether a spinning HDD or a modern SSD.

In transit, Cloudflare can enforce that all traffic to end-users uses modern TLS and gets the highest level of encryption possible. We can also enforce that all traffic back to customers’ origin servers is always encrypted. Communication between all our edge and core data centers is always encrypted.

Cloudflare encrypts all of the data we handle at rest, usually with disk-level encryption. From cached files on our edge network, to configuration state in databases in our core data centers—every byte is encrypted at rest.
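
Cloudflare's own at-rest protection is done at the disk level, but the underlying idea is ordinary authenticated encryption. As a rough illustration only (not our implementation), here is a sketch using Node's built-in crypto module, with key management simplified to a random in-memory key:

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative only: a random in-memory key stands in for a properly
// managed data-encryption key.
const key = randomBytes(32);

function encryptRecord(plaintext: string): { iv: Buffer; ciphertext: Buffer; tag: Buffer } {
  const iv = randomBytes(12); // unique nonce per record
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptRecord(record: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, record.iv);
  decipher.setAuthTag(record.tag);
  return Buffer.concat([decipher.update(record.ciphertext), decipher.final()]).toString("utf8");
}

const stored = encryptRecord("customer configuration state");
console.log(decryptRecord(stored)); // "customer configuration state"
```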

Control where TLS private keys can be accessed

Given the importance of encryption, one of the most sensitive pieces of data that our customers trust us to protect is their cryptographic keys, which enable data to be decrypted. Cloudflare offers two ways for customers to ensure that their private keys are only accessible in locations they specify.

Keyless SSL allows a customer to store and manage their own SSL private keys for use with Cloudflare on any external infrastructure of their choosing. Customers can use a variety of systems for their keystore, including hardware security modules (“HSMs”), virtual servers, and hardware running Unix/Linux and Windows that is housed in environments customers control. Cloudflare never has access to the private key with Keyless SSL.

Geo Key Manager gives customers granular control over which locations should store their keys. For example, a customer can choose for the private keys required for inspection of traffic to only be accessible inside data centers located in the European Union.
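
As a hedged sketch of what this can look like in practice, the snippet below uploads a certificate and asks that its private key be held only in EU data centers. The endpoint and field names are assumptions from memory rather than guaranteed API details, so check the current Cloudflare API documentation before relying on them:

```ts
// Hypothetical sketch: upload a custom certificate and ask that the private
// key material be restricted to EU data centers. The endpoint, request body,
// and "geo_restrictions" field are assumptions; consult the Cloudflare API
// docs for the authoritative shape. Run as an ES module on Node 18+.
const zoneId = "ZONE_ID"; // placeholder
const apiToken = process.env.CF_API_TOKEN ?? "";

const response = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${zoneId}/custom_certificates`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      certificate: "-----BEGIN CERTIFICATE-----\n...",   // placeholder PEM
      private_key: "-----BEGIN PRIVATE KEY-----\n...",   // placeholder PEM
      geo_restrictions: { label: "eu" },                  // keep the key in the EU
    }),
  }
);

console.log((await response.json()).success);
```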

Manage where HTTPS requests and responses are inspected

In order to deploy our WAF, or detect malicious bot traffic, Cloudflare must terminate TLS in our edge data centers and inspect HTTPS request and response payloads.

Regional Services gives organizations control over where their traffic is inspected. With Regional Services enabled, traffic is ingested on Cloudflare’s global Anycast network at the location closest to the client, where we can provide L3 and L4 DDoS protection. Instead of being inspected at the HTTP level at that data center, this traffic is securely transmitted to Cloudflare data centers inside the region selected by the customer and handled there.

Control the logs and analytics generated by your traffic

In addition to making our customers’ infrastructure and teams faster, more secure, and more reliable, we also provide insights into what our services do, and how customers can make better use of them. We gather metadata about the traffic that goes through our edge data centers, and use this to improve the operation of our own network: for example, by crafting WAF rules to block the latest attacks, or by developing machine learning models to detect malicious bots. We also make this data available to our customers in the form of logs and analytics.

Only a subset of this metadata needs to be processed in our core data centers in the US and EU. This data contains information about how many requests were served, how much data was sent, how long requests took, and other information that is essential for the operation of our network.

With Edge Log Delivery, customers can send logs directly from the edge to their partner of choice—for example, an Azure storage bucket in their preferred region, or an instance of Splunk that runs in an on-premise data center. With this option, customers can still get their complete logs in their preferred region, without these logs first flowing through either of our US or EU core data centers.
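
The sketch below shows the general shape of that idea: trim a log record down to the fields a customer actually wants before it is forwarded to a sink in their chosen region. The field names and the sink URL are hypothetical, not Cloudflare's actual log schema or API:

```ts
// Illustrative sketch only: the field names and forwarding endpoint are
// hypothetical, not Cloudflare's real log schema or delivery API.
interface EdgeLogRecord {
  timestamp: string;
  clientIP: string;
  clientCountry: string;
  host: string;
  path: string;
  status: number;
  bytesSent: number;
}

// Drop the client IP entirely; keep only coarse, aggregate-friendly fields
// before the record ever leaves the region where it was produced.
function minimize(record: EdgeLogRecord) {
  const { clientIP, ...rest } = record;
  return rest;
}

async function deliver(record: EdgeLogRecord): Promise<void> {
  await fetch("https://logs.eu-sink.example/ingest", { // customer-chosen regional sink
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(minimize(record)),
  });
}
```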

Edge Log Delivery is in early beta for Enterprise customers today—please visit our product page for more information.

Ultimately, we are working towards giving customers full control over where their metadata is stored, and for how long. In the coming year, we plan to let customers choose exactly which fields are stored, for how long, and in which location.

Building location-aware applications from the ground up

So far, we’ve discussed how Cloudflare’s products can offer global performance and security solutions for our customers, while keeping their existing keys, application data, and metadata local.

But we know that customers are also struggling to use existing, traditional cloud systems to manage their data locality needs. Existing platforms may allow code or data to be deployed to a specific region, but having copies of applications in each region, and managing state across each of them, can be challenging at best (or impossible at worst).

The ultimate promise of serverless has been to allow any developer to say "I don't care where my code runs, just make it scale." Increasingly, another promise will need to be "I do care where my code runs, and I need more control to satisfy my compliance department." Cloudflare Workers gives you the best of both worlds, with instant scaling, locations that span more than 100 countries around the world, and the granularity to choose exactly what you need.

We are announcing a major improvement that lets customers control where their applications store data: Workers Durable Objects will support Jurisdiction Restrictions.  Durable Objects provide globally consistent state and coordination to serverless applications running on the Cloudflare Workers platform. Jurisdiction Restrictions will make it possible for users to ensure that their Durable Objects do not store data or run outside of a given jurisdiction—making it trivially simple to build applications that combine global performance with local compliance. With automatic migration of Durable Objects, adapting to new rules will be as simple as adding a tag to a set of Durable Objects.
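
Here is a sketch of what using such a restriction might look like from a Worker, written against the Workers TypeScript types (@cloudflare/workers-types). The exact API, an option passed when creating an object ID, is an assumption based on this announcement and may differ in the final release:

```ts
// Sketch of a jurisdiction-restricted Durable Object from a Worker.
// The jurisdiction option on newUniqueId() is assumed from the announcement;
// check the Workers documentation for the final API shape.
export interface Env {
  USER_DATA: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Ask for an object ID pinned to the EU jurisdiction: the object, and the
    // data it stores, should never be created or run outside the EU.
    const id = env.USER_DATA.newUniqueId({ jurisdiction: "eu" });
    const stub = env.USER_DATA.get(id);
    return stub.fetch(request);
  },
};
```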

Building for the long haul

The data localization landscape is constantly evolving. Since we began working on the Data Localization Suite, the European Data Protection Board has released new guidance about how data may be transferred between the EU and the US. And we know this is just the beginning — over time, more regions and more industries will have data localization requirements.

At Cloudflare, we stay on top of the latest developments around data protection so our customers don’t have to. The Data Localization Suite gives our customers the tools to set rules and controls at the edge about where their data is stored and protected, while taking advantage of our global network.

Welcome to Privacy & Compliance Week

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/welcome-to-privacy-and-compliance-week/

Tomorrow kicks off Cloudflare’s Privacy & Compliance Week. Over the course of the week, we’ll be announcing ways that our customers can use our service to ensure they are in compliance with an increasingly complicated set of rules and laws around the world.

Early in Cloudflare’s history, when Michelle, Lee, and I were talking about the business we wanted to build, we kept coming back to the word trust. We realized early on that if we were not trustworthy then no one would ever choose to route their Internet traffic through us. Above all else, we are in the trust business.

Every employee at Cloudflare goes through orientation. I teach one of the sessions titled “What Is Cloudflare?” I fill several white boards with notes and diagrams talking about where we fit in to the market. But I leave one for the end so I can write the word TRUST, in capital letters, and underline it three times. Trust is the foundation of our business.

Standing Up For Our Customers from Our Early Days

That’s why we’ve made decisions that other companies may not have. In January 2013 the FBI showed up at our door with a National Security Letter requesting information on a customer. It was incredibly scary.

We had fewer than 30 employees at the time. The agents, while professional, were incredibly intimidating. And the letter ordered us to turn over information and forbade us from discussing it with anyone other than our attorneys.

There's a proper role for law enforcement, but National Security Letters at the time had almost no oversight: they could be written and enforced by a single branch of the US government, and they gagged recipients from talking about them indefinitely. That ran counter to the foundational principles of due process. So we decided to sue the United States government.

I am thankful for Cloudflare’s Board for encouraging us to always fight for our principles. I am also thankful for the Electronic Frontier Foundation, who served as our attorneys in the case. It took several years, and we were gagged from talking about it until 2017, but ultimately the FBI withdrew the letter and Congress has taken steps to reform the law and ensure better oversight. There is a proper role for law enforcement, but when it crosses a line and infringes on basic principles of due process, then we believe it’s important to challenge it.

It’s all about trust.

Recognizing It’s Not Our Data

The same is true for the commercial side of our business. As soon as Cloudflare took off, the ad tech companies came knocking: “Do you have any idea how much you could make if you just let us cookie and retarget individuals passing through your network?” I took a lot of those meetings in our early days, but always came away feeling uneasy. Talking through it with Michelle she concisely expressed why we would never be in the advertising business: “It’s not our data.”

And that’s right. For our customers who do run ads on their sites, if we sold the data then we’d effectively be undercutting them. And, more fundamentally, if we were some invisible service that tracked you online without your knowledge then that would fail the creepiness test. While we believe there can be good ad-supported businesses, Cloudflare will never be one.

As a result, we've always seen any personally identifiable information that passes through our network as a toxic asset and purged it as quickly as possible. That can create tension, because we are a security company, and part of security requires us to know, for instance, whether a particular IP address is sending DDoS traffic. But we've invested in implementing or inventing technologies — like Universal SSL, Privacy Pass, Encrypted DNS, and ESNI — that keep your private data private, including from us.

Again, it’s all about trust.

Privacy In Our DNA

While Cloudflare started in California, we have had a global perspective from our earliest days. Today, nearly half of our C-level executives are Europeans, including our CTO, CIO, and CFO. Michelle, my co-founder and Cloudflare’s COO, is Canadian, a country that shares many of Europe’s values around privacy. We have offices around the world and far more engineers working outside of Silicon Valley than inside of it.

I wrote the first version of our Privacy Policy back in 2010. It included from the first draft this clear statement: “Cloudflare will not sell, rent, or give away any of your personal information without your consent. It is our overriding privacy principle that any personal information you provide to us is just that: private.” That is still true today. While other tech companies have made their policies more flexible over time, we’ve made ours stricter, including committing to a list of things we have never done and will fight like hell to never do:

  • Cloudflare has never turned over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone.
  • Cloudflare has never installed any law enforcement software or equipment anywhere on our network.
  • Cloudflare has never provided any law enforcement organization a feed of our customers’ content transiting our network.
  • Cloudflare has never modified customer content at the request of law enforcement or another third party.
  • Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
  • Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

While many tech companies struggled to comply with privacy regulations such as GDPR, at Cloudflare it was relatively easy because the principles the regulation imposed were at our core from the very outset. We don't have a business if we don't have trust, and being transparent and principled, and respecting the sanctity of personal data, is critical to us continuously earning that trust.

Improving the Privacy of Our Service

But we’re not done; we can do more. There are things that have irked me about our service for a long time. For instance, from our earliest days we’ve used the _cfduid cookie to help with some of our security functions. That has meant that if you used Cloudflare you couldn’t be completely cookieless. John Graham-Cumming and I challenged the team earlier this year to see if we could kill it. Our team rose to the challenge and this week we’re announcing its deprecation. To my mind, that announcement alone is worth an entire week of celebrations.

We have multiple data centers around the world that aggregate and process data in order to display logs and provide features. While having geographic redundancy helps with availability, some customers want to make sure their data never leaves a particular region. This week we’ll be giving users a lot more control over what data is processed where.

And, like we have during Privacy and Encryption weeks in years past, we will continue to invest in technologies to enable better encryption and more private use of core Internet services like DNS. Wouldn’t it be cool if, for example, we could ensure that no DNS provider could ever see both who is using their service and also where on the Internet those users are going? Stay tuned!

Helping Customers With Increasingly Complex Compliance Challenges

While we continue to invest in ensuring Cloudflare leads the way on privacy, more and more of our customers are also looking for solutions to be more private themselves. This month we expect that the EU’s new Digital Services Act will be proposed. We expect that it will continue to raise the bar on how companies doing business in Europe have to handle customers’ data. While the Internet giants will have the resources to comply with these heightened requirements, for everyone else they will create new challenges.

To that end, this week we’re announcing the Cloudflare Data Localization Suite. It provides our customers with a powerful set of tools to ensure they have control over how and where their data is processed in order to help comply with increasingly complex local data processing requirements. This includes enhancements to Workers, our edge computing and storage platform, to help modern applications get built such that users’ data never leaves their own country or region.

It’s clear to us that the model of sending all your customer data back to a data center in Ashburn, VA, regardless of where those customers are located in the world, will look as antiquated in an increasingly privacy-conscious world as carrying a stack of punch cards to a central mainframe would today. In the not too distant future, regulations are inevitably going to force data storage and processing to be local. And, with a network that today already spans more than 100 countries, Cloudflare stands ready to help our customers enable that more private future.

Stay Tuned

Stay tuned this week to our blog for a series of announcements. Since these are topics that are so important in Europe right now, we’ll be simultaneously publishing most of them in French, Italian, Spanish, Portuguese, and German as well as English. Also check out Cloudflare TV where we’ll be interviewing a series of people whose views on privacy and compliance we respect and have learned from.

Cloudflare’s mission is to help build a better Internet. And there is no doubt that a better Internet is a more private Internet. With that in mind, welcome to Privacy & Compliance Week.

re:Invent 2020 – Your guide to AWS Identity and Data Protection sessions

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/reinvent-2020-your-guide-to-aws-identity-and-data-protection-sessions/

AWS re:Invent will certainly be different in 2020! Instead of seeing you all in Las Vegas, this year re:Invent will be a free, three-week virtual conference. One thing that will remain the same is the variety of sessions, including many Security, Identity, and Compliance sessions. As we developed sessions, we looked to customers—asking where they would like to expand their knowledge. One way we did this was shared in a recent Security blog post, where we introduced a new customer polling feature that provides us with feedback directly from customers. The initial results of the poll showed that Identity and Access Management and Data Protection are top-ranking topics for customers. We wanted to highlight some of the re:Invent sessions for these two important topics so that you can start building your re:Invent schedule. Each session is offered at multiple times, so you can sign up for the time that works best for your location and schedule.

Managing your Identities and Access in AWS

AWS identity: Secure account and application access with AWS SSO
Ron Cully, Principal Product Manager, AWS

Dec 1, 2020 | 12:00 PM – 12:30 PM PST
Dec 1, 2020 | 8:00 PM – 8:30 PM PST
Dec 2, 2020 | 4:00 AM – 4:30 AM PST

AWS SSO provides an easy way to centrally manage access at scale across all your AWS Organizations accounts, using identities you create and manage in AWS SSO, Microsoft Active Directory, or external identity providers (such as Okta Universal Directory or Azure AD). This session explains how you can use AWS SSO to manage your AWS environment, and it covers key new features to help you secure and automate account access authorization.

Getting started with AWS identity services
Becky Weiss, Senior Principal Engineer, AWS

Dec 1, 2020 | 1:30 PM – 2:00 PM PST
Dec 1, 2020 | 9:30 PM – 10:00 PM PST
Dec 2, 2020 | 5:30 AM – 6:00 AM PST

The number, range, and breadth of AWS services are large, but the set of techniques that you need to secure them is not. Your journey as a builder in the cloud starts with this session, in which practical examples help you quickly get up to speed on the fundamentals of becoming authenticated and authorized in the cloud, as well as on securing your resources and data correctly.

AWS identity: Ten identity health checks to improve security in the cloud
Cassia Martin, Senior Security Solutions Architect, AWS

Dec 2, 2020 | 9:30 AM – 10:00 AM PST
Dec 2, 2020 | 5:30 PM – 6:00 PM PST
Dec 3, 2020 | 1:30 AM – 2:00 AM PST

Get practical advice and code to help you achieve the principle of least privilege in your existing AWS environment. From enabling logs to disabling root, the provided checklist helps you find and fix permissions issues in your resources, your accounts, and throughout your organization. With these ten health checks, you can improve your AWS identity and achieve better security every day.

AWS identity: Choosing the right mix of AWS IAM policies for scale
Josh Du Lac, Principal Security Solutions Architect, AWS

Dec 2, 2020 | 11:00 AM – 11:30 AM PST
Dec 2, 2020 | 7:00 PM – 7:30 PM PST
Dec 3, 2020 | 3:00 AM – 3:30 AM PST

This session provides both a strategic and tactical overview of various AWS Identity and Access Management (IAM) policies that provide a range of capabilities for the security of your AWS accounts. You probably already use a number of these policies today, but this session will dive into the tactical reasons for choosing one capability over another. This session zooms out to help you understand how to manage these IAM policies across a multi-account environment, covering their purpose, deployment, validation, limitations, monitoring, and more.

Zero Trust: An AWS perspective
Quint Van Deman, Principal WW Identity Specialist, AWS

Dec 2, 2020 | 12:30 PM – 1:00 PM PST
Dec 2, 2020 | 8:30 PM – 9:00 PM PST
Dec 3, 2020 | 4:30 AM – 5:00 AM PST

AWS customers have continuously asked, “What are the optimal patterns for ensuring the right levels of security and availability for my systems and data?” Increasingly, they are asking how patterns that fall under the banner of Zero Trust might apply to this question. In this session, you learn about the AWS guiding principles for Zero Trust and explore the larger subdomains that have emerged within this space. Then the session dives deep into how AWS has incorporated some of these concepts, and how AWS can help you on your own Zero Trust journey.

AWS identity: Next-generation permission management
Brigid Johnson, Senior Software Development Manager, AWS

Dec 3, 2020 | 11:00 AM – 11:30 AM PST
Dec 3, 2020 | 7:00 PM – 7:30 PM PST
Dec 4, 2020 | 3:00 AM – 3:30 AM PST

This session is for central security teams and developers who manage application permissions. This session reviews a permissions model that enables you to scale your permissions management with confidence. Learn how to set your organization up for access management success with permission guardrails. Then, learn about granting workforce permissions based on attributes, so they scale as your users and teams adjust. Finally, learn about the access analysis tools and how to use them to identify and reduce broad permissions and give users and systems access to only what they need.

How Goldman Sachs administers temporary elevated AWS access
Harsha Sharma, Solutions Architect, AWS
Chana Garbow Pardes, Associate, Goldman Sachs
Jewel Brown, Analyst, Goldman Sachs

Dec 16, 2020 | 2:00 PM – 2:30 PM PST
Dec 16, 2020 | 10:00 PM – 10:30 PM PST
Dec 17, 2020 | 6:00 AM – 6:30 AM PST

Goldman Sachs takes security and access to AWS accounts seriously. While empowering teams with the freedom to build applications autonomously is critical for scaling cloud usage across the firm, guardrails and controls need to be set in place to enable secure administrative access. In this session, learn how the company built its credential brokering workflow and administrator access for its users. Learn how, with its simple application that uses proprietary and AWS services, including Amazon DynamoDB, AWS Lambda, AWS CloudTrail, Amazon S3, and Amazon Athena, Goldman Sachs is able to control administrator credentials and monitor and report on actions taken for audits and compliance.

Data Protection

Do you need an AWS KMS custom key store?
Tracy Pierce, Senior Consultant, AWS

Dec 15, 2020 | 9:45 AM – 10:15 AM PST
Dec 15, 2020 | 5:45 PM – 6:15 PM PST
Dec 16, 2020 | 1:45 AM – 2:15 AM PST

AWS Key Management Service (AWS KMS) has integrated with AWS CloudHSM, giving you the option to create your own AWS KMS custom key store. In this session, you learn more about how a KMS custom key store is backed by an AWS CloudHSM cluster and how it enables you to generate, store, and use your KMS keys in the hardware security modules that you control. You also learn when and if you really need a custom key store. Join this session to learn why you might choose not to use a custom key store and instead use the AWS KMS default.

Using certificate-based authentication on containers & web servers on AWS
Josh Rosenthol, Senior Product Manager, AWS
Kevin Rioles, Manager, Infrastructure & Security, BlackSky

Dec 8, 2020 | 12:45 PM – 1:15 PM PST
Dec 8, 2020 | 8:45 PM – 9:15 PM PST
Dec 9, 2020 | 4:45 AM – 5:15 AM PST

In this session, BlackSky talks about its experience using AWS Certificate Manager (ACM) end-entity certificates for the processing and distribution of real-time satellite geospatial intelligence and monitoring. Learn how BlackSky uses certificate-based authentication on containers and web servers within its AWS environment to help make TLS ubiquitous in its deployments. The session details the implementation, architecture, and operations best practices that the company chose and how it was able to operate ACM at scale across multiple accounts and regions.

The busy manager’s guide to encryption
Spencer Janyk, Senior Product Manager, AWS

Dec 9, 2020 | 11:45 AM – 12:15 PM PST
Dec 9, 2020 | 7:45 PM – 8:15 PM PST
Dec 10, 2020 | 3:45 AM – 4:15 AM PST

In this session, explore the functionality of AWS cryptography services and learn when and where to deploy each of the following: AWS Key Management Service, AWS Encryption SDK, AWS Certificate Manager, AWS CloudHSM, and AWS Secrets Manager. You also learn about defense-in-depth strategies including asymmetric permissions models, client-side encryption, and permission segmentation by role.

Building post-quantum cryptography for the cloud
Alex Weibel, Senior Software Development Engineer, AWS

Dec 15, 2020 | 12:45 PM – 1:15 PM PST
Dec 15, 2020 | 8:45 PM – 9:15 PM PST
Dec 16, 2020 | 4:45 AM – 5:15 AM PST

This session introduces post-quantum cryptography and how you can use it today to secure TLS communication. Learn about recent updates on standards and existing deployments, including the AWS post-quantum TLS implementation (pq-s2n). A description of the hybrid key agreement method shows how you can combine a new post-quantum key encapsulation method with a classical key exchange to secure network traffic today.

Data protection at scale using Amazon Macie
Neel Sendas, Senior Technical Account Manager, AWS

Dec 17, 2020 | 7:15 AM – 7:45 AM PST
Dec 17, 2020 | 3:15 PM – 3:45 PM PST
Dec 17, 2020 | 11:15 PM – 11:45 PM PST

Data Loss Prevention (DLP) is a common topic among companies that work with sensitive data. If an organization can’t identify its sensitive data, it can’t protect it. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. In this session, we will share details of the design and architecture you can use to deploy Macie at large scale.

While sessions are virtual this year, they will be offered at multiple times with live moderators and “Ask the Expert” sessions available to help answer any questions that you may have. We look forward to “seeing” you in these sessions. Please see the re:Invent agenda for more details and to build your schedule.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Author

Himanshu Verma

Himanshu is a Worldwide Specialist for AWS Security Services. In this role, he leads the go-to-market creation and execution for AWS Data Protection and Threat Detection & Monitoring services, field enablement, and strategic customer advisement. Prior to AWS, he held roles as Director of Product Management, engineering and development, working on various identity, information security and data protection technologies.

Verified, episode 2 – A Conversation with Emma Smith, Director of Global Cyber Security at Vodafone

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/verified-episode-2-conversation-with-emma-smith-director-of-global-cyber-security-at-vodafone/

Over the past 8 months, it’s become more important for us all to stay in contact with peers around the globe. Today, I’m proud to bring you the second episode of our new video series, Verified: Presented by AWS re:Inforce. Even though we couldn’t be together this year at re:Inforce, our annual security conference, we still wanted to share some of the conversations with security leaders that would have taken place at the conference. The series showcases conversations with security leaders around the globe. In episode two, I’m talking to Emma Smith, Vodafone’s Global Cyber Security Director.

Vodafone is a global technology communications company with an optimistic culture. Their focus is connecting people and building the digital future for society. During our conversation, Emma detailed how the core values of the Global Cyber Security team were inspired by the company. “We’ve got a team of people who are ultimately passionate about protecting customers, protecting society, protecting Vodafone, protecting all of our services and our employees.” Emma shared experiences about the evolution of the security organization during her past 5 years with the company.

We were also able to touch on one of Emma’s passions, diversity and inclusion. Emma has worked to implement diversity and drive a policy of inclusion at Vodafone. In June, she was named Diversity Champion in the SC Awards Europe. In her own words: “It makes me realize that my job is to smooth the way for everybody else and to try and remove some of those obstacles or barriers that were put in their way… it means that I’m really passionate about trying to get a very diverse team in security, but also in Vodafone, so that we reflect our customer base, so that we’ve got diversity of thinking, of backgrounds, of experience, and people who genuinely feel comfortable being themselves at work—which is easy to say but really hard to create that culture of safety and belonging.”

Stay tuned for future episodes of Verified: Presented by AWS re:Inforce here on the AWS Security Blog. You can watch episode one, an interview with Jason Chan, Vice President of Information Security at Netflix on YouTube. If you have an idea or a topic you’d like covered in this series, please drop us a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

California Proposition 24 Passes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/california-proposition-24-passes.html

California’s Proposition 24, aimed at improving the California Consumer Privacy Act, passed this week. Analyses are very mixed. I was very mixed on the proposition, but on the whole I supported it. The proposition has some serious flaws, and was watered down by industry, but voting for privacy feels like it’s generally a good thing.

The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/the-nsa-is-refusing-to-disclose-its-policy-on-backdooring-commercial-products.html

Senator Ron Wyden asked, and the NSA didn’t answer:

The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others.

These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications.

The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

[…]

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws.

“At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.”

Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. That backdoor was taken advantage of by an unnamed foreign adversary.

Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool here by altering Juniper’s version of Dual EC.

Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere.

Juniper has never identified the customer, and declined to comment for this story.

Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used.

Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn't share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they're doing in our name, and whether it's still putting surveillance ahead of security.

IMSI-Catchers from Canada

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/imsi-catchers-from-canada.html

Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments:

L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records. Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete.

The article goes on to talk about replacement surveillance systems from the Canadian company Octasic.

Octasic’s Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices. Florida’s state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G).

[…]

A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge information about their signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone.

Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has “R&D facilities in India,” as well as a “worldwide sales support network.” Nyxcell’s website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner.

Introducing the first video in our new series, Verified, featuring Netflix’s Jason Chan

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/introducing-first-video-new-series-verified-featuring-netflix-jason-chan/

The year has been a profoundly different one for us all, and like many of you, I've been adjusting, both professionally and personally, to this "new normal." Here at AWS we've seen an increase in customers looking for secure solutions to maintain productivity in an increased work-from-home world. We've also seen an uptick in requests for training; it's clear that a sense of community and learning is critically important as workforces physically distance.

For these reasons, I’m happy to announce the launch of Verified: Presented by AWS re:Inforce. I’m hosting this series, but I’ll be joined by leaders in cloud security across a variety of industries. The goal is to have an open conversation about the common issues we face in securing our systems and tools. Topics will include how the pandemic is impacting cloud security, tips for creating an effective security program from the ground up, how to create a culture of security, emerging security trends, and more. Learn more by following me on Twitter (@StephenSchmidt), and get regular updates from @AWSSecurityInfo. Verified is just one of the many ways we will continue sharing best practices with our customers during this time. You can find more by reading the AWS Security Blog, reviewing our documentation, visiting the AWS Security and Compliance webpages, watching re:Invent and re:Inforce playlists, and/or reviewing the Security Pillar of Well Architected.

Our first conversation, above, is with Jason Chan, Vice President of Information Security at Netflix. Jason spoke to us about the security program at Netflix, his approach to hiring security talent, and how Zero Trust enables a remote workforce. Jason also has solid insights to share about how he started and grew the security program at Netflix.

“In the early days, what we were really trying to figure out is how do we build a large-scale consumer video-streaming service in the public cloud, and how do you do that in a secure way? There wasn’t a ton of expertise in that, so when I was building the security team at Netflix, I thought, ‘how do we bring in folks from a variety of backgrounds, generalists … to tackle this problem?’”

He also gave his view on how a growing security team can measure ROI. “I think it’s difficult to have a pure equation around that. So what we try to spend our time doing is really making sure that we, as a team, are aligned on what is the most important—what are the most important assets to protect, what are the most critical risks that we’re trying to prevent—and then make sure that leadership is aligned with that, because, as we all know, there’s not unlimited resources, right? You can’t hire an unlimited number of folks or spend an unlimited amount of money, so you’re always trying to figure out how do you prioritize, and how do you find where is going to be the biggest impact for your value?”

Check out Jason’s full interview above, and stay tuned for further videos in this series. If you have an idea or a topic you’d like covered in this series, please drop us a comment below. Thanks!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Google Responds to Warrants for “About” Searches

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/google-responds-to-warrants-for-about-searches.html

One of the things we learned from the Snowden documents is that the NSA conducts "about" searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An about search would be something like "show me anyone who has used this particular name in a communication," or "show me anyone who was at this particular location within this time frame." These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be “about” data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man, his stepfather, sometimes drove Molina’s white Honda. On October 25, 2018, police obtained records showing that Molina’s Honda had been impounded earlier that year after Molina’s stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

Free, Privacy-First Analytics for a Better Web

Post Syndicated from Jon Levine original https://blog.cloudflare.com/free-privacy-first-analytics-for-a-better-web/

Everyone with a website needs to know some basic facts about it: what pages are people visiting? Where in the world are they? What other sites sent traffic to their website?

There are “free” analytics tools out there, but they come at a cost: not money, but your users’ privacy. Today we’re announcing a brand new, privacy-first analytics service that’s open to everyone — even if they’re not already a Cloudflare customer. And if you’re a Cloudflare customer, we’ve enhanced our analytics to make them even more powerful than before.

The most important analytics feature: Privacy

The most popular analytics services available were built to help ad-supported sites sell more ads. But, a lot of websites don’t have ads. So if you use those services, you’re giving up the privacy of your users in order to understand how what you’ve put online is performing.

Cloudflare’s business has never been built around tracking users or selling advertising. We don’t want to know what you do on the Internet — it’s not our business. So we wanted to build an analytics service that gets back to what really matters for web creators, not necessarily marketers, and to give web creators the information they need in a simple, clean way that doesn’t sacrifice their visitors’ privacy. And giving web creators these analytics shouldn’t depend on their use of Cloudflare’s infrastructure for performance and security. (More on that in a bit.)

What does it mean for us to make our analytics “privacy-first”? Most importantly, it means we don’t need to track individual users over time for the purposes of serving analytics. We don’t use any client-side state, like cookies or localStorage, for the purposes of tracking users. And we don’t “fingerprint” individuals via their IP address, User Agent string, or any other data for the purpose of displaying analytics. (We consider fingerprinting even more intrusive than cookies, because users have no way to opt out.)

Counting visits without tracking users

One of the most essential stats about any website is: “how many people went there”? Analytics tools frequently show counts of “unique” visitors, which requires tracking individual users by a cookie or IP address.

We use the concept of a visit: a privacy-friendly measure of how people have interacted with your website. A visit is defined simply as a successful page view that has an HTTP referer that doesn’t match the hostname of the request. This tells you how many times people came to your website and clicked around before navigating away, but doesn’t require tracking individuals.
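To make that definition concrete, here is a minimal TypeScript sketch of the heuristic described above. It is an illustration, not Cloudflare’s actual implementation, and the handling of a missing or malformed Referer header is an assumption for the sake of the example.

```typescript
// Minimal sketch of the "visit" heuristic described above (hypothetical helper,
// not Cloudflare's implementation): a successful page view counts as a visit
// when its Referer hostname differs from the hostname of the request.
function isVisit(requestHostname: string, refererHeader: string | null, status: number): boolean {
  // Only successful page views are eligible.
  if (status < 200 || status >= 400) {
    return false;
  }
  // Assumption: a missing Referer (e.g. a direct navigation) is counted as a visit.
  if (!refererHeader) {
    return true;
  }
  try {
    const refererHostname = new URL(refererHeader).hostname;
    // Internal clicks (same hostname) are page views, not new visits.
    return refererHostname !== requestHostname;
  } catch {
    // Assumption: a malformed Referer is counted as a visit rather than dropped.
    return true;
  }
}

// Example: a hit arriving from a search engine counts as a visit,
// while a click from another page on the same site does not.
console.log(isVisit("example.com", "https://www.google.com/", 200));   // true
console.log(isVisit("example.com", "https://example.com/blog/", 200)); // false
```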

A visit has slightly different semantics from a “unique”, and you should expect this number to differ from other analytics tools.

All of the details, none of the bots

Our analytics deliver the most important metrics about your website, like page views and visits. But we know that an essential analytics feature is flexibility: the ability to add arbitrary filters, and slice-and-dice data as you see fit. Our analytics can show you the top hostnames, URLs, countries, and other critical metrics like status codes. You can filter on any of these metrics with a click and see the whole dashboard update.
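As a rough illustration of what filtering and grouping mean here, the sketch below applies a filter to aggregated request records and totals visits by a chosen dimension. The record shape and function are hypothetical, used only to show the idea conceptually; this is not Cloudflare’s dashboard code or API.

```typescript
// Illustrative sketch only (hypothetical types, not Cloudflare's API): filter
// aggregated request records, then group the remainder by one dimension,
// which is conceptually what the dashboard's filters and "group by" do.
interface RequestRecord {
  hostname: string;
  path: string;
  country: string;
  status: number;
  visits: number;
}

function filterAndGroup(
  records: RequestRecord[],
  predicate: (record: RequestRecord) => boolean,
  dimension: "hostname" | "path" | "country" | "status"
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const record of records.filter(predicate)) {
    const key = String(record[dimension]);
    totals.set(key, (totals.get(key) ?? 0) + record.visits);
  }
  return totals;
}

// Example: visits by country, restricted to server errors.
// filterAndGroup(records, (r) => r.status >= 500, "country");
```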

I’m especially excited about two features in our time series charts: the ability to drag-to-zoom into a narrower time range, and the ability to “group by” different dimensions to see data in a different way. This is a super powerful way to drill into an anomaly in traffic and quickly see what’s going on. For example, you might notice a spike in traffic, zoom into that spike, and then try different groupings to see what contributed the extra clicks. A GIF is worth a thousand words:

And for customers of our Bot Management product, we’re working on the ability to detect (and remove) automated traffic. Coming very soon, you’ll be able to see which bots are reaching your website and, with just a click, block them using Firewall Rules.

This is all possible thanks to our ABR analytics technology, which enables us to serve analytics very quickly for websites large and small. Check out our blog post to learn more about how this works.

Edge or Browser analytics? Why not both?

There are two ways to collect web analytics data: at the edge (or on an origin server), or in the client using a JavaScript beacon.

Historically, Cloudflare has collected analytics data at our edge. This has some nice benefits over traditional, client-side analytics approaches:

  • It’s more accurate because you don’t miss users who block third-party scripts, or JavaScript altogether
  • You can see all of the traffic back to your origin server, even if an HTML page doesn’t load
  • We can detect (and block) bots, apply Firewall Rules, and generally scrub traffic of unwanted noise
  • You can measure the performance of your origin server

Most web analytics providers, by contrast, use client-side measurement. This has some benefits as well (a minimal beacon sketch follows the list):

  • You can understand performance as your users see it — e.g., how long the page actually took to render
  • You can detect errors in client-side JavaScript execution
  • You can define custom event types emitted by JavaScript frameworks
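To make the contrast concrete, here is a minimal, hypothetical client-side beacon in TypeScript. The endpoint URL and event shape are assumptions for illustration; this is not Cloudflare’s actual snippet, which the post notes is still being finished. The point is simply that a page view can be reported without cookies, localStorage, or any per-user identifier.

```typescript
// Hypothetical client-side beacon sketch (not Cloudflare's snippet):
// report a page view without cookies, localStorage, or a per-user identifier.
interface PageViewEvent {
  url: string;
  referrer: string;
  timestamp: number;
}

function reportPageView(endpoint: string): void {
  const event: PageViewEvent = {
    url: location.href,
    referrer: document.referrer,
    timestamp: Date.now(),
  };
  // navigator.sendBeacon queues the request so it is delivered
  // even if the user navigates away while the page is unloading.
  navigator.sendBeacon(endpoint, JSON.stringify(event));
}

// The endpoint below is an assumption for illustration only.
reportPageView("https://analytics.example.com/collect");
```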

Ultimately, we want our customers to have the best of both worlds. We think it’s really powerful to get web traffic numbers directly from the edge. We also launched Browser Insights a year ago to augment our existing edge analytics with more performance information, and today Browser Insights are taking a big step forward by incorporating Web Vitals metrics.

But, we know not everyone can modify their DNS to take advantage of Cloudflare’s edge services. That’s why today we’re announcing a free, standalone analytics product for everyone.

How do I get it?

For existing Cloudflare customers on our Pro, Biz, and Enterprise plans, just go to your Analytics tab! Starting today, you’ll see a banner inviting you to opt in to the new analytics experience. (We plan to make this the default in a few weeks.)

But when building privacy-first analytics, we realized it’s important to make this accessible even to folks who don’t use Cloudflare today. You’ll be able to use Cloudflare’s web analytics even if you can’t change your DNS servers — just add our JavaScript, and you’re good to go.

We’re still putting the finishing touches on our JavaScript-based analytics, but you can sign up here and we’ll let you know when it’s ready.

The evolution of analytics at Cloudflare

Just over a year ago, Cloudflare’s analytics consisted of a simple set of metrics: cached vs uncached data transfer, or how many requests were blocked by the Firewall. Today we provide flexible, powerful analytics across all our products, including Firewall, Cache, Load Balancing and Network traffic.

While we’ve been focused on building analytics about our products, we realized that our analytics are also powerful as a standalone product. Today is just the first step on that journey. We have so much more planned: from real-time analytics, to ever-more performance analysis, and even allowing customers to add custom events.

We want to hear what you want most out of analytics — drop a note in the comments to let us know what you want to see next.