Tag Archives: CIO Week

Adding Zero Trust signals to Sumo Logic for better security insights

Post Syndicated from Corey Mahan original https://blog.cloudflare.com/zero-trust-signals-to-sumo-logic/

A picture is worth a thousand words and the same is true when it comes to getting visualizations, trends, and data in the form of a ready-made security dashboard.

Today we’re excited to announce the expansion of support for automated normalization and correlation of Zero Trust logs for Logpush in Sumo Logic’s Cloud SIEM. As a Cloudflare technology partner, Sumo Logic is the pioneer in continuous intelligence, a new category of software which enables organizations of all sizes to address the data challenges and opportunities presented by digital transformation, modern applications, and cloud computing.

The updated content in Sumo Logic Cloud SIEM helps joint Cloudflare customers reduce alert fatigue tied to Zero Trust logs and accelerates the triage process for security analysts by converging security and network data into high-fidelity insights. This new functionality complements the existing Cloudflare App for Sumo Logic designed to help IT and security teams gain insights, understand anomalous activity, and better trend security and network performance data over time.

Deeper integration to deliver Zero Trust insights

Using Cloudflare Zero Trust helps protect users, devices, and data, and in the process can create a large volume of logs. These logs are helpful and important because they provide the who, what, when, and where for activity happening within and across an organization. They contain information such as what website was accessed, who signed in to an application, or what data may have been shared from a SaaS service.

Until now, our integrations with Sumo Logic only allowed automated correlation of security signals for Cloudflare’s core services. While it’s critical to ensure collection of WAF and bot detection events across your fabric, extended visibility into Zero Trust components has become more important than ever with the explosion of distributed work and the adoption of hybrid and multi-cloud infrastructure architectures.

With the expanded Zero Trust logs now available in Sumo Logic Cloud SIEM, customers can get deeper context into security insights thanks to the broad set of network and security logs produced by Cloudflare products.

“As a long time Cloudflare partner, we’ve worked together to help joint customers analyze events and trends from their websites and applications to provide end-to-end visibility and improve digital experiences. We’re excited to expand this partnership to provide real-time insights into the Zero Trust security posture of mutual customers in Sumo Logic’s Cloud SIEM.”
John Coyle – Vice President of Business Development, Sumo Logic

How to get started

To take advantage of the suite of integrations for Cloudflare logs available via Logpush, first enable Logpush to Sumo Logic, which will ship logs directly to Sumo Logic’s cloud-native platform. Then, install the Cloudflare App and (for Cloud SIEM customers) enable forwarding of these logs to Cloud SIEM for automated normalization and correlation of security insights.

Note that Cloudflare’s Logpush service is only available to Enterprise customers. If you are interested in upgrading, please contact us here.

  1. Enable Logpush to Sumo Logic
    Cloudflare Logpush supports pushing logs directly to Sumo Logic via the Cloudflare dashboard or via API (a hedged sketch of the API call follows this list).
  2. Install the Cloudflare App for Sumo Logic
    Locate and install the Cloudflare app from the App Catalog, linked above. If you want to see a preview of the dashboards included with the app before installing, click Preview Dashboards. Once installed, you can now view key information in the Cloudflare Dashboards for all core services.
  3. (Cloud SIEM Customers) Forward logs to Cloud SIEM
    After the steps above, enable the updated parser for Cloudflare logs by adding the _parser field to your S3 source created when installing the Cloudflare App.
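
To illustrate step 1 above, here is a minimal sketch, using Python and the requests library, of creating an account-scoped Logpush job that ships Zero Trust (Gateway DNS) logs to a Sumo Logic HTTP source. The account-level logpush/jobs endpoint, the gateway_dns dataset name, and the sumo:// destination scheme reflect our reading of the Logpush documentation and should be verified there; the account ID, API token, and collector URL are placeholders.

# Hedged sketch: create a Logpush job for Zero Trust (Gateway DNS) logs.
# Endpoint path, dataset name, and destination scheme are assumptions to
# verify against Cloudflare's Logpush docs before relying on them.
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]
# Placeholder Sumo Logic hosted-collector HTTP source URL (keep it secret).
SUMO_HTTP_SOURCE = "endpoint.collection.sumologic.com/receiver/v1/http/REDACTED"

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "zero-trust-dns-to-sumo",
        "dataset": "gateway_dns",                        # Zero Trust DNS logs
        "destination_conf": f"sumo://{SUMO_HTTP_SOURCE}",
        "enabled": True,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])  # the new Logpush job ID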

What’s next

As more organizations move towards a Zero Trust model for security, it’s increasingly important to have visibility into every aspect of the network, with logs playing a crucial role in this effort.

If your organization is just getting started and not already using a tool like Sumo Logic, Cloudflare R2 is worth considering for log storage: it offers a scalable, cost-effective solution for retaining your logs.

We’re excited to continue closely working with technology partners to expand existing and create new integrations that help customers on their Zero Trust journey.

Cloudflare Zero Trust for managed service providers

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gateway-managed-service-provider/

As part of CIO week, we are announcing a new integration between our DNS Filtering solution and our Partner Tenant platform that supports parent-child policy requirements for our partner ecosystem and our direct customers. Our Tenant platform, launched in 2019, has allowed Cloudflare partners to easily integrate Cloudflare solutions across millions of customer accounts. Cloudflare Gateway, introduced in 2020, has grown from protecting personal networks to Fortune 500 enterprises in just a few short years. With the integration between these two solutions, we can now help Managed Service Providers (MSPs) support large, multi-tenant deployments with parent-child policy configurations and account-level policy overrides that seamlessly protect global employees from threats online.

Why work with Managed Service Providers?

Managed Service Providers (MSPs) are a critical part of the toolkit of many CIOs. In the age of disruptive technology, hybrid work, and shifting business models, outsourcing IT and security operations can be a fundamental decision that drives strategic goals and ensures business success across organizations of all sizes. An MSP is a third-party company that remotely manages a customer’s information technology (IT) infrastructure and end-user systems. MSPs promise deep technical knowledge, threat insights, and tenured expertise across a variety of security solutions to protect from ransomware, malware, and other online threats. The decision to partner with an MSP can allow internal teams to focus on more strategic initiatives while providing access to easily deployable, competitively priced IT and security solutions. Cloudflare has been making it easier for our customers to work with MSPs to deploy and manage a complete Zero Trust transformation.

One decision criterion for selecting an appropriate MSP is the provider’s ability to keep the partner’s best technology, security, and cost interests in mind. An MSP should be leveraging innovative and lower-cost security solutions whenever possible to drive the best value to your organization. Out-of-date technology can quickly incur higher implementation and maintenance costs compared to more modern and purpose-built solutions, given the broader attack surface brought about by hybrid work. In a developing space like Zero Trust, an effective MSP should be able to support vendors that can be deployed globally, managed at scale, and effectively enforce global corporate policy across business units. Cloudflare has worked with many MSPs, some of which we will highlight today, that implement and manage Zero Trust security policies cost-effectively at scale.

The MSPs we are highlighting have started to deploy Cloudflare Gateway DNS Filtering to complement their portfolio as part of a Zero Trust access control strategy. DNS filtering provides quick time-to-value for organizations seeking protection from ransomware, malware, phishing, and other Internet threats. DNS filtering is the process of using the Domain Name System to block malicious websites and prevent users from reaching harmful or inappropriate content on the Internet. This ensures that company data remains secure and allows companies to have control over what their employees can access on company-managed networks and devices.

Filtering policies are often set by the organization in consultation with the service provider. In some cases, these policies also need to be managed independently at the account or business-unit level by either the MSP or the customer. This means a parent-child relationship is very commonly required to balance the deployment of corporate-level rules with customization across devices, office locations, or business units. This structure is vital for MSPs that are deploying access policies across millions of devices and accounts.

Better together: Zero Trust ❤️ Tenant Platform

To make it easier for MSPs to manage millions of accounts with appropriate access controls and policy management, we integrated Cloudflare Gateway with our existing Tenant platform with a new feature that provides parent-child configurations. This allows MSP partners to create and manage accounts, set global corporate security policies, and allow appropriate management or overrides at the individual business unit or team level.

The Tenant platform gives MSPs the ability to create millions of end customer accounts at their discretion to support their specific onboarding and configurations. This also ensures proper separation of ownership between customers and allows end customers to access the Cloudflare dashboard directly, if required.

Each account created is a separate container of subscribed resources (Zero Trust policies, zones, Workers, etc.) for each of the MSP’s end customers. Customer administrators can be invited to each account as necessary for self-service management, while the MSP retains control of the capabilities enabled for each account.
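
As a rough illustration of that account creation flow, here is a hedged Python sketch of creating a single end customer account under a tenant. The endpoint shape and the type and unit fields are assumptions to confirm against the Cloudflare Tenant API documentation, and the account name, token, and unit ID are placeholders.

# Hedged sketch: create one end customer (child) account via the Tenant API.
import os
import requests

resp = requests.post(
    "https://api.cloudflare.com/client/v4/accounts",
    headers={"Authorization": f"Bearer {os.environ['CF_TENANT_API_TOKEN']}"},
    json={
        "name": "Acme Widgets (end customer)",            # placeholder name
        "type": "standard",
        "unit": {"id": os.environ["CF_TENANT_UNIT_ID"]},  # the MSP's tenant unit
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])  # the new child account ID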

With MSPs now able to set up and manage accounts at scale, we’ll explore how the integration with Cloudflare Gateway lets them manage scaled DNS filtering policies for these accounts.

Tiered Zero Trust accounts

With individual accounts for each MSP end customer in place, MSPs can either fully manage the deployment or provide a self-service portal backed by Cloudflare configuration APIs. Supporting a configuration portal also means the MSP never wants an end customer to block access to the portal’s domain, so the MSP can add a hidden policy to each end customer account at onboarding, a simple one-time API call. Issues start to arise, however, whenever the MSP needs to push an update to that policy: it now has to update the policy once for each and every end customer, and for some MSPs that can mean over one million API calls.

To help turn this into a single API call, we introduced the concept of a top-level account, aka a parent account. This parent account allows MSPs to set global policies that are applied to all DNS queries before the subsequent MSP end customer (child account) policies. This structure helps ensure MSPs can set their own global policies for all of their child accounts, while each child account can further filter its DNS queries to meet its needs without impacting any other child account.
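
To make the difference concrete, here is a hedged Python sketch of the hidden portal policy described above, created once per child account versus once on the parent. The Gateway rules endpoint and request fields (action, filters, traffic) are assumptions to verify against the Cloudflare Zero Trust API documentation, and the portal domain and account IDs are hypothetical.

# Hedged sketch: one "always allow the MSP portal" DNS rule per account
# versus a single rule on the parent account.
import os
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"}

def allow_portal_rule(account_id):
    """Create a DNS allow rule for the (hypothetical) MSP self-service portal."""
    resp = requests.post(
        f"{API}/accounts/{account_id}/gateway/rules",
        headers=HEADERS,
        json={
            "name": "always-allow-msp-portal",
            "action": "allow",
            "filters": ["dns"],
            "traffic": 'any(dns.domains[*] == "portal.example-msp.com")',
            "enabled": True,
        },
        timeout=30,
    )
    resp.raise_for_status()

# Without a parent account: one call per child account (potentially millions).
for child in ["child-account-1", "child-account-2"]:  # hypothetical IDs
    allow_portal_rule(child)

# With a parent account: a single call, evaluated before every child's own policies.
allow_portal_rule(os.environ["CF_PARENT_ACCOUNT_ID"])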

This extends beyond policies as well: each child account can create its own custom block page, upload its own certificates to display these block pages, and set up its own DNS endpoints (IPv4, IPv6, DoH, and DoT) via Gateway locations. And because these are exactly the same as non-MSP Gateway accounts, the default limits on the number of policies, locations, or lists per parent or child account are no lower than for any other account.

Managed Service Provider integrations

To help bring this to life, below are real-world examples of how Cloudflare customers are using this new managed service provider feature to help protect their organizations.

US federal government

The US federal government requires many of the same services to support a protective DNS service for its 100+ civilian agencies, and it often outsources much of its IT and security operations to service providers like Accenture Federal Services (AFS).

In 2022, Cloudflare and AFS were selected by the Cybersecurity and Infrastructure Security Agency (CISA), part of the Department of Homeland Security (DHS), to develop a joint solution to help the federal government defend itself against cyberattacks. The solution consists of Cloudflare’s protective DNS resolver, which will filter DNS queries from offices and locations of the federal government and stream events directly to Accenture’s platform to provide unified administration and log storage.

Accenture Federal Services is providing a central interface to each department that allows them to adjust their DNS filtering policies. This interface works with Cloudflare’s Tenant platform and Gateway client APIs to provide a seamless customer experience for government employees managing their security policies using our new parent-child configurations. CISA, as the parent account, can set its own global policies, while agencies, as child accounts, can bypass select global policies and set their own default block pages.

In conjunction with our parent-child structure, we also made a few improvements to our DNS location matching and filtering defaults. Currently, all Gateway accounts can purchase dedicated IPv4 resolver addresses, which are great for situations where a customer doesn’t have a static source IP address or wants their own IPv4 address to host the solution.

CISA wanted not only a dedicated IPv4 address, but also the ability to assign that same address from their parent account to their child accounts. This gives them their own default IPv4 addresses for all agencies, easing the burden of onboarding. Next, they wanted the ability to fail closed, meaning that if a DNS query did not match any location (which must have a source IPv4 address/network configured), it would be dropped. This allows CISA to ensure that only configured IPv4 networks have access to their protective services. Lastly, we didn’t have to address this for the IPv6, DoH, and DoT DNS endpoints, as those are custom to each and every DNS location created.

Malwarebytes

Malwarebytes, a global leader in real-time cyber protection, recently integrated with Cloudflare to provide a DNS filtering module within their Nebula platform. The Nebula platform is a cloud-hosted security operations solution that manages control of any malware or ransomware incident—from alert to fix. This new module allows Malwarebytes customers to filter on content categories and add policy rules for groups of devices. A key need was the ability to easily integrate with their current device client, provide individual account management, and provide room for future expansion across additional Zero Trust services like Cloudflare Browser Isolation.

Cloudflare was able to provide a comprehensive solution that was easily integrated into the Malwarebytes platform. This included using DNS over HTTPS (DoH) to segment users across unique locations and adding a unique token per device to properly track the device ID and apply the correct DNS policies. Lastly, the integration was completed using the Cloudflare Tenant API, which allowed it to fit seamlessly into Malwarebytes’ current workflow and platform. This combination of our Zero Trust services and Tenant platform let Malwarebytes quickly go to market for new segments within their business.

“It’s challenging for organizations today to manage access to malicious sites and keep their end users safe and productive. Malwarebytes’ DNS Filtering module extends our cloud-based security platform to web protection. After evaluating other Zero Trust providers it was clear to us that Cloudflare could offer the comprehensive solution IT and security teams need while providing lightning fast performance at the same time. Now, IT and security teams can block whole categories of sites, take advantage of an extensive database of pre-defined scores on known, suspicious web domains, protect core web-based applications and manage specific site restrictions, removing the headache from overseeing site access.” – Mark Strassman, Chief Product Officer, Malwarebytes

Large global ISP

We’ve recently been working with a large global ISP to support DNS filtering as part of a larger security solution offered to families, covering over one million accounts in just the first year! The ISP leverages our Tenant and Gateway APIs to seamlessly integrate with their current platform and user experience with minimal engineering effort. We look forward to sharing more detail about this implementation in the coming months.

What’s next

As the previous stories highlight, MSPs play a key role in securing today’s diverse ecosystem of organizations, of all sizes and maturities. Companies of all sizes find themselves squaring off against the same complex threat landscape and are challenged to maintain a proper security posture and manage risk with constrained resources and limited security tooling. MSPs provide the additional resources, expertise and advanced security tooling that can help reduce the risk profile for these companies. Cloudflare is committed to making it easier for MSPs to be effective in delivering Zero Trust solutions to their customers.

Given the importance of MSPs for our customers and the continued growth of our partner network, we plan to launch quite a few features in 2023 and beyond that better support our MSP partners. First, a key item on our roadmap is the development of a refreshed tenant management dashboard for improved account and user management. Second, we want to extend our multi-tenant configurations across our entire Zero Trust solution set to make it easier for MSPs to implement secure hybrid work solutions at scale.

Lastly, to better support hierarchical access, we plan to expand the user roles and access model currently available to MSP partners to allow their teams to more easily support and manage their various accounts. Cloudflare has always prided itself on its ease of use, and our goal is to make Cloudflare the Zero Trust platform of choice for service and security providers globally.

Throughout CIO week, we’ve touched on how our partners are helping modernize the security posture for their customers to align with a world transformed by hybrid work and hybrid multi-cloud infrastructures. Ultimately, the power of Cloudflare Zero Trust comes from its existence as a composable, unified platform that draws strength from its combination of products, features, and our partner network.

  • If you’d like to learn more about becoming an MSP partner, you can read more here: https://www.cloudflare.com/partners/services
  • If you’d like to learn more about improving your security with DNS Filtering and Zero Trust, or would like to get started today, test the platform yourself with 50 free seats by signing up here.

Cloud CNI privately connects your clouds to Cloudflare

Post Syndicated from David Tuber original https://blog.cloudflare.com/cloud-cni/

This post is also available in 简体中文, 日本語 and Español.

For CIOs, networking is a hard process that is often made harder. Corporate networks have so many things that need to be connected and each one of them needs to be connected differently: user devices need managed connectivity through a Secure Web Gateway, offices need to be connected using the public Internet or dedicated connectivity, data centers need to be managed with their own private or public connectivity, and then you have to manage cloud connectivity on top of it all! It can be exasperating to manage connectivity for all these different scenarios and all their privacy and compliance requirements when all you want to do is enable your users to access their resources privately, securely, and in a non-intrusive manner.

Cloudflare helps simplify your connectivity story with Cloudflare One. Today, we’re excited to announce that we support direct cloud interconnection with our Cloudflare Network Interconnect, allowing Cloudflare to be your one-stop shop for all your interconnection needs.

Customers using IBM Cloud, Google Cloud, Azure, Oracle Cloud Infrastructure, and Amazon Web Services can now open direct connections from their private cloud instances into Cloudflare. In this blog, we’re going to talk about why direct cloud interconnection is important, how Cloudflare makes it easy, and how Cloudflare integrates direct cloud connection with our existing Cloudflare One products to bring new levels of security to your corporate networks built on top of Cloudflare.

Privacy in a public cloud

Public cloud compute providers are built on the idea that the compute power they provide can be used by anyone: your cloud VM and my cloud VM can run next to each other on the same machine and neither of us will know. The same is true for bits on the wire going in and out of these clouds: your bits and my bits may flow on the same wire, interleaved with each other, and neither of us will know that it’s happening.

The abstraction and relinquishment of ownership is comforting in one way but can be terrifying in another: neither of us need to run a physical machine and buy our own connectivity, but we have no guarantees about how or where our data and compute lives except that it lives in a datacenter with millions of other users.

For many enterprises, this isn’t acceptable: enterprises need compute that can only be accessed by them. Maybe the compute in the cloud is storing payment data that can’t be publicly accessible, and must be accessed through a private connection. Maybe the cloud customer has compliance requirements due to government restrictions that require the cloud not be accessible to the public Internet. Maybe the customer simply doesn’t trust public clouds or the public Internet and wants to limit exposure as much as possible. Customers want a private cloud that only they can access: a virtual private cloud, or a VPC.

To help solve this problem and ensure that only compute owners can access cloud compute that needs to stay private, clouds developed private cloud interconnects: direct cables from clouds to their customers. You may know them by their product names: AWS calls theirs DirectConnect, Azure calls theirs ExpressRoute, Google Cloud calls theirs Cloud Interconnect, OCI calls theirs FastConnect, and IBM calls theirs Direct Link. By providing private cloud connectivity to the customer datacenter, clouds satisfy the chief pain points for their customers: providing compute in a private manner. With these private links, VPCs are only accessible from the corporate networks that they’re plugged into, providing air-gapped security while allowing customers to turn over operations and maintenance of the datacenters to the clouds.

Privacy on the public Internet

But while VPCs and direct cloud interconnection have solved the problem of infrastructure moving to the cloud, as corporate networks move out of on-premise deployments, the cloud brings a completely new challenge: how do I keep my private cloud connections if I’m getting rid of my corporate network that connects all my resources together?

Let’s take an example company that connects a data center, an office, and an IBM Cloud instance together. Today, this company may have remote users that connect to applications hosted in either the datacenter, the office, or the cloud instance. Users in the office may connect to applications in the cloud, and all of it today is managed by the company. To do this, they may employ VPNs that tunnel the remote users into the data center or office before accessing the necessary applications. The office and data center are often connected through MPLS lines that are leased from connectivity providers. And then there’s the private IBM Cloud instance that is connected via IBM Direct Link. That’s three different connectivity providers for CIOs to manage, and we haven’t even started talking about access policies for the internal applications, firewalls for the cross-building network, and implementing MPLS routing on top of the provider underlay.

Cloudflare One helps simplify this by allowing companies to insert Cloudflare as the network for all the different connectivity options. Instead of having to run connections between buildings and clouds, all you need to do is manage your connections to Cloudflare.

WARP manages connectivity for remote users, Cloudflare Network Interconnect provides the private connectivity from data centers and offices to Cloudflare, and all of that can be managed with Access policies for policing applications and Magic WAN to provide the routing that gets your users where they need to go. When we released Cloudflare One, we were able to simplify the connectivity story to look like this:

[Diagram: the simplified connectivity story with Cloudflare One]

Before, users with private clouds had to either expose their cloud instances to the public Internet, or accept suboptimal routing by keeping their private cloud instances connected to their data centers instead of connecting directly to Cloudflare. This meant that these customers had to maintain their private connections directly to their data centers, which added toil to a solution that is supposed to be easier:

[Diagram: private cloud connectivity routed through the customer data center before reaching Cloudflare]

Now that CNI supports cloud environments, this company can open a private cloud link directly into Cloudflare instead of into their data center. This allows the company to use Cloudflare as a true intermediary between all of their resources, and they can rely on Cloudflare to manage firewalls, access policies, and routing for all of their resources, trimming the number of vendors they need to manage for routing down to one: just Cloudflare!

Once everything is directly connected to Cloudflare, this company can manage their cross-resource routing and firewalls through Magic WAN, they can set their user policies directly in Access, and they can set egress policies out to the public Internet via Gateway, enforced at any one of Cloudflare’s 250+ data centers. All the offices and clouds talk to each other on a hermetically sealed network with no public access or publicly shared peering links, and most importantly, all of these security and privacy efforts are completely transparent to the user.

So let’s talk about how we can get your cloud connected to us.

Quick cloud connectivity

The most important thing about cloud connectivity is how easy it should be: you shouldn’t have to spend lots of time waiting for cross-connects to come up, getting LOAs, monitoring light levels, and doing all the things that you would normally do when provisioning connectivity. Getting connected from your cloud provider should be cloud-native: you should just be able to provision cloud connectivity directly from your existing portals and follow the existing steps laid out for direct cloud connection.

That’s why our new cloud support makes it even easier to connect with Cloudflare. We now support direct cloud connectivity with IBM, AWS, Azure, Google Cloud, and OCI so that you can provision connections directly from your cloud provider into Cloudflare like you would to a datacenter. Moving private connections to Cloudflare means you don’t have to maintain your own infrastructure anymore: Cloudflare becomes your infrastructure, so you don’t have to worry about ordering cross-connects into your devices, getting LOAs, or checking light levels. To show you how easy this can be, let’s walk through an example using Google Cloud.

The first step to provisioning connectivity in any cloud is to request a connection. In Google Cloud, you can do this by selecting “Private Service Connection” in the VPC network details:

[Screenshot: selecting “Private Service Connection” in the Google Cloud VPC network details]

That will allow you to select a partner connection or a direct connection. In Cloudflare’s case, you should select a partner connection. Follow the instructions to select a connecting region and datacenter site, and you’ll get what’s called a connection ID, which is used by Google Cloud and Cloudflare to identify the private connection with your VPC:

[Screenshot: the Google Cloud partner connection details, including the connection ID]

You’ll notice in this screenshot that it says you need to configure the connection on the partner side. In this case, you can take that connection ID and use it to automatically provision a virtual connection on top of an already existing link. The provisioning process consists of five steps (a toy sketch of the first two follows the list):

  1. Assigning unique VLANs to your connection to ensure a private connection
  2. Assigning unique IP addresses for a BGP point-to-point connection
  3. Provisioning a BGP connection on the Cloudflare side
  4. Passing this information back to Google Cloud and creating the connection
  5. Accepting the connection and finishing BGP provisioning on your VPC
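
To make the first two steps concrete (as referenced above), here is a purely illustrative Python sketch that carves a /31 point-to-point block and assigns a VLAN for a new connection. The address pool and VLAN range are made up for the example and are not Cloudflare’s actual allocations.

# Toy sketch of steps 1 and 2: a unique VLAN plus a /31 for the BGP session.
import ipaddress
import itertools

P2P_POOL = ipaddress.ip_network("192.0.2.0/24")  # example pool (RFC 5737 range)
p2p_blocks = P2P_POOL.subnets(new_prefix=31)     # generator of /31 blocks
vlan_ids = itertools.count(100)                  # hypothetical VLAN range

def provision_connection(pairing_key):
    """Assign a VLAN and point-to-point addresses for one new interconnect."""
    block = next(p2p_blocks)
    return {
        "pairing_key": pairing_key,           # the connection ID from Google Cloud
        "vlan": next(vlan_ids),               # unique VLAN keeps the circuit isolated
        "cloudflare_side": f"{block[0]}/31",  # one end of the BGP point-to-point
        "customer_side": f"{block[1]}/31",    # the other end, handed back to the VPC
    }

print(provision_connection("example-pairing-key"))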

All of these steps are performed automatically in seconds so that by the time you get your IP address and VLANs, Cloudflare has already provisioned our end of the connection. When you accept and configure the connection, everything will be ready to go, and it’s easy to start privately routing your traffic through Cloudflare.

Now that you’ve finished setting up your connection, let’s talk about how private connectivity to your cloud instances can integrate with all of your Cloudflare One products.

Private routing with Magic WAN

Magic WAN integrates extremely well with Cloud CNI, allowing customers to connect their VPCs directly to the private network built with Magic WAN. Since the routing is private, you can even advertise your private address spaces reserved for internal routing, such as your 10.0.0.0/8 space.

Previously, your cloud VPC needed to be publicly addressable. But with Cloud CNI, we assign a point-to-point IP range, you can advertise your internal spaces back to Cloudflare, and Magic WAN will route traffic to your internal address spaces!

Secure authentication with Access

Many customers love Cloudflare Tunnel in combination with Access for its secure paths to authentication servers hosted in cloud providers. But what if your authentication server didn’t need to be publicly accessible at all? With Access + Cloud CNI, you can connect your authentication services to Cloudflare and Access will route all your authentication traffic through the private path back to your service without needing the public Internet.

Manage your cloud egress with Gateway

While you may want to protect your cloud services from ever being accessed by anyone not on your network, sometimes your cloud services need to talk out to the public Internet. Luckily, Gateway has you covered: with Cloud CNI you get a private path to Cloudflare, which will manage all of your egress policies, ensuring that you can carefully watch your cloud services’ outbound traffic from the same place you monitor all other traffic leaving your network.

Cloud CNI: safe, performant, easy

Cloudflare is committed to making zero trust and network security easy and unobtrusive. Cloud CNI is another step towards ensuring that your network is as easy to manage as everything else so that you can stop focusing on how to build your network, and start focusing on what goes on top of it.

If you’re interested in Cloud CNI, contact us today to get connected to a seamless and easy Zero Trust world.

China Express: Cloudflare partners to boost performance in China for corporate networks

Post Syndicated from Dafu Wang original https://blog.cloudflare.com/china-express/

Cloudflare has been helping global organizations offer their users a consistent experience all over the world. This includes mainland China, a market our global customers cannot ignore but that continues to be challenging for infrastructure teams trying to ensure performance, security and reliability for their applications and users both in and outside mainland China. We are excited to announce China Express — a new suite of capabilities and best practices in partnership with our partners China Mobile International (CMI) and CBC Tech — that help address some of these performance challenges and ensure a consistent experience for customers and employees everywhere.

Cloudflare has been providing Application Services to users in mainland China since 2015, improving performance and security using in-country data centers and caching. Today, we have a presence in 30 cities in mainland China thanks to our strategic partnership with JD Cloud. While this delivers significant performance improvements, some requests still need to go back to the origin servers which may live outside mainland China. With limited international Internet gateways and restrictive cross-border regulations, international traffic has a very high latency and packet drop rate in and out of China. This results in inconsistent cached content within China and a poor experience for users trying to access dynamic content that requires frequent access to the origin.

Last month, we expanded our Cloudflare One, Zero Trust network-as-a-service platform to users and organizations in China with additional connectivity options. This has received tremendous interest from customers, so we’re looking at what else we could do to further improve the user experience for customers with employees or offices in China.

What is China Express?

China Express is a suite of connectivity and performance offerings designed to simplify connectivity and improve performance for users in China. To understand these better, let’s take the example of Acme Corp, a global company with offices in Shanghai and Beijing and origin data centers in London and Ashburn, and see how we can help their infrastructure teams better serve employees and users in mainland China.

China Express Premium DIA

Premium Dedicated Internet Access (DIA) is an optimized, high-quality public Internet circuit for cross-border connectivity provided by our local partners CMI and CBC Tech. With this service, traffic from mainland China will arrive at our partner data center in Hong Kong using a fixed NAT IP. Customers do not need to worry about compliance issues because their traffic still goes through the public Internet with all regulatory controls in place.

Acme Corp can use Premium DIA to improve origin performance for their Cloudflare service in mainland China. Requests to the origin data centers in Ashburn and London would traverse the Premium DIA connection, which offers more bandwidth and lower packet loss resulting in more than a 60% improvement in performance.

Acme employees in mainland China would also see an improvement while accessing SaaS applications such as Microsoft 365 over the Internet when these apps are delivered from outside China. They would also notice an improvement in Internet speed in general.

While Premium DIA offers Acme performance improvements over the public Internet, they may want to keep some mission-critical application traffic on a private network for security reasons. Private Link offers a dedicated private tunnel between Acme’s locations in China and their data centers outside of China. Private Link can also be used to establish dedicated private connectivity to SaaS data centers like Salesforce.

Private Link is a highly regulated area in China and depending on your use case, there might be additional requirements from our partners to implement it.

China Express Travel SIM

Acme Corp might have employees who visit China on a regular basis and need access to their corporate apps on their mobile devices, including phones and tablets. Their IT teams not only have to procure and provision mobile Internet connectivity for these users, but also enforce consistent Zero Trust security controls.

Cloudflare is pleased to announce that the Travel SIM provided by Cloudflare’s partner CMI automatically provides network connectivity and can be used together with the Cloudflare WARP Client on mobile devices to provide Cloudflare’s suite of Zero Trust security services. Using the same Zero Trust profiles assigned to the user, the WARP client will automatically use the available 4G LTE network and establish a WireGuard tunnel to the closest Cloudflare data center outside of China. The data connection can also be shared with other devices using the hotspot function on the mobile device.

With the Travel SIM, users can enjoy the same Cloudflare global service as the rest of the world when traveling to China. And IT and security teams no longer need to worry about purchasing or deploying additional Zero Trust seats and device clients to ensure employees’ Internet connectivity and security policy enforcement.

China Express — Extending Cloudflare One to China

As mentioned in a previous blog post, we are extending Cloudflare One, our zero trust network-as-a-service product, to mainland China through our strategic partnerships. Acme Corp will now be able to ensure their employees both inside and outside China will be able to use consistent zero trust security policy using the Cloudflare WARP device client. In addition, they will be able to connect their physical offices in China to their global private WAN using Magic WAN with consistent security policies applied globally.

Get started today

Cloudflare is excited to work with  our partners to help our customers solve connectivity and performance challenges in mainland China. All the above solutions are easy and fast to deploy and are available now. If you’d like to get started, contact us here or reach out to your account team.

Cloudflare Application Services for private networks: do more with the tools you already love

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/app-services-private-networks/

Cloudflare’s Application Services have been hard at work keeping Internet-facing websites and applications secure, fast, and reliable for over a decade. Cloudflare One provides similar security, performance, and reliability benefits for your entire corporate network. And today, we’re excited to announce new integrations that make it possible to use these services together in new ways. These integrations unlock operational and cost efficiencies for IT teams by allowing them to do more with fewer tools, and enable new use cases that are impossible without Cloudflare’s  “every service everywhere” architecture.

“Just as Canva simplifies graphic design, Cloudflare simplifies performance and security. Thanks to Cloudflare, we can focus on growing our product and expanding into new markets with confidence, knowing that our platform is fast, reliable, and secure.” – Jim Tyrrell, Head of Infrastructure, Canva

Every service everywhere, now for every network

One of Cloudflare’s fundamental architectural principles has always been to treat our network like one homogeneous supercomputer. Rather than deploying services in specific locations – for example, using some of our points of presence to enforce WAF policies, others for Zero Trust controls, and others for traffic optimization – every server runs a virtually identical stack of all of our software services. This way, a packet can land on any server and flow through a full set of security filters in a single pass, without having to incur the performance tax of hair pinning to multiple locations.

The software that runs on each of these servers is Linux-based and takes advantage of core concepts of the Linux kernel in order to create “wiring” between services. This deep dive on our DDoS mitigation stack explains just one example of how we use these tools to route packets through multiple layers of protection without sacrificing performance. This approach also enables us to easily add new paths for packets and requests, enabling deeper integrations and new possibilities for traffic routed to Cloudflare’s network from any source or to any destination. Let’s walk through some of these new use cases we’re developing for private networks.

Web Application Firewall for private apps with any off-ramp

Today, millions of customers trust Cloudflare’s WAF to protect their applications that are exposed to the public Internet – either fully public apps or private apps connected via Cloudflare Tunnel and surfaced with a public hostname. We’ve increasingly heard from customers that are excited about putting our WAF controls in front of any application with any traffic on or off-ramp, for a variety of reasons.

Some customers want to do this in order to enforce stronger Zero Trust principles: filtering all traffic, even requests sourced from within a “trusted” private network, as though it came from the open Internet. Other customers want to connect an entire datacenter or cloud property with a network-layer on-ramp like a GRE or IPsec tunnel or CNI. And yet others want to adopt the Cloudflare WAF for their private apps without specifying public hostnames.

By fully integrating Cloudflare’s WAF with the Cloudflare One dataplane, we’re excited to address all of these use cases: enabling customers to create WAF policies in-path for fully private traffic flows by building their private network on Cloudflare.

API security for internal APIs

After web applications, one of the next attack surfaces our customers turn to addressing is their public-facing APIs. Cloudflare offers services to protect public APIs from DDoS, abuse, sensitive data loss, and many other attack vectors. But security concerns don’t stop with public-facing APIs: as engineering organizations continue to embrace distributed architecture, multicloud and microsegmentation, CIOs and teams that provide internal services are also interested in securing their private APIs.

With Cloudflare One, customers can connect and route their entire private network through our global fabric, enabling private API traffic to flow through the same stack of security controls we’ve previously made available for public APIs. Networking and security teams will be able to apply the principles of zero trust to their private API traffic flow to help improve their overall security posture.

Global and local traffic management for private apps

So far, we’ve focused on the security controls customers have available to filter malicious traffic to their applications and APIs. But Cloudflare’s services don’t stop with security: we make anything connected to the Internet faster and more reliable. One of the key tools enabling this is our suite of load balancing services, which include application-layer controls for any origin server behind Cloudflare’s reverse proxy and network-layer controls for any IP traffic.

Customers have asked for even more flexibility and new ways to use our traffic management tools: the ability to create application-layer load balancing policies for traffic connected with any off-ramp, such as Cloudflare Tunnel for applications, GRE or IPsec tunnels or CNI for IP networks. They also are excited about the potential to extend load balancing policies into their local networks, managing traffic across servers within a datacenter or cloud property in addition to across multiple “global” locations. These capabilities, which will improve resiliency for any application – both by enforcing more granular controls for private apps and managing local traffic for any app – are coming soon; stay tuned for more updates.

Full-stack performance optimization for private apps

Cloudflare has always obsessed over the speed of every request routed through our network. We’re constantly developing new ways to deliver content closer to users, automatically optimize any kind of traffic, and route packets over the best possible paths, avoiding congestion and other issues on the Internet. Argo Smart Routing speeds up any reverse proxied traffic with application-layer optimizations and IP packets with intelligent decisions at the network layer, using Cloudflare’s extensive interconnectivity and global private backbone to make sure that traffic is delivered as quickly and efficiently as possible.

As we more deeply integrate Cloudflare’s private networking dataplane and our application services to realize the security and reliability benefits described above, customers will automatically be able to see the benefits of Argo Smart Routing at all layers of the OSI stack for any traffic connected to Cloudflare.

Private DNS for one-stop management of internal network resources

Cloudflare’s industry-leading authoritative DNS protects millions of public Internet domains. These can be queried by anyone on the public Internet, which is great for most organizations, but some want to be able to restrict this access. With our private DNS, customers will be able to resolve queries to private domains only when connected to the Zero Trust private network they define within Cloudflare. Because we’re building this using our robust authoritative DNS and Gateway filtering services, you can expect all the other goodness already possible with Cloudflare to also apply to private DNS: support for all common DNS record types, the ability to resolve DNS queries to virtual networks with overlapping IPs, and all the other Zero Trust filtering controls offered by Gateway DNS filtering. Consolidating management of external and internal DNS in one place, with the fastest response time, unparalleled redundancy, and advanced security already built in, will greatly simplify customers’ infrastructure and save time and operational overhead.

And more new use cases every day

We love hearing about new ways you’re using Cloudflare to make any user, application, or network faster, more secure, and more reliable. Get on the list for beta access to the new integrations described today and reach out to us in the comments if you’ve got more ideas for new problems you’d like to solve using Cloudflare.

Give us a ping. (Cloudflare) One ping only

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/the-most-exciting-ping-release/

Ping was born in 1983 when the Internet needed a simple, effective way to measure reachability and distance. In short, ping (and subsequent utilities like traceroute and MTR)  provides users with a quick way to validate whether one machine can communicate with another. Fast-forward to today and these network utility tools have become ubiquitous. Not only are they now the de facto standard for troubleshooting connectivity and network performance issues, but they also improve our overall quality of life by acting as a common suite of tools almost all Internet users are comfortable employing in their day-to-day roles and responsibilities.

Making network utility tools work as expected is very important to us, especially now as more and more customers are building their private networks on Cloudflare. Over 10,000 teams now run a private network on Cloudflare. Some of these teams are among the world’s largest enterprises, some are small crews, and yet others are hobbyists, but they all want to know – can I reach that?

That’s why today we’re excited to incorporate support for these utilities into our already expansive troubleshooting toolkit for Cloudflare Zero Trust. To get started, sign up to receive beta access and start using the familiar debugging tools that we all know and love like ping, traceroute, and MTR to test connectivity to private network destinations running behind Tunnel.

Cloudflare Zero Trust

With Cloudflare Zero Trust, we’ve made it ridiculously easy to build your private network on Cloudflare. In fact, it takes just three steps to get started. First, download Cloudflare’s device client, WARP, to connect your users to Cloudflare. Then, create identity and device aware policies to determine who can reach what within your network. And finally, connect your network to Cloudflare with Tunnel directly from the Zero Trust dashboard.

We’ve designed Cloudflare Zero Trust to act as a single pane of glass for your organization. This means that after you’ve deployed any part of our Zero Trust solution, whether that be ZTNA or SWG, you are clicks, not months, away from deploying Browser Isolation, Data Loss Prevention, Cloud Access Security Broker, and Email Security. This is a stark contrast from other solutions on the market which may require distinct implementations or have limited interoperability across their portfolio of services.

It’s that simple, but if you’re looking for more prescriptive guidance watch our demo below to get started:

To get started, sign-up for early access to the closed beta. If you’re interested in learning more about how it works and what else we will be launching in the future, keep scrolling.

So, how do these network utilities actually work?

Ping, traceroute and MTR are all powered by the same underlying protocol, ICMP. Every ICMP message has 8-bit type and code fields, which define the purpose and semantics of the message. While ICMP has many types of messages, the network diagnostic tools mentioned above make specific use of the echo request and echo reply message types.

Every ICMP message has a type, code and checksum. As you may have guessed from the name, an echo reply is generated in response to the receipt of an echo request, and critically, the request and reply have matching identifiers and sequence numbers. Make a mental note of this fact as it will be useful context later in this blog post.
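
To make that layout concrete, here is a small Python sketch that packs an echo request by hand: an 8-bit type (8 for echo request), an 8-bit code (0), a 16-bit checksum, and then the 16-bit identifier and sequence number, followed by an arbitrary payload.

# Build an ICMP echo request: type, code, checksum, identifier, sequence, payload.
import struct

def internet_checksum(data):
    """RFC 1071 ones'-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(identifier, sequence, payload=b"ping"):
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)  # checksum 0 for now
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

print(echo_request(identifier=23, sequence=1).hex())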

A crash course in ping, traceroute, and MTR

As you may expect, each one of these utilities comes with its own unique nuances, but don’t worry. We’re going to provide a quick refresher on each before getting into the nitty-gritty details.

Ping

Ping works by sending a sequence of echo request packets to the destination. Each router hop between the sender and destination decrements the TTL field of the IP packet containing the ICMP message and forwards the packet to the next hop. If a hop decrements the TTL to 0 before reaching the destination, or doesn’t have a next hop to forward to, it will return an ICMP error message – “TTL exceeded” or “Destination host unreachable” respectively – to the sender. A destination which speaks ICMP will receive these echo request packets and return matching echo replies to the sender. The same process of traversing routers and TTL decrementing takes place on the return trip. On the sender’s machine, ping reports the final TTL of these replies, as well as the roundtrip latency of sending and receiving the ICMP messages to the destination. From this information a user can determine the distance between themselves and the origin server, both in terms of number of network hops and time.
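
The following toy simulation, with made-up hop names, shows the TTL behavior described above; it is also the mechanism that traceroute exploits in the next section.

# Toy simulation: each hop decrements the TTL; the hop that drops it to 0
# before the destination answers with a "TTL exceeded" error instead.
def forward(path, ttl):
    for hop in path:
        ttl -= 1
        if ttl == 0 and hop != path[-1]:
            return f"ICMP TTL exceeded from {hop}"
    return f"ICMP echo reply from {path[-1]}"

path = ["router-a", "router-b", "origin"]
for ttl in (1, 2, 3):
    print(f"TTL {ttl} -> {forward(path, ttl)}")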

Traceroute and MTR

As we’ve just outlined, while helpful, the output provided by ping is relatively simple. It does provide some useful information, but we will generally want to follow up this request with a traceroute to learn more about the specific path to a given destination. Similar to ping, traceroutes start by sending an ICMP echo request. However, it handles TTL a bit differently. You can learn more about why that is the case in our Learning Center, but the important takeaway is that this is how traceroutes are able to map and capture the IP address of each unique hop on the network path. This output makes traceroute an incredibly powerful tool for understanding not only whether a machine can connect to another, but also how it will get there!

And finally, we’ll cover MTR. We’ve grouped traceroute and MTR together for now as they operate in an extremely similar fashion. In short, the output of an MTR will provide everything traceroute can, but with some additional, aggregate statistics for each unique hop. MTR will also run until explicitly stopped, allowing users to receive a statistical average for each hop on the path.

Checking connectivity to the origin

Now that we’ve had a quick refresher, let’s say I cannot connect to my private application server. With ICMP support enabled on my Zero Trust account, I could run a traceroute to see if the server is online.

Here is a simple example from one of our lab environments:

[Diagram: lab environment topology]

Then, if my server is online, traceroute should output something like the following:

traceroute -I 172.16.10.120
traceroute to 172.16.10.120 (172.16.10.120), 64 hops max, 72 byte packets
 1  172.68.101.57 (172.68.101.57)  20.782 ms  12.070 ms  15.888 ms
 2  172.16.10.100 (172.16.10.100)  31.508 ms  30.657 ms  29.478 ms
 3  172.16.10.120 (172.16.10.120)  40.158 ms  55.719 ms  27.603 ms

Let’s examine this a bit deeper. Here, the first hop is the Cloudflare data center where my Cloudflare WARP device is connected via our Anycast network. Keep in mind this IP may look different depending on your location. The second hop will be the server running cloudflared. And finally, the last hop is my application server.

Conversely, if I could not connect to my app server I would expect traceroute to output the following:

traceroute -I 172.16.10.120
traceroute to 172.16.10.120 (172.16.10.120), 64 hops max, 72 byte packets
 1  172.68.101.57 (172.68.101.57)  20.782 ms  12.070 ms  15.888 ms
 2  * * *
 3  * * *

In the example above, this means the ICMP echo requests are not reaching cloudflared. To troubleshoot, first I will make sure cloudflared is running by checking the status of the Tunnel in the Zero Trust dashboard. Then I will check if the Tunnel has a route to the destination IP. This can be found in the Routes column of the Tunnels table in the dashboard. If it does not, I will add a route to my Tunnel to see if this changes the output of my traceroute.

Once I have confirmed that cloudflared is running and the Tunnel has a route to my app server, traceroute will show the following:

traceroute -I 172.16.10.120
traceroute to 172.16.10.120 (172.16.10.120), 64 hops max, 72 byte packets
 1  172.68.101.57 (172.68.101.57)  20.782 ms  12.070 ms  15.888 ms
 2  172.16.10.100 (172.16.10.100)  31.508 ms  30.657 ms  29.478 ms
 3  * * *

However, it looks like we still can’t quite reach the application server. This means the ICMP echo requests reached cloudflared, but my application server isn’t returning echo replies. Now, I can narrow down the problem to my application server, or communication between cloudflared and the app server. Perhaps the machine needs to be rebooted or there is a firewall rule in place, but either way we have what we need to start troubleshooting the last hop. With ICMP support, we now have many network tools at our disposal to troubleshoot connectivity end-to-end.

Note that the route from cloudflared to the origin is always shown as a single hop, even if there are one or more routers between the two. This is because cloudflared creates its own echo request to the origin instead of forwarding the original packets. In the next section, we will explain the technical reason behind this.

What makes ICMP traffic unique?

A few quarters ago, Cloudflare Zero Trust extended support for UDP end-to-end as well. Since UDP and ICMP are both datagram-based protocols, within the Cloudflare network we can reuse the same infrastructure to proxy both UDP and ICMP traffic. To do this, we send the individual datagrams for either protocol over a QUIC connection using QUIC datagrams between Cloudflare and the cloudflared instances within your network.

With UDP, we establish and maintain a session per client/destination pair, such that we are able to send only the UDP payload and a session identifier in datagrams. In this way, we don’t need to send the IP and port to which the UDP payload should be forwarded with every single packet.

However, with ICMP we decided that establishing a session like this is far too much overhead, given that typically only a handful of ICMP packets are exchanged between endpoints. Instead, we send the entire IP packet (with the ICMP payload inside) as a single datagram.
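To make the difference concrete, here is a minimal Go sketch of the two framings described above. The layout is purely illustrative (the session ID width and helper names are assumptions, not Cloudflare’s actual wire format): a UDP payload travels with just a small session identifier, while an ICMP message travels as the entire IP packet.

package main

import (
    "encoding/binary"
    "fmt"
)

// encodeUDPDatagram frames a UDP payload for an established session.
// Because a session was negotiated up front, only a short session ID
// travels with each payload, not the destination IP and port.
func encodeUDPDatagram(sessionID uint16, payload []byte) []byte {
    buf := make([]byte, 2+len(payload))
    binary.BigEndian.PutUint16(buf[:2], sessionID)
    copy(buf[2:], payload)
    return buf
}

// encodeICMPDatagram frames an ICMP packet. No session exists, so the
// entire IP packet (IP header plus ICMP message) is sent as one datagram;
// the receiver reads the destination straight from the IP header.
func encodeICMPDatagram(ipPacket []byte) []byte {
    out := make([]byte, len(ipPacket))
    copy(out, ipPacket)
    return out
}

func main() {
    udp := encodeUDPDatagram(42, []byte("dns query bytes"))
    icmp := encodeICMPDatagram([]byte{ /* raw IPv4 packet with ICMP payload */ })
    fmt.Println(len(udp), len(icmp))
}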

What this means is that cloudflared can read the destination of the ICMP packet from the IP header it receives. While this conveys the eventual destination of the packet to cloudflared, there is still work to be done to actually send the packet. Cloudflared cannot simply send out the IP packet it receives without modification, because the source IP in the packet is still the original client IP, and not a source that is routable to the cloudflared instance itself.

To receive ICMP echo replies in response to the ICMP packets it forwards, cloudflared must apply a source NAT to the packet. This means that when cloudflared receives an IP packet, it must complete the following:

  • Read the destination IP address of the packet
  • Strip off the IP header to get the ICMP payload
  • Send the ICMP payload to the destination, meaning the source address of the ICMP packet will be the IP of a network interface to which cloudflared can bind
  • When cloudflared receives replies on this address, it must rewrite the destination address of the received packet (destination because the direction of the packet is reversed) to the original client source address

Network Address Translation like this is done all the time for TCP and UDP, but is much easier in those cases because ports can be used to disambiguate cases where the source and destination IPs are the same. Since ICMP packets do not have ports associated with them, we needed to find a way to map packets received from the upstream back to the original source which sent cloudflared those packets.

For example, imagine that two clients 192.0.2.1 and 192.0.2.2 both send an ICMP echo request to a destination 10.0.0.8. As we previously outlined, cloudflared must rewrite the source IPs of these packets to a source address to which it can bind. In this scenario, when the echo replies come back, the IP headers will be identical: source=10.0.0.8 destination=<cloudflared’s IP>. So, how can cloudflared determine which packet needs to have its destination rewritten to 192.0.2.1 and which to 192.0.2.2?

To solve this problem, we use fields of the ICMP packet to track packet flows, in the same way that ports are used in TCP/UDP NAT. The field we’ll use for this purpose is the Echo ID. When an echo request is received, conformant ICMP endpoints will return an echo reply with the same identifier as was received in the request. This means we can send the packet from 192.0.2.1 with ID 23 and the one from 192.0.2.2 with ID 45, and when we receive replies with IDs 23 and 45, we know which one corresponds to each original source.
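Conceptually, the NAT table cloudflared needs looks something like the following Go sketch. The types and method names are illustrative rather than taken from cloudflared’s source; the point is the keying: replies are matched on (echo ID, replying address) and mapped back to the original client and echo ID.

package main

import (
    "fmt"
    "net/netip"
)

// flowKey identifies a proxied echo flow from the upstream's point of
// view: the echo ID chosen for the outbound request plus the destination.
type flowKey struct {
    echoID uint16
    dst    netip.Addr
}

// flowValue remembers where the reply must ultimately be delivered.
type flowValue struct {
    origEchoID uint16
    origSrc    netip.Addr
}

type natTable struct {
    flows  map[flowKey]flowValue
    nextID uint16
}

// translate records a new flow and returns the echo ID to use upstream.
func (t *natTable) translate(origEchoID uint16, origSrc, dst netip.Addr) uint16 {
    t.nextID++
    t.flows[flowKey{t.nextID, dst}] = flowValue{origEchoID, origSrc}
    return t.nextID
}

// reverse maps a reply (identified by its echo ID and the replying host)
// back to the original client address and echo ID.
func (t *natTable) reverse(echoID uint16, from netip.Addr) (flowValue, bool) {
    v, ok := t.flows[flowKey{echoID, from}]
    return v, ok
}

func main() {
    t := &natTable{flows: map[flowKey]flowValue{}}
    dst := netip.MustParseAddr("10.0.0.8")

    // Two clients ping the same destination, each using echo ID 1 locally.
    id1 := t.translate(1, netip.MustParseAddr("192.0.2.1"), dst)
    id2 := t.translate(1, netip.MustParseAddr("192.0.2.2"), dst)

    // A reply carrying id2 from 10.0.0.8 maps back to 192.0.2.2.
    v, _ := t.reverse(id2, dst)
    fmt.Println(id1, id2, v.origSrc, v.origEchoID)
}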

Of course, this strategy only works for ICMP echo requests, which make up a relatively small percentage of the available ICMP message types. However, for security reasons, and because these message types are sufficient to implement the ubiquitous ping and traceroute functionality we’re after, they are the only message types we currently support. We’ll talk through the security reasons for this choice in the next section.

How to proxy ICMP without elevated permissions

Generally, applications send ICMP packets through raw sockets. A raw socket gives the application control over the IP header, so it requires elevated privileges to open, whereas for TCP and UDP the operating system adds the IP header on send and removes it on receive. To adhere to security best practices, we don’t want to run cloudflared with additional privileges, so we needed a better solution. We found inspiration in the ping utility, which you’ll note can be run by any user without elevated permissions. So how does ping send ICMP echo requests and listen for echo replies as a normal user program? The answer is less satisfying: it depends on the platform. And since cloudflared supports all the following platforms, we needed to answer this question for each.

Linux

On Linux, ping opens a datagram socket for the ICMP protocol with the syscall socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP). This type of socket can only be opened if the group ID of the user running the program is in /proc/sys/net/ipv4/ping_group_range, but critically, the user does not need to be root. This socket is “special” in that it can only send ICMP echo requests and receive echo replies. Great! It also has a conceptual “port” associated with it, despite the fact that ICMP does not use ports. In this case, the identifier field of echo requests sent through this socket is rewritten to the “port” assigned to the socket. Reciprocally, echo replies received by the kernel which have the same identifier are sent to the socket which sent the request.
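For illustration, here is a minimal unprivileged ping in Go using the golang.org/x/net/icmp package, which uses this same datagram ICMP socket under the hood. This is a standalone sketch, not cloudflared code, and it assumes the running user’s group falls inside ping_group_range.

// Minimal unprivileged ping on Linux using an ICMP datagram socket.
package main

import (
    "log"
    "net"
    "os"
    "time"

    "golang.org/x/net/icmp"
    "golang.org/x/net/ipv4"
)

func main() {
    // "udp4" opens a SOCK_DGRAM ICMP socket: no root or CAP_NET_RAW needed.
    c, err := icmp.ListenPacket("udp4", "0.0.0.0")
    if err != nil {
        log.Fatal(err)
    }
    defer c.Close()

    // Build an echo request. The kernel rewrites the ID to the "port" it
    // assigned to this socket and demultiplexes replies by that ID.
    msg := icmp.Message{
        Type: ipv4.ICMPTypeEcho, Code: 0,
        Body: &icmp.Echo{ID: os.Getpid() & 0xffff, Seq: 1, Data: []byte("PING")},
    }
    wb, err := msg.Marshal(nil)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := c.WriteTo(wb, &net.UDPAddr{IP: net.ParseIP("172.16.10.120")}); err != nil {
        log.Fatal(err)
    }

    // Wait for the matching echo reply.
    c.SetReadDeadline(time.Now().Add(3 * time.Second))
    rb := make([]byte, 1500)
    n, peer, err := c.ReadFrom(rb)
    if err != nil {
        log.Fatal(err)
    }
    rm, err := icmp.ParseMessage(1, rb[:n]) // 1 = ICMP for IPv4
    if err != nil {
        log.Fatal(err)
    }
    if rm.Type == ipv4.ICMPTypeEchoReply {
        log.Printf("echo reply from %v", peer)
    }
}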

Therefore, on Linux, cloudflared is able to perform source NAT for ICMP packets simply by opening a unique socket per source IP address. This rewrites the identifier field and source address of the request. Replies are delivered to this same socket, meaning that cloudflared can easily rewrite the destination IP address (destination because the packets are flowing to the client) and echo identifier back to the original values received from the client.

Darwin

On Darwin (the UNIX-based core set of components which make up macOS), things are similar, in that we can open an unprivileged ICMP socket with the same syscall: socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP). However, there is an important difference. On Darwin, the kernel does not allocate a conceptual “port” for this socket, and thus, when sending ICMP echo requests, it does not rewrite the echo ID as it does on Linux. Further, and more importantly for our purposes, the kernel does not demultiplex ICMP echo replies to the socket which sent the corresponding request using the echo identifier. This means that on macOS we effectively need to perform the echo ID rewriting manually.

In practice, when cloudflared receives an echo request on macOS, it must choose an echo ID which is unique for the destination. Cloudflared then adds a key of (chosen echo ID, destination IP) to a mapping it maintains, with a value of (original echo ID, original source IP). Cloudflared rewrites the echo ID in the echo request packet to the one it chose and forwards it to the destination. When it receives a reply, it uses the source IP address and echo ID to look up the original client address and echo ID, and rewrites the echo ID and destination address in the reply packet before forwarding it back to the client.

Windows

Finally, we arrive at Windows, which conveniently provides the Win32 API IcmpSendEcho: it sends an echo request and returns the echo reply, a timeout, or an error. For ICMPv6 we just had to use Icmp6SendEcho. The APIs are in C, but cloudflared can call them through CGO without a problem. If you also need to call these APIs in a Go program, check out our wrapper for inspiration.

And there you have it! That’s how we built the most exciting ping release since 1983. Overall, we’re thrilled to announce this new feature and can’t wait to get your feedback on ways we can continue improving our implementation moving forward.

What’s next

Support for these ICMP-based utilities is just the beginning of how we’re thinking about improving our Zero Trust administrator experience. Our goal is to continue providing tools which make it easy to identify issues within the network that impact connectivity and performance.

Looking forward, we plan to add more dials and knobs for observability with announcements like Digital Experience Monitoring across our Zero Trust platform to help users proactively monitor and stay alert to changing network conditions. In the meantime, try applying Zero Trust controls to your private network for free by signing up today.

CIO Week 2023 recap

Post Syndicated from James Chang original https://blog.cloudflare.com/cio-week-2023-recap/


In our Welcome to CIO Week 2023 post, we talked about wanting to start the year by celebrating the work Chief Information Officers do to keep their organizations safe and productive.

Over the past week, you learned about announcements addressing all facets of your technology stack – including new services, betas, strategic partnerships, third party integrations, and more. This recap blog summarizes each announcement and labels what capability is generally available (GA), in beta, or on our roadmap.

We delivered on critical capabilities requested by our customers – such as even more comprehensive phishing protection and deeper integrations with the Microsoft ecosystem. Looking ahead, we also described our roadmap for emerging technology categories like Digital Experience Monitoring and our vision to make it exceedingly simple to route traffic from any source to any destination through Cloudflare’s network.

Everything we launched is designed to help CIOs accelerate their pursuit of digital transformation. In this blog, we organized our announcement summaries based on the three feelings we want CIOs to have when they consider partnering with Cloudflare:

  1. CIOs now have a simpler roadmap to Zero Trust and SASE: We announced new capabilities and tighter integrations that make it easier for organizations to adopt Zero Trust security best practices and move towards aspirational architectures like Secure Access Service Edge (SASE).
  2. CIOs have access to the right technology and channel partners: We announced integrations and programming to help organizations access the right expertise to modernize IT and security at their own pace with the technologies they already use.
  3. CIOs can streamline a multi-cloud strategy with ease: We announced new ways to connect, secure, and accelerate traffic across diverse cloud environments.

Thank you for following CIO Week, Cloudflare’s first of many Innovation Weeks in 2023. It can be hard to keep up with our pace of innovation sometimes, but we hope that reading this blog and registering for our recap webinar will help!

If you want to speak with us about how to modernize your IT and security and make life easier for your organization’s CIO, fill out the form here.

Simplifying your journey to Zero Trust and SASE

Securing access
These blog posts are focused on making it faster, easier, and safer to connect any user to any application with the granular controls and comprehensive visibility needed to achieve Zero Trust.

  • Beta: Introducing Digital Experience Monitoring – Cloudflare Digital Experience Monitoring will be an all-in-one dashboard that helps CIOs understand how critical applications and Internet services are performing across their entire corporate network. Sign up for beta access.
  • Beta: Weave your own global, private, virtual Zero Trust network on Cloudflare with WARP-to-WARP – With a single click, any device running Cloudflare’s device client, WARP, in your organization can reach any other device running WARP over a private network. Sign up for beta access.
  • GA: New ways to troubleshoot Cloudflare Access ‘blocked’ messages – Investigate ‘allow’ or ‘block’ decisions based on how a connection was made with the same level of ease that you can troubleshoot user identity within Cloudflare’s Zero Trust platform.
  • Beta: One-click data security for your internal and SaaS applications – Secure sensitive data by running application sessions in an isolated browser and control how users interact with sensitive data – now with just one click. Sign up for beta access.
  • GA: Announcing SCIM support for Cloudflare Access & Gateway – Cloudflare’s ZTNA (Access) and SWG (Gateway) services now support the System for Cross-domain Identity Management (SCIM) protocol, making it easier for administrators to manage identity records across systems.
  • GA: Cloudflare Zero Trust: The Most Exciting Ping Release Since 1983 – Cloudflare Zero Trust administrators can use familiar debugging tools that use the ICMP protocol (like Ping, Traceroute, and MTR) to test connectivity to private network destinations.

Threat defense
These blog posts are focused on helping organizations filter, inspect, and isolate traffic to protect users from phishing, ransomware, and other Internet threats.

  • GA: Email Link Isolation: your safety net for the latest phishing attacks – Email Link Isolation is your safety net for the suspicious links that end up in inboxes and that users may click. This added protection turns Cloudflare Area 1 into the most comprehensive email security solution when it comes to protecting against phishing attacks.
  • GA: Bring your own certificates to Cloudflare Gateway – Administrators can use their own custom certificates to apply HTTP, DNS, CASB, DLP, RBI and other filtering policies.
  • GA: Announcing Custom DLP profiles – Cloudflare’s Data Loss Prevention (DLP) service now offers the ability to create custom detections, so that organizations can inspect traffic for their most sensitive data.
  • GA: Cloudflare Zero Trust for Managed Service Providers – Learn how the U.S. Federal Government and other large Managed Service Providers (MSPs) are using Cloudflare’s Tenant API to apply security policies like DNS filtering across the organizations they manage.

Secure SaaS environments
These blog posts are focused on maintaining consistent security and visibility across SaaS application environments, in particular to protect leaks of sensitive data.

  • Roadmap: How Cloudflare CASB and DLP work together to protect your data – Cloudflare Zero Trust will introduce capabilities between our CASB and DLP services that will enable administrators to peer into the files stored in their SaaS applications and identify sensitive data inside them.
  • Roadmap: How Cloudflare Area 1 and DLP work together to protect data in email – Cloudflare is combining capabilities from Area 1 Email Security and Data Loss Prevention (DLP) to provide complete data protection for corporate email.
  • GA: Cloudflare CASB: Scan Salesforce and Box for security issues – Cloudflare CASB now integrates with Salesforce and Box, enabling IT and security teams to scan these SaaS environments for security risks.

Accelerating and securing connectivity
In addition to product capabilities, blog posts in this section highlight speed and other strategic benefits that organizations realize with Cloudflare.

  • Why do CIOs choose Cloudflare One? – As part of CIO Week, we spoke with the leaders of some of our largest customers to better understand why they selected Cloudflare One. Learn six thematic reasons why.
  • Cloudflare is faster than Zscaler – Cloudflare is 38-55% faster at delivering Zero Trust experiences than Zscaler, as validated by third party testing.
  • GA: Network detection and settings profiles for the Cloudflare One agent – Cloudflare’s device client (WARP) can now securely detect pre-configured locations and route traffic based on the needs of the organization for that location.

Making Cloudflare easier to use
These blog posts highlight innovations across the Cloudflare portfolio, and outside the Zero Trust and SASE categories, to help organizations secure and accelerate traffic with ease.

  • Preview any Cloudflare product today – Enterprise customers can now start previewing non-contracted services with a single click in the dashboard.
  • GA: Improved access controls: API access can now be selectively disabled – Cloudflare is making it easier for account owners to view and manage the access their users have on an account by allowing them to restrict API access to the account.
  • GA: Zone Versioning is now generally available – Zone Versioning allows customers to safely manage zone configuration by versioning changes and choosing how and when to deploy those changes to defined environments of traffic.
  • Roadmap: Cloudflare Application Services for private networks: do more with the tools you already love – Cloudflare is unlocking operational efficiencies by working on integrations between our Application Services to protect Internet-facing websites and our Cloudflare One platform to protect corporate networks.

Collaborating with the right partners

In addition to new programming for our channel partners, these blog posts describe deeper technical integrations that help organizations work more efficiently with the IT and security tools they already use.

  • GA: Expanding our Microsoft collaboration: Proactive and automated Zero Trust security for customers – Cloudflare announced four new integrations between Microsoft Azure Active Directory (Azure AD) and Cloudflare Zero Trust that reduce risk proactively. These integrated offerings increase automation, allowing security teams to focus on threats versus implementation and maintenance.
  • Beta: API-based email scanning – Now, Microsoft Office 365 customers can deploy Area 1 cloud email security via Microsoft Graph API. This feature enables O365 customers to quickly deploy the Area 1 product via API, with onboarding through the Microsoft Marketplace coming in the near future.
  • GA: China Express: Cloudflare partners to boost performance in China for corporate networks – China Express is a suite of offerings designed to simplify connectivity and improve performance for users in China and developed in partnership with China Mobile International and China Broadband Communications.
  • Beta: Announcing the Authorized Partner Service Delivery Track for Cloudflare One – Cloudflare announced the limited availability of a new specialization track for our channel and implementation partners, designed to help develop their expertise in delivering Cloudflare One services.

Streamlining your multi-cloud strategy

These blog posts highlight innovations that make it easier for organizations to simply ‘plug into’ Cloudflare’s network and send traffic from any source to any destination.

  • Beta: Announcing the Magic WAN Connector: the easiest on-ramp to your next generation network – Cloudflare is making it even easier to get connected with the Magic WAN Connector: a lightweight software package you can install in any physical or cloud network to automatically connect, steer, and shape any IP traffic. Sign up for early access.
  • GA: Cloud CNI privately connects your clouds to Cloudflare – Customers using Google Cloud Platform, Azure, Oracle Cloud, IBM Cloud, and Amazon Web Services can now open direct connections from their private cloud instances into Cloudflare.
  • Cloudflare protection for all your cardinal directions – This blog post recaps how definitions of corporate network traffic have shifted and how Cloudflare One provides protection for all traffic flows, regardless of source or destination.

Expanding our Microsoft collaboration: proactive and automated Zero Trust security for customers

Post Syndicated from Abhi Das original https://blog.cloudflare.com/expanding-our-collaboration-with-microsoft-proactive-and-automated-zero-trust-security/


As CIOs navigate the complexities of stitching together multiple solutions, we are extending our partnership with Microsoft to create one of the best Zero Trust solutions available. Today, we are announcing four new integrations between Azure AD and Cloudflare Zero Trust that reduce risk proactively. These integrated offerings increase automation allowing security teams to focus on threats versus implementation and maintenance.

What is Zero Trust and why is it important?

Zero Trust is an overused term in the industry and creates a lot of confusion, so let’s break it down. Zero Trust architecture emphasizes the “never trust, always verify” approach. One way to think about it is that in the traditional security perimeter, or “castle and moat,” model, you have access to all the rooms inside the building (e.g., apps) simply by having access to the main door (typically a VPN). In the Zero Trust model, you would need to obtain access to each locked room (or app) individually rather than relying only on access through the main door. Some key components of the Zero Trust model are identity, e.g. Azure AD (who); apps, e.g. an SAP instance or a custom app on Azure (what applications); policies, e.g. Cloudflare Access rules (who can access which application); devices, e.g. a laptop managed by Microsoft Intune (the security of the endpoint requesting access); and other contextual signals.

Zero Trust is even more important today, since companies of all sizes are faced with an accelerating digital transformation and an increasingly distributed workforce. Moving away from the castle and moat model to the Internet becoming your corporate network requires security checks for every user accessing every resource. As a result, all companies, especially those whose use of Microsoft’s broad cloud portfolio is increasing, are adopting a Zero Trust architecture as an essential part of their cloud journey.

Cloudflare’s Zero Trust platform provides a modern approach to authentication for internal and SaaS applications. Most companies likely have a mix of corporate applications – some that are SaaS and some that are hosted on-premise or on Azure. Cloudflare’s Zero Trust Network Access (ZTNA) product as part of our Zero Trust platform makes these applications feel like SaaS applications, allowing employees to access them with a simple and consistent flow. Cloudflare Access acts as a unified reverse proxy to enforce access control by making sure every request is authenticated, authorized, and encrypted.

Cloudflare Zero Trust and Microsoft Azure Active Directory

We have thousands of customers using Azure AD and Cloudflare Access as part of their Zero Trust architecture. Our partnership with Microsoft, announced last year, strengthened security without compromising performance for our joint customers. Cloudflare’s Zero Trust platform integrates with Azure AD, providing a seamless application access experience for your organization’s hybrid workforce.


As a recap, the integrations we launched solved two key problems:

  1. For on-premise legacy applications, Cloudflare’s participation as Azure AD secure hybrid access partner enabled customers to centrally manage access to their legacy on-premise applications using SSO authentication without incremental development. Joint customers now easily use Cloudflare Access as an additional layer of security with built-in performance in front of their legacy applications.
  2. For apps that run on Microsoft Azure, joint customers can integrate Azure AD with Cloudflare Zero Trust and build rules based on user identity, group membership and Azure AD Conditional Access policies. Users will authenticate with their Azure AD credentials and connect to Cloudflare Access with just a few simple steps using Cloudflare’s app connector, Cloudflare Tunnel, that can expose applications running on Azure. See guide to install and configure Cloudflare Tunnel.

Recognizing Cloudflare’s innovative approach to Zero Trust and Security solutions, Microsoft awarded us the Security Software Innovator award at the 2022 Microsoft Security Excellence Awards, a prestigious classification in the Microsoft partner community.

But we aren’t done innovating. We listened to our customers’ feedback and, to address their pain points, we are announcing several new integrations.

Microsoft integrations we are announcing today

The four new integrations we are announcing today are:

1. Per-application conditional access: Azure AD customers can use their existing Conditional Access policies in Cloudflare Zero Trust.


Azure AD allows administrators to create and enforce policies on both applications and users using Conditional Access. It provides a wide range of parameters that can be used to control user access to applications (e.g. user risk level, sign-in risk level, device platform, location, client apps, etc.). Cloudflare Access now supports Azure AD Conditional Access policies per application. This allows security teams to define their security conditions in Azure AD and enforce them in Cloudflare Access.

For example, customers might want tighter levels of control for an internal payroll application and therefore define specific Conditional Access policies for it in Azure AD. For a general-purpose application such as an internal wiki, customers might enforce less stringent Conditional Access policies. In either case, both the app groups and the relevant Azure AD Conditional Access policies can be plugged directly into Cloudflare Zero Trust without any code changes.

2. SCIM: Autonomously synchronize Azure AD groups between Cloudflare Zero Trust and Azure AD, saving hundreds of hours in the CIO org.


Cloudflare Access policies can use Azure AD to verify a user’s identity and provide information about that user (e.g., first/last name, email, group membership, etc.). These user attributes are not always constant and can change over time. If a user retains access to certain sensitive resources when they shouldn’t, it can have serious consequences.

Often when user attributes change, an administrator needs to review and update all access policies that may include the user in question. This makes for a tedious process and an error-prone outcome.

The SCIM (System for Cross-domain Identity Management) specification ensures that user identities across entities using it are always up-to-date. We are excited to announce that joint customers of Azure AD and Cloudflare Access can now enable SCIM user and group provisioning and deprovisioning. It will accomplish the following:

  • The IdP policy group selectors are now pre-populated with Azure AD groups and will remain in sync. Any changes made to a policy group are instantly reflected in Access without any overhead for administrators.

  • When a user is deprovisioned in Azure AD, all of the user’s access is revoked across Cloudflare Access and Gateway. This change takes effect in near real time, thereby reducing security risk.

3. Risky user isolation: Adds an extra layer of security by routing high-risk users (based on Azure AD signals), such as contractors, to browser-isolated sessions via Cloudflare’s Remote Browser Isolation (RBI) product.


Azure AD classifies users into low, medium, and high risk based on the many data points it analyzes, and users may move from one risk group to another based on their activities. Users can be deemed risky for many reasons, such as the nature of their employment (e.g. contractors), risky sign-in behavior, or credential leaks. While these users are high-risk, there is a low-risk way to provide access to resources and apps while the user is assessed further.

We now support integrating Azure AD groups with Cloudflare Browser Isolation. When a user is classified as high-risk in Azure AD, we use this signal to automatically isolate their traffic, meaning a high-risk user accesses resources through a secure, isolated browser. If the user later moves from high-risk to low-risk, they are no longer subject to the isolation policy applied to high-risk users.

4. Secure joint Government Cloud customers: Helps Government Cloud customers achieve better security with centralized identity and access management via Azure AD, plus an additional layer of security by connecting their applications to the Cloudflare global network rather than exposing them to the whole Internet.

Via the Secure Hybrid Access (SHA) program, Government Cloud (‘GCC’) customers will soon be able to integrate Azure AD with Cloudflare Zero Trust and build rules based on user identity, group membership, and Azure AD Conditional Access policies. Users will authenticate with their Azure AD credentials and connect to Cloudflare Access in just a few simple steps using Cloudflare Tunnel, which can expose applications running on Microsoft Azure.

“Digital transformation has created a new security paradigm resulting in organizations accelerating their adoption of Zero Trust. The Cloudflare Zero Trust and Azure Active Directory joint solution has been a growth enabler for Swiss Re by easing Zero Trust deployments across our workforce allowing us to focus on our core business. Together, the joint solution enables us to go beyond SSO to empower our adaptive workforce with frictionless, secure access to applications from anywhere. The joint solution also delivers us a holistic Zero Trust solution that encompasses people, devices, and networks.”
– Botond Szakács, Director, Swiss Re

“A cloud-native Zero Trust security model has become an absolute necessity as enterprises continue to adopt a cloud-first strategy. Cloudflare and Microsoft have jointly developed robust product integrations to help security and IT leaders prevent attacks proactively, dynamically control policy and risk, and increase automation in alignment with Zero Trust best practices.”
– Joy Chik, President, Identity & Network Access, Microsoft

Try it now

Interested in learning more about how our Zero Trust products integrate with Azure Active Directory? Take a look at this extensive reference architecture that can help you get started on your Zero Trust journey and then add the specific use cases above as required. Also, check out this joint webinar with Microsoft that highlights our joint Zero Trust solution and how you can get started.

What next

We are just getting started. We want to keep innovating and improving the joint Cloudflare Zero Trust and Microsoft Security solution to solve your problems. Please give us feedback on what else you would like us to build as you continue using this joint solution.

Zone Versioning is now generally available

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/zone-versioning-ga/


Today we are announcing the general availability of Zone Versioning for enterprise customers. Zone Versioning allows you to safely manage zone configuration by versioning changes and choosing how and when to deploy those changes to defined environments of traffic. Previously announced as HTTP Applications, the experience has been redesigned based on testing and feedback to give customers a seamless way to safely roll out configuration changes.

Problems with making configuration changes

There are two problems we have heard from customers that Zone Versioning aims to solve:

  1. How do I test changes to my zone safely?
  2. If I do end up making a change that impacts my traffic negatively, how can I quickly revert that change?

Customers have worked out various ways of solving these problems. For problem #1, customers will create staging zones that live on a different hostname, often taking the form staging.example.com, and make changes there first to ensure those changes will work when deployed to their production zone. When making more than one change this can become troublesome, as they now need to keep track of every change in order to apply the exact same set of changes to the production zone. It is also possible that something tested in staging never makes it to production yet is not rolled back, so the two environments drift apart in configuration.

For problem #2, customers often keep track of what changes were made and when they were deployed in a ticketing system like JIRA, such that in case of an incident an on-call engineer can more easily find the changes they may need to roll back by manually modifying the configuration of the zone. This requires the on-call to be able to easily get to the list of what changes were made.

Altogether, this means customers are more reluctant to make changes to configuration or turn on new features that may benefit them because they do not feel confident in the ability to validate the changes safely.

How Zone Versioning solves those problems

Zone Versioning provides two new fundamental aspects to managing configuration that allow a customer to safely test, deploy and rollback configuration changes: Versions and Environments.

Versions are independent sets of zone configuration. They can be created anytime from a previous version or the initial configuration of the zone and changes to one version will not affect another version. Initially, a version affects none of a zone’s traffic, so any changes made are safe by definition. When first enabling zone versioning, we create Version 1 that is based on the current configuration of the zone (referred to as the baseline configuration).


From there, any changes that you make to Version 1 will be safely stored and propagated to our global network, but will not affect any traffic. Making changes to a version is no different from before: just select the version to edit and modify the configuration of that feature as normal. Once you have made the desired set of changes for a given version, you deploy that version to an Environment to apply it to live traffic on your zone.

Environments are a way of mapping segments of your zone’s traffic to versions of configuration. Built on our Ruleset Engine, which also powers the likes of Custom WAF Rules and Cache Rules, Environments give you the ability to create filters based on a wide range of parameters such as hostname, client IP, location, or cookie. When a version is applied to an Environment, any traffic matching the filter will use that version’s configuration.

By default, we create three environments to get started with:

  • Development – Applies to traffic sent with a specific cookie for development
  • Staging – Applies to traffic sent to Cloudflare’s staging IPs
  • Production – Applies to all traffic on the zone

You can create additional environments or modify the pre-defined environments, except for Production. Any newly created environment begins in an unassigned state, meaning traffic will fall back to the baseline configuration of the zone. Suppose we have deployed Version 2 to both the Development and Staging environments. Once we have tested Version 2 in Staging, we can ‘Promote’ Version 2 to Production, which means all traffic on the zone will receive the configuration in Version 2 except for Development and Staging traffic. If something goes wrong after deploying to Production, we can use the ‘Rollback’ action to revert to the configuration of Version 1.

How promotion and rollbacks work

It is worth going into a bit more detail about how configuration changes, promotions, and rollbacks are realized in our global network. Whenever a configuration change is made to a version, we store that change in our system of record for the service and push that change to our global network so that it is available to be used at any time.

Importantly, and unlike ordinary zone changes which take effect automatically, that change will not be used until the version is deployed to an environment that is receiving traffic. The same is true when a version is promoted or rolled back between environments. Because all the configuration we need for a given version is already available in our global network, we only need to push a single, atomic change to tell our network that traffic matching the filter for a given environment should now use the newly defined configuration version.

This means that promotions and, more importantly, rollbacks occur as quickly as you are used to with any configuration change in Cloudflare. There is no need to wait five or ten minutes for us to roll back a bad deployment; if something goes wrong, you can return to a last known good configuration in seconds. Slow rollbacks can make ongoing incidents drag on, leading to extended customer impact, so the ability to quickly execute a rollback was a critical capability.

Get started with Zone Versioning today

Enterprise Customers can get started with Zone Versioning today for their zones on the Cloudflare dashboard. Customers will need to be using the new Managed WAF rules in order to enable Zone Versioning. You can find more information about Zone Versioning in our Developer Docs.

Happy versioning!

Announcing SCIM support for Cloudflare Access & Gateway

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-and-gateway-with-scim/


Today, we’re excited to announce that Cloudflare Access and Gateway now support the System for Cross-domain Identity Management (SCIM) protocol. Before we dive into what this means, let’s take a step back and review what SCIM, Access, and Gateway are.

SCIM is a protocol that enables organizations to manage user identities and access to resources across multiple systems and domains. It is often used to automate the process of creating, updating, and deleting user accounts and permissions, and to keep these accounts and permissions in sync across different systems.


For example, most organizations have an identity provider, such as Okta or Azure Active Directory, that stores information about its employees, such as names, addresses, and job titles. The organization also likely uses cloud-based applications for collaboration. In order to access the cloud-based application, employees need to create an account and log in with a username and password. Instead of manually creating and managing these accounts, the organization can use SCIM to automate the process. Both the on-premise system and the cloud-based application are configured to support SCIM.

When a new employee is added to, or removed from, the identity provider, SCIM automatically creates an account for that employee in the cloud-based application, using the information from the on-premises system. If an employee’s information is updated in the identity provider, such as a change in job title, SCIM automatically updates the corresponding information in the cloud-based application. If an employee leaves the organization, their account can be deleted from both systems using SCIM.

SCIM helps organizations efficiently manage user identities and access across multiple systems, reducing the need for manual intervention and ensuring that user information is accurate and up to date.
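For a sense of what that looks like on the wire, below is a Go sketch of the kind of SCIM 2.0 (RFC 7644) calls an identity provider makes against a service provider’s SCIM endpoint. The base URL, bearer token, and user ID are placeholders for illustration, not Cloudflare-specific endpoints.

// Sketch of SCIM 2.0 provisioning and deprovisioning requests.
package main

import (
    "bytes"
    "fmt"
    "log"
    "net/http"
)

const (
    scimBase = "https://scim.example.com/v2" // hypothetical SCIM endpoint
    token    = "REPLACE_ME"
)

func do(method, path string, body []byte) {
    req, err := http.NewRequest(method, scimBase+path, bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "Bearer "+token)
    req.Header.Set("Content-Type", "application/scim+json")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(method, path, resp.Status)
}

func main() {
    // Create a user when they join the organization.
    do("POST", "/Users", []byte(`{
      "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
      "userName": "jdoe@example.com",
      "active": true
    }`))

    // Deactivate (deprovision) the same user when they leave; a SCIM-aware
    // service uses this signal to revoke the user's active sessions.
    do("PATCH", "/Users/2819c223-7f76-453a-919d-413861904646", []byte(`{
      "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
      "Operations": [{"op": "replace", "path": "active", "value": false}]
    }`))
}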

Cloudflare Access provides secure access to your internal applications and resources. It integrates with your existing identity provider to enforce strong authentication for users and ensure that only authorized users have access to your organization’s resources. After a user successfully authenticates via the identity provider, Access initiates a session for that user. Once the session has expired, Access will redirect the user back to the identity provider.

Similarly, Cloudflare Gateway is a comprehensive secure web gateway (SWG) which leverages the same identity provider configurations as Access to allow administrators to build DNS, Network, and HTTP inspection policies based on identity. Once a user logs in through the WARP client via the identity provider, their identity is logged and evaluated against any policies created by their organization’s administrator.

Challenges before SCIM

Before SCIM, if a user needed to be deprovisioned (e.g. because they left the business, were involved in a security breach, or for other reasons), an administrator needed to remove access for the user in both the identity provider and Access. This was because a user’s Cloudflare Zero Trust session would stay active until they attempted to log in via the identity provider again. This was time-consuming and error-prone, and it left room for security vulnerabilities if a user’s access was not removed in a timely manner.


Another challenge with Cloudflare Access and Gateway was that identity provider groups had to be entered manually. This meant that if an identity provider group changed, an administrator had to manually update the value within the Cloudflare Zero Trust dashboard to reflect those changes. This was tedious and time-consuming, and led to inconsistencies if the updates were not made promptly. Additionally, it required additional resources and expertise to manage this process effectively.


SCIM for Access & Gateway

Now, with the integration of SCIM, Access and Gateway can automatically deprovision users after they are deactivated in an identity provider and synchronize identity provider groups. This ensures that only active users, in the right group, have access to your organization’s resources, improving the security of your network.

User deprovisioning via SCIM listens for any user deactivation events in the identity provider and then revokes all active sessions for that user. This immediately cuts off their access to any application protected by Access and their session via WARP for Gateway.


Additionally, the integration of SCIM allows for the synchronization of identity provider group information in Access and Gateway policies. This means that all identity provider groups will automatically be available in both the Access and Gateway policy builders. There is also an option to automatically force a user to reauthenticate if their group membership changes.

For example, if you wanted to create an Access policy that only applies to users with example.com email addresses who are not part of the risky user group, you could build that policy by simply selecting the risky user group from a drop-down in the policy builder.


Similarly, if you wanted to block example.com and all of its subdomains for these same users, you could build an equivalent Gateway policy using the same synchronized group.


What’s next

Today, SCIM support is available for Azure Active Directory and Okta for Self-Hosted Access applications. In the future, we plan to extend support for more Identity Providers and to Access for SaaS.

Try it now

SCIM is available for all Zero Trust customers today and can be used to improve operations and overall security. Try out SCIM for Access and Gateway yourself today.

API-based email scanning

Post Syndicated from Ayush Kumar original https://blog.cloudflare.com/api-based-email-scanning/


The landscape of email security is constantly changing. One aspect that remains consistent is the reliance on email as the starting point for the majority of threat campaigns. Attackers often start with a phishing campaign to gather employee credentials which, if successful, are used to exfiltrate data, siphon money, or perform other malicious activities. This threat remains ever present even as companies move their email to the cloud using providers like Microsoft 365 or Google Workspace.

In our pursuit to help build a better Internet and tackle online threats, Cloudflare offers email security via our Area 1 product to protect all types of email inboxes – from cloud to on premise. The Area 1 product analyzes every email an organization receives and uses our threat models to assess if the message poses risk to the customer. For messages that are deemed malicious, the Area 1 platform will even prevent the email from landing in the recipient’s inbox, ensuring that there is no chance for the attempted attack to be successful.

We try to provide customers with the flexibility to deploy our solution in whatever way they find easiest. Continuing in this pursuit to make our solution as turnkey as possible, we are excited to announce our open beta for Microsoft 365 domain onboarding via the Microsoft Graph API. We know that domains onboarded via API offer quicker deployment times and more flexibility. This onboarding method is one of many, so customers can now deploy domains how they see fit without losing Area 1 protection.

Onboarding Microsoft 365 Domains via API

Cloudflare Area 1 provides customers with many deployment options. Whether it is Journaling + BCC (where customers send a copy of each email to Area 1), Inline/MX records (where another hop is added via MX records), or Secure Email Gateway Connectors (where Area 1 directly interacts with a SEG), Area 1 gives customers flexibility in how they deploy our solution. However, we have always recommended that customers deploy using MX records.


Adding this extra hop and pointing domains to Area 1 allows the service to provide protection with sinkholing, making sure that malicious emails don’t reach the destination inbox. However, we recognized that configuring Area 1 as the first hop (i.e. changing the MX records) may require sign-offs from other teams inside an organization and can add extra cycles. Organizations are also left waiting for this inline change to propagate in DNS (known as DNS propagation time). We know our customers want to be protected as soon as possible while they make these necessary adjustments.

With Microsoft 365 onboarding, the process of adding protection requires fewer configuration steps and less waiting time. We now use the Microsoft Graph API to evaluate all messages associated with a domain. This allows for greater flexibility for operations teams deploying Area 1.
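As an illustration of the kind of Graph call involved, the Go sketch below lists recent messages for a mailbox via GET /users/{id}/messages. This is not Area 1’s implementation; it assumes you already hold an application access token with the appropriate Mail.Read consent, and the mailbox address is a placeholder.

// Illustration of reading mailbox messages with the Microsoft Graph API.
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    const mailbox = "jdoe@example.com" // placeholder mailbox
    const accessToken = "REPLACE_ME"   // OAuth token for Microsoft Graph

    url := "https://graph.microsoft.com/v1.0/users/" + mailbox +
        "/messages?$select=subject,from,receivedDateTime&$top=10"

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "Bearer "+accessToken)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Println(string(body)) // JSON list of messages to feed into detection
}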

For example, a customer of Area 1 who is heavily involved in M&A transactions due to the nature of their industry benefits from being able to deploy Area 1 quickly using the Microsoft API. Before API onboarding, IT teams spent time juggling the handover of various acquisition assets. Assigning new access rights, handing over ownership, and other tasks took time to execute, leaving mailboxes unsecured. Now, when the customer acquires a new entity, they can use API onboarding to quickly add protection for the domains they just acquired. This gives them protection on the email addresses associated with the new domain while they work on completing the other tasks at hand.


Once we are authorized to read incoming messages from Microsoft 365, we will start processing emails and firing detections on suspicious messages. This new onboarding process is significantly faster and only requires a few clicks to get started.

To start the process, choose which domain you would like to onboard via API. Then, within the UI, navigate to “Domains & Routing” under Settings. After adding a new domain and choosing API scan, follow our setup wizard to authorize Area 1 to start reading messages.


Within a few minutes of authorization, your organization will now be protected by Area 1.


Looking Ahead

This onboarding process is part of our continual effort to provide customers with best-in-class email protection, and API onboarding gives customers increased flexibility in how they deploy our solution. Looking ahead, Microsoft 365 API onboarding also opens the door to other capabilities.

Our team is now looking to add the ability to retroactively scan emails that were sent before Area 1 was installed. This gives new customers the opportunity to clean up any old emails that could still pose a risk to the organization. We are also looking to provide more levers for organizations that want more control over which mailboxes are scanned with Area 1. Soon, customers will be able to designate within the UI which mailboxes will have their incoming email scanned by Area 1.

We also currently limit each domain to a single deployment type (i.e. a domain can be onboarded using either MX records or the API). However, we are now looking at giving customers the ability to do hybrid deployments using both API and MX records. This combined approach provides both the greatest flexibility and the maximum coverage.

There are many things in the pipeline that the Area 1 team is looking to bring to customers in 2023, and this open beta helps us build these new capabilities.

All customers can join the open beta, so if you are interested in onboarding a new domain using this method, follow the steps above and get Area 1 protection on your Microsoft 365 domains.

New: Scan Salesforce and Box for security issues

Post Syndicated from Alex Dunbrack original https://blog.cloudflare.com/casb-adds-salesforce-and-box-integrations/


Today, we’re sharing the release of two new SaaS integrations for Cloudflare CASB – Salesforce and Box – to help CIOs, IT leaders, and security admins swiftly identify looming security issues present across exactly the type of tools housing their business-critical data.

Recap: What is Cloudflare CASB?

Released in September, Cloudflare’s API CASB has already proven to organizations from around the world that security risks – like insecure settings and inappropriate file sharing – can often exist across the friendly SaaS apps we all know and love, and indeed pose a threat. By giving operators a comprehensive view of the issues plaguing their SaaS environments, Cloudflare CASB has allowed them to effortlessly remediate problems in a timely manner before they can be leveraged against them.

But as both we and other forward-thinking administrators have come to realize, it’s not always Microsoft 365, Google Workspace, and business chat tools like Slack that contain an organization’s most sensitive information.

Scan Salesforce with Cloudflare CASB

First up: Salesforce, often credited as the first Software-as-a-Service. The sprawling, intricate, hard-to-contain Customer Relationship Management (CRM) platform gives workforces a flexible hub from which they can do just as the software describes: manage customer relationships. Whether it be tracking deals and selling opportunities, managing customer conversations, or storing contractual agreements, Salesforce has truly become the ubiquitous solution for organizations looking for a way to manage every customer-facing interaction they have.

This reliance, however, also makes Salesforce a business data goldmine for bad actors.


With CASB’s new integration for Salesforce, IT and security operators will be able to quickly connect their environments and scan them for the kind of issues putting their sensitive business data at risk. Spot uploaded files that have been shared publicly with anyone who has the link. Identify default permissions that give employees access to records that should be need-to-know only. You can even see employees who are sending out emails as other Salesforce users!

Using this new integration, we’re excited to help close the security visibility gap for yet another SaaS app serving as the lifeblood for teams out in the field making business happen.

Scan Box with Cloudflare CASB

Box is the leading Content Cloud that enables organizations to accelerate business processes, power workplace collaboration, and protect their most valuable information, all while working with a best-of-breed enterprise IT stack like Cloudflare.

A platform used to store everything – from contracts and financials to product roadmaps and employee records – Box has given collaborative organizations a single place to convene and share information that, in a growing remote-first world, has no better place to be stored.

So where are disgruntled employees and people with malicious intent going to look when they want to unveil private business files?


With Cloudflare CASB’s new integration for Box, security and IT teams alike can now link their admin accounts and scan them for under-the-radar security issues that leave them prone to compromise and data exfiltration. In addition to Box’s built-in content and collaboration security, Cloudflare CASB adds another layer of protection that catches files and folders shared publicly or with users outside your organization. By providing security admins with a single view of employees who aren’t following security policies, we make it harder for bad actors to get inside and do damage.

With Cloudflare’s status as an official Box Technology Partner, we’re looking forward to offering both Cloudflare and Box users a robust, yet easy-to-use toolset that can help stop pressing, real-world data security incidents right in their tracks.

“Organizations today need products that are inherently secure to support employees working from anywhere,” said Areg Alimian, Head of Security Products at Box. “At Box, we continuously strive to improve our integrations with third-party apps so that it’s easier than ever for customers to use Box alongside best-in-class solutions. With today’s integration with Cloudflare CASB, we enable our joint customers to have a single pane of glass view allowing them to consistently enforce security policies and protect leakage of sensitive information across all their apps.”

Taking action on your business data security

Salesforce and Box are certainly not the only SaaS applications managing this type of sensitive organizational data. At Cloudflare, we strive to make our products as widely compatible as possible so that organizations can continue to place their trust and confidence in us to help keep them secure.

Today, Cloudflare CASB supports integrations with Google Workspace, Microsoft 365, Slack, GitHub, Salesforce, and Box, with a growing list of other critical applications on their way, so if there’s one in particular you’d like to see soon, let us know!

For those not already using Cloudflare Zero Trust, don’t hesitate to get started today – see the platform yourself with 50 free seats by signing up here, then get in touch with our team here to learn more about how Cloudflare CASB can help your organization lock down its SaaS apps.

Email Link Isolation: your safety net for the latest phishing attacks

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/area1-eli-ga/


Email is one of the most ubiquitous and also most exploited tools that businesses use every single day. Baiting users into clicking malicious links within an email has been a particularly long-standing tactic for the vast majority of bad actors, from the most sophisticated criminal organizations to the least experienced attackers.

Even though this is a commonly known approach to gain account access or commit fraud, users are still being tricked into clicking malicious links that, in many cases, lead to exploitation. The reason is simple: even the best trained users (and security solutions) cannot always distinguish a good link from a bad link.

On top of that, securing employees’ mailboxes often results in multiple vendors, complex deployments, and a huge drain of resources.

Email Link Isolation turns Cloudflare Area 1 into the most comprehensive email security solution when it comes to protecting against phishing attacks. It rewrites links that could be exploited, keeps users vigilant by alerting them to the uncertainty around the website they’re about to visit, and protects against malware and vulnerabilities through the user-friendly Cloudflare Browser Isolation service. Also, in true Cloudflare fashion, it’s a one-click deployment.

With more than a couple dozen customers in beta and over one million links protected (so far), we can now clearly see the significant value and potential that this solution can deliver. To extend these benefits to more customers and continue to expand on the multitude of ways we can apply this technology, we’re making Email Link Isolation generally available (GA) starting today.

Email Link Isolation is included with the Cloudflare Area 1 enterprise plan at no extra cost, and can be enabled with three clicks:

1. Log in to the Area 1 portal.

2. Go to Settings (the gear icon).

3. On Email Configuration, go to Email Policies > Link Actions.

4. Scroll to Email Link Isolation and enable it.

Email Link Isolation: your safety net for the latest phishing attacks

Defense in layers

Applying multiple layers of defense becomes ever more critical as threat actors continuously look for ways to navigate around each security measure and develop more complex attacks. One of the best examples that demonstrates these evolving techniques is a deferred phishing attack, where an embedded URL is benign when the email reaches your email security stack and eventually your users’ inbox, but is later weaponized post-delivery.

Email Link Isolation: your safety net for the latest phishing attacks

To combat evolving email-borne threats, such as malicious links, Area 1 continually updates its machine learning (ML) models to account for all potential attack vectors, and leverages post-delivery scans and retractions as additional layers of defense. And now, customers on the Enterprise plan also have access to Email Link Isolation as one last defense – a safety net.

The key to successfully adding layers of security is to use a strong Zero Trust suite, not a disjointed set of products from multiple vendors. Users need to be kept safe without disrupting their productivity – otherwise they’ll start seeing important emails being quarantined or run into a poor experience when accessing websites, and soon enough they’ll be the ones looking for ways around the company’s security measures.

Built to avoid productivity impacts

Email Link Isolation provides an additional layer of security with virtually no disruption to the user experience. It’s smart enough to decide which links are safe, which are malicious, and which are still dubious. Those dubious links are then changed (rewritten to be precise) and Email Link Isolation keeps evaluating them until it reaches a verdict with a high degree of confidence. When a user clicks on one of those rewritten links, Email Link Isolation checks for a verdict (benign or malign) and takes the corresponding action – benign links open in the local browser as if they hadn’t been changed, while malign links are prevented from opening altogether.

Most importantly, when Email Link Isolation is unable to confidently determine a verdict based on all available intelligence, an interstitial page is presented to ask the user to be extra vigilant. The interstitial page calls out that the website is suspicious, and that the user should refrain from entering any personal information and passwords unless they know and fully trust the website. Over the last few months of beta, we’ve seen that over two thirds of users don’t proceed to the website after seeing this interstitial – that’s a good thing!

For the users that still want to navigate to the website after seeing the interstitial page, Email Link Isolation uses Cloudflare Browser Isolation to automatically open the link in an isolated browser running in Cloudflare’s closest data center to the user. This delivers an experience virtually indistinguishable from using the local browser, thanks to our Network Vector Rendering (NVR) technology and Cloudflare’s expansive, low-latency network. By opening the suspicious link in an isolated browser, the user is protected against potential browser attacks (including malware, zero days, and other types of malicious code execution).

In a nutshell, the interstitial page is displayed when Email Link Isolation is uncertain about the website, and provides another layer of awareness and protection against phishing attacks. Then, Cloudflare Browser Isolation is used to protect against malicious code execution when a user decides to still proceed to such a website.
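
To put the flow in one place, here is a minimal conceptual sketch in Python. It is purely illustrative: the verdict values and actions mirror the description above, not Area 1's actual implementation.

def handle_rewritten_link_click(verdict: str) -> str:
    # Conceptual mapping of click-time verdicts to actions, as described above.
    if verdict == "benign":
        return "open the original URL in the user's local browser"
    if verdict == "malicious":
        return "block the page"
    # Dubious/unknown: warn first, then isolate if the user chooses to continue.
    return "show the interstitial, then open in Cloudflare Browser Isolation"

print(handle_rewritten_link_click("unknown"))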

What we’ve seen in the beta

As expected, the percentage of rewritten links that users actually click is quite small (in the single digits). That's because the majority of such links are not delivered in messages the users are expecting, and aren't coming from their trusted colleagues or partners. So, even when a user clicks on such a link, they will often see the interstitial page and decide not to proceed any further. We see that less than half of all clicks lead to the user actually visiting the website (in Browser Isolation, to protect against malicious code that could otherwise execute behind the scenes).

Email Link Isolation: your safety net for the latest phishing attacks
Email Link Isolation: your safety net for the latest phishing attacks

You may be wondering why we're not seeing a larger number of clicks on these rewritten links. The answer is quite simply that Email Link Isolation is indeed that last layer of protection against attack vectors that may have evaded other lines of defense. Virtually all the well-crafted phishing attacks that try to trick users into clicking malicious links are already being stopped by Area 1 email security, and such messages don't reach users' inboxes.

The balance is very positive. Across all the customers using the Email Link Isolation beta in production, including some in the Fortune 500, we received no negative feedback on the user experience. That means we're meeting one of the most challenging goals: providing additional security without negatively affecting users and without adding a tuning and administration burden to SOC and IT teams.

One interesting thing we uncovered is how valuable our customers find our click-time inspection of link shorteners. The fact that a shortened URL (e.g. bit.ly) can be modified at any time to point to a different website has made some of our customers anxious. Email Link Isolation inspects the link at time of click, evaluates the actual website it's about to open, and proceeds to open it locally, block it, or present the interstitial page as appropriate. We're now working on full link shortener coverage through Email Link Isolation.
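
As an aside on why click-time evaluation matters: the destination behind a shortened link can only be known by resolving it at the moment of the click. Below is a minimal sketch in Python, an illustration of the underlying problem rather than how Email Link Isolation is implemented; the shortened URL is a placeholder.

import urllib.request

def resolve_final_url(short_url: str) -> str:
    # Follow the shortener's redirects and return the final destination.
    # Because the target can change at any time, the answer is only valid
    # at the moment the link is actually clicked.
    request = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.geturl()

print(resolve_final_url("https://bit.ly/example-link"))  # placeholder shortened link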

All built on Cloudflare

Cloudflare’s intelligence is driving the decisions of what gets rewritten. We have earlier signals than others.

Email Link Isolation has been built on Cloudflare’s unique capabilities in many areas.

First, Cloudflare sees enough Internet traffic to confidently identify new, low-confidence, and potentially dangerous domains earlier than anyone else. Leveraging that intelligence for this early signal is key to the user experience, so we don't add speed bumps to legitimate websites that are part of our users' daily routines. Next, we use Cloudflare Workers to process this data and serve the interstitial without introducing frustrating delays for the user. And finally, only Cloudflare Browser Isolation can protect against malicious code with a low-latency experience that is invisible to end users and feels like a local browser.

If you’re not yet a Cloudflare Area 1 customer, start your free trial and phishing risk assessment here.

One-click data security for your internal and SaaS applications

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/one-click-zerotrust-isolation/

One-click data security for your internal and SaaS applications

One-click data security for your internal and SaaS applications

Most of the CIOs we talk to want to replace dozens of point solutions as they start their own Zero Trust journey. Cloudflare One, our comprehensive Secure Access Service Edge (SASE) platform, can help teams of any size rip out all the legacy appliances and services that tried to keep their data, devices, and applications safe, without compromising speed.

We also built those products to work better together. Today, we’re bringing Cloudflare’s best-in-class browser isolation technology to our industry-leading Zero Trust access control product. Your team can now control the data in any application, and what a user can do in the application, with a single click in the Cloudflare dashboard. We’re excited to help you replace your private networks, virtual desktops, and data control boxes with a single, faster solution.

Zero Trust access control is just the first step

Most organizations begin their Zero Trust migration by replacing a virtual private network (VPN). VPN deployments trust too many users by default. In most configurations, any user on a private network can reach any resource on that same network.

The consequences vary. On one end of the spectrum, employees in marketing can accidentally stumble upon payroll amounts for the entire organization. At the other end, attackers who compromise the credentials of a support agent can move through a network to reach trade secrets or customer production data.

Zero Trust access control replaces this model by inverting the security posture. A Zero Trust network trusts no one by default. Every user, and every request or connection, must prove they can reach a specific resource. Administrators can build granular rules and monitor comprehensive logs to prevent incidental or malicious access incidents.

Over 10,000 teams have adopted Cloudflare One to replace their own private network with a Zero Trust model. We offer those teams rules that go beyond just identity. Security teams can enforce hard key authentication for specific applications as a second factor. Sensitive production systems can require users to provide the reason they need temporary access while they request permission from a senior manager. We integrate with just about every device posture provider, or you can build your own, to ensure that only corporate devices connect to your systems.

The teams who deploy this solution improve the security of their enterprise overnight while also making their applications faster and more usable for employees in any region. However, once users pass all of those checks we still rely on the application to decide what they can and cannot do.

In some cases, that means Zero Trust access control is not sufficient. An employee planning to leave tomorrow could download customer contact info. A contractor connecting from an unmanaged device can screenshot schematics. As enterprises evolve on their SASE migration, they need to extend Zero Trust control to application usage and data.

Isolate sessions without any client software

Cloudflare’s browser isolation technology gives teams the ability to control usage and data without making the user experience miserable. Legacy approaches to browser isolation relied on one of two methods to secure a user on the public Internet:

  • Document Object Model (DOM) manipulation – unpack the webpage, inspect it, hope you caught the vulnerability, attempt to repack the webpage, deliver it. This model leads to thousands of broken webpages and total misses on zero days and other threats.
  • Pixel pushing – stream a browser running far away to the user, like a video. This model leads to user complaints due to performance and a long tail of input incompatibilities.

Cloudflare’s approach is different. We run headless versions of Chromium, the open source project behind Google Chrome and Microsoft Edge and other browsers, in our data centers around the world. We send the final rendering of the webpage, the draw commands, to a user’s local device.

One-click data security for your internal and SaaS applications

The user thinks it is just the Internet. Highlighting, right-clicking, videos – they all just work. Users do not need a special browser client. Cloudflare's technology just works in any browser on mobile or desktop. Security teams, meanwhile, can guarantee that code never executes on devices in the field, stopping zero-day attacks.

We added browser isolation to Cloudflare One to protect against attacks that leap out of a browser from the public Internet. However, controlling the browser also gives us the ability to pass that control along to security and IT departments, so they can focus on another type of risk – data misuse.

As part of this launch, when administrators secure an application with Cloudflare’s Zero Trust access control product, they can click an additional button that will force sessions into our isolated browser.

One-click data security for your internal and SaaS applications

When the user authenticates, Cloudflare Access checks all the Zero Trust rules configured for a given application. When this isolation feature is enabled, Cloudflare will silently open the session in our isolated browser. The user does not need any special software or to be trained on any unique steps. They just navigate to the application and start doing their work. Behind the scenes, the session runs entirely in Cloudflare’s network.

Control usage and data in sessions

By running the session in Cloudflare's isolated browser, administrators can begin to build rules that replace some goals of legacy virtual desktop solutions. Some enterprises deploy virtual desktop infrastructure (VDI) to sandbox application usage. Those VDI platforms extend applications to employees and contractors without allowing the application to run on the physical device.

Employees and contractors tend to hate this method. The client software required is clunky and not available on every operating system. The speed slows them down. Administrators also need to invest time in maintaining the desktops and the virtualization software that powers them.

We’re excited to help you replace that point solution, too. Once an application is isolated in Cloudflare’s network, you can toggle additional rules that control how users interact with the resource. For example, you can disable potential data loss vectors like file downloads, printing, or copy-pasting. Add watermarks, both visible and invisible, to audit screenshot leaks.

You can extend this control beyond just data loss. Some teams have sensitive applications where you need users to connect without inputting any data, but they do not have the developer time to build a “Read Only” mode. With Cloudflare One, those teams can toggle “Disable keyboard” and allow users to reach the service while blocking any input.

One-click data security for your internal and SaaS applications

The isolated solution also integrates with Cloudflare One’s Data Loss Prevention (DLP) suite. With a few additional settings, you can bring comprehensive data control to your applications without any additional engineering work or point solution deployment. If a user strays too far in an application and attempts to download something that contains personal information like social security or credit card numbers, Cloudflare’s network will stop that download while still allowing otherwise approved files.

One-click data security for your internal and SaaS applications

Extend that control to SaaS applications

Most of the customers we hear from need to bring this level of data and usage control to their self-hosted applications. Many of the SaaS tools they rely on have more advanced role-based rules. However, that is not always the case, and even when the rules exist, they are often not as comprehensive as needed and require an administrator to manage a dozen different application settings.

To avoid that hassle, you can bring Cloudflare One's one-click isolation feature to your SaaS applications, too. Cloudflare's access control solution can be configured as an identity proxy that forces all logins to any SaaS application that supports SSO through Cloudflare's network, where additional rules, including isolation, can be applied.

What’s next?

Today’s announcement brings together two of our customers’ favorite solutions – our Cloudflare Access solution and our browser isolation technology. Both products are available to use today. You can start building rules that force isolation or control data usage by following the guides linked here.

Willing to wait for the easy button? Join the beta today for the one-click version that we are rolling out to customer accounts.

Improved access controls: API access can now be selectively disabled

Post Syndicated from Joseph So original https://blog.cloudflare.com/improved-api-access-control/

Improved access controls: API access can now be selectively disabled

Improved access controls: API access can now be selectively disabled

Starting today, it is possible to selectively scope API access to your account to specific users.

We are making it easier for account owners to view and manage the access their users have on an account by allowing them to restrict API access to the account. Ensuring users have only the access they need, and maximizing visibility into that access, is critical, and today's change is another step in that direction.

When Cloudflare was first introduced, a single user had access to a single account. As we have been adopted by larger enterprises, the need to maximize access granularity and retain control of an account has become progressively more important. Nowadays, enterprises using Cloudflare could have tens or hundreds of users on an account, some of whom need to do account configuration and some who do not. In addition, to centralize account configuration, some enterprises need service accounts, or accounts shared between several members of an organization.

While account owners have always been able to restrict access to an account by their users, they haven't been able to view the keys and tokens created by their users. Restricting use of the API is the first step toward giving account owners a single control plane experience to manage their users' access.

Steps to secure an account

The safest way to reduce risk is to scope every user to the minimum access required; the next best is to monitor what they do with that access.

While a dashboard login has some degree of non-repudiation, especially when protected by multiple factors and an SSO configuration, an API key or token can be leaked, and no further authentication factors will block the use of that credential. Therefore, to reduce the attack surface, we can limit what the token can do.

A Cloudflare account owner can now access their members page, and turn API access on or off for specific users, as well as account wide.

Improved access controls: API access can now be selectively disabled
Improved access controls: API access can now be selectively disabled

This feature is available for our enterprise users starting today.

Moving forward

On our journey to making the account management experience safer, and more granular, we will continue to increase the level of control account owners have over their accounts. Building these API restrictions is a first step on the way to allowing account-owned API tokens (which will limit the need to have personal tokens), as well as increasing general visibility of tokens among account members.

How Cloudflare CASB and DLP work together to protect your data

Post Syndicated from Alex Dunbrack original https://blog.cloudflare.com/casb-dlp/

How Cloudflare CASB and DLP work together to protect your data

How Cloudflare CASB and DLP work together to protect your data

Cloudflare’s Cloud Access Security Broker (CASB) scans SaaS applications for misconfigurations, unauthorized user activity, shadow IT, and other data security issues. Discovered security threats are called out to IT and security administrators for timely remediation, removing the burden of endless manual checks on a long list of applications.

But Cloudflare customers revealed they want more information available to assess the risk associated with a misconfiguration. A publicly exposed intramural kickball schedule is not nearly as critical as a publicly exposed customer list, so customers want them treated differently. They asked us to identify where sensitive data is exposed, reducing their assessment and remediation time in the case of leakages and incidents. With that feedback, we recognized another opportunity to do what Cloudflare does best: combine the best parts of our products to solve customer problems.

What’s underway now is an exciting effort to provide Zero Trust users a way to get the same DLP coverage for more than just sensitive data going over the network: SaaS DLP for data stored in popular SaaS apps used by millions of organizations.

With these upcoming capabilities, customers will be able to connect their SaaS applications in just a few clicks and scan them for sensitive data – such as PII, PCI data, and even custom regex matches – stored in documents, spreadsheets, PDFs, and other uploaded files. This gives customers the signals to quickly assess and remediate major security risks.

Understanding CASB

How Cloudflare CASB and DLP work together to protect your data

Released in September, Cloudflare’s API CASB has already enabled organizations to quickly and painlessly deep-dive into the security of their SaaS applications, whether it be Google Workspace, Microsoft 365, or any of the other SaaS apps we support (including Salesforce and Box released today). With CASB, operators have been able to understand what SaaS security issues could be putting their organization and employees at risk, like insecure settings and misconfigurations, files shared inappropriately, user access risks and best practices not being followed.

“But what about the sensitive data stored inside the files we’re collaborating on? How can we identify that?”

Understanding DLP

Also released in September, Cloudflare DLP for data in-transit has provided users of Gateway, Cloudflare's Secure Web Gateway (SWG), a way to manage and outright block the movement of sensitive information into and out of the corporate network, preventing it from landing in the wrong hands. In this case, DLP can spot sensitive strings, like credit card and social security numbers, as employees attempt to communicate them in one form or another, like uploading them in a document to Google Drive or sending them in a message on Slack. Cloudflare DLP blocks the HTTP request before it reaches the intended application.

How Cloudflare CASB and DLP work together to protect your data
How Cloudflare CASB and DLP work together to protect your data
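
To give a sense of the kind of inspection involved, here is a simplified sketch, assuming a detection that pairs a digit pattern with a Luhn checksum to reduce false positives. This is an illustration only, not Cloudflare DLP's detection logic:

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    # Standard Luhn checksum over the digits, ignoring spaces and hyphens.
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return len(digits) >= 13 and total % 10 == 0

def should_block(upload_body: str) -> bool:
    # Block the request if any matched sequence passes the checksum.
    return any(luhn_valid(m.group()) for m in CARD_PATTERN.finditer(upload_body))

print(should_block("order notes ... 4111 1111 1111 1111"))  # True for this test number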

But once again we received the same questions and feedback as before.

“What about data in our SaaS apps? The information stored there won’t be visible over the network.”

CASB + DLP, Better Together

Coming in early 2023, Cloudflare Zero Trust will introduce a new product synergy that allows customers to peer into the files stored in their SaaS applications and identify any particularly sensitive data inside them.

Credit card numbers in a Google Doc? No problem. Social security numbers in an Excel spreadsheet? CASB will let you know.

With this product collaboration, Cloudflare will provide IT and security administrators one more critical area of security coverage, rounding out our data loss prevention story. Between DLP for data in-transit, CASB for file sharing monitoring, and even Remote Browser Isolation (RBI) and Area 1 for data in-use DLP and email DLP, respectively, organizations can take comfort in knowing that their bases are covered when it comes to data exfiltration and misuse.

While development continues, we’d love to hear how this kind of functionality could be used at an organization like yours. Interested in learning more about either of these products or what’s coming next? Reach out to your account manager or click here to get in touch if you’re not already using Cloudflare.

How Cloudflare Area 1 and DLP work together to protect data in email

Post Syndicated from Ayush Kumar original https://blog.cloudflare.com/dlp-area1-to-protect-data-in-email/

How Cloudflare Area 1 and DLP work together to protect data in email

How Cloudflare Area 1 and DLP work together to protect data in email

Threat prevention is not limited to keeping external actors out; it also means keeping sensitive data in. Most organizations do not realize how much confidential information resides within their email inboxes. Employees handle vast amounts of sensitive data on a daily basis, such as intellectual property, internal documentation, PII, or payment information, and often share this information internally via email, making email one of the largest stores of confidential information within a company. It comes as no shock that organizations worry about the accidental or malicious egress of sensitive data and often address these concerns by instituting strong Data Loss Prevention policies. Cloudflare makes it easy for customers to manage the data in their email inboxes with Area 1 Email Security and Cloudflare One.

Cloudflare One, our SASE platform that delivers network-as-a-service (NaaS) with Zero Trust security natively built in, connects users to enterprise resources and offers a wide variety of opportunities to secure corporate traffic, including the inspection of data transferred to your corporate email. Area 1 email security, as part of our composable Cloudflare One platform, delivers the most complete data protection for your inbox and offers a cohesive solution when combined with additional services, such as Data Loss Prevention (DLP). With the ability to easily adopt and implement Zero Trust services as needed, customers have the flexibility to layer on defenses based on their most critical use cases. In the case of Area 1 + DLP, the combination can collectively and preemptively address the most pressing use cases that represent high-risk areas of exposure for organizations. Combining these products provides defense in depth for your corporate data.

Preventing egress of cloud email data via HTTPS

Email provides a readily available outlet for corporate data, so why let sensitive data reach email in the first place? An employee can accidentally attach an internal file rather than a public white paper in a customer email, or worse, attach a document with the wrong customers’ information to an email.

With Cloudflare Data Loss Prevention (DLP) you can prevent the upload of sensitive information, such as PII or intellectual property, to your corporate email. DLP is offered as part of Cloudflare One, which runs traffic from data centers, offices, and remote users through the Cloudflare network.  As traffic traverses Cloudflare, we offer protections including validating identity and device posture and filtering corporate traffic.

How Cloudflare Area 1 and DLP work together to protect data in email

Cloudflare One offers HTTP(s) filtering, enabling you to inspect and route the traffic to your corporate applications. Cloudflare Data Loss Prevention (DLP) leverages the HTTP filtering abilities of Cloudflare One. You can apply rules to your corporate traffic and route traffic based on information in an HTTP request. There are a wide variety of options for filtering, such as domain, URL, application, HTTP method, and many more. You can use these options to segment the traffic you wish to DLP scan. All of this is done with the performance of our global network and managed with one control plane.

You can apply DLP policies to corporate email applications, such as Google Workspace or Microsoft 365. As an employee attempts to upload an attachment to an email, the upload is inspected for sensitive data and then allowed or blocked according to your policy.

How Cloudflare Area 1 and DLP work together to protect data in email

Inside your corporate email, Area 1 extends core data protection principles in the following ways:

Enforcing data security between partners

With Cloudflare’s Area 1, you can also enforce strong TLS standards. Having TLS configured adds an extra layer of security as it ensures that emails are encrypted, preventing any attackers from reading sensitive information and changing the message if they intercept the email in transit (on-path-attack). This is especially useful for G Suite customers whose internal emails still go out to the whole Internet in front of prying eyes or for customers who have contractual obligations to communicate with partners with SSL/TLS.

Area 1 makes it easy to enforce SSL/TLS. From the Area 1 portal, you can configure Partner Domains TLS by navigating to "Partner Domains TLS" within "Domains & Routing" and adding a partner domain with which you want to enforce TLS. If TLS is required, all emails from that domain sent without TLS will be automatically dropped. Our enforcement requires strong TLS rather than best-effort encryption, making sure that all traffic is encrypted with strong ciphers and preventing a malicious attacker from decrypting intercepted emails.

How Cloudflare Area 1 and DLP work together to protect data in email
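
If you want to sanity-check a partner's mail server yourself before requiring TLS, a quick local test like the one below shows whether the server advertises STARTTLS and completes the TLS upgrade. This is only a capability check from your side (the hostname is a placeholder); Area 1 performs the actual enforcement:

import smtplib, ssl

def offers_starttls(mx_host: str) -> bool:
    # Connect to the partner's MX on port 25 and attempt the STARTTLS upgrade.
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.ehlo()
        if not smtp.has_extn("starttls"):
            return False
        try:
            smtp.starttls(context=ssl.create_default_context())
        except (ssl.SSLError, smtplib.SMTPException):
            return False  # TLS advertised but the handshake or verification failed
        smtp.ehlo()
        return True

print(offers_starttls("mx.partner.example"))  # placeholder partner MX host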

Stopping passive email data loss

Organizations often forget that exfiltration can also happen without a single email being sent. Attackers who compromise a company account can passively sit by, monitoring all communications and picking out information manually.

Once an attacker has reached this stage, it is incredibly difficult to know an account is compromised and what information is being tracked. Indicators like email volume, IP address changes, and others do not work since the attacker is not taking any actions that would cause suspicion. At Cloudflare, we have a strong thesis on preventing these account takeovers before they take place, so no attacker is able to fly under the radar.

In order to stop account takeovers before they happen, we place great emphasis on filtering emails that pose a risk of stealing employee credentials. The most common attack vector used by malicious actors is the phishing email. Given its high impact in accessing confidential data when successful, it's no shock that this is the go-to tool in the attacker's toolkit. Phishing emails pose little threat to an email inbox protected by Cloudflare's Area 1 product. Area 1's models assess whether a message is a suspected phishing email by analyzing different metadata. Anomalies detected by the models, like domain proximity (how close a domain is to the legitimate one), the sentiment of the email, or others, can quickly determine whether an email is legitimate. If Area 1 determines an email to be a phishing attempt, we automatically retract it and prevent the recipient from receiving it in their inbox, ensuring that the employee's account remains uncompromised and cannot be used to exfiltrate data.

Attackers who are looking to exfiltrate data from an organization also often rely on employees clicking on links sent to them via email. These links can point to online forms that on the surface look innocuous but serve to gather sensitive information. Attackers can use these websites to run scripts that gather information about the visitor without any interaction from the employee, so an errant click can lead to the exfiltration of sensitive information. Other malicious links lead to exact copies of websites the user is accustomed to accessing; these are a form of phishing where the credentials entered by the employee are sent to the attacker rather than logging them into the website.

Area 1 covers this risk by providing Email Link Isolation as part of our email security offering. With Email Link Isolation, Area 1 looks at every link sent and assesses its domain authority. For anything that's on the margin (a link we cannot confidently say is safe), Area 1 will launch a headless Chromium browser and open the link there with no interruption. This way, any malicious scripts that execute will run on an isolated instance far from the company's infrastructure, stopping the attacker from getting company information. This is all accomplished instantaneously and reliably.

Stopping Ransomware

Attackers have many tools in their arsenal to try to compromise employee accounts. As we mentioned above, phishing is a common threat vector, but it’s far from the only one. At Area 1, we are also vigilant in preventing the propagation of ransomware.

A common mechanism that attackers use to disseminate ransomware is to disguise attachments by renaming them. A ransomware payload could be renamed from petya.7z to Invoice.pdf in order to trick an employee into downloading the file. Depending on how urgent the email made this invoice seem, the employee could blindly try to open the attachment on their computer, causing the organization to suffer a ransomware attack. Area 1's models detect these mismatches and stop malicious attachments from arriving in their target's inbox.

How Cloudflare Area 1 and DLP work together to protect data in email

A successful ransomware campaign can not only stunt the daily operations of any company, but can also lead to the loss of local data if the encryption cannot be reversed. Cloudflare's Area 1 product has dedicated payload models that analyze not only attachment extensions but also the hashed value of the attachment, comparing it to known ransomware campaigns. Once Area 1 finds an attachment deemed to be ransomware, we prevent the email from going any further.

How Cloudflare Area 1 and DLP work together to protect data in email
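
A toy version of those two checks might look like the following. The magic-byte table and hash set are placeholders to illustrate the idea; they are not Area 1's models or threat intelligence:

import hashlib

KNOWN_BAD_SHA256 = {"0" * 64}  # placeholder for a feed of known ransomware payload hashes

MAGIC_BYTES = {b"%PDF": ".pdf", b"PK\x03\x04": ".docx", b"7z\xbc\xaf": ".7z"}

def extension_matches_content(filename: str, payload: bytes) -> bool:
    # Compare the claimed extension against the file's leading magic bytes.
    for magic, ext in MAGIC_BYTES.items():
        if payload.startswith(magic):
            return filename.lower().endswith(ext)
    return True  # unknown type: leave the verdict to other checks

def looks_like_ransomware(filename: str, payload: bytes) -> bool:
    mismatched = not extension_matches_content(filename, payload)
    known_bad = hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256
    return mismatched or known_bad

# A 7z archive renamed to Invoice.pdf is flagged by the mismatch check.
print(looks_like_ransomware("Invoice.pdf", b"7z\xbc\xaf" + b"\x00" * 16))  # True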

Cloudflare’s DLP vision

We aim for Cloudflare products to give you the layered security you need to protect your organization, whether it's malicious attempts to get in or sensitive data getting out. As email continues to be one of the largest repositories of corporate data, it is crucial for companies to have strong DLP policies in place to prevent data loss. With Area 1 and Cloudflare One working together, Cloudflare can give organizations more confidence in their DLP policies.

If you are interested in these email security or DLP services, contact us for a conversation about your security and data protection needs.

Or if you currently subscribe to Cloudflare services, consider reaching out to your Cloudflare customer success manager to discuss adding additional email security or DLP protection.

Announcing Custom DLP profiles

Post Syndicated from Adam Chalmers original https://blog.cloudflare.com/custom-dlp-profiles/

Announcing Custom DLP profiles

Introduction

Announcing Custom DLP profiles

Where does sensitive data live? Who has access to that data? How do I know if that data has been improperly shared or leaked? These questions keep many IT and security administrators up at night. The goal of data loss prevention (DLP) is to give administrators the desired visibility and control over their sensitive data.

We shipped the general availability of DLP in September 2022, offering Cloudflare One customers better protection of their sensitive data. With DLP, customers can identify sensitive data in their corporate traffic, evaluate the intended destination of the data, and then allow or block it accordingly — with details logged as permitted by your privacy and sovereignty requirements. We began by offering customers predefined detections for identifier numbers (e.g. Social Security #s) and financial information (e.g. credit card #s). Since then, nearly every customer has asked:

“When can I build my own detections?”

Most organizations care about credit card numbers, which use standard patterns that are easily detectable. But the data patterns of intellectual property or trade secrets vary widely between industries and companies, so customers need a way to detect the loss of their unique data. This can include internal project names, unreleased product names, or unannounced partner names.

As of today, your organization can build custom detections to identify these types of sensitive data using Cloudflare One. That's right, today you are able to build Custom DLP Profiles using the same regular expression approach that is used in policy building across our platform.

How to use it

Cloudflare’s DLP is embedded in our secure web gateway (SWG) product, Cloudflare Gateway, which routes your corporate traffic through Cloudflare for fast, safe Internet browsing. As your traffic passes through Cloudflare, you can inspect that HTTP traffic for sensitive data and apply DLP policies.

Building DLP custom profiles follows the same intuitive approach you’ve come to expect from Cloudflare.

First, once within the Zero Trust dashboard, navigate to the DLP Profiles tab under Gateway:

Announcing Custom DLP profiles

Here you will find any available DLP profiles, either predefined or custom:

Announcing Custom DLP profiles

Select Create Profile to begin a new one. After providing a name and description, select Add detection entry to add a custom regular expression. A regular expression, or regex, is a sequence of characters that specifies a search pattern in text, and is a standard way for administrators to achieve the flexibility and granularity they need in policy building.

Cloudflare Gateway currently supports regexes in HTTP policies using the Rust regex crate. For consistency, we use the same crate for custom DLP detections. For details on our regex support, see our documentation.

Regular expressions can be used to build custom PII detections of your choosing, such as email addresses, or to detect keywords for sensitive intellectual property.

Announcing Custom DLP profiles

Provide a name and a regex of your choosing. Every entry in a DLP profile is a new detection that you can scan for in your corporate traffic. Our documentation provides resources to help you create and test Rust regexes.

Below is an example of regex to detect a simple email address:

Announcing Custom DLP profiles
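
For reference, a simplified pattern along those lines can be tested locally before you add it as a detection entry. Because Gateway evaluates patterns with the Rust regex crate, it's worth sticking to constructs both engines support; the sketch below avoids lookarounds and other unsupported features:

import re

# Simplified email detection: an illustrative pattern, not an exhaustive one.
EMAIL = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

samples = [
    "contact jane.doe@example.com for the draft",
    "no addresses in this line",
]
for text in samples:
    print(bool(EMAIL.search(text)), text)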

When you are done, you will see the entry in your profile.  You can turn entries on and off in the Status field for easier testing.

Announcing Custom DLP profiles

The custom profile can then be applied to traffic using an HTTP policy, just like a predefined profile. Here both a predefined and custom profile are used in the same policy, blocking sensitive traffic to dlptest.com:

Announcing Custom DLP profiles

Our DLP roadmap

This is just the start of our DLP journey, and we aim to grow the product exponentially in the coming quarters. In Q4 we delivered:

  • Expanded Predefined DLP Profiles
  • Custom DLP Profiles
  • PDF scanning support
  • Upgraded file name logging

Over the next quarters, we will add a number of features, including:

  • Data at rest scanning with Cloudflare CASB
  • Minimum DLP match counts
  • Microsoft Sensitivity Label support
  • Exact Data Match (EDM)
  • Context analysis
  • Optical Character Recognition (OCR)
  • Even more predefined DLP detections
  • DLP analytics
  • Many more!

Each of these features will offer you new data visibility and control solutions, and we are excited to bring these features to customers very soon.

How do I get started?

DLP is part of Cloudflare One, our Zero Trust network-as-a-service platform that connects users to enterprise resources. Our GA blog announcement provides more detail about using Cloudflare One to onboard traffic to DLP.

To get access to DLP via Cloudflare One, reach out for a consultation, or contact your account manager.

Preview any Cloudflare product today

Post Syndicated from Angie Kim original https://blog.cloudflare.com/preview-today/

Preview any Cloudflare product today

Preview any Cloudflare product today

With Cloudflare’s pace of innovation, customers want to be able to see how our products work and sooner to address their needs without having to contact someone. Now they can, without any commitments or limits on monetary value and usage caps.

Ready to get started? Here’s how it works.

For any product* that is currently not part of an enterprise contract, users with administrative access will have the ability to enable the product on the Cloudflare dashboard. With a single click of a button, they can start configuring any required features within seconds.

Preview any Cloudflare product today
Preview any Cloudflare product today

You have access to resources that can help you get started, as well as the ongoing support of your sales team. Otherwise, you will be left to enjoy the product, and our team members will be in contact after about two weeks. We always look to collect feedback and can also discuss how to have the product added to your contract. If more time is needed in the evaluation phase, no problem. If you decide it is not the right fit, we will offboard the product without any penalties.

We are working on offering more and more self-service capabilities that traditionally have not been offered to our enterprise customers. We’ll also be enhancing this overall experience over the next few months to increase visibility and improve the self-guided journey.

Log into the dashboard to start exploring any products today!

*There are some products that will never be fully self-service, but we will look for opportunities to streamline the onboarding as much as possible. Examples include access to our China Network and Registrar. Support for the following products is still on the roadmap: R2, Cache Reserve, Image Resizing, CASB, DLP and BYOIP.

Network detection and settings profiles for the Cloudflare One agent

Post Syndicated from Kyle Krum original https://blog.cloudflare.com/location-aware-warp/

Network detection and settings profiles for the Cloudflare One agent

Network detection and settings profiles for the Cloudflare One agent

Teams can connect users, devices, and entire networks to Cloudflare One through several flexible on-ramps. Those on-ramps include traditional connectivity options like GRE or IPsec tunnels, our Cloudflare Tunnel technology, and our Cloudflare One device agent.

Each of these on-ramps sends nearly all traffic to Cloudflare's network, where we can filter security threats with products like our Secure Web Gateway and Data Loss Prevention service. In other cases, the destination is an internal resource deployed in Cloudflare's Zero Trust private network.

However, sometimes users want traffic to stay local. If a user is sitting within a few meters of their printer, they might prefer to connect through their local network instead of adding a hop through Cloudflare. They could configure Cloudflare to always ignore traffic bound for the printer, keeping it local, but when they leave the office they still need to use Cloudflare’s network to reach that printer remotely.

Solving this use case and others like it previously required manual changes from an administrator every time a user moved. An administrator would need to tell Cloudflare’s agent to include traffic sometimes and, in other situations, ignore it. This does not scale.

Starting today, any team using Cloudflare One has the flexibility to decide what traffic is sent to Cloudflare and what traffic stays local depending on the network of the user. End users do not need to change any settings when they enter or exit a managed network. Cloudflare One’s device agent will automatically detect and make the change for them.

Not everyone needs the same controls

Not every user in your enterprise needs the same network configuration. Sometimes you need to make exceptions for teams, certain members of staff, or specialty hardware or software based on business needs. Those exceptions can become a manual mess when locations and networks also require different settings.

We’ve heard several examples from customers who run into that type of headache. Each case below describes a common theme: rigid network configuration breaks when it means real world usage.

In some cases, a user will work physically close to a server or another device that their device needs to reach. We talk to customers in manufacturing or lab environments who prefer to send all Internet-bound traffic to Cloudflare but want to continue to operate a private network inside their facility.

Today’s announcement allows teams to adapt to this type of model. When users operate inside the physical location in the trusted network, they can connect directly. When they leave, they can use Cloudflare’s network to reach back into the trusted network after they meet the conditions of the Zero Trust rules configured by an administrator.

In other situations, customers are in the process of phasing out legacy appliances in favor of Cloudflare One. However, the migration to a Zero Trust model sometimes needs to be stepwise and deliberate. In these cases, customers maintain some existing on-premise infrastructure while they deploy Cloudflare’s SASE solution.

As part of this release, teams can configure Cloudflare’s device agent to detect that a user sits inside a known location where those appliances still operate. The agent will automatically stop directing traffic to Cloudflare and instead send it to your existing appliances while you deprecate them over time.

Configuration Profiles and Managed Networks

Today’s release introduces the ability to create a profile, a defined set of configuration options. You can create rules that decide when and where profiles apply, changing settings without manual intervention.

For network-aware configuration, administrators can define a profile that decides what traffic is sent to Cloudflare and what stays local. That profile can then apply when users are on specific networks and not when they are in other locations.

Beyond network detection, profiles can apply based on user group membership. Not every user in your workforce needs the same on-ramp configuration. Some developers might need certain traffic excluded due to local development work. As part of this launch, you can configure profiles to apply based on who the user is in addition to where the user sits.

Defining a secure way to detect a network you manage

Cloudflare needs to be able to decide what network a device is using in a way that can’t easily be spoofed by someone looking to skirt policy. To solve that challenge, today’s release introduces the ability to define a known TLS endpoint which Cloudflare’s agent can reach. In just a few minutes, an administrator can create a certificate-validated check to indicate a device is operating within a managed network.

First, an administrator can create a TLS certificate that Cloudflare will use and match based on the SHA-256 hash of the certificate. You can leverage existing infrastructure or create a new TLS endpoint via the following example:

1. Create a local certificate you can use

openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout example.key -out example.pem -subj "/CN=example.com" -addext "subjectAltName=DNS:example.com"

2. Extract the sha256 thumbprint of that certificate

openssl x509 -noout -fingerprint -sha256 -inform pem -in example.pem | tr -d :

Which will output something like this:

SHA256 Fingerprint=DD4F4806C57A5BBAF1AA5B080F0541DA75DB468D0A1FE731310149500CCD8662

Next, the Cloudflare agent running on the device needs to be able to reach an endpoint presenting that certificate in order to validate that it is connected to a network you manage. We recommend running a simple HTTPS server inside your network which the device can reach to validate the certificate.

3. Create a python3 script and save it as myserver.py to set up a simple HTTPS server.

import ssl, http.server

# Serve HTTPS on port 4443 using the certificate and key created in step 1.
# The agent only needs to reach this endpoint and check the certificate
# fingerprint, so the content served does not matter.
server = http.server.HTTPServer(('0.0.0.0', 4443), http.server.SimpleHTTPRequestHandler)
sslcontext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
sslcontext.load_cert_chain(certfile='./example.pem', keyfile='./example.key')
server.socket = sslcontext.wrap_socket(server.socket, server_side=True)
server.serve_forever()

Run the server

python3 myserver.py
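
Optionally, before moving to the dashboard, you can confirm from a device on the network that the endpoint presents the expected certificate. This is just a sanity check of your own setup, not what the agent runs internally; the IP address is a placeholder for the machine running myserver.py:

import hashlib, socket, ssl

EXPECTED = "DD4F4806C57A5BBAF1AA5B080F0541DA75DB468D0A1FE731310149500CCD8662"  # use the fingerprint from step 2

def endpoint_fingerprint(host: str, port: int = 4443) -> str:
    # Fetch the certificate the endpoint presents (skipping chain validation,
    # since it is self-signed) and hash the DER bytes, mirroring the openssl output.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest().upper()

print(endpoint_fingerprint("192.0.2.10") == EXPECTED)  # replace with your server's IP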

Configure the network location in Zero Trust dashboard

Once you’ve created the example TLS endpoint above, provide the fingerprint to Cloudflare to define a managed network.

  1. Login to your Zero Trust Dashboard and navigate to Settings → WARP Client
  2. Scroll down to Network Locations and click Add new, then complete the form. Use the fingerprint generated in the previous step as the TLS Cert SHA-256 value, along with the IP address of the device running the Python script.
Network detection and settings profiles for the Cloudflare One agent

Configure a Device Profile

Once the network is defined, you can create profiles that apply based on whether the agent is operating in this network. To do so, follow the steps below.

  1. Login to your Zero Trust Dashboard and navigate to Settings → WARP Client
  2. Scroll down to Device Settings and create a new profile that includes Your newly created managed network as a location
Network detection and settings profiles for the Cloudflare One agent

Reconnect your Agent

Each time the device agent detects a network change event from the operating system (e.g. waking up the device, changing Wi-Fi networks, etc.), the agent will attempt to reach that endpoint inside your network to prove that it is operating within a network you manage.

If an endpoint that matches the SHA-256 fingerprint you’ve defined is detected, the device will get the settings profile as configured above. You can quickly validate that the device agent received the required settings by using warp-cli settings or warp-cli get-alternate-network from your command line / terminal.

What’s next?

Managed network detection and settings profiles are both new and available for you to use today. While settings profiles will work with any modern version of the agent from this last year, network detection requires at least version 2022.12.

The WARP device client currently runs on all major operating systems and is easy to deploy with the device management tools your organization already uses. You can find the download links to all versions of our agent by visiting Settings → Downloads.

Network detection and settings profiles for the Cloudflare One agent

Starting a Zero Trust journey can be daunting. We’re spending this week, CIO Week, to share features like this to make it less of a hassle to begin. If you want to talk to us to learn more about how to take that first step, please reach out.