Tag Archives: vpn

NSA on Securing VPNs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/nsa_on_securing.html

The NSA’s Central Security Service — that’s the part that’s supposed to work on defense — has released two documents (a full and an abridged version) on securing virtual private networks. Some of it is basic, but it contains good information.

Maintaining a secure VPN tunnel can be complex and requires regular maintenance. To maintain a secure VPN, network administrators should perform the following tasks on a regular basis:

  • Reduce the VPN gateway attack surface
  • Verify that cryptographic algorithms are Committee on National Security Systems Policy (CNSSP) 15-compliant
  • Avoid using default VPN settings
  • Remove unused or non-compliant cryptography suites
  • Apply vendor-provided updates (i.e. patches) for VPN gateways and clients
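
As a concrete illustration of the second and fourth items in that list, here is a minimal sketch of what a CNSSP 15-aligned IKEv2/IPsec proposal could look like in strongSwan's swanctl.conf. The connection name, peer, and subnets are placeholders, and strongSwan is only one example of a gateway this guidance applies to, so treat the NSA documents and your vendor's documentation as authoritative.

```
connections {
  site-to-site {
    version = 2                        # IKEv2 only
    remote_addrs = vpn.example.com     # placeholder peer
    # IKE restricted to CNSSP 15-style algorithms: AES-256-GCM, SHA-384 PRF, ECDH over P-384
    proposals = aes256gcm16-prfsha384-ecp384
    children {
      net {
        local_ts  = 10.0.1.0/24        # placeholder subnets
        remote_ts = 10.0.2.0/24
        # Same restriction for the ESP data channel; no legacy or default suites left enabled
        esp_proposals = aes256gcm16-ecp384
      }
    }
  }
}
```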

The Quiet, Bloodless Revolution: The Internet Strikes Back at Digital Tracking with a Decentralized VPN

Post Syndicated from the Bivol team original https://bivol.bg/tachyon-vpn.html

Sunday, 10 May 2020


A new application, Tachyon VPN, has appeared in the Google and Apple app stores. An alpha version of this free VPN for macOS was released a little earlier, and a Windows version is coming soon.

"Some news hook for a press article! And with 'revolution' in the headline, no less!" the reader may scoff, and will turn out to be wrong. Because Tachyon is not just another service for safe and anonymous browsing; it is the first VPN in history that runs on a blockchain and is ready for use by the ordinary consumer.

Tachyon has no centralized servers that anyone could block, because information is exchanged over a peer-to-peer network between participants, and each participant acts as a server. All traffic in Tachyon is protected with end-to-end encryption and is impossible to trace,

because the already encrypted data travels across the Internet disguised as ordinary protocols (HTTP and TLS), indistinguishable in any way from the surrounding stream of data. And, the bottom line:

Tachyon VPN cannot be shut down, banned, or punished

Because this service has no management: the protocol is open, and the blockchain and the network are governed through the structure traditional for the crypto industry, a DAO (decentralized autonomous organization), in which all decisions are taken by an equal, general vote of the network's anonymous participants.

The main virtue of Tachyon VPN, in my view, is its utter simplicity of use: you install the app from the Google or Apple store, open it, and see a single button on the screen: On/Off. You tap it, and a second later all of your communication is hidden from prying eyes.

And you know that no one can force your VPN provider to register anywhere or to hand over any information. Not because your provider is so principled and stubborn, but because there is no one who could be compelled to do so: there is no headquarters, no director, and no legal entity either. About the only option left is to threaten every ordinary user of the Tachyon network. But even that is pointless, because the users themselves know nothing beyond their own traffic.

The more I watch events unfold around the world, the more I give in to admiration at how orderly and well-proportioned the steady movement is from the old global paradigm of human existence to the new one. The new paradigm is called E Unibus Pluram (Out of one, many). I borrowed the phrase from an essay by the American writer David Foster Wallace, who in turn inverted Cicero's famous "E Pluribus Unum" (Out of many, one), which adorns the coat of arms of the United States.

We can see the transition to the new paradigm everywhere:

-- Instead of the centralized Internet there is now Web 3.0, in which information is stored, encrypted and distributed, on the computers of the Web's ordinary users (Blockstack, Filecoin, Graphite Docs, and others).

-- Instead of the hierarchical DNS system, built on resolving servers and controlled by a handful of American companies, and instead of the national replicas of that vulnerable structure, domain-name services implemented on decentralized blockchains have come online (Ethereum Name Service, Handshake, Unstoppable Domains, and others).

-- Instead of the totalitarian social networks, alternative decentralized blockchain platforms have been built and launched: Facebook has its counterpart in Minds, instead of Reddit and Twitter there are Hive, Golos, Steem, and Commun, instead of Instagram there is All.me, and instead of LinkedIn there is Indorse.

-- Instead of fiat money with unlimited inflationary issuance controlled by national central banks, there is a multitude of cryptocurrencies covering a wide range of functions, from a store of value (Bitcoin) to full anonymity (Monero, Zcash, Beam, and others).

-- Instead of the banks' vetting-and-approval model of lending, dozens of DeFi (decentralized finance) services have been created, where any user can take out or extend a loan without asking anyone's permission, since the parties' performance is enforced impartially not by legal structures but by mathematically verified smart contracts (Maker, Compound, Aave, dYdX, bZx, Sablier, Dharma, Fulcrum, and others).

One last link was missing from this picture: that immediate predecessor, in the Darwinian sense of the word, without which a new quality cannot be born. That link has now arrived in the form of Tachyon VPN, which combines the infrastructure of the traditional Internet with the new technologies of fully decentralized human relations.

In the two months that Tachyon VPN has been in testing, its user count has passed 150,000. That is an absolutely phenomenal success for blockchain products, which struggle mightily to reach the mass user.

People's reticence is easy to understand. First, they were burned by the endless stream of scams launched under the cover of ICOs in 2016-2018. The lion's share of those schemes fell on Russia, the countries of the former CIS, Eastern Europe, and Asia, which is explained by the poverty of the population. When there are no savings, there is not much to lose, so our people are ready to bury their coins in the first Field of Miracles that comes along promising quick riches.

Second, government media around the world persistently cultivate a negative attitude toward crypto technologies, planting in the public mind false associations between cryptocurrency and terrorism, pedophilia, the drug trade, and international criminal groups. All of this, of course, is nonsense, since cryptocurrency's share of illegal trade does not exceed 1%. But trust in government media works without fail: ask ten people on the street about Bitcoin and nine of them will tell you it is a scam or a crime.

Third, blockchain technologies are hard for a mass audience to grasp. In truth, everything that surrounds us is complicated, from the car to the Internet. But both the car and the Internet long ago reached the maximum level of adaptation for the ordinary person: to get the desired result, it is enough to press a button.

Until now, blockchain technologies sat at the same level the Internet occupied in the early 1990s. I remember that setting up my first connection to the Web, on Windows for Workgroups 3.11, took me more than a day: first I read computer magazines about the bulletin boards (BBS) where one could find third-party Winsock stacks such as Chameleon and Trumpet (Winsock is a special framework for working with the TCP/IP protocol; it was built into the OS only starting with Windows 95), then I spent hours tuning a dozen parameters, then I separately configured applications for mail, newsgroups, and FTP servers. It was awful, and only a handful of people used the Internet. Things looked just as awful with blockchain products, which only specialists could set up.

Today we are witnessing the final, decisive step: the translation of complex blockchain technologies into the language of "one click." To borrow or lend money under the guarantee of an impartial smart contract, it is enough to download an app to your phone and tap a button on the screen. To use a blockchain VPN, there is likewise nothing to configure: one click, and everything works.

I anticipate objections: the blockchain-based alternatives to the social networks have existed for a relatively short while, yet there is no real competition with Facebook and VKontakte to speak of. Just look at the number-one high-tech social network on the blockchain, minds.com: a textbook Zombieland sheltering two and a half idle users. And yet everything on minds.com is intuitive, everything is familiar, and the only difference from Facebook is that there is no Zuckerberg or any other boss who could ban you for saying something wrong (wrong by his, Zuckerberg's, understanding of right and wrong).

In practice the alternative social networks stand empty, but the blame lies not with blockchain technology at all but with the very idea of a social network, which lives only off mass congregation: the more people crowd onto a platform, the more attractive that platform becomes.

The socially active public will certainly move to the decentralized networks, but it will do so last of all, only after the alternative technologies have substantially displaced the traditional Internet services on the strength of their objective advantages.

Things stand quite differently with VPNs. Here the arrival of blockchain technology has long been awaited, and with great impatience.

Public interest in protecting Internet communications shows up everywhere, even if it is unevenly distributed geopolitically. In the countries of the Western world, for example, VPNs are used by between 4% (Australia) and 6% (Germany) of Internet users.

Not so in the Third World: in India and Turkey the figure is 18%, in China 20%, and in Thailand and Russia around 24%. Just think: a quarter of ordinary network users, whom computer geeks (people obsessively enthusiastic about technology, ed.) are used to dismissing as lamers (the uninformed and clueless, ed.), cope remarkably well with a technology like VPN.

The main credit for popularizing VPNs in my homeland belongs, without a doubt, to Roskomnadzor. The latest wave of popular interest in this previously obscure service came after Roskomnadzor "successfully" took on Telegram. In fairness, it must be said that the mass popularity of VPNs was driven not so much by the messenger as by pirated video content and torrent trackers, which Roskomnadzor also blocks. So the laurels for popularizing the technology clearly still go to that agency.

In other words, Tachyon's popularity was predetermined by the market's demand for exactly this kind of traffic-anonymization service. No less important a role is played by the blockchain technology itself, which makes it impossible to put pressure on the company providing the VPN service.

My compatriots have already produced a remarkable precedent: in March of last year Roskomnadzor sent letters to a dozen major VPN providers demanding that they connect to the state information system. Nine of them, predictably, refused (all except Kaspersky Secure Connection, which was also predictable), pulled their servers out of the territory of the Russian Federation, and calmly went on providing their services.

I would venture to suggest, however, that this reaction happened only because Roskomnadzor's opinion interests no one outside Russia and, even more to the point, frightens no one.

Now imagine that an identical demand were raised by Roskomnadzor's American counterpart in the struggle for the purity of political morals, the US National Security Agency. Does anyone really believe that the response of NordVPN, Hide My Ass!, Hola VPN, OpenVPN, VyprVPN, ExpressVPN, TorGuard, IPVanish, and VPN Unlimited would resemble their response to the Russian agency?

To everyone still clinging to geopolitical illusions, I suggest recalling the fate of the crypto projects of Facebook (Libra) and Telegram (TON): both were crushed like bugs before they ever saw the light of day.

The beauty and reliability of a VPN built on blockchain technology is that there is no one to crush or harass, neither physically (by seizing server hardware) nor legally (by outlawing it within national jurisdictions).

Above all, I would like to congratulate us all not simply on the arrival of the digital revolution, but on a revolution of a new type. Its novelty lies in its moral and ethical splendor: for the first time in history, a bloodless revolution is taking place. Not because human mores have become any less bloodthirsty, but because the objects and subjects of coercion are disappearing. Verily: E Unibus Pluram!

Author: Sergey Golubitsky, Novaya Gazeta

Translation: the Bivol team
***The Russian outlet Novaya Gazeta is part of the international consortium of investigative journalists (OCCRP), to which the Bivol site also belongs.

Migrating from VPN to Access

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/migrating-from-vpn-to-access/

With so many people at Cloudflare now working remotely, it’s worth stepping back and looking at the systems we use to get work done and how we protect them. Over the years we’ve migrated from a traditional “put it behind the VPN!” company to a modern zero-trust architecture. Cloudflare hasn’t completed its journey yet, but we’re pretty darn close. Our general strategy: protect every internal app we can with Access (our zero-trust access proxy), and simultaneously beef up our VPN’s security with Spectrum (a product allowing the proxying of arbitrary TCP and UDP traffic, protecting it from DDoS).

Before Access, we had many services behind VPN (Cisco ASA running AnyConnect) to enforce strict authentication and authorization. But VPN always felt clunky: it’s difficult to set up, maintain (securely), and scale on the server side. Each new employee we onboarded needed to learn how to configure their client. But migration takes time and involves many different teams. While we migrated services one by one, we focused on the high priority services first and worked our way down. Until the last service is moved to Access, we still maintain our VPN, keeping it protected with Spectrum.

Some of our services didn’t run over HTTP or other Access-supported protocols, and still required the use of the VPN: source control (git+ssh) was a particular sore spot. If any of our developers needed to commit code they’d have to fire up the VPN to do so. To help in our new-found goal to kill the pinata, we introduced support for SSH over Access, which allowed us to replace the VPN as a protection layer for our source control systems.
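
As a rough illustration of what that change means on a developer's laptop, SSH over Access works by pointing SSH at cloudflared with a ProxyCommand. The stanza below is a hedged sketch of the kind of ~/.ssh/config entry the `cloudflared access ssh-config` command generates; gitserver.example.com and the binary path are placeholders.

```
Host gitserver.example.com
  # cloudflared first completes the Access login, then proxies the SSH stream through Cloudflare
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
```

With a stanza like that in place, git+ssh remotes on the protected hostname keep working as usual, with Access deciding who can reach them.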

Over the years, we’ve been whittling away at our services, one-by-one. We’re nearly there, with only a few niche tools remaining behind the VPN and not behind Access. As of this year, we are no longer requiring new employees to set up VPN as part of their company onboarding! We can see this in our Access logs, with more users logging into more apps every month:

[Chart: users and applications authenticating through Access, by month]

During this transition period from VPN to Access, we’ve had to keep our VPN service up and running. As VPN is a key tool for people doing their work while remote, it’s extremely important that this service is highly available and performant.

Enter Spectrum: our DDoS protection and performance product for any TCP and UDP-based protocol. We put Spectrum in front of our VPN very early on and saw immediate improvement in our security posture and availability, all without any changes in end-user experience.

With Spectrum sitting in front of our VPN, we now use the entire Cloudflare edge network to protect our VPN endpoints against DDoS and improve performance for VPN end-users.

Setup was a breeze, with only minimal configuration needed:

[Screenshot: the Spectrum configuration in front of our VPN endpoints]

Cisco AnyConnect uses HTTPS (TCP) to authenticate, after which the actual data is tunneled using a DTLS encrypted UDP protocol.

Although configuration and setup were a breeze, actually getting it to work was definitely not. Our early users quickly noted that although authenticating worked just fine, they couldn’t actually see any data flowing through the VPN. We quickly realized our arch nemesis, the MTU (maximum transmission unit), was to blame. As some of our readers might remember, we have historically always set a very small MTU size for IPv6. We did this because there might be IPv6-to-IPv4 tunnels in between eyeballs and our edge. By setting it very low we prevented PTB (packet too big) packets from ever getting sent back to us, which would cause problems due to our ECMP routing inside our data centers. But with a VPN, you always increase the packet size due to the VPN header. This meant that the 1280-byte MTU we had set would never be enough to run a UDP-based VPN. We ultimately settled on an MTU of 1420, which we still run today and which allows us to protect our VPN entirely using Spectrum.
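
A quick way to sanity-check a similar MTU choice from a client, assuming Linux iputils ping and a placeholder VPN hostname: 1392 bytes of ICMP payload plus 8 bytes of ICMP header plus 20 bytes of IPv4 header adds up to a 1420-byte packet, sent with fragmentation prohibited.

```
# Probe whether a 1420-byte IPv4 packet reaches the endpoint without fragmentation
# (-M do sets the don't-fragment flag, -s sets the ICMP payload size: 1392 + 8 + 20 = 1420)
ping -M do -s 1392 -c 4 vpn.example.com
# For IPv6 the same arithmetic would be 1372 + 8 (ICMPv6) + 40 (IPv6 header) = 1420
```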

Over the past few years this has served us well, knowing that our VPN infrastructure is safe and people will be able to continue to work remotely no matter what happens. All in all this has been a very interesting journey, whittling down one service at a time, getting closer and closer to the day we can officially retire our VPN. To us, Access represents the future, with Spectrum + VPN to tide us over and protect our services until they’ve migrated over. In the meantime, as of the start of 2020, new employees no longer get a VPN account by default!

Dogfooding from Home: How Cloudflare Built our Cloud VPN Replacement

Post Syndicated from Evan Johnson original https://blog.cloudflare.com/dogfooding-from-home/

It’s never been more crucial to help remote workforces stay fully operational — for the sake of countless individuals, businesses, and the economy at large. In light of this, Cloudflare recently launched a program that offers our Cloudflare for Teams suite for free to any company, of any size, through September 1. Some of these firms have been curious about how Cloudflare itself uses these tools.

Here’s how Cloudflare’s next-generation VPN alternative, Cloudflare Access, came to be.

Rewind to 2015. Back then, as with many other companies, all of Cloudflare’s internally-hosted applications were reached via a hardware-based VPN. When one of our on-call engineers received a notification (usually on their phone), they would fire up a clunky client on their laptop, connect to the VPN, and log on to Grafana.

It felt a bit like solving a combination lock with a fire alarm blaring overhead.

But for three of our engineers enough was enough. Why was a cloud network security company relying on clunky on-premise hardware?

And thus, Cloudflare Access was born.

A Culture of Dogfooding

Many of the products Cloudflare builds are a direct result of the challenges our own team is looking to address, and Access is a perfect example. Development on Access originally began in 2015, when the project was known internally as EdgeAuth.

Initially, just one application was put behind Access. Engineers who received a notification on their phones could tap a link and, after authenticating via their browser, they would immediately have access to the key details of the alert in Grafana. We liked it a lot — enough to get excited about what we were building.

Access solved a variety of issues for our security team as well. Using our identity provider of choice, we were able to restrict access to internal applications at L7 using Access policies. This once onerous process of managing access control at the network layer with a VPN was replaced with a few clicks in the Cloudflare dashboard.

After Grafana came our internal Atlassian suite, including Jira and our wiki, and hundreds of other internal applications; the Access team then began working to support non-HTTP-based services. Support for git allowed Cloudflare’s developers to securely commit code from anywhere in the world in a fully audited fashion. This made Cloudflare’s security team very happy. Here’s a slightly modified example of a real authentication event that was generated while pushing code to our internal git repository.

[Screenshot: an Access authentication event logged during a git push]

It didn’t take long for more and more of Cloudflare’s internal applications to make their way behind Access. As soon as people started working with the new authentication flow, they wanted it everywhere. Eventually our security team mandated that we move our apps behind Access, but for a long time it was totally organic: teams were eager to use it.

Incidentally, this highlights a perk of utilizing Access: you can start by protecting and streamlining the authentication flows for your most popular internal tools — but there’s no need for a wholesale rip-and-replace. For organizations that are experiencing limits on their hardware-based VPNs, it can be an immediate salve that is up and running after just one setup call with a Cloudflare onboarding expert (you can schedule a time here).

That said, there are some upsides to securing everything with Access.

Supporting a Global Team

VPNs are notorious for bogging down Internet connections, and the one we were using was no exception. When connecting to internal applications, having all of our employees’ Internet connections pass through a standalone VPN was a serious performance bottleneck and single point of failure.

Cloudflare Access is a much saner approach. Authentication occurs at our network edge, which extends to 200 cities in over 90 countries globally. Rather than having all of our employees route their network traffic through a single network appliance, employees connecting to internal apps are connecting to a data center just down the road instead.

As we support a globally-distributed workforce, our security team is committed to protecting our internal applications with the most secure and usable authentication mechanisms. With Cloudflare Access we’re able to rely on the strong two-factor authentication mechanisms of our identity provider, which was much more difficult to do with our legacy VPN.

On-Boarding and Off-Boarding with Confidence

One of the trickiest things for any company is ensuring everyone has access to the tools and data they need — but no more than that. That’s a challenge that becomes all the more difficult as a team scales. As employees and contractors leave, it is similarly essential to ensure that their permissions are swiftly revoked.

Managing these access controls is a real challenge for IT organizations around the world — and it’s greatly exacerbated when each employee has multiple accounts strewn across different tools in different environments. Before using Access, our team had to put in a lot of time to make sure every box was checked.

Now that Cloudflare’s internal applications are secured with Access, on- and offboarding is much smoother. Each new employee and contractor is quickly granted rights to the applications they need, and they can reach them via a launchpad that makes them readily accessible. When someone leaves the team, one configuration change gets applied to every application, so there isn’t any guesswork.

Access is also a big win for network visibility. With a VPN, you get minimal insight into the activity of users on the network: you know their username and IP address, but that’s about it. If someone manages to get in, it’s difficult to retrace their steps.

Cloudflare Access is based on a zero-trust model, which means that every packet is authenticated. It allows us to assign granular permissions via Access Groups to employees and contractors. And it gives our security team the ability to detect unusual activity across any of our applications, with extensive logging to support analysis. Put simply: it makes us more confident in the security of our internal applications.

But It’s Not Just for Us

With the massive transition to a remote work model for many organizations, Cloudflare Access can make you more confident in the security of your internal applications — while also driving increased productivity in your remote employees. Whether you rely on Jira, Confluence, SAP or custom-built applications, it can secure those applications and it can be live in minutes.

Cloudflare has made the decision to make Access completely free to all organizations, all around the world, through September 1. If you’d like to get started, follow our quick start guide here:
Or, if you’d prefer to onboard with one of our specialists, schedule a 30 minute call at this link: calendly.com/cloudflare-for-teams/onboarding?month=2020-03

Work-from-Home Security Advice

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/work-from-home_.html

SANS has made freely available its “Work-from-Home Awareness Kit.”

When I think about how COVID-19’s security measures are affecting organizational networks, I see several interrelated problems:

One, employees are working from their home networks and sometimes from their home computers. These systems are more likely to be out of date, unpatched, and unprotected. They are more vulnerable to attack simply because they are less secure.

Two, sensitive organizational data will likely migrate outside of the network. Employees working from home are going to save data on their own computers, where they aren’t protected by the organization’s security systems. This makes the data more likely to be hacked and stolen.

Three, employees are more likely to access their organizational networks insecurely. If the organization is lucky, they will have already set up a VPN for remote access. If not, they’re either trying to get one quickly or not bothering at all. Handing people VPN software to install and use with zero training is a recipe for security mistakes, but not using a VPN is even worse.

Four, employees are being asked to use new and unfamiliar tools like Zoom to replace face-to-face meetings. Again, these hastily set-up systems are likely to be insecure.

Five, the general chaos of “doing things differently” is an opening for attack. Tricks like business email compromise, where an employee gets a fake email from a senior executive asking him to transfer money to some account, will be more successful when the employee can’t walk down the hall to confirm the email’s validity — and when everyone is distracted and so many other things are being done differently.

Worrying about network security seems almost quaint in the face of the massive health risks from COVID-19, but attacks on infrastructure can have effects far greater than the infrastructure itself. Stay safe, everyone, and help keep your networks safe as well.

Cloudflare During the Coronavirus Emergency

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/cloudflare-during-the-coronavirus-emergency/

Cloudflare During the Coronavirus Emergency

This email was sent to all Cloudflare customers a short while ago

From: Matthew Prince
Date: Thu, Mar 12, 2020 at 4:20 PM
Subject: Cloudflare During the Coronavirus Emergency

We know that organizations and individuals around the world depend on Cloudflare and our network. I wanted to send you a personal note to let you know how Cloudflare is dealing with the Coronavirus emergency.

First, the health and safety of our employees and customers is our top priority. We have implemented a number of sensible policies to this end, including encouraging many employees to work from home. This, however, hasn’t slowed our operations. Our network operations center (NOC), security operations center (SOC), and customer support teams will remain fully operational and can do their jobs entirely remote as needed.

Second, we are tracking Internet usage patterns globally. As more people work from home, peak traffic in impacted regions has increased, on average, approximately 10%. In Italy, which has imposed a nationwide quarantine, peak Internet traffic is up 30%. Traffic patterns have also shifted so peak traffic is occurring earlier in the day in impacted regions. None of these traffic changes raise any concern for us. Cloudflare’s network is well provisioned to handle significant spikes in traffic. We have not seen, and do not anticipate, any impact to our network’s performance, reliability, or security globally.

Third, we are monitoring for any changes in cyberthreats. While we have seen more phishing attacks using the Coronavirus as a lure, we have not seen any significant increase in attack traffic or new threats. Again, our SOC remains fully operational and is continuously monitoring for any new security threats that may emerge.

Finally, we recognize that this emergency has put strain on the infrastructure of companies around the world as more employees work from home. On Monday, I wrote about how we are making our Cloudflare for Teams product, which helps support secure and efficient remote work, free for small businesses for at least the next six months:

https://blog.cloudflare.com/cloudflare-for-teams-free-for-small-businesses-during-coronavirus-emergency/

As the severity of the emergency has become clearer over the course of this week, we decided to extend this offer to help any business, regardless of size. The healthy functioning of our economy globally depends on work continuing to get done, even as people need to do that work remotely. If Cloudflare can do anything to help ensure that happens, I believe it is our duty to do so.

If you are already a Cloudflare for Teams customer, we have removed the caps on usage during this emergency so you can scale to whatever number of seats you need without additional cost. If you are not yet using Cloudflare for Teams, and if you or your employer are struggling with limits on the capacity of your existing VPN or Firewall, we stand ready to help and have removed the limits on the free trials of our Access and Gateway products for at least the next six months. Cloudflare employees around the world have volunteered to run no-cost onboarding sessions so you can get set up quickly and ensure your business’ continuity.

Details: https://developers.cloudflare.com/access/about/coronavirus-emergency/
Sign up for an onboarding session: https://calendly.com/cloudflare-for-teams/onboarding

Thank you for being a Cloudflare customer. These are challenging times but I want you to know that we stand ready to help however we can. We understand the critical role we play in the functioning of the Internet and we are continually humbled by the trust you place in us. Together, we can get through this.


Matthew Prince
Co-founder & CEO
Cloudflare

@eastdakota
@cloudflare

Open sourcing our Sentry SSO plugin

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/open-sourcing-our-sentry-sso-plugin/

Cloudflare Access, part of Cloudflare for Teams, replaces legacy corporate VPNs with Cloudflare’s global network. Using your existing identity provider, Access enables your end users to login from anywhere — without a clunky agent or traffic backhaul through a centralized appliance or VPN.

Today, we are open sourcing a plugin that continues to improve that experience by making it easier for teams to use Cloudflare Access with one of the software industry’s most popular engineering tools, Sentry.

What is Sentry?

Sentry is an application that helps software teams find and diagnose errors in their products. We use Sentry here at Cloudflare. When you encounter an error when using a Cloudflare product, like our dashboard, we log that event. We then use Sentry to determine what went wrong.

Sentry can categorize and roll up errors, making it easy to identify new problems before investigating them with the tool’s event logging. Engineering managers here can use the dashboards to monitor the health of a new release. Product managers often use those reports as part of prioritizing what to fix next. Engineers on our team can dig into the individual errors as they release a fix.

Sentry is available in two forms: a SaaS model and a self-hosted version. Both modes give engineering teams comprehensive insight into the behavior of their deployed applications and the issues their users encounter.

Connecting users to Sentry

Organizations can deploy the self-hosted version on-premise or in a cloud environment they control. However, they still need to create a secure way to allow their teams to connect to the app.

Historically, most opt for a VPN to solve that challenge. End users outside of the office need to configure a VPN client on their laptop and try to log in with credentials that are often different from the ones used for their corporate SSO. Administrators had to make sure their VPN appliance could scale to the handful of users working remotely; with most people in the office, the VPN was a serious inconvenience, but only for that smaller set of users.

Over the last few years, that group of users working outside of the office has grown. The outbreak of COVID-19 is accelerating that growth significantly. Users are working from BYOD laptops, mobile phones, and in unfamiliar networks that all struggle with a VPN. Even worse, a VPN has a load limit because it relies on an actual appliance (whether virtual or physical hardware). Organizations can attempt to stress test their VPN, but will always have a limit that administrators need to continuously monitor.

Cloudflare Access gives administrators the scale of Cloudflare’s global network and provides end users with a SaaS-like experience that just works from any device or network. When teams secure Sentry with Cloudflare Access, end users visit the hostname of the application, login with their identity provider, and are redirected from Cloudflare’s edge to the app if they have permission to reach it.

However, in the case of an app like Sentry, end users need to login one more time to the application itself. That small step adds real friction, which Access can now solve through this open source plugin.

JWT Security with Cloudflare for Teams

When a user logs in to their identity provider to reach an application protected by Access, Cloudflare signs a JSON Web Token (JWT).

Cloudflare Access uses that JWT, and its contents, to confirm a user’s identity before allowing or denying access to sensitive resources. Cloudflare securely creates these tokens through the OAuth or SAML integration between Cloudflare Access and the configured identity provider. Each JWT consists of three Base64-URL strings: the header, the payload, and the signature.

  • The header defines the cryptographic operation used to sign the data in the JWT.
  • The payload consists of name-value pairs for at least one and typically multiple claims, encoded in JSON. For example, the payload can contain the identity of a user.
  • The signature allows the receiving party to confirm that the payload is authentic.

The token is signed using a public/private key pair and saved in the user’s browser. Inside of that token, we store the following details in addition to some general metadata (an illustrative decoded payload follows the list):

  • User identity: typically the email address of the user retrieved from your identity provider.
  • Authentication domain: the domain that signs the token. For Access, we use “example.cloudflareaccess.com” where “example” is a subdomain you can configure.
  • Audience: The domain of the application you are attempting to reach.
  • Expiration: the time at which the token is no longer valid for use.
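
To make those fields concrete, here is an illustrative decoded payload in which every value is a placeholder (the exact claim names and values in your tokens depend on your Access configuration): the email claim carries the user identity, iss the authentication domain, aud the audience, and iat/exp the issue and expiry times.

```
{
  "email": "jdoe@example.com",
  "iss": "https://example.cloudflareaccess.com",
  "aud": ["app.example.com"],
  "iat": 1594000000,
  "exp": 1594086400
}
```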

When a request is made to an application behind Access, Cloudflare looks for the presence of that token. If available, we decrypt it, validate its authenticity, and then read the payload. If the payload contains information about a user who should be able to reach the application, we send their request to an origin.
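
For teams building their own integration, the signing keys are published by the authentication domain itself. A minimal sketch, assuming the same example.cloudflareaccess.com team domain used above (jq is optional and only pretty-prints the response):

```
# Fetch the public keys Cloudflare Access is currently using to sign tokens for this team domain.
# An integration verifies the JWT signature against these keys, then checks the aud and exp claims
# before trusting the identity in the payload.
curl -s https://example.cloudflareaccess.com/cdn-cgi/access/certs | jq .
```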

The Sentry plugin takes that JWT and reuses it, instead of prompting the visitor to login again with separate credentials. The plugin parses the user identity, checks it against the directory of users in Sentry, and maps that token to a Sentry profile and its assigned permissions.

All of this is seamless to the end user and takes just a few milliseconds. The user is instantly redirected to the application, fully authenticated, and only needs to remember their SSO login. Administrators now have one fewer set of credentials to worry about managing and the associated onboarding and offboarding.

Building your own SSO plugin

We believe that the JSON Web Token is a simple and efficient method for sending identity. Applications that use JWTs for authorization only need to support the JWT standard, instead of attempting to integrate with different versions of SAML or other formats like OIDC and OAuth. A JWT is also information dense and built in a format, JSON, that can be easily parsed by the target application.

Some products, like Redash, already have native support for JWT integration. The Sentry plugin we built joins our Atlassian plugin: both extend support to those apps, and both serve as examples that can be used for integration with other products. Other teams, like Auth0, have also published materials to add JWT integration to legacy apps.

What’s next?

Cloudflare Access is available on every Cloudflare account and 5 free seats are included by default. You can follow these instructions to get started.

If you are a small business, you can sign up for the Cloudflare for Teams program right now at the link below.

https://www.cloudflare.com/smallbusiness/

How Replicated Developers Develop Remotely

Post Syndicated from Guest Author original https://blog.cloudflare.com/how-replicated-secured-our-remote-dev-environment-with-cloudflare-access/

This is a guest post by Marc Campbell and Grant Miller, co-founders of Replicated.

Replicated is a 5-year old infrastructure software company working to make it easy for businesses to install and operate third party software. We don’t want you to have to send your data to a multi-tenant SaaS provider just to use their services. Our team is made up of twenty-two people distributed throughout the US. One thing that’s different about Replicated is our developers don’t actually store or execute code on their laptops; all of our development happens on remote instances in the cloud.

Our product, KOTS, runs in Kubernetes and manages the lifecycle of 3rd-party applications in the Kubernetes cluster. Building and validating the product requires a developer to have access to a cluster. But as we started to hire more and more engineers it became ridiculous to ask everyone to run their own local Kubernetes cluster. We needed to both simplify and secure our setup to allow every engineer to run their environment in the cloud, and we needed to do it in a way which was seamless and secure.

Previous Dev Environments with Docker for Mac

We started with each developer building their own local environments, using whatever tools they were comfortable with. Our first attempt to build a standard development environment that works for our engineering team was to use Docker for Mac and its built-in Kubernetes distribution. We would buy the best MacBook Pros available (16 GB, then 32 GB, now 64 GB), and everyone would have the entire stack running on their laptop.

This worked pretty well, except that there was a set of problems our engineers would continue to hit: battery life was terrible because of the constant CPU usage, Docker for Mac was different from “real Kubernetes” in some meaningful ways, and Docker for Mac’s built-in Kubernetes would sometimes just stop working, forcing the developer to uninstall and reinstall the entire stack. It was miserable.

We’d lose hours every week to engineers troubleshooting their local environments. When a front-end engineer (who wasn’t expected to be a Kubernetes expert) had issues, they’d need to pair with a backend engineer to get help, consuming not just one but two people’s valuable time.

We needed something better.

To The Cloud

Rather than running Docker locally, we now create an instance in Google Cloud for each developer. These instances have no public IP and are based on our machine image, which has all of our prerequisites installed. The image includes many tools, among them a Kubernetes distribution that’s completely local to the server. We run a docker registry in each developer’s cluster as a cluster add-on. The cloud server has a magical tool called cloudflared running on it that replaces all of the network configuration and security work we would otherwise have had to do.

Cloudflared powers Argo Tunnel. When it starts, cloudflared creates four secure HTTP/2 tunnels to two Cloudflare data centers. When a request comes in for a development machine, Cloudflare routes that request over one of those tunnels directly to the machine running that developer’s environment. For example, my hostname is “marc.repl.dev”. Whenever I connect to that, from anywhere on earth, Cloudflare routes me securely to my development environment. If I need to spin up a new development environment, there is no configuration to do: whichever machine is running cloudflared with the appropriate credentials will receive the traffic. This all works on any cloud and in any cloud region.

This configuration has several advantages over a traditional deployment. For one, the server does not have a public IP and we don’t need to have any ports open in the Google Load Balancer, including for SSH. The only way to connect to these servers is through the Argo Tunnel, secured by Cloudflare Access. Access provides a BeyondCorp-style method of authentication, which ensures that the environment can be reached from anywhere in the world without the use of a VPN.

BeyondCorp is an elaborate way of saying that all our authentication is managed in a single place. We can write a policy which defines which machines a user should have access to and trust that it will be applied everywhere. This means that rather than managing SSH certificates, which are long-lived and hard to revoke, we can allow developers to log in with the same Google credentials we use everywhere else! Should, knock on wood, a developer leave, we can revoke those credentials instantly; no more worrying about what public keys they still might have lying around.

What happens on the developer’s machines?

Through Argo Tunnel and Access we now have the ability to connect to our new development instances, but that isn’t enough to allow our engineers to work. They need to be able to write and execute code on that remote machine in a seamless way. To solve that problem we turned to the Remote SSH extension for VS Code. In the words of the documentation for that project:

The Visual Studio Code Remote SSH extension allows you to open a remote folder on any remote machine, virtual machine, or container with a running SSH server and take full advantage of VS Code’s feature set. Once connected to a server, you can interact with files and folders anywhere on the remote filesystem.

With Remote SSH, VS Code seamlessly reads and writes files to the developer’s remote server. When a developer opens a project, it feels local and seamless, but everything is authenticated by Access and proxied through Argo over SSH. Our developers can travel anywhere in the world, and trust their development environment will be accessible and fast.

Locally, a developer has a .ssh/config file to define local ports to forward through the SSH connection to a port that’s only available on the remote server. For example, my .ssh/config file contains:

```
Host marc.repl.dev
  HostName marc.repl.dev
  User marc
  LocalForward 8080 127.0.0.1:30080
  LocalForward 8005 127.0.0.1:30015
```

To build and execute code, our developers open the embedded terminal in VS Code. This automatically connects them to the remote server. We use skaffold, a Kubernetes CLI for local development. A simple skaffold dev starts the stack on their remote machine, which feels local because it’s all happening inside VS Code. Once it’s started, the developer can access localhost in their browser to view the results of their work by visiting http://localhost:8080. The SSH config above will forward this traffic to port 30080 on the remote server. Port 30080 on the remote server is a NodePort configured in the local cluster that points at the web server. All of our APIs and web servers have static NodePorts for local development environments.
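
For readers less familiar with Kubernetes, the sketch below shows roughly what such a Service could look like in the developer's local cluster. Everything except the fixed nodePort value is a hypothetical name chosen for illustration.

```
apiVersion: v1
kind: Service
metadata:
  name: web                 # hypothetical Service name for the dev web server
spec:
  type: NodePort
  selector:
    app: web                # hypothetical label on the web server pods
  ports:
    - port: 80              # in-cluster port
      targetPort: 3000      # hypothetical port the web server container listens on
      nodePort: 30080       # static port matched by "LocalForward 8080 127.0.0.1:30080"
```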

Now, when a developer starts at Replicated, their first day (or even week) isn’t consumed by setting up the development environment; now it takes less than an hour. We have a Terraform script that makes it easy to replace any one of our developers’ machines in seconds.

The Aftermath

All developers at Replicated have now been using this environment for nine months. We haven’t eliminated the problems that occasionally come up where Kubernetes isn’t playing nicely, or Docker uses too much disk space. However, these problems do occur much less frequently than they did on Docker for Mac. We now have two new options that weren’t easily available when everyone ran their environment locally.

First, a backend engineer can just SSH through the Argo Tunnel into another developer’s server to troubleshoot and help. Every development environment has become a collaborative place, which is great when two engineers aren’t in the same room. Also, we’re less attached to our development environments: if my server isn’t working properly for unknown reasons, instead of troubleshooting it for hours, I can delete it and get a new, clean one.

Some additional benefits include:

  • Developers can have multiple envs easily (to try out a new k8s version, for example)
  • Battery life is awesome again on laptops
  • We don’t need the biggest and most powerful laptops anymore (Hello Chromebooks and Tablets)
  • Developers can choose their local OS and environment (macOS, Windows, Linux), since all are supported as long as SSH is available.
  • Code does not live on a developer laptop; it doesn’t travel with them to coffee shops and other insecure places. This is great for security purposes: a lost laptop no longer means the codebase is out there with it.

How To

Beyond just telling you what we did, we’d like to show you how to replicate it for yourself! This assumes you have a domain which is already configured to use Cloudflare.

  1. Create an instance to represent your development environment in the cloud of your choice.

```
gcloud compute instances create my-dev-universe
```

2. Configure your instance to run cloudflared when it starts up, and give it a helpful hostname like dev.mysite.com.

```
# Install cloudflared (shown here via its Homebrew tap; any supported install method works)
brew install cloudflare/cloudflare/cloudflared

# Point cloudflared at the hostname for this instance, authenticate it, and install it as a service
printf "hostname: dev.mysite.com\n" > ~/.cloudflared/config.yml
cloudflared login
sudo cloudflared service install
```

3. Write an Access policy to allow only you to access your machine.

4. Configure your local machine to SSH via Cloudflare:

```
# Install cloudflared locally as well, then let it append an SSH config stanza for the host
brew install cloudflare/cloudflare/cloudflared
cloudflared access ssh-config --hostname dev.mysite.com --short-lived-cert >> ~/.ssh/config
```

5. Install VS Code and the Remote Development extension pack

6. In VS Code select ‘Remote-SSH: Connect to Host…’ from the Command Palette and enter your username and the hostname you configured (for example, yourusername@dev.mysite.com). A browser window will open where you will be prompted to login with the identity provider you configured with Cloudflare.

7. You’re done! If you select File > Open, you will see files on your remote machine. The embedded terminal will also execute code on that remote machine.

8. Once you want a production-ready setup for your team, take a look at the instructions we share with our own team.

Conclusion

There is no doubt that the world is becoming more Internet-connected, and that deployment environments are becoming more complex. It stands to reason that it’s only a matter of time before all software development happens through and in concert with the Internet.

While it might not be the best solution for every team, it has resulted in a dramatically better experience for Replicated and we hope it does for you as well.

How to get started

Replicated develops remotely with Cloudflare Access, a remote access gateway that helps you secure access to internal applications and infrastructure without a VPN.

Effective until September 1, 2020, Cloudflare is making Access and other Cloudflare for Teams products free to small businesses. We’re doing this to help ensure that small businesses that implement work from home policies in order to combat the spread of the Coronavirus (COVID-19) can ensure business continuity.

You can learn more and apply at cloudflare.com/smallbusiness now.

How to Build a Highly Productive Remote Team (or Team of Contractors) with Cloudflare for Teams

Post Syndicated from Lane Billings original https://blog.cloudflare.com/how-to-build-a-highly-productive-remote-team-or-team-of-contractors-with-cloudflare-for-teams/

Much of IT has been built on two outdated assumptions about how work is done. First, that employees all sit in the same building or branch offices. Second, that those employees will work full-time at the same company for years.

Both of these assumptions are no longer true.

Employees now work from anywhere. In the course of writing this blog post, I opened review tickets in our internal JIRA from my dining table at home. I reviewed internal wiki pages on my phone during my commute on the train. And I spent time reviewing some marketing materials in staging in our CMS.

In a past job, I would have suffered trying to connect to these tools through a VPN. That would have slowed down my work on a laptop and made it nearly impossible to use a phone to catch up on my commute.

The second challenge is ramp-up. I joined Cloudflare a few months ago. As a member of the marketing team, I work closely with our product organization and there are several dozen tools that I need to do that.

I’m hardly alone. The rise of SaaS and custom internal applications means that employees need access to all kinds of tools to effectively do their job. The increasing prevalence of contractors and part-time employees is compounding the challenge of how to get employees productive. On-boarding (and off-boarding) is now not an occasional thing, but has become a regular rhythm of how companies operate.

All these factors are combining to cause a bigger question: how can I make teams that reflect the new modern workforce — often remote, and increasingly not the traditional full time, permanent employee — as productive as possible?

Step one: put the VPN on a performance improvement plan

We’ve blogged extensively about our own troubles with VPNs. As we became a complex, multinational organization made up of contractors and full-time employees, the private network we deployed to host internal applications began to slow our teams down. We built Cloudflare Access to address our own challenges with the VPN, and since then hundreds of customers have used it to accelerate access for their remote workforces.

India’s largest B2B e-commerce platform, Udaan, is one example. They used Access to avoid ever having to deploy a VPN in the first place. As Udaan grew to new locations around the world, their IT team needed fast ways to give access to the thousands of users — including contractors, employees, interns and vendors — that needed to connect to their internal systems.

Now that their internal applications are protected with Cloudflare, Udaan’s IT team doesn’t need to spend time manually onboarding contractors and issuing them corporate accounts. And logging into Udaan’s tools, whether they’re SaaS apps or private applications, looks and feels the same every time, for every user.

“VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them,” said Amod Malviya, Cofounder and CTO, Udaan. “Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0.”

Cloudflare Access can help speed up remote teams by replacing VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams can deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.  Remote teams get work done faster without having to deal with a VPN client, and IT teams spend less time troubleshooting their VPN issues.

Step two: make tools easier to find

High performing remote teams get new employees started fast. That starts with day-one access to the right tools. If you’re like me and you recently joined a new organization, you know how hard it can be to find the right applications you need to do your job. I am drowning in an ocean of new productivity tools.

App launchpads were designed to be a life raft in that ocean of tools. They make discovering apps easier by bringing every application a user can access into one easy, graphical dashboard. But they’re hard for IT teams to customize for different types of users with different permission levels (intern/contractor/full-time), and they often don’t cover every kind of app (internal/SaaS).

Cloudflare Access’ App Launch is a dashboard for all the applications protected by Access. Once enabled, end users can log in and connect to every app behind Access with a single click. IT teams setting up contractors can send them a custom launchpad of everything they need to access on day one.

When administrators secure an application with Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

To check identity, Access relies on the identity provider that the team already uses. Access integrates with providers like OneLogin, Okta, AzureAD, G Suite and others to determine who a user is. If the user has not logged in yet, Access will prompt them to do so at the identity provider configured.
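
For teams that prefer automation over the dashboard, the same pattern can be expressed through Cloudflare's API. The sketch below is hedged: the account ID, API token, app ID, and wiki.example.com hostname are placeholders, and the exact request fields should be checked against the current Access API documentation.

```
# Register an internal app with Access (placeholders throughout)
curl -s -X POST "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/access/apps" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name":"Internal wiki","domain":"wiki.example.com","session_duration":"24h"}'

# Attach a policy allowing anyone who authenticates with an @example.com identity
curl -s -X POST "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/access/apps/$APP_ID/policies" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name":"Allow employees","decision":"allow","include":[{"email_domain":{"domain":"example.com"}}]}'
```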

Step 3: fast-track your contractors

Modern remote teams are made up of whatever combination of people can get online and get the work done. That means many different kinds of users are working together in the same tools: full-time employees, contractors, freelancers, vendors, and partners.

IT models predicated on full-time workforces imagined identifying users as a straight-line process, where users could be identified and validated against one source of truth: the corporate directory. The old model breaks down when users join organizations temporarily to work on isolated projects, and you need to figure out how to authenticate them based on their organizational identity, not yours.

In response, many organizations deploy VPNs to temporary users, scotch-tape together federations between multiple SSO providers, or even have administrators spend hours issuing contracted users new corporate identities to complete one-off projects.

Meanwhile, contractors waste valuable cycles getting set up with the tools they need, and feel like second-class citizens in the IT hierarchy.

Cloudflare Access prevents contractor onboarding slowdown by simultaneously integrating with multiple identity providers, including popular services like Gmail or GitHub that do not require corporate subscriptions.

External users log in with these accounts and still benefit from the same ease of use available to internal employees. Meanwhile, administrators avoid the burden of legacy deployments that require onboarding and offboarding new accounts for each project.

How to get started – at no cost

Cloudflare for Teams lets your team use all the same features to stay productive from anywhere in the world.

Through September 1, 2020, we're making Cloudflare for Teams products free for small businesses. We're doing this so that small businesses that implement work-from-home policies in order to combat the spread of the Coronavirus (COVID-19) can maintain business continuity.

You can learn more and apply at cloudflare.com/smallbusiness now.

Cloudflare for Teams Free for Small Businesses During Coronavirus Emergency

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/cloudflare-for-teams-free-for-small-businesses-during-coronavirus-emergency/

Cloudflare for Teams Free for Small Businesses During Coronavirus Emergency

There are a lot of people and businesses worldwide that are currently suffering, so I don’t want to waste any time in getting to the point.

Beginning today, we are making our Cloudflare for Teams products free to small businesses around the world. Teams enables remote workers to operate securely and easily. We will continue this policy for at least the next 6 months. We're doing this so that small businesses that implement work-from-home policies in order to combat the spread of the virus can maintain business continuity. You can learn more and apply at: https://www.cloudflare.com/smallbusiness

We've also helped launch an online hub where, during the Coronavirus emergency, small businesses can find technology services from multiple companies that are available for free or at a substantial discount: https://openforbusiness.org

To understand more about why we’re doing this, read on.

The IT Strain of WFH

We have a team at Cloudflare carefully monitoring the spread of SARS-CoV-2, the virus responsible for the COVID-19 respiratory disease. Like at many other companies, we have heeded the advice of medical professionals and government agencies and are increasingly allowing employees to work from home in impacted regions in order to hopefully help slow the spread of the disease.

While this is prudent advice to help control the spread of the disease, employees working from home put a different load on a company’s IT resources than if they are working from the office. In-person meetings are instead held online, so you need to ensure your video conferencing systems are up for the task. Critical documents can’t be signed in person, so electronic signature systems need to be in place. There’s an increased importance on online chat and other communication tools.

And, importantly, the systems that ensure online authorized access to these tools can no longer use the physical location of an employee as evidence they are authorized to use a service.

WFH Strains IT Security

We’ve seen some large companies struggle in ways both serious and silly with increased loads on their traditional firewall and VPN infrastructures over the last week.


Large organizations, undoubtedly, can work through these issues by either increasing the number of licenses for their firewalls and VPNs or moving to a more modern, cloud-based solution. What’s been concerning to us is the number of small businesses that don’t have the ability to quickly provision the resources they need to support their employees when they’re not physically in the office.


What We’re Seeing

The story that hit home for me came last week when I heard about a small business that had reached out to us. The company has approximately 100 employees in a region hard-hit by viral infections, and thousands of partners who use their platform. They, responsibly, allowed their employees to work from home. Unfortunately, their small office VPN was limited in both the number of simultaneous users it supported and its capacity. Their outsourced IT team said getting a new one up and running would take at least a week. And, at a time when travel bookings were already waning, the owner was legitimately concerned that his business would not survive this crisis.

I happened to be sitting with a group of our sales engineers over lunch last week when I heard this story. They were proud that we’d been able to offer Cloudflare for Teams as a solution to quickly replace the travel agency’s VPN. And that’s great—the owner of the travel agency was thrilled—but it still felt like we should be doing more.

I spent some time digging into recent inquiries for Cloudflare for Teams coming from small businesses and found that the travel agency was hardly alone. Small businesses around the world are struggling to maintain some semblance of business continuity as increasingly their employees aren’t physically coming into the office. While firewalls and VPNs were hardly their only concern, the limitations they imposed were becoming real threats to business continuity.

The Fragility of Small Businesses

Small businesses are the lifeblood of most countries’ economies. In the United States, for instance, small businesses employ half of all non-government employees. They are responsible for the creation of two-thirds of net new jobs. Unfortunately, they are much more vulnerable to even minor interruptions in their operations. Oftentimes their margins are so thin that any significant new expense or reduction in revenue can cause them to fail.

Today Cloudflare makes most of our money selling to large enterprises. But serving small businesses has always been in our DNA. We began as a small business ourselves and spent our early years providing the tools previously available only to the big guys to every individual developer and small business. We wouldn’t be the company we are today if small businesses hadn’t trusted us in our early years.

So while the impact of the Coronavirus is being felt by businesses large and small, I am worried the impact on small businesses could be especially devastating. Small businesses have always been there for us, and we want to be there for them during this time of increased strain. So today we're announcing two initiatives:

Free Cloudflare for Teams

First, we are making Cloudflare for Teams available to small businesses worldwide for free for at least the next six months. We will evaluate the situation in six months and make a determination about whether we will extend the length of the free offer.

We are using the US Small Business Administration’s definition of a small business to define what businesses qualify, but the offer is not limited to US companies. The Coronavirus is an issue for small businesses globally and we have an extensive global network that can serve customers worldwide.

To apply, visit: https://www.cloudflare.com/smallbusiness

Our team is standing by and will move quickly to evaluate applications.

Moreover, since small businesses often don't have sophisticated IT teams, Cloudflare team members from all over the world have volunteered to host onboarding sessions to help small businesses get set up quickly and correctly. We've worked hard to make Cloudflare for Teams easy for any business to use, but we understand that it can still be intimidating if your expertise isn't IT. Our team stands ready to help.

The Open for Business Hub

Second, we realize that Cloudflare for Teams solves only one little part of a small business’ challenges as their employees increasingly work from home. They also need communication, video conferencing, collaboration, document management, and other IT resources. We don’t provide them all, but we know the leaders at a lot of companies who do.

I spent the weekend talking with other companies that I admire and that provide cloud-based solutions that could help solve the challenges many businesses are currently facing. Many shared the same concerns that we had about the fragility of small businesses and wanted to help. Together we are helping launch a hub of resources for small businesses working to ensure business continuity over the months to come: https://openforbusiness.org/

The hub features free and deeply discounted services for small businesses from several technology companies. And I expect more will step up to this challenge over the days to come. To request inclusion, companies can email: [email protected].

We’re In This Together

The news of the spread of the Coronavirus has made it clear it is no longer business as usual for any business worldwide. Every responsible business leader spent the weekend worried about how they’re going to get through the weeks and months ahead: ensuring their employees’ safety, delivering for their customers, and protecting their business. I believe we have a duty to step up where we can to help each other out during times of stress like the one we’re in. Together, we can get through this.

How Cloudflare keeps employees productive from any location

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/how-cloudflare-keeps-employees-productive-from-any-location/

How Cloudflare keeps employees productive from any location

Cloudflare employs more than 1,200 people in 13 different offices and maintains a network that operates in 200 cities. To keep that team connected, we used to suffer through a traditional corporate VPN that backhauled traffic through a physical VPN appliance. It was, frankly, horrible to work with as a user or as an IT person.

With today's mix of on-prem, public cloud, and SaaS and a workforce that needs to work from anywhere, be it a coffee shop or home, that model is no longer sustainable. As we grew in headcount, we were spending too much time resolving VPN helpdesk tickets. As offices around the world opened, we could not ask our workforce to sit and wait while every connection was backhauled through a central location.

We also had to be ready to scale. Some organizations are currently scrambling to load test their own VPN in the event that their entire workforce needs to work remotely during the COVID-19 outbreak. We could not let a single physical appliance constrain our ability to deliver 26M Internet properties to audiences around the world.

To run a network like Cloudflare, we needed to use Cloudflare’s network to stay fast and secure.

We built Cloudflare Access, part of Cloudflare for Teams, as an internal project several years ago to start replacing our VPN with a faster, safer alternative that made internal applications, no matter where they live, seamless for our users.

To address the scale challenge, we built Cloudflare Access to run on Workers, Cloudflare’s serverless platform. Each data center in the Cloudflare network becomes a comprehensive identity proxy node, giving us the scale to stay productive from any location – and to do it for our customers as well.

Over the last two years, we’ve continued to expand its feature set by prioritizing the use cases we had to address to remove our reliance on a VPN. We’re excited to help customers stay online and productive with the same tools and services we use to run Cloudflare.

How does Cloudflare Access work?

Cloudflare Access is one-half of Cloudflare for Teams, a security platform that runs on Cloudflare's network and focuses on keeping users, devices, and data safe without compromising experience or performance. We built Cloudflare Access to solve our own headaches with private networks as we grew from a team concentrated in a single office to a globally distributed organization.

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.
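
For teams that prefer to script this setup, the same application and rule can be created through the Cloudflare API. The sketch below is based on my reading of the Access API reference and uses placeholder values ($ZONE_ID, $APP_ID, $CF_EMAIL, $CF_API_KEY, wiki.example.com); treat the exact paths and field names as assumptions to verify against the current docs.

# create an Access application for an internal tool (placeholder hostname)
$ curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/access/apps" \
    -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_API_KEY" \
    -H "Content-Type: application/json" \
    --data '{"name":"Wiki","domain":"wiki.example.com","session_duration":"24h"}'

# attach a policy that allows anyone who authenticates with an @example.com address
$ curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/access/apps/$APP_ID/policies" \
    -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_API_KEY" \
    -H "Content-Type: application/json" \
    --data '{"name":"Allow employees","decision":"allow","include":[{"email_domain":{"domain":"example.com"}}]}'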

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.

To defend against attackers addressing IPs directly, Argo Tunnel helps lock down the origin interface and forces every request through Cloudflare Access. With Argo Tunnel in place, and firewall rules preventing inbound traffic, no request can reach those IPs without first hitting Cloudflare, where Access can evaluate the request for authentication.
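
As a minimal sketch of what that outbound-only connection looks like in practice (the hostname and local port below are placeholders), an administrator authenticates cloudflared once and then points a tunnel at the origin service:

# authenticate cloudflared against the Cloudflare account (opens a browser window)
$ cloudflared login

# connect an internal app listening on localhost to a hostname protected by Access
$ cloudflared tunnel --hostname app.example.com --url http://localhost:8000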

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Our team members SSO into the Atlassian suite with one click

We rely on a set of productivity tools built by Atlassian, including Jira and Confluence. We secure them with Cloudflare Access.

In the past, when our team members wanted to reach those applications, they first logged into the VPN with a separate set of credentials unique to their VPN client. They navigated to one of the applications, and then broke out a second set of credentials, specific to the Atlassian suite, to reach Jira or Wiki.

All of this was clunky, reliant on the VPN, and not integrated with our SSO provider.

We decided to put the Atlassian suite behind Access and to build a plugin that could use the login from Access to SSO the end user into the application. Users log in with their SSO provider and are instantly redirected into Jira or Wiki or Bitbucket, authorized without managing extra credentials.

We selected Atlassian because nearly every member of our global team uses the product each day. Saving the time to input a second set of credentials, daily, has real impact. Additionally, removing the extra step makes reaching these critical tools easier from mobile devices.

When we rolled this out at Cloudflare, team members had one fewer disruption in their day. We all became accustomed to it. We only received real feedback when we disabled it, briefly, to test a new release. And that response was loud. When we returned momentarily to the old world of multiple login flows, we started to appreciate just how convenient SSO is for a team. The lesson motivated us to make this available, quickly, to our customers.

You can read more about using our Atlassian plugin in your organization in the announcement here.

Our engineers can SSH to the resources they need

When we launched Cloudflare Access, we started with browser-based applications. We built a command-line tool to make CLI operations a bit easier, but SSH connections still held us back from killing the VPN altogether.
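
For anyone curious what that command-line flow looks like, here is a rough sketch against a placeholder application (wiki.example.com): cloudflared fetches a token after an SSO login, and the token is then passed as a header on scripted requests.

# log in once through the browser; cloudflared stores a token for this application
$ cloudflared access login https://wiki.example.com

# reuse the stored token for scripted calls against the protected app
$ curl -H "cf-access-token: $(cloudflared access token --app=https://wiki.example.com)" https://wiki.example.com/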

To solve that challenge, we released support for SSH connections through Cloudflare Access. The feature builds on top of our Argo Tunnel and Argo Smart Routing products.

Argo Smart Routing intelligently routes traffic across Cloudflare's network, so that our engineers can connect to any data center in our fleet without suffering from Internet congestion. The Argo Tunnel product creates secure, outbound-only connections from our data centers back to our network.

Team members can then use their SSH client to connect without any special wrappers or alternate commands. Our command-line tool, `cloudflared`, generates a single config file change and our engineers are ready to reach servers around the world.
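
As a sketch of that single config change (vm.example.com and the path to cloudflared are placeholders), cloudflared prints the stanza to add to ~/.ssh/config so that SSH connections are proxied through Access:

# print the SSH config stanza for a host protected by Access
$ cloudflared access ssh-config --hostname vm.example.com

# the generated stanza looks roughly like this once appended to ~/.ssh/config
Host vm.example.com
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h

# after that, a normal SSH command just works; the SSO prompt is handled in the browser
$ ssh user@vm.example.com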

We started by making our internal code repositories available in this flow. Users login with our SSO and can pull and submit new code without the friction of a private network. We then expanded the deployment to make it possible for our reliability engineering team to connect to the data centers that power Cloudflare’s network without a VPN.

You can read more about using our SSH workflow in your organization in the post here.

We can onboard users rapidly

Cloudflare continues to grow as we add new team members in locations around the world. Keeping a manual list of bookmarks for new users no longer scales.

With Cloudflare Access, we have the pieces that we need to remove that step in the onboarding flow. We released a feature, the Access App Launch, that gives our users a single location from which they can launch any application they should be able to reach with a single click.

For administrators, the App Launch does not require additional configuration for each app. The dashboard reads an organization’s Access policies and only presents apps to the end user that they already have permission to reach. Each team member has a personalized dashboard, out of the box, that they can use to navigate and find the tools they need. No onboarding sessions required.

You can read more about using our App Launch feature in your organization in the post here.

Our security team can add logging everywhere with one-click

When users leave the office, security teams can lose a real layer of a defense-in-depth strategy. Employees do not badge into a front desk when they work remotely.

Cloudflare Access addresses remote work blindspots by adding additional visibility into how applications are used. Access logs every authentication event and, if enabled, every user request made to a resource protected by the platform. Administrators can capture every request and attribute it to a user and IP address without any code changes. Cloudflare Access can help teams meet compliance and regulatory requirements for distributed users without any additional development time.
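
Those logs can also be pulled programmatically. The sketch below assumes an account-level API endpoint for Access request logs; the exact path is my best reading of the API reference and should be verified before relying on it.

# fetch recent Access authentication events for the account (placeholder credentials)
$ curl -s "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/access/logs/access_requests" \
    -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_API_KEY"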

Our Security team uses this data to audit every request made to internal resources without interrupting any application owners.

You can read more about using our per-request logging in your organization in the post here.

How to get started

Your team can use all of the same features to stay online and secure from any location. To find out more about Cloudflare for Teams, visit teams.cloudflare.com.

If you’re looking to get started with Cloudflare Access today, it’s available on any Cloudflare plan. The first five seats are free. Follow the link here to get started.
Finally, need help getting set up? A quick start guide is available here.

Seamless remote work with Cloudflare Access

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/seamless-remote-work-with-cloudflare-access/

Seamless remote work with Cloudflare Access

The novel coronavirus is actively changing how organizations work in real-time. According to Fortune, the virus has led to the “world’s largest work-from-home experiment.” As the epidemic crosses borders, employees are staying home and putting new stress on how companies manage remote work.

This is only accelerating an existing trend, however. Remote work has gained real traction in the last decade and Gartner projects that it will only continue. Teams that move to a distributed model, though, tend to do so slowly. When those timelines are accelerated, IT and security administrators need to be able to help their workforce respond without disrupting their team members.

Cloudflare Access can help teams migrate to a model that makes it seamless for users to work from any location, or any device, without the need for lengthy migrations or onboarding sessions. Cloudflare Access can be deployed in less than one hour and bring SaaS-like convenience and speed to the self-hosted applications that previously lived behind a VPN.

Leaving the castle-and-moat

When users share a physical space, working on a private network is easy. Users do not need clunky VPN clients to connect to the resources they need. Team members physically sit close to the origin servers and code repositories that power their corporate apps.

In this castle-and-moat model, every team member is assumed to be trusted simply by their presence inside of the walls of the office. They can silently attempt to connect to any resource without any default checks. Administrators must build complex network segmentation to avoid breaches and logging is mostly absent.

This model has begun to fall apart for two reasons: the shift to cloud-hosted applications and the distribution of employees around the world.

The first trend, cloud-hosted applications, shifts resources outside of the castle-and-moat. Corporate apps no longer live in on-premise data centers but operate from centralized cloud providers. Those environments can sit hundreds or thousands of miles away from users, slowing down the connections to the applications hosted in those providers.

The second shift, users working outside of the office or from branch offices, introduces a performance challenge in addition to a security concern. Organizations need to poke holes in their perimeter to allow users to connect back into their private network before sending those users on to their target destination.

The spread of the coronavirus has accelerated the trend of users working away from the office. Remote workers are putting new strain on the VPN appliances that sit in corporate headquarters, and that adds to the burden of IT teams attempting to manage a workplace shift that is happening much faster than planned.

Cloudflare Access

Cloudflare Access is one-half of Cloudflare for Teams, a security platform that runs on Cloudflare's network and focuses on keeping users, devices, and data safe without compromising performance. We built Cloudflare Access to solve our own headaches with private networks as we grew from a team concentrated in a single office to a globally distributed organization.

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

Work from any device

The coronavirus is not only changing where employees work, but also the devices they use to do their work. Digitimes reports that the demand for tablets continues to grow as workers find alternatives to the desktops sitting in corporate offices, a trend they attribute to the rise in cases of coronavirus and increasing percentages of employees working outside of the office.

Tablets and other mobile devices introduce new challenges for teams. Users need to install and configure a VPN profile to connect, if they can connect at all.

Cloudflare Access offers an alternative that requires no user action or IT administration. End users can log in and reach their corporate apps from any device, no client or agent required.

Rapid remote development

Working remotely can be difficult for users whose jobs live in browser-based applications. It becomes much more difficult for engineers and developers who need to do their work over RDP or SSH.

In the event that teams need to connect to the desktops back inside of the office, Access also supports RDP connections. Team members can reach desktops over Cloudflare’s global network, reducing the latency of traditional VPN-based RDP clients. Organizations do not need to deploy new credentials or run the risk of leaving remote desktops open to the Internet. Cloudflare Access integrates with a team’s identity provider to bring SSO login to remote desktops.
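
In rough terms (the hostname and local port are placeholders), the client side of that RDP flow uses cloudflared to open a local listener that forwards traffic through Access:

# start a local listener that forwards RDP traffic through Cloudflare Access
$ cloudflared access rdp --hostname desktop.example.com --url rdp://localhost:3389

# then point the RDP client at localhost:3389; the SSO prompt happens in the browser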

Cloudflare Access also includes support for native SSH workflows. With Access, developers and engineers can connect over SSH to the code repositories or build systems they need to stay productive. Users can connect remotely, from low-end devices, to powerful servers and machines hosted in cloud environments.

Additionally, with the SSH feature in Cloudflare Access, organizations can replace the static SSH keys that live on user devices with short-lived certificates generated when a user logs in to Okta, AzureAD, or any other supported identity provider. If team members working from home are using personal devices, organizations can prevent those devices from ever storing long-lived keys that can reach production systems or code repositories.
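
A quick sketch of how that looks in practice, with placeholder hostnames and paths: the client generates an SSH config stanza that requests a short-lived certificate at connection time, and the server's sshd is configured to trust the issuing certificate authority (the steps for retrieving the CA public key are covered in the product documentation).

# client side: append a stanza that swaps static keys for short-lived certificates
$ cloudflared access ssh-config --hostname vm.example.com --short-lived-cert >> ~/.ssh/config

# server side (sketch): tell sshd to trust the short-lived certificate authority
# in /etc/ssh/sshd_config:
#   PubkeyAuthentication yes
#   TrustedUserCAKeys /etc/ssh/ca.pub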

One-click logging and compliance

When users leave the office, security teams can lose a real layer of a defense-in-depth strategy. Employees do not badge into a front desk when they work remotely.

Cloudflare Access addresses remote work blindspots by adding additional visibility into how applications are used. Access logs every authentication event and, if enabled, every user request made to a resource protected by the platform. Administrators can capture every request and attribute it to a user and IP address without any code changes. Cloudflare Access can help teams meet compliance and regulatory requirements for distributed users without any additional development time.

Onboard users without onboarding sessions

When IT departments change how users do their work, even to faster and safer models, those shifts can still require teams to invest time in training employees. Discoverability becomes a real problem. If users cannot find the applications they need, teams lose the benefits of faster connections and reduced maintenance overhead.

Cloudflare Access includes an application launchpad, available to every user without additional configuration. With the Access App Launch, administrators can also skip sending custom emails or lists of links to new contractors and replace them with a single URL. When external users log in with LinkedIn, GitHub, or any other provider, the Access App Launch will display only the applications they can reach. In a single view, users can find and launch the tools that they need.

Whether those users are employees or contractors and partners, every team member can quickly find the tools they need to avoid losing a step as they shift from working on a private network to a model built on Cloudflare’s global network.

How to get started

It’s really very simple. To find out more about Cloudflare for Teams, visit teams.cloudflare.com.

If you’re looking to get started with Cloudflare Access today, it’s available on any Cloudflare plan. The first five seats are free. Follow the link here to get started.

Finally, need help getting set up? A quick start guide is available here.

Using your devices as the key to your apps

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/using-your-devices-as-the-key-to-your-apps/

Using your devices as the key to your apps

I keep a very detailed budget. I have for the last 7 years. I manually input every expense into a spreadsheet app and use a combination of sumifs functions to track spending.

Opening the spreadsheet app, and then the specific spreadsheet, every time that I want to submit an expense is a little clunky. I’m working on a new project to make that easier. I’m building a simple web app, with a very basic form, into which I will enter one-off expenses. This form will then append those expenses as rows into the budget workbook.

I want to lock down this project; I prefer that I am the only person with the power to wreck my budget. To do that, I’m going to use Cloudflare Access. With Access, I can require a login to reach the page – no server-side changes required.

Except, I don’t want to allow logins from any device. For this project, I want to turn my iPhone into the only device that can reach this app.

To do that, I’ll use Cloudflare Access in combination with an open source toolkit from Cloudflare, cfssl. Together, I can convert my device into a secure key for this application in about 45 minutes.

While this is just one phone and a simple project, a larger organization could scale this up to hundreds of thousands or millions – without spending 45 minutes per device. Authentication occurs in the Cloudflare network and lets teams focus on securely deploying devices, from IoT sensors to corporate laptops, that solve new problems.


🎯 I have a few goals for this project:

  • Protect my prototype budget-entry app with authentication
  • Avoid building a custom login flow into the app itself
  • Use mutual TLS (mTLS) authentication so that only requests from my iPhone are allowed

🗺️ This walkthrough covers how to:

  • Build an Access policy to enforce mutual TLS authentication
  • Use Cloudflare’s PKI toolkit to create a Root CA and then generate a client certificate
  • Use OpenSSL to convert that client certificate into a format for iPhone usage
  • Place that client certificate on my iPhone

⏲️ Time to complete: ~45 minutes


Cloudflare Access

Cloudflare Access is a bouncer that checks ID at the door. Any and every door.

Old models of security built on private networks operate like a guard at the front door of a large apartment building, except this apartment building does not have locks on any of the individual units. If you can walk through the front door, you could walk into any home. By default, private networks assume that a user on that network is trusted until proven malicious – you’re free to roam the building until someone reports you. None of us want to live in that complex.

Access replaces that model with a bouncer in front of each apartment unit. Cloudflare checks every attempt to reach a protected app, machine, or remote desktop against rules that define who is allowed in.

To perform that check, Access needs to confirm a user’s identity. To do that, teams can integrate Access with identity providers like G Suite, AzureAD, Okta or even Facebook and GitHub.

For this project, I want to limit not just who can reach the app, but also what can reach it. I want to only allow my particular iPhone to connect. Since my iPhone does not have its own GitHub account, I need to use a workflow that allows devices to authenticate: certificates, specifically mutual TLS (mTLS) certificate authentication.

📃 Please reach out. Today, the mTLS feature in Access is only available to Enterprise plans. Are you on a self-serve plan and working on a project where you want to use mTLS? IoT, service-to-service, corporate security included. If so, please reach out to me at [email protected] and let’s chat.

mTLS and cfssl

Public key infrastructure (PKI) makes it possible for your browser to trust that this blog really is blog.cloudflare.com. When you visit this blog, the site presents a certificate to tell your browser that it is the real blog.cloudflare.com.

Your browser is skeptical. It keeps a short list of root certificates that it will trust. Your browser will only trust certificates signed by authorities in that list. Cloudflare offers free certificates for hostnames using its reverse proxy. You can also get origin certificates from other services like Let's Encrypt. Either way, when you visit a web page with a certificate, you can be sure you are on the authentic site and that the traffic between you and the site is encrypted.
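
You can watch this exchange happen with OpenSSL; the command below connects to this blog and prints the subject, issuer, and validity window of the certificate it presents:

$ echo | openssl s_client -connect blog.cloudflare.com:443 -servername blog.cloudflare.com 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates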

For this project, I want to go the other direction. I want my device to present a certificate to Cloudflare Access demonstrating that it is my authentic iPhone. To do that, I need to create a chain that can issue a certificate to my device.

Cloudflare publishes an open source PKI toolkit, cfssl, which can solve that problem for me. cfssl lets me quickly create a Root CA and then use that root to generate a client certificate, which will ultimately live on my phone.

To begin, I’ll follow the instructions here to set up cfssl on my laptop. Once installed, I can start creating certificates.
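
For reference, one common way to install the two tools I need, cfssl and cfssljson (check the project README for current instructions; the commands below reflect how the project was typically installed at the time):

# install with Go
$ go get -u github.com/cloudflare/cfssl/cmd/cfssl
$ go get -u github.com/cloudflare/cfssl/cmd/cfssljson

# or, on macOS, with Homebrew
$ brew install cfssl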

Generating a Root CA and an allegory about Texas

First, I need to create the Root CA. This root will give Access a basis for trusting client certificates. Think of the root as the Department of Motor Vehicles (DMV) in Texas. Only the State of Texas, through the DMV, can issue Texas driver licenses. Bouncers do not need to know about every driver license issued, but they do know to trust the State of Texas and how to validate Texas-issued licenses.

In this case, Access does not need to know about every client cert issued by this Root CA. The product only needs to know to trust this Root CA and how to validate if client certificates were issued by this root.

I'm going to start by creating a new directory, cert-auth, to keep things organized. Inside of that directory, I'll create a folder, root, where I'll store the Root CA materials.
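
That directory setup is just two commands:

$ mkdir -p cert-auth/root
$ cd cert-auth/root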

Next, I’ll define some details about the Root CA. I’ll create a file, ca-csr.json and give it some specifics that relate to my deployment.

{
    "CN": "Sam Money App",
    "key": {
      "algo": "rsa",
      "size": 4096
    },
    "names": [
      {
        "C": "PT",
        "L": "Lisboa",
        "O": "Money App Test",
        "OU": "Sam Projects",
        "ST": "Lisboa"
      }
    ]
  }

Now I need to configure how the CA will be used. I’ll create another new file, ca-config.json, and add the following details.

{
    "signing": {
      "default": {
        "expiry": "8760h"
      },
      "profiles": {
        "server": {
          "usages": ["signing", "key encipherment", "server auth"],
          "expiry": "8760h"
        },
        "client": {
          "usages": ["signing","key encipherment","client auth"],
          "expiry": "8760h"
        }
      }
    }
  }

The ca-csr.json file gives the Root CA a sense of identity and the ca-config.json will later define the configuration details when signing new client certificates.

With that in place, I can go ahead and create the Root CA. I’ll run the following command in my terminal from within the root folder.

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

The "Root CA" here is really a composition of three files, all of which are created by that command: cfssl generates a private key (ca-key.pem), a certificate signing request (ca.csr), and the certificate itself (ca.pem).

I need to guard the private key like it's the only thing that matters. In real production deployments, most organizations will create an intermediate certificate and sign client certificates with that intermediate. This allows administrators to keep the root locked down even further; they only need to handle it when creating new intermediates (and those intermediates can be quickly revoked). For this test, I'm just going to use the root to create the client certificates.

Now that I have the Root CA, I can upload the certificate in PEM format to Cloudflare Access. Cloudflare can then use that certificate to authenticate incoming requests for a valid client certificate.

In the Cloudflare Access dashboard, I’ll use the card titled “Mutual TLS Root Certificates”. I can click “Add A New Certificate” and then paste the content of the ca.pem file directly into it.

I need to associate this certificate with a fully qualified domain name (FQDN). In this case, I’m going to use the certificate to authenticate requests for money.samrhea.com, so I’ll just input that subdomain, but I could associate this cert with multiple FQDNs if needed.

Once saved, the Access dashboard will list the new Root CA.

Building an Access Policy

Before I deploy the budget app prototype to money.samrhea.com, I need to lock down that subdomain with an Access policy.

In the Cloudflare dashboard, I’ll select the zone samrhea.com and navigate to the Access tab. Once there, I can click Create Access Policy in the Access Policies card. That card will launch an editor where I can build out the rule(s) for reaching this subdomain.

[Screenshot: the Access policy editor, scoped to money.samrhea.com]

In the example above, the policy will be applied to just the subdomain money.samrhea.com. I could make it more granular with path-based rules, but I’ll keep it simple for now.

In the Policies section, I’m going to create a rule to allow client certificates signed by the Root CA I generated to reach the application. In this case, I’ll pick “Non Identity” from the Decision drop-down. I’ll then choose “Valid Certificate” under the Include details.

This will allow any valid certificate signed by the “Money App Test” CA I uploaded earlier. I could also build a rule using Common Names, but I’ll stick with valid cert for now. I’ll hit Save and finish the certificate deployment.
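
The same rule can be expressed through the API. The payload below is a sketch based on my reading of the Access API, in which a non-identity decision with a certificate include allows any valid client certificate; the endpoint path, field names, and values are assumptions to double-check against the current API reference ($ZONE_ID and $APP_ID are placeholders for this zone and application).

# create a policy that allows any valid client certificate signed by the uploaded root
$ curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/access/apps/$APP_ID/policies" \
    -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_API_KEY" \
    -H "Content-Type: application/json" \
    --data '{"name":"Allow valid client certs","decision":"non_identity","include":[{"certificate":{}}]}'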

Issuing client certs and converting to PKCS #12

So far, I have a Root CA and an Access policy that enforces mTLS with client certs issued by that Root CA. I’ve stationed a bouncer outside of my app and told them to only trust ID cards issued by The State of Texas. Now I need to issue a license in the form of a client certificate.

To avoid confusion, I'm going to create a new folder in the same directory as the root folder, this one called client. Inside of this directory, I'll create a new file, client-csr.json, with the following JSON blob:

{
    "CN": "Rhea Group",
    "hosts": [""],
    "key": {
      "algo": "rsa",
      "size": 4096
    },
    "names": [
      {
        "C": "PT",
        "L": "Lisboa",
        "O": "Money App Test",
        "OU": "Sam Projects",
        "ST": "Lisboa"
      }
    ]
  }

This sets configuration details for the client certificate that I’m about to request.

I can now use cfssl to generate a client certificate against my Root CA. The command below uses the -profile flag to create the client cert using the JSON configuration I just saved. This also gives the file the name iphone-client.

$ cfssl gencert -ca=../root/ca.pem -ca-key=../root/ca-key.pem -config=../root/ca-config.json -profile=client client-csr.json | cfssljson -bare iphone-client

After running the command, the client directory should contain the following files:

  • client-csr.json: the JSON configuration created earlier to specify client cert details.
  • iphone-client-key.pem: the private key for the generated client certificate.
  • iphone-client.csr: the certificate signing request used to request the client cert.
  • iphone-client.pem: the client certificate itself.
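
Before converting anything, a quick optional sanity check confirms that the new client certificate chains back to the Root CA and carries the expected subject:

# confirm the client cert was signed by the Root CA; prints "iphone-client.pem: OK" on success
$ openssl verify -CAfile ../root/ca.pem iphone-client.pem

# inspect the subject and issuer of the client cert
$ openssl x509 -in iphone-client.pem -noout -subject -issuer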

With my freshly minted client certificate and key, I can go ahead and test that it works with my Access policy with a quick cURL command.

$ curl -v --cert iphone-client.pem --key iphone-client-key.pem https://money.samrhea.com

That works, but I’m not done yet. I need to get this client certificate on my iPhone. To do so, I need to convert the certificate and key into a format that my iPhone understands, PKCS #12.

PKCS #12 is a file format used for storing cryptographic objects. To convert the two .pem files, the certificate and the key, into PKCS #12, I'm going to use the OpenSSL command-line tool.

OpenSSL is a popular toolkit for TLS and SSL protocols that can solve a wide variety of certificate use cases. In my example, I just need it for one command:

$ openssl pkcs12 -export -out sam-iphone.p12 -inkey iphone-client-key.pem -in iphone-client.pem -certfile ../root/ca.pem

The command above takes the key and certificate generated previously and converts them into a single .p12 file. I’ll also be prompted to create an “Export Password”. I’ll use something that I can remember, because I’m going to need it in the next section.
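
To double-check the bundle before sending it anywhere, OpenSSL can print its contents (it will prompt for the Export Password I just set):

$ openssl pkcs12 -info -in sam-iphone.p12 -nokeys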

Authenticating from my iPhone

I now need to get the .p12 file on my iPhone. In corporate environments, organizations distribute client certificates via mobile device management (MDM) programs or other tools. I’m just doing this for a personal test project, so I’m going to use AirDrop.

Once my iPhone receives the file, I’ll be prompted to select a device where the certificate will be installed as a device profile.

I’ll then be prompted to enter my device password and the password set in the “Export” step above. Once complete, I can view the certificate under Profiles in Settings.

Now, when I visit money.samrhea.com for the first time from my phone, I’ll be prompted to use the profile created.

Browsers can exhibit strange behavior when handling client certificate prompts. This should be the only time I need to confirm this profile should be used, but it might happen again.

What’s next?

My prototype personal finance app is now only accessible from my iPhone. This also makes it easy to log in through Access from my device.

Access policies can be pretty flexible. If I want to reach it from a different device, I could build a rule to allow logins through Google as an alternative. I can also create a policy to require both a certificate and SSO login.

Beyond just authentication, I can also build something with this client cert flow now. Cloudflare Access makes the details from the client cert, the ones I created earlier in this tutorial, available to Cloudflare Workers. I can start to create routing rules or trigger actions based on the details about this client cert.

Introducing Cloudflare for Teams

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/introducing-cloudflare-for-teams/

Introducing Cloudflare for Teams

Ten years ago, when Cloudflare was created, the Internet was a place that people visited. People still talked about 'surfing the web' and the iPhone was less than two years old. Then, on July 4, 2009, large-scale DDoS attacks were launched against websites in the US and South Korea.

Those attacks highlighted how fragile the Internet was and how all of us were becoming dependent on access to the web as part of our daily lives.

Fast forward ten years and the speed, reliability and safety of the Internet is paramount as our private and work lives depend on it.

We started Cloudflare to solve one half of every IT organization's challenge: how do you ensure the resources and infrastructure that you expose to the Internet are safe from attack, fast, and reliable? We saw that the world was moving away from hardware and software to solve these problems and instead wanted a scalable service that would work around the world.

To deliver that, we built one of the world’s largest networks. Today our network spans more than 200 cities worldwide and is within milliseconds of nearly everyone connected to the Internet. We have built the capacity to stand up to nation-state scale cyberattacks and a threat intelligence system powered by the immense amount of Internet traffic that we see.

Today we’re expanding Cloudflare’s product offerings to solve the other half of every IT organization’s challenge: ensuring the people and teams within an organization can access the tools they need to do their job and are safe from malware and other online threats.

The speed, reliability, and protection we’ve brought to public infrastructure is extended today to everything your team does on the Internet.

In addition to protecting an organization’s infrastructure, IT organizations are charged with ensuring that employees of an organization can access the tools they need safely. Traditionally, these problems would be solved by hardware products like VPNs and Firewalls. VPNs let authorized users access the tools they needed and Firewalls kept malware out.

Castle and Moat

The dominant model was the idea of a castle and a moat. You put all your valuable assets inside the castle. Your Firewall created the moat around the castle to keep anything malicious out. When you needed to let someone in, a VPN acted as the drawbridge over the moat.

This is still the model most businesses use today, but it's showing its age. The first challenge is that if an attacker is able to find its way over the moat and into the castle, then it can cause significant damage. Unfortunately, few weeks go by without a news story about how an organization had significant data compromised because an employee fell for a phishing email, a contractor was compromised, or someone was able to sneak into an office and plug in a rogue device.

The second challenge of the model is the rise of cloud and SaaS. Increasingly, an organization's resources aren't just in one castle anymore, but instead spread across different public cloud and SaaS vendors.

Services like Box, for instance, provide better storage and collaboration tools than most organizations could ever hope to build and manage themselves. But there’s literally nowhere you can ship a hardware box to Box in order to build your own moat around their SaaS castle. Box provides some great security tools themselves, but they are different from the tools provided by every other SaaS and public cloud vendor. Where IT organizations used to try to have a single pane of glass with a complex mess of hardware to see who was getting stopped by their moats and who was crossing their drawbridges, SaaS and cloud make that visibility increasingly difficult.

The third challenge to the traditional castle and moat strategy of IT is the rise of mobile. Where once upon a time your employees would all show up to work in your castle, now people are working around the world. Requiring everyone to login to a limited number of central VPNs becomes obviously absurd when you picture it as villagers having to sprint back from wherever they are across a drawbridge whenever they want to get work done. It’s no wonder VPN support is one of the top IT organization tickets and likely always will be for organizations that maintain a castle and moat approach.

But it’s worse than that. Mobile has also introduced a culture where employees bring their own devices to work. Or, even if on a company-managed device, work from the road or home — beyond the protected walls of the castle and without the security provided by a moat.

If you’d looked at how we managed our own IT systems at Cloudflare four years ago, you’d have seen us following this same model. We used firewalls to keep threats out and required every employee to login through our VPN to get their work done. Personally, as someone who travels extensively for my job, it was especially painful.

Regularly, someone would send me a link to an internal wiki article asking for my input. I’d almost certainly be working from my mobile phone in the back of a cab running between meetings. I’d try and access the link and be prompted to login to our VPN in San Francisco. That’s when the frustration would start.

Corporate mobile VPN clients, in my experience, all seem to be powered by some 100-sided die that only will allow you to connect if the number of miles you are from your home office is less than 25 times whatever number is rolled. Much frustration, and several IT tickets later, with a little luck I may be able to connect. And, even then, the experience was horribly slow and unreliable.

When we audited our own system, we found that the frustration with the process had caused multiple teams to create workarounds that were, effectively, unauthorized drawbridges over our carefully constructed moat. And, as we increasingly adopted SaaS tools like Salesforce and Workday, we lost much of the visibility into how these tools were being used.

Around the same time we were realizing the traditional approach to IT security was untenable for an organization like Cloudflare, Google published their paper titled “BeyondCorp: A New Approach to Enterprise Security.” The core idea was that a company’s intranet should be no more trusted than the Internet. And, rather than the perimeter being enforced by a singular moat, instead each application and data source should authenticate the individual and device each time it is accessed.

The BeyondCorp idea, which has come to be known as a Zero Trust model for IT security, was influential in how we thought about our own systems. Powerfully, because Cloudflare had a flexible global network, we were able to use it both to enforce policies as our team accessed tools and to protect ourselves from malware as we did our jobs.

Cloudflare for Teams

Today, we’re excited to announce Cloudflare for Teams™: the suite of tools we built to protect ourselves, now available to help any IT organization, from the smallest to the largest.

Cloudflare for Teams is built around two complementary products: Access and Gateway. Cloudflare Access™ is the modern VPN — a way to ensure your team members get fast access to the resources they need to do their job while keeping threats out. Cloudflare Gateway™ is the modern Next Generation Firewall — a way to ensure that your team members are protected from malware and follow your organization’s policies wherever they go online.

Powerfully, both Cloudflare Access and Cloudflare Gateway are built atop the existing Cloudflare network. That means they are fast, reliable, scalable to the largest organizations, DDoS resistant, and located everywhere your team members are today and wherever they may travel. Have a senior executive going on a photo safari to see giraffes in Kenya, gorillas in Rwanda, and lemurs in Madagascar? Don't worry, we have Cloudflare data centers in all those countries (and many more) and they all support Cloudflare for Teams.

All Cloudflare for Teams products are informed by the threat intelligence we see across all of Cloudflare’s products. We see such a large diversity of Internet traffic that we often see new threats and malware before anyone else. We’ve supplemented our own proprietary data with additional data sources from leading security vendors, ensuring Cloudflare for Teams provides a broad set of protections against malware and other online threats.

Moreover, because Cloudflare for Teams runs atop the same network we built for our infrastructure protection products, we can deliver them very efficiently. That means that we can offer these products to our customers at extremely competitive prices. Our goal is to make the return on investment (ROI) for all Cloudflare for Teams customers nothing short of a no brainer. If you’re considering another solution, contact us before you decide.

Both Cloudflare Access and Cloudflare Gateway also build off products we’ve launched and battle tested already. For example, Gateway builds, in part, off our 1.1.1.1 Public DNS resolver. Today, more than 40 million people trust 1.1.1.1 as the fastest public DNS resolver globally. By adding malware scanning, we were able to create our entry-level Cloudflare Gateway product.
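
If you want to see the resolver that Gateway Basic builds on, 1.1.1.1 answers both classic DNS queries and DNS-over-HTTPS requests; for example:

# a classic DNS query against the public resolver
$ dig @1.1.1.1 cloudflare.com

# the same lookup over DNS-over-HTTPS, returned as JSON
$ curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=cloudflare.com&type=A'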

Cloudflare Access and Cloudflare Gateway build off our WARP and WARP+ products. We intentionally built a consumer mobile VPN service because we knew it would be hard. The millions of WARP and WARP+ users who have put the product through its paces have ensured that it's ready for the enterprise. That we have 4.5 stars across more than 200,000 ratings, just on iOS, is a testament to how reliable the underlying WARP and WARP+ engines have become. Compare that with the ratings of any corporate mobile VPN client, which are unsurprisingly abysmal.

We've partnered with some incredible organizations to create the ecosystem around Cloudflare for Teams. These include endpoint security solutions such as VMware Carbon Black, Malwarebytes, and Tanium; SIEM and analytics solutions such as Datadog, Sumo Logic, and Splunk; and identity platforms such as Okta, OneLogin, and Ping Identity. Feedback from these partners and more is at the end of this post.

If you’re curious about more of the technical details about Cloudflare for Teams, I encourage you to read Sam Rhea’s post.

Serving Everyone

Cloudflare has always believed in the power of serving everyone. That’s why we’ve offered a free version of Cloudflare for Infrastructure since we launched in 2010. That belief doesn’t change with our launch of Cloudflare for Teams. For both Cloudflare Access and Cloudflare Gateway, there will be free versions to protect individuals, home networks, and small businesses. We remember what it was like to be a startup and believe that everyone deserves to be safe online, regardless of their budget.

With both Cloudflare Access and Gateway, the products are segmented along a Good, Better, Best framework. For Access, that breaks out into Access Basic, Access Pro, and Access Enterprise. You can see the features available with each tier in the table below, including Access Enterprise features that will roll out over the coming months.

[Table: Access Basic, Access Pro, and Access Enterprise feature comparison]

We wanted a similar Good, Better, Best framework for Cloudflare Gateway. Gateway Basic can be provisioned in minutes through a simple change to your network's recursive DNS settings. Once in place, network administrators can set rules on what domains should be allowed and filtered on the network. Cloudflare Gateway is informed both by the malware data gathered from our global sensor network and by a rich corpus of domain categorization, allowing network operators to set whatever policy makes sense for them. Gateway Basic leverages the speed of 1.1.1.1 with granular network controls.

Gateway Pro, which we're announcing today and which you can sign up to beta test as its features roll out over the coming months, extends the DNS-provisioned protection to a full proxy. Gateway Pro can be provisioned via the WARP client — which we are extending beyond iOS and Android mobile devices to also support Windows, MacOS, and Linux — or via network policies including MDM-provisioned proxy settings or GRE tunnels from office routers. This allows a network operator to filter not merely by domain but by the specific URL.

Building the Best-in-Class Network Gateway

While Gateway Basic (provisioned via DNS) and Gateway Pro (provisioned as a proxy) made sense, we wanted to imagine what the best-in-class network gateway would be for Enterprises that valued the highest level of performance and security. As we talked to these organizations we heard an ever-present concern: just surfing the Internet created risk of unauthorized code compromising devices. With every page that every user visited, third party code (JavaScript, etc.) was being downloaded and executed on their devices.

The solution, they suggested, was to isolate the local browser from third party code and have websites render in the network. This technology is known as browser isolation. And, in theory, it's a great idea. Unfortunately, in practice with current technology, it doesn't perform well. The most common way browser isolation technology works is to render the page on a server and then push a bitmap of the page down to the browser. This is known as pixel pushing. The challenge is that it can be slow and bandwidth intensive, and it breaks many sophisticated web applications.

We were hopeful that we could solve some of these problems by moving the rendering of the pages to Cloudflare’s network, which would be closer to end users. So we talked with many of the leading browser isolation companies about potentially partnering. Unfortunately, as we experimented with their technologies, even with our vast network, we couldn’t overcome the sluggish feel that plagues existing browser isolation solutions.

Enter S2 Systems


That’s when we were introduced to S2 Systems. I clearly remember first trying the S2 demo because my first reaction was: “This can’t be working correctly, it’s too fast.” The S2 team had taken a different approach to browser isolation. Rather than pushing down a bitmap of what the screen looked like, they pushed down the vectors needed to draw what’s on the screen. The result was an experience that was typically at least as fast as browsing locally and without broken pages.

The best, albeit imperfect, analogy I’ve come up with to describe the difference between S2’s technology and other browser isolation companies is the difference between Windows XP and MacOS X when they were both launched in 2001. Windows XP’s original graphics were based on bitmapped images. MacOS X’s were based on vectors. Remember the magic of watching an application “genie” in and out of the MacOS X dock? Check it out in a video from the launch…

At the time watching a window slide in and out of the dock seemed like magic compared with what you could do with bitmapped user interfaces. You can hear the awe in the reaction from the audience. That awe that we’ve all gotten used to in UIs today comes from the power of vector images. And, if you’ve been underwhelmed by the pixel-pushed bitmaps of existing browser isolation technologies, just wait until you see what is possible with S2’s technology.


We were so impressed with the team and the technology that we acquired the company. We will be integrating the S2 technology into Cloudflare Gateway Enterprise. The browser isolation technology will run across Cloudflare’s entire global network, bringing it within milliseconds of virtually every Internet user. You can learn more about this approach in Darren Remington’s blog post.

Once the rollout is complete in the second half of 2020 we expect we will be able to offer the first full browser isolation technology that doesn’t force you to sacrifice performance. In the meantime, if you’d like a demo of the S2 technology in action, let us know.

The Promise of a Faster Internet for Everyone

Cloudflare’s mission is to help build a better Internet. With Cloudflare for Teams, we’ve extended that network to protect the people and organizations that use the Internet to do their jobs. We’re excited to help a more modern, mobile, and cloud-enabled Internet be safer and faster than it ever was with traditional hardware appliances.

But the same technology we’re deploying now to improve enterprise security holds further promise. The most interesting Internet applications keep getting more complicated and, in turn, requiring more bandwidth and processing power to use.

For those of us fortunate enough to be able to afford the latest iPhone, we continue to reap the benefits of an increasingly powerful set of Internet-enabled tools. But try to use the Internet on a mobile phone from a few generations back, and you can see how quickly the latest Internet applications leave legacy devices behind. That’s a problem if we want to bring the next 4 billion Internet users online.

If the sophistication of applications and the complexity of interfaces continue to keep pace with the latest generation of devices, we need a paradigm shift. To make the best of the Internet available to everyone, we may need to shift the work of the Internet off the end devices we all carry around in our pockets and let the network — where power, bandwidth, and CPU are relatively plentiful — carry more of the load.

That’s the long term promise of what S2’s technology combined with Cloudflare’s network may someday power. If we can make it so a less expensive device can run the latest Internet applications — using less battery, bandwidth, and CPU than ever before possible — then we can make the Internet more affordable and accessible for everyone.

We started with Cloudflare for Infrastructure. Today we’re announcing Cloudflare for Teams. But our ambition is nothing short of Cloudflare for Everyone.

Early Feedback on Cloudflare for Teams from Customers and Partners


“Cloudflare Access has enabled Ziff Media Group to seamlessly and securely deliver our suite of internal tools to employees around the world on any device, without the need for complicated network configurations,” said Josh Butts, SVP Product & Technology, Ziff Media Group.


“VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them,” said Amod Malviya, Cofounder and CTO, Udaan. “Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0”


“Roman makes healthcare accessible and convenient,” said Ricky Lindenhovius, Engineering Director, Roman Health. “Part of that mission includes connecting patients to physicians, and Cloudflare helps Roman securely and conveniently connect doctors to internally managed tools. With Cloudflare, Roman can evaluate every request made to internal applications for permission and identity, while also improving speed and user experience.”


“We’re excited to partner with Cloudflare to provide our customers an innovative approach to enterprise security that combines the benefits of endpoint protection and network security,” said Tom Barsi, VP Business Development, VMware. “VMware Carbon Black is a leading endpoint protection platform (EPP) and offers visibility and control of laptops, servers, virtual machines, and cloud infrastructure at scale. In partnering with Cloudflare, customers will have the ability to use VMware Carbon Black’s device health as a signal in enforcing granular authentication to a team’s internally managed application via Access, Cloudflare’s Zero Trust solution. Our joint solution combines the benefits of endpoint protection and a zero trust authentication solution to keep teams working on the Internet more secure.”


“Rackspace is a leading global technology services company accelerating the value of the cloud during every phase of our customers’ digital transformation,” said Lisa McLin, vice president of alliances and channel chief at Rackspace. “Our partnership with Cloudflare enables us to deliver cutting edge networking performance to our customers and helps them leverage a software defined networking architecture in their journey to the cloud.”


“Employees are increasingly working outside of the traditional corporate headquarters. Distributed and remote users need to connect to the Internet, but today’s security solutions often require they backhaul those connections through headquarters to have the same level of security,” said Michael Kenney, head of strategy and business development for Ingram Micro Cloud. “We’re excited to work with Cloudflare whose global network helps teams of any size reach internally managed applications and securely use the Internet, protecting the data, devices, and team members that power a business.”


“At Okta, we’re on a mission to enable any organization to securely use any technology. As a leading provider of identity for the enterprise, Okta helps organizations remove the friction of managing their corporate identity for every connection and request that their users make to applications. We’re excited about our partnership with Cloudflare and bringing seamless authentication and connection to teams of any size,” said Chuck Fontana, VP, Corporate & Business Development, Okta.


“Organizations need one unified place to see, secure, and manage their endpoints,” said Matt Hastings, Senior Director of Product Management at Tanium. “We are excited to partner with Cloudflare to help teams secure their data, off-network devices, and applications. Tanium’s platform provides customers with a risk-based approach to operations and security with instant visibility and control into their endpoints. Cloudflare helps extend that protection by incorporating device data to enforce security for every connection made to protected resources.”


“OneLogin is happy to partner with Cloudflare to advance security teams’ identity control in any environment, whether on-premise or in the cloud, without compromising user performance,” said Gary Gwin, Senior Director of Product at OneLogin. “OneLogin’s identity and access management platform securely connects people and technology for every user, every app, and every device. The OneLogin and Cloudflare for Teams integration provides a comprehensive identity and network control solution for teams of all sizes.”


“Ping Identity helps enterprises improve security and user experience across their digital businesses,” said Loren Russon, Vice President of Product Management, Ping Identity. “Cloudflare for Teams integrates with Ping Identity to provide a comprehensive identity and network control solution to teams of any size, and ensures that only the right people get the right access to applications, seamlessly and securely.”


“Our customers increasingly leverage deep observability data to address both operational and security use cases, which is why we launched Datadog Security Monitoring,” said Marc Tremsal, Director of Product Management at Datadog. “Our integration with Cloudflare already provides our customers with visibility into their web and DNS traffic; we’re excited to work together as Cloudflare for Teams expands this visibility to corporate environments.”


“As more companies support employees who work on corporate applications from outside of the office, it is vital that they understand each request users are making. They need real-time insights and intelligence to react to incidents and audit secure connections,” said John Coyle, VP of Business Development, Sumo Logic. “With our partnership with Cloudflare, customers can now log every request made to internal applications and automatically push them directly to Sumo Logic for retention and analysis.”


“CloudGenix is excited to partner with Cloudflare to provide an end-to-end security solution from the branch to the cloud. As enterprises move off of expensive legacy MPLS networks and adopt branch-to-internet breakout policies, the CloudGenix CloudBlade platform and Cloudflare for Teams together can make this transition seamless and secure. We’re looking forward to Cloudflare’s roadmap with this announcement and partnership opportunities in the near term,” said Aaron Edwards, Field CTO, CloudGenix.


“In the face of limited cybersecurity resources, organizations are looking for highly automated solutions that work together to reduce the likelihood and impact of today’s cyber risks,” said Akshay Bhargava, Chief Product Officer, Malwarebytes. “With Malwarebytes and Cloudflare together, organizations are deploying more than twenty layers of security defense-in-depth. Using just two solutions, teams can secure their entire enterprise from device, to the network, to their internal and external applications.”


“Organizations’ sensitive data is vulnerable in-transit over the Internet and when it’s stored at its destination in public cloud, SaaS applications and endpoints,” said Pravin Kothari, CEO of CipherCloud. “CipherCloud is excited to partner with Cloudflare to secure data in all stages, wherever it goes. Cloudflare’s global network secures data in-transit without slowing down performance. CipherCloud CASB+ provides a powerful cloud security platform with end-to-end data protection and adaptive controls for cloud environments, SaaS applications and BYOD endpoints. Working together, teams can rely on integrated Cloudflare and CipherCloud solution to keep data always protected without compromising user experience.”

Cloudflare + Remote Browser Isolation

Post Syndicated from Darren Remington original https://blog.cloudflare.com/cloudflare-and-remote-browser-isolation/


Cloudflare announced today that it has purchased S2 Systems Corporation, a Seattle-area startup that has built an innovative remote browser isolation solution unlike any other currently in the market. The majority of endpoint compromises involve web browsers — by putting space between users’ devices and where web code executes, browser isolation makes endpoints substantially more secure. In this blog post, I’ll discuss what browser isolation is, why it is important, how the S2 Systems cloud browser works, and how it fits with Cloudflare’s mission to help build a better Internet.

What’s wrong with web browsing?

It’s been more than 30 years since Tim Berners-Lee wrote the project proposal defining the technology underlying what we now call the world wide web. What Berners-Lee envisioned as being useful for “several thousand people, many of them very creative, all working toward common goals”[1] has grown to become a fundamental part of commerce, business, the global economy, and an integral part of society used by more than 58% of the world’s population[2].

The world wide web and web browsers have unequivocally become the platform for much of the productive work (and play) people do every day. However, as the pervasiveness of the web grew, so did opportunities for bad actors. Hardly a day passes without a major new cybersecurity breach in the news. Several contributing factors have helped propel cybercrime to unprecedented levels: the commercialization of hacking tools, the emergence of malware-as-a-service, the presence of well-financed nation states and organized crime, and the development of cryptocurrencies which enable malicious actors of all stripes to anonymously monetize their activities.

The vast majority of security breaches originate from the web. Gartner calls the public Internet a “cesspool of attacks” and identifies web browsers as the primary culprit responsible for 70% of endpoint compromises.[3] This should not be surprising. Although modern web browsers are remarkable, many fundamental architectural decisions were made in the 1990’s before concepts like security, privacy, corporate oversight, and compliance were issues or even considerations. Core web browsing functionality (including the entire underlying WWW architecture) was designed and built for a different era and circumstances.

In today’s world, several web browsing assumptions are outdated or even dangerous. Web browsers and the underlying server technologies encompass an extensive – and growing – list of complex interrelated technologies. These technologies are constantly in flux, driven by vibrant open source communities, content publishers, search engines, advertisers, and competition between browser companies. As a result of this underlying complexity, web browsers have become primary attack vectors. According to Gartner, “the very act of users browsing the internet and clicking on URL links opens the enterprise to significant risk. […] Attacking thru the browser is too easy, and the targets too rich.”[4] Even “ostensibly ‘good’ websites are easily compromised and can be used to attack visitors” (Gartner[5]) with more than 40% of malicious URLs found on good domains (Webroot[6]). (A complete list of vulnerabilities is beyond the scope of this post.)

The very structure and underlying technologies that power the web are inherently difficult to secure. Some browser vulnerabilities result from illegitimate use of legitimate functionality: enabling browsers to download files and documents is good, but allowing downloading of files infected with malware is bad; dynamic loading of content across multiple sites within a single webpage is good, but cross-site scripting is bad; enabling an extensive advertising ecosystem is good, but the inability to detect hijacked links or malicious redirects to malware or phishing sites is bad; etc.

Enterprise Browsing Issues

Enterprises have additional challenges with traditional browsers.

Paradoxically, IT departments have the least amount of control over the most ubiquitous app in the enterprise – the web browser. The most common complaints about web browsers from enterprise security and IT professionals are:

  1. Security (obviously). The public internet is a constant source of security breaches and the problem is growing given an 11x escalation in attacks since 2016 (Meeker[7]). Costs of detection and remediation are escalating and the reputational damage and financial losses for breaches can be substantial.
  2. Control. IT departments have little visibility into user activity and limited ability to leverage content disarm and reconstruction (CDR) and data loss prevention (DLP) mechanisms, including visibility into when, where, or by whom files are downloaded or uploaded.
  3. Compliance. The inability to control data and activity across geographies or capture required audit telemetry to meet increasingly strict regulatory requirements. This results in significant exposure to penalties and fines.

Given vulnerabilities exposed through everyday user activities such as email and web browsing, some organizations attempt to restrict these activities. As both are legitimate and critical business functions, efforts to limit or curtail web browser use inevitably fail or have a substantive negative impact on business productivity and employee morale.

Current approaches to mitigating security issues inherent in browsing the web are largely based on signature technology for data files and executables, and lists of known good/bad URLs and DNS addresses. The challenge with these approaches is the difficulty of keeping current with known attacks (file signatures, URLs and DNS addresses) and their inherent vulnerability to zero-day attacks. Hackers have devised automated tools to defeat signature-based approaches (e.g. generating hordes of files with unknown signatures) and create millions of transient websites in order to defeat URL/DNS blacklists.

While these approaches certainly prevent some attacks, the growing number of incidents and severity of security breaches clearly indicate more effective alternatives are needed.

What is browser isolation?

The core concept behind browser isolation is security-through-physical-isolation to create a “gap” between a user’s web browser and the endpoint device thereby protecting the device (and the enterprise network) from exploits and attacks. Unlike secure web gateways, antivirus software, or firewalls which rely on known threat patterns or signatures, this is a zero-trust approach.

There are two primary browser isolation architectures: (1) client-based local isolation and (2) remote isolation.

Local browser isolation attempts to isolate a browser running on a local endpoint using app-level or OS-level sandboxing. In addition to leaving the endpoint at risk when there is an isolation failure, these systems require significant endpoint resources (memory + compute), tend to be brittle, and are difficult for IT to manage as they depend on support from specific hardware and software components.

Further, local browser isolation does nothing to address the control and compliance issues mentioned above.

Remote browser isolation (RBI) protects the endpoint by moving the browser to a remote service in the cloud or to a separate on-premises server within the enterprise network:

  • On-premises isolation simply relocates the risk from the endpoint to another location within the enterprise without actually eliminating the risk.
  • Cloud-based remote browsing isolates the end-user device and the enterprise’s network while fully enabling IT control and compliance solutions.

Given the inherent advantages, most browser isolation solutions – including S2 Systems – leverage cloud-based remote isolation. Properly implemented, remote browser isolation can protect the organization from browser exploits, plug-ins, zero-day vulnerabilities, malware and other attacks embedded in web content.

How does Remote Browser Isolation (RBI) work?

In a typical cloud-based RBI system (the red-dashed box ❶ below), individual remote browsers ❷ are run in the cloud as disposable containerized instances – typically, one instance per user. The remote browser sends the rendered contents of a web page to the user endpoint device ❹ using a specific protocol and data format ❸. Actions by the user, such as keystrokes, mouse and scroll commands, are sent back to the isolation service over a secure encrypted channel where they are processed by the remote browser and any resulting changes to the remote browser webpage are sent back to the endpoint device.

[Figure: a typical cloud-based RBI system, showing the isolation service ❶, disposable remote browsers ❷, the remoting protocol and data format ❸, and the user endpoint device ❹]

In effect, the endpoint device is “remote controlling” the cloud browser. Some RBI systems use proprietary clients installed on the local endpoint while others leverage existing HTML5-compatible browsers on the endpoint and are considered ‘clientless.’
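
As a toy illustration of that remote-control loop, the Python sketch below (entirely hypothetical; a real RBI service uses an encrypted network transport and an actual browser) shows input events flowing up to the remote browser and draw updates flowing back down.

from collections import deque

class LoopbackTransport:
    """Simulates the channel between the endpoint and the remote browser."""
    def __init__(self):
        self.up, self.down = deque(), deque()

    def send_input(self, event):
        # Endpoint -> remote browser: keystroke, click, or scroll event.
        self.up.append(event)
        # The simulated remote browser reacts by producing a new draw update.
        self.down.append({"redraw": f"page state after {event['type']}"})

    def recv_updates(self):
        # Remote browser -> endpoint: updates to redraw in the local window.
        while self.down:
            yield self.down.popleft()

transport = LoopbackTransport()
transport.send_input({"type": "click", "x": 120, "y": 48})
transport.send_input({"type": "keypress", "key": "G"})
for update in transport.recv_updates():
    print(update)   # the endpoint redraws its window with each update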

Data breaches that occur in the remote browser are isolated from the local endpoint and enterprise network. Every remote browser instance is treated as if compromised and is terminated after each session. New browser sessions start with a fresh instance. Obviously, the RBI service must prevent browser breaches from leaking outside the browser containers to the service itself. Most RBI systems provide remote file viewers, negating the need to download files, but also have the ability to inspect files for malware before allowing them to be downloaded.

A critical component in the above architecture is the specific remoting technology employed by the cloud RBI service. The remoting technology has a significant impact on the operating cost and scalability of the RBI service, website fidelity and compatibility, bandwidth requirements, endpoint hardware/software requirements and even the user experience. Remoting technology also determines the effective level of security provided by the RBI system.

All current cloud RBI systems employ one of two remoting technologies:

(1)    Pixel pushing is a video-based approach which captures pixel images of the remote browser ‘window’ and transmits a sequence of images to the client endpoint browser or proprietary client. This is similar to how remote desktop and VNC systems work. Although considered to be relatively secure, there are several inherent challenges with this approach:

  • Continuously encoding and transmitting video streams of remote webpages to user endpoint devices is very costly. Scaling this approach to millions of users is financially prohibitive and logistically complex.
  • Requires significant bandwidth. Even when highly optimized, pushing pixels is bandwidth intensive.
  • Unavoidable latency results in an unsatisfactory user experience. These systems tend to be slow and generate a lot of user complaints.
  • Mobile support is degraded by high bandwidth requirements compounded by inconsistent connectivity.
  • HiDPI displays may render at lower resolutions. Pixel density increases exponentially with resolution which means remote browser sessions (particularly fonts) on HiDPI devices can appear fuzzy or out of focus.

(2) DOM reconstruction emerged as a response to the shortcomings of pixel pushing. DOM reconstruction attempts to clean webpage HTML, CSS, etc. before forwarding the content to the local endpoint browser. The underlying HTML, CSS, etc., are reconstructed in an attempt to eliminate active code, known exploits, and other potentially malicious content. While addressing the latency, operational cost, and user experience issues of pixel pushing, it introduces two significant new issues:

  • Security. The underlying technologies – HTML, CSS, web fonts, etc. – are the attack vectors hackers leverage to breach endpoints. Attempting to remove malicious content or code is like washing mosquitos: you can attempt to clean them, but they remain inherent carriers of dangerous and malicious material. It is impossible to identify, in advance, all the means of exploiting these technologies even through an RBI system.
  • Website fidelity. Inevitably, attempting to remove malicious active code, reconstructing HTML, CSS and other aspects of modern websites results in broken pages that don’t render properly or don’t render at all. Websites that work today may not work tomorrow as site publishers make daily changes that may break DOM reconstruction functionality. The result is an infinite tail of issues requiring significant resources in an endless game of whack-a-mole. Some RBI solutions struggle to support common enterprise-wide services like Google G Suite or Microsoft Office 365 even as malware laden web email continues to be a significant source of breaches.
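
To see why removal-based cleaning is so brittle, consider the deliberately naive sanitizer sketched below in Python (illustrative only; production DOM reconstruction engines are far more elaborate). It strips script elements and on* attributes, yet it cannot anticipate every way active content can be smuggled into a page.

from html.parser import HTMLParser

class NaiveSanitizer(HTMLParser):
    """Drops <script> elements and on* event attributes; everything else passes through."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.skip_depth += 1
            return
        if self.skip_depth:
            return
        safe = [(k, v) for k, v in attrs if not k.lower().startswith("on")]
        self.out.append("<" + tag + "".join(f' {k}="{v}"' for k, v in safe) + ">")

    def handle_endtag(self, tag):
        if tag == "script":
            self.skip_depth = max(0, self.skip_depth - 1)
        elif not self.skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

s = NaiveSanitizer()
s.feed('<p onclick="evil()">hello<script>steal()</script></p>')
print("".join(s.out))   # <p>hello</p>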


Customers are left to choose between a secure solution with a bad user experience and high operating costs, or a faster, much less secure solution that breaks websites. These tradeoffs have driven some RBI providers to implement both remoting technologies into their products. However, this leaves customers to pick their poison without addressing the fundamental issues.

Given the significant tradeoffs in RBI systems today, one common optimization for current customers is to deploy remote browsing capabilities to only the most vulnerable users in an organization such as high-risk executives, finance, business development, or HR employees. Like vaccinating half the pupils in a classroom, this results in a false sense of security that does little to protect the larger organization.

Unfortunately, the largest “gap” created by current remote browser isolation systems is the void between the potential of the underlying isolation concept and the implementation reality of currently available RBI systems.

S2 Systems Remote Browser Isolation

S2 Systems remote browser isolation is a fundamentally different approach based on S2-patented technology called Network Vector Rendering (NVR).

The S2 remote browser is based on the open-source Chromium engine on which Google Chrome is built. In addition to powering Google Chrome which has a ~70% market share[8], Chromium powers twenty-one other web browsers including the new Microsoft Edge browser.[9] As a result, significant ongoing investment in the Chromium engine ensures the highest levels of website support, compatibility and a continuous stream of improvements.

A key architectural feature of the Chromium browser is its use of the Skia graphics library. Skia is a widely-used cross-platform graphics engine for Android, Google Chrome, Chrome OS, Mozilla Firefox, Firefox OS, FitbitOS, Flutter, the Electron application framework and many other products. Like Chromium, the pervasiveness of Skia ensures ongoing broad hardware and platform support.

[Figure: Skia code fragment]

Everything visible in a Chromium browser window is rendered through the Skia rendering layer. This includes application window UI such as menus, but more importantly, the entire contents of the webpage window are rendered through Skia. Chromium compositing, layout and rendering are extremely complex with multiple parallel paths optimized for different content types, device contexts, etc. The following figure is an egregious simplification for illustration purposes of how S2 works (apologies to Chromium experts):

[Figure: simplified illustration of how S2 works]

S2 Systems NVR technology intercepts the remote Chromium browser’s Skia draw commands ❶, tokenizes and compresses them, then encrypts and transmits them across the wire ❷ to any HTML5 compliant web browser ❸ (Chrome, Firefox, Safari, etc.) running locally on the user endpoint desktop or mobile device. The Skia API commands captured by NVR are pre-rasterization which means they are highly compact.

On first use, the S2 RBI service transparently pushes an NVR WebAssembly (Wasm) library ❹ to the local HTML5 web browser on the endpoint device, where it is cached for subsequent use. The NVR Wasm code contains an embedded Skia library and the necessary code to unpack, decrypt and “replay” the Skia draw commands from the remote RBI server to the local browser window. WebAssembly’s ability to “execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms”[10] results in near-native drawing performance.
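
Conceptually, the remoting pipeline is serialize, compress, and encrypt on the server, then the reverse on the endpoint. The Python sketch below is purely illustrative: the draw-command format, tokenization, and cipher are placeholders, not S2's actual wire protocol.

import json
import zlib

def pack_draw_commands(commands, encrypt):
    """Serialize, compress, and encrypt a batch of draw commands for the wire."""
    serialized = json.dumps(commands).encode()   # stand-in for NVR tokenization
    return encrypt(zlib.compress(serialized))    # e.g. an AEAD under the session key

def unpack_draw_commands(payload, decrypt):
    """Reverse the pipeline on the endpoint before replaying commands into the local renderer."""
    return json.loads(zlib.decompress(decrypt(payload)))

batch = [{"op": "drawRect", "x": 0, "y": 0, "w": 120, "h": 40},
         {"op": "drawTextBlob", "x": 8, "y": 28, "text": "Hello"}]
identity = lambda data: data                     # placeholder cipher for this sketch
print(unpack_draw_commands(pack_draw_commands(batch, identity), identity))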

The S2 remote browser isolation service uses headless Chromium-based browsers in the cloud, transparently intercepts draw layer output, transmits the draw commands efficiently and securely over the web, and redraws them in the windows of local HTML5 browsers. This architecture has a number of technical advantages:

(1)    Security: the underlying data transport is not an existing attack vector and customers aren’t forced to make a tradeoff between security and performance.

(2)    Website compatibility: there are no website compatibility issues, nor a long tail of chasing evolving web technologies or emerging vulnerabilities.

(3)    Performance: the system is very fast, typically faster than local browsing (subject of a future blog post).

(4)    Transparent user experience: S2 remote browsing feels like native browsing; users are generally unaware when they are browsing remotely.

(5)    Requires less bandwidth than local browsing for most websites. Enables advanced caching and other proprietary optimizations unique to web browsers and the nature of web content and technologies.

(6)    Clientless: leverages existing HTML5 compatible browsers already installed on user endpoint desktop and mobile devices.

(7)    Cost-effective scalability: although the details are beyond the scope of this post, the S2 backend and NVR technology have substantially lower operating costs than existing RBI technologies. Operating costs translate directly to customer costs. The S2 system was designed to make deployment to an entire enterprise and not just targeted users (aka: vaccinating half the class) both feasible and attractive for customers.

(8)    RBI-as-a-platform: enables implementation of related/adjacent services such as DLP, content disarm & reconstruction (CDR), phishing detection and prevention, etc.

S2 Systems Remote Browser Isolation Service and underlying NVR technology eliminates the disconnect between the conceptual potential and promise of browser isolation and the unsatisfying reality of current RBI technologies.

Cloudflare + S2 Systems Remote Browser Isolation

Cloudflare’s global cloud platform is uniquely suited to remote browsing isolation. Seamless integration with our cloud-native performance, reliability and advanced security products and services provides powerful capabilities for our customers.

Our Cloudflare Workers architecture enables edge computing in 200 cities in more than 90 countries and will put a remote browser within 100 milliseconds of 99% of the Internet-connected population in the developed world. With more than 20 million Internet properties directly connected to our network, Cloudflare remote browser isolation will benefit from locally cached data and builds on the impressive connectivity and performance of our network. Our Argo Smart Routing capability leverages our communications backbone to route traffic across faster and more reliable network paths resulting in an average 30% faster access to web assets.

Once it has been integrated with our Cloudflare for Teams suite of advanced security products, remote browser isolation will provide protection from browser exploits, zero-day vulnerabilities, malware and other attacks embedded in web content. Enterprises will be able to secure the browsers of all employees without having to make trade-offs between security and user experience. The service will enable IT control of browser-conveyed enterprise data and compliance oversight. Seamless integration across our products and services will enable users and enterprises to browse the web without fear or consequence.

Cloudflare’s mission is to help build a better Internet. This means protecting users and enterprises as they work and play on the Internet; it means making Internet access fast, reliable and transparent. Reimagining and modernizing how web browsing works is an important part of helping build a better Internet.


[1] https://www.w3.org/History/1989/proposal.html

[2] “Internet World Stats,” https://www.internetworldstats.com/, retrieved 12/21/2019.

[3] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation” (report ID: G00350577), 8 March 2018

[4] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[5] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[6] “2019 Webroot Threat Report: Forty Percent of Malicious URLs Found on Good Domains”, February 28, 2019

[7] “Kleiner Perkins 2018 Internet Trends”, Mary Meeker.

[8] https://www.statista.com/statistics/544400/market-share-of-internet-browsers-desktop/, retrieved December 21, 2019

[9] https://en.wikipedia.org/wiki/Chromium_(web_browser), retrieved December 29, 2019

[10] https://webassembly.org/, retrieved December 30, 2019

Log every request to corporate apps, no code changes required

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/log-every-request-to-corporate-apps-no-code-changes-required/


When a user connects to a corporate network through an enterprise VPN client, this is what the VPN appliance logs:

[Figure: a VPN appliance log showing only the initial connection event]

The administrator of that private network knows the user opened the door at 12:15:05, but, in most cases, has no visibility into what they did next. Once inside that private network, users can reach internal tools, sensitive data, and production environments. Preventing this requires complicated network segmentation, and often server-side application changes. Logging the steps that an individual takes inside that network is even more difficult.

Cloudflare Access does not improve VPN logging; it replaces this model. Cloudflare Access secures internal sites by evaluating every request, not just the initial login, for identity and permission. Instead of a private network, administrators deploy corporate applications behind Cloudflare using our authoritative DNS. Administrators can then integrate their team’s SSO and build user and group-specific rules to control who can reach applications behind the Access Gateway.

When a request is made to a site behind Access, Cloudflare prompts the visitor to login with an identity provider. Access then checks that user’s identity against the configured rules and, if permitted, allows the request to proceed. Access performs these checks on each request a user makes in a way that is transparent and seamless for the end user.
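
As a rough sketch of that per-request evaluation (in Python; the domain, group names, and rule below are invented, and real Access policies are configured in the Cloudflare dashboard rather than in code):

ALLOWED_GROUPS = {"engineering", "it-admins"}    # hypothetical example rule

def is_allowed(identity: dict) -> bool:
    """identity is the SSO profile returned by the identity provider after login."""
    email = identity.get("email", "")
    groups = set(identity.get("groups", []))
    return email.endswith("@widgetcorp.tech") or bool(groups & ALLOWED_GROUPS)

print(is_allowed({"email": "srhea@widgetcorp.tech", "groups": []}))   # True
print(is_allowed({"email": "visitor@example.com", "groups": []}))     # False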

However, since the day we launched Access, our logging has resembled the screenshot above. We captured when a user first authenticated through the gateway, but that’s where it stopped. Starting today, we can give your team the full picture of every request made to every application.

We’re excited to announce that you can now capture logs of every request a user makes to a resource behind Cloudflare Access. In the event of an emergency, like a stolen laptop, you can now audit every URL requested during a session. Logs are standardized in one place, regardless of whether you use multiple SSO providers or secure multiple applications, and the Cloudflare Logpush platform can send them to your SIEM for retention and analysis.

Auditing every login

Cloudflare Access brings the speed and security improvements Cloudflare provides to public-facing sites and applies those lessons to the internal applications your team uses. For most teams, these were applications that traditionally lived behind a corporate VPN. Once a user joined that VPN, they were inside that private network, and administrators had to take additional steps to prevent users from reaching things they should not have access to.

Access flips this model by assuming no user should be able to reach anything by default, applying a zero-trust solution to the internal tools your team uses. With Access, when any user requests the hostname of that application, the request hits Cloudflare first. We check to see if the user is authenticated and, if not, send them to your identity provider, like Okta or Azure Active Directory. The user is prompted to log in, and Cloudflare then evaluates if they are allowed to reach the requested application. All of this happens at the edge of our network before a request touches your origin, and for the user, it feels like the seamless SSO flow they’ve become accustomed to for SaaS apps.


When a user authenticates with your identity provider, we audit that event as a login and make those available in our API. We capture the user’s email, their IP address, the time they authenticated, the method (in this case, a Google SSO flow), and the application they were able to reach.

[Figure: an example Access login event showing the user's email, IP address, time of authentication, method, and application]

These logs can help you track every user who connected to an internal application, including contractors and partners who might use different identity providers. However, this logging stopped at the authentication. Access did not capture the next steps of a given user.

Auditing every request

Cloudflare secures both external-facing sites and internal resources by triaging each request in our network before we ever send it to your origin. Products like our WAF enforce rules to protect your site from attacks like SQL injection or cross-site scripting. Likewise, Access identifies the principal behind each request by evaluating each connection that passes through the gateway.

Once a member of your team authenticates to reach a resource behind Access, we generate a token for that user that contains their SSO identity. The token is structured as a JSON Web Token (JWT), an open standard for signing and encrypting sensitive information. These tokens provide a secure and information-dense mechanism that Access can use to verify individual users. Cloudflare signs the JWT using a public and private key pair that we control. We rely on RSA Signature with SHA-256, or RS256, an asymmetric algorithm, to perform that signature. We make the public key available so that you can validate the token's authenticity as well.
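
For example, an origin that wants to double-check the identity Access asserts could verify the signed token itself. The sketch below uses the PyJWT library; the key file, audience tag, and the way you obtain the token are placeholders to be replaced with values from your own Access configuration.

import jwt   # PyJWT, installed with the "cryptography" extra for RS256 support

PUBLIC_KEY_PEM = open("access-signing-key.pem").read()    # assumption: key downloaded from your Access certs endpoint
EXPECTED_AUD = "your-access-application-aud-tag"          # assumption: your application's audience tag

def verify_access_jwt(token: str) -> dict:
    """Verify the RS256 signature, expiry, and audience, then return the claims."""
    return jwt.decode(
        token,
        PUBLIC_KEY_PEM,
        algorithms=["RS256"],
        audience=EXPECTED_AUD,
    )

# claims = verify_access_jwt(token_from_incoming_request)
# print(claims.get("email"))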

When a user requests a given URL, Access appends the user identity from that token as a request header, which we then log as the request passes through our network. Your team can collect these logs in your preferred third-party SIEM or storage destination by using the Cloudflare Logpush platform.

Cloudflare Logpush can be used to gather and send specific request headers from the requests made to sites behind Access. Once enabled, you can then configure the destination where Cloudflare should send these logs. When enabled with the Access user identity field, the logs will export to your systems as JSON similar to the logs below.

{
   "ClientIP": "198.51.100.206",
   "ClientRequestHost": "jira.widgetcorp.tech",
   "ClientRequestMethod": "GET",
   "ClientRequestURI": "/secure/Dashboard/jspa",
   "ClientRequestUserAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36",
   "EdgeEndTimestamp": "2019-11-10T09:51:07Z",
   "EdgeResponseBytes": 4600,
   "EdgeResponseStatus": 200,
   "EdgeStartTimestamp": "2019-11-10T09:51:07Z",
   "RayID": "5y1250bcjd621y99",
   "RequestHeaders": {"cf-access-user": "srhea"}
}

{
   "ClientIP": "198.51.100.206",
   "ClientRequestHost": "jira.widgetcorp.tech",
   "ClientRequestMethod": "GET",
   "ClientRequestURI": "/browse/EXP-12",
   "ClientRequestUserAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36",
   "EdgeEndTimestamp": "2019-11-10T09:51:27Z",
   "EdgeResponseBytes": 4570,
   "EdgeResponseStatus": 200,
   "EdgeStartTimestamp": "2019-11-10T09:51:27Z",
   "RayID": "yzrCqUhRd6DVz72a",
   "RequestHeaders": {"cf-access-user": "srhea"}
}

In the example above, the user initially visited the splash page for a sample Jira instance. The next request was made to a specific Jira ticket, EXP-12, about 20 seconds after the first request. With per-request logging, Access administrators can review each request a user made once authenticated in the event that an account is compromised or a device stolen.

The logs are consistent across all applications and identity providers. The same standard fields are captured when contractors log in with their Azure AD instance to your supply chain tool as when your internal users authenticate with Okta to reach your Jira. You can also augment the data above with other request details like the TLS cipher used and WAF results.
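
Once the logs land in your own storage, reconstructing a user's trail takes only a few lines of scripting. A minimal sketch in Python, assuming a file of newline-delimited Logpush records like the two shown above:

import json
from collections import defaultdict

requests_by_user = defaultdict(list)

with open("access_requests.log") as f:            # assumption: exported Logpush file, one JSON object per line
    for line in f:
        record = json.loads(line)
        user = record.get("RequestHeaders", {}).get("cf-access-user", "unknown")
        requests_by_user[user].append(
            (record["EdgeStartTimestamp"], record["ClientRequestHost"] + record["ClientRequestURI"])
        )

for user, requests in requests_by_user.items():
    print(f"{user}: {len(requests)} requests")
    for timestamp, url in sorted(requests):
        print(" ", timestamp, url)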

How can this data be used?

The native logging capabilities of hosted applications vary wildly. Some tools provide more robust records of user activity, but others would require server-side code changes or workarounds to add this level of logging. Cloudflare Access can give your team the ability to skip that work and introduce logging in a single gateway that applies to all resources protected behind it.

The audit logs can be exported to third-party SIEM tools or S3 buckets for analysis and anomaly detection. The data can also be used for audit purposes in the event that a corporate device is lost or stolen. Security teams can then use this to recreate user sessions from logs as they investigate.

What’s next?

Any enterprise customer with Logpush enabled can now use this feature at no additional cost. Instructions are available here to configure Logpush and additional documentation here to enable Access per-request logs.

NordVPN Breached

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/nordvpn_breache.html

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.

The Technical Challenges of Building Cloudflare WARP

Post Syndicated from Zack Bloom original https://blog.cloudflare.com/warp-technical-challenges/


If you have seen our other post you know that we released WARP to the last members of our waiting list today. With WARP our goal was to secure and improve the connection between your mobile devices and the Internet. Along the way we ran into problems with phone and operating system versions, diverse networks, and our own infrastructure, all while working to meet the pent up demand of a waiting list nearly two million people long.

To understand all these problems and how we solved them we first need to give you some background on how the Cloudflare network works:

How Our Network Works

The Cloudflare network is composed of data centers located in 194 cities and in more than 90 countries. Every Cloudflare data center is composed of many servers; each data center receives a continual flood of requests and has to distribute those requests between the servers that handle them. We use a set of routers to perform that operation:

[Figure: routers distributing incoming requests across the servers in a Cloudflare data center]

Our routers listen on Anycast IP addresses which are advertised over the public Internet. If you have a site on Cloudflare, your site is available via two of these addresses. In this case, I am doing a DNS query for “workers.dev”, a site which is powered by Cloudflare:

➜ dig workers.dev

;; QUESTION SECTION:
;workers.dev.      IN  A

;; ANSWER SECTION:
workers.dev.    161  IN  A  198.41.215.162
workers.dev.    161  IN  A  198.41.214.162

;; SERVER: 1.1.1.1#53(1.1.1.1)

workers.dev is available at two addresses 198.41.215.162 and 198.41.214.162 (along with two IPv6 addresses available via the AAAA DNS query). Those two addresses are advertised from every one of our data centers around the world. When someone connects to any Internet property on Cloudflare, each networking device their packets pass through will choose the shortest path to the nearest Cloudflare data center from their computer or phone.

Once the packets hit our data center, we send them to one of the many servers which operate there. Traditionally, one might use a load balancer to do that type of traffic distribution across multiple machines. Unfortunately putting a set of load balancers capable of handling our volume of traffic in every data center would be exceptionally expensive, and wouldn’t scale as easily as our servers do. Instead, we use devices built for operating on exceptional volumes of traffic: network routers.

Once a packet hits our data center it is processed by a router. That router sends the traffic to one of a set of servers responsible for handling that address using a routing strategy called ECMP (Equal-Cost Multi-Path). ECMP refers to the situation where the router doesn’t have a clear ‘winner’ between multiple routes: it has multiple good next hops, all to the same ultimate destination. In our case we hack that concept a bit. Rather than using ECMP to balance across multiple intermediary links, we make the intermediary link addresses the final destination of our traffic: our servers.


Here’s the configuration of a Juniper-brand router of the type which might be in one of our data centers, and which is configured to balance traffic across three destinations:

user@router# show routing-options

static {
  route 172.16.1.0/24 next-hop [ 172.16.2.1 172.16.2.2 172.16.2.3 ];
}
forwarding-table {
  export load-balancing-policy;
}

Since the ‘next-hop’ is our server, traffic will be split across multiple machines very efficiently.

TCP, IP, and ECMP

IP is responsible for sending packets of data from addresses like 93.184.216.34 to 208.80.153.224 (or [2606:2800:220:1:248:1893:25c8:1946] to [2620:0:860:ed1a::1] in the case of IPv6) across the Internet. It’s the “Internet Protocol”.

TCP (Transmission Control Protocol) operates on top of a protocol like IP which can send a packet from one place to another, and makes data transmission reliable and useful for more than one process at a time. It is responsible for taking the unreliable and misordered packets that might arrive over a protocol like IP and delivering them reliably, in the correct order. It also introduces the concept of a ‘port’, a number from 1-65535 which helps route traffic on a computer or phone to a specific service (such as the web or email). Each TCP connection has a source and destination port which are included in the header TCP adds to the beginning of each packet. Without the idea of ports it would not be easy to figure out which messages were destined for which program. For example, both Google Chrome and Mail might wish to send messages over your WiFi connection at the same time, so they will each use their own port.
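
You can watch your operating system pick one of these ephemeral source ports for you. A quick Python sketch (example.com is just a placeholder destination):

import socket

# Open a TCP connection to an HTTPS port and inspect the 4-tuple the OS chose for it.
with socket.create_connection(("example.com", 443), timeout=5) as conn:
    src_ip, src_port = conn.getsockname()
    dst_ip, dst_port = conn.getpeername()
    print(f"{src_ip}:{src_port} -> {dst_ip}:{dst_port}")   # the source port is an ephemeral, effectively random value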

Here’s an example of making a request for https://cloudflare.com/ at 198.41.215.162, on the default port for HTTPS: 443. My computer has randomly assigned the port 51602, which it will listen on for a response that will (hopefully) contain the contents of the site:

Internet Protocol Version 4, Src: 19.5.7.21, Dst: 198.41.215.162
    Protocol: TCP (6)
    Source: 19.5.7.21
    Destination: 198.41.215.162
Transmission Control Protocol, Src Port: 51602, Dst Port: 443, Seq: 0, Len: 0
    Source Port: 51602
    Destination Port: 443

Looking at the same request from the Cloudflare side will be a mirror image, a request from my public IP address originating at my source port, destined for port 443 (I’m ignoring NAT for the moment, more on that later):

Internet Protocol Version 4, Src: 198.41.215.162, Dst: 19.5.7.21
    Protocol: TCP (6)
    Source: 198.41.215.162
    Destination: 19.5.7.21
Transmission Control Protocol, Src Port: 443, Dst Port: 51602, Seq: 0, Len: 0
    Source Port: 443
    Destination Port: 51602

We can now return to ECMP! It could be theoretically possible to use ECMP to balance packets between servers randomly, but you would almost never want to do that. A message over the Internet is generally composed of multiple TCP packets. If each packet were sent to a different server it would be impossible to reconstruct the original message in any one place and act on it. Even beyond that, it would be terrible for performance: we rely on being able to maintain long-lived TCP and TLS sessions which require a persistent connection to a single server. To provide that persistence, our routers don’t balance traffic randomly, they use a combination of four values: the source address, the source port, the destination address, and the destination port. Traffic with the same combination of those four values will always make it to the same server. In the case of my example above, all of my messages destined to cloudflare.com will make it to a single server which can reconstruct the TCP packets into my request and return packets in a response.
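
A toy version of that flow-to-server mapping in Python (real routers use their own hardware hash functions rather than SHA-256; the server list reuses the example next-hops from the router configuration earlier):

import hashlib

SERVERS = ["172.16.2.1", "172.16.2.2", "172.16.2.3"]

def pick_server(src_ip, src_port, dst_ip, dst_port):
    """Hash the 4-tuple so packets from the same flow always land on the same server."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

print(pick_server("19.5.7.21", 51602, "198.41.215.162", 443))
print(pick_server("19.5.7.21", 51603, "198.41.215.162", 443))   # a new source port may map to a different server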

Enter WARP

For a conventional request it is very important that our ECMP routing sends all of your packets to the same server for the duration of your request. Over the web a request commonly lasts less than ten seconds and the system works well. Unfortunately we quickly ran into issues with WARP.

WARP uses a session key negotiated with public-key encryption to secure packets. For a successful connection, both sides must negotiate a connection which is then only valid for that particular client and the specific server they are talking to. This negotiation takes time and has to be completed any time a client talks to a new server. Even worse, if packets get sent which expect one server, and end up at another, they can’t be decrypted, breaking the connection. Detecting those failed packets and restarting the connection from scratch takes so much time that our alpha testers experienced it as a complete loss of their Internet connection. As you can imagine, testers don’t leave WARP on very long when it prevents them from using the Internet.

WARP was experiencing so many failures because devices were switching servers much more often than we expected. If you recall, our ECMP router configuration uses a combination of (Source IP, Source Port, Destination IP, Destination Port) to match a packet to a server. Destination IP doesn’t generally change, WARP clients are always connecting to the same Anycast addresses. Similarly, Destination Port doesn’t change, we always listen on the same port for WARP traffic. The other two values, Source IP and Source Port, were changing much more frequently than we had planned.

One source of these changes was expected. WARP runs on cell phones, and cell phones commonly switch from Cellular to Wi-Fi connections. When you make that switch you suddenly go from communicating over the Internet via your cellular carrier’s (like AT&T or Verizon) IP address space to that of the Internet Service Provider your Wi-Fi connection uses (like Comcast or Google Fiber). It’s essentially impossible that your IP address won’t change when you move between connections.

The port changes occurred even more frequently than could be explained by network switches, however. To understand why, we need to introduce one more component of Internet lore: Network Address Translation.

NAT

An IPv4 address is composed of 32 bits (often written as four eight-bit numbers). If you exclude the reserved addresses which can’t be used, you are left with 3,706,452,992 possible addresses. This number has remained constant since IPv4 was deployed on the ARPANET in 1983, even as the number of devices has exploded (although it might go up a bit soon if the 0.0.0.0/8 becomes available). This data is based on Gartner Research predictions and estimates:

[Figure: Gartner estimates of Internet-connected device growth against the fixed pool of IPv4 addresses]

IPv6 is the definitive solution to this problem. It expands the length of an address from 32 to 128 bits, with 125 available in a valid Internet address at the moment (all public IPv6 addresses have the first three bits set to 001; the remaining 87.5% of the IPv6 address space is not considered necessary yet). 2^125 is an impossibly large number and would be more than enough for every device on Earth to have its own address. Unfortunately, 21 years after it was published, IPv6 still remains unsupported on many networks. Much of the Internet still relies on IPv4, and as seen above, there aren’t enough IPv4 addresses for every device to have its own.
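
The difference in scale is easy to check with a quick back-of-the-envelope calculation in Python, using the figures quoted above:

ipv4_total = 2 ** 32            # 4,294,967,296 addresses in total
ipv4_usable = 3_706_452_992     # the figure above, after excluding reserved ranges
ipv6_public = 2 ** 125          # only the 001-prefixed portion of IPv6 space is in use today

print(f"IPv4 usable: {ipv4_usable:,}")
print(f"IPv6 public: {ipv6_public:.3e}")   # roughly 4.25e37, plenty for every device on Earth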

To solve this problem many devices are commonly put behind a single Internet-addressable IP address. A router is used to do Network Address Translation; to take messages which arrive on that single public IP and forward them to the appropriate device on their local network. In effect it’s as if everyone in your apartment building had the same street address, and the postal worker was responsible for sorting out what mail was meant for which person.

When your devices send a packet destined for the Internet your router intercepts it. The router then rewrites the source address to the single public Internet address allocated for you, and the source port to a port which is unique for all the messages being sent across all the Internet-connected devices on your network. Just as your computer chooses a random source port for your messages which was unique between all the different processes on your computer, your router chooses a random source port which is unique for all the Internet connections across your entire network. It remembers the port it is selecting for you as belonging to your connection, and allows the message to continue over the Internet.


When a response arrives destined for the port it has allocated to you, it matches it to your connection and again rewrites it, this time replacing the destination address with your address on the local network, and the destination port with the original source port you specified. It has transparently allowed all the devices on your network to act as if they were one big computer with a single Internet-connected IP address.
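
A toy model of that translation table in Python (the public address and port range are illustrative; real NAT implementations also track protocols, timeouts, and far more state):

import random

PUBLIC_IP = "203.0.113.7"      # example public address shared by the whole network
nat_table = {}                 # public source port -> (private ip, private port)

def outbound(private_ip, private_port):
    """Rewrite a private source (ip, port) to the shared public address and a unique port."""
    free_ports = [p for p in range(49152, 65536) if p not in nat_table]
    public_port = random.choice(free_ports)
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    """Map a response arriving on a public port back to the device that owns the connection."""
    return nat_table.get(public_port)   # None once the router has evicted the mapping

public_src = outbound("192.168.1.23", 51602)
print(public_src, "->", inbound(public_src[1]))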

This process works very well for the duration of a common request over the Internet. Your router only has so much space however, so it will helpfully delete old port assignments, freeing up space for new ones. It generally waits for the connection to not have any messages for thirty seconds or more before deleting an assignment, making it unlikely a response will arrive which it can no longer direct to the appropriate source. Unfortunately, WARP sessions need to last much longer than thirty seconds.

When you next send a message after your NAT session has expired, you are given a new source port. That new port causes your ECMP mapping (based on source IP, source port, destination IP, destination port) to change, causing us to route your requests to a new machine within the Cloudflare data center your messages are arriving at. This breaks your WARP session, and your Internet connection.
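
To see how fragile that is, here is a small sketch of ECMP-style selection: hash the four-tuple and take it modulo the number of machines. The hash and the addresses are stand-ins (a router uses its own hardware hash), but the consequence is the same: change any element of the tuple, such as the source port after a NAT mapping expires, and the packet may land on a different machine.

import Foundation

struct FourTuple: Hashable {
    let sourceIP: String
    let sourcePort: UInt16
    let destinationIP: String
    let destinationPort: UInt16
}

// Pick one of N machines by hashing the four-tuple, roughly what an ECMP router does.
func selectMachine(for tuple: FourTuple, machineCount: Int) -> Int {
    var hasher = Hasher()
    hasher.combine(tuple)
    return Int(UInt(bitPattern: hasher.finalize()) % UInt(machineCount))
}

let before = FourTuple(sourceIP: "198.51.100.9", sourcePort: 41234,
                       destinationIP: "203.0.113.1", destinationPort: 51820)
// The same client after its NAT mapping expired and a new source port was assigned:
let after = FourTuple(sourceIP: "198.51.100.9", sourcePort: 58817,
                      destinationIP: "203.0.113.1", destinationPort: 51820)

print("before:", selectMachine(for: before, machineCount: 100))
print("after: ", selectMachine(for: after, machineCount: 100))  // very likely a different machine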

We experimented extensively with methods of keeping your NAT session fresh by periodically sending keep-alive messages which would prevent routers and mobile carriers from evicting mappings. Unfortunately, waking the radio of your device every thirty seconds takes a real toll on battery life, and it was not entirely successful at preventing port and address changes anyway. We needed a way to always map sessions to the same machine, even as their source port (and even source address) changed.

Fortunately, we had a solution which came from elsewhere at Cloudflare. We don't use dedicated load balancers, but we do have many of the same problems load balancers solve. We have long needed to map traffic to Cloudflare servers with more control than ECMP alone allows. Rather than deploying an entire tier of load balancers, we use every server in our network as a load balancer, forwarding packets first to an arbitrary machine and then relying on that machine to forward the packet to the appropriate host. This consumes minimal resources and allows us to scale our load balancing infrastructure with each new machine we add. We have a lot more to share on how this infrastructure works and what makes it unique; subscribe to this blog to be notified when that post is released.

To make our load balancing technique work, though, we needed a way to identify which client a WARP packet was associated with before it could be decrypted. To understand how we did that it's helpful to understand how WARP encrypts your messages. The industry standard way of connecting a device to a remote network is a VPN. VPNs use a protocol like IPsec to allow your device to send messages securely to a remote network. Unfortunately, VPNs are generally rather disliked. They slow down connections, eat battery life, and their complexity frequently makes them a source of security vulnerabilities. Users of corporate networks which mandate VPNs often hate them, and the idea that we would convince millions of consumers to install one voluntarily seemed ridiculous.

After considering and testing several more modern options, we landed on WireGuard®. WireGuard is a modern, high-performance, and, most importantly, simple protocol created by Jason Donenfeld to solve the same problem. Its original code base is less than 1% the size of a popular IPsec implementation, making it easy for us to understand and secure. We chose Rust as the language most likely to give us the performance and safety we needed, and implemented WireGuard while optimizing the code heavily to run quickly on the platforms we were targeting. Then we open sourced the project.

WireGuard changes two very relevant things about the traffic you send over the Internet. The first is it uses UDP not TCP. The second is it uses a session key negotiated with public-key encryption to secure the contents of that UDP packet.

TCP is the conventional protocol used for loading a website over the Internet. It combines the ability to address ports (which we talked about previously) with reliable delivery and flow control. Reliable delivery ensures that if a message is dropped, TCP will eventually resend the missing data. Flow control gives TCP the tools it needs to handle many clients all sharing the same link and exceeding its capacity. UDP is a much simpler protocol which trades those capabilities for simplicity: it makes a best-effort attempt to send a message, and if the message is dropped or there is too much data for the links, it is simply never heard from again.

UDP’s lack of reliability would normally be a problem while browsing the Internet, but we are not simply sending UDP; we are sending a complete TCP packet _inside_ our UDP packets.

Inside the payload encrypted by WireGuard we have a complete TCP header which contains all the information necessary to ensure reliable delivery. We then wrap it with WireGuard’s encryption and use UDP to (less-than-reliably) send it over the Internet. Should it be dropped, TCP will do its job just as if a network link had lost the message and resend it. If we instead wrapped our inner, encrypted TCP session in another TCP packet, as some other protocols do, we would dramatically increase the number of network messages required, destroying performance.
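
As a rough sketch of that layering, the snippet below builds a WireGuard-style transport data message around an inner packet, following the field layout from the WireGuard specification (a type byte, three reserved bytes, a four-byte receiver index, an eight-byte counter, then the encrypted inner packet). The encrypt function is a stand-in, not the real ChaCha20-Poly1305 step, and the inner packet is just placeholder bytes.

import Foundation

// Stand-in for WireGuard's real AEAD encryption of the inner packet.
func encrypt(_ plaintext: Data, counter: UInt64) -> Data {
    return plaintext  // the real protocol uses ChaCha20-Poly1305 keyed by the session
}

// Wrap a complete inner IP/TCP packet in a transport data message, which then
// travels as the payload of a single UDP datagram.
func transportDataMessage(innerPacket: Data, receiverIndex: UInt32, counter: UInt64) -> Data {
    var message = Data()
    message.append(4)                          // message type 4: transport data
    message.append(contentsOf: [0, 0, 0])      // reserved bytes
    withUnsafeBytes(of: receiverIndex.littleEndian) { message.append(contentsOf: $0) }
    withUnsafeBytes(of: counter.littleEndian)  { message.append(contentsOf: $0) }
    message.append(encrypt(innerPacket, counter: counter))
    return message
}

// The inner packet carries its own IP and TCP headers, so retransmission and flow
// control still come from TCP even though the outer transport is UDP.
let innerTCPPacket = Data(repeating: 0xAB, count: 60)
let udpPayload = transportDataMessage(innerPacket: innerTCPPacket,
                                      receiverIndex: 0x12345678,
                                      counter: 1)
print("UDP payload is \(udpPayload.count) bytes")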

The second interesting component of WireGuard relevant to our discussion is public-key encryption. WireGuard allows you to secure each message you send such that only the specific destination you are sending it to can decrypt it. That is a powerful way of ensuring your security as you browse the Internet, but it means it is impossible to read anything inside the encrypted payload until the message has reached the server which is responsible for your session.

Returning to our load balancing issue, you can see that only three things are accessible to us before we can decrypt the message: the IP header, the UDP header, and the WireGuard header. Neither the IP header nor the UDP header includes the information we need, as we have already failed with the four pieces of information they contain (source IP, source port, destination IP, destination port). That leaves the WireGuard header as the one location where we can find an identifier which can be used to keep track of who the client was before decrypting the message. Unfortunately, there isn’t one. This is the format of the message used to initiate a connection:

[Diagram: the WireGuard handshake initiation message format]

sender looks temptingly like a client id, but it’s randomly assigned at every handshake, and handshakes have to be performed every two minutes to rotate keys, making it insufficiently persistent. We could have forked the protocol to add any number of additional fields, but it is important to us to remain wire-compatible with other WireGuard clients. Fortunately, WireGuard has a three byte block in its header which is not currently used by other clients. We decided to put our identifier in this region and still support messages from other WireGuard clients (albeit with less reliable routing than we can offer). If this reserved section is used for other purposes we can ignore those bits or work with the WireGuard team to extend the protocol in another suitable way.

When we begin a WireGuard session we include our clientid field, which is provided by the authentication server every client must talk to before beginning a WARP session:

[Diagram: the handshake initiation message with our clientid placed in the reserved bytes]

Data messages similarly include the same field:

[Diagram: a transport data message carrying the same clientid field]

It’s important to note that the clientid is only 24 bits long. That means there are fewer possible clientid values than the current number of users waiting to use WARP. This suits us well, as we don’t need or want the ability to track individual WARP users. The clientid is only necessary for load balancing; once it has served its purpose we expunge it from our systems as quickly as we can.

The load balancing system now uses a hash of the clientid to identify which machine a packet should be routed to, meaning WARP messages always arrive at the same machine even as you change networks or move from Wi-Fi to cellular, and the problem was eliminated.
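
A simplified sketch of both halves of that idea, assuming the standard WireGuard layout (a type byte followed by three reserved bytes): stamp a 24-bit identifier into the reserved bytes on the way out, read it back before decryption, and hash it to pick a machine. The hash-mod machine selection here is illustrative; the production system only needs the mapping to be stable for a given clientid.

import Foundation

// Write a 24-bit client identifier into bytes 1-3 of a WireGuard message, the block
// that standard WireGuard leaves reserved (all zeroes).
func stampClientID(_ clientID: UInt32, into message: inout Data) {
    precondition(clientID < (1 << 24), "clientid is only 24 bits")
    message[1] = UInt8((clientID >> 16) & 0xFF)
    message[2] = UInt8((clientID >> 8) & 0xFF)
    message[3] = UInt8(clientID & 0xFF)
}

// Read the identifier back on the receiving side, before any decryption happens.
func readClientID(from message: Data) -> UInt32 {
    return (UInt32(message[1]) << 16) | (UInt32(message[2]) << 8) | UInt32(message[3])
}

// The load balancer only needs a stable clientid-to-machine mapping, so the same
// client always lands on the same machine regardless of source address or port.
func machine(for clientID: UInt32, machineCount: Int) -> Int {
    return Int(clientID) % machineCount
}

var message = Data([1, 0, 0, 0]) + Data(repeating: 0, count: 20)  // type byte, reserved, rest
stampClientID(0x00ABCDEF, into: &message)
let id = readClientID(from: message)
print("clientid \(String(id, radix: 16)) routes to machine \(machine(for: id, machineCount: 48))")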

Client Software

Cloudflare has never developed client software before. We take pride in selling a service anyone can use without needing to buy hardware or provision infrastructure. To make WARP work, however, we needed to deploy our code onto one of the most ubiquitous hardware platforms on Earth: smartphones.

While developing software on mobile devices has gotten steadily easier over the past decade, unfortunately developing low-level networking software remains rather difficult. To consider one example: we began the project using the latest iOS connection API called Network, introduced in iOS 12. Apple strongly recommends the use of Network, in their words “Your customers are going to appreciate how much better your connections, how much more reliable your connections are established, and they’ll appreciate the longer battery life from the better performance.”

The Network framework provides a pleasantly high-level API which, as they say, integrates well with the native performance features built into iOS. Creating a UDP connection (connection is a bit of a misnomer; there are no connections in UDP, just packets) is as simple as:

self.connection = NWConnection(host: hostUDP, port: portUDP, using: .udp)

And sending a message can be as easy as:

self.connection?.send(content: content, completion: .idempotent)

Unfortunately, at a certain point code actually gets deployed, and bug reports begin flowing in. The first issue was that the simplicity of the API made it impossible for us to process more than a single UDP packet at a time. We commonly use packets of up to 1500 bytes; running a speed test on my Google Fiber connection currently results in a speed of 370 Mbps, or almost thirty-one thousand packets per second. Attempting to process each packet individually was slowing down connections by as much as 40%. According to Apple, the best solution to get the performance we needed was to fall back to the older NWUDPSession API, introduced in iOS 9.

IPv6

If we compare the code required to create an NWUDPSession to the example above, you will notice that we suddenly care which protocol, IPv4 or IPv6, we are using:

let v4Session = NWUDPSession(upgradeFor: self.ipv4Session)
v4Session.setReadHandler(self.filteringReadHandler, maxDatagrams: 32)

In fact, NWUDPSession does not handle many of the trickier elements of creating connections over the Internet. For example, the Network framework will automatically determine whether a connection should be made over IPv4 or IPv6:

[Screenshot: the Network framework selecting between IPv4 and IPv6 automatically]

NWUDPSession does not do this for you, so we began creating our own logic to determine which type of connection should be used. Once we began to experiment, it quickly became clear that IPv4 and IPv6 connections are not created equal. It’s fairly common for a route to the same destination to have very different performance depending on whether you use its IPv4 or IPv6 address. Often this is because IPv4 addresses are fewer in number and have been around for longer, making it possible for those routes to be better optimized by the Internet’s infrastructure.

Every Cloudflare product has to support IPv6 as a rule. In 2016, we enabled IPv6 for over 98% of our network, over four million sites, and made a pretty big dent in IPv6 adoption on the web:

[Chart: IPv6 adoption on the web after Cloudflare enabled it across its network]

We couldn’t release WARP without IPv6 support. We needed to ensure that we were always using the fastest possible connection while still supporting both protocols in equal measure. To solve that we turned to a technology we have used with DNS for years: Happy Eyeballs. As codified in RFC 6555, Happy Eyeballs is the idea that you should look for both an IPv4 and an IPv6 address when doing a DNS lookup. Whichever returns first wins. That way IPv6 websites can load quickly even in a world which does not fully support it.

As an example, I am loading the website http://zack.is/. My web browser makes a DNS request for both the IPv4 address (an “A” record) and the IPv6 address (an “AAAA” record) at the same time:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 1.1.1.1
User Datagram Protocol, Src Port: 47447, Dst Port: 53
Domain Name System (query)
    Queries
        zack.is: type A, class IN

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 1.1.1.1
User Datagram Protocol, Src Port: 49946, Dst Port: 53
Domain Name System (query)
    Queries
        zack.is: type AAAA, class IN

In this case the response to the A query returned more quickly, and the connection is begun using that protocol:

Internet Protocol Version 4, Src: 1.1.1.1, Dst: 192.168.7.21
User Datagram Protocol, Src Port: 53, Dst Port: 47447
Domain Name System (response)
    Queries
        zack.is: type A, class IN
    Answers
        zack.is: type A, class IN, addr 104.24.101.191
       
Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.24.101.191
Transmission Control Protocol, Src Port: 55244, Dst Port: 80, Seq: 0, Len: 0
    Source Port: 55244
    Destination Port: 80
    Flags: 0x002 (SYN)

We don’t need to do DNS queries to make WARP connections, since we already know the IP addresses of our data centers, but we do want to know which of the IPv4 and IPv6 addresses will lead to a faster route over the Internet. To accomplish that we perform the same technique, but at the network level: we send a packet over each protocol and use whichever protocol returns first for subsequent messages. With some error handling and logging removed for brevity, it appears as:

let raceFinished = Atomic<Bool>(false)

let happyEyeballsRacer: (NWUDPSession, NWUDPSession, String) -> Void = {
    (session, otherSession, name) in
    // Session is the session the racer runs for, otherSession is a session we race against

    let handleMessage: ([Data]) -> Void = { datagrams in
        // This handler will be executed twice, once for the winner, again for the loser.
        // It does not matter what reply we received. Any reply means this connection is working.

        if raceFinished.swap(true) {
            // This racer lost
            return self.filteringReadHandler(data: datagrams, error: nil)
        }

        // The winner becomes the current session
        self.wireguardServerUDPSession = session

        session.setReadHandler(self.readHandler, maxDatagrams: 32)
        otherSession.setReadHandler(self.filteringReadHandler, maxDatagrams: 32)
    }

    session.setReadHandler({ (datagrams) in
        handleMessage(datagrams)
    }, maxDatagrams: 1)

    if !raceFinished.value {
        // Send a handshake message
        session.writeDatagram(onViable())
    }
}

This technique successfully allows us to support IPv6 addressing. In fact, every device which uses WARP instantly supports IPv6 addressing, even on networks which don’t have support. Using WARP takes the 34% of Comcast’s network which doesn’t support IPv6, or the 69% of Charter’s network which doesn’t (as of 2018), and allows those users to communicate with IPv6 servers successfully.

This test shows my phone’s IPv6 support before and after enabling WARP:

[Screenshots: an IPv6 test on my phone before and after enabling WARP]

Dying Connections

Nothing is simple, however: with iOS 12.2, NWUDPSession began to trigger errors which killed connections. These errors were only identified with a code ‘55’. After some research, it appears 55 has referred to the same error since the early foundations of the FreeBSD operating system that OS X was originally built upon. In FreeBSD it’s commonly referred to as ENOBUFS, and it’s returned when the operating system does not have sufficient BUFfer Space to handle the operation being completed. For example, looking at the FreeBSD source today, you see this code in its IPv6 implementation:

[Code excerpt: FreeBSD’s IPv6 implementation returning ENOBUFS when header space cannot be allocated]

In this example, if enough memory cannot be allocated to accommodate the size of an IPv6 and ICMP6 header, the error ENOBUFS (which is mapped to the number 55) will be returned. Unfortunately, Apple’s take on FreeBSD is not open source: how, when, and why they might be returning the error is a mystery. This error has been experienced by other UDP-based projects, but a resolution is not forthcoming.

What is clear is that once an error 55 begins occurring, the connection is no longer usable. To handle this case we need to reconnect, but repeating the same Happy Eyeballs mechanic we use on initial connection is both unnecessary (we were already talking over the fastest connection) and a waste of valuable time. Instead we added a second connection method which is only used to recreate an already working session:

/**
Create a new UDP connection to the server using a Happy Eyeballs like heuristic.

This function should be called when first establishing a connection to the edge server.

It will initiate a new connection over IPv4 and IPv6 in parallel, keeping the connection that receives the first response.
*/

func connect(onViable: @escaping () -> Data, onReply: @escaping () -> Void, onFailure: @escaping () -> Void, onDisconnect: @escaping () -> Void)

/**
Recreate the current connections.

This function should be called as a response to error code 55, when a quick connection is required.

Unlike `happyEyeballs`, this function will use viability as its only success criteria.
*/

func reconnect(onViable: @escaping () -> Void, onFailure: @escaping () -> Void, onDisconnect: @escaping () -> Void)

Using reconnect we are able to recreate sessions broken by code 55 errors, but it still adds a latency hit which is not ideal. As with all client software development on a closed-source platform, however, we are dependent on the platform vendor to identify and fix platform-level bugs.
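
As a rough illustration of how these two entry points might be wired together, here is a hypothetical error handler. The connect and reconnect bodies are placeholders standing in for the signatures above, and the error check mirrors the POSIX code 55 described earlier; none of this is the actual WARP client code.

import Foundation

final class TunnelConnection {
    private(set) var healthy = false

    // Placeholder for the Happy Eyeballs connect shown above: race IPv4 and IPv6
    // and keep whichever session answers first.
    func connect(onViable: @escaping () -> Void, onFailure: @escaping () -> Void) {
        healthy = true
        onViable()
    }

    // Placeholder for the fast path: rebuild the existing session without re-racing,
    // since we already know which protocol was fastest.
    func reconnect(onViable: @escaping () -> Void, onFailure: @escaping () -> Void) {
        healthy = true
        onViable()
    }

    // Error 55 (ENOBUFS) leaves the session unusable, so the only remedy is to
    // rebuild it; other errors are handled elsewhere.
    func handleSendError(_ error: NSError) {
        guard error.code == 55 else { return }
        healthy = false
        reconnect(onViable: { print("session rebuilt after error 55") },
                  onFailure: { print("reconnect failed, will retry") })
    }
}

let tunnel = TunnelConnection()
tunnel.connect(onViable: { print("initial session established") },
               onFailure: { print("could not connect") })
tunnel.handleSendError(NSError(domain: NSPOSIXErrorDomain, code: 55, userInfo: nil))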

Truthfully, this is just one of a long list of platform-specific bugs we ran into building WARP. We hope to continue working with device vendors to get them fixed. There is an unimaginable number of device and connection combinations, and each connection doesn’t just exist at one moment in time; connections are always changing, entering and leaving broken states almost faster than we can track. Even now, getting WARP to work on every device and connection on Earth is not a solved problem; we still get daily bug reports which we work to triage and resolve.

WARP+

WARP is meant to be a place where we can apply optimizations which make the Internet better. We have a lot of experience making websites more performant; WARP is our opportunity to experiment with doing the same for all Internet traffic.

At Cloudflare we have a product called Argo. Argo makes websites’ time to first byte more than 30% faster on average by continually monitoring thousands of routes over the Internet between our data centers. That data builds a database which maps every IP address range with the fastest possible route to every destination. When a packet arrives it first reaches the closest data center to the client, then that data center uses data from our tests to discover the route which will get the packet to its destination with the lowest possible latency. You can think of it like a traffic-aware GPS for the Internet.
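
Conceptually, the route database behaves like the sketch below: measured latencies toward a destination prefix feed a lookup that picks the cheapest path. The data-center names and numbers are made up for illustration; the real system continually re-measures thousands of routes and works on Cloudflare's full routing table.

import Foundation

// Measured latency, in milliseconds, from the data center handling the packet to
// the destination prefix over each candidate path.
let measuredRoutes: [String: [String: Double]] = [
    "203.0.113.0/24": ["via San Jose": 48.2, "via Frankfurt": 96.7, "direct": 71.5],
    "198.51.100.0/24": ["via Singapore": 88.1, "direct": 64.0],
]

// Pick the lowest-latency path we have measurements for, falling back to the
// default path when the destination prefix is unknown.
func fastestRoute(toPrefix prefix: String) -> String {
    guard let candidates = measuredRoutes[prefix],
          let best = candidates.min(by: { $0.value < $1.value }) else {
        return "direct"
    }
    return best.key
}

print(fastestRoute(toPrefix: "203.0.113.0/24"))  // "via San Jose"
print(fastestRoute(toPrefix: "192.0.2.0/24"))    // unknown prefix, falls back to "direct"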

Argo has historically only operated on HTTP packets. HTTP is the protocol which powers the web, sending messages which load websites on top of TCP and IP. For example, if I load http://zack.is/, an HTTP message is sent inside a TCP packet:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.24.101.191
Transmission Control Protocol, Src Port: 55244, Dst Port: 80
    Source Port: 55244
    Destination Port: 80
    TCP payload (414 bytes)
Hypertext Transfer Protocol
    GET / HTTP/1.1\r\n
    Host: zack.is\r\n
    Connection: keep-alive\r\n
    Accept-Encoding: gzip, deflate\r\n
    Accept-Language: en-US,en;q=0.9\r\n
    \r\n

The modern and secure web presents a problem for us, however: when I make the same request over HTTPS (https://zack.is) rather than plain HTTP (http://zack.is), I see a very different result over the wire:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.25.151.102
Transmission Control Protocol, Src Port: 55983, Dst Port: 443
    Source Port: 55983
    Destination Port: 443
    Transport Layer Security
    TCP payload (54 bytes)
Transport Layer Security
    TLSv1.2 Record Layer: Application Data Protocol: http-over-tls
        Encrypted Application Data: 82b6dd7be8c5758ad012649fae4f469c2d9e68fe15c17297…

My request has been encrypted! It’s no longer possible for WARP (or anyone but the destination) to tell what is in the payload. It might be HTTP, but it also might be any other protocol. If my site is one of the twenty million which already use Cloudflare, we can decrypt the traffic and accelerate it (along with a long list of other optimizations). But for encrypted traffic destined elsewhere, the existing HTTP-only Argo technology was not going to work.

Fortunately we now have a good amount of experience working with non-HTTP traffic through our Spectrum and Magic Transit products. To solve our problem the Argo team turned to the CONNECT protocol.

As we now know, when a WARP request is made it first communicates over the WireGuard protocol to a server running in one of our 194 data centers around the world. Once the WireGuard message has been decrypted, we examine the destination IP address to see if it is an HTTP request destined for a Cloudflare-powered site, or a request destined elsewhere. If it’s destined for us it enters our standard HTTP serving path; often we can reply to the request directly from our cache in the very same data center.
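
A stripped-down sketch of that branch: after decryption, check whether the destination falls inside one of the prefixes we serve and choose the handling path accordingly. The prefix list and names below are placeholders; the real check runs against Cloudflare's full announced address space, including IPv6.

import Foundation

// A tiny stand-in for the set of address prefixes served by Cloudflare.
let servedPrefixes = ["104.16.", "104.24.", "172.64."]

enum HandlingPath {
    case serveFromLocalDataCenter  // an HTTP request for a Cloudflare-powered site
    case forwardToProxy            // anything else: hand off to the proxy process
}

// A naive string-prefix check stands in for a proper longest-prefix-match lookup.
func route(destinationIP: String) -> HandlingPath {
    if servedPrefixes.contains(where: { destinationIP.hasPrefix($0) }) {
        return .serveFromLocalDataCenter
    }
    return .forwardToProxy
}

print(route(destinationIP: "104.24.101.191"))  // serveFromLocalDataCenter
print(route(destinationIP: "8.54.232.11"))     // forwardToProxy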

If it’s not destined for a Cloudflare-powered site we instead forward the packet to a proxy process which runs on each machine. This proxy is responsible for loading the fastest path from our Argo database and beginning an HTTP session with a machine in the data center this traffic should be forwarded to. It uses the CONNECT command to both transmit metadata (as headers) and turn the HTTP session into a connection which can transmit the raw bytes of the payload:

CONNECT 8.54.232.11:5564 HTTP/1.1\r\n
Exit-Tcp-Keepalive-Duration: 15\r\n
Application: warp\r\n
\r\n
<data to send to origin>
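
As a hedged sketch of how a proxy might assemble that preamble before streaming the raw payload: the header names mirror the example above, but the function and its parameters are illustrative rather than the actual proxy implementation.

import Foundation

// Build the CONNECT preamble that turns the HTTP session into a raw byte stream
// toward the origin; everything written after the blank line is forwarded as-is.
func connectPreamble(originIP: String, originPort: UInt16, keepAliveSeconds: Int) -> Data {
    let request = [
        "CONNECT \(originIP):\(originPort) HTTP/1.1",
        "Exit-Tcp-Keepalive-Duration: \(keepAliveSeconds)",
        "Application: warp",
        "",
        ""
    ].joined(separator: "\r\n")
    return Data(request.utf8)
}

let preamble = connectPreamble(originIP: "8.54.232.11", originPort: 5564, keepAliveSeconds: 15)
// The proxy writes the preamble first, then streams the client's raw bytes behind it.
print(String(decoding: preamble, as: UTF8.self))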

Once the message arrives at the destination data center it is either forwarded to yet another data center (if that is best for performance), or sent directly to the origin which is awaiting the traffic.

[Diagram: WARP+ traffic forwarded between Cloudflare data centers along the fastest measured route]

Smart routing is just the beginning of WARP+; we have a long list of projects and plans which are all aimed at making your Internet faster, and we couldn’t be more thrilled to finally have a platform to test them with.

Our Mission

Today, after well over a year of development, WARP is available to you and to your friends and family. For us, though, this is just the beginning. With the ability to improve the full network connection for all traffic, we unlock a whole new world of optimizations and security improvements which were simply impossible before. We couldn’t be more excited to experiment, play, and eventually release all sorts of new WARP and WARP+ features.

Cloudflare’s mission is to help build a better Internet. If we are willing to experiment and solve hard technical problems together we believe we can help make the future of the Internet better than the Internet of today, and we are all grateful to play a part in that. Thank you for trusting us with your Internet connection.

WARP was built by Vlad Krasnov, Chris Branch, Dane Knecht, Naga Tripirineni, Andrew Plunk, Adam Schwartz, Irtefa, and intern Michelle Chen with support from members of our Austin, San Francisco, Champaign, London, Warsaw, and Lisbon offices.

WARP is here (sorry it took so long)

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/announcing-warp-plus/

Today, after a longer than expected wait, we’re opening WARP and WARP Plus to the general public. If you haven’t heard about it yet, WARP is a mobile app designed for everyone which uses our global network to secure all of your phone’s Internet traffic.

We announced WARP on April 1 of this year and expected to roll it out over the next few months at a fairly steady clip and get it released to everyone who wanted to use it by July. That didn’t happen. It turned out that building a next generation service to secure consumer mobile connections without slowing them down or burning battery was… harder than we originally thought.

Before today, there were approximately two million people on the waitlist to try WARP. That demand blew us away. It also embarrassed us. The common refrain is consumers don’t care about their security and privacy, but the attention WARP got proved to us how wrong that assumption actually is.

This post is an explanation of why releasing WARP took so long, what we’ve learned along the way, and an apology for those who have been eagerly waiting. It also talks briefly about the rationale for why we built WARP as well as the privacy principles we’ve committed to. However, if you want a deeper dive on those last two topics, I encourage you to read our original launch announcement.

And, if you just want to jump in and try it, you can download and start using WARP on your iOS or Android devices for free today.

If you’ve already installed the 1.1.1.1 App on your device, you may need to update to the latest version in order to get the option to enable WARP.

Mea Culpa

Let me start with the apology. We are sorry making WARP available took far longer than we ever intended. As a way of hopefully making amends, for everyone who was on the waitlist before today, we’re giving 10 GB of WARP Plus — the even faster version of WARP that uses Cloudflare’s Argo network — to those of you who have been patiently waiting.

For people just signing up today, the basic WARP service is free, without bandwidth caps or limitations. WARP Plus, the even faster version of WARP, is available for an optional monthly subscription fee. The fee varies by region and is designed to approximate what a McDonald’s Big Mac would cost in that region. On iOS, the WARP Plus pricing as of the publication of this post is still being adjusted on a regional basis, but that should settle out in the next couple of days.

WARP Plus uses Cloudflare’s virtual private backbone, known as Argo, to achieve higher speeds and ensure your connection is encrypted across the long haul of the Internet. We charge for it because it costs us more to provide. However, in order to help spread the word about WARP, you can earn 1GB of WARP Plus for every friend you refer to sign up for WARP. And everyone you refer gets 1GB of WARP Plus for free to get started as well.

Okay, Thanks, That’s Nice, But What Took You So Long?

So what took us so long?

WARP is an ambitious project. We set out to secure Internet connections from mobile devices to the edge of Cloudflare’s network. In doing so, however, we didn’t want to slow devices down or burn excess battery. We wanted it to just work. We also wanted to bet on the technology of the future, not the technology of the past. Specifically, we wanted to build not around legacy protocols like IPsec, but instead around the hyper-efficient WireGuard protocol.

At some level, we thought it would be easy. We already had the 1.1.1.1 App that was securing DNS requests running on millions of mobile devices. That worked great. How much harder could securing all the rest of the requests on a device be? Right??

It turns out, a lot. Zack Bloom has written up a great technical post describing many of the challenges we faced and the solutions we had to invent to deal with them. If you’re interested, I encourage you to check it out.

Some highlights:

Apple threw us a curveball by releasing iOS 12.2 just days before the April 1 planned roll out. The new version of iOS significantly changed the underlying network stack implementation in a way that made some of what we were doing to implement WARP unstable. Ultimately we had to find work-arounds in our networking code, costing us valuable time.

We had a version of the WARP app that (kind of) worked on April 1. But, when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we’d anticipated. The Internet is made up of diverse network components which do not always play nicely; we knew that. What we didn’t expect was how much more pain is introduced by the diversity of mobile carriers, mobile operating systems, and mobile device models.

And, while phones in our testbed were relatively stationary, phones in the real world move around — a lot. When they do, their network settings can change wildly. While that doesn’t matter much for stateless, simple DNS queries, it makes things complex for the rest of Internet traffic. Keeping WireGuard fast requires long-lived sessions between your phone and a server in our network, and maintaining that for hours and days was very complex. Even beyond that, we use a technology called Anycast to route your traffic to our network. Anycast means your traffic can move not just between machines, but between entire data centers. That made things very complex.

Overcoming Challenges

But there is a huge difference between hard and impossible. From long before the announcement, the team has been hard at work and I’m deeply proud of what they’ve accomplished. We changed our roll out plan to focus on iOS and solidify the shared underpinnings of the app to ensure it would work even with future network stack upgrades. We invited beta users not in the order of when they signed up, but instead based on networks where we didn’t yet have information to help us discover as many corner cases as possible. And we invented new technologies to keep session state even when the wild west of mobile networks and Anycast routing collide.

I’ve been running WARP on my phone since April 1. The first few months were… rough. Really rough. But, today, WARP has blended into the background of my mobile. And I sleep better knowing that my Internet connections from my phone are secure. Using my phone is as fast, and in some cases faster, than without WARP. In other words, WARP today does what we set out to accomplish: securing your mobile Internet connection and otherwise getting out of the way.

There Will Be Bugs

While WARP is a lot better than it was when we first announced it, we know there are still bugs. The most common bug we’re seeing these days is when WARP is significantly slower than using the mobile Internet without WARP. This is usually due to traffic being misrouted. For instance, we discovered a network in Turkey earlier this week that was being routed to London rather than our local Turkish facility. Once we’re aware of these routing issues we can typically fix them quickly.

Other common bugs involved captive portals — the pages where you have to enter information when connecting to, for instance, a hotel WiFi. We’ve fixed a lot of them, but we haven’t had WARP users connecting to every hotel WiFi yet, so there will inevitably still be some that are broken.

We’ve made it easy to report issues that you discover. From the 1.1.1.1 App you can click on the little bug icon near the top of the screen, or just shake your phone with the app open, and quickly send us a report. We expect, over the weeks ahead, we’ll be squashing many of the bugs that you report.

Even Faster With Plus

WARP is not just a product; it’s a testbed for all of the Internet-improving technology we have spent years developing. One dream was to use our Argo routing technology to allow all of your Internet traffic to use faster, less-congested routes through the Internet. When used by Cloudflare customers over the past several years, Argo has improved the speed of their websites by an average of over 30%. Through some hard work by the team, we are making that technology available to you as WARP Plus.

The WARP Plus technology is not without cost for us. Routing your traffic over our network often costs us more than if we release it directly to the Internet. To cover those costs we charge a monthly fee — $4.99/month or less — for WARP Plus. The fee depends on the region that you’re in and is intended to approximate what a Big Mac would cost in the same region.

Basic WARP is free. Our first priority is not to make money off of WARP, however; we want to grow it to secure every single phone. To help make that happen, we wanted to give you an incentive to share WARP with your friends. You can earn 1GB of free WARP Plus for every person you share WARP with, and everyone you refer also gets 1GB of WARP Plus for free. There is no limit on how much WARP Plus data you can earn by sharing.

Privacy First

The free consumer security space has traditionally not been the most reputable. Many other companies have promised to keep consumers’ data safe but instead built businesses around selling it, or using it to help target you with advertising. We think that’s disgusting. That is not Cloudflare’s business model and it never will be. WARP continues all the strong privacy protections that 1.1.1.1 launched with, including:

  1. We don’t write user-identifiable log data to disk;
  2. We will never sell your browsing data or use it in any way to target you with advertising;
  3. You don’t need to provide any personal information — not your name, phone number, or email address — in order to use WARP or WARP Plus; and
  4. We will regularly work with outside auditors to ensure we’re living up to these promises.

What WARP Is Not

From a technical perspective, WARP is a VPN. But it is designed for a very different audience than a traditional VPN. WARP is not designed to allow you to access geo-restricted content when you’re traveling. It will not hide your IP address from the websites you visit. If you’re looking for that kind of high-security protection then a traditional VPN or a service like Tor are likely better choices for you.

WARP, instead, is built for the average consumer. It’s built to ensure that your data is secured while it’s in transit. So the networks between you and the applications you’re using can’t spy on you. It will help protect you from people sniffing your data while you’re at a local coffee shop. It will also help ensure that your ISP isn’t hoovering up data on your browsing patterns to sell to advertisers.

WARP isn’t designed for the ultra-techie who wants to specify exactly what server their traffic will be routed through. There’s basically only one button in the WARP interface: ON or OFF. It’s simple on purpose. It’s designed for my mom and dad who ask me every holiday dinner what they can do to be a bit safer online. I’m excited this year to have something easy for them to do: install the 1.1.1.1 App, enable WARP, and rest a bit easier.

How Fast Is It?

Once we got WARP to a stable place, this was my first question. My initial inclination was to go to one of the many Speed Test sites and see the results. And the results were… weird. Sometimes much faster, sometimes much slower. Overall, they didn’t make a lot of sense. The reason why is that these sites are designed to measure the speed of your ISP. WARP is different, so these test sites don’t give particularly accurate readings.

The better test is to visit common sites around the Internet and see how they load, in real conditions, on WARP versus off. We’ve built a tool that does this. Generally, in our tests, WARP is around the same speed as non-WARP connections when you’re on a high performance network. As network conditions get worse, WARP will often improve performance more. But your experience will depend on the particular conditions of your network.

We plan, in the next few weeks, to expose the test tool within the 1.1.1.1 App so you can see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus. And, again, if you’re seeing particularly poor performance, please report it to us. Our goal is to provide security without slowing you down or burning excess battery. We can already do that for many networks and devices and we won’t rest until we can do it for everyone.

Here’s to a More Secure, Fast Internet

Cloudflare’s mission is to help build a better Internet. We’ve done that by securing millions of Internet properties and making them more performant since we launched almost exactly 9 years ago. WARP furthers Cloudflare’s mission by extending our network to help make every consumer’s mobile device a bit more secure. Our team is proud of what we’ve built with WARP — albeit a bit embarrassed it took us so long to get it into your hands. We hope you’ll forgive us for the delay, give WARP a try, and let us know what you think.

Securing infrastructure at scale with Cloudflare Access

Post Syndicated from Jeremy Bernick original https://blog.cloudflare.com/access-wildcard-subdomain/

I rarely have to deal with the hassle of using a corporate VPN and I hope it remains this way. As a new member of the Cloudflare team, that seems possible. Coworkers who joined a few years ago did not have that same luck. They had to use a VPN to get any work done. What changed?

Cloudflare released Access, and now we’re able to do our work without ever needing a VPN again. Access is a way to control access to your internal applications and infrastructure. Today, we’re releasing a new feature to help you replace your VPN by deploying Access at an even greater scale.

Access in an instant

Access replaces a corporate VPN by evaluating every request made to a resource secured behind Access. Administrators can make web applications, remote desktops, and physical servers available at dedicated URLs, configured as DNS records in Cloudflare. These tools are protected via access policies, set by the account owner, so that only authenticated users can access those resources. End users can be authenticated over both HTTPS and SSH requests. They’re prompted to log in with their SSO credentials, and Access redirects them to the application or server.

For your team, Access makes the internal web applications and servers in your infrastructure feel as seamless to reach as your SaaS tools. Originally we built Access to replace our own corporate VPN. In practice, it became the fastest way to control who can reach different pieces of our own infrastructure. However, administrators configuring Access were required to create a discrete policy for each application or hostname. Now, administrators don’t have to create a dedicated policy for each new resource secured by Access; one policy can cover every protected URL.

When Access launched, the product’s primary use case was to secure internal web applications. Creating unique rules for each was tedious, but manageable. Access has since become a centralized way to secure infrastructure in many environments. Now that companies are using Access to secure hundreds of resources, that method of building policies no longer fits.

Starting today, Access users can build policies using a wildcard subdomain, replacing the typical bottleneck of maintaining dozens or even hundreds of bespoke rules with a single policy. With a wildcard, the same ruleset will now automatically apply to any subdomain your team generates that is gated by Access.

How can teams deploy at scale with wildcard subdomains?

Administrators can secure their infrastructure with a wildcard policy in the Cloudflare dashboard. With Access enabled, Cloudflare adds identity-based evaluation to that traffic.

In the Access dashboard, you can now build a rule to secure any subdomain of the site you added to Cloudflare. Create a new policy and enter a wildcard tag (“*”) into the subdomain field. You can then configure rules, at a granular level, using your identity provider to control who can reach any subdomain of that apex domain.

[Screenshot: an Access policy configured with a wildcard subdomain in the Cloudflare dashboard]

This new policy will propagate to all 180 of Cloudflare’s data centers in seconds and any new subdomains created will be protected.

How are teams using it?

Since releasing this feature in a closed beta, we’ve seen teams use it to gate access to their infrastructure in several new ways. Many teams use Access to secure dev and staging environments of sites that are being developed before they hit production. Whether for QA or collaboration with partner agencies, Access helps make it possible to share sites quickly with a layer of authentication. With wildcard subdomains, teams are deploying dozens of versions of new sites at new URLs without needing to touch the Access dashboard.

For example, an administrator can create a policy for “*.example.com” and then developers can deploy iterations of sites at “dev-1.example.com” and “dev-2.example.com” and both inherit the global Access policy.
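
In effect, the policy lookup reduces to a suffix match like the sketch below. The helper name and the decision to exclude the apex domain are assumptions made for illustration, not the precise matching rules Access applies.

import Foundation

// Does a hostname fall under a wildcard policy such as "*.example.com"?
// In this sketch the wildcard covers any subdomain but not the apex domain itself.
func matchesWildcardPolicy(_ hostname: String, policy: String) -> Bool {
    guard policy.hasPrefix("*.") else { return hostname == policy }
    let suffix = String(policy.dropFirst(1))  // ".example.com"
    return hostname.hasSuffix(suffix) && hostname.count > suffix.count
}

for host in ["dev-1.example.com", "dev-2.example.com", "example.com", "evil-example.com"] {
    print(host, matchesWildcardPolicy(host, policy: "*.example.com"))
}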

The feature is also helping teams lock down their entire hybrid, on-premise, or public cloud infrastructure with the Access SSH feature. Teams can assign dynamic subdomains to their entire fleet of servers, regardless of environment, and developers and engineers can reach them over an SSH connection without a VPN. Administrators can now bring infrastructure online, in an entirely new environment, without additional or custom security rules.

What about creating DNS records?

Cloudflare Access requires users to associate a resource with a domain or subdomain. While the wildcard policy will cover all subdomains, teams will still need to connect their servers to the Cloudflare network and generate DNS records for those services.

Argo Tunnel can reduce that burden significantly. Argo Tunnel lets you expose a server to the Internet without opening any inbound ports. The service runs a lightweight daemon on your server that initiates outbound tunnels to the Cloudflare network.

Instead of managing DNS, network, and firewall complexity, Argo Tunnel helps administrators serve traffic from their origin through Cloudflare with a single command. That single command will generate the DNS record in Cloudflare automatically, allowing you to focus your time on building and managing your infrastructure.

What’s next?

More teams are adopting a hybrid or multi-cloud model for deploying their infrastructure. In the past, these teams were left with just two options for securing those resources: peering a VPN with each provider or relying on custom IAM flows with each environment. In the end, both of these solutions were not only quite costly but also equally unmanageable.

While infrastructure benefits from becoming distributed, security is something that is best when controlled in a single place. Access can consolidate how a team controls who can reach their entire fleet of servers and services.