Developments in the FOSS response to Copilot and related technologies

Post Syndicated from original https://lwn.net/Articles/886129/

Back in July, the Free Software Foundation (FSF) put out a call for white papers to explore the issues around GitHub’s Copilot AI-assisted programming tool, especially with regard to copyleft licensing; each selected white paper was awarded $500. The FSF has now published five of the submissions that the organization thought “advanced discussion of important questions, and did so clearly”.

The Free Software Foundation (FSF)
In our call for papers, we set forth several areas of interest. Most of these areas centered around copyright law, questions of ownership for AI-generated code, and legal impacts for GitHub authors who use a GNU or other copyleft license(s) for their works. We are pleased to announce the community-provided research into these areas, and much more.

First, we want to thank everyone who participated by sending in their papers. We received a healthy response of twenty-two papers from members of the community. The papers weighed in on the multiple areas of interest we had indicated in our announcement. Using an anonymous review process, we concluded there were five papers that would be best suited to inform the community and foster critical conversations to help guide our actions in the search for solutions.

One of the submissions published was from Bradley M. Kuhn, Policy Fellow at Software Freedom Conservancy; that organization has announced the formation of a committee to “develop recommendations and plans for a Free and Open Source Software (FOSS) community response to the use of machine learning tools for code generation and authorship”. A public ai-assist mailing list has been set up for discussions. “The inaugural members of the Committee are: Matthew Garrett, Benjamin Mako Hill, Bradley M. Kuhn, Heiki Lõhmus, Allison Randal, Karen M. Sandler, Slavina Stefanova, John Sullivan, David ‘Novalis’ Turner, and Stefano ‘Zack’ Zacchiroli.”

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/886124/

Security updates have been issued by Fedora (dotnet6.0, kernel, libarchive, libxml2, and wireshark), openSUSE (opera), Oracle (cyrus-sasl), Red Hat (cyrus-sasl, python-pillow, and ruby:2.5), Scientific Linux (cyrus-sasl), and Ubuntu (snapd).

Privacy Violating COVID Tests

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/privacy-violating-covid-tests.html

A good lesson in reading the fine print:

Cignpost Diagnostics, which trades as ExpressTest and offers £35 tests for holidaymakers, said it holds the right to analyse samples from swabs to “learn more about human health” — and sell information on to third parties.

Individuals are required to give informed consent for their sensitive medical data to be used, but customers’ consent for their DNA to be sold is buried in Cignpost’s online documents.

Of course, no one ever reads the fine print.

Zabbix security advisories regarding CVE-2022-23131 and CVE-2022-23134

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-security-advisories-regarding-cve-2022-23131-and-cve-2022-23134/19720/

Here at Zabbix, the security of our product is our top priority. It has come to our attention that two potential CVE issues have been highlighted in tech media outlets – CVE-2022-23131 and CVE-2022-23134.

The more critical issue, CVE-2022-23131, affects only Zabbix instances where SAML SSO authentication is in use, while CVE-2022-23134 affects Zabbix 5.4.x releases older than Zabbix 5.4.9.

Zabbix is aware of the following vulnerabilities, which have since been fixed in Zabbix version 5.4.9 and the stable release of Zabbix 6.0 LTS.

  • CVE-2022-23131 – Unsafe client-side session storage leading to authentication bypass/instance takeover via Zabbix Frontend with configured SAML
    • Affected versions: 5.4.0 – 5.4.8; 6.0.0alpha1
  • CVE-2022-23134 – Possible view of the setup pages by unauthenticated users if config file already exists
    • Affected versions: 5.4.0 – 5.4.8; 6.0.0 – 6.0.0beta1
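As an illustration of how affected ranges like these are consumed by tooling, the sketch below is a hypothetical helper (not an official Zabbix utility) that checks whether a plain dotted version string falls inside an inclusive affected range; pre-release suffixes such as `6.0.0alpha1` are out of scope for this deliberately simplified parser.

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version like '5.4.8' into a comparable tuple of ints."""
    return tuple(int(piece) for piece in v.split("."))

def is_affected(version: str, low: str = "5.4.0", high: str = "5.4.8") -> bool:
    """True if `version` falls inside the inclusive range [low, high]."""
    return parse_version(low) <= parse_version(version) <= parse_version(high)

print(is_affected("5.4.3"))  # inside the affected range
print(is_affected("5.4.9"))  # the fixed release
```

In practice an inventory script would run such a check against the versions reported by each instance and flag any that need upgrading.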

We urge everyone using the SAML SSO authentication features in their environment to update their Zabbix instance to one of the aforementioned versions where these security vulnerabilities have been resolved.

To keep track of any potential Zabbix security issues, the affected versions, and the required updates, visit our public Zabbix Security Advisories and CVE database page.

The post Zabbix security advisories regarding CVE-2022-23131 and CVE-2022-23134 appeared first on Zabbix Blog.

How to talk about a sexist reality show without becoming sexists ourselves

Post Syndicated from Светла Енчева original https://toest.bg/kak-da-govorim-za-seksistko-reality-show/

“The show must go on” – the title of the final song on Queen’s last album before Freddie Mercury’s death sums up the essence of show business. Barely out of the COVID-19 pandemic, we are watching the start of a war that affects us directly. Yet the bleaker the prospects, the stronger the need to escape reality through entertainment. The Europe of the 1920s and 30s knew this best. So it is no surprise that instead of following the daily coronavirus mortality statistics or wondering where the tanks in Donbas came from, many people in Bulgaria are busy with... one bachelor and his potential brides. There is, of course, nothing to stop a person from discussing both this and serious social problems with equal passion.

The new bTV reality show “The Bachelor” (“Ергенът”) has provoked reactions even among people who have not watched it.

The concept is as follows: 22 women compete for the heart of one man. Those who do not receive a rose from the bachelor are eliminated, until only the protagonist’s chosen one remains. The recipient of the “special” rose is the bachelor’s favorite for that episode. The contest unfolds against an exotic, romantic backdrop – a luxury private resort in Turkey. The setting includes the sea, pools, greenery, evening lights, secluded spots, and long dresses that reveal more than they conceal. And roses, of course.

Amid all this splendor, the contestants deploy various strategies to win the bachelor over. They give him gifts and compete for time alone with him, or, on the contrary, try to get him to notice them and call them over himself. Some are shy and do not know what to do. The bachelor, for his part, is scripted to treat everyone equally well and like a “gentleman,” while still keeping the intrigue alive – giving one the “special” rose, kissing another... At the end of each episode the contestants line up and wait, some literally trembling, to learn whether they will receive a rose. The bachelor keeps the most trembling and tearful of them in suspense until the very last moment.

The format, incidentally, is not a homegrown invention –

the show launched in the US as The Bachelor two decades ago, in 2002. To provide some gender balance, the very next year the American network ABC launched a variant in which men compete for one woman – The Bachelorette. In its two variants, along with several further spin-offs, the original program continues to this day; it also exists as a franchise in various countries, and now in Bulgaria as well.

If bTV decides to also air the format centered on a woman, it will be interesting to see what it would be called. It says something about our culture that Bulgarian lacks a modern-sounding word for the female counterpart of “ерген” (bachelor). Being a sought-after bachelor is a point of pride; being an unmarried woman, not so much. “Мома” (maiden) sounds retro and ironic, evoking “стара мома” (old maid). And “девойка” (young girl) refers to an age group that the participants in such shows have outgrown.

But let us return to the reactions after the pilot episode of the Bulgarian edition of “The Bachelor.” Those condemning the show from lofty intellectual and aesthetic heights comment above all on

the appearance, behavior, and intellect of the contestants.

And there is certainly plenty to comment on. For some contestants, the effects of plastic surgery are, to put it euphemistically, rather conspicuous. Some do not know which famous state enterprise is located in Kozloduy. One gives the bachelor her red lace panties; another, an icon. One considers herself “the Bulgarian Carrie Bradshaw, only more successful”; another claims that God comes from her hometown. The list of such examples could go on. Moreover, they all claim to have high standards when choosing an intimate partner, yet they are competing for a man they barely know.

The contestants, of course, are not a representative sample of women in Bulgaria. Nor were they selected according to commonly shared standards of female beauty and character – whether such standards even exist in Bulgaria is another question. But neither representativeness nor beauty, character, or intellect guarantees a show’s success. In a media environment like Bulgaria’s, where standards traditionally give way to populism, what drives a reality show’s ratings is intrigue, scandal, caricature-like characters, and occasions for outrage. That is why, without realizing it, the outraged have walked into the show’s trap. By criticizing the contestants, they are in fact giving “The Bachelor” free advertising. More importantly, they

unwittingly reproduce the sexism on which the show is built.

“The Bachelor” is something like a modern version of a bride market, in which the contestants are reduced to goods. Finding fault with the goods does not mean rejecting the principle of the market’s existence. On the contrary, it affirms it – you simply want “higher-quality” goods.

That, however, is only one side of the coin. If we see the program solely as a form of buying and selling human beings, the serious question is not why some contestants invite ridicule, but why there are also contestants who seem perfectly decent. Why do so many women voluntarily agree to take part in a game in which they will be scrutinized as they undermine one another and tremble over whether they will get a rose? There may be more than one answer.

On the one hand, choosing a partner has often been, more or less, a market –

both in premodern times and today. In the old days, what mattered was how wealthy the young man was and how large the bride’s dowry was – and if there was no dowry, she should at least be pretty. Most important of all, though, was that the parents on both sides got along, because it was they who arranged the marriage – it was not unheard of for the newlyweds not even to know each other before the wedding. And sometimes, as Ivan Hadzhiyski recounts, the bride was even married off “to a hat” (“за калпак”) if the groom was working far from home.

When relationships in the modern world gradually came to depend on personal choice, some people managed to find a partner without resorting to dedicated platforms. For everyone else there have been various more or less “market-like” forms: newspaper personals, then dating sites, and today online apps, speed dating, and so on. What they all share is that potential partners present themselves as attractively as they can, then assess and filter one another – much the way one chooses, say, an apartment. With the difference that here the “apartment” has a say too. One of the characters in “And Just Like That...” (the sequel to “Sex and the City”), a real-estate broker, “lists” a friend of hers on dating apps because property sales are slow that season.

On the other hand, is love really what the contestants in “The Bachelor” are after?

It stands to reason that the show’s stated premise is hard to realize in practice. What guarantees entertainment for the audience is, as a rule, no recipe for a successful relationship. Of the 25 seasons of the original The Bachelor on ABC, only two of the men’s final picks are still with them – and for the most recent it is too early to say, as the season ended just two months ago.

Instead of the man of their lives, however, the contestants will gain fame, and some of them perhaps a professional boost, a television career, and generally more biographical opportunities. Whether the moralistically minded viewer approves of such a life strategy is a separate question. Moralism, too, can be a form of sexism. If some people can freely choose to be sex workers, why shouldn’t others try to build a career by playing at being in love on television?

Yet while everyone is finding fault with the contestants, almost no one comments on the bachelor himself.

That is only logical – they are many, and he is one. He has been chosen to appeal to as broad a circle of people as possible, so that he can be a legitimate object of the women’s contest. In a country where until recently the ideal of a man in power was crude male strength, it is only natural for the ideal bachelor to be an athlete. At the same time he must be a gentleman and a romantic – qualities that the fitness instructor chosen for the role, Viktor Stoyanov, duly displays. There is even something “academic” about him: while some people teach, write books, and do research, Viktor Stoyanov competed in rowing – “академично гребане,” literally “academic rowing,” in Bulgarian. He also has a physical injury that ended his career in the sport, as well as an emotional one – a broken heart that the contestants can race to heal.

Beyond his scripted persona, the bachelor allows himself to be a little more playful on Instagram, posting not entirely gentlemanly jokes that reduce female intellect to an instrument of male sexual pleasure. In some countries with a more developed feminist culture this would be a scandal. Not in Bulgaria.

Would the variant of the show in which men compete for a woman’s heart offset the sexism?

Sexism lies in treating members of one sex as objects. Replacing the women with men does not change the principle, nor does it overcome gender stereotypes. Tellingly, unlike in the original The Bachelor, in the female variant The Bachelorette 5 of the 18 seasons’ favorite couples are still together (though for the latest, as in the male variant, it is still too early to say). This can be explained by the social pressure on women to be more serious about their intimate relationships, whereas it is considered acceptable for a man simply to “play around.” In The Bachelor it is not unprecedented for the bachelor to choose one contestant in the finale and then start a relationship with her runner-up.

Since in all its variants “The Bachelor” rests on heteronormative gender stereotypes, the most interesting cases are those where things go off script. Colton Underwood, for example, who appeared both as a contestant in the female variant and as the bachelor in the male one, eventually came out as gay. And two female contestants, instead of vying for the bachelor, started a relationship with each other.

For all its sexism, “The Bachelor” is a useful show –

if only because it brings our own prejudices and stereotypes to light. How we judge others says a great deal about what kind of people we are. The aftertaste such programs leave can be summed up by this exchange from the film “Жените наистина плачат” (“Women Do Cry”):

– Being a woman is awful, just awful!
– But you know what’s worse?
– Being a man!

Cover image: Still from the first episode of “The Bachelor” on bTV

Source

Russia/Ukraine Conflict: What Is Rapid7 Doing to Protect My Organization?

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/02/25/russia-ukraine-conflict-what-is-rapid7-doing-to-protect-my-organization/

Russia/Ukraine Conflict: What Is Rapid7 Doing to Protect My Organization?

Rapid7 is monitoring the escalating conflict in Ukraine, and we have provided a blog on the various attack vectors organizations may see, as well as guidance on mitigations and remediations.

To assist with your preparation and response efforts, Rapid7 is continuously integrating the most up-to-date threat intelligence — both consumed and curated — into our products. We are monitoring for new attack vectors and intelligence in order to alert on attacker behaviors associated with various Advanced Persistent Threat (APT) groups within InsightIDR.

If you are a Managed Detection & Response (MDR) customer, our global SOC teams are monitoring your environment 24/7 with a high degree of diligence, and as standard procedure, any verified suspicious activity will be investigated and reported to you with expediency. Considering the current crisis, we have placed a special emphasis on the most relevant APT groups, and we are closely monitoring a wide breadth of sources to make use of any newly created and verified indicators.

Keeping software patched against known vulnerabilities is an important first line of defense against attackers. On January 11, 2022, the Cybersecurity & Infrastructure Security Agency (CISA) published Alert AA22-011A: Understanding and Mitigating Russian State-Sponsored Cyber Threats to US Critical Infrastructure, listing several vulnerabilities known to be exploited by Russian threat actors.

InsightVM and Nexpose have checks for the CVEs called out in this alert. These vulnerabilities are included in InsightVM’s Threat Feed Dashboard (see the Assets With Actively Targeted Vulnerabilities card and the Most Common Actively Targeted Vulnerabilities card), along with other vulnerabilities known to be exploited in the wild.

Useful resources

Staying Secure in a Global Cyber Conflict

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/02/25/russia-ukraine-staying-secure-in-a-global-cyber-conflict/

Staying Secure in a Global Cyber Conflict

Now that Russia has begun its armed invasion of Ukraine, we should expect increasing risks of cybersecurity attacks and incidents, either as spillover from cyberattacks targeting Ukraine or direct attacks against actors supporting Ukraine.

Any state-sponsored Russian attacks aiming to support the Russian invasion of Ukraine, or to retaliate for US, NATO, or other foreign measures taken in response to the Russian invasion of Ukraine, are most likely to be destructive or disruptive in nature rather than aiming to steal data. This blog discusses the types of attacks organizations may see — including distributed denial of service (DDoS), website defacements, and the use of ransomware or destructive malware — and recommends steps for their mitigation or remediation.

As we have stated before, we do not believe organizations need to panic. But as per guidance from numerous governments, we do believe it is wise to be extra vigilant at this time. Rapid7 will continue to monitor the cybersecurity risks, both internally and for our Managed Detection and Response (MDR) customers as the situation evolves. We will post updates as relevant and suggest subscription to our blog to see them as they are posted.

Malware

One of the most concerning possibilities is the risk of a destructive malware attack on the US, NATO members, or other foreign countries. This could take the form of a direct attack or spillover from an attack on Ukraine, such as the 2017 NotPetya operation that targeted Ukraine and spread to other parts of the globe. Cybersecurity researchers have just discovered new data-wiping malware, dubbed HermeticWiper (AKA KillDisk.NCV), that infected hundreds of Ukrainian machines in the last two months. This appears to be custom-written malware that corrupts the Master Boot Record (MBR), resulting in boot failure. This malware, like NotPetya, is intended to be destructive and will cripple the assets it infects.

As always, the best malware prevention is to avoid infection in the first place — a risk we can minimize by ensuring that assets are up to date and use strong access controls, including multi-factor authentication. Additionally, it is crucial to have an incident response plan in place for the worst-case scenario, as well as a business continuity plan — including failover infrastructure if possible — for business-critical assets.

DDoS

There have already been reports of DDoS attacks on Ukrainian websites, and Russia has historically used DDoS in support of operations against other former Soviet republics, such as Georgia. Given this context, it is plausible that state-sponsored Russian actors would use DDoS if they choose to retaliate in response to measures taken against Russia for the invasion of Ukraine, such as sanctions or cyber operations from NATO countries.

While DDoS does not receive the same level of attention as some other forms of attack, it can still have significant impacts to business operations. DDoS mitigations can include reduction of attack surface area via Content Distribution Networks or load balancers, as well as the use of Access Control Lists and firewalls to drop traffic coming from attacker nodes.

Phishing campaigns

Russian state-sponsored actors are also well known for engaging in spear-phishing attacks, particularly using compromised valid accounts. Defenders should ensure strong spam filtering and attachment scanning are in place. Educating end users about the dangers of phishing and regularly running simulated phishing campaigns will also help mitigate this threat.

State-sponsored, APT-style groups are not the only relevant threats. In times of crisis, it is common to see phishing attacks linking to malicious websites masquerading as news, aid groups, or other seemingly relevant content. Opportunistic scammers and other bad actors will attempt to take advantage of our human nature when curiosity, anxiety, and desire to help can make people less suspicious. Remain vigilant and avoid clicking unknown links or opening attachments — basic cyber hygiene that can be forgotten when emotions run high.

Brute-force attacks

According to a report from the NSA, CISA, FBI, and NCSC, “From mid-2019 through early 2021, Russian General Staff Main Intelligence Directorate (GRU) … conduct[ed] widespread, distributed, and anonymized brute-force access attempts against hundreds of government and private sector targets worldwide.” GRU used the discovered credentials to gain access into networks and further used known vulnerabilities such as CVE-2020-0688 and CVE-2020-17144 to increase access.

The best mitigation for these types of attacks is to enable MFA on all systems. Minimize externally facing systems and ensure externally facing systems are fully patched.

Defacement

Ukraine has also been experiencing website defacements, which provide attackers with an opportunity to spread messaging. Website defacement is typically associated with hacktivist activity, but state-sponsored Russian actors could pose as hacktivists in order to disguise Russian state involvement, and spread their strategic communication themes to international audiences by defacing Western websites.

Website defacement often occurs as a result of weak passwords for admin accounts, cross-site scripting, injection, file upload, or vulnerable plugins. This can be managed by limiting the level of access accounts have and enforcing strong passwords. Additionally, looking for places where scripts or iframes could be injected or where SQL injection could occur can help identify vulnerabilities to remediate.
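As a concrete illustration of one of these vectors, the sketch below contrasts an unsafe string-concatenated SQL query with a parameterized one. It uses Python's built-in sqlite3 module as a stand-in for whatever database driver a site actually runs; the table and payload are invented for the demonstration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1)")

user_input = "' OR '1'='1"  # a classic injection payload

# Unsafe: user input is spliced directly into the SQL string,
# so the payload rewrites the WHERE clause and matches every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the value as data, never as SQL,
# so the payload is compared as a literal (non-matching) name.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe))  # the injection matched the row
print(len(safe))    # the parameterized query matched nothing
```

The same principle, passing untrusted input as bound parameters rather than interpolating it into the query text, applies to every SQL driver and templating layer.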

Ransomware

Ransomware could also be used to disrupt foreign targets. Criminals based in Russia were believed to be behind the 2021 ransomware attack on Colonial Pipeline in the United States. Ransomware can have disruptive effects on targets, and the attackers could simply refrain from decrypting files, even if they receive ransom payments, in order to maximize and extend the disruptive impact on victims. Additionally, opportunistic attackers who are actually looking for ransoms will still be on the prowl, and are likely to take advantage of the chaos.

To this end, defenders should:

  • Evaluate asset and application configurations to ensure resilience
  • Double-check visibility into the functioning of business-critical assets
  • Assess incident response processes in the case of an incident

What else should you be doing?

The following activities are mission-critical in times of uncertainty, but they are also best practices in general.

  • Continuous monitoring: Reinforce cybersecurity measures and staff during nights, weekends, and holidays. Threat actors are known to target their victims when there are gaps in “eyes on glass.”
  • Incident response plan: Prepare a dedicated team with a detailed workflow and a contact person that will be available offline in case of a cybersecurity incident.
  • Back up data: Implement data backup procedures of the company networks and systems. Backup procedures should be conducted on a frequent, regular basis for immediate recovery. Also, be sure to store backups offline and check them regularly to ensure they have not been poisoned with malware.
  • Reduce opportunities for attackers: Identify exposures, vulnerabilities, and misconfigurations that can provide opportunities for attackers to gain a foothold in your environment, and apply relevant mitigations or patches. In particular, Russian operators are well known to exploit edge systems. The Cybersecurity and Infrastructure Security Agency (CISA) recently put out an alert listing 13 known vulnerabilities that Russian state-sponsored threat actors use to initially compromise networks. We recommend this as a starting point for focused patching and mitigation.
  • Stay informed: Follow the latest updates and recommendations provided by Rapid7, as well as governmental security entities in specific press releases/alerts from the Ukraine CERT, The Security Service of Ukraine (SSU), and the US CISA.

We expect the situation to be fluid over the coming days and weeks, and security guidance and threats may also evolve as the conflict develops. The measures suggested in this blog will continue to be relevant, and we plan to provide additional information as needed.

In the meantime, you can also check this blog to see how Rapid7 can help you prepare for and respond to cyber attacks. We also recommend organizations check their government’s cybersecurity website for guidance.

New Major Funding from the Ford Foundation

Post Syndicated from Let's Encrypt original https://letsencrypt.org/2022/02/25/ford-foundation.html

ISRG’s pragmatic, public-interest approach to Internet security has fundamentally changed the web at an astonishing scale and pace.

Michael Brennan, Ford Foundation

The Internet has considerable potential to help build a more just, equitable, and sustainable world for all people. Yet for everyone online—and indeed the billions not yet online—barriers to secure and privacy-respecting communication remain pervasive.

ISRG was founded in 2013 to find and eliminate these barriers. Today, we’re proud to announce a $1M grant from the Ford Foundation to continue our efforts.

Our first project, Let’s Encrypt, leverages technology whose foundation has existed for nearly three decades—TLS certificates for securely communicating information via HTTP. Yet even for people well-versed in technology, adopting TLS proved daunting.

Before Let’s Encrypt, the growth rate for HTTPS page loads merely puttered along. As recently as 2013, just 25% of websites used HTTPS. In order for the Internet to reach its full potential, this glaring risk to peoples’ security and privacy needed to be mitigated.

Let’s Encrypt changed the paradigm. Today 81% of website page loads use HTTPS. That means that you and the other 4.9 billion people online can leverage the Internet for your own pursuits with a greater degree of security and privacy than ever before.

But TLS adoption was just one hurdle. Much can be done to further improve the Internet’s most critical pieces of technology to be more secure; much can be done to further improve the privacy of everyone using the Internet today.

Building our efforts thanks to transformational support

Ford Foundation’s commitment recognizes that the Internet can be a technological tool to build a more just, equitable, and sustainable world, but that it will take organizations like ISRG to help build it.

“Ford Foundation is one of the most respected grantmaking institutions in the world,” Josh Aas, ISRG Executive Director, said. “We are proud that Ford believes in the impact we’ve created and the potential of our efforts to continue benefiting everyone using the Internet.”

This support, which began in 2021, will help ISRG continue to invest in Let’s Encrypt and our other projects, Prossimo and Divvi Up.

Launched in late 2020, Prossimo intends to move the Internet’s most critical security-sensitive software infrastructure to memory safe code. Society pays the price for these vulnerabilities with privacy violations, staggering financial losses, denial of public services (e.g., hospitals, power grids), and human rights violations. Meaningful effort will be required to bring about such change, but the Internet will be around for a long time. There is time for ambitious efforts to pay off.

Divvi Up is a system for privacy-preserving metrics analysis. With Divvi Up, organizations can analyze and share data to further their aims without sacrificing their users’ privacy. Divvi Up is currently used by COVID-19 Exposure Notification apps and has processed over 14 billion metrics, helping Public Health Authorities hone their apps to be responsive to their local populations.

"ISRG’s pragmatic, public-interest approach to Internet security has fundamentally changed the web at an astonishing scale and pace,” Michael Brennan of the Ford Foundation said. "I believe their new projects have the same potential and I am eager to see what they turn their sights to next."

We’re grateful to Ford for their support of our efforts, and to all of you who have contributed time and resources to our projects. For more information on ISRG and our projects, take a read through our 2021 Annual Report. 100% of ISRG’s funding comes from contributed sources. If you or your organization are interested in helping advance our mission, consider becoming a sponsor, making a one-time contribution, or reaching out with your idea on how you can help financially support our mission at [email protected].

HPKE: Standardizing public-key encryption (finally!)

Post Syndicated from Christopher Wood original https://blog.cloudflare.com/hybrid-public-key-encryption/

HPKE: Standardizing public-key encryption (finally!)

For the last three years, the Crypto Forum Research Group of the Internet Research Task Force (IRTF) has been working on specifying the next generation of (hybrid) public-key encryption (PKE) for Internet protocols and applications. The result is Hybrid Public Key Encryption (HPKE), published today as RFC 9180.

HPKE was made to be simple, reusable, and future-proof by building upon knowledge from prior PKE schemes and software implementations. It is already in use in a large assortment of emerging Internet standards, including TLS Encrypted Client Hello and Oblivious DNS-over-HTTPS, and has a large assortment of interoperable implementations, including one in CIRCL. This article provides an overview of this new standard, going back to discuss its motivation, design goals, and development process.

A primer on public-key encryption

Public-key cryptography is decades old, with its roots going back to the seminal work of Diffie and Hellman in 1976, entitled “New Directions in Cryptography.” Their proposal – today called Diffie-Hellman key exchange – was a breakthrough. It allowed one to transform small secrets into big secrets for cryptographic applications and protocols. For example, one can bootstrap a secure channel for exchanging messages with confidentiality and integrity using a key exchange protocol.

HPKE: Standardizing public-key encryption (finally!)
Unauthenticated Diffie-Hellman key exchange

In this example, Sender and Receiver exchange freshly generated public keys with each other, and then combine their own secret key with their peer’s public key. Algebraically, this yields the same value \(g^{xy} = (g^x)^y = (g^y)^x\). Both parties can then use this as a shared secret for performing other tasks, such as encrypting messages to and from one another.
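The algebraic identity can be checked directly in code. The toy Python sketch below uses a small multiplicative group, far too small for real security, where actual protocols use groups such as Curve25519; it only demonstrates that both parties compute the same shared secret.

```python
import secrets

# Toy parameters: p is the largest prime below 2^32, chosen only to keep
# the demonstration readable. Real deployments use standardized groups.
p = 4294967291  # prime modulus (toy-sized, NOT secure)
g = 5           # generator

# Each party picks a private exponent and publishes g^x mod p.
x = secrets.randbelow(p - 2) + 1   # Sender's secret
y = secrets.randbelow(p - 2) + 1   # Receiver's secret
gx = pow(g, x, p)                  # Sender's public value
gy = pow(g, y, p)                  # Receiver's public value

# Each side combines its own secret with the peer's public value.
sender_shared = pow(gy, x, p)      # (g^y)^x mod p
receiver_shared = pow(gx, y, p)    # (g^x)^y mod p

assert sender_shared == receiver_shared  # both arrive at g^{xy}
```

The final assertion is exactly the identity \(g^{xy} = (g^x)^y = (g^y)^x\) from the text.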

The Transport Layer Security (TLS) protocol is one such application of this concept. Shortly after Diffie-Hellman was unveiled to the world, RSA came into the fold. The RSA cryptosystem is another public key algorithm that has been used to build digital signature schemes, PKE algorithms, and key transport protocols. A key transport protocol is similar to a key exchange algorithm in that the sender, Alice, generates a random symmetric key and then encrypts it under the receiver’s public key. Upon successful decryption, both parties then share this secret key. (This fundamental technique, known as static RSA, was used pervasively in the context of TLS. See this post for details about this old technique in TLS 1.2 and prior versions.)

At a high level, PKE between a sender and receiver is a protocol for encrypting messages under the receiver’s public key. One way to do this is via a so-called non-interactive key exchange protocol.

To illustrate how this might work, let \(g^y\) be the receiver’s public key, and let \(m\) be a message that one wants to send to this receiver. The flow looks like this:

  1. The sender generates a fresh private and public key pair, \((x, g^x)\).
  2. The sender computes \(g^{xy} = (g^y)^x\), which can be done without involvement from the receiver, that is, non-interactively.
  3. The sender then uses this shared secret to derive an encryption key, and uses this key to encrypt \(m\).
  4. The sender packages up \(g^x\) and the encryption of \(m\), and sends both to the receiver.

The general paradigm here is called “hybrid public-key encryption” because it combines a non-interactive key exchange based on public-key cryptography for establishing a shared secret, and a symmetric encryption scheme for the actual encryption. To decrypt \(m\), the receiver computes the same shared secret \(g^{xy} = (g^x)^y\), derives the same encryption key, and then decrypts the ciphertext.
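A minimal end-to-end sketch of this paradigm follows. The toy Diffie-Hellman group and the XOR keystream stand in for a real group and a real symmetric cipher; both are insecure and for illustration only:

```python
import hashlib
import secrets

P = 2**61 - 1   # toy group parameters; illustration only, no security
G = 5

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def derive_key(shared, n):
    """Toy KDF: stretch the shared secret into an n-byte keystream with SHA-256."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(shared.to_bytes(8, "big") + bytes([counter])).digest()
        counter += 1
    return out[:n]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

def encrypt(receiver_pub, message):
    x, gx = keygen()                        # 1. fresh key pair (x, g^x)
    shared = pow(receiver_pub, x, P)        # 2. g^xy, computed non-interactively
    key = derive_key(shared, len(message))  # 3. derive an encryption key
    return gx, xor(message, key)            # 4. package g^x with the ciphertext

def decrypt(receiver_priv, gx, ciphertext):
    shared = pow(gx, receiver_priv, P)      # same g^xy from the receiver's side
    return xor(ciphertext, derive_key(shared, len(ciphertext)))

y, gy = keygen()                            # receiver's long-term key pair
gx, ct = encrypt(gy, b"attack at dawn")
assert decrypt(y, gx, ct) == b"attack at dawn"
```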

Conceptually, PKE of this form is quite simple. General designs of this form date back for many years and include the Diffie-Hellman Integrated Encryption System (DHIES) and ElGamal encryption. However, despite this apparent simplicity, there are numerous subtle design decisions one has to make in designing this type of protocol, including:

  • What type of key exchange protocol should be used for computing the shared secret? Should this protocol be based on modern elliptic curve groups like Curve25519? Should it support future post-quantum algorithms?
  • How should encryption keys be derived? Are there other keys that should be derived? How should additional application information be included in the encryption key derivation, if at all?
  • What type of encryption algorithm should be used? What types of messages should be encrypted?
  • How should sender and receiver encode and exchange public keys?

These and other questions are important for a standard, since their answers are required for interoperability. That is, senders and receivers should be able to communicate without having to use the same source code.

There have been a number of efforts in the past to standardize PKE, most of which focus on elliptic curve cryptography. Some examples of past standards include: ANSI X9.63 (ECIES), IEEE 1363a, ISO/IEC 18033-2, and SECG SEC 1.

Timeline of related standards and software

A paper by Martinez et al. provides a thorough and technical comparison of these different standards. The key points are that all these existing schemes have shortcomings. They either rely on outdated or not-commonly-used primitives such as RIPEMD and CMAC-AES, lack accommodations for moving to modern primitives (e.g., AEAD algorithms), lack proofs of IND-CCA2 security, or, importantly, fail to provide test vectors and interoperable implementations.

The lack of a single standard for public-key encryption has led to inconsistent and often non-interoperable support across libraries. In particular, hybrid PKE implementation support is fractured across the community, ranging from the hugely popular and simple-to-use NaCl box and libsodium box seal implementations based on modern algorithm variants like XSalsa20-Poly1305 for authenticated encryption, to BouncyCastle implementations based on “classical” algorithms like AES and elliptic curves.

The lack of a single standard has not stopped the adoption of ECIES instantiations for widespread and critical applications. For example, the Apple and Google Exposure Notification Privacy-preserving Analytics (ENPA) platform uses ECIES for public-key encryption.

When designing protocols and applications that need a simple, reusable, and agile abstraction for public-key encryption, existing standards are not fit for purpose. That’s where HPKE comes into play.

Construction and design goals

HPKE is a public-key encryption construction that is designed from the outset to be simple, reusable, and future-proof. It lets a sender encrypt arbitrary-length messages under a receiver’s public key, as shown below. You can try this out in the browser at Franziskus Kiefer’s blog post on HPKE!

HPKE overview

HPKE is built in stages. It starts with a Key Encapsulation Mechanism (KEM), which is similar to the key transport protocol described earlier and, in fact, can be constructed from the Diffie-Hellman key agreement protocol. A KEM has two algorithms: Encapsulation and Decapsulation, or Encap and Decap for short. The Encap algorithm creates a symmetric secret and wraps it for a public key such that only the holder of the corresponding private key can unwrap it. An attacker knowing this encapsulated key cannot recover even a single bit of the shared secret. Decap takes the encapsulated key and the private key associated with the public key, and computes the original shared secret. From this shared secret, HPKE computes a series of derived keys that are then used to encrypt and authenticate plaintext messages between sender and receiver.
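The Encap/Decap interface might be sketched like this, again over a toy Diffie-Hellman group; this is a simplified stand-in for, not a faithful copy of, HPKE's DHKEM construction:

```python
import hashlib
import secrets

P = 2**61 - 1   # toy group; illustration only, no security
G = 5

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encap(pk):
    """Create a shared secret and wrap it for pk.

    Returns (shared_secret, encapsulated_key); only the holder of the
    private key matching pk can recover shared_secret from the latter.
    """
    eph_sk, enc = kem_keygen()              # ephemeral key pair; enc = g^x
    raw = pow(pk, eph_sk, P)
    return hashlib.sha256(raw.to_bytes(8, "big")).digest(), enc

def decap(enc, sk):
    """Recover the shared secret using the encapsulated key and private key."""
    raw = pow(enc, sk, P)
    return hashlib.sha256(raw.to_bytes(8, "big")).digest()

sk, pk = kem_keygen()
shared, enc = encap(pk)
assert decap(enc, sk) == shared
```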

This simple construction was driven by several high-level design goals and principles. We will discuss these goals and how they were met below.

Algorithm agility

Different applications, protocols, and deployments have different constraints, and locking any single use case into a specific (set of) algorithm(s) would be overly restrictive. For example, some applications may wish to use post-quantum algorithms when available, whereas others may wish to use different authenticated encryption algorithms for symmetric-key encryption. To accomplish this goal, HPKE is designed as a composition of a Key Encapsulation Mechanism (KEM), Key Derivation Function (KDF), and Authenticated Encryption Algorithm (AEAD). Any combination of the three algorithms yields a valid instantiation of HPKE, subject to certain security constraints about the choice of algorithm.

One important point worth noting here is that HPKE is not a protocol, and therefore does nothing to ensure that sender and receiver agree on the HPKE ciphersuite or shared context information. Applications and protocols that use HPKE are responsible for choosing or negotiating a specific HPKE ciphersuite that fits their purpose. This allows applications to be opinionated about their choice of algorithms to simplify implementation and analysis, as is common with protocols like WireGuard, or be flexible enough to support choice and agility, as is the approach taken with TLS.

Authentication modes

At a high level, public-key encryption ensures that only the holder of the private key can decrypt messages encrypted for the corresponding public key (being able to decrypt the message is an implicit authentication of the receiver). However, there are other ways in which applications may wish to authenticate messages from sender to receiver. For example, if both parties have a pre-shared key, they may wish to ensure that both can demonstrate possession of it. It may also be desirable for senders to demonstrate knowledge of their own private key in order for recipients to decrypt the message (this is functionally similar to signing an encryption, but has some subtle and important differences).

To support these various use cases, HPKE admits different modes of authentication, allowing various combinations of pre-shared key and sender private key authentication. The additional private key contributes to the shared secret between the sender and receiver, and the pre-shared key contributes to the derivation of the application data encryption secrets. This process is referred to as the “key schedule”, and a simplified version of it is shown below.
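A simplified sketch of such a key schedule follows, using HMAC-SHA256 for the extract and expand steps. The labels and output lengths here are simplified assumptions, not the exact values from the HPKE specification:

```python
import hashlib
import hmac

def hkdf_extract(salt, ikm):
    """HKDF-Extract: condense input keying material into a pseudorandom key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, n):
    """HKDF-Expand: derive n bytes of output keyed by prk and labeled by info."""
    out, block, counter = b"", b"", 1
    while len(out) < n:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:n]

def key_schedule(shared_secret, psk, info):
    """Mix the KEM shared secret and (optional) pre-shared key into the
    AEAD key, base nonce, and exporter secret (simplified)."""
    secret = hkdf_extract(shared_secret, psk)
    context = hashlib.sha256(info).digest()
    key = hkdf_expand(secret, b"key" + context, 16)
    base_nonce = hkdf_expand(secret, b"base_nonce" + context, 12)
    exporter_secret = hkdf_expand(secret, b"exp" + context, 32)
    return key, base_nonce, exporter_secret

# Sender and receiver run the same schedule on the same inputs
# and arrive at the same derived keys.
key, base_nonce, exporter_secret = key_schedule(b"\xaa" * 32, b"psk", b"app info")
assert len(key) == 16 and len(base_nonce) == 12
```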

Simplified HPKE key schedule

These modes come at a price, however: not all KEM algorithms will work with all authentication modes. For example, for most post-quantum KEM algorithms, no private-key authentication variant is known.

Reusability

The core of HPKE’s construction is its key schedule. It allows secrets produced and shared with KEMs and pre-shared keys to be mixed together to produce additional shared secrets between sender and receiver for performing authenticated encryption and decryption. HPKE allows applications to build on this key schedule without using the corresponding AEAD functionality, for example, by exporting a shared application-specific secret. Using HPKE in an “export-only” fashion allows applications to use other, non-standard AEAD algorithms for encryption, should that be desired. It also allows applications to use a KEM different from those specified in the standard, as is done in the proposed TLS AuthKEM draft.
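A sketch of what such an export interface might look like; the label and the HKDF-style expansion below are simplified assumptions rather than HPKE's exact LabeledExpand encoding:

```python
import hashlib
import hmac

def hkdf_expand(prk, info, n):
    out, block, counter = b"", b"", 1
    while len(out) < n:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:n]

def export(exporter_secret, exporter_context, length):
    """Derive an application-specific secret from the HPKE exporter secret.
    Distinct contexts yield independent secrets."""
    return hkdf_expand(exporter_secret, b"sec" + exporter_context, length)

exporter_secret = b"\x01" * 32   # produced by the key schedule on both sides
handshake_key = export(exporter_secret, b"my-app handshake", 32)
rekey_key = export(exporter_secret, b"my-app rekey", 32)
assert handshake_key != rekey_key
```

Since both sides run the same key schedule, both can derive the same exported secrets without any AEAD ciphertext ever being produced.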

Interface simplicity

HPKE hides the complexity of message encryption from callers. Encrypting a message with additional authenticated data from sender to receiver for their public key is as simple as the following two calls:

// Create an HPKE context to send messages to the receiver
encapsulatedKey, senderContext = SetupBaseS(receiverPublicKey, "shared application info")

// AEAD encrypt the message using the context
ciphertext = senderContext.Seal(aad, message)

In fact, many implementations are likely to offer a simplified “single-shot” interface that does context creation and message encryption with one function call.

Notice that this interface does not expose anything like a nonce ("number used once") or sequence numbers to the callers. The HPKE context manages nonces and sequence numbers internally, which means the application is responsible for message ordering and delivery. This was an important design decision, made to hedge against key and nonce reuse, which can be catastrophic for security.

Consider what would be necessary if HPKE delegated nonce management to the application. The sending application using HPKE would need to communicate the nonce along with each ciphertext value for the receiver to successfully decrypt the message. If this nonce was ever reused, then security of the AEAD may fall apart. Thus, a sending application would necessarily need some way to ensure that nonces were never reused. Moreover, by sending the nonce to the receiver, the application is effectively implementing a message sequencer. The application could just as easily implement and use this sequencer to ensure in-order message delivery and processing. Thus, at the end of the day, exposing the nonce seemed both harmful and, ultimately, redundant.
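The nonce bookkeeping described above can be sketched as follows; the AEAD itself is omitted, since only the internal sequence handling is the point:

```python
class SenderContext:
    """Sketch of an HPKE-style sender context that manages nonces internally."""

    def __init__(self, key, base_nonce):
        self.key = key
        self.base_nonce = base_nonce
        self.seq = 0   # internal sequence number, never exposed to callers

    def next_nonce(self):
        """Per-message nonce = base_nonce XOR seq. The sequence number
        increments on every call, so a (key, nonce) pair never repeats."""
        seq_bytes = self.seq.to_bytes(len(self.base_nonce), "big")
        self.seq += 1
        return bytes(a ^ b for a, b in zip(self.base_nonce, seq_bytes))

ctx = SenderContext(b"k" * 16, b"\x00" * 12)
nonces = {ctx.next_nonce() for _ in range(1000)}
assert len(nonces) == 1000   # a fresh nonce for every message
```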

Wire format

Another hallmark of HPKE is that all messages that do not contain application data are fixed length. This means that serializing and deserializing HPKE messages is trivial and there is no room for application choice. In contrast, some implementations of hybrid PKE deferred choice of wire format details, such as whether to use elliptic curve point compression, to applications. HPKE handles this under the KEM abstraction.

Development process

HPKE is the result of a three-year development cycle between industry practitioners, protocol designers, and academic cryptographers. In particular, HPKE built upon prior art relating to public-key encryption and iterated on its design in a tight loop of specification, implementation, experimentation, and analysis, with the ultimate goal of real-world use.

HPKE development process

This process isn’t new. TLS 1.3 and QUIC famously demonstrated this as an effective way of producing high quality technical specifications that are maximally useful for their consumers.

One particular point worth highlighting in this process is the value of interoperability and analysis. From the very first draft, interoperability between multiple, independent implementations was a goal. And since then, every revision was carefully checked by multiple library maintainers for soundness and correctness. This helped catch a number of mistakes and improved the overall clarity of the technical specification.

From a formal analysis perspective, HPKE brought novel work to the community. Unlike protocol design efforts like those around TLS and QUIC, HPKE was simpler, but still came with plenty of sharp edges. As a new cryptographic construction, analysis was needed to ensure that it was sound and, importantly, to understand its limits. This analysis led to a number of important contributions to the community, including a formal analysis of HPKE, new understanding of the limits of ChaChaPoly1305 in a multi-user security setting, as well as a new CFRG specification documenting limits for AEAD algorithms. For more information about the analysis effort that went into HPKE, check out this companion blog by Benjamin Lipp, an HPKE co-author.

HPKE’s future

While HPKE may be a new standard, it has already seen a tremendous amount of adoption in the industry. As mentioned earlier, it’s an essential part of the TLS Encrypted Client Hello and Oblivious DoH standards, both of which are deployed protocols on the Internet today. Looking ahead, it’s also been integrated as part of the emerging Oblivious HTTP, Message Layer Security, and Privacy Preserving Measurement standards. HPKE’s hallmark is its generic construction that lets it adapt to a wide variety of application requirements. If an application needs public-key encryption with a key-committing AEAD, one can simply instantiate HPKE using a key-committing AEAD.

Moreover, there exists a huge assortment of interoperable implementations built on popular cryptographic libraries, including OpenSSL, BoringSSL, NSS, and CIRCL. There are also formally verified implementations in hacspec and F*; check out this blog post for more details. The complete set of known implementations is tracked here. More implementations will undoubtedly follow in their footsteps.

HPKE is ready for prime time. I look forward to seeing how it simplifies protocol design and development in the future. Welcome, RFC 9180.

Rust 1.59.0 released

Post Syndicated from original https://lwn.net/Articles/886056/

Version 1.59.0 of the Rust language has been released. There are a number of new features, including support for inline assembly (in unsafe blocks, naturally), the ability to use tuples and slices on the left-hand side of an assignment, const generic defaults, and more. Incremental compilation is also disabled by default in this release to work around a known bug.

The protest in front of the Russian Embassy in pictures: Ukrainians and Bulgarians together against Putin's war

Post Syndicated from Николай Марченко original https://bivol.bg/%D1%83%D0%BA%D1%80%D0%B0%D0%B8%D0%BD%D1%86%D0%B8-%D0%B8-%D0%B1%D1%8A%D0%BB%D0%B3%D0%B0%D1%80%D0%B8-%D0%B7%D0%B0%D0%B5%D0%B4%D0%BD%D0%BE-%D1%81%D1%80%D0%B5%D1%89%D1%83-%D0%B2%D0%BE%D0%B9%D0%BD%D0%B0.html

Thursday, February 24, 2022


Hundreds of Ukrainians, joined by Bulgarians and Russians, gathered in front of the Embassy of the Russian Federation in Sofia on Thursday, February 24, 2022, as "Биволъ" (Bivol) saw at the…

Automate your Data Extraction for Oil Well Data with Amazon Textract

Post Syndicated from Ashutosh Pateriya original https://aws.amazon.com/blogs/architecture/automate-your-data-extraction-for-oil-well-data-with-amazon-textract/

Traditionally, many businesses archive physical formats of their business documents. These can be invoices, sales memos, purchase orders, vendor-related documents, and inventory documents. As more and more businesses are moving towards digitizing their business processes, it is becoming challenging to effectively manage these documents and perform business analytics on them. For example, in the Oil and Gas (O&G) industry, companies have numerous documents that are generated through the exploration and production lifecycle of an oil well. These documents can provide many insights that can help inform business decisions.

As documents are usually stored in a paper format, information retrieval can be time consuming and cumbersome. Even documents available in a digital format may not have adequate metadata associated with them to efficiently perform search and build insights.

In this post, you will learn how to build a text extraction solution using the Amazon Textract service, which automatically extracts text and data from scanned documents uploaded to Amazon Simple Storage Service (S3). We will show you how to find insights and relationships in the extracted text using Amazon Comprehend. This data is indexed and populated into Amazon OpenSearch Service to search and visualize it in a Kibana dashboard.

Figure 1 illustrates a solution built with AWS, which extracts O&G well data information from PDF documents. This solution is serverless and built using AWS Managed Services. This will help you to decrease system maintenance overhead while making your solution scalable and reliable.

Figure 1. Automated form data extraction architecture

Following are the high-level steps:

  1. Upload an image file or PDF document to Amazon S3 for analysis. Amazon S3 is a durable document storage used for central document management.
  2. The Amazon S3 event initiates the AWS Lambda function Fn-A, which has the logic to call the Amazon Textract and Amazon Comprehend services and process the results.
  3. AWS Lambda function Fn-A invokes Amazon Textract to extract text as key-value pairs from image or PDF. Amazon Textract automatically extracts data from the scanned documents.
  4. Amazon Textract sends the extracted keys from image/PDF to Amazon SNS.
  5. Amazon SNS notifies Amazon SQS when text extraction is complete by sending the extracted keys to Amazon SQS.
  6. Amazon SQS initiates AWS Lambda function Fn-B with the extracted keys.
  7. AWS Lambda function Fn-B invokes Amazon Comprehend for the custom entity recognition. Comprehend uses custom-trained machine learning (ML) to find discrepancies in key names from Amazon Textract.
  8. The data is indexed and loaded into Amazon OpenSearch, which indexes and visualizes the data.
  9. Kibana processes the indexed data.
  10. User accesses Kibana to search documents.
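To make step 3 concrete, the key-value pairing can be sketched by walking the Block list that Amazon Textract's document analysis returns; the sample response below is hand-made for illustration and far smaller than a real one:

```python
def text_of(block, by_id):
    """Concatenate the WORD children of a KEY_VALUE_SET block."""
    words = []
    for rel in block.get("Relationships", []):
        if rel["Type"] == "CHILD":
            words += [by_id[i]["Text"] for i in rel["Ids"]]
    return " ".join(words)

def key_values(blocks):
    """Pair each KEY block with the text of its linked VALUE block."""
    by_id = {b["Id"]: b for b in blocks}
    pairs = {}
    for b in blocks:
        if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
            for rel in b.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    pairs[text_of(b, by_id)] = text_of(by_id[rel["Ids"][0]], by_id)
    return pairs

# Hand-made sample mimicking the shape of a Textract response.
sample = [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1", "w2"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w3"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "WELL"},
    {"Id": "w2", "BlockType": "WORD", "Text": "NUMBER"},
    {"Id": "w3", "BlockType": "WORD", "Text": "A-12"},
]
assert key_values(sample) == {"WELL NUMBER": "A-12"}
```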

Steps illustrated with more detail:

1. User uploads the document for analysis to Amazon S3. The uploaded document can be an image file or a PDF. Here we are using the S3 console for document upload. Figure 2 shows the sample file used for this demo.

Figure 2. Sample input form

2. Amazon S3 upload event initiates AWS Lambda function Fn-A. Refer to the AWS tutorial to learn about S3 Lambda configuration. View Sample code for Lambda FunctionA.

3. AWS Lambda function Fn-A invokes Amazon Textract. Amazon Textract uses artificial intelligence (AI) to read as a human would, by extracting text, layouts, tables, forms, and structured data with context and without configuration, training, or custom code.

4. Amazon Textract starts processing the file as it is uploaded. This process takes a few minutes, since the file is a multipage document.

5. Amazon Textract notifies Amazon SNS on completion. Amazon Textract processing works asynchronously, as we decouple our architecture using Amazon SQS. To configure Amazon SNS to send data to Amazon SQS:

  • Create an SNS topic. ‘AmazonTextract-SNS’ is the SNS topic that we created for this demo.
  • Then create an SQS queue. ‘AmazonTextract-SQS’ is the queue that we created for this demo.
  • To receive messages published to a topic, you must subscribe an endpoint to the topic. When you subscribe an endpoint to a topic, the endpoint begins to receive messages published to the associated topic. Figure 3 shows the SNS topic ‘AmazonTextract-SNS’ subscribed to Amazon SQS queue.
Figure 3. Amazon SNS configuration

Figure 4. Amazon SQS configuration

6. Configure SQS queue to initiate the AWS Lambda function Fn-B. This should happen upon receiving extracted data via SNS topic. Refer to this SQS tutorial to learn about SQS Lambda configuration. See Sample code for Lambda FunctionB.

7. AWS Lambda function Fn-B invokes Amazon Comprehend for the custom entity recognition.

Figure 5. Lambda FunctionB configuration in Amazon Comprehend

  • Configure Amazon Comprehend to create a custom entity recognizer (text-job2) for entities such as API_Number, Lease_Number, Water_Depth, and Well_Number, using the model trained in the previous step to recognize variants (well_no, well#, well num). For instructions on labeling your data, see Developing NER models with Amazon SageMaker Ground Truth and Amazon Comprehend.
Figure 6. Comprehend job

  • Now create an endpoint for the custom entity recognizer so that the Lambda function can send data to the Amazon Comprehend service, as shown in Figures 7 and 8.
Figure 7. Comprehend endpoint creation

  • Copy the Amazon Comprehend endpoint ARN to include it in the Lambda function as an environment variable (see Figure 5).
Figure 8. Comprehend endpoint created successfully

8. Launch an Amazon OpenSearch domain. See Creating and managing Amazon OpenSearch Service domains. The data is indexed and populated into Amazon OpenSearch. The Amazon OpenSearch domain name is configured at Lambda FnB as an environment variable to push the extracted data to OpenSearch.

9. Kibana processes the indexed data from Amazon OpenSearch, which is then displayed in the dashboard shown in Figure 9.

Figure 9. Kibana dashboard showing Amazon OpenSearch data

10. Access Kibana for document search. The selected fields can be viewed as a table using filters; see Figure 10.

Figure 10. Kibana dashboard table view for selected fields

You can search for LEASE_NUMBER = OCS-031, as shown in Figure 11.

Figure 11. Kibana dashboard search on Lease Number

Or you can search for all the information where WATER_DEPTH = 60; see Figure 12.

Figure 12. Kibana dashboard search on Water Depth

Cleanup

  1. Shut down OpenSearch domain
  2. Delete the Comprehend endpoint
  3. Clear objects from S3 bucket

Conclusion

Data is growing at an enormous pace in all industries. As we have shown, you can build an ML-based text extraction solution to uncover the unstructured data from PDFs or images. You can derive intelligence from diverse data sources by incorporating a data extraction and optimization function. You can gain insights into the undiscovered data, by leveraging managed ML services, Amazon Textract, and Amazon Comprehend.

The extracted data from PDFs or images is indexed and populated into Amazon OpenSearch. You can use Kibana to search and visualize the data. By implementing this solution, customers can reduce the costs of physical document storage, in addition to labor costs for manually identifying relevant information.

This solution will drive decision-making efficiency. We discussed the oil and gas industry vertical as an example for this blog. But this solution can be applied to any industry that has physical/scanned documents such as legal documents, purchase receipts, inventory reports, invoices, and purchase orders.

Cloudflare re-enforces commitment to security in Germany via BSIG audit

Post Syndicated from Rebecca Rogers original https://blog.cloudflare.com/bsig-audit-and-beyond/

As a large data processing country, Germany is at the forefront of security and privacy regulation in Europe and sets the tone for other countries to follow. Analyzing and meeting the requirements to participate in Germany’s cloud security industry requires adherence to international, regional, and country-specific standards. Cloudflare is pleased to announce that we have taken appropriate organizational and technical precautions to prevent disruptions to the availability, integrity, authenticity, and confidentiality of Cloudflare’s production systems in accordance with BSI-KritisV. TÜViT, the auditing body tasked with auditing Cloudflare, provides the resulting evidence to the BSI every two years. Completion of this audit allows us to comply with the NIS Directive within Germany.

Why do cloud companies operating in Germany need to go through a BSI audit?

In 2019, Cloudflare registered as an ‘Operator of Essential Services’ under the EU Directive on Security of Network and Information Systems (NIS Directive). The NIS Directive is cybersecurity legislation with the goal of enhancing cybersecurity across the EU. Every member state has started to adopt national legislation for the NIS Directive, and the criteria for compliance are set individually by each country. As an ‘Operator of Essential Services’ in Germany, Cloudflare is regulated by the Federal Office for Information Security (the BSI) and must adhere to the requirements set by the BSI.

What does the audit prove?

This audit includes a thorough review of Cloudflare’s security controls in the following areas:

  • Asset Management
  • Risk Analysis
  • Business Continuity and Disaster Recovery
  • Personnel and Organizational Security
  • Encryption
  • Network Security
  • Security Authentication
  • Incident Response
  • Vendor Security
  • Physical Security

In addition to an audit of Cloudflare’s security controls in the aforementioned areas, TÜViT also conducted a thorough review of Cloudflare’s Information Security Management System (ISMS).

By having these areas audited, German customers can rest assured that Cloudflare respects the requirements put forth by the governing bodies tasked with protecting their data.

Are there any additional German-specific audits on the horizon?

Yes. Cloudflare is currently undergoing an independent third-party audit for the Cloud Computing Compliance Criteria Catalog (C5) certification. The C5 was introduced by BSI Germany in 2016 and reviews operational security within cloud services. Industries that place a high level of importance on C5 include cloud computing and German federal agencies. Learn more here.

What other certifications does Cloudflare hold that demonstrate its dedication to privacy and security?

Different certifications measure different elements of a company’s security or privacy posture. Cloudflare has met the requirements of the following standards:

  • ISO 27001 – Cloudflare has been ISO 27001 certified since 2019. Customers can be assured that Cloudflare has a formal information security management program that adheres to a globally recognized standard.
  • SOC2 Type II – Cloudflare maintains SOC reports that include the security, confidentiality, and availability trust principles.
  • PCI DSS – Cloudflare engages with a QSA (Qualified Security Assessor) on an annual basis to evaluate us as a Level 1 Merchant and a Service Provider.
  • ISO 27701 – Cloudflare was one of the first companies in the industry to achieve ISO 27701 certification as both a data processor and controller. The certification provides assurance to our customers that we have a formal privacy program that is aligned to GDPR.
  • FedRAMP In Process – Cloudflare hit a major milestone by being listed on the FedRAMP Marketplace as ‘In Process’ for receiving an agency authorization at a moderate baseline. Once an Authorization to Operate (ATO) is granted, it will allow agencies and other cloud service providers to leverage our product and services in a public sector capacity.

Pro, Business, and Enterprise customers now have the ability to obtain a copy of Cloudflare’s certifications, reports, and overview through the Cloudflare Dashboard. For the latest information about our certifications and reports, please visit our Trust Hub.
