The Dilemma Facing the Middle East: A Conversation with Nadim Shehadi

Post Syndicated from Miroslav Zafirov original https://toest.bg/nadim-shehadi-interview/

Nadim Shehadi (b. 1956) is the executive director of the Lebanese American University's academic center in New York and an associate fellow at the Royal Institute of International Affairs (Chatham House), where he previously headed the Middle East programme. He has served as director of the Fares Center for Eastern Mediterranean Studies at the Fletcher School of Law and Diplomacy, Tufts University, and was for many years director of the Centre for Lebanese Studies at St Antony's College, Oxford. He is a leading scholar on the Middle East and North Africa and has advised several governments and international organizations, including the European Union. His articles have appeared in major international news outlets such as The New York Times, The Guardian, and CNN.

Miroslav Zafirov spoke with Mr. Shehadi about the recently held parliamentary elections in Lebanon and, more broadly, about the situation in the Middle East in light of the current global crises.


Many regard Lebanon as a panopticon of the Middle East in miniature. To the extent that this is true, do you think it is difficult to predict the future of the Middle East? For many years now, the region has seemed unable to find itself.

Most dissidents, intellectuals, and businesspeople used to find refuge in Beirut, where they could freely practice their professions and express themselves. In practice, Beirut was a battleground of ideas that only later found an audience elsewhere. The Lebanese civil war of 1975 to 1990 became a laboratory for competing ideologies: Islam and nationalism, the secular and the religious, Shia and Sunni Islam, liberalism and authoritarianism, and so on.

We are no longer in that position; today you will find that debate in London rather than in Beirut. The region gradually entered a spiral of unpredictability, a process that began as early as the 1950s. The rise of the secular nationalist military regimes, which gradually pushed aside the traditional aristocracy and the elites left over from the Ottoman Empire, showed its true face: that of the repressive regime. That regime was especially ruthless toward the movements of political Islam. But the Islamists were not the only target; the political left and the liberals soon fell victim to persecution as well.

All of this probably lies at the root of the so-called Arab Spring and underpins the young generation's demand for resolute resistance to the dominance of organizations such as the militias tied to Iran's Islamic Revolutionary Guard Corps. The struggle was against the IRGC's attempt to gain the upper hand wherever the institutions of the state were weak or absent.

What is your opinion of the recently held parliamentary elections in Lebanon? After years of mounting tension, economic crisis, and political instability, do you think the country is ready to heal its wounds?

The elections in Lebanon resemble those in Iraq in 2021, insofar as the results showed a decline in the influence of the pro-Iranian parties and militias. The public mood in the two countries was similar in spirit. In Lebanon's case, Hezbollah enjoyed absolute control over the Shia vote, and its coalition with the other Shia party, Amal, secured it 27 seats in parliament, although the two parties lost the two seats that were not reserved for Shia candidates.

Lebanon's system allows Hezbollah and its allies to exercise a veto over all appointments: speaker of parliament, prime minister, president, and so on. The coalition can paralyze any electoral process until the conditions set by Hezbollah are met. They did so in 2006, when they occupied central Beirut and paralyzed the country for 19 months. The crisis ended only with the 2008 Doha Agreement, which effectively institutionalized Hezbollah's veto. Later, the Shia coalition again blocked the political process for another 29 months, between 2013 and 2016, when Lebanon went without a president, parliamentary elections were delayed, and the government was unable to function. And all of this so that Hezbollah could once again impose its choice of head of state.

Parliamentary elections were held in Lebanon on May 15, 2022, and for the first time after decades of dominance Hezbollah lost its majority. The coalition led by the Shia movement holds 62 of the 128 seats in the National Assembly. The opponents of the Shia coalition, including the Saudi-funded Christian party Lebanese Forces, won a majority. Among the big surprises was the victory of the movement of the Druze leader Walid Jumblatt over his historical rival and leader of the other main Druze family, Talal Arslan. Jumblatt is known for his stance against Syrian influence in Lebanon. Also noteworthy was the low turnout among the Sunni population: many supporters of former prime minister Saad Hariri, the son of Rafik Hariri, who was killed in 2005, thereby expressed support for his decision not to take part in the elections.

What is your analysis of the parliamentary results? Are we really seeing a gradual weakening of Hezbollah?

The results showed growing opposition to Hezbollah, but they affected only its non-Shia allies. The main "casualties" were the pro-Syrian candidates whose names had been put on the lists under pressure from Syria or from Hezbollah. None of this, however, will much affect the party's position, because its hegemony over the Shia community remains, as does its influence over the Shia members of parliament. It can still dictate its terms.

It is important to note that the joint lists of Amal and Hezbollah are the result of an agreement reached after three years of negotiations. Amal's leader Nabih Berri had once declared that Hezbollah was a Dracula, living off the blood it spills, and accused the party of being responsible for the deaths of more Shia than Israel itself. Several leading figures in Amal were killed, while others left the country or were sidelined, until the agreement on a joint list was finally concluded. Thus what later became known as the "Shia duo" came into being, and it prevented the emergence of new independent political voices among the Shia. In the Shia parts of Lebanon the joint list received almost total support: as in a vote under an authoritarian regime, 97.6% of Shia votes went to the Amal-Hezbollah coalition.

In 2020, in an article for The New York Times, Lina Mounzer wrote that the Lebanese, despite their conviction that they could survive anything, turned out to have been deceived. Do you agree with her? And if so, what is it that the Lebanese could not survive: the consequences of the Taif Agreement? Hezbollah's rise over the years? The Syrian war, and before it, the Israeli occupation of southern Lebanon?

I admit I do not remember that article, but even the past two years in Lebanon have been hard enough. Many Lebanese have realized that they can no longer "survive anything." The Lebanese have had to choose between several dilemmas: confrontation or compromise, security or freedom, war or peace, and so on. But in the end our political system could not withstand Hezbollah's constant pressure. It is like playing football under the sights of a sniper who, at any moment, can eliminate any of the players at will. Some Lebanese blame the system, but I personally think no system could bear such a partner and survive under such pressure.

You are known for your position on Iran. Given the sanctions, do you think Iran has somehow managed to survive despite all the attacks against it? And has Tehran managed to find a way out of the crisis in Syria?

My position is not on Iran but on a regional criminal structure whose victims include Iranian citizens themselves, just as the Syrians and Iraqis are its victims, and to some extent the Palestinians and Yemenis as well. The region is hostage to the IRGC and its armed allies. They all use the same toolkit: assassinations, paralysis of the political system, intimidation.

We are all living with the consequences of the Iran-Iraq war of 1980. The war allowed Saddam Hussein's regime to consolidate its power and impose full control over Iraq, just as the IRGC did in Iran. Hezbollah is a continuation of that process, using the same methodology. Today, however, we see a new generation of people standing up against this trend in Lebanon, in Iraq, and in Iran itself.

In 2011 and 2012, thanks to your help, Sofia managed to host the first real meeting of the Syrian opposition. Many were skeptical, but the meeting produced the first joint declaration, which, unfortunately, was subsequently disregarded by some parties. Today, ten years later, is there a way out of the ongoing crisis in Syria, or has it become a chronic problem, like the Palestinian question: two crises that affect the stability of the region?

From my work on Syria I can draw two conclusions. The first is that it was a mistake to try to form an opposition to the regime in Damascus. Such an attempt actually worked in the regime's favor, because it forced the opposition itself to prove its legitimacy and viability. Damascus took advantage of the situation, as the world's attention focused on the opposition rather than on the regime's deeds.

The emergence of the opposition also allowed the media to start talking about a "civil war" in Syria, shifting the focus away from the revolution that had taken place there and from the crimes that Bashar al-Assad's regime was committing against its own citizens in order to stay in power. The opposition was under pressure to prove its legitimacy: that it was united, that it was secular, that it had strong leadership, and so on. Whereas in fact a degree of pluralism, and even the absence of a single leadership, could be seen as a positive phenomenon in an emerging democratic society after the Assad era.

The second conclusion I can draw is that more than 50 years of rule by the Assad family reduced Syrian society to a state of political dysfunction. People rose up, taking to the streets anonymously, but the moment they encountered any form of leadership or alternative groups, an ingrained distrust and suspicion immediately surfaced in them. A suspicion that the regime and its security services deliberately fueled.

If you end up in prison, you will be asked to become an informer. Those who refuse never leave prison and die there. People who have spent years in prison, and people whose families have suffered, immediately fall under suspicion simply because they managed to survive. All of this resembles the behavior of the Stasi and the other security services. That is why it was pointless to try to form an opposition while the regime was still in place, expecting that we could replace it. There was a little Bashar in every person in Syria at that time.

Since February 24 this year, the system of international relations bequeathed to us after the Second World War seems to have ceased to exist. Russia de facto no longer enjoys the privilege of belonging to the club of the so-called great powers represented by the permanent members of the UN Security Council. Is it worth yielding to the temptation to imagine a world beyond the model laid down by the victorious states, or should we rather preserve the existing order until we find something better to replace it with?

Many countries in the Middle East seem to look at the war in Ukraine with a kind of optimism. The West ignored Russia's actions in Syria, and Western states now realize that what is happening in Ukraine is, in a way, a rerun of the Syrian scenario. We kept repeating that Bashar al-Assad was following the rules of Hama and Grozny and that Iran was acting in a criminal manner in Syria through its militias, which carried out mass killings. But the West chose to strike a deal with Russia, Iran, and al-Assad at the expense of the Syrian people and at the price of a deepening crisis in the Middle East.

Many in the Middle East are somewhat reluctant to follow the actions of the US and the EU toward Russia. Egypt, Iraq, and key Gulf states are being quite cautious. What do you make of this?

There is very little trust in the region in the ability of the US or the EU to hold their ground in Ukraine. We saw both Washington and Brussels all but concede defeat in this war before it had even begun. President Zelensky was offered the chance to leave Ukraine with his family, while the US and the EU discussed sanctions to be imposed only after the war, and even on that there was no agreement. To discuss the consequences of an event is, in effect, to accept that it has already happened. The Gulf states and Iraq are directly exposed to the Iranian threat and have no confidence that the US will protect them. Egypt is firmly in the Gulf states' orbit, and Abdel Fattah el-Sisi's regime could not survive even a week without the financial support of the UAE and Saudi Arabia. That is how I see events.

Cover photo: © Chatham House / Flickr

Source

On the Ukrainian Refugees… and on Us

Post Syndicated from original http://www.gatchev.info/blog/?p=2457

Lately I have heard a ton of bile directed at the Ukrainian refugees:

– that they are scum, because they have nice cars and homes
– that they want to fill up with petrol and eat in restaurants without paying
– that we give them 40 leva a day out of our own pockets, and they want more
– (even though they have whole suitcases stuffed with dollars and euros, and maisonettes on the Black Sea coast)
– that we pay out of our own pockets for them to live in five-star hotels while Bulgarian pensioners starve
– that they are ingrates, because they don't like being crammed six to an old railway container
– that they are ingrates, because they are outraged that someone here shot at their children with an air rifle
– that they are ingrates, because they don't like being beaten up in the street by Bulgarian patriots
– that they are demanding, because some want access to a pharmacy, and some even to a doctor
– that they are insolent, because they are surprised when the food comes without cutlery
– that they have no place here, since after a whole week they still haven't learned Bulgarian properly
– that if we shelter them, Russia will attack us (deservedly so, since the Ukrainians are Nazis!)
– that, mind you, in Germany the clever Germans give them no benefits at all
– that, mind you, in Germany the stupid Germans give them 200 euros a day in benefits, and the damned wretches live carefree
– and finally, that they are not refugees at all, since after a certain stay here they start fleeing from Bulgaria, back to Ukraine, back to the war…

(I have cut this list down several times, reader; after all, you have done me no wrong…)

Against all this talk stands just one statement, plain and simple: that the Ukrainian refugees are fleeing a war and need help. No more is needed. The truth about any question is one; the lies can be endlessly many and varied.

But there is something even more important than that.

What strings in the human soul are these claims trying to play on?
What qualities and traits does it take to believe them?
And what qualities and traits does it take to spread them?
What kind of person must that be, inside, beneath the mask they wear?

And what strings in the soul does the other statement touch: that the refugees are fleeing evil and need help?
What qualities must a person have to think of that first, and only then about whether they have money?
What qualities does it take to help them with whatever one can: shelter, donations, a kind word on the internet?…
What must such a person be like inside, beneath the mask of everyday life?

Which of these two types of people would you want as your close ones, your friends… your own?
And which is it wise to guard against and keep at a distance?
Which of these two types of people would you trust, and which would you not?
Which of them will repay even your kindness with evil, and which even your wrongs with kindness?

Among which of these types of people do you want to live, and to belong?

You are making precisely that choice right now.

I have nothing to add.

One developer’s journey bringing Dependabot to GitHub Enterprise Server

Post Syndicated from Landon Grindheim original https://github.blog/2022-06-07-one-developers-journey-bringing-dependabot-to-github-enterprise-server/

If you’re like me, you’re still excited by last week’s news that Dependabot is generally available on GitHub Enterprise Server (GHES). Developers using GHES can now let Dependabot secure their dependencies and keep them up-to-date. You know who would have loved that? Me at my last job.

Before joining GitHub, I spent five years working on teams that relied on GHES to host our code. As a GHES user, I really, really wanted Dependabot. Here’s why.

🤕 Dependencies

One constant pain point for my previous teams was staying on top of dependencies. Creating a Rails project with rails new results in an app with 74 dependencies, Django apps start with 88 dependencies, and a project initialized with Create React App will have 1,432 dependencies!

Unfortunately, security vulnerabilities happen, and they can expose your customers to existential risk, so it’s important they are handled as soon as they’re published.

As I’m most familiar with the Ruby ecosystem, I’ll use Nokogiri, a gem for parsing XML and HTML, to illustrate the process of manually resolving a vulnerability. Nokogiri has been a dependency of every Rails app I’ve maintained. It’s also seen seven vulnerabilities since 2019. To fix these manually, we’ve had to:

  • Clone `my_rails_app`
  • Track down and parse the Nokogiri release notes
  • Patch Nokogiri in `my_rails_app` to a non-vulnerable version
  • Push the changes and open a pull request
  • Wait for CI to pass
  • Get the necessary reviews
  • Deploy, observe, and merge
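
The patch step itself usually amounts to bumping a one-line version constraint and re-resolving. A hypothetical Gemfile change (the version number here is illustrative; the patched release should always be taken from the actual advisory):

```ruby
# Gemfile of my_rails_app -- require at least the patched release
gem "nokogiri", ">= 1.13.6"  # hypothetical patched version; check the advisory
```

followed by `bundle update nokogiri` to regenerate `Gemfile.lock`.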

This is just one of (at least) 74 dependencies in one Rails app. My team maintained 14 Rails apps in our microservices-based architecture, so we needed to repeat the process for each app. A single vulnerability would eat up days of engineering time. That’s just one dependency in one ecosystem. We also worked on apps written in Elixir, Python, JavaScript, and PHP.

If an engineer was patching vulnerabilities, they couldn’t pursue feature work, the thing our customers could actually see. This would, understandably, lead to conversations about which vulnerabilities were most likely to be exploited and which we could tolerate for now.

If we had Dependabot security updates, that process would have started with a pull request. What took an engineer days to complete on their own could have been done before lunch.

We could have invested in keeping all of our dependencies up-to-date. Incremental upgrades are typically easier to perform and pose less risk. They also give bad actors less time to find and exploit vulnerabilities. One of my previous teams was still running Rails 3.2, which was no longer maintained when Rails 6 was released six years later. As support phased out, we had to apply our own security patches to our codebase instead of getting them from the framework. This made upgrading even harder. We spent years trying to get to a supported version, but other product priorities always won out.

If my team had Dependabot version updates, Dependabot would have opened pull requests each time a new version of Rails was released. We’d still need to make changes to ensure our apps were compliant with the new versions, but the changes would be made incrementally, making the lift much lighter. But we didn’t have Dependabot. We had to upgrade manually, and that meant upgrading didn’t happen until it became a P0.
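
Enabling version updates on a repository is a matter of committing a configuration file. A minimal sketch of `.github/dependabot.yml` (the ecosystem, schedule, and pull request limit shown here are illustrative choices, not values from this post):

```yaml
# .github/dependabot.yml -- a minimal, illustrative configuration
version: 2
updates:
  - package-ecosystem: "bundler"   # for a Rails app's Gemfile
    directory: "/"                 # location of the manifest
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5    # cap concurrent update PRs
```

With a file like this in place, Dependabot opens pull requests on the configured schedule instead of waiting for a vulnerability to force the work.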

A new home

I joined GitHub in 2021 to work on Dependabot. Being intimately familiar with the challenges Dependabot could help address, I wanted to be part of the solution. Little did I know, the team was just starting the process of bringing Dependabot to GHES. Call it serendipity, a dream come true, or tea leaves arranged just so.

I quickly realized why Dependabot wasn’t already on GHES. GitHub acquired Dependabot in 2019, and it took some time to scale Dependabot to be able to secure GitHub’s millions of repositories. To achieve this, we ported the service’s backend to run on Moda, GitHub’s internal Kubernetes-based platform. The dependency update jobs that result in pull requests were updated to run on lightweight Firecracker VMs, allowing Dependabot to create millions of pull requests in just hours. It was an impressive effort by a small team.

That effort, however, didn't lend itself to the architecture of GHES, where everything runs on a single server with limited resources. An auto-scaling backend and network of VMs wasn't an option. Instead, we needed to port Dependabot's backend to run on Nomad, the container orchestration option on GHES. The jobs running on Firecracker VMs needed to run on our customers' hardware. Fortunately, organizations can self-host GitHub Actions runners in GHES, so we adapted those jobs to run on GitHub Actions. We also had to adjust our development processes to support both continuous delivery in the cloud and less frequent GHES releases.

The result is that developers relying on GHES now have the option to have their dependencies updated for them. Now, my former teammates can update their dependencies by:

  • Viewing the already opened pull request
  • Reviewing the pull request and the included release notes
  • Deploying, observing, and merging

We’re really proud of that. As for me, I get the immense satisfaction of knowing that I built something that will directly benefit my former teammates. It doesn’t get much better than that!

Guess what? GitHub is hiring. What would you like to make better?

If you’re inspired to work at GitHub, we’d love for you to join us. Check out our Careers page to see all of our current job openings.

  • Dedicated remote-first company with flexible hours
  • Building great products used by tens of millions of people and companies around the world
  • Committed to nurturing a diverse and inclusive workplace
  • And so much more!

AWS HITRUST Shared Responsibility Matrix version 1.2 now available

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/aws-hitrust-shared-responsibility-matrix-version-1-2-now-available/

The latest version of the AWS HITRUST Shared Responsibility Matrix is now available to download. Version 1.2 is based on HITRUST MyCSF version 9.4[r2] and was released by HITRUST on April 20, 2022.

AWS worked with HITRUST to update the Shared Responsibility Matrix and to add new controls based on MyCSF v9.4[r2]. You don't have to assess these additional controls, because AWS already completed its HITRUST assessment using version 9.4 in 2021. You can deploy your environments on AWS and inherit our HITRUST Common Security Framework (CSF) certification, provided that you use only in-scope services and apply the controls detailed on the HITRUST website.

What this means for our customers

The new AWS HITRUST Shared Responsibility Matrix has been tailored to reflect both the Cross Version ID (CVID) and Baseline Unique ID (BUID) in HITRUST so that you can select the correct control for inheritance even if you’re still using an older version of HITRUST MyCSF for your own assessment.

With the new version, you can also inherit some additional controls based on MyCSF v9.4[r2].

At AWS, we’re committed to helping you achieve and maintain the highest standards of security and compliance. We value your feedback and questions. You can contact the AWS HITRUST team at AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security ‘how-to’ content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya

Sonali leads multiple AWS global compliance programs, including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications such as CISSP, C-GDPR|P, CCSK, CEH, CISA, PCIP, ISO 27001, and ISO 22301 Lead Auditor.

AWS achieves ISO 22301:2019 certification

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/aws-achieves-iso-223012019-certification/

We’re excited to announce that Amazon Web Services (AWS) has successfully achieved ISO 22301:2019 certification with no audit findings. The certification is a rigorous, independent third-party assessment against the international standard for Business Continuity Management (BCM). Published by the International Organization for Standardization (ISO), ISO 22301:2019 is designed to help organizations prevent, prepare for, respond to, and recover from unexpected and disruptive events.

EY CertifyPoint, an independent third-party auditor, issued the certificate on June 2, 2022. The covered AWS Regions are included on the ISO 22301:2019 certificate, and the full list of AWS services in scope for ISO 22301:2019 is available on our ISO and CSA STAR Certified webpage. You can view and download the AWS ISO 22301:2019 certificate on demand online and in the AWS Management Console through AWS Artifact.

As always, we value your feedback and questions and are committed to helping you achieve and maintain the highest standard of security and compliance. Feel free to contact our team through AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya


Introduction to Amazon QuickSight ML Insights

Post Syndicated from Rashid Sajjad original https://aws.amazon.com/blogs/big-data/introduction-to-amazon-quicksight-ml-insights/

Amazon QuickSight was launched in November 2016 as a fast, cloud-powered business analytics service to build visualizations, perform ad hoc analysis, and quickly get business insights from a variety of data sources. In 2018, ML Insights for QuickSight (Enterprise Edition) was announced, adding machine learning (ML)-powered forecasting and anomaly detection with a few clicks. These insights are automatically generated as suggested insights, and you can also add custom insights to your analysis. Because they’re written out in narrative form, they’re easily consumable by non-technical users and are a great way to increase adoption of your dashboards. Let’s dive deeper into how these insights are built and how to set up your data correctly to get the most out of the Suggested Insights feature.

What are ML Insights?

QuickSight uses ML to help uncover hidden insights and trends in your data. It does that by using an ML model that, over time and as an increasing volume of data is fed into QuickSight, continually learns and improves its ability to provide three key features (as of this writing):

  • ML-powered anomaly detection – Detect outliers that show significant variance from the dataset. This can help identify significant changes in your business metrics, such as low-performing stores or products, or top-selling items.
  • ML-powered forecasting – Detect trends and seasonality to forecast based on historical data. This can help project sales, orders, website traffic, and more.
  • Autonarratives – Embed narratives in your dashboard to tell the story of your data in plain language. This can help convey a shared understanding of the data within your organization. You can use either the suggested autonarrative or you can customize the computations and language to meet your organization’s unique requirements.

How does the ML model work?

QuickSight uses a built-in version of the Random Cut Forest (RCF) algorithm, a variant of the Random Forest (RF) approach, a widely used and successful technique in ML. The algorithm takes random samples of data points and recursively cuts (partitions) each sample to build a collection of models, where each model is a decision tree, hence the name "forest." Because classic RFs can't easily be updated in an incremental manner, RCF was designed with tree-construction rules that allow incremental updates as new data arrives.

The key takeaway is that RCF is great for finding anomalies and building forecasts. This algorithm is good at finding data points that are outliers or finding trends and patterns to forecast future values.
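
QuickSight's implementation isn't public, but the core intuition behind random-cut trees can be sketched in a few lines: points in sparse regions get isolated by random cuts much sooner than points in dense regions. The following toy one-dimensional version is my own simplification (the function names and parameters are invented for illustration; this is neither RCF's nor QuickSight's actual model):

```python
import random

def path_length(data, point, rng, depth=0, max_depth=10):
    """Recursively cut the value range at random; outliers end up alone in
    a partition after fewer cuts than points in dense regions do."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    cut = rng.uniform(lo, hi)
    # keep only the points that fall on the same side of the cut as `point`
    side = [x for x in data if (x < cut) == (point < cut)]
    return path_length(side, point, rng, depth + 1, max_depth)

def anomaly_score(data, point, n_trees=200, seed=7):
    """Average isolation depth over many random trees; a LOWER value means
    the point is easier to isolate, i.e. more anomalous."""
    rng = random.Random(seed)
    return sum(path_length(data, point, rng) for _ in range(n_trees)) / n_trees

series = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4, 9.9, 95.0]
# The outlier (95.0) is usually separated by the very first cut, so its
# average depth should come out much smaller than a typical point's.
print(anomaly_score(series, 95.0), anomaly_score(series, 10.1))
```

The same idea generalizes to higher dimensions by cutting along randomly chosen axes, which is what makes the family of random-cut methods well suited to anomaly detection.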

One important thing to know about ML models is that each model is good at a certain set of predictive activities, but no one model is good for all activities.

Now that you understand what the RCF model is good at, namely anomaly detection and forecasting, you need to make sure the data meets certain requirements, so let’s walk through those steps.

Best practices for setting up data

To maximize the RCF model’s efficiency, the data that is being imported needs to contain certain properties:

  • At least one metric – Whatever you’re measuring (sold units, orders, and so on).
  • At least one dimension – The category or slice by which you look at the metric (product category, industry, customer type, and so on).
  • Data volumes – Your dataset requirements depend on your objective:
    • Anomaly detection – Requires at least 15 data points. For example, if you have Bicycles as a product category and want to detect anomalies at a daily level, you need at least 15 days of transactions (you could have multiple rows for multiple transactions in a given day) for Bicycles in the dataset.
    • Forecasting – This works best with a large dataset simply because the more history you have, the better the model can extract patterns and trends and generate future probable values. If you have daily aggregates, you need at least 38 days of data.
  • At least one date column – Required if you want to detect anomalies or generate forecasts over time.
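
The requirements above can be turned into a rough pre-flight check before you upload a file. A minimal sketch in plain Python (the helper name, field names, and toy data are all hypothetical, and this only approximates what QuickSight itself validates):

```python
from datetime import date, timedelta

def check_ml_insights_ready(rows, metric, dimension, date_field, min_points=15):
    """Rough pre-flight check: required columns present, and enough distinct
    dates (15 for anomaly detection; use min_points=38 for daily forecasting)."""
    if not rows:
        return False
    sample = rows[0]
    has_columns = all(k in sample for k in (metric, dimension, date_field))
    distinct_days = {r[date_field] for r in rows}
    return has_columns and len(distinct_days) >= min_points

# Toy dataset: 16 daily "Bicycles" transactions, one row per day.
start = date(2022, 6, 1)
rows = [{"date": start + timedelta(days=i), "units_sold": 5 + i % 3,
         "product_category": "Bicycles"} for i in range(16)]

print(check_ml_insights_ready(rows, "units_sold", "product_category", "date"))
# → True: 16 distinct days clears the 15-point anomaly-detection minimum
```

The same call with `min_points=38` would return False, signaling that this dataset is still too short for daily forecasting.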

QuickSight supports a wide variety of connections, like Amazon Simple Storage Service (Amazon S3), Amazon Athena, and Apache Spark. For more information about supported connections and some connection examples, refer to Amazon QuickSight Connection examples.

Get started with Suggested Insights

Let’s use a sample dataset and walk through an example of how to use the Suggested Insights feature.

To get started, let’s download a sample dataset from the public domain. For this post, we use House Sales in King County, USA. You need to have a Kaggle account to download the resource.

  1. Download and unzip the file.

If you inspect the CSV file, you will notice it has the right grain (date), metrics (price, bedrooms), and categories (zipcode, waterfront).

Depending on what your analysis needs are, even bedrooms could be a category by which you analyze price. So your metrics and categories ultimately depend on your analysis goals.

  1. Log in to your QuickSight account or sign up for a QuickSight Enterprise Edition account to use ML Insights.

We need to create a dataset first before we can create a QuickSight analysis.

  1. Choose New dataset.
  2. Choose Upload a file.
  3. Choose the unzipped CSV file.
  4. In the pop-up window, confirm the file upload settings, then choose Edit settings and prepare data.

You’re redirected to the data preparation editor. This is one of the most important yet overlooked functions in QuickSight.

This editor allows you to review your imported fields and their data types, specify if the field will be used as a dimension or measure, along with many other important data import functions. For production datasets, you should spend time reviewing how the dataset has been set up here.

Our sample CSV file is imported into SPICE by default. SPICE is QuickSight’s in-memory engine for fast querying of imported data. For more details, see Importing data into SPICE.

  1. Choose Save & publish to start importing the CSV file into the SPICE engine.

The default dataset name is the file name that was imported, so in our case it’s kc_house_data. You can choose the dataset on the Datasets page to see the import stats for the dataset.

  1. Choose Create analysis to start creating your QuickSight analysis.

The analysis editor page starts by showing a blank Sheet 1 on your workspace. On the top right, your dataset’s import stats are shown again (this becomes important when importing or refreshing large datasets because the import job might still be in progress).

Let’s start by creating our first visual. The default visual type is AutoGraph, which will try to pick the best visual type based on the fields being selected.

  1. Choose the date field.

The visual changes to Count of Records by Date, with the date aggregation set to Day.

  1. To change the aggregation to monthly, choose the down arrow next to date on the X axis.
  2. Choose the price field.

The AutoGraph detects that the date is a dimension (blue color) and the price is a measure (green color) because these were set up like that in the dataset editor screen (I mentioned earlier how important the data preparation editor was).

Because these fields are already set up as dimensions and measures, the AutoGraph automatically changes to Sum of Price by Date.

This visualization isn’t very helpful. What we’re really looking for is the average price per month.

  1. For Field wells, choose price for Value and change the aggregate to Average.

We now have a nice visual that shows us the average sale price of homes in King County by month.
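
The aggregation QuickSight just performed (group by month, average the price) can be reproduced in a few lines outside the tool; a minimal sketch using made-up sale records rather than the real dataset:

```python
from collections import defaultdict
from datetime import date

# Illustrative sale records, not real King County data: (sale_date, price)
sales = [
    (date(2014, 5, 2), 510000),
    (date(2014, 5, 20), 450000),
    (date(2014, 6, 1), 600000),
]

# Accumulate a running sum and count per (year, month) bucket
totals = defaultdict(lambda: [0, 0])
for d, price in sales:
    key = (d.year, d.month)
    totals[key][0] += price
    totals[key][1] += 1

# Average sale price per month, mirroring the visual's Field wells setup
avg_price_by_month = {k: s / n for k, (s, n) in totals.items()}
```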

Now comes the fun part—ML Insights!

  1. In the navigation pane, choose Insights.

Voila! QuickSight has already run the RCF model along with other statistical computations and has generated insights that are ready to be added.

These suggested insights change based on the type of visual and data that is currently in the visual. We look at how suggested insights change later in this post.

Two immediately useful insights are Highest Month and Lowest Month.

Hover over the Highest Month insight and choose the plus sign to add it to the current Sheet 1.

I can start rearranging insights and visuals and format the price field to give my current layout a more polished look.

  1. For this post, change the format of the price field to 1,234 to remove decimals.
  2. You can also add titles for the insights and rename the X axis label date to Aggregate.
  3. To add another sheet, choose the plus sign next to Sheet 1.

By default, we start again with an AutoGraph visual.

  1. Under Visual types, choose the vertical bar chart.
  2. Choose the price and zipcode fields.
  3. Change the aggregation of price from Sum to Average.
  4. Choose Insights in the navigation pane.

Suggested Insights now displays a completely different set of data highlights compared to Sheet 1.

Although the vertical bar chart may already tell you the top three and bottom three zip codes, Suggested Insights already recognized the type of analysis and selected the best insights to display.

Although you might eventually build a visual to portray the intended story, Suggested Insights speeds up the process of showcasing the highlights in your data and adding them to your worksheet to quickly give the reader the most important insights from your visuals.

Anomaly detection

An anomaly in QuickSight is a data point that falls outside the overall pattern of distribution. ML-powered anomaly detection in QuickSight enables you to identify causations and correlations so you can make data-driven decisions.

We already talked about data preparation for anomaly detection earlier. QuickSight already ran the RCF model during data import. As soon as a visual is added, QuickSight notifies you on the visual if it has detected an “Anomaly Insight.” This is part of Suggested Insights. You can choose Setup anomaly detection to add this to your sheet.

You can also manually add an ML insight to detect anomalies.

  1. Let’s go back to Sheet 1 with the line chart displayed.
  2. When you choose the first suggested insight, it starts creating a widget for anomaly detection.

You can add up to five dimension fields (not calculated fields, unless they were created in the data prep screen). QuickSight splits the metrics using the fields in the Categories section. We use the date field (our time dimension), price (our metric), and yr_built (our category) to create an anomaly detection insight. The question we are trying to answer is “Were there any monthly outliers in price based on the year built?”
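
QuickSight’s anomaly detection runs a Random Cut Forest model under the hood. As a much simpler, hedged stand-in that only illustrates the idea of flagging outliers in a monthly series (this is not what QuickSight does internally, and the prices are made up):

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    """Flag indexes of points more than `threshold` standard deviations
    from the mean. A crude illustration of outlier detection, far simpler
    than the Random Cut Forest model QuickSight actually uses."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# Illustrative monthly average prices, with one obvious spike at the end
monthly_avg_price = [450000, 460000, 455000, 470000, 465000, 900000]
outliers = zscore_outliers(monthly_avg_price, threshold=1.5)
```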

  1. Choose Get started to set up anomaly detection.
  2. For Combinations to be analyzed, choose your field combinations.

Choosing Exact means that the date and price are analyzed against the yr_built dimension. You can also choose Hierarchical or All. These latter options become relevant when you choose multiple dimensions in the Categories list. For more information about these options, refer to Adding an ML insight to detect outliers and key drivers.

  1. Choose Save to return to Sheet 1.

Our widget is configured at this point.

  1. Choose Run now to start analyzing the data for anomalies.

Based on the volume of data and the number of data points in the analysis, it may take a while to run the anomaly detection.

Keep in mind that at least 15 data points are needed to run anomaly detection. You can also change the aggregation of a field to zoom out and view anomalies at a higher level.

For example, if you choose the date field and change Aggregate to Monthly, you get the top anomalies at the monthly level.

In our test case, QuickSight identified a top anomaly. This is a great widget that immediately draws the reader to highlights in data that are outliers and might require further investigation.

Forecast

With ML-powered forecasting, you can forecast your key business metrics in QuickSight easily. The ML algorithm in QuickSight is designed to handle complex real-world scenarios. Not only does QuickSight provide the capability to create forecasts, it also provides Forecast as a Suggested Insight.

  1. Going back to Sheet 1, choose the line chart and expand Insights.

At the bottom, you will see a suggested forecast insight. Forecast insights, like all other suggested insights, are dynamic: when your data updates or a user applies filters, the values in the insight update immediately. Once you add this to your sheet, you can even customize how many future periods the forecast displays by editing the Narrative and then the forecast Calculation.

What if we wanted to customize the price forecasting on this line chart and add it in the visual?

  1. Choose the options menu (three dots) at the top right of the visual and choose Add forecast.
  2. For Periods forward, enter 6.

Each period matches the time interval selected for the visual, so with monthly aggregation this forecasts six months ahead.

  1. Set Prediction interval to 70.

The prediction interval controls the confidence band around the forecast: a higher value widens the shaded band to cover more uncertainty, while a lower value narrows it.

  1. Leave Seasonality set to Automatic.

Seasonality takes into account complex seasonal trends in your data. You can experiment with both settings to see how it affects the forecast. For our scenario, because house sales are seasonal, we chose Automatic.

  1. Choose Apply.

With just a few clicks, we have added a forecast to our visual, as shown in the following screenshot. The orange shaded area represents the upper and lower bound of the forecasted price.
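
The shape of that output, a point forecast plus an upper and lower bound per future period, can be mimicked with a deliberately naive sketch. QuickSight’s actual ML forecasting handles trend and seasonality, which this does not; the `interval` factor here only loosely mirrors the prediction-interval idea:

```python
from statistics import mean, stdev

def naive_forecast(history, periods=6, interval=0.70):
    """Project the mean of the most recent points forward with a crude
    symmetric band. Illustrates the output shape (forecast + bounds),
    not QuickSight's forecasting algorithm."""
    point = mean(history[-4:])           # mean of the four latest points
    spread = stdev(history) * interval   # crude width for the band
    return [
        {"period": i + 1, "forecast": point,
         "lower": point - spread, "upper": point + spread}
        for i in range(periods)
    ]

# Illustrative monthly average prices
history = [450000, 460000, 455000, 470000, 465000, 480000]
forecast = naive_forecast(history, periods=6)
```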

This is another great way to add intelligence to your data and quickly let analysts focus on key data points and trends.

Conclusion

The Suggested Insights feature in QuickSight allows you to speed up the discovery and highlighting of key data elements. You can find insights in your data faster, and because they’re written out in narrative format, non-technical users can quickly grasp the most interesting trends in the data with no ML training needed.

For more details on QuickSight ML Insights, refer to the QuickSight documentation or interact with the QuickSight Community.

As always, AWS is customer-obsessed, and we are ready to help with any specific questions.


About the Author

Rashid Sajjad is a Partner Management Solutions Architect focused on Big Data & Analytics at Amazon Web Services. He works with APN Partners to help develop their Migration, Data & Analytics, and AI/ML practices with enterprise, mission-critical solutions for their end customers.

Identifying Cloud Waste to Contain Unnecessary Costs

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/06/07/identifying-cloud-waste-to-contain-unnecessary-costs/

Identifying Cloud Waste to Contain Unnecessary Costs

Cloud adoption has exploded over the past decade or so, and for good reason. Many digital transformation advancements – and even the complete reimagination of entire industries – can be directly mapped and attributed to cloud innovation. While this rapid pace of innovation has had a profound impact on businesses and how we connect with our customers and end users, the journey to the cloud isn’t all sunshine and rainbows.

Along with increased efficiency, accelerated innovation, and added flexibility comes an exponential increase in complexity, which can make managing and securing cloud-based applications and workloads a daunting challenge. This added complexity can make it difficult to maintain visibility into what’s running across your cloud(s).

Beyond management challenges, organizations often run into massive increases in IT costs as they scale. Whether from neglecting to shut down old resources when they are no longer needed or over-provisioning them from the beginning to avoid auto-scaling issues, cloud waste and overspend are among the most prevalent challenges that organizations face when adopting and accelerating cloud consumption.

Just how prevalent is this issue? Well, according to Flexera’s 2022 State of Cloud Report, nearly 60% of cloud decision-makers say optimizing their cloud usage to cut costs is a top priority for this year.

The cost benefits of reducing waste can be massive, but knowing where to look and what the most common culprits of waste are can be a challenge, particularly if your organization is a relative novice when it comes to the cloud.

Common cases of cloud waste and how to avoid them

Now that we’ve covered the factors that drive exploding cloud budgets, let’s take a look at some of the most common cases of cloud waste we see, and the types of checks you and your teams should make to avoid unnecessary spending. I’ve categorized these issues as major, moderate, and minor, based on the relative cost savings possible when customers we’ve worked with eliminate them.

Important to note: While this is what we’ve seen in our experience, the actual real-world impact will vary based on each organization’s specific situation.

Unattached volumes (major)

Repeated creation and termination of instances often leaves certain volumes orphaned after the instances they were attached to have been terminated. These unused and overlooked volumes contribute directly to increased costs, while delivering little or no value.

Cloud teams should identify volumes that are not shown as attached to any instances. Once detected, schedule unattached storage volumes for deletion if they are no longer in use. Alternatively, you could minimize overhead by transitioning these volumes to serve as offline backups.
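
A minimal sketch of this check. The function operates on data in the shape of boto3’s `ec2.describe_volumes()["Volumes"]` response list; the volume IDs below are made up:

```python
def find_unattached_volumes(volumes):
    """Return IDs of EBS volumes that have no attachments.
    `volumes` mimics the shape of boto3's
    ec2.describe_volumes()["Volumes"] response list."""
    return [v["VolumeId"] for v in volumes if not v.get("Attachments")]

# Illustrative response data
volumes = [
    {"VolumeId": "vol-aaa", "Attachments": [{"InstanceId": "i-123"}]},
    {"VolumeId": "vol-bbb", "Attachments": []},
    {"VolumeId": "vol-ccc"},
]
orphans = find_unattached_volumes(volumes)
```

In practice, EC2 can also do this filtering server-side, e.g. `describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])`.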

Load balancer with no instances (major)

Load balancers distribute traffic across instances to handle the load of your application. If a load balancer is not attached to any instances, it will consume costs without providing any functionality. An orphaned load balancer could also be an indication that an instance was deleted or otherwise impaired.

You should identify unattached load balancers, and double-check to make sure there isn’t a larger problem related to an improperly deleted instance that was once associated with those load balancers. After you’ve determined there isn’t a bigger issue to dig into, notify the necessary resource owners that they should delete them.

Database instance with zero connections (moderate)

Databases that have not been connected to within a given time frame still accrue charges for all classes of service, except for free tiers.

After some agreed-upon time frame (we typically see teams use about 14 days), you should consider these databases stale and remove them. It’s important here to be sure there isn’t a good reason for the perceived inactivity before you go ahead and hit that delete button.
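
One way to implement this check is to read the RDS `DatabaseConnections` metric from CloudWatch over the agreed window. This sketch covers only the filtering logic, operating on datapoints in the shape of `cloudwatch.get_metric_statistics()["Datapoints"]` (the values are made up):

```python
def is_idle(datapoints):
    """True if every CloudWatch datapoint shows zero connections.
    `datapoints` mimics cloudwatch.get_metric_statistics()["Datapoints"]
    for the RDS DatabaseConnections metric with the Sum statistic.
    Note: an empty window also counts as idle here; treat that with care."""
    return all(dp.get("Sum", 0) == 0 for dp in datapoints)

# Illustrative datapoints over the review window
active = [{"Sum": 12.0}, {"Sum": 0.0}]
quiet = [{"Sum": 0.0}, {"Sum": 0.0}]
```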

Snapshot older than 60 days (moderate)

Snapshots represent a complete backup of your computing instances at a specific point in time. Maintaining snapshot backups incurs cost and provides diminishing returns over time, as snapshots become old and diverge more and more from the instances they originally represented.  

Unless regulatory compliance or aggressive backup schedules mandate otherwise, old snapshots should be purged. Before scheduling a deletion or taking any other actions, create a ServiceNow Incident for reporting purposes and to ensure snapshot policy socialization.
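
The age check itself is simple; a sketch operating on data in the shape of boto3’s `ec2.describe_snapshots()["Snapshots"]` response (snapshot IDs and dates are made up):

```python
from datetime import datetime, timedelta, timezone

def stale_snapshots(snapshots, max_age_days=60, now=None):
    """Return IDs of snapshots whose StartTime is older than max_age_days.
    `snapshots` mimics boto3's ec2.describe_snapshots()["Snapshots"]."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# Illustrative data: one snapshot well past 60 days, one recent
reference = datetime(2022, 6, 7, tzinfo=timezone.utc)
snapshots = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2022, 1, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2022, 6, 1, tzinfo=timezone.utc)},
]
stale = stale_snapshots(snapshots, now=reference)
```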

Instance with high core counts (minor)

Instances that have more cores will tend to perform tasks more quickly and be able to handle larger loads. However, with greater power comes greater costs. For many workloads, eight cores should be more than sufficient.

Users should identify these instances, mark them non-compliant, and notify the resource owner or operations team about potentially downsizing, stopping, or deleting instances with more than eight cores.
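
A sketch of the core-count check. The instance list mimics flattened `ec2.describe_instances()` output; in practice the per-type core counts would come from `describe_instance_types` (`VCpuInfo["DefaultCores"]`), and the values below are illustrative:

```python
def high_core_instances(instances, core_info, max_cores=8):
    """Return instance IDs whose type has more than `max_cores` cores.
    `instances` mimics flattened describe_instances() records;
    `core_info` maps instance type -> default core count."""
    return [
        i["InstanceId"] for i in instances
        if core_info.get(i["InstanceType"], 0) > max_cores
    ]

# Illustrative core counts and inventory
core_info = {"m5.large": 1, "c5.9xlarge": 18}
instances = [
    {"InstanceId": "i-small", "InstanceType": "m5.large"},
    {"InstanceId": "i-big", "InstanceType": "c5.9xlarge"},
]
flagged = high_core_instances(instances, core_info)
```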

How InsightCloudSec can help contain cloud costs

By this point, you might be wondering why we here at Rapid7 would be writing about cloud cost management. I mean, we’re a security company, right? While that’s true, and our dedication to powering protectors hasn’t waned one bit, the benefits of InsightCloudSec (ICS) don’t stop there.

ICS provides real-time visibility into your entire cloud asset inventory across all of your cloud platforms, which gives us the ability to provide relevant insights and automation that help improve cost effectiveness. In fact, we’ve got built-in checks for each of the issues outlined above (and many more) available right out of the box, as well as recommended remediation steps and tips for automating the entire process with native bots. So while you might initially look into our platform for the ability to simplify cloud security and compliance, you can also use it to get a handle on that runaway cloud spend.

Our customers have realized massive savings on their cloud bills over the years, covering portions – or in some cases, the entirety – of the cost of their InsightCloudSec licenses. (Gotta love a security platform that can pay for itself!) If you’re interested in learning more about how you can accelerate in the cloud without sacrificing security and save some money at the same time, don’t hesitate to request a free demo!

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

[$] Best practices for fstests

Post Syndicated from original https://lwn.net/Articles/897061/

As a follow-up to a session on testing challenges earlier in the day, Josef Bacik led a discussion on best practices for testing in a combined storage and filesystem session at the 2022 Linux Storage, Filesystem, Memory-management and BPF Summit (LSFMM). There are a number of ways that developers can collaborate on improving the testing landscape using fstests and blktests, starting with gathering and sharing information about which tests are expected to pass and fail. That information depends on a lot of different factors, including kernel version and configuration, fstest options, and more.

Fedora 34 is EOL

Post Syndicated from original https://lwn.net/Articles/897247/

The Fedora 34 distribution release has reached end of life: “No further updates, including security updates, will be available for Fedora 34.” Users should update to the Fedora 35 or 36 release.

Let’s celebrate the 8th anniversary of Project Galileo!

Post Syndicated from Jocelyn Woolbright original https://blog.cloudflare.com/lets-celebrate-the-8th-anniversary-of-project-galileo/

Let’s celebrate the 8th anniversary of Project Galileo!

We started Project Galileo in 2014 with the simple idea that organizations that work in vulnerable yet essential areas of human rights and democracy building should not be taken down because of cyber attacks. In the past eight years, this idea has grown to more than just keeping them secure from a DDoS attack, but also how to foster collaboration with civil society to offer more tools and support to these groups. In March 2022, after the war in Ukraine started, we saw an increase in applications to Project Galileo by 177%.

Read ahead for details on all of our eighth anniversary announcements:

  • Two new civil society partners helping choose participants
  • New insights on attack patterns using data from Cloudflare Radar
  • A portal designed to ease onboarding for Galileo participants
  • Details on our sessions at RightsCon this week
  • New case studies highlighting Galileo participants and the important work they are doing

Announcing two new Project Galileo partners

This year, we are excited to welcome two new partners, International Media Support and CyberPeace Institute. As we introduce new partners, we are able to expand the project to protect a range of groups on the Internet. With this, we currently protect 1,900+ organizations in 111 countries.

With almost three years working on Project Galileo at Cloudflare, I get a front row seat to how we use security tools to protect the most vulnerable on the Internet. From journalism groups in Brazil reporting on environmental issues to social justice organizations in the United States to activists in authoritarian countries, we see a range of voices that come to Cloudflare for protection.

The anniversary of the project is one of my favorite times of the year, as it gives us the opportunity to show the world a glimpse of what we see on a daily basis. With the anniversary, it also gives us time to reflect on lessons learned and how we can improve the project.

In a time of crisis, we engage with civil society on how to protect the most vulnerable

One of the most important lessons we have learned about Project Galileo is that in a time of crisis, whether it be the spread of COVID-19 and shift to remote work or geopolitical conflicts, we are able to quickly mobilize to offer our assistance. One way we do this is to leverage our partnerships with civil society to offer our security tools and technical expertise to those who need help to keep their online platforms secure and reliable.

This became clear at the end of February 2022 and the start of the Russian invasion of Ukraine.

After the war in Ukraine started, applications to the project increased by 177% in March 2022. Since then, we have onboarded 43 organizations in Ukraine to Project Galileo. In the region, we protect 116 organizations, 62 of which (including those in Ukraine) were onboarded during the crisis. Many of these organizations work in journalism and report on the ground in Kyiv; others are human rights activists assisting refugees fleeing the country, or groups who have built applications to alert users of incoming air raids.

We have seen how partnerships between civil society, governments, and private sector companies have given us the ability to provide a swift response in providing support to Ukraine.

We see this in the form of donations of security services to ensure that people on the ground have access to information. The focus during the conflict in Ukraine has been primarily on how to protect organizations that work in human rights, but many civil society groups working to provide assistance may have been overlooked in the digital security context. Civil society often does not get as many resources to protect itself, so we strive to provide our services not only to human rights defenders but also to those who support them.

We have learned in the past few months that collaboration in a time of crisis is essential to responsibly provide our protections under the project. Any Ukrainian organizations that are facing attack can apply for free protection under Project Galileo by visiting www.cloudflare.com/galileo, and we will expedite their review and approval.

What to expect for the 8th anniversary of Project Galileo

Radar dashboard

For the Project Galileo 8th anniversary, we wanted to identify the types of attacks these groups face to better equip researchers, civil society, and organizations that are targeted with best practices for safeguarding their websites and internal data.

We created a Radar dashboard to focus on attacks against organizations in areas such as human rights, journalism, and community building groups. We onboarded a range of organizations in Ukraine and neighboring countries during the ongoing Russian invasion.

Learn more about the attacks we see against vulnerable groups protected under Project Galileo with an additional blog post and Radar dashboard tomorrow.

Social Impact Portal

Project Galileo has grown to support more than 1,900 organizations. These organizations typically fall into two categories. The first are organizations that are familiar with the security landscape and the Cloudflare tools they need to keep their organization secure. The second, which is a majority of organizations we protect under the project, are not familiar with the threat landscape and do not have a dedicated IT staff.

We know too well that organizations that work to support democracy, accountability, and human rights face an increased rate of cyber attacks because of the sensitive nature of their work. Many times, organizations come to Cloudflare because they come under a cyber attack and need our help with mitigation and getting back online. Unfortunately, we see applications like this come in every day for Project Galileo.

With this, we wanted to create a new resource to help these organizations on their Cloudflare journey. We are proud to release a new centralized area that organizations protected under our many projects can turn to when they have questions about configurations, product requests, and training on how to keep their organization secure. With tailored videos on security products with a focus on Cloudflare Zero Trust products, we are excited to offer more resources to organizations with very little or no dedicated IT staff, to ensure they stay online and secure from cyber attacks.

Learn more about our Cloudflare Social Impact Project portal and how we built this specifically for organizations protected under our Cloudflare Impact projects this week.

RightsCon 2022

Every year, Cloudflare sponsors Access Now’s RightsCon. RightsCon brings together a broad range of civil society groups and business and public sector stakeholders to talk and learn about digital rights issues. With topics including Internet shutdowns, digital security, privacy, and surveillance, it has it all for a great week of engaging with a range of players in the digital rights space.

This year, we are participating in a variety of events, and we are particularly excited about a community lab we are hosting with partner organizations like the National Democratic Institute, Internews, the CyberPeace Institute, and Okta. The session focuses on tools available for at-risk organizations and on how the private sector and civil society can improve security resources. We’ve learned in the last few years of Project Galileo that we are one part of the broader ecosystem: when it comes to providing tools to organizations, it is important to work together with the many players to find the best way to support organizations online and offline. We hope this session will generate further ideas on how we can work closely with others and learn more about how organizations view security resources.

If you plan to attend RightsCon, please check out our session on Wednesday, June 8, at 12:30 pm ET. More information can be found on the RightsCon website.

Case Studies

As we celebrate the anniversary, we want to highlight many of the organizations protected under the project and how they keep their organization secure from cyber attacks. We value organizations that want to tell their story of the amazing work they do in human rights and community building and how they stay online with Cloudflare. Our goal with telling their stories is to encourage others who may work in similar spaces to take advantage of security tools available to them. Case studies also help other organizations that may be new to the project.

Check out some of their stories on how they use Project Galileo to stay secure from cyber attacks.

If you are an organization looking for protection

As we kick off the 8th anniversary of Project Galileo, we want to thank all of our civil society partners that we work alongside to offer Cloudflare protection. If you are an organization looking for protection under Project Galileo, please visit our website: cloudflare.com/galileo.

Double Redundancy, Support Compliance, and More With Cloud Replication: Now Live

Post Syndicated from Jeremy Milk original https://www.backblaze.com/blog/double-redundancy-support-compliance-and-more-with-cloud-replication-now-live/

Cloning is a little bit creepy (Seriously, you can clone your pet now?), but having clones of your data is far from it—creating and storing redundant copies is essential when it comes to protecting your business, complying with regulations, or developing apps. With Backblaze Cloud Replication—now generally available—you can get set up in just a few clicks to automatically copy data across buckets, accounts, or regions.

Unbox Backblaze Cloud Replication

Join us for a webinar to unbox all the capabilities of Cloud Replication on July 13, 2022 at 10 a.m. PDT with Sam Lu, Product Manager at Backblaze.

➔ Sign Up

Existing customers can start using Cloud Replication immediately by clicking on Cloud Replication within their Backblaze account or via the Backblaze B2 Native API.

Simply click on Cloud Replication in your account to get started.

Not a Backblaze customer yet? Sign up here. And read on for more details on how this feature can benefit you.

What Is Backblaze Cloud Replication?

Backblaze Cloud Replication is a new service that allows customers to automatically store copies of their data in different locations—across regions, across accounts, or in different buckets within the same account. You can set replication rules in a few easy steps.

Once the rules are set on a given bucket, any data uploaded to that bucket will automatically be replicated into the destination bucket you choose.

What Is Cloud Replication Good For?

There are three main reasons you might want to use Cloud Replication:

  • Data Redundancy: Replicating data for security, compliance, and continuity purposes.
  • Data Proximity: Bringing data closer to distant teams or customers for faster access.
  • Replication Between Environments: Replicating data between testing, staging, and production environments when developing applications.

Data Redundancy

Keeping redundant copies of your data is the most common use case for Cloud Replication. Enterprises with comprehensive backup strategies, especially as they are increasingly cloud-based, will likely find Cloud Replication immediately applicable. It can help businesses:

  • Recover quickly from natural disasters and cybersecurity threats.
  • Support modern business continuity.
  • Reduce the risk of data loss and downtime.
  • Comply with industry or board regulations centered on concentration risk issues.
  • Meet data residency requirements stemming from regulations like GDPR.

Data redundancy has always been a best practice—the gold standard for backup strategies has long been a 3-2-1 approach. The core principles of 3-2-1—keeping at least three copies of your data, on two different media, with one copy off-site—were originally developed for an on-premises world. They still hold true, and today they are being applied in even more robust ways to an increasingly cloud-based world.

Backblaze’s Cloud Replication helps businesses apply the principles of 3-2-1 within a cloud-first or cloud-dominant infrastructure. By storing to multiple regions and/or multiple buckets in the same region, businesses virtually achieve an “off-site” backup—easily and automatically protecting data from natural disasters, political instability, or even run-of-the-mill compliance headaches.

Data Proximity

If you have teams, customers, or workflows spread around the world, bringing a copy of your data closer to where work gets done can minimize speed-of-light limitations. Especially for media-heavy teams in industries like game development and postproduction, seconds can make the difference in keeping creative teams operating smoothly. And because you can automate replication and use metadata to track accuracy and process, you can remove some manual steps from the process where errors and data loss tend to crop up.

Replication Between Environments

Version control and smoke testing are nothing new, but when you’re controlling versions of large applications or trying to keep track of what’s live and what’s in testing, you might need a tool with more horsepower and options for customization. Backblaze Cloud Replication can serve these needs.

You can easily replicate objects between buckets dedicated for production, testing, or staging if you need to use the same data and maintain the same metadata. This allows you to observe best practices and automate replication between environments.

Want to Learn More About Backblaze Cloud Replication?

  • Join the webinar on July 13, 2022 at 10 a.m. PDT.
  • Here’s a walk-through of Cloud Replication, including step-by-step instructions for using Cloud Replication via the web UI and the Backblaze B2 Native API.
  • Access documentation here.
  • Check out our Help articles on how to create rules here.

If you’re a new customer, click here to sign up for Backblaze B2 Cloud Storage and learn more about Cloud Replication.

The post Double Redundancy, Support Compliance, and More With Cloud Replication: Now Live appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends

Post Syndicated from Lucas Pardue original https://blog.cloudflare.com/cloudflare-view-http3-usage/

HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends

Today, a cluster of Internet standards were published that rationalize and modernize the definition of HTTP – the application protocol that underpins the web. This work includes updates to, and refactoring of, HTTP semantics, HTTP caching, HTTP/1.1, HTTP/2, and the brand-new HTTP/3. Developing these specifications has been no mean feat and today marks the culmination of efforts far and wide, in the Internet Engineering Task Force (IETF) and beyond. We thought it would be interesting to celebrate the occasion by sharing some analysis of Cloudflare’s view of HTTP traffic over the last 12 months.

However, before we get into the traffic data, for quick reference, here are the new RFCs that you should make a note of and start using:

  • HTTP Semantics – RFC 9110
    • HTTP’s overall architecture, common terminology and shared protocol aspects such as request and response messages, methods, status codes, header and trailer fields, message content, representation data, content codings and much more. Obsoletes RFCs 2818, 7231, 7232, 7233, 7235, 7538, 7615, 7694, and portions of 7230.
  • HTTP Caching – RFC 9111
    • HTTP caches and related header fields to control the behavior of response caching. Obsoletes RFC 7234.
  • HTTP/1.1 – RFC 9112
    • A syntax, aka “wire format”, of HTTP that uses a text-based format. Typically used over TCP and TLS. Obsoletes portions of RFC 7230.
  • HTTP/2 – RFC 9113
    • A syntax of HTTP that uses a binary framing format, which provides streams to support concurrent requests and responses. Message fields can be compressed using HPACK. Typically used over TCP and TLS. Obsoletes RFCs 7540 and 8740.
  • HTTP/3 – RFC 9114
    • A syntax of HTTP that uses a binary framing format optimized for the QUIC transport protocol. Message fields can be compressed using QPACK.
  • QPACK – RFC 9204
    • A variation of HPACK field compression that is optimized for the QUIC transport protocol.
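
Of these, the RFC 9112 “wire format” is the easiest to see with the naked eye. As a small illustration (a sketch, not a full implementation of the grammar), the following Python builds a minimal HTTP/1.1 request: a request line, header fields, and a blank line, each terminated by CRLF. HTTP/2 and HTTP/3 carry the same semantics, but in binary frames instead.

```python
def build_http11_request(method, target, host, headers=None):
    """Serialize a minimal HTTP/1.1 request in the RFC 9112 text format:
    request line, header fields, then an empty line, each ending in CRLF."""
    lines = [f"{method} {target} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    # The blank line after the headers marks the end of the header section.
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

raw = build_http11_request("GET", "/", "example.com", {"Accept": "*/*"})
print(raw)
# b'GET / HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n'
```
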

On May 28, 2021, we enabled QUIC version 1 and HTTP/3 for all Cloudflare customers, using the final “h3” identifier that matches RFC 9114. So although today’s publication is an occasion to celebrate, for us nothing much has changed, and it’s business as usual.

Support for HTTP/3 in the stable release channels of major browsers came in November 2020 for Google Chrome and Microsoft Edge and April 2021 for Mozilla Firefox. In Apple Safari, HTTP/3 support currently needs to be enabled in the “Experimental Features” developer menu in production releases.

A browser and web server typically automatically negotiate the highest HTTP version available. Thus, HTTP/3 takes precedence over HTTP/2. We looked back over the last year to understand HTTP/3 usage trends across the Cloudflare network, as well as analyzing HTTP versions used by traffic from leading browser families (Google Chrome, Mozilla Firefox, Microsoft Edge, and Apple Safari), major search engine indexing bots, and bots associated with some popular social media platforms. The graphs below are based on aggregate HTTP(S) traffic seen globally by the Cloudflare network, and include requests for website and application content across the Cloudflare customer base between May 7, 2021, and May 7, 2022. We used Cloudflare bot scores to restrict analysis to “likely human” traffic for the browsers, and to “likely automated” and “automated” for the search and social bots.
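
One common discovery mechanism behind that negotiation is the Alt-Svc response header (RFC 7838): a server answering over HTTP/1.1 or HTTP/2 advertises that an “h3” endpoint is available, and a supporting client can switch to it on subsequent requests. Below is a minimal, illustrative Python parser for such a header; it is a sketch that ignores parameters like `ma` (max-age).

```python
def advertised_protocols(alt_svc_header):
    """Parse an Alt-Svc response header (RFC 7838) into a list of
    (protocol-id, alternative-authority) pairs a client may switch to."""
    protocols = []
    for entry in alt_svc_header.split(","):
        # Each entry is 'proto="authority"; param=...'; keep only the first part.
        first = entry.strip().split(";")[0].strip()
        if "=" not in first:
            continue  # e.g. the special value "clear"
        proto, _, authority = first.partition("=")
        protocols.append((proto.strip(), authority.strip('"')))
    return protocols

# A typical header from a server offering HTTP/3 on UDP port 443:
header = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
print(advertised_protocols(header))
# [('h3', ':443'), ('h3-29', ':443')]
```
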

Traffic by HTTP version

Overall, HTTP/2 still comprises the majority of the request traffic for Cloudflare customer content, as clearly seen in the graph below. After remaining fairly consistent through 2021, HTTP/2 request volume increased by approximately 20% heading into 2022. HTTP/1.1 request traffic remained fairly flat over the year, aside from a slight drop in early December. And while HTTP/3 traffic initially trailed HTTP/1.1, it surpassed HTTP/1.1 in early July, growing steadily and roughly doubling in twelve months.

[Graph: Request volume by HTTP version]

HTTP/3 traffic by browser

Digging into just HTTP/3 traffic, the graph below shows the trend in daily aggregate request volume over the last year for HTTP/3 requests made by the surveyed browser families. Google Chrome (orange line) is far and away the leading browser, with request volume far outpacing the others.

[Graph: Daily HTTP/3 request volume by browser]

Below, we remove Chrome from the graph to see the trends across the other browsers more clearly. Because Microsoft Edge is also based on the Chromium engine, its trend closely mirrors Chrome’s. As noted above, Mozilla Firefox first enabled production support in version 88 in April 2021, making it available by default by the end of May. The increased adoption of that updated version during the following month is clear in the graph as well, as HTTP/3 request volume from Firefox grew rapidly. HTTP/3 traffic from Apple Safari increased gradually through April, suggesting growth in the number of users enabling the experimental feature or running a Technology Preview version of the browser. However, Safari’s HTTP/3 traffic has subsequently dropped over the last couple of months. We are not aware of any specific reason for this decline, but our most recent observations indicate HTTP/3 traffic is recovering.

[Graph: Daily HTTP/3 request volume by browser, excluding Chrome]

Looking at the lines for Chrome, Edge, and Firefox, a weekly cycle is clearly visible, suggesting greater usage of these browsers during the work week. This pattern is absent from Safari usage.

Across the surveyed browsers, Chrome ultimately accounts for approximately 80% of the HTTP/3 requests seen by Cloudflare, as illustrated in the graphs below. Edge is responsible for around another 10%, with Firefox just under 10%, and Safari responsible for the balance.

[Graphs: HTTP/3 request share by browser]

We also wanted to look at how the mix of HTTP versions has changed over the last year across each of the leading browsers. Although the percentages vary between browsers, it is interesting to note that the trends are very similar across Chrome, Firefox and Edge. (After Firefox turned on default HTTP/3 support in May 2021, of course.)  These trends are largely customer-driven – that is, they are likely due to changes in Cloudflare customer configurations.

Most notably, we see an increase in HTTP/3 during the last week of September, and a decrease in HTTP/1.1 at the beginning of December. For Safari, the HTTP/1.1 drop in December is also visible, but the HTTP/3 increase in September is not. We expect that over time, once Safari supports HTTP/3 by default, its trends will become more similar to those seen for the other browsers.

[Graphs: HTTP version mix over time for each surveyed browser]

Traffic by search indexing bot

Back in 2014, Google announced that it would start to consider HTTPS usage as a ranking signal as it indexed websites. However, it does not appear that Google, or any of the other major search engines, currently consider support for the latest versions of HTTP as a ranking signal. (At least not directly – the performance improvements associated with newer versions of HTTP could theoretically influence rankings.) Given that, we wanted to understand which versions of HTTP the indexing bots themselves were using.

Despite leading the charge around the development of QUIC, and integrating HTTP/3 support into the Chrome browser early on, it appears that on the indexing/crawling side, Google still has quite a long way to go. The graph below shows that requests from GoogleBot are still predominantly being made over HTTP/1.1, although use of HTTP/2 has grown over the last six months, gradually approaching HTTP/1.1 request volume. (A blog post from Google provides some potential insights into this shift.) Unfortunately, the volume of requests from GoogleBot over HTTP/3 has remained extremely limited over the last year.

[Graph: GoogleBot request volume by HTTP version]

Microsoft’s BingBot also fails to use HTTP/3 when indexing sites, with near-zero request volume. However, in contrast to GoogleBot, BingBot prefers to use HTTP/2, with a wide margin developing in mid-May 2021 and remaining consistent across the rest of the past year.

[Graph: BingBot request volume by HTTP version]

Traffic by social media bot

Major social media platforms use custom bots to retrieve metadata for shared content, improve language models for speech recognition technology, or otherwise index website content. We also surveyed the HTTP version preferences of the bots deployed by three of the leading social media platforms.

Although Facebook supports HTTP/3 on their main website (and presumably their mobile applications as well), their back-end FacebookBot crawler does not appear to support it. Over the last year, on the order of 60% of the requests from FacebookBot have been over HTTP/1.1, with the balance over HTTP/2. Heading into 2022, it appeared that HTTP/1.1 preference was trending lower, with request volume over the 25-year-old protocol dropping from near 80% to just under 50% during the fourth quarter. However, that trend was abruptly reversed, with HTTP/1.1 growing back to over 70% in early February. The reason for the reversal is unclear.

[Graph: FacebookBot request volume by HTTP version]

Similar to FacebookBot, it appears TwitterBot’s use of HTTP/3 is, unfortunately, pretty much non-existent. However, TwitterBot clearly has a strong and consistent preference for HTTP/2, accounting for 75-80% of its requests, with the balance over HTTP/1.1.

[Graph: TwitterBot request volume by HTTP version]

In contrast, LinkedInBot has, over the last year, been firmly committed to making requests over HTTP/1.1, aside from the apparently brief anomalous usage of HTTP/2 last June. However, in mid-March, it appeared to tentatively start exploring the use of other HTTP versions, with around 5% of requests now being made over HTTP/2, and around 1% over HTTP/3, as seen in the upper right corner of the graph below.

[Graph: LinkedInBot request volume by HTTP version]

Conclusion

We’re happy that HTTP/3 has, at long last, been published as RFC 9114. More than that, we’re super pleased to see that regardless of the wait, browsers have steadily been enabling support for the protocol by default. This allows end users to seamlessly gain the advantages of HTTP/3 whenever it is available. On Cloudflare’s global network, we’ve seen continued growth in the share of traffic speaking HTTP/3, demonstrating continued interest from customers in enabling it for their sites and services. In contrast, we are disappointed to see bots from the major search and social platforms continuing to rely on aging versions of HTTP. We’d like to build a better understanding of how these platforms choose particular HTTP versions and welcome collaboration in exploring the advantages that HTTP/3, in particular, could provide.

Current statistics on HTTP/3 and QUIC adoption at a country and autonomous system (ASN) level can be found on Cloudflare Radar.

Running HTTP/3 and QUIC on the edge for everyone has allowed us to monitor a wide range of aspects related to interoperability and performance across the Internet. Stay tuned for future blog posts that explore some of the technical developments we’ve been making.

And this certainly isn’t the end of protocol innovation, as HTTP/3 and QUIC provide many exciting new opportunities. The IETF and the wider community are already building new capabilities on top, such as MASQUE and WebTransport. Meanwhile, in the last year, the QUIC Working Group has adopted new work such as QUIC version 2 and the Multipath Extension to QUIC.
