It's Just the Flu

Post Syndicated from Надежда Цекулова original https://toest.bg/tova-e-prosto-grip/

Tennis player Grigor Dimitrov, sports minister Krasen Kralev, TV host Uti Bachvarov, anesthesiologist and radio host Miroslav Nenkov, MP Hasan Ademov, pop-folk singers Galena and Desi Slava. These are just some of the public figures in Bulgaria who have spoken publicly about their bouts with COVID-19. Their cases vary: some were severely ill and even spent time in intensive care, while others went through the disease with mild symptoms or none at all. Their shared message is that the virus should not be underestimated, and one of the words most often used by those who have recovered to describe the illness is "strange".

Globally, and in Bulgaria as well, doubt about the seriousness of COVID-19 persists. Some people still do not believe the virus exists at all.

The new virus is like the seasonal flu: disinformation or optimism

The course of COVID-19 is often compared to that of seasonal influenza. Such claims dominated at the very start of the pandemic, and social networks and forums were flooded with tables and charts listing the numbers of people killed by hunger, traffic accidents, cancer, and flu. The figures were used as proof of how irrelevant the strict restrictions introduced to fight the infection supposedly were, and as an argument that there was no need for panic. Both around the world and in Bulgaria, leaders and experts at the highest level maintained that SARS-CoV-2 causes an ordinary flu and that the measures to contain it should be no different from those applied during seasonal epidemics.

The interactive chart below illustrates well, if not with complete precision, why the seriousness of the disease should not be underestimated.

In January, the World Health Organization (WHO) declared the COVID-19 outbreak a public health emergency of international concern. At that date the disease really did rank last as a cause of death among the nearly twenty causes listed, a list that did not even include the most common "killers" of the world's population: ischemic heart disease, stroke, chronic obstructive pulmonary disease, and Alzheimer's disease. The lack of any direct sense of threat from the new disease, combined with entirely realistic fears of older and more familiar infections, made the claim that the virus was "ordinary" appealing, welcome, and seemingly credible. For a while.

Is COVID-19 a respiratory disease?

Despite initial hypotheses that SARS-CoV-2 was just another coronavirus causing a seasonal respiratory illness, the significant difficulties in treating the severely ill pushed science to look for answers in other directions. A series of studies established that the virus can in fact affect every organ, including the circulatory system. The theory that COVID-19 is better described as an inflammation of the vascular system (vasculitis) than as a respiratory disease was popularized in Bulgaria above all by the pulmonologist Dr. Alexander Simidchiev, whose media appearances on the subject drew a wide response. He and other medical experts have worked to counter the perception of the disease as "mild", "respiratory", and "seasonal".

Despite all the scientific evidence, however, the comparison with seasonal respiratory illnesses continues to circulate as a legitimate point of view, even in the professional debate on the subject. One of the main arguments of its proponents is that the data known so far about the spread and mortality of the disease resemble those of influenza more than those of severe infections such as rubella or measles. One of its proponents in Bulgaria is Dr. Stoycho Katsarov, chairman of the Center for Protection of Rights in Healthcare, who since the very beginning of the pandemic has emphasized what he sees as a disproportionate response to the infection, one that creates greater risks than the disease itself.

What is specific about this interpretation is that it rests on true facts and calls into question the actions of the authorities, not only in Bulgaria but worldwide, which makes it sound like reasoned, logical criticism. Yet this theory, which often offers a distorted reading of the numbers, misses one key detail: the aggressive measures against the spread of SARS-CoV-2 have, from the very start of the crisis, been based not on what world science knows about the virus, but on what it does not know about it.

"Lethality" and "severe course"

How many of those infected with the new coronavirus become severely ill? And how many die?

Estimating these two parameters is one of the factors that can answer the question of how serious a given disease is. The Ebola virus, for example, causes exceptional alarm because this year's outbreak alone has killed around half of those affected, and over the years its fatality rate has varied between 25% and 90%.

It is still difficult to make such calculations precisely for the spread, severity, and mortality of COVID-19, because the disease is new. In early August the WHO published a scientific brief explaining in detail how the mortality of a disease is determined. The scientists explain that two different ratios are tracked: the IFR (infection fatality ratio), the number of deaths relative to the total number of people infected, and the CFR (case fatality ratio), or lethality, the number of deaths relative to the number of confirmed cases. Because of the differences in how the two ratios are calculated, "mortality" figures ranging from under 0.1% to over 25% are cited.

On the other hand, estimating the likely number of people infected is itself a matter of assumptions and approximations rather than exact arithmetic. To illustrate the problem, the website Our World in Data published a chart combining US data on confirmed coronavirus cases with estimates of the true number of infections according to four of the most widely used mathematical models. In all four models it is apparent that the number of people actually infected is likely far higher than the number of confirmed cases. But on the questions of exactly how many people are infected and how the disease develops over time, the models give entirely different answers.

More than one scientific paper notes that the significant differences across age groups must be taken into account; averaging lethality across ages would likewise lead to incorrect conclusions.

A "severe course" of COVID-19 is defined by specific medical criteria laid down by international professional organizations. One of the symptoms most easily recognized by non-specialists is respiratory failure. There is still no firm scientific consensus on the risk of a severe course of the disease, because differences are observed not only across age groups but also depending on other factors, such as comorbidities, and even by country.

Despite all these caveats, a recent paper in the medical journal The Lancet estimates that around 4% of the world's population is probably at risk of severe COVID-19, with the risk for men (6%) twice that for women (3%). The share of vulnerable groups is largest in countries with ageing populations, in African countries with high HIV/AIDS prevalence, and in small island nations where diabetes is widespread. Diabetes and chronic kidney, cardiovascular, and respiratory diseases turn out to be key factors in assessing the risk.

At this stage of the pandemic, mathematical models cannot give precise answers. But the nine months since the start of the corona crisis have been enough to show that the concerns around the new disease may prove to be long-term.

Accounts of illness lasting 10–12 or even more than 15 weeks have emerged in different parts of the world. Such cases have been documented by international media and described on social networks. Science has also confirmed some of the long-term complications caused by COVID-19 and ruled out others. Although we have learned a great deal about the coronavirus since the start of the pandemic, many questions about the course of the disease remain unanswered.

So the information currently available about the share of deaths and of people who survive a severe course of the disease may prove insufficient for a full assessment of its severity. One thing, however, is certain: it is becoming ever harder to file COVID-19 under "a mild cold".

The long-term effects of COVID-19 and why they matter

One of the most prestigious hospital networks in the United States, the Mayo Clinic, recently published an article summarizing in accessible language the most serious long-term complications of COVID-19 documented so far. The article notes that most people who have had the disease recover fully within a few weeks. In others, however, even those with a mild course of the illness, the symptoms do not go away even after the initial recovery. Most often they continue to suffer from fatigue, cough, shortness of breath, headache, and joint pain. The Mayo Clinic team also lists the organs and systems most at risk: the heart, the lungs, the brain, and the circulatory system.

The initial belief that healthy young people are not especially threatened by the disease is also being disproved. Studying mild cases poses a particular challenge for clinicians, because many patients report that some symptoms persist long after they are officially considered recovered.

Because of insufficient, and in places nonexistent, diagnostic testing, however, tracking and studying the complications is often hampered. A survey conducted in the Netherlands, for example, found that almost three months after the onset of their first symptoms, 9 out of 10 people reported difficulty performing ordinary daily activities. A total of 1,622 people with suspected coronavirus took part in the survey, but 91% had not been hospitalized and 43% had never been diagnosed by a doctor.

A study by King's College London suggests that only 52% of those who had the disease in the United Kingdom recovered in less than 13 days. The study covers both severe cases treated in intensive care and people who went through the illness at home with mild symptoms. Another British study, published in the neurology journal Brain, reports nervous-system damage caused by COVID-19, with some of the complications unrelated to whether the disease ran a mild or a severe course.

Among those studied were people in whom the disease produced no respiratory symptoms at all, and neurological disorders were the first and only manifestation of COVID-19. The cases of 43 people aged between 16 and 85, with various forms of brain damage following the infection, were documented.

In the summer, a report by the British broadcaster Sky News told the story of the hospital in the Italian city of Bergamo, one of the areas hit hardest by the infection anywhere in the world. Local doctors are following up on the people treated for COVID-19 during the worst weeks of the epidemic in March and April. Psychosis, insomnia, kidney disease, spinal infections, strokes, chronic fatigue, and mobility problems are among the most serious complications remaining after patients were discharged, the Italian doctors told the broadcaster. They stress that, in their view, this is a systemic infection affecting every organ in the body, not a respiratory disease as was thought at the start of the pandemic. The proactive tracing of former patients was prompted by an earlier small study indicating that over 87% of patients suffer from at least one symptom that persists even after they are considered cured.

Like any other new phenomenon, this disease is creating a vocabulary of its own. The cases described above have come to be known collectively as long COVID. The term arose spontaneously: in countries with mass transmission, such as the United Kingdom and the United States, many patients reported prolonged ill health yet went untested for a very long time. The term has been adopted by the medical community as well; in early September the prestigious British Medical Journal even organized a webinar for specialists with advice on how to diagnose and treat patients with long COVID.

Serious, but not hopeless

Reports of many patients with long-lasting symptoms, in the absence of any specialized treatment, are prompting specialists to look for alternative approaches to these cases. A CNN report from the international congress of the European Respiratory Society describes a study offering hope of faster recovery through precisely targeted physical and respiratory rehabilitation. An interesting detail is that rehabilitation also has a positive effect on psycho-neurological symptoms such as depression and anxiety.

We should not forget, however, that far from all the questions around COVID-19 have been answered, and it is entirely possible that further research will again change clinicians' understanding of the nature of the disease, its course, and its treatment.

Cover illustration: © Penyu Kiratsov
Toest is an official partner in publishing the materials from the "Chronicles of the Infodemic" series, produced by AEJ-Bulgaria in cooperation with the Friedrich Naumann Foundation.

Toest relies solely on the financial support of its readers.

Introducing queued purchases for Savings Plans

Post Syndicated from Roshni Pary original https://aws.amazon.com/blogs/compute/introducing-queued-purchases-for-savings-plans/

This blog post is contributed by Idan Maizlits, Sr. Product Manager, Savings Plans

AWS now provides the ability for you to queue purchases of Savings Plans by specifying a time, up to 3 years in the future, to carry out those purchases. This blog reviews how you can queue purchases of Savings Plans.

In November 2019, AWS launched Savings Plans. This is a new flexible pricing model that allows you to save up to 72% on Amazon EC2, AWS Fargate, and AWS Lambda in exchange for making a commitment to a consistent amount of compute usage measured in dollars per hour (for example $10/hour) for a 1- or 3-year term. Savings Plans is the easiest way to save money on compute usage while providing you the flexibility to use the compute options that best fit your needs as they change.

Queueing Savings Plans allows you to plan ahead for future events. Say you want to purchase a Savings Plan three months into the future to cover a new workload. Now, with the ability to queue plans in advance, you can easily schedule the purchase to be carried out at the exact time you expect your workload to go live. This helps you plan in advance by eliminating the need to make “just-in-time” purchases, and lets you benefit from low prices on your future workloads from the get-go. With the ability to queue purchases, you can also enjoy uninterrupted Savings Plans coverage by scheduling renewals of your plans ahead of their expiry. This makes it even easier to save money on your overall AWS bill.

So how do queued purchases for Savings Plans work? Queued purchases are similar to regular purchases in all aspects but one – the start date. With a regular purchase, a plan goes active immediately whereas with a queued purchase, you select a date in the future for a plan to start. Up until the said future date, the Savings Plan remains in a queued state, and on the future date any upfront payments are charged and the plan goes active.

Now, let’s look at this in more detail with a few examples. I walk through three scenarios: a) queuing Savings Plans to cover future usage, b) renewing expiring Savings Plans, and c) deleting a queued Savings Plan.

How do I queue a Savings Plan?

If you are planning ahead and would like to queue a Savings Plan to support future needs such as new workloads or expiring Reserved Instances, head to the Purchase Savings Plans page on the AWS Cost Management Console. Then, select the type of Savings Plan you would like to queue, including the term length, purchase commitment, and payment option.

Select the type of Savings Plan

Now, indicate the start date and time for this plan (this is the date/time at which your Savings Plan becomes active). The time you indicate is in UTC, but is also shown in your browser’s local time zone. If you are looking to replace an existing Reserved Instance, you can provide the start date and time to align with the expiration of your existing Reserved Instances. You can find the expiration time of your Reserved Instances on the EC2 Reserved Instances Console (this is in your local time zone, convert it to UTC when you queue a Savings Plan).

After you have selected the start time and date for the Savings Plan, click “Add to cart”. When you are ready, click “Submit Order” to complete the purchase.

Once you have submitted the order, the Savings Plans Inventory page lists the queued Savings Plan with a “Queued” status and that purchase will be carried out on the date and time provided.

How can I replace an expiring plan?

If you have already purchased a Savings Plan, queuing purchases allow you to renew that Savings Plan upon expiry for continuous coverage. All you have to do is head to the AWS Cost Management Console, go to the Savings Plans Inventory page, and select the Savings Plan you would like to renew. Then, click on Actions and select “Renew Savings Plan” as seen in the following image.

This action automatically queues a Savings Plan in the cart with the same configuration (as your original plan) to replace the expiring one. The start time for the plan is automatically set to one second after expiration of the old Savings Plan. All you have to do now is submit the order and you are good to go.

If you would like to renew multiple Savings Plans, select each one and click “Renew Savings Plan,” which adds them to the Cart. When you are done adding new Savings Plans, your cart lists all of the Savings Plans that you added to the order. When you are ready to submit the order, click “Submit order.”

How can I delete a queued Savings Plan?

If you have queued Savings Plans that you no longer need to purchase, or need to modify, you can do so by visiting the console. Head to the AWS Cost Management Console, select the Savings Plans Inventory page, and then select the Savings Plan you would like to delete. By selecting the Savings Plan and clicking on Actions, as seen in the following image, you can delete the queued purchase if you need to make changes or if you no longer need the plan to be purchased. If you need the Savings Plan at a different commitment value, you can make a new queued purchase.

Conclusion

AWS Savings Plans allow you to save up to 72% compared to On-Demand prices by committing to a 1- or 3-year term. Starting today, with the ability to queue purchases of Savings Plans, you can easily plan for your future needs or renew expiring Savings Plans ahead of time, all with just a few clicks. In this blog, I walked through various scenarios. As you can see, it’s even easier to save money with AWS Savings Plans by queuing your purchases to meet your future needs and continue benefiting from uninterrupted coverage.

Click here to learn more about queuing purchases of Savings Plans and visit the AWS Cost Management Console to get started.

[$] Saying goodbye to set_fs()

Post Syndicated from original https://lwn.net/Articles/832121/rss

The set_fs() function dates back to the earliest days of the Linux
kernel; it is a key part of the machinery that keeps user-space and
kernel-space memory separated from each other. It is also easy to misuse
and has been the source of various security problems over the years; kernel
developers have long wanted to be rid of it. They won’t completely get their
wish in the 5.10 kernel but, as the result of work that has been quietly
progressing for several months, the end of set_fs() will be easily
visible at that point.

Wishful Thinking: Extended Version History in Real Life

Post Syndicated from Caitlin Bryson original https://www.backblaze.com/blog/wishful-thinking-extended-version-history-in-real-life/

When it comes to your data, sometimes you just need to roll back time. Fortunately, with Extended Version History now available, Backblaze can save old and deleted files for up to 30 days, one year, or even forever—depending on the option you choose—so that you can access old versions of your files and restore files that you’ve previously deleted over a longer stretch of time.

Restoring a file just prior to that big “oops” moment or salvaging a deleted file you didn’t think you’d need again is a great feeling, but it’s a little hard to describe in the abstract. The more we thought about it, the more we realized that there are plenty of things other than data that we wish had retain/restore capabilities. Here are the top 10 things in real life that we wish had Extended Version History! We hope they help convey why Extended Version History could be valuable for you.

Where Extended Version History Would Be Great in Real Life

Haircuts

We’ll admit it—during shelter-in-place—more than a few of us have given ourselves poorly-advised, unskilled haircuts at home. It usually starts out well, your new hair actually looks pretty good, and you start feeling confident!

“This isn’t so hard,” you think, “so maybe I’ll just go a LITTLE shorter.” And then disaster strikes.

You’ve gone too short and your hair is ruined. What we wouldn’t give for the ability to restore back to the moment just before that last snip!

To take it a step further, with Forever Extended Version History, you could even go back to that one perfect haircut from a few years ago that your stylist has somehow never been able to recreate. Great hair for life!

Reading

Re-reading can be wonderful, but nothing compares to the thrill of reading an incredible book for the first time. Short of losing your memory, there’s no way to get that feeling back—but with Forever Version History, you could shift back to before your first read. Meet the characters all over again, follow every plot twist, and experience every page-turning moment again like new!

Eating

That second pint of ice cream seemed like such a good idea at the time. But now that you’ve finished off that mint-chocolatey goodness, your stomach is feeling… less than pleased. What if you could restore yourself to a single-pint state, while retaining the memories of that delicious second helping? We’d be on board, and just think of the endless “tasting” opportunities!

Avocados

Avocados are delicious, especially when you catch them at the perfect level of ripeness. Unfortunately, that level usually seems to be about a three-hour window between “rock hard” and “completely mushy.” If only Extended Avocado History existed—then you could avoid the sinking feeling of cutting into one and realizing you were too early or too late. Someone contact the Avocado Board!

Baking

We heard a few of you have taken up baking during the shelter-in-place orders. And by “heard,” we mean we’ve looked for yeast at the grocery store for months and it’s never in stock. We hope you enjoy your sourdough starter!

But whether you’re making cookies, pies, or sourdough, there’s nothing worse than smelling that burning odor and realizing you’ve just wasted time and a good portion of your ingredients on an inedible product.

Wouldn’t it be great to rewind back to just before smoke started billowing from your oven, saving you from having to throw out your charred baked goods and start over? We think so. (With our standard 30 Day Version History, you can at least recover old recipes you’ve saved over.)

Coffee

Ah, coffee. Delicious, caffeinated, and a crucial part of most morning routines. Until you spill it down the front of your brand-new shirt or across your laptop keyboard, and suddenly everything is terrible. Your shirt is ruined, your laptop is making funny noises, and now you have no coffee.

Being able to restore back to a pre-spill moment would save your clothing, your technology—and your morning. Backblaze has a lot of experience with helping people after spills…

Plants

Some people have green thumbs. Some seem to be death incarnate for our little vegetal compatriots. And there truly is no worse feeling than doing your absolute best and watching your newest plant friend slowly die, again.

Did you water too much? Did you not water enough? Did you leave it in direct sunlight or shade? By the time you figure it out, it’s usually too late. But if One Year Plant Version History was real, you could find the issue and revert back to when your plant was healthy—and this time, do it right. And just think about your never-ending cherry tomato yields!

Hiking

A seven-mile hike sounded like such a good idea at the time. But now you’re four miles down the trail, and you’re ready to be done. Too bad you’ve still got three more miles, and turning around would just make the trip back even longer!

Being able to restore back to an earlier point on the trail—maybe a mile or two—would let you cut your hike short and head back to the start, giving you the fun hiking experience without the pain of those last few miles. Sign us up.

Dates

The first date has been going so well. Conversation has been easy, the chemistry is there, and you feel like a second date is imminent. Then you make one comment, and you know as soon as it starts exiting your mouth that it’s coming out wrong.

You backpedal, you explain, you apologize—but the damage is done, and that first impression is ruined. The chance to rethink or rephrase would be everything in this moment, and Dating Version History would let you skip back a moment and save your potential future relationship. Also great for business meetings (ask that question that you’re a bit nervous to ask, get the answer, then go back in time and nod knowingly to yourself)!

Naps

It’s coming up on mid-afternoon, and you’re starting to feel a lull. Perfect time for a power nap, so you’ll awake refreshed and ready to take on the rest of the day. Except instead, you slept for four hours, and now you’re groggy and your sleep schedule is a mess. Sleeping through alarms happens to the best of us, but it can easily turn an energizing nap into a sleepy evening. The dream fix? Using Nap Version History to get back to that perfect 90-minute nap time, wake up, and finish off your day strong!

Conclusion

Okay, sure, we get it—this is all a tad unrealistic. But there’s one area of life where your keenest regrets CAN be corrected with a quick visit to our restore page: your backup! So check out our version history page and explore your options for gaining greater peace of mind! And if you’re an existing customer, you can upgrade easily right here.

The post Wishful Thinking: Extended Version History in Real Life appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

PostgreSQL 13 released

Post Syndicated from original https://lwn.net/Articles/832408/rss

Version 13 of the PostgreSQL database management system is out.
“PostgreSQL 13 includes significant improvements to its indexing and lookup
system that benefit large databases, including space savings and performance
gains for indexes, faster response times for queries that use aggregates or
partitions, better query planning when using enhanced statistics, and more.

Along with highly requested features like parallelized vacuuming and
incremental sorting, PostgreSQL 13 provides a better data management
experience for workloads big and small, with optimizations for daily
administration, more conveniences for application developers, and security
enhancements.”

Security updates for Thursday

Post Syndicated from original https://lwn.net/Articles/832405/rss

Security updates have been issued by Fedora (firefox, libproxy, mbedtls, samba, and zeromq), openSUSE (chromium and virtualbox), Red Hat (firefox and kernel), SUSE (cifs-utils, conmon, fuse-overlayfs, libcontainers-common, podman, libcdio, python-pip, samba, and wavpack), and Ubuntu (rdflib).

Iranian Government Hacking Android

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/iranian-government-hacking-android.html

The New York Times wrote about a still-unreleased report from Check Point and the Miaan Group:

The reports, which were reviewed by The New York Times in advance of their release, say that the hackers have successfully infiltrated what were thought to be secure mobile phones and computers belonging to the targets, overcoming obstacles created by encrypted applications such as Telegram and, according to Miaan, even gaining access to information on WhatsApp. Both are popular messaging tools in Iran. The hackers also have created malware disguised as Android applications, the reports said.

It looks like the standard technique of getting the victim to open a document or application.

Building even faster interpreters in Rust

Post Syndicated from Zak Cutner original https://blog.cloudflare.com/building-even-faster-interpreters-in-rust/

At Cloudflare, we’re constantly working on improving the performance of our edge — and that was exactly what my internship this summer entailed. I’m excited to share some improvements we’ve made to our popular Firewall Rules product over the past few months.

Firewall Rules lets customers filter the traffic hitting their site. It’s built using our engine, Wirefilter, which takes powerful boolean expressions written by customers and matches incoming requests against them. Customers can then choose how to respond to traffic which matches these rules. We will discuss some in-depth optimizations we have recently made to Wirefilter, so you may wish to get familiar with how it works if you haven’t already.

Minimizing CPU usage

As a new member of the Firewall team, I quickly learned that performance is important — even in our security products. We look for opportunities to make our customers’ Internet properties faster where it’s safe to do so, maximizing both security and performance.

Our engine is already heavily used, powering all of Firewall Rules. But we have bigger plans. More and more products like our Web Application Firewall (WAF) will be running behind our Wirefilter-based engine, and it will become responsible for eating up a sizable chunk of our total CPU usage before long.

How to measure performance?

Measuring performance is a notoriously tricky task, and as you can probably imagine trying to do this in a highly distributed environment (aka Cloudflare’s edge) does not help. We’ve been surprised in the past by optimizations that look good on paper, but, when tested out in production, just don’t seem to do much.

Our solution? Performance measurement as a service — an isolated and reproducible benchmark for our Firewall engine and a framework for engineers to easily request runs and view results. It’s worth noting that we took a lot of inspiration from the fantastic Rust Compiler benchmarks to build this.

Our benchmarking framework, showing how performance during different stages of processing Wirefilter expressions has changed over time [1].

What to measure?

Our next challenge was to find some meaningful performance metrics. Some experimentation quickly uncovered that time was far too volatile a measure for meaningful comparisons, so we turned to hardware counters [2]. It’s not hard to find tools to measure these (perf and VTune are two such examples), although they (mostly) don’t allow control over which parts of the program are recorded. In our case, we wished to individually record measurements for different stages of filter processing — parsing, compilation, analysis, and execution.

Once again we took inspiration from the Rust compiler, and its self-profiling options, using the perf_event_open API to record counters from inside our binary. We then output something like the following, which our framework can easily ingest and store for later visualization.

Output of our benchmarks in JSON Lines format, showing a list of recordings for each combination of hardware counter and Wirefilter processing stage. We’ve used 10 repeats here for readability, but we use around 20, in addition to 5 warmup rounds, within our framework.
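For a sense of what recording counters from inside the binary can look like, here is a minimal sketch using the perf-event crate, a Rust wrapper around the perf_event_open API. The crate choice, the stage being measured, and the helper function are assumptions for illustration; this is not the Wirefilter benchmark harness itself.

```rust
use perf_event::{events::Hardware, Builder};

fn main() -> std::io::Result<()> {
    // Count retired instructions for the current thread only.
    let mut instructions = Builder::new().kind(Hardware::INSTRUCTIONS).build()?;

    instructions.enable()?;
    run_execution_stage(); // e.g. execute a compiled filter against a request
    instructions.disable()?;

    // One record per (counter, stage), ready to serialize as JSON Lines.
    println!(
        "{{\"counter\":\"instructions\",\"stage\":\"execution\",\"value\":{}}}",
        instructions.read()?
    );
    Ok(())
}

fn run_execution_stage() {
    // Placeholder for the work being measured.
    std::hint::black_box((0..10_000u64).sum::<u64>());
}
```

Wrapping each processing stage in its own enable/read pair is what lets a single benchmark run report parsing, compilation, analysis, and execution separately.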

Whilst we mainly focussed on metrics relating to CPU usage, we also use a combination of getrusage and clear_refs to find the maximum resident set size (RSS). This is useful to understand the memory impact of particular algorithms in addition to CPU.

But the challenge was not over. Cloudflare’s standard CI agents use virtualization and sandboxing for security and convenience, but this makes accessing hardware counters virtually impossible. Running our benchmarks on a dedicated machine gave us access to these counters, and ensured more reproducible results.

Speeding up the speed test

Our benchmarks were designed from the outset to take an important place in our development process. For instance, we now perform a full benchmark run before releasing each new version to detect performance regressions.

But with our benchmarks in place, it quickly became clear that we had a problem. Our benchmarks simply weren’t fast enough — at least if we wanted to complete them in less than a few hours! The problem was that we have a very large number of filters. Since our engine would never usually execute requests against this many filters at once, it was proving incredibly costly. We came up with a few tricks to cut this down…

  • Deduplication. It turns out that only around a third of filters are structurally unique (something that is easy to check as Wirefilter can helpfully serialize to JSON). We managed to cut down a great deal of time by ignoring duplicate filters in our benchmarks.
  • Sampling. Still, we had too many filters and random sampling presented an easy solution. A more subtle challenge was to make sure that the random sample was always the same to maintain reproducibility. (A small sketch of this follows the list.)
  • Partitioning. We worried that deduplication and sampling would cause us to miss important cases that are useful to optimize. By first partitioning filters by Wirefilter language feature, we can ensure we’re getting a good range of filters. It also helpfully gives us more detail about where specifically the impact of a performance change is.
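One simple way to get a reproducible random sample is to drive the shuffle with a fixed-seed RNG. The sketch below assumes the rand and rand_chacha crates and an arbitrary seed; it is an illustration of the idea, not the harness code itself.

```rust
use rand::{seq::SliceRandom, SeedableRng};
use rand_chacha::ChaCha8Rng;

/// Deterministically sample `k` filters so every benchmark run,
/// on every machine, sees exactly the same subset.
fn sample_filters(filters: &[String], k: usize) -> Vec<&String> {
    let mut rng = ChaCha8Rng::seed_from_u64(0xC0FFEE); // fixed seed
    filters.choose_multiple(&mut rng, k).collect()
}
```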

Most of these are trade-offs, but very necessary ones which allow us to run continual benchmarks without development speed grinding to a halt. At the time of writing, we’ve managed to get a benchmark run down to around 20 minutes using these ideas.

Optimizing our engine

With a benchmarking framework in place, we were ready to begin testing optimizations. But how do you optimize an interpreter like Wirefilter? Just-in-time (JIT) compilation, selective inlining and replication were some ideas floating around in the world of interpreters that seemed attractive. After all, we previously wrote about the cost of dynamic dispatch in Wirefilter. All of these techniques aim to reduce that effect.

However, running some real filters through a profiler tells a different story. Most execution time, around 65%, is spent not resolving dynamic dispatch calls but instead performing operations like comparison and searches. Filters currently in production tend to be pretty light on functions, but throw in a few more of these and even less time would be spent on dynamic dispatch. We suspect that even a fair chunk of the remaining 35% is actually spent reading the memory of request fields.

Function | CPU time
`matches` operator | 0.6%
`in` operator | 1.1%
`eq` operator | 11.8%
`contains` operator | 51.5%
Everything else | 35.0%
Breakdown of CPU time while executing a typical production filter.

An adventure in substring searching

By now, you shouldn’t be surprised that the contains operator was one of the first in line for optimization. If you’ve ever written a Firewall Rule, you’re probably already familiar with what it does — it checks whether a substring is present in the field you are matching against. For example, the following expression would match when the host is “example.com” or “www.example.net”, but not when it is “cloudflare.com”. In string searching algorithms, this is commonly referred to as finding a ‘needle’ (“example”) within a ‘haystack’ (“example.com”).

http.host contains "example"

How does this work under the hood? Ordinarily, we may have used Rust’s `String::contains` function but Wirefilter also allows raw byte expressions that don’t necessarily conform to UTF-8.

http.host contains 65:78:61:6d:70:6c:65

We therefore used the memmem crate, which performs a two-way substring search algorithm on raw bytes.
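As a rough sketch of what that check does, here is a `contains`-style match over raw bytes. I'm using the memmem module from the memchr crate purely for illustration; the standalone memmem crate mentioned above exposes a similar searcher API, and the host values here are made up.

```rust
use memchr::memmem;

/// Returns true if `needle` occurs anywhere in `haystack`.
/// Everything is raw bytes, so a non-UTF-8 needle such as
/// 65:78:61:6d:70:6c:65 behaves exactly like the string "example".
fn contains(haystack: &[u8], needle: &[u8]) -> bool {
    memmem::find(haystack, needle).is_some()
}

fn main() {
    assert!(contains(b"www.example.net", b"example"));
    assert!(!contains(b"cloudflare.com", b"example"));

    // For repeated searches with the same needle, a prebuilt searcher
    // avoids redoing the needle preprocessing on every request.
    let finder = memmem::Finder::new(b"example");
    assert!(finder.find(b"example.com").is_some());
}
```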

Sounds good, right? It was, and it was working pretty well, although we’d noticed that rewriting `contains` filters using regular expressions could bizarrely often make them faster.

http.host matches "example"

Regular expressions are great, but since they’re far more powerful than the `contains` operator, they shouldn’t be faster than a specialized algorithm in simple cases like this one.

Something was definitely up. It turns out that Rust’s regex library comes equipped with a whole host of specialized matchers for what it deems to be simple expressions like this. The obvious question was whether we could therefore simply use the regex library. Interestingly, you may not have realized that the popular ripgrep tool does just that when searching for fixed-string patterns.

However, our use case is a little different. Since we’re building an interpreter (and we’re using dynamic dispatch in any case), we would prefer to dispatch to a specialized case for `contains` expressions, rather than matching on some enum deep within the regex crate when the filter is executed. What’s more, there are some pretty cool things being done to perform substring searching that leverages SIMD instruction sets. So we wired up our engine to some previous work by Wojciech Muła and the results were fantastic.

Benchmark | Improvement
Expressions using `contains` operator | 72.3%
‘Simple’ expressions | 0.0%
All expressions | 31.6%
Improvements in instruction count using Wojciech Muła’s sse4-strstr library over the memmem crate with Wirefilter.

I encourage you to read more on “Algorithm 1”, which we used, but it works something like this (I’ve changed the order a little to help make it clearer); a scalar sketch of the same idea follows the list. It’s worth reading up on SIMD instructions if you’re unfamiliar with them — they’re the essence behind what makes this algorithm fast.

  1. We fill one SIMD register with the first byte of the needle being searched for, simply repeated over and over.
  2. We load as much of our haystack as we can into another SIMD register and perform a bitwise equality operation with our previous register.
  3. Now, any position in the resultant register that is 0 cannot be the start of the match since it doesn’t start with the same byte of the needle.
  4. We now repeat this process with the last byte of the needle, offsetting the haystack, to rule out any positions that don’t end with the same byte as the needle.
  5. Bitwise ANDing these two results together, we (hopefully) have now drastically reduced our potential matches.
  6. Each of the remaining potential matches can be checked manually using a memcmp operation. If we find a match, then we’re done.
  7. If not, we continue with the next part of our haystack and repeat until we’ve checked the entire thing.
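Here is that scalar sketch of the first-byte/last-byte filtering idea, with the register mechanics stripped away. A real SIMD implementation performs the two byte comparisons for 16 or 32 candidate positions per instruction and combines the results with a bitwise AND; this is an illustration of the algorithm, not the optimized code we shipped.

```rust
/// Scalar illustration of the filter described in steps 1-7 above.
fn find(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    if needle.is_empty() || haystack.len() < needle.len() {
        return None;
    }
    let first = needle[0];
    let last = needle[needle.len() - 1];

    for start in 0..=haystack.len() - needle.len() {
        let end = start + needle.len() - 1;
        // Steps 1-5: a candidate position survives only if both its
        // first and last bytes match those of the needle.
        if haystack[start] == first && haystack[end] == last {
            // Step 6: verify the candidate with a byte-wise comparison
            // (the real algorithm skips the two bytes it already checked).
            if &haystack[start..start + needle.len()] == needle {
                return Some(start);
            }
        }
    }
    None // Step 7: exhausted the haystack without a match.
}
```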

When it goes wrong

You may be wondering what happens if our haystack doesn’t fit neatly into registers. In the original algorithm, nothing. It simply continues reading into oblivion past the end of the haystack until the last register is full, and uses a bitmask to ignore potential false positives from this additional region of memory.

As we mentioned, security is our priority when it comes to optimizations, so we could never deploy something with this kind of behaviour. We ended up porting Muła’s library to Rust (we’ve also open-sourced the crate!) and performed an overlapping registers modification found in ARM’s blog.

It’s best illustrated by example — notice the difference between how we would fill registers on an imaginary SIMD instruction-set with 4-byte registers.

Before modification

How registers are filled in the original implementation for the haystack “abcdefghij”, red squares indicate out of bounds memory.

After modification

How registers are filled with the overlapping modification for the same haystack, notice how ‘g’ and ‘h’ each appear in two registers.

In our case, repeating some bytes within two different registers will never change the final outcome, so this modification is allowed as-is. However, in reality, we found it was better to use a bitmask to exclude repeated parts of the final register and minimize the number of memcmp calls.

What if the haystack is too small to even fill a single register? In this case, we can’t use our overlapping trick since there’s nothing to overlap with. Our solution is straightforward: while we were primarily targeting AVX2, which can store 32 bytes in a lane, we can easily move down to another instruction set with smaller registers that the haystack can fit into. In reality, we don’t currently go any smaller than SSE2. Beyond this, we instead use an implementation of the Rabin-Karp searching algorithm, which appears to perform well.
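For haystacks too short to fill even a small register, Rabin-Karp avoids reading out of bounds entirely. Below is a minimal, illustrative sketch of the idea using a simple wrapping polynomial hash; it is not the exact fallback used in sliceslice.

```rust
/// Rolling-hash substring search for tiny haystacks (illustrative only).
fn rabin_karp(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    let n = needle.len();
    if n == 0 || haystack.len() < n {
        return None;
    }
    const BASE: u32 = 257;
    let hash = |s: &[u8]| {
        s.iter()
            .fold(0u32, |h, &b| h.wrapping_mul(BASE).wrapping_add(b as u32))
    };
    let needle_hash = hash(needle);
    // BASE^(n-1), used to remove the outgoing byte when rolling.
    let top = (0..n - 1).fold(1u32, |p, _| p.wrapping_mul(BASE));

    let mut window_hash = hash(&haystack[..n]);
    for start in 0..=haystack.len() - n {
        // Hash collisions are possible, so confirm with a real comparison.
        if window_hash == needle_hash && &haystack[start..start + n] == needle {
            return Some(start);
        }
        if start + n < haystack.len() {
            // Roll the hash: drop haystack[start], add haystack[start + n].
            window_hash = window_hash
                .wrapping_sub((haystack[start] as u32).wrapping_mul(top))
                .wrapping_mul(BASE)
                .wrapping_add(haystack[start + n] as u32);
        }
    }
    None
}
```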

Instruction set | Register size
AVX2 | 32 bytes
SSE2 | 16 bytes
SWAR (u64) | 8 bytes
SWAR (u32) | 4 bytes
Register sizes in different SIMD instruction sets [3]. We did not consider AVX512 since support for this is not widespread enough.

Is it always fast?

Choosing the first and last bytes of the needle to rule out potential matches is a great idea. It means that when it does come to performing a memcmp, we can ignore these, as we know they already match. Unfortunately, as Muła points out, this also makes the algorithm susceptible to a worst-case attack in some instances.

Let’s give an expression that a customer might write to illustrate this.

http.request.uri.path contains "/wp-admin/"

If we try to search for this within a very long sequence of ‘/’s, we will find a potential match in every position and make lots of calls to memcmp — essentially performing a slow brute-force substring search.

Clearly we need to choose different bytes from the needle. But which ones should we choose? For each choice, an adversary can always find a slightly different, but equally troublesome, worst case. We instead use randomness to throw off our would-be adversary, picking the first byte of the needle as before, but then choosing another random byte to use.
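The choice of filter bytes happens once, when the searcher is built, so the extra randomness costs nothing per search. A tiny sketch of that construction-time choice, assuming the rand crate, might look like this (position handling in sliceslice may differ):

```rust
use rand::Rng;

/// Pick the two needle positions used by the pre-filter:
/// always the first byte, plus one other position chosen at random
/// so an adversary cannot tailor a worst-case haystack in advance.
fn pick_filter_positions(needle: &[u8]) -> (usize, usize) {
    assert!(!needle.is_empty());
    if needle.len() == 1 {
        return (0, 0);
    }
    let other = rand::thread_rng().gen_range(1..needle.len());
    (0, other)
}
```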

Our new version is unsurprisingly slower than Muła’s, yet it still exhibits a great improvement over both the memmem and regex crates. Performance, but without sacrificing safety.

Benchmark | sse4-strstr (original) | sliceslice (our version)
Expressions using `contains` operator | 72.3% | 49.1%
‘Simple’ expressions | 0.0% | 0.1%
All expressions | 31.6% | 24.0%
Improvements in instruction count of using sse4-strstr and sliceslice over the memmem crate with Wirefilter.

What’s next?

This is only a small taste of the performance work we’ve been doing, and we have much more yet to come. Nevertheless, none of this would have been possible without the support of my manager Richard and my mentor Elie, who contributed a lot of these ideas. I’ve learned so much over the past few months, but most of all that Cloudflare is an amazing place to be an intern!

[1] Since our benchmarks are not run within a production environment, results in this post do not represent traffic on our edge.

[2] We found instruction counts to be a particularly stable measure, and CPU cycles a particularly unstable one.

[3] Note that SWAR is technically not an instruction set, but instead uses regular registers like vector registers.

17000ft | The MagPi 98

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/17000ft-the-magpi-98/

How do you get internet over three miles up the Himalayas? That’s what the 17000 ft Foundation and Sujata Sahu had to figure out. Rob Zwetsloot reports in the latest issue of the MagPi magazine, out now.

Living in more urban areas of the UK, it can be easy to take for granted decent internet and mobile phone signal. In more remote areas of the country, internet can be a bit spotty but it’s nothing compared with living up in a mountain.

Tablet computers are provided that connect to a Raspberry Pi-powered network

“17000 ft Foundation is a not-for-profit organisation in India, set up to improve the lives of people settled in very remote mountainous hamlets, in areas that are inaccessible and isolated due to reasons of harsh mountainous terrain,” explains its founder, Sujata Sahu. “17000 ft has its roots in high-altitude Ladakh, a region in the desolate cold desert of the Himalayan mountain region of India. Situated in altitudes upwards of 9300 ft and with temperatures dropping to -50°C in inhabited areas, this area is home to indigenous tribal communities settled across hundreds of tiny, scattered hamlets. These villages are remote, isolated, and suffer from bare minimum infrastructure and a centuries-old civilisation unwilling but driven to migrate to faraway cities in search of a better life. Ladakh has a population of just under 300,000 people living across 60,000 km2 of harsh mountain terrain, whose sustenance and growth depends on the infrastructure, resources, and support provided by the government.”

A huge number of students have already benefited from the program

The local governments have built schools. However, they don’t have enough resources or qualified teachers to be truly effective, resulting in a problem with students dropping out or having to be sent off to cities. 17000 ft’s mission is to transform the education in these communities.

High-altitude Raspberry Pi

“The Foundation today works in over 200 remote government schools to upgrade school infrastructure, build the capacity of teachers, provide better resources for learning, thereby improving the quality of education for its children,” says Sujata. “17000 ft Foundation has designed and implemented a unique solar-powered offline digital learning solution called the DigiLab, using Raspberry Pi, which brings the power of digital learning to areas which are truly off-grid and have neither electricity nor mobile connectivity, helping children to learn better, while also enabling the local administration to monitor performance remotely.”

Each school is provided with solar power, Raspberry Pi computers to act as a local internet for the school, and tablets to connect to it. It serves as a ‘last mile connectivity’ from a remote school in the cloud, with an app on a teacher’s phone that will download data when it can and then update the installed Raspberry Pi in their school.

Remote success

“The solution has now been implemented in 120 remote schools of Ladakh and is being considered to be implemented at scale to cover the entire region,” adds Sujata. “It has now run successfully across three winters of Ladakh, withstanding even the harshest of -50°C temperatures with no failure. In the first year of its implementation alone, 5000 students were enrolled, with over 93% being active. The system has now delivered over 60,000 hours of learning to students in remote villages and improved learning outcomes.”

Not all children stay in the villages year round

It’s already helping to change education in the area during the winter. Many villages (and schools) can shut down for up to six months, and families who can’t move away are usually left without a functioning school. 17000 ft has changed this.

“In the winter of 2018 and 2019, for the first time in a few decades, parents and community members from many of these hamlets decided to take advantage of their DigiLabs and opened them up for their children to learn despite the harsh winters and lack of teachers,” Sujata explains. “Parents pooled in to provide basic heating facilities (a Bukhari – a wood- or dung-based stove with a long pipe chimney) to bring in some warmth and scheduled classes for the senior children, allowing them to learn at their own pace, with student data continuing to be recorded in Raspberry Pi and available for the teachers to assess when they got back. The DigiLab Program, which has been made possible due to the presence of the Raspberry Pi Server, has solved a major problem that the Ladakhis have been facing for years!”

Some of the village schools go unused in the winter

How can people help?

Sujata says, “17000 ft Foundation is a non-profit organisation and is dependent on donations and support from individuals and companies alike. This solution was developed by the organisation in a limited budget and was implemented successfully across over a hundred hamlets. Raspberry Pi has been a boon for this project, with its low cost and its computing capabilities which helped create this solution for such a remote area. However, the potential of Raspberry Pi is as yet untapped and the solution still needs upgrades to be able to scale to cover more schools and deliver enhanced functionality within the school. 17000 ft is very eager to help take this to other similar regions and cover more schools in Ladakh that still remain ignored. What we really need is funds and technical support to be able to reach the good of this solution to more children who are still out of the reach of Ed Tech and learning. We welcome contributions of any size to help us in this project.”

For donations from outside India, write to [email protected]. Indian citizens can donate through 17000ft.org/donate.

The MagPi magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

You can also download the PDF directly from the MagPi magazine website.

Subscribers to the MagPi for 12 months get a free Adafruit Circuit Playground, or can choose from one of our other subscription offers, including this amazing limited-time offer of three issues and a book for only £10!

The post 17000ft| The MagPi 98 appeared first on Raspberry Pi.

Is It Really Two-Factor Authentication?

Post Syndicated from Bozho original https://techblog.bozho.net/is-it-really-two-factor-authentication/

Terminology-wise, there is a clear distinction between two-factor authentication (multi-factor authentication) and two-step verification (authentication), as this article explains. 2FA/MFA is authentication using more than one factor, i.e. “something you know” (password), “something you have” (token, card) and “something you are” (biometrics). Two-step verification is basically using two passwords – one permanent and another one that is short-lived and one-time.

At least that’s the theory. In practice it’s more complicated to say which authentication method belongs to which category (“something you X”). Let me illustrate that with a few examples:

  • An OTP hardware token is considered “something you have”. But it uses a shared symmetric secret with the server so that both can generate the same code at the same time (if using TOTP), or the same sequence. This means the secret is effectively “something you know”, because someone may steal it from the server, even though the hardware token is protected. Unless, of course, the server stores the shared secret in an HSM and does the OTP comparison on the HSM itself (some support that). And there’s still a theoretical possibility for the keys to leak prior to being stored on hardware. So is a hardware token “something you have” or “something you know”? For practical purposes it can be considered “something you have” (a minimal TOTP sketch after this list shows why the server-side copy of the secret matters)
  • Smartphone OTP is often not considered as secure as a hardware token, but it should be, due to the secure storage of modern phones. The secret is shared once during enrollment (usually with on-screen scanning), so it should be “something you have” as much as a hardware token
  • SMS is not considered secure and often given as an example for 2-step verification, because it’s just another password. While that’s true, this is because of a particular SS7 vulnerability (allowing the interception of mobile communication). If mobile communication standards were secure, the SIM card would be tied to the number and only the SIM card holder would be able to receive the message, making it “something you have”. But with the known vulnerabilities, it is “something you know”, and that something is actually the phone number.
  • Fingerprint scanners represent “something you are”. And in most devices they are built in a way that the scanner authenticates to the phone (being cryptographically bound to the CPU) while transmitting the fingerprint data, so you can’t just intercept the bytes transferred and then replay them. That’s the theory; it’s not publicly documented how it’s implemented. But if it were not so, then “something you are” is “something you have” – a sequence of bytes representing your fingerprint scan, and that can leak. This is precisely why biometric identification should only be done locally, on the phone, without any server interaction – you can’t make sure the server is receiving sensor-scanned data or captured and replayed data. That said, biometric factors are tied to the proper implementation of the authenticating smartphone application – if your, say, banking application needs a fingerprint scan to run, a malicious actor should not be able to bypass that by stealing shared credentials (userIDs, secrets) and do API calls to your service. So to the server there’s no “something you are”. It’s always “something that the client-side application has verified that you are, if implemented properly”
  • A digital signature (via a smartcard or yubikey or even a smartphone with secure hardware storage for private keys) is “something you have” – it works by signing one-time challenges sent by the server and verifying that the signature has been created by the private key associated with the previously enrolled public key. Knowing the public key gives you nothing, because of how public-key cryptography works. There’s no shared secret and no intermediary whose data flow can be intercepted. A private key is still “something you know”, but by putting it in hardware it becomes “something you have”, i.e. a true second factor. Of course, until someone finds out that the random generation of primes used for generating the private key has been broken and you can derive the private key from the public key (as happened recently with one vendor).
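To make the shared-secret point in the OTP bullet concrete, here is a minimal TOTP sketch in Rust, assuming the hmac and sha1 crates; it is illustrative only, not a production implementation. Both the token (or phone) and the server run exactly this computation over the same stored secret, which is why a copy stolen from the server is enough to generate valid codes.

```rust
use hmac::{Hmac, Mac};
use sha1::Sha1;

type HmacSha1 = Hmac<Sha1>;

/// RFC 6238 TOTP with a 30-second step and 6 digits (sketch).
fn totp(shared_secret: &[u8], unix_time: u64) -> u32 {
    let counter = unix_time / 30; // time step index
    let mut mac = HmacSha1::new_from_slice(shared_secret)
        .expect("HMAC accepts keys of any length");
    mac.update(&counter.to_be_bytes());
    let digest = mac.finalize().into_bytes(); // 20 bytes for SHA-1

    // Dynamic truncation (RFC 4226): the offset comes from the low nibble
    // of the last byte, then 31 bits are taken starting at that offset.
    let offset = (digest[19] & 0x0f) as usize;
    let code = u32::from_be_bytes([
        digest[offset],
        digest[offset + 1],
        digest[offset + 2],
        digest[offset + 3],
    ]) & 0x7fff_ffff;

    code % 1_000_000 // six-digit code
}
```

The security of the second factor therefore reduces to where that shared secret lives: only inside the user’s device, or also in a server-side database or HSM.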

There isn’t an obvious boundary between theoretical and practical. “Something you are” and “something you have” can eventually be turned into “something you know” (or “something someone stores”). Some theoretical attacks can become very practical overnight.

I’d suggest we stick to calling everything “two-factor authentication”, because it’s more important to have mass understanding of the usefulness of the technique than to nitpick on the terminology. 2FA does not solve phishing, unfortunately, but it solves leaked credentials, which is good enough and everyone should have some form of it. Even SMS is better than nothing (obviously, for high-profile systems, digital signatures are the way to go).

The post Is It Really Two-Factor Authentication? appeared first on Bozho's tech blog.

