Bulgaria and Cybersecurity: Are We Ready for the Challenges of the 21st Century?

Post Syndicated from Yoanna Elmi original https://toest.bg/delyan-delchev-interview-cybersecurity/

A few days after the start of the Russian invasion of Ukraine, the Minister of e-Government, Bozhidar Bozhanov, announced that, jointly with the GDBOP (the General Directorate for Combating Organized Crime), steps had been taken to “filter or suspend traffic from more than 45,000 internet addresses from which attempts at malicious interference with electronic systems had been made.” Meanwhile, a series of cyberattacks has been carried out against Ukraine since January, with targets ranging from state institutions to banks. Many of these attacks have been attributed to Russia. Given the active information war in Bulgaria and Russia’s geopolitical interests in the region, Yoanna Elmi spoke with Delyan Delchev, a telecommunications engineer and information technology expert with knowledge and experience in the field of cybersecurity.


Mr. Delchev, we often talk about “hybrid war,” which we were also warned about in connection with the invasion of Ukraine. It seems to me, however, that the term is understood in different ways. What should readers understand by this label?

In principle, the term means the dynamic and/or simultaneous combination of conventional and unconventional military action, diplomacy, and cyberwarfare, including information warfare (or, as we used to call it, propaganda). But as with other terms, the original meaning has been lost over time, and today we mostly mean online propaganda, sometimes aided by sensation-seeking hacking.

Can we say, roughly, that hybrid war includes two elements: a communication element, such as propaganda, and a technical one, namely cyberattacks against key infrastructure?

Yes. But I would point out that what we associate the term with lately is primarily propaganda on the internet; all the other accompanying actions mostly serve to support it.

Are there cyberattacks that are particularly popular? What are the common practices?

The world of hacking is interesting and very different from what you see on television. The overwhelming majority of people involved in these activities are not soldiers, professionals, or geniuses. They are perfectly ordinary people, teenagers for example, gathered in small groups of friends, trying out things they have read here and there, in most cases without understanding them in depth. They enjoy the thrill of potential success, even a small one, the excitement of doing something forbidden, the same adrenaline rush you get from extreme sports.

There are all kinds of people. Some are also motivated by the prospect of small or large profits, or simply by collecting information they imagine might be secret, by the hope of uncovering something new, some grand conspiracy. These people are scattered entirely at random around the world, and they are astonishingly numerous. In China alone there are millions of teenagers (so-called script kiddies) who open a book, or more likely an online hacking document, for the first time and immediately want to try their luck and see what happens. In parallel, there is an unstructured black market: the small gangs interact and help each other with services, scripts, access, and jobs, paying one another with money, (stolen) goods, services, program code, resources, and cryptocurrency. Where there are people and demand, there are also money and rewards.

State “cyber armies” actually take advantage of these hackers and their sheer numbers. They hand down jobs through front men, intermediaries, or friends, and reward them accordingly when they succeed somewhere. Ordinary criminals, private companies, detectives, and all sorts of others do the same. To borrow an analogy from spaghetti westerns: a bounty is placed on someone’s head and all the bounty hunters rush in to try their luck. There is no guarantee of success, and the work is time-consuming, because reality is not like television, where a hacker shows up, complains about something, taps at the keyboard for five seconds, and says, “I’m in.” In real life, even small breaches can take years and are carried out in small steps. That is also why, by the time they are discovered, the damage is already considerable: the breach may not date from yesterday but may have been an open door for years.

Since most hackers do not understand the craft in depth, states or companies that specialize in security and have intelligent, capable people often supply the hacker gangs with ready-made tools, unknown exploits, or inside information. Sometimes they even initiate the process and prepare the ground, then leave the hackers to finish the job. The hackers are, in a sense, mules, and even if someone exposes them, the direct link to the principal is very hard to establish.

Is there a specific signature depending on which state is carrying out a cyberattack?

Individual gangs specialize in different areas: flooding (overloading internet connections, which blocks access to websites, for example); crashing resources and degrading the operability of infrastructure; hacking and taking control; building and harvesting botnets (which are then used to conceal hacks and floods)*; theft of credentials, passwords, personal information, credit card data, and cryptocurrency; ransomware (malicious software that encrypts the information on the infected computer and extorts the user into paying a ransom for a decryption key); and so on.

At first glance it is hard to tell who is who, and whether someone’s behavior is self-generated out of the chaos, serves a private interest, or is directed by someone pulling the strings and motivating them, whether the perpetrator knows it or not. But the world is small, and there are patterns of behavior, styles characteristic of the different groups and motivators. There are also many clues. In reality, nothing on the internet is truly anonymous. Various techniques can therefore identify who is who and whether they are under the influence of principals from one state or another. Cybersecurity keeps evolving, and ordinary hackers find it ever harder to discover new weak links. That is mostly within reach of people who have knowledge, specific access to information (for example the Windows source code, which Microsoft provides under various programs to several states, including Russia and China), intelligence capabilities, and the means to recruit and motivate like-minded people or helpers working inside various companies.

In the SolarWinds scandal, for example, the package of hacks contained a tool with a component written by the hackers but signed as if it came from Microsoft. This component enables the easy and invisible installation of code that allows remote control of Windows. Ordinary hackers cannot do this; those keys, and the process of signing with them, are supposed to be secret. Microsoft is still investigating how the hackers pulled off the breach. It may well have happened in the way described above, through the programs under which Microsoft works directly with certain governments, and it is a serious signal of government involvement. Hacker gangs do not have such capabilities, and even if they did, all of this would have surfaced publicly and the world would have been flooded with similarly signed components. Now, however, the same tools and key have unexpectedly turned up in several hacking campaigns aimed at taking over Ukrainian IT systems, which plainly points to Russian government interests.

Chinese state hackers, like American and Russian ones, have their own collections of hacking tools that they develop in secret. These are not public, but they are sometimes handed to gangs close to them (some of which often assist all services, states, or private companies at once). So the tools, too, can reveal who is behind an attack. Or the rewards. Or the “mules.” Or the method of payment. Or even the propaganda phrases they use (which betray whom they are communicating with). Although the chaotic gangs stand in the foreground, behind them you can sometimes glimpse the shadows of more serious professionals and campaign organizers. Still, more than 99% of cyberattacks are entirely chaotic and have nothing to do with state “cyber armies.”

How are cyberattacks punished? Is there an effective framework anywhere in the world that treats them as crimes?

There are attempts, but I do not think they are effective. The problem is that those punished are usually teenagers who are practically innocent, or minors inexperienced both in life and in what they are doing, which is in fact why they get caught. The vast majority of more experienced hackers, and their principals if there are any, remain untouchable. I do not think that in this “ecosystem” there is any way at all to throw out the dirty water without throwing out the baby with it. The public backlash would be severe. So whatever is done on this front is episodic, and in my view no one is seriously trying to pursue punishment, at least in the free world.

Another problem is that the visible hackers are often scattered across many countries, and you simply cannot catch one, use him to find and catch another, and through him a third, without the support of those countries. That is difficult and sometimes impossible. This is why no one wants to do anything truly serious and large-scale unless financial crimes are involved. For some time now, the relevant police services have been trying, under the banner of fighting child pornography, to develop more coordinated communication between countries, and they often run massive transnational campaigns. The infrastructure created this way can later be used for any kind of cybercrime, from the biggest down to, say, copyright infringement (you downloaded some movie from the internet). But for now this coordination is still being built and focuses on child pornography, the one problem the services all indisputably recognize.

To me, the fact that there are plenty of young people who want to learn “hacking” is not a problem. That is how knowledge accumulates. If some hacker has found a way into your mail or your server, in most cases it is not such a disaster, because the losses are usually small. You can use it as a lesson in protecting yourself better. Because if hackers who are not state-sponsored can breach your systems, the state-sponsored ones have probably been strolling around in there undisturbed for a long time.

The image of the Russian hacker is almost fictional, like something out of a movie. Does Russia actually play a special role in the field of cyberattacks?

It does, but not in the romantic way most people imagine. The “Russian hackers” are in fact hackers of every kind and every nationality. There are quite a few Bulgarians among them, for example. There are also Americans, Chinese, Western Europeans, all sorts. These people do not necessarily have an ideology, nor do they do it out of love for Putin; many do not even know they are sponsored by Moscow through intermediaries. Their gangs, circles of friends, and contacts simply put them in a position to receive, sometimes through dozens of intermediaries, jobs commissioned by Russian principals or serving Russian interests.

Unlike the USA, which generally avoids using the gangs (with minor exceptions) and has large resources of its own, invisible and entirely separate from the hacker community, Russia is much more pragmatic and, crudely put, goes out onto the open market. It thus also relies on lower-skilled hackers, who consequently expose themselves more and get caught, because they use more visible and cruder methods. We could fairly describe the Russian practice as a bull in a china shop. But it works for them, because they enjoy the propaganda effect and the reputation they are building. The practice is also cheaper and easier, since it requires neither training people nor creating and maintaining special state structures with specific expertise.

But here and there you can also see more precise Russian operations, directly involving participants more sophisticated than the mass of hackers. SolarWinds and the two most recent campaigns in Ukraine are good examples, and many intelligence services have realized that Russia is beginning to build up that kind of capability as well.

Did cases like the leak at the National Revenue Agency (NAP) clearly show that Bulgaria is unprepared in the field of cybersecurity? The public was not much moved by the data leak. Why? And how do you explain to people that the problem affects them personally?

First, most of the hacks I have heard of happening in Bulgaria, and the NAP one in particular, were caused by negligence and probably by indifference, often bordering on stupidity. But Bulgaria’s main problem is that we act after the fact: we wait for something to happen before we do anything. Then we work piecemeal until the next incident forces us to act again. Cybersecurity changes constantly. Nothing can be achieved with one-off actions. You have to monitor continuously and react to what is happening, or to what you hear about as potential risks in other countries. Even if, hypothetically, you have the best protection in the world today, in a few months that will no longer be true. The state needs procedures (not just strategies), those procedures need to be actively carried out, and cybersecurity needs to be taken very seriously.

My feeling is that the cyber hygiene of our state institutions is not up to standard and that we are all but still spelling out the alphabet. It is laughable to hear statements about how the NAP thought it was secure because it had gone through training and had made an attempt to get certified under ISO 27000. As we saw, that did not help. It is equally laughable for certain other institutions to believe that encrypting something automatically makes it protected.

Looking carefully at how the hacking of some of the financial and state institutions in Ukraine was developed, we can see that if we had been the target, none of our simple notions of how to be, or to feel, protected would have saved us. There are big differences between building cybersecurity hygiene at the individual or corporate level and at the level of the state, state institutions, and security-sector organizations. For now we are trying to meet at least the corporate standards, and even there we have no great successes.

Citizens cannot be expected to track what new security vulnerabilities appear, nor will they constantly maintain cyber hygiene if it is difficult and incomprehensible. Anything that causes discomfort is soon ignored, as if it had never existed. A classic example is the requirement for very complex passwords: at first glance it should minimize the risk of a hacker guessing them, but it forces the user either to reuse the same password everywhere or to write passwords down and potentially leave them in public places. So instead of improving security, the rule actually lowers it, as the statistics show.

Cybersecurity must be taken seriously from the top of the state downward, not the other way around, from the citizens toward the authorities. Solutions and processes should be simple and organic, ideally invisible to end users and out of their way; then everything will be reasonably effective, even if not perfect. People need to know that nothing in cybersecurity is ever perfect, but it can be good enough to minimize risk and exposure. For example: if the NAP had observed at least the basic GDPR principles for storing personal data, the vulnerability would not have been so large. Do they observe them even now? Or do they believe that as long as they do not publish their backups on internet-accessible servers, those backups are protected? Given what we have seen lately, that is a very deceptive feeling.

As for how cybersecurity affects us personally: imagine that all the electronic goods you have today (bank cards, money, the internet, smartphones, personal information, the sense of having a private life) could be lost and/or end up in someone else’s hands, while you are transported, metaphorically speaking, back to the 1970s in terms of communication. If that thought makes you uncomfortable, then you should take your cyber hygiene seriously.

* Botnets are created by infecting a large number of computers, through a virus or by other means, and then installing software (a bot) on them. When an attack is to be carried out, the network’s controller activates these bots remotely and they begin a coordinated attack on specific servers. The attack thus appears to come from many computers all over the world and is hard to contain, while the real perpetrator stays hidden behind his army of bots. (Editor’s note)
Cover photo: Michael Geiger / Unsplash

Source

Detecting security issues in logging with Amazon CodeGuru Reviewer

Post Syndicated from Brian Farnhill original https://aws.amazon.com/blogs/devops/detecting-security-issues-in-logging-with-amazon-codeguru-reviewer/

Amazon CodeGuru is a developer tool that provides intelligent recommendations for identifying security risks in code and improving code quality. To help you find potential issues related to logging of inputs that haven’t been sanitized, Amazon CodeGuru Reviewer now includes additional checks for both Python and Java. In this post, we discuss these updates and show examples of code that relate to these new detectors.

In December 2021, an issue was discovered relating to Apache’s popular Log4j Java-based logging utility (CVE-2021-44228). There are several resources available to help mitigate this issue (some of which are highlighted in a post on the AWS Public Sector blog). This issue has drawn attention to the importance of logging inputs in a way that is safe. To help developers understand where un-sanitized values are being logged, CodeGuru Reviewer can now generate findings that highlight these and make it easier to remediate them.

The new detectors and recommendations in CodeGuru Reviewer can detect findings in Java where Log4j is used, and in Python where the standard logging module is used. The following examples demonstrate how this works and what the recommendations look like.

Findings in Java

Consider the following Java sample that responds to a web request.

@RequestMapping("/example.htm")
public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) {
    ModelAndView result = new ModelAndView("success");
    String userId = request.getParameter("userId");
    result.addObject("userId", userId);

    // More logic to populate `result`.
    log.info("Successfully processed {} with user ID: {}.", request.getRequestURL(), userId);
    return result;
}

This simple example builds a response to the initial request, extracting the userId field from that request to do so. Before returning the result, the userId field is passed to the log.info statement. This presents a potential security issue, because the value of userId is not sanitized or changed in any way before it is logged. CodeGuru Reviewer is able to identify that the variable userId points to a value that needs to be sanitized before it is logged, as it comes from an HTTP request. All user inputs in a request (including query parameters, headers, body, and cookie values) should be checked before logging to ensure a malicious user hasn’t passed values that could compromise your logging mechanism.

CodeGuru Reviewer recommends sanitizing user-provided inputs before logging them to ensure log integrity. Let’s take a look at CodeGuru Reviewer’s findings for this issue.

A screenshot of the AWS Console that describes the log injection risk found by CodeGuru Reviewer

An option to remediate this risk would be to add a sanitize() method that checks and modifies the value to remove known risks. The specific process of doing this will vary based on the values you expect and what is safe for your application and its processes. By logging the now sanitized value, you have mitigated those risks that could impact your logging framework. The modified code sample below shows one example of how this could be addressed.

@RequestMapping("/example.htm")
public ModelAndView handleRequestSafely(HttpServletRequest request, HttpServletResponse response) {
    ModelAndView result = new ModelAndView("success");
    String userId = request.getParameter("userId");
    String sanitizedUserId = sanitize(userId);
    result.addObject("userId", sanitizedUserId);

    // More logic to populate `result`.
    log.info("Successfully processed {} with user ID: {}.", request.getRequestURL(), sanitizedUserId);
    return result;
}

private static String sanitize(String userId) {
    return userId.replaceAll("\\D", "");
}

The example now calls the sanitize() method, which uses a replaceAll() call with a regular expression to remove all non-digit characters. This example assumes the userId value should only contain digit characters, ensuring that any other characters that could be used to expose a vulnerability in the logging framework are removed first.

Findings in Python

Now consider the following Python code from a sample Flask project that handles a web request.

from flask import Flask, current_app, request

app = Flask(__name__)

@app.route('/log')
def getUserInput():
    input = request.args.get('input')
    current_app.logger.info("User input: %s", input)

    # More logic to process user input.

In this example, the input variable is assigned the input query string value from a web request. Then, the Flask logger records its value as an info level message. This has the same challenge as the Java example above. However, this time, rather than changing the value, we can instead inspect it and choose to log it only when it is in a format we expect. A simple example of this could be where we expect only alphanumeric characters in the input variable. The isalnum() function can act as a simple test in this case. Here is an example of what this style of validation could look like.

from flask import Flask, current_app, request

app = Flask(__name__)

@app.route('/log')
def safe_getUserInput():
    input = request.args.get('input')    
    if input.isalnum():
        current_app.logger.info("User input: %s", input)        
    else:
        current_app.logger.warning("Unexpected input detected")
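
To illustrate what this validation prevents, consider a log injection attempt. The sketch below (plain Python using the standard logging module, with a hypothetical attacker-controlled value; it is not part of the CodeGuru sample) shows how an embedded newline can forge what looks like a separate, legitimate log entry, and how the isalnum() check rejects it:

import logging

logging.basicConfig(format="%(levelname)s %(message)s", level=logging.INFO)
logger = logging.getLogger(__name__)

# Hypothetical attacker-controlled value, e.g. taken from a query string.
malicious = "alice\nINFO Admin login succeeded"

# Logged as-is, the embedded newline makes the output look like two entries:
#   INFO User input: alice
#   INFO Admin login succeeded
logger.info("User input: %s", malicious)

# The isalnum() check from the example above rejects this value.
print(malicious.isalnum())  # False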

Getting started

While implementing log sanitization can be a long journey for many teams, it is an important guardrail for maintaining your application’s log integrity. With CodeGuru Reviewer detecting log inputs that are neither sanitized nor validated, developers can use these recommendations as a guide to reduce risks related to log injection attacks. Additionally, you can provide feedback on recommendations in the CodeGuru Reviewer console or by commenting on the code in a pull request. This feedback helps improve the precision of CodeGuru Reviewer, so the recommendations you see get better over time.

To get started with CodeGuru Reviewer, you can use the AWS Free Tier at no cost. For 90 days, you can review up to 100K lines of code in onboarded repositories per AWS account. For more information, please review the pricing page.

About the authors

Brian Farnhill

Brian Farnhill is a Software Development Engineer in the Australian Public Sector team. His background is in building solutions and helping customers improve DevOps tools and processes. When he isn’t working, you’ll find him either coding for fun or playing online games.

Jia Qin

Jia Qin is part of the Solutions Architect team in Malaysia. She loves developing on AWS, trying out new technology, and sharing her knowledge with customers. Outside of work, she enjoys taking walks and petting cats.

[$] Belenios: a system for secret voting

Post Syndicated from original https://lwn.net/Articles/887077/

As part of the recent discussion on switching to secret voting for Debian general resolutions (GRs), which has resulted in an ongoing GR of its own, the subject of voting systems that embody various attributes some would like to see for voting in Debian has been brought up. One of the systems mentioned, Belenios, provides an open-source “verifiable online voting system”. Whether or not Debian chooses to switch to secret voting, Belenios would seem to provide what other projects or organizations may be looking for as a mechanism to handle their voting needs.

Patch Tuesday – March 2022

Post Syndicated from Greg Wiseman original https://blog.rapid7.com/2022/03/08/patch-tuesday-march-2022/

Microsoft’s March 2022 updates include fixes for 92 CVEs (including 21 from the Chromium project, which is used by their Edge web browser). None of them have been seen exploited in the wild, but three have been previously disclosed. CVE-2022-24512, affecting .NET and Visual Studio, and CVE-2022-21990, affecting Remote Desktop Client, both allow RCE (Remote Code Execution). CVE-2022-24459 is an LPE (local privilege escalation) vulnerability in the Windows Fax and Scan service. All three publicly disclosed vulnerabilities are rated Important – organizations should remediate at their regular patch cadence.

Three CVEs this month are rated Critical. CVE-2022-22006 and CVE-2022-24501 both affect video codecs. In most cases, these will update automatically via the Microsoft Store. However, any organizations with automatic updates disabled should be sure to push out updates. The vulnerability most likely to raise eyebrows this month is CVE-2022-23277, a Critical RCE affecting Exchange Server. Thankfully, this is a post-authentication vulnerability, meaning attackers need credentials to exploit it. Although passwords can be obtained via phishing and other means, this one shouldn’t be as rampantly exploited as the deluge of Exchange vulnerabilities we saw throughout 2021. Exchange administrators should still patch as soon as reasonably possible.

SharePoint administrators get a break this month, though on the client side, a handful of Office vulnerabilities were fixed. Three separate RCEs in Visio, Tampering and Security Feature Bypass vulnerabilities in Word, and Information Disclosure in the Skype Extension for Chrome all got patched.

CVE-2022-24508 is an RCE affecting Windows SMBv3, which has potential for widespread exploitation, assuming an attacker can put together a suitable exploit. Luckily, like this month’s Exchange vulnerability, this too requires authentication.

Organizations using Microsoft’s Azure Site Recovery service should be aware that 11 CVEs were fixed with today’s updates, split between RCEs and LPEs. They are all specific to the scenario where an on-premises VMware deployment is set up to use Azure for disaster recovery.


Summary tables

Apps vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-23282 Paint 3D Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24465 Microsoft Intune Portal for iOS Security Feature Bypass Vulnerability No No 3.3 Yes

Azure vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-24467 Azure Site Recovery Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-24468 Azure Site Recovery Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-24517 Azure Site Recovery Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-24470 Azure Site Recovery Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-24471 Azure Site Recovery Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-24520 Azure Site Recovery Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-24469 Azure Site Recovery Elevation of Privilege Vulnerability No No 8.1 Yes
CVE-2022-24506 Azure Site Recovery Elevation of Privilege Vulnerability No No 6.5 Yes
CVE-2022-24515 Azure Site Recovery Elevation of Privilege Vulnerability No No 6.5 Yes
CVE-2022-24518 Azure Site Recovery Elevation of Privilege Vulnerability No No 6.5 Yes
CVE-2022-24519 Azure Site Recovery Elevation of Privilege Vulnerability No No 6.5 Yes

Browser vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-0809 Chromium: CVE-2022-0809 Out of bounds memory access in WebXR No No N/A Yes
CVE-2022-0808 Chromium: CVE-2022-0808 Use after free in Chrome OS Shell No No N/A Yes
CVE-2022-0807 Chromium: CVE-2022-0807 Inappropriate implementation in Autofill No No N/A Yes
CVE-2022-0806 Chromium: CVE-2022-0806 Data leak in Canvas No No N/A Yes
CVE-2022-0805 Chromium: CVE-2022-0805 Use after free in Browser Switcher No No N/A Yes
CVE-2022-0804 Chromium: CVE-2022-0804 Inappropriate implementation in Full screen mode No No N/A Yes
CVE-2022-0803 Chromium: CVE-2022-0803 Inappropriate implementation in Permissions No No N/A Yes
CVE-2022-0802 Chromium: CVE-2022-0802 Inappropriate implementation in Full screen mode No No N/A Yes
CVE-2022-0801 Chromium: CVE-2022-0801 Inappropriate implementation in HTML parser No No N/A Yes
CVE-2022-0800 Chromium: CVE-2022-0800 Heap buffer overflow in Cast UI No No N/A Yes
CVE-2022-0799 Chromium: CVE-2022-0799 Insufficient policy enforcement in Installer No No N/A Yes
CVE-2022-0798 Chromium: CVE-2022-0798 Use after free in MediaStream No No N/A Yes
CVE-2022-0797 Chromium: CVE-2022-0797 Out of bounds memory access in Mojo No No N/A Yes
CVE-2022-0796 Chromium: CVE-2022-0796 Use after free in Media No No N/A Yes
CVE-2022-0795 Chromium: CVE-2022-0795 Type Confusion in Blink Layout No No N/A Yes
CVE-2022-0794 Chromium: CVE-2022-0794 Use after free in WebShare No No N/A Yes
CVE-2022-0793 Chromium: CVE-2022-0793 Use after free in Views No No N/A Yes
CVE-2022-0792 Chromium: CVE-2022-0792 Out of bounds read in ANGLE No No N/A Yes
CVE-2022-0791 Chromium: CVE-2022-0791 Use after free in Omnibox No No N/A Yes
CVE-2022-0790 Chromium: CVE-2022-0790 Use after free in Cast UI No No N/A Yes
CVE-2022-0789 Chromium: CVE-2022-0789 Heap buffer overflow in ANGLE No No N/A Yes

Developer Tools vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-24526 Visual Studio Code Spoofing Vulnerability No No 6.1 Yes
CVE-2020-8927 Brotli Library Buffer Overflow Vulnerability No No 6.5 Yes
CVE-2022-24512 .NET and Visual Studio Remote Code Execution Vulnerability No Yes 6.3 Yes
CVE-2022-24464 .NET and Visual Studio Denial of Service Vulnerability No No 7.5 No

Exchange Server vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-24463 Microsoft Exchange Server Spoofing Vulnerability No No 6.5 Yes
CVE-2022-23277 Microsoft Exchange Server Remote Code Execution Vulnerability No No 8.8 Yes

Microsoft Office vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-24522 Skype Extension for Chrome Information Disclosure Vulnerability No No 7.5 Yes
CVE-2022-24462 Microsoft Word Security Feature Bypass Vulnerability No No 5.5 Yes
CVE-2022-24511 Microsoft Office Word Tampering Vulnerability No No 5.5 Yes
CVE-2022-24509 Microsoft Office Visio Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24461 Microsoft Office Visio Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24510 Microsoft Office Visio Remote Code Execution Vulnerability No No 7.8 Yes

System Center vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-23265 Microsoft Defender for IoT Remote Code Execution Vulnerability No No 7.2 Yes
CVE-2022-23266 Microsoft Defender for IoT Elevation of Privilege Vulnerability No No 7.8 Yes
CVE-2022-23278 Microsoft Defender for Endpoint Spoofing Vulnerability No No 5.9 Yes

Windows vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-21967 Xbox Live Auth Manager for Windows Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-24525 Windows Update Stack Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-24508 Windows SMBv3 Client/Server Remote Code Execution Vulnerability No No 8.8 Yes
CVE-2022-23284 Windows Print Spooler Elevation of Privilege Vulnerability No No 7.2 No
CVE-2022-21975 Windows Hyper-V Denial of Service Vulnerability No No 4.7 Yes
CVE-2022-23294 Windows Event Tracing Remote Code Execution Vulnerability No No 8.8 Yes
CVE-2022-23291 Windows DWM Core Library Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-23288 Windows DWM Core Library Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-23286 Windows Cloud Files Mini Filter Driver Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-24455 Windows CD-ROM Driver Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-24507 Windows Ancillary Function Driver for WinSock Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-23287 Windows ALPC Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-24505 Windows ALPC Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-24501 VP9 Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24451 VP9 Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24460 Tablet Windows User Interface Application Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-23295 Raw Image Extension Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-23300 Raw Image Extension Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-22010 Media Foundation Information Disclosure Vulnerability No No 4.4 Yes
CVE-2022-21977 Media Foundation Information Disclosure Vulnerability No No 3.3 Yes
CVE-2022-22006 HEVC Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-23301 HEVC Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-22007 HEVC Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24452 HEVC Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24453 HEVC Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24456 HEVC Video Extensions Remote Code Execution Vulnerability No No 7.8 Yes
CVE-2022-24457 HEIF Image Extensions Remote Code Execution Vulnerability No No 7.8 Yes

Windows ESU vulnerabilities

CVE Title Exploited Publicly disclosed? CVSSv3 base score Has FAQ?
CVE-2022-24454 Windows Security Support Provider Interface Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-23299 Windows PDEV Elevation of Privilege Vulnerability No No 7.8 Yes
CVE-2022-23298 Windows NT OS Kernel Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-23297 Windows NT Lan Manager Datagram Receiver Driver Information Disclosure Vulnerability No No 5.5 Yes
CVE-2022-21973 Windows Media Center Update Denial of Service Vulnerability No No 5.5 No
CVE-2022-23296 Windows Installer Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-23290 Windows Inking COM Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-24502 Windows HTML Platforms Security Feature Bypass Vulnerability No No 4.3 Yes
CVE-2022-24459 Windows Fax and Scan Service Elevation of Privilege Vulnerability No Yes 7.8 No
CVE-2022-23293 Windows Fast FAT File System Driver Elevation of Privilege Vulnerability No No 7.8 No
CVE-2022-23281 Windows Common Log File System Driver Information Disclosure Vulnerability No No 5.5 Yes
CVE-2022-23283 Windows ALPC Elevation of Privilege Vulnerability No No 7 Yes
CVE-2022-24503 Remote Desktop Protocol Client Information Disclosure Vulnerability No No 5.4 Yes
CVE-2022-21990 Remote Desktop Client Remote Code Execution Vulnerability No Yes 8.8 Yes
CVE-2022-23285 Remote Desktop Client Remote Code Execution Vulnerability No No 8.8 Yes
CVE-2022-23253 Point-to-Point Tunneling Protocol Denial of Service Vulnerability No No 6.5 No


Announcing experimental DDR in 1.1.1.1

Post Syndicated from Christopher Wood original https://blog.cloudflare.com/announcing-ddr-support/

1.1.1.1 sees approximately 600 billion queries per day. However, proportionally, most queries sent to this resolver are over cleartext: 89% over UDP and TCP combined, and the remaining 11% are encrypted. We care about end-user privacy and would prefer to see all of these queries sent to us over an encrypted transport using DNS-over-TLS or DNS-over-HTTPS. Having a mechanism by which clients could discover support for encrypted protocols such as DoH or DoT will help drive this number up and lead to more name encryption on the Internet. That’s where DDR – or Discovery of Designated Resolvers – comes into play. As of today, 1.1.1.1 supports the latest version of DDR so clients can automatically upgrade non-secure UDP and TCP connections to secure connections. In this post, we’ll describe the motivations for DDR, how the mechanism works, and, importantly, how you can test it out as a client.

DNS transports and public resolvers

We initially launched our public recursive resolver service 1.1.1.1 over three years ago, and have since seen its usage steadily grow. Today, it is one of the fastest public recursive resolvers available to end-users, supporting the latest security and privacy DNS transports such as HTTP/3 for DNS-over-HTTPS (DoH), as well as Oblivious DoH.

As a public resolver, all clients, regardless of type, are typically manually configured based on a user’s desired performance, security, and privacy requirements. This choice reflects answers to two separate but related types of questions:

  1. What recursive resolver should be used to answer my DNS queries? Does the resolver perform well? Does the recursive resolver respect my privacy?
  2. What protocol should be used to speak to this particular recursive resolver? How can I keep my DNS data safe from eavesdroppers that should otherwise not have access to it?

The second question primarily concerns technical matters. In particular, whether or not a recursive resolver supports DoH is simple enough to answer. Either the recursive resolver does or does not support it!

In contrast, the first question is primarily a matter of policy. For example, consider the question of choosing between a local network-provided DNS recursive resolver and a public recursive resolver. How do resolver features (including DoH support, for example) influence this decision? How does the resolver’s privacy policy regarding data use and retention influence this decision? More generally, what information about recursive resolver capabilities is available to clients in making this decision and how is this information delivered to clients?

These policy questions have been the topic of substantial debate in the Internet Engineering Task Force (IETF), the standards body where DoH was standardized, and they are one focus of the Adaptive DNS Discovery (ADD) Working Group, which is chartered to work on the following items (among others):

– Define a mechanism that allows clients to discover DNS resolvers that support encryption and that are available to the client either on the public Internet or on private or local networks.

– Define a mechanism that allows communication of DNS resolver information to clients for use in selection decisions. This could be part of the mechanism used for discovery, above.

In other words, the ADD Working Group aims to specify mechanisms by which clients can obtain the information they need to answer question (1). Critically, one of those pieces of information is what encrypted transport protocols the recursive resolver supports, which would answer question (2).

As the answer to question (2) is purely technical and not a matter of policy, the ADD Working Group was able to specify a workable solution that we’ve implemented and tested with existing clients. Before getting into the details of how it works, let’s dig into the problem statement here and see what’s required to address it.

Threat model and problem statement

The DDR problem is relatively straightforward: given the IP address of a DNS recursive resolver, how can one discover parameters necessary for speaking to the same resolver using an encrypted transport? (As above, discovering parameters for a different resolver is a distinctly different problem that pertains to policy and is therefore out of scope.)

This question is only meaningful insofar as using encryption helps protect against some attacker. Otherwise, if the network were trusted, encryption would add no value! A direct consequence is that this question assumes the network – for some definition of “the network” – is untrusted and encryption helps protect against this network.

But what exactly is the network here? In practice, the topology typically looks like the following:

Typical DNS configuration from DHCP

Again, for DNS discovery to have any meaning, we assume that either the ISP or home network – or both – is untrusted and malicious. The setting here depends on the client and the network they are attached to, but it’s generally simplest to assume the ISP network is untrusted.

This question also makes one important assumption: clients know the desired recursive resolver address. Why is this important? Typically, the IP address of a DNS recursive resolver is provided via Dynamic Host Configuration Protocol (DHCP). When a client joins a network, it uses DHCP to learn information about the network, including the default DNS recursive resolver. However, DHCP is a famously unauthenticated protocol, which means that any active attacker on the network can spoof the information, as shown below.

Unauthenticated DHCP discovery

One obvious attack vector would be for the attacker to redirect DNS traffic from the network’s desired recursive resolver to an attacker-controlled recursive resolver. This has important implications for the threat model for discovery.

First, there is currently no known mechanism for encrypted DNS discovery in the presence of an active attacker that can influence the client’s view of the recursive resolver’s address. In other words, to make any meaningful improvement, DNS discovery assumes the client’s view of the DNS recursive resolver address is correct (and obtained through some secure mechanism). A second implication is that the attacker can simply block any attempt of client discovery, preventing upgrade to encrypted transports. This seems true of any interactive discovery mechanism. As a result, DNS discovery must relax this attacker’s capabilities somewhat: rather than add, drop, or modify packets, the attacker can only add or modify packets.

Altogether, this threat model lets us sharpen the DNS discovery problem statement: given the IP address of a DNS recursive resolver, how can one securely discover parameters necessary for speaking to the same resolver using an encrypted transport in the presence of an active attacker that can add or modify packets? It should be infeasible, for example, for the attacker to redirect the client from the resolver that it knows at the outset to one the attacker controls.

So how does this work, exactly?

DDR mechanics

DDR depends on two mechanisms:

  1. Certificate-based authentication of encrypted DNS resolvers.
  2. SVCB records for encoding and communicating DNS parameters.

Certificates allow resolvers to prove authority for IP addresses. For example, if you view the certificate for one.one.one.one, you’ll see several IP addresses listed under the SubjectAlternativeName extension, including 1.1.1.1.

SubjectAltName list of the one.one.one.one certificate

SVCB records are extensible key-value stores that can be used for conveying information about services to clients. Example information includes the supported application protocols, including HTTP/3, as well as keying material like that used for TLS Encrypted Client Hello.

How does DDR combine these two to solve the discovery problem above? In three simple steps:

  1. Clients query the expected DNS resolver for its designations and their parameters with a special-purpose SVCB record.
  2. Clients open a secure connection to the designated resolver, for example, one.one.one.one, authenticating the resolver against the one.one.one.one name.
  3. Clients check that the designated resolver is additionally authenticated for the IP address of the origin resolver. That is, the certificate for one.one.one.one, the designated resolver, must include the IP address 1.1.1.1, the original designator resolver.

If this validation completes, clients can then use the secure connection to the designated resolver. In pictures, this is as follows:

DDR discovery process

This demonstrates that the encrypted DNS resolver is authoritative for the client’s original DNS resolver. Or, in other words, that the original resolver and the encrypted resolver are effectively “the same.” An encrypted resolver that does not include the originally requested resolver IP address on its certificate would fail the validation, and clients are not expected to follow the designated upgrade path. This entire process is referred to as “Verified Discovery” in the DDR specification.
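
As a rough illustration of this verification flow, here is a minimal Python sketch. It assumes dnspython 2.1 or later for SVCB record support, and it is an outline of the three checks rather than production client code:

import socket
import ssl

import dns.resolver  # dnspython >= 2.1 for SVCB record support

ORIGINAL_IP = "1.1.1.1"  # resolver address the client already knows

# Step 1: query the expected resolver for its designations.
res = dns.resolver.Resolver(configure=False)
res.nameservers = [ORIGINAL_IP]
answer = res.resolve("_dns.resolver.arpa", "SVCB")
target = str(answer[0].target).rstrip(".")  # e.g. "one.one.one.one"

# Step 2: open a TLS connection authenticated against the designated name.
ctx = ssl.create_default_context()
with socket.create_connection((target, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=target) as tls:
        cert = tls.getpeercert()

# Step 3: check the certificate is also valid for the original resolver IP.
sans = [v for (k, v) in cert.get("subjectAltName", ()) if k == "IP Address"]
if ORIGINAL_IP in sans:
    print(f"{target} is verified for {ORIGINAL_IP}; upgrade to DoH/DoT")
else:
    print("Verification failed; do not follow the designation")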

Experimental deployment and next steps

To enable more encrypted DNS on the Internet and help the standardization process, 1.1.1.1 now has experimental support for DDR. You can query it directly to find out:

$ dig +short @1.1.1.1 _dns.resolver.arpa type64

QUESTION SECTION
_dns.resolver.arpa.               IN SVCB 

ANSWER SECTION
_dns.resolver.arpa.                           300    IN SVCB  1 one.one.one.one. alpn="h2,h3" port="443" ipv4hint="1.1.1.1,1.0.0.1" ipv6hint="2606:4700:4700::1111,2606:4700:4700::1001" key7="/dns-query{?name}"
_dns.resolver.arpa.                           300    IN SVCB  2 one.one.one.one. alpn="dot" port="853" ipv4hint="1.1.1.1,1.0.0.1" ipv6hint="2606:4700:4700::1111,2606:4700:4700::1001"

ADDITIONAL SECTION
one.one.one.one.                              300    IN AAAA  2606:4700:4700::1111
one.one.one.one.                              300    IN AAAA  2606:4700:4700::1001
one.one.one.one.                              300    IN A     1.1.1.1
one.one.one.one.                              300    IN A     1.0.0.1

This command sends a SVCB query (type64) for the reserved name _dns.resolver.arpa to 1.1.1.1. The output lists the contents of this record, including the DoH and DoT designation parameters. Let’s walk through the contents of this record:

_dns.resolver.arpa.                           300    IN SVCB  1 one.one.one.one. alpn="h2,h3" port="443" ipv4hint="1.1.1.1,1.0.0.1" ipv6hint="2606:4700:4700::1111,2606:4700:4700::1001" key7="/dns-query{?name}"

This says that the DoH target one.one.one.one is accessible over port 443 (port="443") using either HTTP/2 or HTTP/3 (alpn="h2,h3"), and the DoH path (key7) for queries is "/dns-query{?name}".
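
As an example of how a client might use this template once discovered, the following sketch sends a query using Cloudflare's JSON format for DoH. The hostname and path come from the record above; the query name is arbitrary, and the Python requests library is assumed to be available:

import requests

target = "one.one.one.one"  # designated resolver from the SVCB record
dohpath = "/dns-query"      # path portion of the key7 template

resp = requests.get(
    f"https://{target}{dohpath}",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
resp.raise_for_status()
for record in resp.json().get("Answer", []):
    print(record["name"], record["data"])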

Moving forward

DDR is a simple mechanism that lets clients automatically upgrade to encrypted transport protocols for DNS queries without any manual configuration. At the end of the day, users running compatible clients will enjoy a more private Internet experience. Happily, both Microsoft and Apple recently announced experimental support for this emerging standard, and we’re pleased to help them and other clients test support.

Going forward, we hope to help add support for DDR to open source DNS resolver software such as dnscrypt-proxy and Bind. If you’re interested in helping us continue to drive adoption of encrypted DNS and related protocols to help build a better Internet, we’re hiring!

DENT 2.0 released

Post Syndicated from original https://lwn.net/Articles/887213/

DENT is a special-purpose Linux distribution aimed at router deployments; “DENT utilizes the Linux Kernel, Switchdev, and other Linux based projects as the basis for building a new standardized network operating system without abstractions or overhead”. Version 2.0 has been released:

DENT 2.0 adds secure scaling with Internet Protocol version 6 (IPv6) and Network Address Translation (NAT) to support a broader community of enterprise customers. It also adds Power over Ethernet (PoE) control to allow remote switching, monitoring, and shutting down. Connectivity of IoT, Point of Sale (POS), and other devices is highly valuable to retail storefronts, early adopters of DENT. DENT 2.0 also adds traffic policing, helping mitigate attack situations that overload the CPU.

PipeWire: A year in review & a look ahead (Collabora blog)

Post Syndicated from original https://lwn.net/Articles/887212/

The Collabora blog looks at recent developments in the PipeWire media system and looks forward to what is yet to come:

Now in 2022, we are looking to the future. We already have designs to improve WirePlumber and experiment with new things. On the short-term horizon, we have plans to rework some parts of WirePlumber in order to make its configuration more user-friendly and the scripts easier to work with. We are also planning to revisit the policy logic and try to go a step beyond what PulseAudio has ever offered. In addition, we are looking forward to experimenting with complex cameras to improve how PipeWire and libcamera work together for an optimal user experience.

How to Run VFX Workflows in the Cloud

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/how-to-run-vfx-workflows-in-the-cloud/

An hour from Queens. An hour from Jersey. Two hours from Staten Island. That’s how long it would take Molecule VFX IT staff to travel from their homes to the closet in Manhattan that housed the team’s LTO device. All those hours, just to spend five minutes switching out one tape.

It was a huge waste of time, not to mention subway fares. The hassle of tape wasn’t the only reason Molecule decided to make their production workflows fully cloud-based, but the IT team certainly doesn’t mind skipping that trip these days.

Moving production entirely to the cloud allowed Molecule to unlock the value of their artists’ time as well as that of the IT staff who support them, and to save money in the process. If your media team has been contemplating a fully cloud-based workflow, read on to learn how Molecule did it—including how they managed to maintain the ability to move data from the cloud back to tape on demand without maintaining on-premises tape infrastructure.

About Molecule VFX

Molecule VFX is a visual effects studio based in New York and Los Angeles that provides the elemental building blocks to tell a customer’s story. They have been servicing episodic television and feature films, like the Apple TV series, “Dickinson,” and the Hulu series, “Only Murders in the Building,” since 2005.

Molecule’s Case for the Cloud

Visual effects artists want to be able to hop into a new script, work on it, render it, review it, QC it, and call it done. Their work is the most valuable element of the business. Anything that gets in the way of that or slows down the workflow directly impacts the company’s success, and an on-premises system was doing exactly that.

  • With IT staff working from home, LTO maintenance tied them up for hours—time that could have been spent helping Molecule’s visual effects artists create.
  • Beyond tape, the team managed a whole system of machines, networks, and switches. Day-to-day issues could knock out the company’s ability to get work done for entire days.

They knew moving to the cloud would optimize staff time and mitigate those outages, but it didn’t happen overnight. Because much of their business already happens in the digital workspace, Molecule had been slowly moving to the cloud over the past few years. The shift to remote work due to the COVID-19 pandemic accelerated their transition.

Work from the Amazon Original Movie, “Bliss,” featuring Owen Wilson.

Strategies for Moving VFX Workflows to the Cloud

Molecule’s Full Stack Software Architect, Ben Zenker, explained their approach. Through the process, he identified a few key strategies that made the transition a success, including:

  • Taking a phased approach while deciding between hybrid and fully cloud-based workflows.
  • Reading the fine print when comparing providers.
  • Rolling their own solutions where possible.
  • Thoroughly testing workflows.
  • Repurposing on-premises infrastructure.

1. Take a Phased Approach

Early in the transition, the Molecule team was still using the tape system and an on-premises Isilon server for some workloads. Because they were still deciding if they were going to have a hybrid system or go fully cloud, they took an ad hoc approach to identifying what data was going to be in Backblaze B2 Cloud Storage and what production infrastructure was going to be in CoreWeave, a cloud compute partner that specializes in VFX workloads. Ben explained, “Once we decided definitively we wanted to be fully in the cloud, connecting CoreWeave and Backblaze was simple—if it was on CoreWeave, it was getting backed up in Backblaze B2 nightly.”

2. Read the Fine Print

The team planned to sync incremental backups to the cloud every night. That meant their data would change every day as staff deleted or updated files. They figured out early on that retention minimums were a non-starter. Some cloud providers charge for deleted data for 30, 60, or even 90 days, meaning Molecule would be forced to pay for storage on data they had deleted months ago. But not all cloud providers are transparent about their retention policies. Molecule took the time to track down these policies and compare costs.

“Backblaze was the only service that met our business requirements without a retention minimum.”
—Ben Zenker, Full Stack Software Architect, Molecule VFX

3. Roll Your Own Solutions Where Possible

The team creates a lot of their own web tools to interact with other technology, so it was a relatively easy lift to set up rclone commands to run syncs of their production data nightly to Backblaze B2. Using rclone, they also built a variable price reporting tool so that higher-ups could easily price out different projects and catch potential problems like a runaway render.

“There are hundreds of options that you can pass into rclone, so configuring it involved some trial and error. Thankfully it’s open-source, and Backblaze has documentation. I made some small tweaks and additions to the tool myself to make it work better for us.”
—Ben Zenker, Full Stack Software Architect, Molecule VFX
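
As a rough sketch of what such a nightly job could look like, the following Python wrapper shells out to rclone. The source path, bucket name, and flag choices here are hypothetical, since Molecule's actual configuration isn't public:

import subprocess
from datetime import date

SOURCE = "/mnt/production"          # hypothetical on-disk production data
DEST = "b2:studio-nightly-backup"   # hypothetical Backblaze B2 bucket

# rclone sync makes DEST match SOURCE; --fast-list reduces B2
# transaction counts and --transfers raises upload parallelism.
subprocess.run(
    [
        "rclone", "sync", SOURCE, DEST,
        "--fast-list",
        "--transfers", "16",
        "--log-file", f"/var/log/rclone/{date.today()}.log",
    ],
    check=True,
)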

4. Test and Test Again

In reflecting on the testing phase they went through, Ben acknowledges he could have been more liberal. He noted, “I went into it a little cautious because I didn’t want to end up incurring big charges for a test, but Backblaze has all sorts of safeguards in place. You can set price limits and caps, which was great for the testing period.”

5. Repurpose On-premises Infrastructure

The on-premises Isilon server and the physical tape system are no longer part of the active project workflow. They still utilized those devices to host some core services for a time—a firewall, authentication, and a VPN that some members used. In the end, they decided to fully retire all on-premises infrastructure, but repurposing the on-premises infrastructure allowed them to maximize its useful life.

But What If Clients Demand Tape?

While Molecule is more than happy to have modernized their workflows in the cloud, there are still some clients—and major clients at that—who require that contractors save final projects on tape for long-term storage. It no longer made sense to have staff trained on how to use the LTO system, so when a customer asked for a tape copy, they reached out to Backblaze for advice.

They needed a turnkey solution that they didn’t have to manage, and they definitely didn’t want to have to resort to reinvesting and managing tape hardware. Backblaze partner, TapeArk, fit the bill. TapeArk typically helps clients get data off of tape and into the cloud, but in this case they reversed the process. Molecule sent them a secure token to the exact piece of data they needed. TapeArk managed the download, put it on tape, and shipped it to the client.

If Molecule needs to send tape copies to clients in the future, they have an easy, hands-off solution and they don’t have to maintain an LTO system for infrequent use. Ben was grateful for the partnership and easy solution.

Work from the Apple TV series, “Dickinson,” featuring Hailee Steinfeld.

Cloud Workflows Free Up a Month of Time

Now that the staff no longer has to manage an LTO tape system, the team has recouped at least 30 payroll days a year that can be dedicated to supporting artists. Ben noted that with the workflows in the cloud, the nature of the IT workload has changed, and the team definitely appreciates having that time back to respond to changing demands.

Ready to move your VFX workflows to the cloud? Start testing today with 10GB of data storage free from Backblaze B2.

The post How to Run VFX Workflows in the Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Building a serverless image catalog with AWS Step Functions Workflow Studio

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-image-catalog-with-aws-step-functions-workflow-studio/

This post is written by Pascal Vogel, Associate Solutions Architect, and Benjamin Meyer, Sr. Solutions Architect.

Workflow Studio is a low-code visual workflow designer for AWS Step Functions that enables the orchestration of serverless workflows through a guided interactive interface. With the integration of Step Functions and the AWS SDK, you can now access more than 200 AWS services and over 9,000 API actions in your state machines.

This walkthrough uses Workflow Studio to implement a serverless image cataloging pipeline. It includes content moderation, automated tagging, and parallel image processing. Workflow Studio allows you to set up API integrations to other AWS services quickly with drag and drop actions, without writing custom application code.

Solution overview

Photo sharing websites often allow users to publish user-generated content such as text, images, or videos. Manual content review and categorization can be challenging. This solution enables the automation of these tasks.

Workflow overview

In this workflow:

  1. An image stored in Amazon S3 is checked for inappropriate content using the Amazon Rekognition DetectModerationLabels API.
  2. Based on the result of (1), appropriate images are forwarded to image processing while inappropriate ones trigger an email notification.
  3. Appropriate images undergo two processing steps in parallel: the detection of objects and text in the image via Amazon Rekognition’s DetectLabels and DetectText APIs. The results of both processing steps are saved in an Amazon DynamoDB table.
  4. An inappropriate image triggers an email notification for manual content moderation via the Amazon Simple Notification Service (SNS).

Prerequisites

To follow this walkthrough, you need:

  1. An AWS account.
  2. An AWS user with AdministratorAccess (see the instructions on the AWS Identity and Access Management (IAM) console).
  3. AWS CLI using the instructions here.
  4. AWS Serverless Application Model (AWS SAM) CLI using the instructions here.

Initial project setup

Get started by cloning the project repository from GitHub:

git clone https://github.com/aws-samples/aws-step-functions-image-catalog-blog.git

The cloned repository contains two AWS SAM templates.

  1. The starter directory contains a template that deploys the AWS resources and permissions you use later to build the image cataloging workflow.
  2. The solution directory contains a template that deploys the finished image cataloging pipeline. Use this template if you want to skip ahead to the finished solution.

Both templates deploy the following resources to your AWS account:

  • An Amazon S3 bucket that holds the image files for the catalog.
  • A DynamoDB table as the data store of the image catalog.
  • An SNS topic and subscription that allow you to send an email notification.
  • A Step Functions state machine that defines the processing steps in the cataloging pipeline.

To follow the walkthrough, deploy the AWS SAM template in the starter directory using the AWS SAM CLI:

cd aws-step-functions-image-catalog-blog/starter
sam build
sam deploy --guided

Configure the AWS SAM deployment as follows. Input your email address for the parameter ModeratorEmailAddress:

Configuring SAM deploy

During deployment, you receive an email asking you to confirm the subscription to notifications generated by the Step Functions workflow. In the email, choose Confirm subscription to receive these notifications.

Subscription message

Confirm successful resource creation by going to the AWS CloudFormation console. Open the serverless-image-catalog-starter stack and choose the Stack info tab:

CloudFormation stack

View the Outputs tab of the CloudFormation stack. You reference these items later in the walkthrough:

Outputs tab

Implementing the image cataloging pipeline

Accessing Step Functions Workflow Studio

To access Step Functions in Workflow Studio:

  1. Access the Step Functions console.
  2. In the list of State machines, select image-catalog-workflow-starter.
  3. Choose the Edit button.
  4. Choose Workflow Studio.

Workflow Studio

Workflow Studio consists of three main areas:

  1. The Canvas lets you modify the state machine graph via drag and drop.
  2. The States Browser lets you browse and search more than 9,000 API Actions from over 200 AWS services.
  3. The Inspector panel lets you configure the properties of state machine states and displays the Step Functions definition in the Amazon States Language (ASL).

For the purpose of this walkthrough, you can delete the Pass state present in the state machine graph. Right-click it and choose Delete state.

Auto-moderating content with Amazon Rekognition and the Choice State

Use Amazon Rekognition’s DetectModerationLabels API to detect inappropriate content in the images processed by the workflow:

  1. In the States browser, search for the DetectModerationLabels API action.
  2. Drag and drop the API action on the state machine graph on the canvas.

Drag and drop

In the Inspector panel, select the Configuration tab and add the following API Parameters:

{
  "Image": {
    "S3Object": {
      "Bucket.$": "$.bucket",
      "Name.$": "$.key"
    }
  }
}

Switch to the Output tab and check the box next to Add original input to output using ResultPath. This allows you to pass both the original input and the task’s output on to the next state on the state machine graph.

Input the following ResultPath:

$.moderationResult
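Behind the scenes, Workflow Studio generates Amazon States Language for this task state. A minimal sketch of what the generated state might look like (the state name and the Next target are assumptions for illustration):

{
  "DetectModerationLabels": {
    "Type": "Task",
    "Resource": "arn:aws:states:::aws-sdk:rekognition:detectModerationLabels",
    "Parameters": {
      "Image": {
        "S3Object": {
          "Bucket.$": "$.bucket",
          "Name.$": "$.key"
        }
      }
    },
    "ResultPath": "$.moderationResult",
    "Next": "Choice"
  }
}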

Step Functions enables you to make decisions based on the output of previous task states via the choice state. Use the result of the DetectModerationLabels API action to decide how to proceed with the image:

  1. In the States browser, choose the Flow tab.
  2. Drag and drop a Choice state onto the state machine graph below the DetectModerationLabels API action.
  3. Select the added Choice state.
  4. In the Inspector panel, choose Rule #1 and select Edit.
  5. Choose Add conditions.
  6. For Variable, enter $.moderationResult.ModerationLabels[0].
  7. For Operator, choose is present.
  8. Choose Save conditions.
    Conditions for rule #1
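In ASL, the resulting choice rule might look roughly like the following sketch, assuming the state names added later in this walkthrough:

{
  "Choice": {
    "Type": "Choice",
    "Choices": [
      {
        "Variable": "$.moderationResult.ModerationLabels[0]",
        "IsPresent": true,
        "Next": "SNS Publish"
      }
    ],
    "Default": "Parallel"
  }
}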

If Amazon Rekognition detects inappropriate content, the workflow notifies content moderators to inspect the image manually:

  1. In the States browser, find the SNS Publish API Action.
  2. Drag the Action into the Rule #1 branch of the Choice state.
  3. For API Parameters, select the SNS topic that is visible in the Outputs of the serverless-image-catalog-starter stack in the CloudFormation console.

SNS topic in Workflow Studio
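The API Parameters for the SNS Publish action could look like the sketch below. The topic ARN placeholder must be replaced with the value from your stack outputs, and serializing the whole state input as the message body is just one reasonable choice:

{
  "TopicArn": "<SNS-topic-ARN>",
  "Message.$": "States.JsonToString($)"
}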

Speeding up image cataloging with the Parallel state

Appropriate images should be processed and included in the image catalog. In this example, processing includes the automated generation of tags based on objects and text identified in the image.

To accelerate this, instruct Step Functions to perform these tasks concurrently via a Parallel state:

  1. In the States browser, select the Flow tab.
  2. Drag and drop a Parallel state onto the Default branch of the previously added Choice state.
  3. Search for the Amazon Rekognition DetectLabels API action in the States browser.
  4. Drag and drop it inside the parallel state.
  5. Configure the following API parameters:
    {
      "Image": {
        "S3Object": {
          "Bucket.$": "$.bucket",
          "Name.$": "$.key"
        }
      }
    }
    
  6. Switch to the Output tab and check the box next to Add original input to output using ResultPath. Set the ResultPath to $.output.

Record the results of the Amazon Rekognition DetectLabels API Action to the DynamoDB database:

  1. Place a DynamoDB UpdateItem API Action inside the Parallel state below the Amazon Rekognition DetectLabels API action.
  2. Configure the following API Parameters to save the tags to the DynamoDB table. Input the name of the DynamoDB table visible in the Outputs of the serverless-image-catalog-starter stack in the CloudFormation console:
{
  "TableName": "<DynamoDB table name>",
  "Key": {
    "Id": {
      "S.$": "$.key"
    }
  },
  "UpdateExpression": "set detectedObjects=:o",
  "ExpressionAttributeValues": {
    ":o": {
      "S.$": "States.JsonToString($.output.Labels)"
    }
  }
}

This API parameter definition makes use of an intrinsic function to convert the list of objects identified by Amazon Rekognition from JSON to String.

Intrinsic functions

In addition to objects, you also want to identify text in images and store it in the database. To do so:

  1. Drag and drop an Amazon Rekognition DetectText API action into the Parallel state next to the DetectLabels Action.
  2. Configure the API Parameters and ResultPath identically to the DetectLabels API Action.
  3. Place another DynamoDB UpdateItem API Action inside the Parallel state below the Amazon Rekognition DetectText API Action. Set the following API Parameters and input the same DynamoDB table name as before.
{
  "TableName": "<DynamoDB table name>",
  "Key": {
    "Id": {
      "S.$": "$.key"
    }
  },
  "UpdateExpression": "set detectedText=:t",
  "ExpressionAttributeValues": {
    ":t": {
      "S.$": "States.JsonToString($.output.TextDetections)"
    }
  }
}

To save the state machine:

  1. Choose Apply and exit.
  2. Choose Save.
  3. Choose Save anyway.

Finishing up and testing the image cataloging workflow

To test the image cataloging workflow, upload an image to the S3 bucket created as part of the initial project setup. Find the name of the bucket in the Outputs of the serverless-image-catalog-starter stack in the CloudFormation console.
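For example, using the AWS CLI (the bucket and file names here are placeholders):

aws s3 cp ./test-image.jpeg s3://<S3-bucket-name>/test-image.jpeg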

  1. Select the image-catalog-workflow-starter state machine in the Step Functions console.
  2. Choose Start execution.
  3. Paste the following test event (use your S3 bucket name):
    {
        "bucket": "<S3-bucket-name>",
        "key": "<Image-name>.jpeg"
    }
    
  4. Choose Start execution.

Once the execution has started, you can follow the state of the state machine live in the Graph inspector. For an appropriate image, the result will look as follows:

Graph inspector

Next, repeat the test process with an image that Amazon Rekognition classifies as inappropriate. Find out more about inappropriate content categories here. This produces the following result:

Graph inspector

You receive an email notifying you about the inappropriate image and its properties.

Cleaning up

To clean up the resources provisioned as part of the solution, run the following command in the aws-step-functions-image-catalog-blog/starter directory:

sam delete

Conclusion

This blog post demonstrates how to implement a serverless image cataloging pipeline using Step Functions Workflow Studio. By orchestrating AWS API actions and flow states via drag and drop, you can process user-generated images. This example checks images for appropriateness and generates tags based on their content without custom application code.

You can now expand and improve this workflow by triggering it automatically each time an image is uploaded to the Amazon S3 bucket or by adding a manual approval step for submitted content. To find out more about Workflow Studio, visit the AWS Step Functions Developer Guide.

For more serverless learning resources, visit Serverless Land.

Breaking the Bias – Women at AWS Developer Relations

Post Syndicated from Rashmi Nambiar original https://aws.amazon.com/blogs/aws/breaking-the-bias-women-at-aws-developer-relations/

Today for International Women’s Day we’re joined by a special guest writer, Rashmi Nambiar. She’s here to share her conversations with a few other members of the AWS Developer Relations team, talking about their work and experience as women in tech. Enjoy!

– The AWS News Blog Team


When I was contemplating joining AWS, many warned me about boarding the “rocket ship.” But I took the leap of faith. It has been four years since then. Now when I look back, the growth trajectory is something that I am proud of, from starting my AWS journey with a regional role in India to going global and now driving the Worldwide Developer Marketing Strategy. #HereatAWS, I get to choose the direction of my career and prioritize my time between family and work.

At AWS, we believe that the future of technology is accessible, flexible, and inclusive. So we take it very seriously when we say, “All Builders Welcome.” As a woman in tech, I have felt that strong sense of belonging with the team and acceptance for who I am.

Being part of the AWS Developer Relations (DevRel) team, I get to meet and work with awesome builders within and outside of the organization who are changing the status quo and pushing technological boundaries. This International Women’s Day, I took the opportunity to talk to some of the women at AWS DevRel about their role as tech/dev advocates.

Veliswa Boya

Headshot of Veliswa Boya

Veliswa Boya, Senior Developer Advocate

What is it that you like about being a developer advocate at AWS?
“Becoming a developer advocate is something I didn’t even dare to dream about. Some of us go through life and at some point admit that some dreams are just not meant for us. That today I am a developer advocate at AWS working with the builder community of sub-Saharan Africa and beyond is one of the most fulfilling and exciting roles I can recall throughout my entire tech career. I especially enjoy working with those new to AWS and new to tech in general, so my role spans technical content creation and delivery all the way to the mentoring of community members. I enjoy working for an organization that’s at the forefront of innovation, but at the same time not innovating for the sake of innovating, but always being customer obsessed and innovating on behalf of the customer.”

You are an icon of possibilities with many titles. How did the transition from AWS Hero to AWS employee work out for you?
“I became an AWS Hero in May 2020, and with that, I became the first woman out of Africa to ever be named an AWS Hero. I have always enjoyed sharing knowledge. Every little bit I learn, I always make sure to share. I believe that this—and more—led to my nomination. Joining AWS as a developer advocate is awesome. I continue to live the passion that led to me being a Hero, sharing knowledge with others and at the same time learning from both the community and my wonderful peers.”

Antje Barth

Headshot photo of Antje Barth

Antje Barth, Principal Developer Advocate – AI/ML

What do you like about your role as an AI/ML specialist on the AWS Developer Relations Team?
“I’ve always been excited about technology and the speed of innovation in this field. What I like most about my role as a principal developer advocate for AI/ML is that I get to share this passion and enable customers, developers, and students to build amazing things. I recently organized a hackathon asking participants to think about creative ways of using machine learning to improve disaster response. And I was simply blown away by all the ideas the teams came up with.”

You have authored books like Data Science on AWS. What is your guidance for someone planning to get on the publishing path?
“The piece of advice I would give anyone interested in becoming a book author: Find the topic you are really passionate about, dive into the topic, and start developing content—whether it’s blog posts, code samples, or videos. Become a subject matter expert and make yourself visible. Speak at meetups, submit a talk to a conference. Grow your network. Find peers, discuss your ideas, ask for feedback, make sure the topic is relevant for a large audience group. And eventually, reach out to publishers, share your content ideas and collected feedback, and put together a book proposal.”

Lena Hall

Headshot photo of Lena Hall

Lena Hall, Head of Developer Relations – North America

What excites you about AWS Developer Relations?
“I love it because AWS culture empowers anyone at AWS, including developer advocates, to always advocate for the customer and the community. While doing that, no matter how hard it is or how much friction you run into, you can be confident in doing the right thing for our customers and community. This translates to our ability to influence internally across the company, using strong data and logical narratives to support our improvement proposals.”

You have recently joined the team as the DevRel Head for North America. What does it take to lead a team of builders?
“It is important to recognize that people on your team have unique strengths and superpowers. I found it valuable to identify those early on and offer paths to develop them even more. In many cases, it leads to a bigger impact and improved motivation. It is also crucial to listen to your team, be supportive and welcoming of ideas, and protective of their time.”

Rohini Gaonkar

Headshot photo of Rohini Gaonkar

Rohini Gaonkar, Senior Developer Advocate

You have been with AWS for over eight years. What attracted you to developer advocacy?
“As a developer advocate, I love being autonomous, and I have the freedom to pick the tech of my choice. The other fun part is to work closely with the community—my efforts, however small, help someone in their career, and that is the most satisfying part of my work.”

You have worked in customer support, solutions architect, and technical evangelist roles. What’s your tip on developing multiple technical skills?
“Skills are like flowers in your bouquet; you should keep adding beautiful colors to it. Sometimes it takes months to years to develop a skill, so keep an eye on your next thing and start adding the skills for it today. Interestingly, at AWS, the ‘Learn and be curious’ leadership principle encourages us to always find ways to improve ourselves, to explore new possibilities and act on them.”

Jenna Pederson

Headshot photo of Jenna Pederson

Jenna Pederson, Senior Developer Advocate

What is your reason for taking up a developer advocate role at AWS?
“I like being a developer advocate at AWS because it lets me scale my impact. I get to work with and help so many more builders gain knowledge, level up their skills, and bring their ideas to life through technology.”

It is such a delight to watch your presentations and demo at events and other programs. What is your advice to people who want to get into public speaking?
“If you’re a new speaker, talk about what you’re learning, even if you think everyone is talking about the same thing. You will have a fresh perspective on whatever it is.”

Kris Howard

Headshot photo of Kris Howard

Kris Howard, DevRel Manager

Why did you join the Developer Relations Team?
“I joined DevRel because I love being on stage and sharing my creativity and passion for tech with others. The most rewarding part is when someone tells you that you inspired them to learn a new skill, or change their career, or stretch themselves to reach a new goal.”

Since you have worked in different geographies, what would you say to someone who is exploring working in different countries?
“The last two years have really emphasized that if you want to see the world, you should take advantage of every opportunity you get. That’s one of the benefits of Amazon: that there are so many career paths available to you in lots of different places! As a hiring manager, I was always excited to get applications from internal transfers, and in 2020 I got the chance to experience it from the other side when I moved with my partner from Sydney to Munich. It was a challenging time to relocate, but in retrospect, I’m so glad we did.”

Join Us!

Interested in working with the DevRel team? Here are some of the available opportunities.

Deploying service-mesh-based architectures using AWS App Mesh and Amazon ECS

Post Syndicated from Kesha Williams original https://aws.amazon.com/blogs/architecture/deploying-service-mesh-based-architectures-using-aws-app-mesh-and-amazon-ecs/

This International Women’s Day, we’re featuring more than a week’s worth of posts that highlight female builders and leaders. We’re showcasing women in the industry who are building, creating, and, above all, inspiring, empowering, and encouraging everyone—especially women and girls—in tech.


Service-mesh-based architectures provide visibility and control for microservices (a group of loosely coupled services that function together to make an application operate) by providing a consistent way to route and monitor traffic between them. They often appear in concert with containers and microservices in modern, cloud-native development. Containers help simplify the build, test, and deploy phases of the code pipeline for a given microservice. Microservices also offer many benefits over monoliths: faster speed-to-market; better resiliency; increased scalability; and independent, reusable components.

Despite these benefits, not all organizations use containers and microservices. Why? Because refactoring monoliths can be architecturally challenging. It increases the complexity of your workload by adding many, sometimes thousands, of services. These services must then be monitored. The services also have to communicate with each other, so you need to properly route and monitor traffic. Adding services also means there are more APIs and databases that need protection.

If this sounds like an issue you’ve encountered or one you might need help with in the future, you’ll benefit from using a service mesh, a dedicated infrastructure layer for governing microservices and facilitating service-to-service communications. In this post, we’ll explain how to use AWS App Mesh to provide visibility and control for microservices by providing a consistent way to route and monitor traffic between them.

How will a service mesh help me govern my workload?

A service mesh helps you run a fast, reliable, and secure network of microservices, and it can help alleviate many of the pain points encountered when running microservices:

  1. Decouples governance from business logic
  2. Adds service discovery
  3. Maintains load balancing
  4. Provides traffic control
  5. Provides additional observability and monitoring capabilities
  6. Adds resiliency and health checks
  7. Increases security

How does a service mesh work?

A service mesh consists of two high-level components: a control plane and a data plane.

The control plane manages all of the individual microservices in the data plane and provides processes to manipulate and observe the entire application.

The data plane intercepts and processes calls between the different microservices. The data plane is typically implemented as a proxy, which runs alongside each microservice as a sidecar. A sidecar is a container that is automatically injected into the microservice at run time.

Architecture walkthrough

The example architecture in Figure 1 shows a microservices architecture for an Ordering application. It contains three microservices: Inventory, Order, and UI.

This is a deliberately small and simple example to explore the concepts. Here’s how it works:

  1. The control plane is the central component that manages all the individual microservices in the data plane.
  2. The data plane intercepts and processes calls between the different microservices.
  3. App Mesh forms the service mesh and supports the services registered with AWS Cloud Map.
  4. AWS Cloud Map provides service discovery.
  5. Containers are defined in an ECS task definition.
  6. Envoy is the service mesh proxy that is deployed alongside the microservice container.
  7. The application container represents the application components that run in a Docker container.
  8. Service communication traces are made available to AWS X-Ray.
  9. Service-level logs and metrics are made available to Amazon CloudWatch.

Figure 1. Microservices architecture for an Ordering application managed by App Mesh

Implementing the service mesh with App Mesh

To use App Mesh, you’ll need to have an existing service running on Amazon Elastic Container Service (Amazon ECS) and be registered with AWS Cloud Map.

App Mesh forms a service mesh for your application by providing an AWS-managed control plane. The control plane helps you run microservices by providing consistent visibility and network traffic controls for each microservice in your application.

App Mesh separates the logic needed for monitoring and controlling communications into a proxy that runs as a sidecar alongside every microservice. App Mesh works with an open-source, high-performing network proxy called Envoy. After implementing your service mesh, you’ll update your services to use Envoy, which requires the services to communicate with each other through the proxy instead of directly with each other. All service-to-service traffic goes through the Envoy proxy, allowing traffic routes to be configured and metrics, logs, and traces to be exported.

Components

There are several components needed to support the service mesh:

  • Virtual services – Virtual services are abstractions of actual microservices provided by a virtual node through a virtual router.
  • Virtual nodes – Virtual nodes are logical pointers to a particular task group, like an Amazon ECS service. You’ll need to provide the service discovery name found in AWS Cloud Map to connect your microservice.
  • Envoy proxy – The Envoy proxy configures your microservice task group to use App Mesh’s virtual routers and nodes.
  • Virtual routers – Virtual routers route traffic for one or more virtual services within your mesh.
  • Routes – Routes are used by the virtual router to match requests and direct traffic to one or more virtual nodes.
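As an illustration of how these pieces are defined, the following AWS CLI call creates a virtual node backed by AWS Cloud Map service discovery; this is a sketch, and all names in it (mesh, node, namespace, service, port) are hypothetical:

aws appmesh create-virtual-node \
  --mesh-name ordering-mesh \
  --virtual-node-name order-vn \
  --spec '{
    "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
    "serviceDiscovery": {"awsCloudMap": {"namespaceName": "ordering.local", "serviceName": "order"}}
  }'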

Integrating App Mesh with Amazon ECS

App Mesh integrates with your containerized microservices running on Amazon ECS (and other compute services). Amazon ECS is a container orchestration service that helps you deploy, manage, and scale containerized applications.

With Amazon ECS, your containers are defined in a task definition; you’ll need to add an Envoy proxy Docker container image to the task definition and register the microservices for discovery through AWS Cloud Map.
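A trimmed sketch of such a task definition fragment is shown below. The image tag, ARN, and port values are placeholders, and the proxy configuration properties shown are illustrative; consult the App Mesh documentation for the full set required in your environment:

{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "<your-application-image>",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }]
    },
    {
      "name": "envoy",
      "image": "public.ecr.aws/appmesh/aws-appmesh-envoy:<version>",
      "environment": [
        {
          "name": "APPMESH_RESOURCE_ARN",
          "value": "arn:aws:appmesh:<region>:<account>:mesh/<mesh-name>/virtualNode/<node-name>"
        }
      ]
    }
  ],
  "proxyConfiguration": {
    "type": "APPMESH",
    "containerName": "envoy",
    "properties": [
      { "name": "AppPorts", "value": "8080" },
      { "name": "ProxyIngressPort", "value": "15000" },
      { "name": "ProxyEgressPort", "value": "15001" },
      { "name": "IgnoredUID", "value": "1337" }
    ]
  }
}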

Conclusion

This post shows how App Mesh helps you solve some of the most common pitfalls of managing microservice architectures. It also shows you how to use App Mesh to provide visibility and control for microservices on AWS by providing a consistent way to route and monitor traffic between them.

App Mesh works as the control plane and uses the open-source Envoy proxy to provide the data plane that intercepts and processes calls between the different microservices. Through integrations with CloudWatch and X-Ray, you’re able to capture application-level metrics, logs, and traces.

Ready to get started? Check out the Learning AWS App Mesh post on the Database blog, the Using Service Meshes in AWS whitepaper, and Introduction to AWS App Mesh AWS Online Tech Talk to learn more. You can connect with Kesha on LinkedIn if you have questions.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

CVE-2022-26143: A Zero-Day vulnerability for launching UDP amplification DDoS attacks

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/cve-2022-26143-amplification-attack/


A zero-day vulnerability in the Mitel MiCollab business phone system has recently been discovered (CVE-2022-26143). This vulnerability, called TP240PhoneHome, which Cloudflare customers are already protected against, can be used to launch UDP amplification attacks. This type of attack reflects traffic off vulnerable servers to victims, amplifying the amount of traffic sent in the process by an amplification factor of 220 billion percent in this specific case.

Cloudflare has been actively involved in investigating the TP240PhoneHome exploit, along with other members of the InfoSec community. Read our joint disclosure here for more details. As far as we can tell, the vulnerability has been exploited as early as February 18, 2022. We have deployed emergency mitigation rules to protect Cloudflare customers against the amplification DDoS attacks.

Mitel has been informed of the vulnerability. As of February 22, they have issued a high severity security advisory advising their customers to block exploitation attempts using a firewall, until a software patch is made available. Cloudflare Magic Transit customers can use the Magic Firewall to block external traffic to the exposed Mitel UDP port 10074 by following the example in the screenshot below, or by pasting the following expression into their Magic Firewall rule editor and selecting the Block action:

(udp.dstport eq 10074)

Creating a Magic Firewall rule to block traffic to port 10074

To learn more, register for our webinar on March 23rd, 2022.

Exploiting the vulnerability to launch DDoS attacks

Mitel Networks is based in Canada and provides business communications and collaboration products to over 70 million business users around the world. Amongst their enterprise collaboration products is the aforementioned Mitel MiCollab platform, known to be used in critical infrastructure such as municipal governments, schools, and emergency services. The vulnerability was discovered in the Mitel MiCollab platform.

The vulnerability manifests as an unauthenticated UDP port that is incorrectly exposed to the public Internet. The call control protocol running on this port can be used to, amongst other things, issue the debugging command startblast. This command does not place real telephone calls; rather, it simulates a “blast” of calls in order to test the system. For each test call that is made, two UDP packets are emitted in response to the issuer of the command.

According to the security advisory, the exploit can “allow a malicious actor to gain unauthorized access to sensitive information and services, cause performance degradations or a denial of service condition on the affected system. If exploited with a denial of service attack, the impacted system may cause significant outbound traffic impacting availability of other services.”

Since this is an unauthenticated and connectionless UDP-based protocol, you can use spoofing to direct the response traffic toward any IP and port number — and by doing so, reflect and amplify a DDoS attack to the victim.

We’ve mainly focused on the amplification vector because it can be used to hurt the whole Internet, but the phone systems themselves can likely be hurt in other ways with this vulnerability. This UDP call control port offers many other commands. With some work, it’s likely that you could use this UDP port to commit toll fraud, or to simply render the phone system inoperable. We haven’t assessed these other possibilities, because we do not have access to a device that we can safely test with.

The good news

Fortunately, only a few thousand of these devices are improperly exposed to the public Internet, meaning that this vector can “only” achieve several hundred million packets per second total. This volume of traffic can cause major outages if you’re not protected by an always-on automated DDoS protection service, but it’s nothing to be concerned with if you are.

Furthermore, an attacker can’t run multiple commands at the same time. Instead, the server queues up commands and executes them serially. The fact that you can only launch one attack at a time from these devices, mixed with the fact that you can make that attack for many hours, has fascinating implications. If an attacker chooses to start an attack by specifying a very large number of packets, then that box is “burned” – it can’t be used to attack anyone else until the attack completes.

How Cloudflare detects and mitigates DDoS attacks

To defend organizations against DDoS attacks, we built and operate software-defined systems that run autonomously. They automatically detect and mitigate DDoS attacks across our entire network.

Initially, traffic is routed through the Internet via BGP Anycast to the nearest Cloudflare edge data center. Once the traffic reaches our data center, our DDoS systems sample it asynchronously allowing for out-of-path analysis of traffic without introducing latency penalties.

The analysis is done using data streaming algorithms. Packet samples are compared to attack fingerprints, and multiple real-time signatures are created based on dynamic masking of various fingerprint attributes. Each time another packet matches one of the signatures, a counter is increased. When the system qualifies an attack, i.e., the activation threshold is reached for a given signature, a mitigation rule is compiled and pushed inline. The mitigation rule includes the real-time signature and the mitigation action, e.g., drop.


You can read more about our autonomous DDoS protection systems and how they work in our joint-disclosure technical blog post.

Helping build a better Internet

Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone — even in the face of DDoS attacks and emerging zero-day threats. As part of our mission, since 2017, we’ve been providing unmetered and unlimited DDoS protection for free to all of our customers. Over the years, it has become increasingly easier for attackers to launch DDoS attacks. To counter the attacker’s advantage, we want to make sure that it is also easy and free for organizations of all sizes to protect themselves against DDoS attacks of all types.

Not using Cloudflare yet? Start now.

CVE-2022-26143: TP240PhoneHome reflection/amplification DDoS attack vector

Post Syndicated from Alex Forster original https://blog.cloudflare.com/cve-2022-26143/


Beginning in mid-February 2022, security researchers, network operators, and security vendors observed a spike in DDoS attacks sourced from UDP port 10074 targeting broadband access ISPs, financial institutions, logistics companies, and organizations in other vertical markets.

Upon further investigation, it was determined that the devices abused to launch these attacks are MiCollab and MiVoice Business Express collaboration systems produced by Mitel, which incorporate TP-240 VoIP-processing interface cards and supporting software; their primary function is to provide Internet-based site-to-site voice connectivity for PBX systems.

Approximately 2600 of these systems have been incorrectly provisioned so that an unauthenticated system test facility has been inadvertently exposed to the public Internet, allowing attackers to leverage these PBX VoIP gateways as DDoS reflectors/amplifiers.

Mitel is aware that these systems are being abused to facilitate high-pps (packets-per-second) DDoS attacks, and have been actively working with customers to remediate abusable devices with patched software that disables public access to the system test facility.

In this blog, we will further explore the observed activity, explain how the driver has been abused, and share recommended mitigation steps. This research was created cooperatively among a team of researchers from Akamai SIRT, Cloudflare, Lumen Black Lotus Labs, NETSCOUT ASERT, TELUS, Team Cymru, and The Shadowserver Foundation.

DDoS attacks in the wild

While spikes of network traffic associated with the vulnerable service were observed on January 8th and February 7th, 2022, we believe the first actual attacks leveraging the exploit began on February 18th.

Observed attacks were primarily predicated on packets-per-second, or throughput, and appeared to be UDP reflection/amplification attacks sourced from UDP/10074 that were mainly directed towards destination ports UDP/80 and UDP/443. The single largest observed attack of this type preceding this one was approximately 53 Mpps and 23 Gbps. The average packet size for that attack was approximately 60 bytes, with an attack duration of approximately 5 minutes. The amplified attack packets are not fragmented.

This particular attack vector differs from most UDP reflection/amplification attack methodologies in that the exposed system test facility can be abused to launch a sustained DDoS attack of up to 14 hours in duration by means of a single spoofed attack initiation packet, resulting in a record-setting packet amplification ratio of 4,294,967,296:1. A controlled test of this DDoS attack vector yielded more than 400 Mpps of sustained DDoS attack traffic.

It should be noted that this single-packet attack initiation capability has the effect of precluding network operator traceback of the spoofed attack initiator traffic. This helps mask the attack traffic generation infrastructure, making it less likely that the attack origin can be traced compared with other UDP reflection/amplification DDoS attack vectors.

Abusing the tp240dvr driver

The abused service on affected Mitel systems is called tp240dvr (“TP-240 driver”) and appears to run as a software bridge to facilitate interactions with TDM/VoIP PCI interface cards. The service listens for commands on UDP/10074 and is not meant to be exposed to the Internet, as confirmed by the manufacturer of these devices. It is this exposure to the Internet that ultimately allows it to be abused.

The tp240dvr service exposes an unusual command that is designed to stress-test its clients in order to facilitate debugging and performance testing. This command can be abused to cause the tp240dvr service to send this stress-test to attack victims. The traffic consists of a high rate of short informative status update packets that can potentially overwhelm victims and cause the DDoS scenario.

This command can also be abused by attackers to launch very high-throughput attacks. Attackers can use specially-crafted commands to cause the tp240dvr service to send larger informative status update packets, significantly increasing the amplification ratio.

By extensively testing isolated virtual TP-240-based systems in a lab setting, researchers were able to cause these devices to generate massive amounts of traffic in response to comparatively small request payloads. We will cover this attack scenario in greater technical depth in the following sections.

Calculating the potential attack impact

As previously mentioned, amplification via this abusable test facility differs substantially from how it is accomplished with most other UDP reflection/amplification DDoS vectors. Typically, reflection/amplification attacks require the attacker to continuously transmit malicious payloads to abusable nodes for as long as they wish to attack the victim. In the case of TP-240 reflection/amplification, this continuous transmission is not necessary to launch high-impact DDoS attacks.

Instead, an attacker leveraging TP-240 reflection/amplification can launch a high-impact DDoS attack using a single packet. Examination of the tp240dvr binary reveals that, due to its design, an attacker can theoretically cause the service to emit 2,147,483,647 responses to a single malicious command. Each response generates two packets on the wire, leading to approximately 4,294,967,294 amplified attack packets being directed toward the attack victim.

For each response to a command, the first packet contains a counter that increments with each sent response. As the counter value increments, the size of this first packet will grow from 36 bytes to 45 bytes. The second packet contains diagnostic output from the function, which can be influenced by the attacker. By optimizing each initiator packet to maximize the size of the second packet, every command will result in amplified packets that are up to 1,184 bytes in length.

In theory, a single abusable node generating the upper limit of 4,294,967,294 packets at a rate of 80kpps would result in an attack duration of roughly 14 hours. Over the course of the attack, the “counter” packets alone would generate roughly 95.5GB of amplified attack traffic destined for the targeted network. The maximally-padded “diagnostic output” packets would account for an additional 2.5TB of attack traffic directed towards the target.

This would yield a sustained flood of just under 393 Mb/sec of attack traffic from a single reflector/amplifier, all resulting from a single spoofed attack initiator packet of only 1,119 bytes in length. This results in a nearly unimaginable amplification ratio of 2,200,288,816:1 — a multiplier of 220 billion percent, triggered by a single packet.
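As a back-of-the-envelope check, the headline figures above can be roughly reproduced from the constants quoted in this post. A minimal sketch in Python; the 44.5-byte average for the growing counter packets is our own approximation:

# Rough reproduction of the attack-volume figures quoted above.
MAX_RESPONSES = 2_147_483_647        # responses to one malicious command
PACKETS = 2 * MAX_RESPONSES          # two packets emitted per response
RATE_PPS = 80_000                    # per-device send rate quoted above (80 kpps)

duration_s = PACKETS / RATE_PPS                     # ~53,687 s, i.e. ~14.9 hours
counter_bytes = MAX_RESPONSES * 44.5                # ~95.6 GB of "counter" packets (36-45 bytes each)
diag_bytes = MAX_RESPONSES * 1_184                  # ~2.54 TB of maximally-padded diagnostic packets
rate_mbps = (counter_bytes + diag_bytes) * 8 / duration_s / 1e6
print(f"{duration_s / 3600:.1f} h, {rate_mbps:.0f} Mb/s sustained")  # ~14.9 h, ~393 Mb/s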

Upper boundaries of attack volume and simultaneity

The tp240dvr service processes commands using a single thread. This means the service can only process a single command at a time, and thus can only be used to launch one attack at a time. In the example scenario presented above, during the 14 hours that the abused device would be attacking the target, it cannot be leveraged to attack any other target. This is somewhat unique in the context of DDoS reflection/amplification vectors.

Although this characteristic also causes the tp240dvr service to be unavailable to legitimate users, it is much preferable to having these devices be leveraged in parallel by multiple attackers — and leaving legitimate operators of these systems to wonder why their outbound Internet data capacity is being consumed at much higher rates.

Additionally, it appears these devices are on relatively low-powered hardware, in terms of their traffic-generation capabilities. On an Internet where 100 Gbps links, dozens of CPU cores, and multi-threading capabilities have become commonplace, we can all be thankful this abusable service is not found on top-of-the-line hardware platforms capable of individually generating millions of packets per second, and running with thousands of parallelized threads.

Lastly, it is also good news that of the tens of thousands of these devices, which have been purchased and deployed historically by governments, commercial enterprises, and other organizations worldwide, a relatively small number of them have been configured in a manner that leaves them in this abusable state, and of those, many have been properly secured and taken offline from an attacker’s perspective.

Collateral impact

The collateral impact of TP-240 reflection/amplification attacks is potentially significant for organizations with Internet-exposed Mitel MiCollab and MiVoice Business Express collaboration systems that are abused as DDoS reflectors/amplifiers.

This may include partial or full interruption of voice communications through these systems, as well as additional service disruption due to transit capacity consumption, state-table exhaustion of NATs and stateful firewalls, and so on.

Wholesale filtering of all UDP/10074-sourced traffic by network operators may potentially overblock legitimate Internet traffic, and is therefore contraindicated.

TP-240 reflection/amplification DDoS attacks are sourced from UDP/10074 and destined for the UDP port of the attacker’s choice. This amplified attack traffic can be detected, classified, traced back, and safely mitigated using standard DDoS defense tools and techniques.

Flow telemetry and packet capture via open-source and commercial analysis systems can alert network operators and end customers of TP-240 reflection/amplification attacks.

Network access control lists (ACLs), flowspec, destination-based remotely triggered blackhole (D/RTBH), source-based remotely triggered blackhole (S/RTBH), and intelligent DDoS mitigation systems can be used to mitigate these attacks.

Network operators should perform reconnaissance to identify and facilitate remediation of abusable TP-240 reflectors/amplifiers on their networks and/or the networks of their customers.  Operators of Mitel MiCollab and MiVoice Business Express collaboration systems should proactively contact Mitel in order to receive specific remediation instructions from the vendor.

Organizations with business-critical public-facing Internet properties should ensure that all relevant network infrastructure, architectural, and operational Best Current Practices (BCPs) have been implemented, including situationally specific network access policies that only permit Internet traffic via required IP protocols and ports. Internet access network traffic to/from internal organizational personnel should be isolated from Internet traffic to/from public-facing Internet properties, and served via separate upstream Internet transit links.

DDoS defenses for all public-facing Internet properties and supporting infrastructure should be implemented in a situationally appropriate manner, including periodic testing to ensure that any changes to the organization’s servers/services/applications are incorporated into its DDoS defense plan.

It is imperative that organizations operating mission-critical public-facing Internet properties and/or infrastructure ensure that all servers/services/application/datastores/infrastructure elements are protected against DDoS attack, and are included in periodic, realistic tests of the organization’s DDoS mitigation plan. Critical ancillary supporting services such as authoritative and recursive DNS servers must be included in this plan.

Network operators should implement ingress and egress source address validation in order to prevent attackers from initiating reflection/amplification DDoS attacks.

All potential DDoS attack mitigation measures described in this document MUST be tested and customized in a situationally appropriate manner prior to deployment on production networks.

Mitigating factors

Operators of Internet-exposed TP-240-based Mitel MiCollab and MiVoice Business Express collaboration systems can prevent abuse of their systems to launch DDoS attacks by blocking incoming Internet traffic destined for UDP/10074 via access control lists (ACLs), firewall rules, and other standard network access control policy enforcement mechanisms.
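For example, a hypothetical rule on a Linux-based perimeter firewall in front of such a system (iptables syntax shown for illustration; adapt it to your own enforcement mechanism):

iptables -A INPUT -p udp --dport 10074 -j DROP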

Mitel have provided patched software versions that prevent TP-240-equipped MiCollab and MiVoice Business Express collaboration systems from being abused as DDoS reflectors/amplifiers by preventing exposure of the service to the Internet. Mitel customers should contact the vendor for remediation instructions.

Collateral impact to abusable TP-240 reflectors/amplifiers can alert network operators and/or end-customers to remove affected systems from “demilitarized zone” (DMZ) networks or Internet Data Centers (IDCs), or to disable relevant UDP port-forwarding rules that allow specific UDP/10074 traffic sourced from the public Internet to reach these devices, thereby preventing them from being abused to launch reflection/amplification DDoS attacks.

The amplified attack traffic is not fragmented, so there is no additional attack component consisting of non-initial fragments, as is the case with many other UDP reflection/amplification DDoS vectors.

Implementation of ingress and egress source-address validation (SAV; also known as anti-spoofing) can prevent attackers from launching reflection/amplification DDoS attacks.

Conclusion

Unfortunately, many abusable services that should not be exposed to the public Internet are nevertheless left open for attackers to exploit. This scenario is yet another example of real-world deployments not adhering to vendor guidance. Vendors can prevent this situation by adopting “safe by default” postures on devices before shipping.

Reflection/amplification DDoS attacks would be impossible to launch if all network operators implemented ingress and egress source-address validation (SAV, also known as anti-spoofing).  The ability to spoof the IP address(es) of the intended attack target(s) is required to launch such attacks. Service providers must continue to implement SAV in their own networks, and require that their downstream customers do so.

As is routinely the case with newer DDoS attack vectors, it appears that after an initial period of employment by advanced attackers with access to bespoke DDoS attack infrastructure, TP-240 reflection/amplification has been weaponized and added to the arsenals of so-called “booter/stresser” DDoS-for-hire services, placing it within the reach of the general attacker population.

Collaboration across the operational, research, and vendor communities is central to the continued viability of the Internet. The quick response to and ongoing remediation of this high-impact DDoS attack vector has only been possible as a result of such collaboration. Organizations with a vested interest in the stability and resiliency of the Internet should embrace and support cross-industry cooperative efforts as a core principle.

The combined efforts of the research and mitigation task force demonstrate that successful collaboration across industry peers to quickly remediate threats to availability and resiliency is not only possible, but is also increasingly critical for the continued viability of the global Internet.

Sources

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-26143/
https://www.mitel.com/en-ca/support/security-advisories/mitel-product-security-advisory-22-0001
https://www.cisa.gov/uscert/ncas/alerts/TA14-017A
https://www.senki.org/ddos-attack-preparation-workbook/
https://www.manrs.org/resources/
https://www.rfc-editor.org/info/bcp38
https://www.rfc-editor.org/info/bcp84
https://datatracker.ietf.org/doc/html/rfc7039

Research and mitigation task force contributors

Researchers from the following organizations have contributed to the findings and recommendations described in this document: Akamai SIRT, Cloudflare, Lumen Black Lotus Labs, NETSCOUT ASERT, TELUS, Team Cymru, and The Shadowserver Foundation.

In particular, the Mitigation Task Force would like to cite Mitel for their exemplary cooperation, rapid response, and ongoing participation in remediation efforts. Mitel quickly created and disseminated patched software, worked with their customers and partners to update affected systems, and supplied valuable expertise as the Task Force worked to formulate this document.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/887159/

Security updates have been issued by Debian (gif2apng and twisted), Mageia (golang, kernel, and webmin), openSUSE (chromium, cyrus-sasl, and opera), Red Hat (virt:rhel and virt-devel:rhel), Slackware (mozilla), SUSE (cyrus-sasl), and Ubuntu (glibc and redis).

International Women’s Day 2022

Post Syndicated from Sofía Celi original https://blog.cloudflare.com/international-womens-day-2022/


“I would venture to guess that Anon,
who wrote so many poems without signing them,
was often a woman.” – Virginia Woolf


Welcome to International Women’s Day 2022! Here at Cloudflare, we are happy to celebrate it with you! Our celebration is not only this blog post, but many events prepared for the month of March: our way of honoring Women’s History Month by showcasing women’s empowerment. We want to celebrate the achievements, ideas, passion and work that women bring to the world. We want to advocate for equality and to achieve gender parity. And we want to highlight the brilliant work that our women colleagues do every day. Welcome!

This is a time of celebration but also one to reflect on the current state. The global gender gap is not expected to close for another 136 years. This gap has also worsened due to the COVID-19 pandemic, which has negatively impacted the lives of women and girls by deepening pre-existing inequalities. Improving this state is a collective effort—we all need to get involved!

Who are we? Womenflare!

First, let’s introduce ourselves. We are Womenflare—Cloudflare’s Employee Resource Group (ERG) for all who identify as and advocate for women. We’re an employee-led group that is here to empower, represent, and support.

Our purpose is not only to celebrate women’s achievements but also to shed a light on inequalities. That is why for International Women’s Day 2022, we’re joining in on the #BreakTheBias theme throughout our month of events and activities:

We can break the bias in our communities.
We can break the bias in our workplaces.
We can break the bias in our schools, colleges, and universities.
Together, we can all break the bias –
on International Women’s Day (IWD) and beyond

What are some of our internal activities for this month?

Celebrating International Women’s Day

Internally, we are kicking off our celebration on March 8. We will be joined by several women from North Coast, a hip hop improv comedy group. We hope this fun and freestyle event will encourage participants to think about unconscious biases, breaking them down, and how they can get more involved in empowering the women around them.

Intersectionality and Allyship at Cloudflare

Following our kick-off celebrations, we’re hosting open discussions about intersectionality and allyship alongside some of our fellow Employee Resource Groups including Afroflare, Asianflare, Flarability, and Nativeflare. It’s important to us to include other ERGs in these conversations since the goal of empowerment, representation, and support is shared among us and can’t be done alone. And we want to pay closer attention to the layers that form a person’s social identity, creating compounding experiences of discrimination. “All inequality is not created equal,” says Kimberlé Crenshaw, the law professor who coined the term “intersectionality” in 1989. Understanding the way different inequalities play a role in a person’s life means understanding the history, systematic discrimination, and the non-uniformity of it.

Internal Leadership Panel

Last year, we brought together an internal panel of women leaders at Cloudflare to share their journeys and lessons learned. It was extremely well received, so we decided to build upon its success by inviting another group of internal women leaders to discuss their experiences and insights with us. Some important takeaways from these panel discussions have been the realization that most backgrounds and journeys are vastly different, paths to success are often rocky but rewarding, and perseverance, tenacity, and an open mind, often rule the day. What better way to learn from others and encourage more women to lead!

What can we all do?

Allyship is integral to systemic change. An ally is someone who recognizes unearned privileges in their lives and takes responsibility to end patterns of injustice. At Cloudflare, we’re working hard to build more diverse and equitable teams, as well as create and maintain an environment that is inclusive and welcoming. There are many actions you can take as an ally; some include:

  • Educating yourself: listen to the experiences of your women colleagues and work with them to understand their perspectives.
  • Amplifying women’s opinions and advocating for them: speak up for others and champion them when they need support and encouragement.
  • Taking action in the workplace: if you see inequality or discrimination happening, reach out to discuss further and understand what can be done.
  • Advocating for diversity: talk with your peers and leaders about the ways you can get involved with improving diversity, equity, and inclusion.

Celebrate International Women’s Day and Women’s Empowerment Month in your own creative ways! And all throughout the year, remember to empower women and to recognize them in such a way that their work is no longer anonymous. Join the #IWD2022 movement — #BreakTheBias this month and beyond!


Using Radar to Read Body Language

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/using-radar-to-read-body-language.html

Yet another method of surveillance:

Radar can detect you moving closer to a computer and entering its personal space. This might mean the computer can then choose to perform certain actions, like booting up the screen without requiring you to press a button. This kind of interaction already exists in current Google Nest smart displays, though instead of radar, Google employs ultrasonic sound waves to measure a person’s distance from the device. When a Nest Hub notices you’re moving closer, it highlights current reminders, calendar events, or other important notifications.

Proximity alone isn’t enough. What if you just ended up walking past the machine and looking in a different direction? To solve this, Soli can capture greater subtleties in movements and gestures, such as body orientation, the pathway you might be taking, and the direction your head is facing — aided by machine learning algorithms that further refine the data. All this rich radar information helps it better guess if you are indeed about to start an interaction with the device, and what the type of engagement might be.

[…]

The ATAP team chose to use radar because it’s one of the more privacy-friendly methods of gathering rich spatial data. (It also has really low latency, works in the dark, and external factors like sound or temperature don’t affect it.) Unlike a camera, radar doesn’t capture and store distinguishable images of your body, your face, or other means of identification. “It’s more like an advanced motion sensor,” Giusti says. Soli has a detectable range of around 9 feet — less than most cameras — but multiple gadgets in your home with the Soli sensor could effectively blanket your space and create an effective mesh network for tracking your whereabouts in a home.

“Privacy-friendly” is a relative term.

These technologies are coming. They’re going to be an essential part of the Internet of Things.
