Short Memory, Faithful Dog

Post Syndicated from Светла Енчева original https://toest.bg/kusa-pamet-vyarno-kuche/

This week, a character in the series "Лъжите в нас" ("The Lies in Us") on Nova TV brought up Kostov's violets. Although I remember Ivan Kostov's government, I had forgotten this former prime minister's attitude toward violets. Besides, the topic is some twenty years old. Lately, however, I have been noticing signs of a far shorter memory around me. In my own liberal-democratic "bubble", people believe that in the current situation of war in Ukraine, a GERB government would be preferable to the current four-party coalition that includes the BSP. Because it is precisely the BSP and President Rumen Radev (whose first-term candidacy was backed by the "centenarian" party) who are to blame for our country's hesitant stance and its refusal to send military aid to Ukraine.

Outside the liberal-democratic "bubble", the shift is even more pronounced. According to opinion polls, the past month has seen a downward trend in trust in "We Continue the Change", to the point where GERB is once again the most preferred political force in the country. Even if we are skeptical of the pollsters' findings, these shifts in public opinion are hard to ignore.

What accounts for the drop in support for the government and the rising popularity of GERB?

First (not necessarily in order of importance), Borisov's party has been hyperactive, speaking out publicly on every possible topic. GERB's strategy is actually very simple: every government action is criticized, the government is blamed for every problem, even a global one, and we are told that everything was better back when GERB was in power.

Second, the mainstream media willingly and, on the whole, uncritically relay GERB's countless appearances. This creates the impression that, although formally out of power, Borisov's party remains the most important political actor in the country. Recall how, for two days running, the most-watched television channels suspended their scheduled prime-time programming to broadcast the events surrounding the arrest of Borisov, Goranov, and Arnaudova.

Third, the attempts to "scrape out" GERB and the chief prosecutor from power (to borrow Slavi Trifonov's phrase), combined with the media hyperactivity of those being "scraped out", create the impression that they are victims of repression. Especially against the backdrop of the poorly communicated operation around the arrest of Borisov, Goranov, and Arnaudova, and the expressive temperament of Interior Minister Boyko Rashkov.

Fourth, President Rumen Radev has repeatedly been sharply critical of the government: on the topic of Macedonia, on the attitude toward the war in Ukraine, and on all sorts of smaller issues. And Radev's approval rating at the start of his second term is higher than that of any other political leader in the country.

Fifth, objective circumstances are doing Kiril Petkov's government no favors. The war in Ukraine and unexpectedly high inflation would be a test for any administration. Bulgaria also has a host of unreformed sectors, uneradicated corruption, and a prosecutor's office at war with the other branches of power. Every negative gets "charged to the account" of the current government.

Sixth, the government itself is an unstable constellation of four political entities (if we also count the formal mandate holders behind "We Continue the Change" and the three parties within "Democratic Bulgaria", the number of entities rises to eight). There are numerous incompatibilities among them: the BSP does not want confrontation with Russia, Democratic Bulgaria favors strict sanctions and immediate military aid for Ukraine, ITN and "We Continue the Change" are unwilling to compromise on their candidates for governor of the Bulgarian National Bank, and so on.

But would GERB's position on Russia really have been firmer than that of the current government? To answer that question, we should recall the relationship of the recent governing party, and of Boyko Borisov personally, with Vladimir Putin's regime.

Do you remember the puppy that Borisov gave Putin?

In 2010, a year after he first headed the government, Boyko Borisov gave Vladimir Putin a pet: a Bulgarian Shepherd dog. The puppy was named Yorgo. There is no record of the former prime minister giving an animal to any other head of state; only to his grandchildren. In the culture of many peoples, including Bulgaria and Russia, the dog is a symbol of loyalty; a cat or a parakeet would not carry the same political weight. Giving an animal is, moreover, a long-term obligation: the recipient has to care for it, and with that, to be constantly reminded of who gave it to him.

Three years later, Putin published photos of himself playing with the grown and visibly well-kept dog. But it was no longer called Yorgo, but Buffy: the name was changed after a contest whose results ignored the suggestions involving associations with Bulgaria in favor of an English-language option. Yorgo is, incidentally, a Greek name (the Bulgarian equivalent would be Gosho or Zhoro), but that is not the reason the pet was renamed. The act also carries symbolic meaning, something like: "You gave it to me, but it is now mine and does not bind me to you; your loyalty to me is no reason for me to be loyal to you."

In all of GERB's years in power, nothing was done to reduce Bulgaria's dependence on Russia.

Instead, the government poured all its passion into building the Turkish Stream gas pipeline. And three billion leva of taxpayers' money, as much as the F-16 fighter jets that the United States is offering to sell to Bulgaria, whose price is considered far too high. Borisov called the pipeline "Balkan Stream" in an unsuccessful attempt to lull its critics. In reality it is a Russian project, which is what the criticism was about. What benefit Bulgaria derives from it is unclear, because the gas passing through the pipes never actually enters our country: the plan is for it to go to Serbia at preferential prices. At present, Serbia is one of the few countries leaning toward Russia's side in the context of the war in Ukraine.

Bulgaria's dependence on Russia is especially strong in the field of propaganda. Here, too, GERB's rule made no effort to counter it. More than that, it even contributed to making it official, especially during Borisov's third government, in which VMRO was a coalition partner. That is when the campaigns against the Istanbul Convention, the Strategy for the Child, and the Social Services Act erupted, campaigns whose ultimate goal was to turn Bulgaria against Western European liberal democracy. Although these campaigns were also carried out with the support of evangelical fundamentalists, they would hardly have achieved much success without the machine of Russian propaganda.

Speaking of dependence and Russian propaganda, let us also recall that at the end of the GERB-VMRO government, Bulgaria imposed a veto on North Macedonia's EU membership. The conditions for lifting it were humiliating and practically impossible: Macedonians were all but required to admit that they are Bulgarians, that is, to accept that neither their identity nor their language is their own. Before the war in Ukraine began this was not so obvious, but it is becoming ever clearer that Bulgaria's stubborn resistance to its southwestern neighbor's EU membership serves Russian interests more than Bulgarian ones.

Since 2013, at least a hundred wealthy foreigners, some of them Russians, have received "golden passports".

Although citizenship against investment is granted by the Citizenship Commission headed by the vice president, the schemes through which it happens would not be possible without the participation, or at least the blessing, of various state structures. Bulgaria has been criticized by the European Commission over the "golden passports", and they are one of the reasons for the sanctioning of Delyan Peevski under the Magnitsky Act. And the prosecutor's office, which is close to GERB, stubbornly refuses, as is well known, to investigate Peevski.

Russian citizens in need of asylum, however, could hardly obtain it in Bulgaria under GERB's rule. The Russian opposition activist Evgeny Chupov, one of the few happy exceptions, eventually obtained legal status after the State Agency for Refugees withdrew its refusal as a result of the media attention to his case.

Nor did GERB do anything about the radical pro-Putin groups in the country.

During Borisov's rule, the state not only allowed processions in Bulgaria by the Kremlin-funded ultranationalist biker group the Night Wolves; in 2016 the police even arrested and physically assaulted people protesting against it. The Night Wolves, incidentally, have a Bulgarian chapter, whose Facebook page is almost entirely in Russian.

Borisov's government also did not ban the openly pro-Kremlin, anti-European, and antisemitic paramilitary formations such as BNO Shipka and the Vasil Levski Military Union. On the contrary, they gained popularity during the refugee crisis as "migrant hunters". These organizations are armed and have no qualms about threatening media outlets and slandering human rights organizations.

Under GERB, the poisoning of Emilian Gebrev was hushed up, and the investigation of the case was subsequently suspended.

The arms manufacturer Emilian Gebrev was poisoned in 2015 in a manner similar to the poisonings of Sergei Skripal and, later, Alexei Navalny. After nearly four years of silence on the case, then chief prosecutor Sotir Tsatsarov suggested that Gebrev must have eaten arugula contaminated with pesticides. It is worth recalling here that, speaking of Tsatsarov, former city prosecutor Nikolay Kokinov had told Boyko Borisov: "You chose him yourself."

In 2020, by then under Ivan Geshev, the prosecutor's office suspended the investigation into Gebrev's poisoning. Despite the well-founded suspicions that Russian agents, also responsible for the attempt on Skripal, had taken part in it. And just days after Navalny's poisoning. Only in April 2021 did the prosecutor's office "remember" that there was a Russian connection in the Gebrev case. This happened in the context of the parliamentary elections at the beginning of that month, which GERB formally won without being able to form a government. Against the backdrop of the strong public sentiment at the time against GERB and the chief prosecutor, this move can be interpreted as a desperate attempt to demonstrate that they are "on the right side of history".

Their actions against "Russian spies" should be understood in the same way.

The operations always take place in a particular context in which GERB and the prosecutor's office decide to play the Russia critics because they are settling political scores. Now, with the BSP in government and Rumen Radev again president, it is especially easy for them to strike a pro-European, Russia-critical pose. Like Hungarian Prime Minister Viktor Orbán, with whom he shares a mutual sympathy, Borisov is capable of playing out different attitudes toward Russia as the moment requires, but in critical situations he is the faithful dog Putin expects him to be.

That is why it is good not to have quite such a short memory. If we are even more mindful, we may recall that Boyko Borisov personally took part in the ethnic cleansing known as the "Revival Process". Two decades later, he claimed that the goals of the "Revival Process" had been fine, only the methods were... "muddled". We may also recall that he was a bodyguard of the already-deposed dictator Todor Zhivkov and his family. And that, despite his claims that after the changes he was "with the UDF", he was a member of the Bulgarian Communist Party and in 1991 even left the Interior Ministry because he refused to be depoliticized. Which did not stop him from later passing himself off as an anticommunist so that his party could be admitted to the European conservative family.

Whenever it again seems to us that, in the current situation, GERB is preferable to the BSP, let us simply fall silent for a moment... and remember.

Cover photo: Vladimir Putin with the dog Buffy in 2013 © Press Service of the President of Russia / Wikimedia

Source

Ekaterina Boncheva: "The Offshoots of State Security Are Like Cancerous Growths in the Body of Bulgaria"

Post Syndicated from Венелина Попова original https://toest.bg/ekaterina-boncheva-interview/

Ekaterina Boncheva is a journalist; between 1992 and 2006 she hosted political programs on the radio stations Free Europe and Nova Evropa and on the Evropa television channel. In 2007 she was elected by the National Assembly as a member of the Dossier Commission, of which she is still a member. On the occasion of the recently adopted amendments to the Dossier Act, Venelina Popova talks with Ekaterina Boncheva about the role of State Security in the Bulgarian Transition, about the lustration that never happened and its consequences, about the dirty secrets of the former regime, and about the network of its repressive services, which spanned the entire state.


With the amendments to the Dossier Act adopted at first reading, the affiliation with State Security of persons applying for public office will be announced every time. That was also the norm of the law until 2021, when the Supreme Administrative Court issued a ruling according to which the Dossier Commission has no competence to announce for a second time an affiliation with State Security that has already been established and made public. What is the significance of this amendment?

Fifteen years after the law began to be applied, the SAC tried, with this contradictory case law, effectively to censor the Commission's work and to restrict people's right to receive information of public importance. I say "contradictory case law" because this problem appeared somewhat unexpectedly, and with one very specific panel of the SAC.

After ten years of consistent judicial practice of observing the provision of the law which says that a person who holds public office or carries out public activity is announced as many times as the different positions they occupy, the SAC tried to cripple the law. It rode roughshod over a decision of the Constitutional Court that obliged us to announce persons in the manner laid down in the law. There is also another ruling, by the European Court of Human Rights of 15 February 2017 in the case of Anchev v. Bulgaria. It upheld the Commission's ten-year practice that persons with established affiliation with State Security are subject to checking and public disclosure every time they fall within the mandatory scope of the law, regardless of whether or not there is new evidence about them.

Here is a very telling example: the "honest" private businessman Valentin Mollov, owner of First Private Bank, has been announced as a credit millionaire by the Dossier Commission 13 times. Does it matter whether we show once how much money he stole and never returned to the state, figuratively speaking, or 13 times? The unsecured loans he took from the banks are our money. I hope parliament confirms at second reading the amendments adopted on Wednesday; otherwise citizens would only know that Mollov took money without collateral from a single bank. Even taking a loan from one bank and then not repaying it is a crime; imagine what it means with 13 banks. We have announced Velizar Enchev 17 times, Krasimir Karakachanov 13 times. We are talking about people who shape policies and are tied up in one way or another.

The amendment restores the fairness of the law. But why, in your view, was enough political and legislative will not mustered over three decades to adopt lustration provisions in the law that would stop the infiltration of former full-time State Security officers and their agents into all public spheres?

After all these years, I now think it was not so much a matter of will as of a desire to set this healing process called "lustration" in motion in Bulgaria as well. I do not want to generalize, and I must say that at the beginning of the Transition, UDF members of parliament introduced such a bill. At the time, President Petar Stoyanov tried to convince the parliamentary group that there should be no lustration, because it would prevent us from winning the communists over to our side. The president nevertheless signed the law, but the Constitutional Court, which included quite a few people with a past as agents, "cut it down". During President Rosen Plevneliev's term I raised the question of lustration again, and he told me: "Katya, there is a Constitutional Court decision." True, but any Constitutional Court decision can be put to a new vote and changed; I am not saying circumvented. I return to these events because they support my thesis that our political class had no desire to cut its umbilical cord to the communist regime.

Resistance to opening the State Security archives in Bulgaria is enormous, especially in BSP circles, and it has continued throughout the years of the Transition and to this day. On Wednesday we saw which groups voted against the amendments to the Dossier Act introduced by Democratic Bulgaria. How many times over these years were there attempts to shut off access to the State Security archives and to close down the Dossier Commission, which began work in 2007?

There were thirteen attempts over all these years to close the Commission or to sabotage its work. As for closing the archives, Simeon Saxe-Coburg-Gotha, through an amendment to the Classified Information Act, closed down the then Dossier Commission chaired by Metodi Andreev. But afterwards, under internal and external pressure, above all from the EU, our commission was created. It has not lived a carefree life either, because there have been numerous attempts to sabotage, cripple, and even close it; thank God, so far they have always failed.

In fifteen years, has the Commission managed to gather the archives from all government agencies and to read them? And did the services and institutions cooperate in providing information, as the law requires?

To borrow the Bulgarian saying, I will answer "with gentleness, with kindness, and with a little cudgeling", although it went without a thrashing. The truth is that not all the services handed over their archives willingly, but since they are obliged by law to do so, we managed to gather this enormous archive at the Commission. The only documents still not with us are those of the Military Archive in Veliko Tarnovo. Of course, we also have problems with the Bulgarian Orthodox Church and its senior clergy, which, in the person of the current patriarch, refuses to provide us with lists of the names of officials subject to checking. So I do not know where the faith in Bulgaria is...

Yes, the Bulgarian Orthodox Church was one of the first institutions captured by State Security, and the priests who refused to collaborate were subjected to inhuman abuse. The totalitarian state treated the other denominations in the country with similar repression.

That is so. The offshoots of State Security are like cancerous growths in the body of Bulgaria. And some of them are still living cells.

Can we estimate what portion of the files was destroyed by State Security before and after 10 November 1989, and can we assemble the puzzle called the Bulgarian Transition without them? In fact, as someone who has read many of the dirty secrets of the former regime, can you say to what extent the Transition in our country was dependent on State Security?

I cannot commit to an exact figure for the destroyed files; some say around 40%. But I can say with certainty that to date our archive holds about 15 km of paper documents and 2.5 million registration cards, which are the clearest proof of affiliation with State Security. We have announced more than 20,000 collaborators and already have 2.6 million pages of digitized documents. All of this gives us a clear picture of what we are dealing with and lets us form an image of State Security, which, of course, was entirely subordinate to the Bulgarian Communist Party, to the Soviet Union, and to the cousins from the KGB. So the puzzle has been assembled; some details remain, but whatever other documents reach us, they are unlikely to change the picture of this sinister machine of surveillance, persecution, and repression.

From the statistics you cite, it would appear that nearly every third adult Bulgarian during Todor Zhivkov's regime was connected to State Security, either as a full-time officer or as an agent.

I am not strong in mathematics, but I can say that the persons our commission announces under the law, those who have held public office or who make decisions and set policies in Bulgaria, are only the upper, visible layer of the iceberg. About those below, who are not subject to checking, we know nothing. But believe me, almost everyone has someone around them connected to the services. I want to stress that the agents who were recruited after 9 September 1944 under pressure, with threats to their lives, with prisons and camps, were themselves victims of the services. But they are a small percentage of State Security's agent network.

You did not answer my question: would the Transition have looked different without the active role of State Security?

Definitely. The entanglements of State Security (though not in the form in which it once existed) with politics, business, the media, and so on are very clear. That is why I say that one of the biggest mistakes of the Transition is the absence of lustration. We would not be having this conversation today if it had taken place in Bulgaria, and at the very beginning of the so-called Transition at that. That is also why the Dossier Act is so important, because it shows who is who in the state. When you trace the career of a person who holds high public office and is connected to State Security, as well as the people he surrounds himself with, you understand what kind of network the remnants of the communist secret services still form.

Did the full-time State Security officers not end up spared from condemnation, left in the shadow of the agent network, part of which was itself a victim of these services? And should there have been a different approach to disclosing the agents of the political police versus the people who worked in the intelligence services of the communist state, as some insisted?

The myth that the political police existed only in the Sixth Department of State Security is one of the talking points of the services themselves. As for full-time officers and collaborators in their various capacities, the law treats them equally. But in media and public terms the agents' files are more interesting, because they contain personal stories, whereas the files of the full-time officers show only their career development. Yet if that development is traced, and if you look at who, overall, became officers and employees of State Security, the very principles of these services become clear. They are shown very well in the book by Momchil Metodiev and Maria Dermendzhieva, "State Security: An Advantage by Inheritance". Then one can also see deep into the links between State Security and the KGB.

Yes, but we still do not know the KGB's agent network in our country.

True, we do not know the KGB's agent network, but we do know (and this always comes out in the Commission's decisions) the schools to which officers of the First Chief Directorate were sent, those of the GRU and the KGB, lasting ten months or more. That is also a thread along which this network can be traced.

Should the Dossier Commission continue to exist, and with what powers? And does the idea that you and I have been promoting for years, of creating an Institute for National Memory such as exist in most countries of the former Soviet bloc, enjoy public and political support?

The most logical development of the Commission's work is its transformation into an Institute for National Memory, and that is not only my wish but that of the entire Commission. It could have, say, two departments: one dealing with research and public outreach, which we already do, though not on that scale, and another that continues checking and announcing people connected to State Security and military intelligence.

Unfortunately, I am not optimistic that this can happen within this parliament's term, although there are four bills that I have sent to MPs from Democratic Bulgaria as well. With the current configuration in the National Assembly, the political will to back such a project will not be found. But one way or another, there will be a new composition of the Commission. The question is for the people who come after us to have the same attitude and the same motivation toward State Security and the archives of the former communist secret services that we have had over these 15 years. The result of our work is all the published documents, the 55 volumes of documents, and all the information provided to journalists and analysts. That work must not be wasted and in vain. See, even today State Security surfaces in every political scandal.

Cover photo: © Svetla Encheva

Source

Introducing global endpoints for Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/introducing-global-endpoints-for-amazon-eventbridge/

This post is written by Stephen Liedig, Sr Serverless Specialist SA.

Last year, AWS announced two new features for Amazon EventBridge that allow you to route events from any commercial AWS Region and across your AWS accounts. These features support a wide range of use cases, allowing you to easily implement global event delivery and replication scenarios.

From today, EventBridge extends this capability with global endpoints. Global endpoints provide a simpler and more reliable way for you to improve the availability and reliability of event-driven applications. The feature allows you to fail over event ingestion automatically to a secondary Region during service disruptions. Global endpoints also provide optional managed event replication, simplifying your event bus configuration and reducing the risk of event loss during any service disruption.

This blog post explains how to configure global endpoints in your AWS account, update your applications to publish events to the endpoint, and how to test endpoint failover.

How global endpoints work

Customers building multi-Region architectures today are building more resilience by using self-managed replication via EventBridge’s cross-Region capabilities. With this architecture, events are sent directly to an event bus in a primary Region and replicated to another event bus in a secondary Region.

Architecture

This event flow can be interrupted if there is a service disruption. In this scenario, event producers in the primary Region cannot PutEvents to their event bus, and event replication to the secondary Region is impacted.

To put more resiliency around multi-Region architectures, you can now use global endpoints. Global endpoints solve these issues by introducing two core service capabilities:

  1. A global endpoint is a managed Amazon Route 53 DNS endpoint. It routes events to the event buses in either Region, depending on the health of the service in the primary Region.
  2. There is a new EventBridge metric called IngestionToInvocationStartLatency. This exposes the time to process events from the point at which they are ingested by EventBridge to the point the first invocation of a target in your rules is made. This is a service-level metric measured across all of your rules and provides an indication of the health of the EventBridge service. Any extended periods of high latency over 30 seconds may indicate a service disruption.

These two features provide you with the ability to fail over event ingestion automatically to the event bus in the secondary Region. The failover is triggered via a Route 53 health check that monitors a CloudWatch alarm that observes the IngestionToInvocationStartLatency in the primary Region.

If the metric exceeds the configured threshold of 30 seconds consecutively for 5 minutes, the alarm state changes to “ALARM”. This causes the Route 53 health check state to become unhealthy, and updates the routing of the global endpoint. All events from that point on are delivered to the event bus in the secondary Region.
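
For illustration, the alarm behind that health check can be sketched with the AWS CLI along the following lines. This is a minimal sketch, not the exact template that the console deploys for you: the alarm name is a placeholder, and it assumes the IngestionToInvocationStartLatency metric is published in milliseconds in the AWS/Events namespace, so verify the unit and defaults against the generated CloudFormation template before relying on it.

# Hypothetical alarm mirroring the recommended defaults (30-second average latency for 5 minutes).
aws cloudwatch put-metric-alarm \
  --alarm-name eventbridge-ingestion-latency \
  --namespace AWS/Events \
  --metric-name IngestionToInvocationStartLatency \
  --statistic Average \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 30000 \
  --comparison-operator GreaterThanThreshold \
  --region us-east-1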

The diagram below illustrates how a global endpoint reroutes events from the event bus in the primary Region to the event bus in the secondary Region when the CloudWatch alarm triggers the failover of the Route 53 health check.

Rerouting events with global endpoints

Once events are routed to the secondary Region, you have a couple of options:

  1. Continue processing events by deploying the same solution that processes events in the primary Region to the secondary Region.
  2. Create an EventBridge archive to persist all events coming through the secondary event bus. EventBridge archives provide you with a type of “Active/Archive” architecture allowing you to replay events to the event bus once the primary Region is healthy again.
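
If you choose the archive option, the archive itself can be created ahead of time in the secondary Region. The following is a rough sketch using the create-archive command; the archive name, account ID, and retention period are placeholders.

# Archive every event that arrives on the secondary event bus.
aws events create-archive \
  --archive-name orders-bus-archive \
  --event-source-arn arn:aws:events:us-west-2:111122223333:event-bus/orders-bus \
  --retention-days 30 \
  --region us-west-2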

When the global endpoint alarm returns to a healthy state, the health check updates the endpoint configuration, and begins routing events back to the primary Region.

Global endpoints can be optionally configured to replicate events across Regions. When enabled, managed rules are created on your primary and secondary event buses that define the event bus in the other Region as the rule target.

Under normal operating conditions, events sent to the primary event bus are also sent to the secondary in near-real time to keep both Regions synchronized. When a failover occurs, consumers in the secondary Region have an up-to-date state of processed events, or can replay messages that were delivered to the secondary Region before the failover happened.

When the secondary Region is active, the replication rule attempts to send events back to the event bus in the primary Region. If the event bus in the primary Region is not available, EventBridge attempts to redeliver the events for up to 24 hours, per its default event retry policy. As this is a managed rule, you cannot change this. To retain events for longer, you can archive events ingested by the secondary event bus.

How do you identify whether an event has been replicated from another Region? Events routed via global endpoints have identical resource fields, which contain the Amazon Resource Name (ARN) of the global endpoint that routed the event to the event bus. The region field shows the origin of the event. In the following example, the event is sent to the event bus in the primary Region and replicated to the event bus in the secondary Region. The region field of the event in the secondary Region is us-east-1, showing that the event originated from the event bus in the primary Region.

If there is a failover, events are routed to the secondary Region and replicated to the primary Region. Inspecting these events, you would expect to see us-west-2 as the source Region.

Replicated events

The two preceding events are identical except for the id. Event IDs can change across API calls, so correlating events across Regions requires you to have an immutable, unique identifier. Consumers should also be designed with idempotency in mind. If you are replicating events, or replaying them from archives, this ensures that there are no side effects from duplicate processing.

Setting up a global endpoint

To configure a global endpoint, define two event buses: one in the "primary" Region (this is the same Region you configure the endpoint in) and one in a "secondary" Region. To ensure that events are routed correctly, the event bus in the secondary Region must have the same name, in the same account, as the primary event bus.

  1. Create two event buses in different Regions with the same name. This is quickly set up using the AWS Command Line Interface (AWS CLI).
    Primary event bus:
    aws events create-event-bus --name orders-bus --region us-east-1
    Secondary event bus:
    aws events create-event-bus --name orders-bus --region us-west-2
  2. Open the Amazon EventBridge console in the Region where you want to create the global endpoint. This aligns with your primary Region. Navigate to the new global endpoints page and create a new endpoint.
    EventBridge console
  3. In the Endpoint details panel, specify a name for your global endpoint (for example, OrdersGlobalEndpoint) and enter a description.
  4. Select the event bus in the primary Region, orders-bus.
  5. Select the secondary event bus by choosing the Region it was created in from the dropdown.
    Create global endpoint
  6. Select the Route 53 health check for triggering failover and recovery. If you have not created one before, choose New health check. This opens an AWS CloudFormation console to create the "LatencyFailuresHealthCheck" health check and CloudWatch alarm in your account. EventBridge provides a template with recommended defaults for a CloudWatch alarm that is triggered when the average latency exceeds 30 seconds for 5 minutes.
    Endpoint configuration
  7. Once the CloudFormation stack is deployed, return to the EventBridge console and refresh the dropdown list of health checks. Select the physical ID of the health check you created.
    Failover and recovery
  8. Ensure that event replication is enabled, and create the endpoint.
    Event replication
  9. Once the endpoint is created, it appears in the console. The global endpoint URL contains the EndpointId, which you must specify in PutEvents API calls to publish events to the endpoint.
    Endpoint listed in console

Testing failover

Once you have created an endpoint, you can test the configuration by creating "catch all" rules on the primary and secondary event buses. The simplest way to see events being processed is to create rules with a CloudWatch Logs log group as the target.
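
As a rough example, a catch-all rule with a log group target could be created along the following lines (names, account ID, and the empty-prefix event pattern are illustrative; note that when you create the rule outside the console you also need a CloudWatch Logs resource policy that allows EventBridge to write to the log group).

# Shown for the secondary Region; repeat the same setup in the primary Region.
aws logs create-log-group --log-group-name /aws/events/orders-catch-all --region us-west-2

aws events put-rule \
  --name orders-catch-all \
  --event-bus-name orders-bus \
  --event-pattern '{"source": [{"prefix": ""}]}' \
  --region us-west-2

aws events put-targets \
  --rule orders-catch-all \
  --event-bus-name orders-bus \
  --targets Id=catch-all-logs,Arn=arn:aws:logs:us-west-2:111122223333:log-group:/aws/events/orders-catch-all \
  --region us-west-2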

Testing global endpoint failover is accomplished by inverting the Route 53 health check. You can do this with the Route 53 API, CLI, or console. Using the console, open the Route 53 health checks landing page and edit the "LatencyFailuresHealthCheck" associated with your global endpoint. Check "Invert health check status" and save to update the health check.
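
If you prefer the command line, the same inversion can be toggled with the Route 53 CLI. This is a sketch; <health-check-id> is the physical ID of the health check created by the CloudFormation template.

# Force the health check to report Unhealthy so that the endpoint fails over.
aws route53 update-health-check --health-check-id <health-check-id> --inverted

# Revert to normal routing once testing is complete.
aws route53 update-health-check --health-check-id <health-check-id> --no-inverted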

Within a few minutes, the health check changes state from "Healthy" to "Unhealthy" and you see events flowing to the event bus in the secondary Region.

Configure health check

Using the PutEvents API with global endpoints

To use global endpoints in your applications, you must update your current PutEvents API call. All AWS SDKs have been updated to include an optional EndpointId parameter that you must set when publishing events to a global endpoint. Even though you are no longer putting events directly on the event bus, the EventBusName must be defined to validate the endpoint configuration.

PutEvents SDK support for global endpoints requires the AWS Common Runtime (CRT) library, which is available for multiple programming languages, including Python, Node.js, and Java:

https://github.com/awslabs/aws-crt-python
https://github.com/awslabs/aws-crt-nodejs
https://github.com/awslabs/aws-crt-java

To install the awscrt module for Python using pip, run:

python3 -m pip install boto3 awscrt

This example shows how to send an event to a global endpoint using the Python SDK:

import json
import boto3
from datetime import datetime
import uuid
import random

# Create an EventBridge client (the awscrt package must be installed for EndpointId support)
client = boto3.client('events')

detail = {
    "order_date": datetime.now().isoformat(),
    "customer_id": str(uuid.uuid4()),
    "order_id": str(uuid.uuid4()),
    "order_total": round(random.uniform(1.0, 1000.0), 2)
}

put_response = client.put_events(
    EndpointId="y6gho8g4kc.veo",  # the subdomain portion of the global endpoint URL
    Entries=[
        {
            'Source': 'com.aws.Orders',
            'DetailType': 'OrderCreated',
            'Detail': json.dumps(detail),
            'EventBusName': 'orders-bus'
        }
    ]
)

Event producers can suffer data loss if the PutEvents API call fails, even if you are using global endpoints. Global endpoints allow you to automate the rerouting of events to another event bus in another Region, but the health checks triggering the failover won't be invoked for at least 5 minutes. It's possible that your applications experience increased error rates for PutEvents operations before the failover occurs and events are routed to a healthy Region. To safeguard against message loss during this time, it's best practice to use exponential retry and backoff patterns and a durable store-and-forward capability at the producer level.
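
As one hedged example, most AWS SDKs and the AWS CLI honor the standard retry-configuration environment variables, which add retries with exponential backoff and jitter to PutEvents calls. This complements, but does not replace, a durable store-and-forward buffer at the producer.

# Enable adaptive retry mode with more attempts for the SDK used by the event producer.
export AWS_RETRY_MODE=adaptive
export AWS_MAX_ATTEMPTS=10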

Conclusion

This blog post shows how to create an EventBridge global endpoint to improve the availability and reliability of event ingestion for event-driven applications, and how to use the PutEvents API in the Python AWS SDK to publish events to a global endpoint.

To create a global endpoint using the API, see CreateEndpoint in the Amazon EventBridge API Reference. You can also create a global endpoint with AWS CloudFormation using an AWS::Events::Endpoint resource.

To learn more about EventBridge global endpoints, see the EventBridge Developer Guide. For more serverless learning resources, visit Serverless Land.

Git Credential Manager: authentication for everyone

Post Syndicated from Matthew John Cheetham original https://github.blog/2022-04-07-git-credential-manager-authentication-for-everyone/

Universal Git Authentication

“Authentication is hard. Hard to debug, hard to test, hard to get right.” – Me

These words were true when I wrote them back in July 2020, and they’re still true today. The goal of Git Credential Manager (GCM) is to make the task of authenticating to your remote Git repositories easy and secure, no matter where your code is stored or how you choose to work. In short, GCM wants to be Git’s universal authentication experience.

In my last blog post, I talked about the risk of proliferating “universal standards” and how introducing Git Credential Manager Core (GCM Core) would mean yet another credential helper in the wild. I’m therefore pleased to say that we’ve managed to successfully replace both GCM for Windows and GCM for Mac and Linux with the new GCM! The source code of the older projects has been archived, and they are no longer shipped with distributions like Git for Windows!

In order to celebrate and reflect this successful unification, we decided to drop the “Core” moniker from the project’s name to become simply Git Credential Manager or GCM for short.

Git Credential Manager

If you have followed the development of GCM closely, you might have also noticed we have a new home on GitHub in our own organization, github.com/GitCredentialManager!

We felt being homed under github.com/microsoft or github.com/github didn’t quite represent the ethos of GCM as an open, universal and agnostic project. All existing issues and pull requests were migrated, and we continue to welcome everyone to contribute to the project.

GCM Home

Interacting with HTTP remotes without the help of a credential helper like GCM is becoming more difficult with the removal of username/password authentication at GitHub and Bitbucket. Using GCM makes it easy, and with exciting developments such as using GitHub Mobile for two-factor authentication and OAuth device code flow support, we are making authentication more seamless.

Hello, Linux!

In the quest to become a universal solution for Git authentication, we’ve worked hard on getting GCM to work well on various Linux distributions, with a primary focus on Debian-based distributions.

Today we have Debian packages available to download from our GitHub releases page, as well as tarballs for other distributions (64-bit Intel only). Being built on the .NET platform means there should be a reduced effort to build and run anywhere the .NET runtime runs. Over time, we hope to expand our support matrix of distributions and CPU architectures (by adding ARM64 support, for example).
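
As a rough illustration, installing and wiring up GCM on a Debian-based distribution looks something like the following. The package path is a placeholder, and the executable may be named git-credential-manager-core or git-credential-manager depending on the release you download, so check the release notes for your version.

# Install the .deb downloaded from the GitHub releases page, then register GCM with Git.
sudo dpkg -i <path-to-gcm-package>.deb
git-credential-manager-core configure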

Due to the broad and varied nature of Linux distributions, it’s important that GCM offers many different credential storage options. In addition to GPG encrypted files, we added support for the Secret Service API via libsecret (also see the GNOME Keyring), which provides a similar experience to what we provide today in GCM on Windows and macOS.
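
For example, the backing store can be selected with a single Git configuration setting. The values below are the ones documented for GCM on Linux, but treat this as a sketch and confirm the setting names against the GCM docs for your version.

# Use the freedesktop.org Secret Service API (libsecret / GNOME Keyring)...
git config --global credential.credentialStore secretservice

# ...or GPG-encrypted files, which also work on headless machines.
git config --global credential.credentialStore gpg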

Windows Subsystem for Linux

In addition to Linux distributions, we also have special support for using GCM with Windows Subsystem for Linux (WSL). Using GCM with WSL means that all your WSL installations can share Git credentials with each other and the Windows host, enabling you to easily mix and match your development environments.

Easily mix and match your development environments

You can read more about using GCM inside of your WSL installations here.

Hello, GitLab

Being universal doesn’t just mean we want to run in more places, but also that we can help more users with whatever Git hosting service they choose to use. We are very lucky to have such an engaged community that is constantly working to make GCM better for everyone.

On that note, I am thrilled to share that through a community contribution, GCM now has support for GitLab.  Welcome to the family!

GCM for everyone

Look Ma, no terminals!

We love the terminal and so does GCM. However, we know that not everyone feels comfortable typing in commands and responding to prompts via the keyboard. Also, many popular tools and IDEs that offer Git integration do so by shelling out to the git executable, which means GCM may be called upon to perform authentication from a GUI app where there is no terminal(!)

GCM has always offered full graphical authentication prompts on Windows, but thanks to our adoption of the Avalonia project that provides a cross-platform .NET XAML framework, we can now present graphical prompts on macOS and Linux.


GCM continues to support terminal prompts as a first-class option for all prompts. We detect environments where there is no GUI (such as when connected over SSH without display forwarding) and instead present the equivalent text-based prompts. You can also manually disable the GUI prompts if you wish.

Securing the software supply chain

Keeping your source code secure is a critical step in maintaining trust in software, whether that be keeping commercially sensitive source code away from prying eyes or protecting against malicious actors making changes in both closed and open source projects that underpin much of the modern world.

In 2020, an extensive cyberattack was exposed that impacted parts of the US federal government as well as several major software companies. The US president’s recent executive order in response to this cyberattack brings into focus the importance of mechanisms such as multi-factor authentication, conditional access policies, and generally securing the software supply chain.

Store ALL the credentials

Git Credential Manager creates and stores credentials to access Git repositories on a host of platforms. We hold in the highest regard the need to keep your credentials and access secure. That’s why we always keep your credentials stored using industry standard encryption and storage APIs.

GCM makes use of the Windows Credential Manager on Windows and the login keychain on macOS.

In addition to these existing mechanisms, we also support several alternatives across supported platforms, giving you the choice of how and where you wish to store your generated credentials (such as GPG-encrypted credential files).

Store all your credentials

GCM can now also use Git’s git-credential-cache helper that is commonly built and available in many Git distributions. This is a great option for cloud shells or ephemeral environments when you don’t want to persist credentials permanently to disk but still want to avoid a prompt for every git fetch or git push.
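
Enabling this is again a configuration switch. The following is a sketch, assuming the credential.credentialStore and credential.cacheOptions settings documented by GCM.

# Keep credentials only in Git's in-memory credential cache daemon.
git config --global credential.credentialStore cache

# Optionally pass options through to git-credential-cache, for example a 15-minute timeout.
git config --global credential.cacheOptions "--timeout 900"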

Modern Windows authentication (experimental)

Another way to keep your credentials safe at rest is with hardware-level support through technologies like the Trusted Platform Module (TPM) or Secure Enclave. Additionally, enterprises wishing to make sure your device or credentials have not been compromised may want to enforce conditional access policies.

Integrating with these kinds of security modules or enforcing policies can be tricky and is platform-dependent. It's often easier for applications to hand over responsibility for the credential acquisition, storage, and policy enforcement to an authentication broker.

An authentication broker performs credential negotiation on behalf of an app, simplifying many of these problems, and often comes with the added benefit of deeper integration with operating system features such as biometrics.

Authentication broker diagram

I’m happy to announce that GCM has gained experimental support for brokered authentication (Windows-only at the moment)!

On Windows, the authentication broker is a component that was first introduced in Windows 10 and is known as the Web Account Manager (WAM). WAM enables apps like GCM to support modern authentication experiences such as Windows Hello and will apply conditional access policies set by your work or school.

Please note that support for the Windows broker is currently experimental and limited to authentication of Microsoft work and school accounts against Azure DevOps.

Click here to read more about GCM and WAM, including how to opt-in and current known issues.

Even more improvements

GCM has been a hive of activity in the past 18 months, with too many new features and improvements to talk about in detail! Here’s a quick rundown of additional updates since our July 2020 post:

  • Automatic on-premises/self-hosted instance detection
  • GitHub Enterprise Server and GitHub AE support
  • Shared Microsoft Identity token caches with other developer tools
  • Improved network proxy support
  • Custom TLS/SSL root certificate support
  • Admin-less Windows installer
  • Improved command line handling and output
  • Enterprise default setting support on Windows
  • Multi-user support
  • Better diagnostics

Thank you!

The GCM team would also like to personally thank all the people who have made contributions, both large and small, to the project:

@vtbassmatt, @kyle-rader, @mminns, @ldennington, @hickford, @vdye, @AlexanderLanin, @derrickstolee, @NN, @johnemau, @karlhorky, @garvit-joshi, @jeschu1, @WormJim, @nimatt, @parasychic, @cjsimon, @czipperz, @jamill, @jessehouwing, @shegox, @dscho, @dmodena, @geirivarjerstad, @jrbriggs, @Molkree, @4brunu, @julescubtree, @kzu, @sivaraam, @mastercoms, @nightowlengineer

Future work

While we’ve made a great deal of progress toward our universal experience goal, we’re not slowing down anytime soon; we’re still full steam ahead with GCM!

Our focus for the next period will be on iterating and improving our authentication broker support, providing stronger protection of credentials, and looking to increase performance and compatibility with more environments and uses.

AWS Security Profile: Philip Winstanley, Security Engineering

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-philip-winstanley-security-engineering/

In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services (AWS) Security and help keep our customers safe and secure. This interview is with Philip Winstanley, a security engineer and AWS Guardian. The Guardians program identifies and develops security experts within engineering teams across AWS, enabling these teams to use Amazon Security more effectively. Through the empowerment of these security-minded Amazonians called “Guardians,” we foster a culture of informed security ownership throughout the development lifecycle.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for just over three years now. I joined in Dublin, Ireland, and I’ve since transferred back to the UK, back to my home city of Manchester. I’m a security engineer on the service team for AWS Managed Services (AMS). We support customer workloads in the cloud and help customers manage them, optimize them, and keep them safe and secure.

How did you get started in the world of security?

I was a software developer for many years, and in building software I discovered that security is an integral part of delivering safe and secure solutions to my customers. That really sparked my interest in the security space, and I started researching and learning about all the different types of attacks that were out there, and learning about organized crime. That led me to work with the UK’s National Crime Agency, where I became a special officer, and to the United Kingdom Royal Airforce, where I worked in the cyber defense team. I managed to merge my technical knowledge with my law enforcement and military knowledge, and then bring them all together as the security engineer that I am today.

What are you currently working on that you’re excited about?

I have the joy of working with full-spectrum security, which is everything from protecting our environments to detecting risks within our environments to responding to those risks. But the bulk of my work is in helping our service teams build safe and secure software. Sometimes we call that AppSec (application security), sometimes we call it secure development. As part of that, I work with a group of volunteers and specialists within engineering teams that we call Guardians. They are our security specialists embedded within AWS service teams. These are people who champion security and make sure that everything we build meets a high security bar, which often goes beyond what we’re asked to do by compliance or regulation. We take it that extra mile. As Guardians, we push our development teams to continually raise the bar on security, privacy, compliance, and the confidentiality of customer data.

What are the most important aspects of being a Guardian?

A Guardian is there to help teams do the right thing when it comes to security—to contextualize knowledge of their team’s business and technology and help them identify areas and opportunities to improve security. Guardians will often think outside the box. They will come at things from a security point of view, not just a development point of view. But they do it within the context of what our customers need. Guardians are always looking around corners; they’re looking at what’s coming next. They’re looking at the risks that are out there, looking at the way environments are evolving, and trying to build in protections now for issues that will come down the line. Guardians are there to help our service teams anticipate and protect against future risks.

How have you as a Guardian improved the quality of security outcomes for customers?

Many of our customers are moving to the cloud, some for the first time, and they have high standards around data sovereignty, around the privacy of the data they manage. In addition to helping service teams meet the security bar, Guardians seek to understand our customers’ security and privacy requirements. As a result, our teams’ Guardians inform the development of features that not only meet our security bar, but also help our customers meet their security, privacy, and compliance requirements.

How have you helped develop security experts within your team?

I have the joy of working with security experts from many different fields. Inside Amazon, we have a huge community of security expertise, touching every single domain of security. What we try to do is cross-pollinate; we teach each other about our own areas of expertise. I focus on application security and work very closely with my colleagues who work in threat intelligence and incident response. We all work together and collaborate to raise the bar for each of us, sharing our knowledge, our skills, our expertise. We do this through training that we build, we do it through knowledge-sharing sessions where we get together and talk about security issues, we do it through being jointly introspective about the work that we’ve done. We will even do reviews of each other’s work and bar raise, adding our own specialist knowledge and expertise to that of our colleagues.

What advice would you give to customers who are considering their own Guardians program?

Security culture is something that comes from within an organization. It’s also something that’s best when it’s done from the ground up. You can’t just tell people to be secure, you have to find people who are passionate about security and empower them. Give them permission to put that passion into their work and give them the opportunity to learn from security training and experts. What you’ll see, if you have people with that passion for security, is that they’ll bring that enthusiasm into the work from the start. They’ll already care about security and want to do more of it.

You’re a self-described “disruptive anti-CISO.” What does that mean?

I wrote a piece on LinkedIn about what it really is, but I’ll give a shorter answer. The world of information security is not new—it’s been around for 20, 30 years, so all the thinking around security comes from a world of on-premises infrastructure. It’s from a time before the cloud even existed and unfortunately, a lot of the security thinking out there is still borne of that age. When we’re in a world of hyper-scaled environments, where we’re dealing with millions of resources, millions of endpoints, we can’t use that traditional thinking anymore. We can’t just lock everything in a box and make sure no one’s got access to it. Quite the opposite, we need to enable innovations, we need to let the business drive that creativity and produce solutions, which means security needs to be an enabler of creativity, not a blocker. I have a firm belief that security plays a part in delivering solutions, in helping solutions land, and making sure that they succeed. Security is not and should never be a gatekeeper to success. More often than not in industries, that was the position that security took. I believe in the opposite—security should enable business. I take that thinking and use it to help AWS customers succeed, through sharing our experience and knowledge with them to keep them safe and secure in the cloud.

What’s the thing you’re most proud of in your career?

When I was at the National Crime Agency, I worked in the dark web threat intelligence unit and some of my work was to combat child exploitation and human trafficking. The work I did there was some of the most rewarding I’ve ever done, and I’m incredibly proud of what we achieved. But it wasn’t just within that agency, it was partnering with other organizations, police forces around the world, and cloud providers such as AWS that combat exploitation and help move vulnerable children into safety. Working to protect victims of crime, especially the most vulnerable, helped me build a customer-centric view to security, ensuring we always think about our end customers and their customers. It’s all about people; we are here to protect and defend families and real lives, not just 1’s and 0’s.

If you had to pick an industry outside of security, what would you want to do?

I have always loved space and would adore working in the space sector. I’m fascinated by all of the renewed space exploration that’s happening at the moment, be it through Blue Origin or Space X or any of these other people out there doing it. If I could have my time again, or even if I could pivot now in my career, I would go and be a space man. I don’t need to be an astronaut, but I would want to contribute to the success of these missions and see humanity go out into the stars.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Philip Winstanley

Philip works in Security Engineering to help people, teams, and organizations succeed in the cloud. Philip brings his law enforcement and military experience, combined with technical expertise, to deliver innovative pragmatic security solutions.

Integrate Amazon Redshift native IdP federation with Microsoft Azure AD and Power BI

Post Syndicated from Maneesh Sharma original https://aws.amazon.com/blogs/big-data/integrate-amazon-redshift-native-idp-federation-with-microsoft-azure-ad-and-power-bi/

Amazon Redshift accelerates your time to insights with fast, easy, and secure cloud data warehousing at scale. Tens of thousands of customers rely on Amazon Redshift to analyze exabytes of data and run complex analytical queries.

As enterprise customers look to build their data warehouse on Amazon Redshift, they have many integration needs with the business intelligence (BI) tools they’re using. For customers who want to integrate Amazon Redshift with their existing identity provider (IdP) such as Microsoft Azure Active Directory (Azure AD) using BI tools and services such as Power BI Desktop and Power BI service, we have introduced a native IdP for Amazon Redshift to help you implement authentication and authorization for these tools in a seamless way.

Amazon Redshift native IdP simplifies the administration of managing identities and permissions. This feature provides native integration with Microsoft Azure AD, which you can use for authentication and authorization with tools like Power BI. It uses your existing IdP to simplify authentication and permission management. It does this by making it possible to share identity metadata with Amazon Redshift from your IdP. In this approach, an external IdP (such as Azure AD) issues an access token, which is passed to Amazon Redshift via a client, and then Amazon Redshift performs the token validation and claim extraction natively.

This post shows a step-by-step implementation of the Amazon Redshift native IdP setup with Azure AD, which demonstrates how to manage users and groups with an organizational directory, and how to federate into Amazon Redshift. You don’t need to create AWS Identity and Access Management (IAM) roles, policies, separate database users, or groups in Amazon Redshift with this setup.

Solution overview

Using an Amazon Redshift native IdP has the following benefits:

  • You can manage users and groups from a centralized IdP
  • Your users are automatically signed in to Amazon Redshift with their Azure AD accounts
  • You can automatically create Amazon Redshift roles with a namespace that represents external groups (such as Azure AD groups)
  • External user group membership is natively mirrored with Amazon Redshift roles and users

The general configuration steps of the Amazon Redshift native IdP approach are as follows:

  1. Register an application in Azure AD and set up groups.
  2. Collect Azure AD information for the Amazon Redshift IdP.
  3. Set up the IdP on Amazon Redshift.
  4. Set up Amazon Redshift permissions to external identities.
  5. Configure the client connection.

The following diagram illustrates the resulting solution.

To get authorized, the Power BI client sends an authentication request to the Azure enterprise application using Azure AD credentials. After verification, Azure sends a JSON web token (OAuth token) to the Power BI application. The Power BI application forwards the connection string with the OAuth token to Amazon Redshift. Amazon Redshift parses and validates the token, and requests group information from Azure AD. Upon receipt, Amazon Redshift automatically creates the user and roles, and performs the respective mapping.

Prerequisites

You need the following prerequisites to set up this solution:

  • A Microsoft Azure account that has an active subscription. You need an admin role to set up the application on Azure AD.
  • Power BI Desktop version 2.102.683.0 (64-bit) or later, downloaded and installed. In this example, we use a Windows environment.
  • The latest version of the Microsoft Enterprise/Standard Gateway installed.
  • An AWS account with an Amazon Redshift cluster. In this post, we connect Power BI Desktop and service with a publicly accessible Amazon Redshift cluster.

Register an application in Azure AD and set up groups

To set up the Azure application and group permission, complete the following steps:

  1. Sign in to the Azure portal with your Microsoft account.
  2. Navigate to the Azure Active Directory application.
  3. Under Manage, choose App registrations and New registration.
  4. For Name, enter an application name (for example, nativeoauthsetup).
  5. Keep the default settings for the rest of the fields.
  6. Choose Register to complete the initial application registration.
  7. On the newly created application Overview page, locate the client ID and tenant ID and note down these IDs in order to register the IdP in Amazon Redshift later.
  8. Under Manage in the navigation pane, choose API permissions.
  9. Choose Add a permission.
  10. Choose Microsoft Graph and then choose Application permissions.
  11. Search for directory and select the Directory.Read.All permission.
  12. Choose Add permissions.
  13. Choose Grant admin consent.
  14. In the popup box, choose Yes to grant the admin consent.

The status of the permission shows Granted for your tenant, with a green check mark.

  1. Under Manage in the navigation pane, choose Certificates & secrets.
  2. Choose Client secrets and choose New client secret.
  3. Enter a description and select an expiration for the secret, or specify a custom lifetime. We keep the Microsoft recommended default expiration of 6 months. Choose Add.
  4. Copy the secret value.

The secret value is displayed only once; after that, you can’t read it.

  1. On the Azure AD home page, under Manage in the navigation pane, choose Groups.
  2. Choose New group.
  3. In the New Group section, provide the required information.
  4. Choose No members selected and then search for the members.
  5. Select your members and choose Select. For this example, you can search for your own username and select it.

You can see the number of members in the Members section.

  1. Choose Create.

Collect Azure AD Information for Amazon Redshift IdP

Before we collect the Azure AD information, we need to identify the access token version of the application you created earlier. In the navigation pane, under Manage, choose Manifest, then view the accessTokenAcceptedVersion parameter: null and 1 indicate v1.0 tokens, and 2 indicates v2.0 tokens.

To configure your IdP in Amazon Redshift, collect the following parameters from Azure AD. If you don’t have these parameters, contact your Azure admin.

  • issuer – This is known as <Microsoft_Azure_issuer_value>. If you’re using the v1.0 token, use https://sts.windows.net/<Microsoft_Azure_tenantid_value>/. Currently, Power BI only uses v1.0 tokens. If you’re using the v2.0 token, use https://login.microsoftonline.com/<Microsoft_Azure_tenantid_value>/v2.0. To find your Microsoft Azure tenant ID, complete the following steps:
    • Sign in to the Azure portal with your Microsoft account.
    • Under Manage, choose App registrations.
    • Choose the Amazon Redshift application you created earlier.
    • Choose Overview in the left panel and, under Essentials, note down the tenant ID value.
  • client_id – This is known as <Microsoft_Azure_clientid_value> in the following sections. An example of a client ID is 5ab12345-1234-1a12-123a-11abc1a12ab1. To get your client ID value, locate the Amazon Redshift application you created earlier on the Azure portal; it’s listed in the Essentials section.
  • client_secret – This is known as <Microsoft_Azure_client_secret_value> in the following sections. An example of a client secret value is KiG7Q~FEDnE.VsWS1IIl7LV1R2BtA4qVv2ixB. To create your client secret value, refer to the steps in the previous section.
  • audience – This is known as <Microsoft_Azure_token_audience_value> in the following sections. With Power BI Desktop, you need to set the audience value as https://analysis.windows.net/powerbi/connector/AmazonRedshift.

Set up the IdP on Amazon Redshift

To set up the IdP on Amazon Redshift, complete the following steps:

  1. Log in to Amazon Redshift with a superuser user name and password using query editor v2 or any SQL client.
  2. Run the following SQL:
    CREATE IDENTITY PROVIDER <idp_name> TYPE azure 
    NAMESPACE '<namespace_name>' 
    PARAMETERS '{ 
    "issuer":"<Microsoft_Azure_issuer_value>", 
    "audience":["<Microsoft_Azure_token_audience_value>"],
    "client_id":"<Microsoft_Azure_clientid_value>", 
    "client_secret":"<Microsoft_Azure_client_secret_value>"
    }';

In our example, we use the v1.0 token issuer because as of this writing, Power BI only uses the v1.0 token:

CREATE IDENTITY PROVIDER oauth_standard TYPE azure
NAMESPACE 'aad'
PARAMETERS '{
"issuer":"https://sts.windows.net/e12b1bb1-1234-12ab-abc1-1ab012345a12/",
"audience":["https://analysis.windows.net/powerbi/connector/AmazonRedshift"],
"client_id":"5ab12345-1234-1a12-123a-11abc1a12ab1",
"client_secret":"KiG7Q~FEDnE.VsWS1IIl7LV1R2BtA4qVv2ixB"
}';
  1. To alter the IdP, use the following command (this new set of parameter values completely replaces the current values):
    ALTER IDENTITY PROVIDER <idp_name> PARAMETERS 
    '{
    "issuer":"<Microsoft_Azure_issuer_value>",
    "audience":["<Microsoft_Azure_token_audience_value>"], 
    "client_id":"<Microsoft_Azure_clientid_value>", 
    "client_secret":"<Microsoft_Azure_client_secret_value>"
    }';

  2. To view a single registered IdP in the cluster, use the following code:
    DESC IDENTITY PROVIDER <idp_name>;

  3. To view all registered IdPs in the cluster, use the following code:
    select * from svv_identity_providers;

  4. To drop the IdP, use the following command:
    DROP IDENTITY PROVIDER <idp_name> [CASCADE];

Set up Amazon Redshift permissions to external identities

The users, roles, and role assignments are automatically created in your Amazon Redshift cluster during the first login using your native IdP unless they were manually created earlier.

Create and assign permission to Amazon Redshift roles

In this step, we create a role in the Amazon Redshift cluster based on the groups that you created on the Azure AD portal.

The role name in the Amazon Redshift cluster looks like <namespace>:<azure_ad_group_name>, where the namespace is the one we provided in the IdP creation command and the group name is the one we specified when we were setting up the Azure application. In our example, it’s aad:rsgroup.

Run the following command in the Amazon Redshift cluster:

create role "<namespace_name>:<Azure AD groupname>";

For example:

create role "aad:rsgroup";

To grant permission to the Amazon Redshift role, enter the following command:

GRANT { { SELECT | INSERT | UPDATE | DELETE | DROP | REFERENCES } [,...]
 | ALL [ PRIVILEGES ] }
ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] }
TO role "<namespace_name>:<Azure AD groupname>";

Then grant relevant permission to the role as per your requirement. For example:

grant select on all tables in schema public to role "aad:rsgroup";

Create and assign permission to an Amazon Redshift user

This step is only required if you want to grant permission to an Amazon Redshift user instead of roles. We create an Amazon Redshift user that maps to an Azure AD user and then grant permission to it. If you don’t want to explicitly assign permission to an Amazon Redshift user, you can skip this step.

To create the user, use the following syntax:

CREATE USER "<namespace_name>:<Azure AD username>" PASSWORD DISABLE;

For example:

CREATE USER "aad:[email protected]" PASSWORD DISABLE;

We use the following syntax to grant permission to the Amazon Redshift user:

GRANT { { SELECT | INSERT | UPDATE | DELETE | DROP | REFERENCES } [,...]
 | ALL [ PRIVILEGES ] }
ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] }
TO "<namespace_name>:<Azure AD username>";

For example:

grant select on all tables in schema public to "aad:[email protected]";

Configure your client connection using an Amazon Redshift native IdP

In this section, we provide instructions to set up your client connection for either Power BI Desktop or the Power BI service.

Connect Power BI Desktop

In this example, we use Power BI Desktop to connect with Amazon Redshift using a native IdP. Use Power BI Desktop version 2.102.683.0 (64-bit) or later.

  1. In your Power BI Desktop, choose Get data.
  2. Search for the Amazon Redshift connector, then choose it and choose Connect.
  3. For Server, enter your Amazon Redshift cluster’s endpoint. For example, test-cluster.ct4abcufthff.us-east-1.redshift.amazonaws.com.
  4. For Database, enter your database name. In this example, we use dev.
  5. Choose OK.
  6. Choose Microsoft Account.
  7. Choose Sign in.
  8. Enter your Microsoft Account credentials.

When you’re connected, you can see the message You are currently signed in.

  1. Choose Connect.

Congratulations! You are signed in using the Amazon Redshift native IdP with Power BI Desktop. Now you can browse your data.

After that, you can create your own Power BI report on the desktop version and publish it to your Microsoft account. For this example, we created and published a report named RedshiftOAuthReport, which I refer to later in this post.

Connect Power BI service

Now, let’s connect a Power BI gateway with Amazon Redshift using a native IdP. Before proceeding with the setup below, make sure you have downloaded and installed the latest version of the Microsoft Enterprise/Standard Gateway.

  1. Open the Power BI web application and sign in if necessary.

You can see the RedshiftOAuthReport report that we created earlier.

  1. In the navigation pane, under Datasets, choose the menu icon (three dots) next to the report name and then choose Settings.
  2. Enable Gateway connection on the settings page.
  3. Click the arrow on the right side and select Manually add to gateway.

  4. In the Data Source Settings section, enter the appropriate values:
    1. For Data Source Name, enter a name.
    2. For Data Source Type, choose Amazon Redshift.
    3. For Server, enter your Amazon Redshift cluster’s endpoint.
    4. For Database, enter your database name (for this post, we use dev).
    5. For Authentication Method, choose OAuth2.
  5. Choose Edit credentials.
  6. In the pop-up box, choose Sign in.
  7. Enter your Microsoft account credentials and follow the authentication process.
  8. After the authentication, choose Add on the Data Source Settings page.
  9. Make sure that Gateway connection is enabled. If not, enable it.
  10. Select your gateway from the gateway list.
  11. On the Maps to menu, choose your data source.
  12. Choose Apply.

Congratulations! You have completed the Amazon Redshift native IdP setup with Power BI web service.

Best practices with Amazon Redshift native IdP

  • Pre-create the Amazon Redshift roles based on the groups you have created on the Azure AD portal.
  • Assign permissions to Amazon Redshift roles instead of to each individual external user. This provides a smoother end-user experience, because users have all the required permissions when they log in using the native IdP.

Troubleshooting

If your connection didn’t work, consider the following:

  • Enable logging in the driver. For instructions, see Configure logging.
  • Use Amazon Redshift JDBC driver version 2.1.0.4 or later, which supports Amazon Redshift native IdP authentication.
  • If you’re getting errors while setting up the application on Azure AD, make sure you have admin access.
  • If you can authenticate via the SQL client but get a permission issue or can’t see objects, grant the relevant permission to the role, as detailed earlier in this post.
  • If you get the error “claim value does not match expected value,” make sure you provided the correct parameters during Amazon Redshift IdP registration.
  • Check the stl_error or stl_connection_log system views on the Amazon Redshift cluster for authentication failures, as in the sample query after this list.
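
For example, the following query is a minimal sketch of how you might review recent connection attempts; the column list assumes the standard stl_connection_log columns, so adjust it to your cluster if needed:

-- Review recent connection events, newest first
select recordtime, username, dbname, authmethod, event
from stl_connection_log
order by recordtime desc
limit 20;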

Summary

In this post, we covered the step-by-step process of integrating Amazon Redshift with Azure AD and Power BI Desktop and web service using Amazon Redshift native IdP federation. The process consisted of registering an Azure application, creating Azure AD groups, setting up the Amazon Redshift IdP, creating and assigning permission to Amazon Redshift roles, and finally configuring client connections.

For more information about Amazon Redshift native IdP federation, see the Amazon Redshift documentation.

If you have questions or suggestions, please leave a comment.


About the Authors

Maneesh Sharma is a Senior Database Engineer at AWS with more than a decade of experience designing and implementing large-scale data warehouse and analytics solutions. He collaborates with various Amazon Redshift Partners and customers to drive better integration.

Ilesh Garish is a Software Development Engineer at AWS. His role is to develop connectors for Amazon Redshift. Prior to AWS, he built database drivers for the Oracle RDBMS, TigerLogic XDMS, and OpenAccess SDK. He also worked in the database internal technologies at San Francisco Bay Area startups.

Debu Panda is a Senior Manager, Product Management at AWS. He is an industry leader in analytics, application platform, and database technologies, and has more than 25 years of experience in the IT world.

Sergey Konoplev is a Senior Database Engineer on the Amazon Redshift team at AWS. Sergey has been focusing on automation and improvement of database and data operations for more than a decade.

Rust 1.60.0 released

Post Syndicated from original https://lwn.net/Articles/890634/

Version 1.60.0 of the Rust language is available. Changes include coverage-testing improvements, the return of incremental compilation, and changes to the Instant type:

Prior to 1.60, the monotonicity guarantees were provided through
mutexes or atomics in std, which can introduce large performance
overheads to Instant::now(). Additionally, the panicking behavior
meant that Rust software could panic in a subset of environments,
which was largely undesirable, as the authors of that software may
not be able to fix or upgrade the operating system, hardware, or
virtualization system they are running on.

Journey to Adopt Cloud-Native Architecture Series #5 – Enhancing Threat Detection, Data Protection, and Incident Response

Post Syndicated from Anuj Gupta original https://aws.amazon.com/blogs/architecture/journey-to-adopt-cloud-native-architecture-series-5-enhancing-threat-detection-data-protection-and-incident-response/

In Part 4 of this series, Governing Security at Scale and IAM Baselining, we discussed building a multi-account strategy and improving access management and least privilege to prevent unwanted access and to enforce security controls.

As a refresher from previous posts in this series, our example e-commerce company’s “Shoppers” application runs in the cloud. The company experienced hypergrowth, which posed a number of platform and technology challenges, including enforcing security and governance controls to mitigate security risks.

With the pace of new infrastructure and software deployments, we had to ensure we maintain strong security. This post, Part 5, shows how we detect security misconfigurations, indicators of compromise, and other anomalous activity. We also show how we developed and iterated on our incident response processes.

Threat detection and data protection

With our newly acquired customer base from hypergrowth, we had to make sure we maintained customer trust. We also needed to detect and respond to security events quickly to reduce the scope and impact of any unauthorized activity. We were concerned about vulnerabilities on our public-facing web servers, accidental sensitive data exposure, and other security misconfigurations.

Prior to hypergrowth, application teams scanned for vulnerabilities and maintained the security of their applications. After hypergrowth, we established a dedicated security team and identified tools to simplify the management of our cloud security posture. This allowed us to easily identify and prioritize security risks.

Use AWS security services to detect threats and misconfigurations

We use the following AWS security services to simplify the management of cloud security risks and reduce the burden of third-party integrations. This also minimizes the amount of engineering work required by our security team.

Detect threats with Amazon GuardDuty

We use Amazon GuardDuty to keep up with the newest threat actor tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs).

GuardDuty saves us time and reduces complexity, because we don’t have to continuously engineer detections for new TTPs and IOCs for static events and machine-learning-based detections. This allows our security analysts to focus on building runbooks and quickly responding to security findings.

Discover sensitive data with Amazon Macie for Amazon S3

To host our external website, we use a few public Amazon Simple Storage Service (Amazon S3) buckets with static assets. We don’t want developers to accidentally put sensitive data in these buckets, and we wanted to understand which S3 buckets contain sensitive information, such as financial or personally identifiable information (PII).

We explored building a custom scanner to search for sensitive data, but maintaining the search patterns was too complex. It was also costly to continuously re-scan files each month. Therefore, we use Amazon Macie to continuously scan our S3 buckets for sensitive data. After Macie makes its initial scan, it only scans new or updated objects in those S3 buckets, which reduces our costs significantly. We added filter rules to exclude larger files, used S3 prefixes to scan only the required objects, and set a sampling rate to further optimize the cost of scanning large S3 buckets (in our case, buckets greater than 1 TB).

Scan for vulnerabilities with Amazon Inspector

Because we use a wide variety of operating systems and software, we must scan our Amazon Elastic Compute Cloud (Amazon EC2) instances for known software vulnerabilities, such as Log4J.

We use Amazon Inspector to run continuous vulnerability scans on our EC2 instances and Amazon Elastic Container Registry (Amazon ECR) container images. With Amazon Inspector, we can continuously detect if our developers are deploying and releasing vulnerable software on our EC2 instances and ECR images without setting up a third-party vulnerability scanner and installing additional endpoint agents.

Aggregate security findings with AWS Security Hub

We don’t want our security analysts to arbitrarily act on one security finding over another. This is time-consuming and does not properly prioritize the highest risks to address. We also need to track ownership, view progress of various findings, and build consistent responses for common security findings.

With AWS Security Hub, our analysts can seamlessly prioritize findings from GuardDuty, Macie, Amazon Inspector, and many other AWS services. Our analysts also use Security Hub’s built-in security checks and insights to identify AWS resources and accounts that have a high number of findings and act on them.

Setting up the threat detection services

We set up these services using a delegated administrator configuration and aggregate their findings in a central security tooling account (see Figures 1 and 2).

Our security analysts use Security Hub-generated Jira tickets to view, prioritize, and respond to all security findings and misconfigurations across our AWS environment.

Through this configuration, our analysts no longer need to pivot between various AWS accounts, security tool consoles, and Regions, which makes the day-to-day management and operations much easier. Figure 1 depicts the data flow to Security Hub.

Aggregation of security services in security tooling account

Figure 1. Aggregation of security services in security tooling account

Delegated administrator setup

Figure 2. Delegated administrator setup

Incident response

Before hypergrowth, there was no formal way to respond to security incidents. To prevent future security issues, we built incident response plans and processes to quickly address potential security incidents and minimize the impact and exposure. Following the AWS Security Incident Response Guide and NIST framework, we adopted the following best practices.

Playbooks and runbooks for repeatability

We developed incident response playbooks and runbooks for repeatable responses for security events that include:

  • Playbooks for more strategic scenarios and responses, based on some of the publicly available AWS sample playbooks.
  • Runbooks that provide step-by-step guidance for our security analysts to follow in case an event occurs. We used Amazon SageMaker notebooks and AWS Systems Manager Incident Manager runbooks to develop repeatable responses for pre-identified incidents, such as suspected command and control activity on an EC2 instance.

Automation for quicker response time

After developing our repeatable processes, we identified areas where we could accelerate responses to security threats by automating the response. We used the AWS Security Hub Automated Response and Remediation solution as a starting point.

By using this solution, we didn’t need to build our own automated response and remediation workflow. The code is also easy to read, repeat, and centrally deploy through AWS CloudFormation StackSets. We used some of the built-in remediations like disabling active keys that have not been rotated for more than 90 days, making all Amazon Elastic Block Store (Amazon EBS) snapshots private, and many more. With automatic remediation, our analysts can respond quicker and in a more holistic and repeatable way.

Simulations to improve incident response capabilities

We implemented quarterly incident response simulations. These simulations test how well prepared our people, processes, and technologies are for an incident. We included some cloud-specific simulations like an S3 bucket exposure and an externally shared Amazon Relational Database Service (Amazon RDS) snapshot to ensure our security staff are prepared for an incident in the cloud. We use the results of the simulations to iterate on our incident response processes.

Conclusion

In this blog post, we discussed how to prepare for, detect, and respond to security events in an AWS environment. We identified security services to detect security events, vulnerabilities, and misconfigurations. We then discussed how to develop incident response processes through building playbooks and runbooks, performing simulations, and automation. With these new capabilities, we can detect and respond to a security incident throughout hypergrowth.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!


Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/04/07/lessons-in-iot-hacking-how-to-dead-bug-a-bga-flash-memory-chip/

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Dead-bugging — what is that, you ask? The concept comes from the idea that a memory chip, once it’s flipped over so you can attach wires to it, looks a little like a dead bug on its back.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

So why would we do this for the purposes of IoT hacking? The typical reason is if you want to extract the memory from the device, and you either don’t have a chip reader socket for that chip package type or your chip reader and socket pinouts don’t match the device.

I encounter this issue regularly with Ball Grid Array (BGA) memory devices. BGA devices don’t have legs like the chip shown above, but they do have small pads on the bottom, with small solder balls for attaching the device to a circuit board. The following BGA chip has 162 of these pads — here it is placed on a penny for size comparison.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Sometimes, I encounter memory chips and don’t have a socket for attaching it to my chip reader. Sourcing the correct socket could take months, often from China, and I need to extract the data today. Other times, it’s just not cost-effective to purchase one of these sockets for my lab because I don’t encounter that chip package type very often. However, I do encounter the chip package type shown above all the time on embedded Multi Chip Packages (eMCP), and I have a chip reader for that device type.

Unfortunately, further research on this flash memory chip revealed that it is a Multi-Chip Package (MCP), meaning it does not have a built-in embedded controller, so my chip readers can’t interact with it. Also, I couldn’t find a chip reader socket that was even available to support this. This is where a little research and the dead-bugging method came in handy.

Getting started

The first step was to track down a datasheet for this Macronix memory chip, MX63U1GC12HA. Once I located the datasheet, I searched it to identify key characteristics of the chip that would help me match it to another chip package type, which I could target with my chip reader, an RT809H.

Although this MCP chip package has 162 pads on the bottom, most of those aren’t necessary for us to be able to access the flash memory. MCP packages contain both RAM and NAND Flash memory, so I only needed to find the pads associated with the NAND flash along with ground and power connection.

Next, I identified the correct chip type using the datasheet and the identification number MX63U1GC12HA. Here’s what the components of that number mean:

  • MX = Macronix
  • 63 = NAND + LPDRAM
  • U = NAND Voltage: 1.8V
  • 1G = 1Gig NAND Density
  • C = x8 Bus

Next, the NAND flash pads I needed to identify and connect to were:

  • I/O 0-7 = Data Input/Output x8
  • CLE = Command Latch Enable
  • ALE = Address Latch Enable
  • CE# = Chip Enable
  • WE# = Write Enable
  • RE# = Read Enable
  • WP# = Write Protect
  • R/B# = Ready / Busy Out
  • VCC = Voltage
  • VSSm = Ground
  • PT = Chip Protection Enable

With the datasheet, I also identified the above listed connection on the actual chip pad surface.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Typically, the hardest part is soldering the wires to these pads. This is the part that often scares most people away, but it looks harder than it really is. To avoid making it any harder than it has to be, I recommend going light on the coffee that morning – a recommendation I often don’t follow myself, which I end up regretting.

I have found one trick that works well to make attaching wires easier. This adds an extra step to the process but will speed things up later and remove much of the frustration. I recommend first attaching BGA balls to pads you need to attach wires to. Since the pads on this MCP chip are only 0.3 mm, I recommend using a microscope. I typically lay the balls by hand — once flux is placed on the chip surface, it’s simple to move the balls onto the pads one at a time and have them stay in place. Of course, this can also be done with solder paste and stencil. So, pick your favorite poison.

Once the balls have been placed on the correct pads, I place the chip in an InfraRed (IR) reflow oven to fix the balls to the pads. The lead-based BGA balls I use are Sn63/Pb37 and should melt at 183°C or 361°F.  I use the following temperature curve set on my IR oven, which I determined using a thermal probe along with some trial-and-error methods. During the reflow process, it’s easy to accidentally damage a chip by overheating it, so take caution. My curve tops out just above 200°C, which has worked well, and I have yet to damage the chips using this curve.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Once the oven has run through its cycle and the chip has cooled down, I clean the chip with alcohol to remove any remaining flux. If all goes well with the reballing process, the chip should have balls attached at each of the required locations, as shown below.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Attaching the wires

The next part is attaching wires to each of these pads. The wire I use for this is 40 gauge magnet wire, which is small enough to be attached to pads that are often .25 to .35 mm in size. This magnet wire is insulated with a thin coat of clear enamel, which can be problematic when soldering it to very small pads while trying to keep the heat at a reasonable level. To resolve this issue, I burn the enamel insulation away and also coat the end of the wire with a thin coat of solder during that process. To do this, I melt solder onto the end of my solder iron and then stick the end of the magnet wire into the ball of solder on the end of the iron. This method removes the enamel insulation and tins the end of the wire, as shown below.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Once the magnet wire has been tinned, I next cut off the excess tinned area with wire cutters. How much you clip off depends on how big the pads are on the chip you’re attaching it to. The goal is to leave enough to properly solder it but not enough overhanging that could cause it to electrically short to other pads.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

By pre-tinning the wire and adding solder balls to the chip pads, the process of attaching the wires becomes much quicker and less frustrating. To attach the wires, I take the tinned magnet wire and place a small amount of flux on the tinned area. Then, I push the wire against the solder ball on the chip pad I am attaching it to, and with the hot solder iron, I just barely touch the solder ball on the pad – instantly, the wire is attached. I use a micro-tip solder iron and set the heat high, so it is instant when I do this process. An example of this is shown below:

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

For the MX63U1GC12HA MCP chip, I used this process to attach all 17 of the needed wires, as shown below, and then held them in place using E6000 brand glue to prevent accidentally knocking the wires loose from mechanical stress on the solder joints.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Reading the chip

Next, it’s time to figure out how to read this chip to extract the firmware data from it. First, we need to attach the 17 wires to the chip reader. To do this, I custom-built a 48-pin Zero insertion force (ZIF) plug with screw terminals that I could attach to the ZIF socket of my RT809H chip programmer. This jig allows each wire to be attached via the screw terminals to any of the 48 pins as needed.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

How we wire up a dead-bugged memory chip for reading depends on several things.

  • Do we have a datasheet?
  • Does the chip we are dead-bugging come in other package styles?
  • Does the chip reader support the chip we have, and we just don’t have the correct socket?
  • Does the manufacturer of our chip produce an unrelated chip that has a similar memory size, bus width, and layout?

Since I didn’t have a chip reader that supports this 162 BGA MCP device, I started looking for another Macronix chip that:

  • Had 48 pins or fewer, so I could wire it up to my chip reader
  • Was a NAND Single Level Cell (SLC) device
  • Had 1Gb density
  • Had an 8-bit bus
  • Had an operational voltage of 1.8V

After a little time spent Googling and digging through several different datasheets, I found an MX30UF1G18AC-TI, which comes in a 48-TSOP package and appeared to match the key characteristics I was looking for.

Here’s what the name MX30UF1G18AC-TI tells us:

  • MX = Macronix
  • 30 = NAND
  • U = 1.7V to 1.95V
  • F = SLC
  • 1G= 1G-bit
  • 18A= 4-bit ECC with standard feature, x8

The diagrams found in the MX30UF1G18AC datasheet showed the pinout for the TSOP48 NAND memory chip. Using that data, I was able to match each of the required pins to the 162 BGA MCP MX63U1GC12HA so I could correctly wire each connection to the 48-pin ZIF socket for my RT809H chip programmer.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Once all of the connecting wires were properly connected to the screw terminals of my ZIF socket, I selected the MX30UF1G18AC chip from the drop-down on the chip programmer and clicked “read.” As expected, the chip programmer first queried the chip for its ID. If the ID does not match, it will prompt you with “Chip ID does not match,” as shown below.

In this case, I selected “Ignore,” and the device successfully extracted the data from the NAND flash chip. Some chip readers allow you to turn this check off before attempting to read the chip. Also, if the chip you’re reading differs only in package style, the chip ID will probably match.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

The perfect solution is always to have all the proper equipment needed to read all memory chips you encounter, but very few pockets are that deep — or maybe the correct socket is months out for delivery, and you need the data from the chip today. In those cases, having the skills to do this work is important.  

I have successfully used this process in a pinch many times to extract firmware from chips when I didn’t have the proper sockets at hand – and in some cases, I didn’t have full datasheets either. If you have not done this, I recommend giving it a try. Expand those soldering skills, and build out test platforms and methods to further simplify the process. Eventually, you may need to use this method, and it’s always better to be prepared.


Simplify management of database privileges in Amazon Redshift using role-based access control

Post Syndicated from Milind Oke original https://aws.amazon.com/blogs/big-data/simplify-management-of-database-privileges-in-amazon-redshift-using-role-based-access-control/

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. With Amazon Redshift, you can analyze all your data to derive holistic insights about your business and your customers. One of the challenges with security is that enterprises don’t want to have a concentration of superuser privileges amongst a handful of users. Instead, enterprises want to design their overarching security posture based on the specific duties performed via roles and assign these elevated privilege roles to different users. By assigning different privileges to different roles and assigning these roles to different users, enterprises can have more granular control of elevated user access.

In this post, we explore the role-based access control (RBAC) features of Amazon Redshift and how you can use roles to simplify managing the privileges required by your end users. We also cover new system views and functions introduced alongside RBAC.

Overview of RBAC in Amazon Redshift

As a security best practice, it’s recommended to design security by applying the principle of least privileges. In Amazon Redshift, RBAC applies the same principle to users based on their specific work-related role requirements, regardless of the type of database objects involved. This granting of privileges is performed at a role level, without the need to grant permissions for the individual user or user groups. You have four system-defined roles to get started, and can create additional, more granular roles with privileges to run commands that used to require the superuser privilege. With RBAC, you can limit access to certain commands and assign roles to authorized users. And you can assign object-level as well as system-level privileges to roles across Amazon Redshift native objects.

System-defined roles in Amazon Redshift

Amazon Redshift provides four system-defined roles that come with specific privileges. These can’t be altered or customized, but you can create your own roles as required. The system-defined roles use the sys: prefix, and you can’t use this prefix for the roles you create.

The following list summarizes the roles and their privileges.

  • sys:operator: Can access catalog or system tables, and analyze, vacuum, or cancel queries.
  • sys:dba: Can create schemas, create tables, drop schemas, drop tables, truncate tables, create or replace stored procedures, drop procedures, create or replace functions, create or replace external functions, create views, and drop views. Additionally, this role inherits all the privileges from the sys:operator role.
  • sys:superuser: Has the same privileges as the Amazon Redshift superuser.
  • sys:secadmin: Can create users, alter users, drop users, create roles, drop roles, and grant roles. This role can have access to user tables only when the privilege is explicitly granted to the role.

System privileges

Amazon Redshift also adds support for system privileges that can be granted to a role or a user. A system privilege allows admins to grant a limited set of privileges to a user, such as the ability to create and alter users. These system-defined privileges are immutable and can’t be altered, removed, or added to.
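
As a minimal sketch of this idea (the role name user_admin_role and the grantee dbadmin are hypothetical, and the full set of grantable system privileges is described in the Amazon Redshift documentation), a superuser could delegate user management like this:

-- Create a custom role and grant it only user-management system privileges
create role user_admin_role;
grant create user, alter user, drop user to role user_admin_role;

-- Any user granted this role can now manage users without being a superuser
grant role user_admin_role to dbadmin;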

Create custom roles for RBAC in Amazon Redshift

To further granularize the system privileges being granted to users to perform specific tasks, you can create custom roles that authorize users to perform those specific tasks within the Amazon Redshift cluster.

RBAC also supports nesting of roles via role hierarchy, and Amazon Redshift propagates privileges with each role authorization. In the following example, granting role R1 to role R2 and then granting role R2 to role R3 authorizes role R3 with all the privileges from the three roles. Therefore, by granting role R3 to a user, the user has all the privileges from roles R1, R2, and R3.

Amazon Redshift doesn’t allow cyclic role authorizations, so role R3 can’t be granted to role R1, because that would create a cycle.
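
The following is a minimal sketch of that nesting, using hypothetical roles r1, r2, and r3 and a hypothetical existing user analyst1:

create role r1;
create role r2;
create role r3;

-- Build the hierarchy: r2 inherits r1, and r3 inherits r2 (and therefore r1)
grant role r1 to role r2;
grant role r2 to role r3;

-- analyst1 now holds all the privileges attached to r1, r2, and r3
grant role r3 to analyst1;

-- This would fail, because it would create a cyclic role authorization:
-- grant role r3 to role r1;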

You can use the Amazon Redshift CREATE ROLE, GRANT ROLE, and REVOKE ROLE commands, along with the ADMIN OPTION on grant and revoke. Only superusers, or regular users who have been granted the CREATE ROLE privilege, can use those commands.

RBAC example use cases

For this post, we use the industry standard TPC-H dataset to demonstrate our example use cases.

We have three different teams in the organization: Sales, Marketing, and Admin. For this example, we have two schemas, sales and marketing, in the Amazon Redshift database. Each schema has the following tables: nation, orders, part, partsupp, supplier, region, customer, and lineitem.

We have two different database roles, read-only and read/write, for both the Sales team and Marketing team individually. Each role can only perform operations to the objects belonging to the schema to which the role is assigned. For example, a role assigned to the sales schema can only perform operations based on assigned privileges to the sales schema, and can’t perform any operation on the marketing schema.

The read-only role has read-only access to the objects in the respective schema when the privilege is granted to the objects.

The read/write role has read and write (insert, update) access to the objects in the respective schema when the privileges are granted to the objects.

The Sales team has read-only (role name sales_ro) and read/write (role name sales_rw) privileges.

The Marketing team has similar roles: read-only (role name marketing_ro) and read/write (role name marketing_rw).

The Admin team has one role (db_admin), which has privileges to drop or create database roles, truncate tables, and analyze the entire database. The admin role can perform at the database level across both sales and marketing schemas.

Set up for the example use cases

To set up for the example use cases, create a database admin role and attach it to a database administrator. A superuser must perform all these steps.

All the queries for this post are run in the Amazon Redshift native Query Editor v2, but can be run just the same in any query editor, such as SQLWorkbench/J.

  1. Create the admin role (db_admin):
    create role db_admin;

  2. Create a database user named dbadmin:
    create user dbadmin password 'Test12345';

  3. Assign a system-defined role named sys:dba to the db_admin role:
    grant role sys:dba to role db_admin;

This role has the privileges to create schemas, create tables, drop schemas, drop tables, truncate tables, create or replace stored procedures, drop procedures, create or replace functions, create or replace external functions, create views, drop views, access catalog or system tables, analyze, vacuum, and cancel queries.

  1. Assign a system-defined role named sys:secadmin to the db_admin role:
    grant role sys:secadmin to role db_admin;

This role has the privileges to create users, alter users, drop users, create roles, drop roles, and grant roles.

  1. Assign the user dbadmin to the db_admin role:
    grant role db_admin to dbadmin;

From this point forward, we use the dbadmin user credential for performing any of the following steps when no specific user is mentioned.

  1. Create the sales and marketing database schema:
    create schema sales;
    
    create schema marketing;

  2. Create all eight tables (nation, orders, part, partsupp, supplier, region, customer, lineitem) in the sales and marketing schemas.

You can use the DDL available on the GitHub repo to create and populate the tables.

After the tables are created and populated, let’s move to the example use cases.

Example 1: Data read-only task

Sales analysts may want to get the list of suppliers with minimal cost. For this, the sales analyst only needs read-only access to the tables in the sales schema.

  1. Let’s create the read-only role (sales_ro) in the sales schema:
    create role sales_ro;

  2. Create a database user named salesanalyst:
    create user salesanalyst password 'Test12345';

  3. Grant the sales schema usage and select access to objects of the sales schema to the read-only role:
    grant usage on schema sales to role sales_ro;
    
    grant select on all tables in schema sales to role sales_ro;

  4. Now assign the user to the read-only sales role:
    grant role sales_ro to salesanalyst;

Now the salesanalyst database user can access the sales schema in the Amazon Redshift database using the salesanalyst credentials.

The salesanalyst user can generate a report of least-expensive suppliers using the following query:

set search_path to sales;
SELECT	TOP 100
	S_ACCTBAL,
	S_NAME,
	N_NAME,
	P_PARTKEY,
	P_MFGR,
	S_ADDRESS,
	S_PHONE,
	S_COMMENT
FROM	PART,
	SUPPLIER,
	PARTSUPP,
	NATION,
	REGION
WHERE	P_PARTKEY	= PS_PARTKEY AND
	S_SUPPKEY	= PS_SUPPKEY AND
	P_SIZE		= 34 AND
	P_TYPE		LIKE '%COPPER' AND
	S_NATIONKEY	= N_NATIONKEY AND
	N_REGIONKEY	= R_REGIONKEY AND
	R_NAME		= 'MIDDLE EAST' AND
	PS_SUPPLYCOST	= (	SELECT	MIN(PS_SUPPLYCOST)
				FROM	PARTSUPP,
					SUPPLIER,
					NATION,
					REGION
				WHERE	P_PARTKEY	= PS_PARTKEY AND
					S_SUPPKEY	= PS_SUPPKEY AND
					S_NATIONKEY	= N_NATIONKEY AND
					N_REGIONKEY	= R_REGIONKEY AND
					R_NAME		= 'MIDDLE EAST'
			  )
ORDER	BY	S_ACCTBAL DESC,
		N_NAME,
		S_NAME,
		P_PARTKEY
;

The salesanalyst user can successfully read data from the region table of the sales schema.

select * from sales.region;

In the following example, the salesanalyst user wants to update the comment for Region key 0 and Region name AFRICA in the region table. But the command fails with a permission denied error because they only have select permission on the region table in the sales schema.

update sales.region
set r_comment = 'Comment from Africa'
where r_regionkey = 0;

The salesanalyst user also wants to access objects from the marketing schema, but the command fails with a permission denied error.

select * from marketing.region;

Example 2: Data read/write task

In this example, the sales engineer who is responsible for building the extract, transform, and load (ETL) pipeline for data processing in the sales schema is given read and write access to perform their tasks. For these steps, we use the dbadmin user unless otherwise mentioned.

  1. Let’s create the read/write role (sales_rw) in the sales schema:
    create role sales_rw;

  2. Create a database user named salesengineer:
    create user salesengineer password 'Test12345';

  3. Grant the sales schema usage and select access to objects of the sales schema to the read/write role by assigning the read-only role to it:
    grant role sales_ro to role sales_rw;

  4. Now assign the user salesengineer to the read/write sales role:
    grant role sales_rw to salesengineer;

Now the salesengineer database user can access the sales schema in the Amazon Redshift database using the salesengineer credentials.

The salesengineer user can successfully read data from the region table of the sales schema.

select * from sales.region;

However, they can’t read tables from the marketing schema because the salesengineer user doesn’t have permission.

select * from marketing.region;

The salesengineer user then tries to update the region table in the sales schema but fails to do so.

update sales.region
set r_comment = 'Comment from Africa'
where r_regionkey = 0;

  1. Now, grant additional insert, update, and delete privileges to the read/write role:
grant update, insert, delete on all tables in schema sales to role sales_rw;

The salesengineer user then retries to update the region table in the sales schema and is able to do so successfully.

update sales.region
set r_comment = 'Comment from Africa'
where r_regionkey = 0;


When they read the data, it shows that the comment was updated for Region key 0 (for AFRICA) in the region table in the sales schema.

select * from sales.region;

Now salesengineer wants to analyze the region table since it was updated. However, they can’t do so, because this user doesn’t have the necessary privileges and isn’t the owner of the region table in the sales schema.

analyze sales.region;

Finally, the salesengineer user wants to vacuum the region table since it was updated. However, they can’t do so because they don’t have the necessary privileges and aren’t the owner of the region table.

vacuum sales.region;

Example 3: Database administration task

Amazon Redshift automatically sorts data and runs VACUUM DELETE in the background.

Similarly, Amazon Redshift continuously monitors your database and automatically performs analyze operations in the background. In some situations, such as a major one-off data load, the database administrator may want to perform maintenance on objects in the sales and marketing schemas immediately. They access the database using dbadmin credentials to perform these tasks.

The dbadmin database user can access the Amazon Redshift database using their credentials to perform analyze and vacuum of the region table in the sales schema.

analyze sales.region;

Vacuum sales.region;


Now the dbadmin database user accesses the Amazon Redshift database to perform analyze and vacuum of the region table in the marketing schema.

analyze marketing.region;

vacuum marketing.region;


As part of developing the ETL process, the salesengineer user needs to truncate the region table in the sales schema. However, they can’t perform a truncate because they don’t have the necessary privileges, and aren’t the owner of the region table in the sales schema.

truncate sales.region;


The dbadmin database user can access the Amazon Redshift database to provide truncate table privileges to the sales_rw role.

grant truncate table to role sales_rw;

Now the salesengineer can perform a truncate on the region table in the sales schema successfully.

First, they read the data:

select * from sales.region;


Then they perform the truncate:

truncate sales.region;


They read the data again to see the changes:

select * from sales.region;


For the marketing schema, you must perform similar operations for the marketing analyst and marketing engineer. We include the following scripts for your reference. The dbadmin user can use the following SQL commands to create the marketing roles and database users, assign privileges to those roles, and attach the users to the roles.

create role marketing_ro;

create role marketing_rw;

grant usage on schema marketing to role marketing_ro, role marketing_rw;

grant select on all tables in schema marketing to role marketing_ro;

grant role marketing_ro to role marketing_rw;

grant insert, update, delete on all tables in schema marketing to role marketing_rw;

create user marketinganalyst password 'Test12345';

create user marketingengineer password 'Test12345';

grant role marketing_ro to  marketinganalyst;

grant role marketing_rw to  marketingengineer;

System functions for RBAC in Amazon Redshift

Amazon Redshift has introduced two new functions to provide system information about particular user membership and role membership in additional groups or roles: role_is_member_of and user_is_member_of. These functions are available to superusers as well as regular users. Superusers can check all role memberships, whereas regular users can only check membership for roles that they have been granted access to.

role_is_member_of(role_name, granted_role_name)

The role_is_member_of function returns true if the role is a member of another role. Superusers can check all roles memberships; regular users can only check roles to which they have access. You receive an error if the provided roles don’t exist or the current user doesn’t have access to them. The following two role memberships are checked using the salesengineer user credentials:

select role_is_member_of('sales_rw', 'sales_ro');

select role_is_member_of('sales_ro', 'sales_rw');

user_is_member_of( user_name, role_or_group_name)

The user_is_member_of function returns true if the user is a member of the specified role or group. Superusers can check all user memberships; regular users can only check their own membership. You receive an error if the provided identities don’t exist or the current user doesn’t have access to them. The following user membership is checked using the salesengineer user credentials, and fails because salesengineer doesn’t have access to salesanalyst:

select user_is_member_of('salesanalyst', 'sales_ro');


When the same user membership is checked using the superuser credential, it returns a result:

select user_is_member_of('salesanalyst', 'sales_ro');

When salesengineer checks their own user membership, it returns the correct results:

select user_is_member_of('salesengineer', 'sales_ro');

select user_is_member_of('salesengineer', 'marketing_ro');

select user_is_member_of('marketinganalyst', 'sales_ro');

System views for RBAC in Amazon Redshift

Amazon Redshift has added several new views to be able to view the roles, the assignment of roles to users, the role hierarchy, and the privileges for database objects via roles. These views are available to superusers as well as regular users. Superusers can check all role details, whereas regular users can only check details for roles that they have been granted access to.

For example, you can query svv_user_grants to view the list of users that are explicitly granted roles in the cluster, or query svv_role_grants to view a list of roles that are explicitly granted roles in the cluster. For the full list of system views, refer to SVV views.
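
For example, the following queries are a minimal sketch of how you might inspect these views; selecting all columns keeps the sketch independent of the exact column layout documented for each view:

-- Users and the roles explicitly granted to them
select * from svv_user_grants;

-- Roles granted to other roles (the role hierarchy)
select * from svv_role_grants;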

Conclusion

In this post, we demonstrated how you can use role-based access control to further fortify your security posture by granularizing privileged access across users without needing to centralize superuser privileges in your Amazon Redshift cluster. Try out using database roles for your future Amazon Redshift implementations, and feel free to leave a comment about your experience.

In future posts, we will show how these roles also integrate tightly with workload management. You can use them when defining WLM queues, and also while implementing single sign-on via identity federation with Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD and other SAML-based applications.


About the Authors

Milind Oke is a Data Warehouse Specialist Solutions Architect based out of New York. He has been building data warehouse solutions for over 15 years and specializes in Amazon Redshift.

Dipankar Kushari is a Sr. Specialist Solutions Architect, Analytics with AWS.

Harshida Patel is a Specialist Sr. Solutions Architect, Analytics with AWS.

Debu Panda is a Senior Manager, Product Management with AWS. He is an industry leader in analytics, application platform, and database technologies, and has more than 25 years of experience in the IT world. Debu has published numerous articles on analytics, enterprise Java, and databases and has presented at multiple conferences such as re:Invent, Oracle Open World, and Java One. He is lead author of the EJB 3 in Action (Manning Publications 2007, 2014) and Middleware Management (Packt).

Huiyuan Wang is a software development engineer of Amazon Redshift. She has been working on MPP databases for over 6 years and has focused on query processing, optimization and metadata security.

Announcing Partner Program Enhancements

Post Syndicated from Elton Carneiro original https://www.backblaze.com/blog/announcing-partner-program-enhancements/

Here at Backblaze, we can definitively say that we get by with a little (okay, a lot of) help from our friends. We’ve always been committed to building an open, transparent, and interoperable ecosystem which has helped us grow an incredible partner network. We provide easy, affordable, and trusted cloud storage as a neutral partner, and they provide all manner of services, products, and solutions that use our storage. But there’s always room for improvement, right?

Which is why, today, we’re enhancing our partner program with two major new offerings:

  • Backblaze B2 Reserve: A predictable, capacity pricing model to empower our Channel Partners.
  • Backblaze Partner API: A new API that empowers our Alliance Partners to easily integrate and manage B2 Cloud Storage within their products and platforms.

Read on to learn a bit more about each component, and stay tuned throughout this week and next for deeper dives into each element.

Capacity Pricing With Backblaze B2 Reserve

Backblaze B2 Reserve is a new offering for our Channel Partners. Predictable, affordable pricing is our calling card, but for a long time our Channel Partners had a harder time than other customers when it came to accessing this value. Backblaze B2 Reserve brings them a capacity-based, annualized SKU that works seamlessly with channel billing models. The offering also provides seller incentives, Tera-grade support, and expanded migration services to help the channel accelerate cloud storage adoption and revenue growth.

The key benefits include:

  • Enhanced margin opportunity and a predictable pricing model.
  • Easier conversations with customers accustomed to an on-premises or capacity model.
  • Discounts and seller incentives.

The program is capacity-based, starting at 20TB, and its key features include:

  • Free egress up to the amount of storage purchased per month.
  • Free transaction calls.
  • Enhanced migration services.
  • No delete penalties.
  • Tera support.

All of the same great functionality is folded in, and partners get more margin, seller incentives, and a predictable growth model for their customers.

“Backblaze’s ease and reliability, paired with their price leadership, has always been attractive, but having their pricing aligned with our business model will bring them into so many more conversations we’re having across the types of customers we work with.”
—Mike Winkelmann, Owner of CineSys Inc.

User Management and Usage Reporting With the Backblaze Partner API

The Backblaze Partner API empowers independent software vendors participating in Backblaze’s Alliance Partner Program to add Backblaze B2 Cloud Storage as a seamless backend extension of their own platforms. They can programmatically provision accounts, run reports, and create a bundled solution or managed service for a unified user experience. By improving the customer experience partners can deliver, the Partner API lets Alliance Partners build additional cloud services into their product portfolios to generate new revenue streams and grow existing margin.

Features of the Partner API include:

  • Account provisioning.
  • Managing a practically unlimited number of accounts or groups.
  • Comprehensive usage reporting.

Using the Partner API, partners can offer a proprietary, branded, bundled solution with a unified bill, or create a solution that is “Powered by Backblaze B2.”

“Our customers produce thousands of hours of content daily, and, with the shift to leveraging cloud services like ours, they need a place to store both their original and transcoded files. The Backblaze Partner API allows us to expand our cloud services and eliminate complexity for our customers—giving them time to focus on their business needs, while we focus on innovations that drive more value.”
—Murad Mordukhay, CEO at Qencode

Other Benefits

To unlock the value inherent in Backblaze B2 Reserve and the Partner API, Backblaze is offering a free migration service to help customers painlessly copy or move their data from practically any source into B2 Cloud Storage.

This service supports truly free data mobility without complexity or downtime, including coverage of all data transfer costs and any egress fees charged by legacy vendors. Stay tuned for more on this feature that benefits both partners and all of our customers.

The addition of Tera support brings the benefit of a four-hour target response time for email support and named customer contacts to ensure that partners and their end users can troubleshoot at speed.

What’s Next?

These are the first of many features and programs that Backblaze will be rolling out this year to make our partners’ experience working with us better. Tomorrow, we’ll dive deeper into the Backblaze B2 Reserve offering. On Monday, we’ll offer more detail on the Backblaze Partner API feature. In the coming months, we’ll be sharing even more. Stay tuned.

Want to Learn More?

Reach out to us via email to schedule a meeting. If you’re going to the Channel Partners Conference, April 11–14, we’ll be there and would love to see you! If not, reach out and we’d be happy to start a conversation about how the new program can move your business forward.

The post Announcing Partner Program Enhancements appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

US Disrupts Russian Botnet

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/us-disrupts-russian-botnet.html

The Justice Department announced the disruption of a Russian GRU-controlled botnet:

The Justice Department today announced a court-authorized operation, conducted in March 2022, to disrupt a two-tiered global botnet of thousands of infected network hardware devices under the control of a threat actor known to security researchers as Sandworm, which the U.S. government has previously attributed to the Main Intelligence Directorate of the General Staff of the Armed Forces of the Russian Federation (the GRU). The operation copied and removed malware from vulnerable internet-connected firewall devices that Sandworm used for command and control (C2) of the underlying botnet. Although the operation did not involve access to the Sandworm malware on the thousands of underlying victim devices worldwide, referred to as “bots,” the disabling of the C2 mechanism severed those bots from the Sandworm C2 devices’ control.

The botnet “targets network devices manufactured by WatchGuard Technologies Inc. (WatchGuard) and ASUSTek Computer Inc. (ASUS).” And note that only the command-and-control mechanism was disrupted. Those devices are still vulnerable.

The Justice Department made a point of noting that it did this before the botnet was used for anything offensive.

Four more news articles. Slashdot post.

EDITED TO ADD (4/13): WatchGuard knew and fixed it nearly a year ago, but tried to keep it hidden. The patches were reverse-engineered.

[$] Private memory for KVM guests

Post Syndicated from original https://lwn.net/Articles/890224/

Cloud computing is a wonderful thing; it allows efficient use of computing systems and makes virtual machines instantly available at the click of a mouse or API call. But cloud computing can also be problematic; the security of virtual machines is dependent on the security of the host system. In most deployed systems, a host computer can dig through its guests’ memory at will; users running guest systems have to just hope that doesn’t happen. There are a number of solutions to that problem under development, including this KVM guest-private memory patch set by Chao Peng and others, but some open questions remain.

Security updates for Thursday

Post Syndicated from original https://lwn.net/Articles/890620/

Security updates have been issued by Arch Linux (bind), Debian (firefox-esr), Fedora (fribidi, gdal, and mingw-gdal), openSUSE (pdns-recursor and SDL2), Oracle (kernel), Slackware (mozilla), SUSE (glibc and openvpn-openssl1), and Ubuntu (fribidi and linux-azure-5.13, linux-oracle-5.13).

Three new reasons to register for Coolest Projects Global 2022

Post Syndicated from original https://www.raspberrypi.org/blog/coolest-projects-global-2022-feedback-swag-medals/

Over the last ten years, thousands of young people from all over the world have shared their digital creations at a Coolest Projects event. This year, there are a few brand-new and exciting reasons for young people to get involved in the Coolest Projects Global online tech showcase and share their tech creations in the online gallery for the worldwide Coolest Projects community to discover.


Not only will each Coolest Projects Global participant get unique feedback on their project, they’ll also receive a cool piece of limited-edition Coolest Projects swag. And young tech creators have a shot at winning a coveted Coolest Projects medal if their creation is selected as a judges’ favourite. We’ve added all of these new enhancements thanks to the thoughtful feedback we’ve received from participants in previous showcases.


1. Personalised project feedback

Young people who’ve showcased at an in-person Coolest Projects event know how great it is to see how other people react to their project. This year, creators participating in our online showcase will automatically get reactions and feedback from our Coolest Projects staff and partners who are reviewing projects.


That means each creator will find out what’s great about their project and how they might be able to improve it. All of this feedback will be shown in the creator’s online account on coolestprojects.org after the celebratory livestream in June.

2. Limited-edition Coolest Projects art

All young creators will also get limited-edition swag: a Coolest Projects poster designed by New York City-based artist Joey Rex. Creators can proudly display this memento of their participation in Coolest Projects Global 2022 on their bedroom wall, and as a digital phone or computer screen background.

The limited-edition Coolest Projects poster designed by Joey Rex.

The poster design was inspired by all the young makers who have participated in Coolest Projects over the last 10 years. It evokes themes of collaboration, invention, and creativity. Here’s what Joey, the artist, had to say about the design:

“This project was really exciting for me to work on, since I love geeking out over tech and building custom electronics, and I’m really grateful to the Coolest Projects team for trusting me with this vision. I hope my design can inspire the creators to keep up the great work and continue bringing their awesome ideas to reality!”


To claim their printed poster and backgrounds for their digital devices, creators will receive a link via email after the celebratory livestream in June.

3. Custom Coolest Projects medals

And behold, your first look at the Coolest Projects medal:

A Coolest Projects medal.

As you may already know, VIP judges select their favourite projects in each project category. Creators of projects that are selected as favourites will receive this custom die-cast medal to commemorate their unique accomplishment. The medal hangs on a full-colour Coolest Projects ribbon and would be the coolest addition to any wall or trophy shelf.


Creators who want to aim for a medal should keep in mind that judges’ favourite projects are selected based on their complexity, presentation, design, and of course their coolness. See the Coolest Projects FAQs for more information.


With all these new enhancements to Coolest Projects Global, there is a multitude of reasons for young tech creators to register a project for the online showcase.

To help young people get involved in Coolest Projects, we have planned a few livestreamed codealong events on our YouTube channel:

  • 26 April at 7pm BST, a good time for creators in Europe
  • 27 April at 7pm EDT, a good time for creators in the Americas

During these livestreams, you’ll also learn about the new project topics we’ve introduced for the online gallery this year. We’ll especially explore the ‘environment’ topic, sponsored by our friends at EPAM and Liberty Global.

More details are coming soon, so be sure to sign up for email updates to be the first to hear them.

That’s all of the latest news about Coolest Projects. Until next time… be cool.

The post Three new reasons to register for Coolest Projects Global 2022 appeared first on Raspberry Pi.

Putin's "Cultural Therapy"

Post Syndicated from Yana Hashamova original https://toest.bg/kulturnata-terapiya-na-putin/

On 24 February, the entire world, including Western European and American scholars of Russian culture and history, was shocked by the outbreak of military action in Ukraine. Many had suspected, or speculated, that Putin was merely threatening and trying to blackmail Western politicians and statesmen. For others, however, the behaviour of the Russian head of state came as no surprise. Throughout his rule, and especially over the past 10 years, Putin and his administration have worked systematically and purposefully to boost Russian national self-esteem, building up the notion of the singularity of Russian history, culture, and identity. Setting Russian uniqueness apart, however, is not an end in itself; it conceals imperial ambitions provoked by the collapse of the Soviet Union, which Putin calls the greatest geopolitical catastrophe of the twentieth century.

The Russian president's strategy rests on many talking points,

but I will dwell only on those I consider more important. The 1990s in Russia were marked by corruption and by the failures of the Transition, many of them the result of advice from Western consultants. The decade was scarred by a severe economic crisis, painful emotional trauma, and deep national humiliation. Seven years after being designated president by Yeltsin and later duly elected to the post, Putin delivered a sharp, critical speech at the Munich Security Conference in which he called the United States a global destabilising force, NATO enlargement a betrayal, and international human rights norms an insulting display of Western presumption. (I do not deny the sins of the United States, about which much can be and has been written, but this text is about Russia.)

Thus began the open attacks and the portrayal of the West and the United States as something alien and foreign to Russia, to its culture and values. On the one hand, the political mistakes and social problems of the West were emphasised; on the other, Russia was presented as a country with a richer culture, a more significant history, and a healthier society, a picture that captured the collective imagination of the majority of Russian citizens.

To that end, Putin's administration imposed control over almost all media and set about revising history.

The state television channel RT (Russia Today), aimed at foreign-language audiences, was created in 2005, first in English, and subsequently grew into an international network of channels in Arabic, German, French, and Spanish. Because of the war in Ukraine and the disinformation these channels spread, on 1 March this year the European Union suspended their broadcasts on its territory. Over the past few years, RT's propaganda pieces never missed an opportunity to highlight racism and the culture war in the United States, the protests against COVID restrictions in Western Europe, and, with regard to Ukraine, what the Russian government calls the genocide in Donbas.

Meanwhile, the rewritten history textbooks began to pass over Russia's periods of conquest and aggression in silence, to emphasise (rightly) the Soviet Union's contribution to the victory over Nazism in Europe, and, where ethnic and national communities are concerned, to stress the shared history of Russia and Ukraine and the unity of the two peoples.

The multiethnic and multinational nature of the Russian Federation also sets Russian society apart from American and Western society, Putin believes.

In January 2012, two months before being re-elected for a third term, the Russian president published an article in Nezavisimaya Gazeta outlining his strategy for achieving complete multiethnic and multinational harmony in the Russian Federation, an extremely important goal for any national leader.

Following familiar methods, he first sharply criticised the Western policy known as multiculturalism, which protects the rights of ethnic minorities to preserve their culture and language and provides them with state support to do so. According to the Russian president, this is an extremely dangerous political approach because it leads to segregation and elevates the rights of minorities above those of the traditional local population, above those of the majority. After listing the West's failures in this respect, Putin hastened to reassure his readers that the Russian Federation faces no danger of similar social problems, because Russia has traditionally, over the centuries, been built as a multiethnic and multinational state.

It is a fact that, as it developed and expanded, the Russian Empire subjugated dozens of ethnic and national groups with religions other than Orthodoxy and languages other than Russian, but that does not mean that coexistence between the dominant Russian majority and all the other communities is harmonious. Stalin's forced deportations of Tatars, Chechens, Kalmyks, and many others testify to that. And today the racist incidents in the media and on the streets of the Russian Federation directed at the non-Russian population are too numerous to describe.

What is Putin's strategy in this situation?

Deeply contradictory, in my opinion. He outlines and defends an idea he calls "cultural therapy", which elevates Russian culture and language as the leading and dominant force for achieving complete multiethnic and multinational harmony. The very use of the word "therapy" already suggests that, in his view, the society of the Russian Federation needs treatment. Without denying the rights of ethnic minorities to practise their own language and culture, he insists that everyone must absorb and accept Russian culture as their own.

The state-forming power of the Russian people over the centuries is a historical fact, Putin believes, and this people's mission is to bind a civilisation together with the help of the Russian language and culture, uniting Russian Armenians, Russian Azerbaijanis, Russian Tatars, and so on. He is convinced that only with the leading role of Russian culture and shared traditional values can society be cured of the differences of non-Russian cultures and achieve multiethnic harmony and unity.

To realise this idea, the Kremlin administration is working on many fronts.

The edited history textbooks, for example, say almost nothing about the contribution of non-Russian communities to the Soviet Union's victory in the Second World War. The historian Alexander Shubin, author of a ninth-grade textbook, even argues that the coverage of the great events in Russian history should be independent of ethnicity. And to ensure the "correct" perception of the "military operation" in Ukraine, the Ministry of Education circulated guidelines on exactly how the war should be presented to pupils, namely as a liberating mission that is necessary.

The list of traditional Russian values recently published by the Ministry of Culture, values said to be threatened by the ideological and political influence of extremists, terrorists, and the United States, also contributes to the implementation of "cultural therapy". According to the document, these "agents" seek to weaken the role of the Russian people as the state-forming nation; to undermine traditionally healthy Russian morals and replace them with alien, destructive values such as the cult of egoism and immorality (recall the law criminalising so-called homosexual propaganda in Russia, whose necessity was explained by the decadent influence of the West on the Russian population); to shake friendships and family relations, and so on. Recent research does indeed show that the share of single mothers in contemporary Russian families is rising significantly, but this is due not to pernicious Western influence but to men's chronic alcoholism and the state's weak social policy.

Among the traditional values highlighted are dignity, patriotism, human rights and freedoms (!?), the healthy family, creative labour, humanism, mercy, the unity of the peoples of Russia, and others. The document maps out how the government is to maintain and develop traditional values and how it is to combat decadent Western influence. For readers who remember our own socialist experience, this approach and this discourse will undoubtedly sound familiar, and alarming.

Here is a simple visual example of Putin's "cultural therapy".

National Unity Day (День народного единства), established in 2005 and celebrated on 4 November, is among the events meant to attest to the uniqueness of Russian culture. Yet on 1 November 2019 the newspaper Izvestia noted that half of the citizens surveyed saw the holiday as just another day off.

That same year in Saint Petersburg, residents of the city took part in a festival marking the holiday. Representatives of various ethnicities and nationalities joined in: Buryats, Poles, Tatars, Estonians, and many others. Dressed in traditional costumes, they demonstrated dances or crafts typical of their cultures. The festival entrance was a grandiose arch, some six to seven metres high, painted in the colours of the Russian flag. At the very top, in red and blue on a white background, was the festival's motto, "Russia Unites". On the two columns of the arch, in smaller type, were the date "4 November" and "National Unity Day", and at the very bottom of the columns, in the smallest type of all, the hashtag #мыедины ("We are united").

The arch eloquently illustrates not only Putin's "cultural therapy" but also the actual state of multiethnic and multinational harmony in the Russian Federation.

Against the background of this systematic propaganda over the past two decades, the Russian president's approach in Ukraine is not at all surprising. Nor is the anxiety in some former Soviet republics unfounded that "cultural therapy" could be applied to their societies and states as well. Add the fact that in recent years Putin has repeatedly said that Ukraine and Russia share a common history and are one people, and now that the "military operation" aims to destroy Nazism in Ukraine, and it becomes clear to many of us that this is rhetoric and policy concealing the ambitions of an imperial nationalism to expand the influence or the borders of the Russian Federation, to correct "the greatest geopolitical catastrophe of the twentieth century".

And while the United States, Russia, and China try to decide how to divide the world into spheres of influence, the democratic will of the "small" nations turns out to be of little consequence.

My thanks to Ms Anna Velkovska for her editorial notes.

Cover photo: National Unity Day in Moscow, 2014 © Moscow-live / Flickr

Source

An Easter Bunny or Just a Rabbit for the BNB?

Post Syndicated from Emiliya Milcheva original https://toest.bg/velikdenski-zaek-ili-prosto-zaek-za-bnb/

When Prime Minister Kiril Petkov says he supports Andrey Gyurov's candidacy for governor of the Bulgarian National Bank (BNB), that means nothing. Did anyone really expect one of the two leaders of We Continue the Change (PP) to fail to defend his own party's nomination? "I know him, a man with an incredible ethical code, a professional. So yes, I personally put my name behind his candidacy," Kiril Petkov declared. If such superlatives had come from someone like Mario Draghi, the former president of the European Central Bank, they would have been a reference worth something.

It is unconvincing for Petkov to vouch for Gyurov's professionalism;

Hristo Stoichkov could do it with just as much success. For the same reason, the endorsement given by There Is Such a People (ITN) leader Slavi Trifonov of the ITN candidate Lyubomir Karimanski and his professional and moral qualities carries no weight either. "For the position of head of the Bulgarian National Bank, my associates and I have had the good fortune that life has brought us together with a person whom we are convinced is the finest candidacy," says Trifonov. Well, what a matchmaker this life turns out to be!

In general, professionalism has been absent from the debate over a new BNB governor for the next six years, both among the MPs questioning the nominees at their hearing before the Budget Committee and in most of the media, where bankers are simply nowhere to be found. Unless we count Levon Hampartzoumian, former head of UniCredit Bulbank, former chairman of the Association of Commercial Banks, and current chairman of the Bulgarian Business Leaders Forum, who, incidentally, also said that the two candidates are not particularly convincing. "Their achievements are not that impressive, but they have powdered up their CVs. Compared with the team currently running the BNB, they don't really compare," Hampartzoumian said on bTV.

If the two candidates were outside politics, would PP or ITN have nominated them at all,

asked Prof. Garabed Minassian rhetorically, a professor at the Economic Research Institute of the Bulgarian Academy of Sciences and former member of the BNB's Governing Council. "For the Bulgarian Development Bank you need to be a confidant of those currently in power, whereas for the BNB, which is part of the EU's monetary issuance system and hence of the euro, you need the trust of the ECB, the IMF, the European Development Bank, and the European Investment Bank at a minimum," the economist Krasen Stanchev explained on Bulgarian National Radio. In his view, what is happening with the choice of a BNB head is "quite alarming, not to use a harsher word".

It is precisely these nominations, which are more political than expert (and behind each of them stand particular circles), that give rise to concern. Political self-interest dominated the elections of the longest-serving BNB governor, Ivan Iskrov, from the days of the Tsar to Boyko Borissov's second cabinet, during whose term Corporate Commercial Bank (KTB) blew up. Iskrov sang "Nazad, nazad, mome Kalino" with KTB's majority owner Tsvetan Vassilev and embraced GERB's finance minister Vladislav Goranov. Through all those years, the BNB never saw that the state was propping up KTB with taxpayers' money while the bank funnelled it into related companies.

Eight years after KTB's collapse, the banking system is still feeling the effects,

and the Bank Deposit Guarantee Fund has not been fully replenished. The bill for the money lost in KTB comes to nearly BGN 5 billion. A new bank failure would cost far more, including as a brake on the road to the euro. Can Gyurov and Karimanski guarantee that Banking Supervision will work so conscientiously that no bank will blow up? The BNB's leadership is responsible for the stability of the banking system and controls all the banks beyond the five largest, which have been under the direct supervision of the ECB since Bulgaria joined the EU's Banking Union.

The current BNB governor, Dimitar Radev, was nominated by GERB in 2015, but he was elected in the plenary chamber with 130 votes, having also received support from MPs of the Reformist Bloc, the Bulgarian Democratic Centre, ABV, and seven from the Movement for Rights and Freedoms (DPS). His professional career includes 21 years at the Ministry of Finance, where he rose to deputy minister, followed by 14 years at the International Monetary Fund.

The political appetite for the post of BNB governor is enormous.

It is nothing new, as is well known, but it is being stoked by the prospect of managing (and investing) billions once Bulgaria joins the eurozone. Even if that does not happen on 1 January 2024, it will still happen within the future governor's six-year term. (In one of his rare interviews, given in early March to the daily 24 Chasa, BNB governor Dimitar Radev says we have every chance of joining on the appointed date.)

Part of Bulgaria's foreign exchange reserves will be transferred to the ECB, but part will remain here. How much cannot be said yet, but certainly tens of billions. At present Bulgaria's reserves stand at nearly BGN 60 billion, and in a few years they will be worth even more. These funds will be invested in line with ECB requirements, but still with greater freedom. Whatever political forces are in government by then, the BNB governor will belong to a particular circle, provided parliament approves one of the two candidates. And provided, of course, that he genuinely supports the path to the euro and does not try to obstruct the process.

The Law on the BNB allows the governor to choose the deputy governors,

and the terms of two of the current ones have expired anyway. Whatever people Gyurov or Karimanski have picked for those posts (their nominations are ready), they are unlikely to have more experience than the candidates themselves. That means that whoever takes over the Issue Department (the currency board), for example, will not have greater expertise than the current deputy governor Kalin Hristov, who has worked at the BNB since 1997 and has been a member of the ECB's Monetary Policy Committee for more than 15 years. The other deputy governor whose term has expired, Nina Stoyanova, head of the Banking Department, likewise has more than 16 years of experience at the BNB and specialised training at the ECB.

When Bulgaria adopts the euro, the BNB, jointly with the Ministry of Finance, will conduct monetary policy, something it has not actively done until now because of the currency board arrangement. After euro adoption it will have to work in closer concert with the government, specifically with the finance minister, and above all with the ECB. So senior experience at the BNB, the Ministry of Finance, or an international financial institution is essential for the central bank's governor.

Except that neither of the two candidates has it.

Krasen Stanchev, founder of the Institute for Market Economics, warned that it is quite unwise to make changes at the BNB right now. "When you have experienced people currently at the head of the central bank, people who have been through various crisis situations in the past, both domestic and external, it is best, now that great stress on the European banking system, the ECB, and the EU is expected, and not only because of the war in Ukraine, not to replace them," Stanchev said.

There are grounds for such a warning. Because of the war, serious sanctions have been imposed on Russia, including bans on transactions involving Russian reserves, a halt to the flow of euros to Russia, and the exclusion of seven Russian banks from SWIFT. These measures may provoke cyberattacks and carry risks for payment systems and other threats, and inexperienced people at the head of the BNB would struggle to manage such crises.

Absurd and laughable are the attempts by representatives of the ruling coalition to explain the presence of two candidates of their own by invoking

the strength of democracy.

"In our coalition there is room for democracy, for everyone to put forward the candidate they consider best and for parliament then to decide," said Prime Minister Kiril Petkov. In reality, the lack of a single candidacy, something the coalition in Germany, for example, managed to agree on for the Bundesbank presidency, is a serious weakness. The four political forces did not put the upcoming changes at the regulators (the energy regulator KEVR, the financial supervision commission KFN, and others) or at the top of the BNB on the table during the coalition agreement talks. In fact, the most serious questions were simply sidestepped, and the consequences are already evident after only the first three months in power.

In a situation where the ruling coalition is not united, neither of the two candidates can be elected without help from the opposition: GERB, DPS, or GERB and DPS together. Whether that will happen will become clear before Easter. In an attempt to make the election as easy as possible, PP decided to repeat a shameful practice of previous parliaments:

the games with the quorum.

The Speaker of the 47th National Assembly, Nikola Minchev (PP), announced that he was reviving the practice of the 44th National Assembly regarding the passage of laws and the required quorum: "For a proposal to be adopted, more than half of those last registered, or as established by the latest quorum check, must have voted for it." That means a candidate for BNB governor could be approved with just 61 votes, since the quorum is 121. A shameful proposal. Until now, every BNB governor has been elected with the votes of more than half of all MPs, that is, more than 120.

Until the vote reaches the plenary chamber, the haggling between the parties continues. So far, PP and Democratic Bulgaria (DB) have publicly declared support for Andrey Gyurov. The BSP is bargaining over who will offer more, so that Kornelia Ninova can decide to whom to gift the 26 votes of the socialist MPs, though she is more likely to shift them to Gyurov. She has things to trade: new posts, or refusing DB's proposal to expel the Russian ambassador Mitrofanova. So far Ninova has shown she handles this work well: Bulgaria will send Ukraine only 4,000 helmets and bulletproof vests, not weapons, as Democratic Bulgaria had insisted.

So the new BNB governor is unlikely to be an Easter bunny. More likely a little rabbit pulled out of a hat with a bit of hocus-pocus.

Cover photo: Bjoertvedt / Wikimedia

Source
