There Is No Single Clear Truth. A Conversation with the Writer Selja Ahava

Post Syndicated from Marin Bodakov original https://toest.bg/selja-ahava-interview/

The Finnish writer Selja Ahava (b. 1974) already has two novels published in Bulgarian: "Things That Fall from the Sky" (winner of the 2016 European Union Prize for Literature) and "Before My Husband Disappears." Both were translated from Finnish by Rositsa Tsvetanova for the Colibri publishing house.

The focus of this conversation is the second book. In "Before My Husband Disappears," a woman hears from her husband: "I have always wanted to be a woman." In what way is the heroine like Columbus, who sets out for the West Indies but discovers a new continent, America, which, incidentally, will be named after someone else… And in what way do the two differ.

The occasion for Marin Bodakov's meeting with Selja Ahava was her participation in the 2020 Sofia International Literary Festival.


What does it mean to be a man? What does it mean to be a woman? These are the questions I kept asking myself while reading your book. But what I want to ask you is: what does it mean to be a human being? What is the human essence? What remains forever shared between the man and the woman, between the husband and the wife, in your story?

When does gender matter? What does it mean? How much does it matter? These are questions the narrator of my book struggles with. It is one thing to say "We are all human," "I love you as a person," and "I respect your otherness, the unknown in you," and quite another to really test what these sentences mean in real life. Throughout the book the narrator asks: how much remains of the person I love if the "man" in him is taken away? It is a hard question. And in this case she realizes that little of him remains. For her, as a lover, he has to be a he. In a sense, she discovers her own heterosexual identity through the process.

The meaning of gender is probably something the average heterosexual cisgender person never has to think about very much, because the world is made for us; it follows our rules. We live in a society where it is assumed that we are what we appear to be, that men love women, women love men, and these identities do not change. When you belong to a minority, you run up against these assumptions, which you feel are not right, and you probably have to define and test your identity far more than you will ever have to when you are part of the majority.

We are dealing with deeply personal matters when we deal with gender and identity. And we tend to assume that others feel, define, and evaluate things the way we do. But I have witnessed many times, when meeting the book's readers, that these assumptions vary and contradict one another. Gender is something fluid. There are people who do not think about the other person's gender at all – I remember listening to someone describe how they fell in love, and only on the third date did they start wondering what gender the other person was. I find that fascinating.

Would the novel "Before My Husband Disappears" have been possible without the support of the parallel story of Columbus, who has no idea that he is discovering a continent and that not one of the islands he names will keep its name in the future?

No, not at all. The combination of these two levels, two scales, is inseparable for me. One level is a landscape, a historical scale, and the other is the intimate scale of a relationship. But in essence both describe the same thing: a moment or phase in time when something you had accepted as true, as something known, turns out to be wrong. It was merely belief, not truth. And after such a moment you have to define everything anew – draw a new map, revise the names.

For me, "Before My Husband Disappears" is a story about being wrong. About how hard it is to admit you were mistaken. About how prone we are to falsifying things (maps, biographies, portraits, dates) for our own purposes. I found the character of Christopher Columbus deeply moving. How he cries, "I have found India, I have found India!" when almost everything contradicts his truth. It is no coincidence that these two sentences, "I found India" and "I had a husband," are repeated together. And of course, towards the end of the book Columbus and the narrator merge, and there are parts where you cannot tell exactly who is speaking.

The other aspect I want to mention is the question of who gets to tell the story and what manipulative power is bound up with the act of telling. The narrator is never innocent, and this becomes all the more obvious the more she silences the other voices. In my book the narrator wields a great deal of power – just as Columbus did with his maps, his sea routes, his personal history. What the Columbus line brings is this question: how much of this story has been falsified? How much has been left out and hidden? How much do you trust this narrator?

The husband in the book chooses the name Lili for himself as a woman. The famous "Danish Girl" chose the same name for herself… That prompts me to ask whether there is a literary tradition behind your book. "Orlando" by Virginia Woolf, for example?

No, the name Lili is not a reference to Lili Elbe. I first named her Lulu, but someone pointed out that the name has a connotation of prostitution (at least in a German context), which I was not after. So I changed it to Lili. Of course, I was aware of the connection to Lili Elbe and decided it did no harm.

Interesting that you mention Woolf's "Orlando"! My latest book, "The Woman Who Loved Insects," has a protagonist who lives for 370 years. In that book I play a little with the idea of Orlando. The novel continues the theme of metamorphosis, this time from the insects' point of view. I suppose change is a theme that inspires me. The world as a constantly changing, unpredictable, unreliable place.

Is the poeticizing of the facts of your unique life story perhaps the mode through which it becomes possible to express them?

Using autobiographical material in fiction does not interest me as such. I have never been interested in autofiction as a reader either. It only became possible to write the book once I had found a literary expression for these events.

Without the advances of medicine, would the husband in your book have disappeared? And what is the morality of contemporary medicine when it comes to gender reassignment?

When it comes to transgender issues, I would listen to transgender people and let them do the talking. It is not our place to comment on what should or should not be done. I am sorry, but I do not know what the current situation of sexual and gender minorities in Bulgaria is, whether they have a voice, whether they have role models, what their rights are. But I will say this: the only way to care for transgender people is to let them decide. The law should be such that individual doctors have no room for personal interpretation, and healthcare workers should have knowledge and training on issues concerning sexual and gender minorities.

And I would ask anyone who reads my book and realizes that the other voice is missing to go looking for those voices. When did you last read an interview with a trans person? How many minority characters have you come across in children's fiction? These are questions that matter.

What does the body mean to you now? The male body? The human body? Its role in the performance of love?

The body… that is a huge topic! But if we are talking about desire, about who I feel attracted to, the body means a lot. Perhaps I had never realized how important it is. Once again, this is something very personal and individual. One of the best things about getting older is that I am getting to know my own body better. There is so much less shame and shyness; it is wonderful!

How did readers react to "Before My Husband Disappears"? Which reactions prevailed, the liberal or the conservative? How is the book received in Finland, and how abroad?

I am not sure what you are referring to when you say liberal/conservative. Reactions were divided. There were readers who sympathized strongly with the narrator and wanted to skip the entire Columbus part, and there were others who found the book deeply offensive because of its limited point of view. Honestly, I found both reactions somewhat unintellectual.

Is there a kinship between the characters in "Things That Fall from the Sky" and the fateful coincidences in their lives, on the one hand, and your own story, on the other?

Sometimes mysterious things happen in art. I know I am not the only author who has experienced this: first you write about something, then it happens to you. You write about an illness, you fall ill, and so on. That is what happened to me. I had almost finished "Things That Fall from the Sky" when my ex-husband told me he was transgender, and yes, I felt as if I had turned into a fictional character in my own book.

I think this is a good example of how fiction and reality intertwine. It is not that you first experience something and then write about it in your texts. It can happen simultaneously; reality can turn into fiction, and vice versa. And the best part of creative work is that the subconscious works too. So even if you plan to write about one thing, you may end up with something else.

After all this, what does intimacy mean to you? And truth?

If you mean "have I lost my trust in people," my answer is no. The truth is always complex, and sometimes contradictory, and I have had to accept that there will always be a part of my life in which there is no single clear truth, but several contradictory versions existing side by side.

Translated from English by Lilia Trifonova
Title photo: A still from an interview with Selja Ahava on the occasion of the 2016 European Union Prize for Literature. Source: EUPL Prize

"Toest" relies solely on the financial support of its readers.

By the Letters: Kopland, Moskova, Nádas

Post Syndicated from Marin Bodakov original https://toest.bg/kopland-moskova-nadas/

From person to person, with a new book in hand – the walk through the letters continues. Every month Marin Bodakov presents three new literary titles. And asks how exactly these books change us.

"What I Said, Let It Be" by Rutger Kopland

selection, translation from Dutch, and afterword by Boryana Katsarska, Sofia: DA Poetry Publishing, 2020

as a child I dreamed they wanted to take me with them
now I know they would have abandoned me somewhere

That is how Rutger Kopland writes about some geese. Yes, geese, in a poem concerned with the sense of depth. And the temperature of this poem seems to be exactly zero degrees, but we know that within us it will keep falling – and the crystalline structures of the psyche will appear. The glow of acceptance and calm.

No surprise there: Kopland is also a renowned professor of psychiatry.

Why is there a danger that the poetry of the colossus Kopland will slip under today's readers' radar, that it will go unnoticed? Because it truly is tuned to a very, very fine frequency. And it catches us with our pants down.

It is much easier to describe what it is not: it is not in the least spectacular, there are no big words or gestures in it, adjectives are almost entirely absent. And at the same time there is no flirtation with vagueness or mystery. Absence and fullness neutralize each other.

One has the feeling that right in front of you, at this very moment, Rutger Kopland is connecting with his deepest, his most living springs. Before us is an unusual self-revelation. And an unusually quiet one at that. The great poet is not an excessively laconic author, but his words are measured, calmed, at peace. Anxiety has vacated its place to make room for them.

and, ah, how simplicity finds its riddle

The Nobel laureate J. M. Coetzee writes: "Kopland does not sigh nostalgically for a vanished past – he is too stoical for that – but simply expresses sorrow that the world has to be as it is, a sorrow that is an inevitable part of living in the world and loving it and knowing that one must leave it."

A superb translation from Dutch by Boryana Katsarska. With many new and revised translations, because this is the second Bulgarian edition of Rutger Kopland, after "A Man in the Garden" (Sofia: Stigmati, 2012). Back then I wrote: "Prof. Kopland's clinical practice shows beneath the words – the incomprehensible and the self-evident now overlap, now drift irrevocably apart; the truths are meaningless, but there are none better…"

I have grown older – and from that old review of mine, this is just about all I would repeat.

"It Passes – It Goes By…" by Rada Moskova

Sofia: Bulgarian Literature Foundation, 2020

Rada Moskova's poetry is at peace with transience. Transience brings relief. The fleeting is intimate. And Rada knows it: this too shall pass.

And we along with it.

A psychotherapist once explained to me that no one can stop the passage of time. And indeed it is so. Rada has always been interested in time – the titles of her books confirm it: "Between the Minutes," "Pieces of Time." Might that also be because, by training, she is in fact a physician?

Rada's poems are a link between an ever noisier and more colorful nature and an ever quieter and more enlightened old person. From the vantage point of a long life story, they treat the present moment with respect and understanding – and for that reason they note touching details from the life of the natural world. A bright weariness and a bright sadness lend weight and self-irony to the decades lived through.

Inevitability is accepted with dignity. It is not a lament but a statement of fact:

The Fall of Icarus

To M.

The earth receives him as is fitting –
with birds' wings,
a gentle wind,
a green nod of branches,
and the glints of people's glances.

And then everything turns into impact.

So writes the screenwriter of famous films such as "Up in the Cherry Tree" and "A Dog in a Drawer." A noble city woman, now 87 years old, who had a very uneasy relationship with the socialist regime but has kept her candor, modesty, and incorruptibility intact to this day.

And her humanity. It shows in Rada Moskova's poetry.

"The Salt of Life" by Péter Nádas

translated from Hungarian by Svetla Kyoseva, Sofia: SONM, 2020

The wise book by the great Hungarian writer is dedicated to the 500th anniversary of the Reformation – and to one rather peculiar case.

It is well known that Lutheran churches have only a crucifix above the altar. The Calvinists gave up even the crucifix. In a small town famous for its salt, the writer Péter Nádas finds himself in a Protestant cathedral overflowing with exquisite works of art. In their striving for simplicity and purity, the worshippers, unlike in so many other places, did not destroy a single artifact.

How so? Why?

The "culprit" is the Protestant preacher Johannes Brenz (1499–1570), who was 23 years old when he arrived to serve at the cathedral. How did he change his fellow townspeople's inclination toward barbarity? His ideology, according to Nádas's essay, runs as follows: "… the new faith wholly contains the old faith together with the church vessels. There was no way to attack, plunder, or burn without desecrating their own faith as well. Everything in the cathedral is in its place to this day."

How did Brenz and his followers deal with human imperfection? Through lifelong learning. Through free schooling for everyone, regardless of means. Through instruction in Latin, but also in their native Alemannic tongue. A school in which girls, too, naturally had their place.

And through the release of condemned "witches," because in practice there are no witches. Through support for the poor and the lonely sick, for widows and orphans. Through care for the word, through connectedness through the word. As Nádas writes, that is precisely how the newly converted had their own thirst for violence and baseness reined in.

Péter Nádas's "The Salt of Life," a book dear to me, sent me back to another book from 2020 that matters greatly to me: "An Encounter," a collection of essays by Milan Kundera. And especially the essay "Enmities and Friendships." In it, Kundera examines the disagreement between those for whom political struggle is more important than concrete life, than art, than thought, and those for whom the meaning of politics is to serve concrete life, art, thought…

It is clear whose side I choose.

(The other day a friend asked me which texts have mattered to me lately; I pointed him to the essays by Nádas and Kundera. Here is what he replied:

"'Enmities and Friendships' seems to me a continuation of the theme in Nádas's text – the inability to combine things that do not contradict each other (views and friendship, humanity). And the hatred that follows from it."

"You are right. I had not thought about the connection. Perhaps it is the need for wholeness, for inclusion," I answered.

"Yes, perhaps," he replied. "How good it would be if we had more trust in that natural need.")

Title illustration: © Alexandra Dimitrova

"Toest" relies solely on the financial support of its readers.

The Week in Toest (January 11–15)

Post Syndicated from Toest original https://toest.bg/editorial-11-15-january-2021/

Emilia Milcheva

The important moment of the week now ending is that the president scheduled the next parliamentary elections, for the 45th National Assembly, for April 4. Emilia Milcheva opens her analysis of the situation, which now serves as the starting point for the future course of the political theater on the eve of these elections, with a quote from a Shakespearean drama. Read "Titanomachy or Wannabe. The Bulgarian Battle."


Svetla Encheva

The world's attention remains focused on the events in Washington. Donald Trump managed to go down in history with two impeachments in a single term. Meanwhile, a corpulent particle of our political elite chose to worry about entirely different news, though also connected with the US Congress. Read about "The Gender Confusion of Krasimir Karakachanov" in Svetla Encheva's new article.


Venelina Popova

This week Venelina Popova spoke with attorney Albena Belyanova, who filed a complaint with the anti-corruption commission KPKONPI demanding that Prime Minister Borissov be investigated for corruption and conflict of interest. "Living in fear is harder to bear than standing up to him," says Belyanova in her interview for Toest.


Yoanna Elmy

Our author Yoanna Elmy, together with her colleagues Georgi Mihaylov and Yoana Shopova, took part in an investigative journalism hackathon organized by AEJ-Bulgaria on the question of who is behind disinformation campaigns and how they travel from social networks to the mass media. Their work was among the three pieces distinguished by the jury. We are publishing their investigation, entitled "To Chicago and Back: How Chemtrails and Coronavirus Conspiracies Merged into One."


The news of the planned greening of Paris's emblematic Champs-Élysées made the rounds of much of the Bulgarian media. In this connection, in her article "The Urban Forests of Paris and the Green Wedges of Sofia," architect Aneta Vasileva reminds us that similar ideas for greening the Bulgarian capital were laid out as early as the so-called Mussman Plan of 1938, which was never fully implemented.


Marin Bodakov

"There Is No Single Clear Truth" is the title of Marin Bodakov's interview with the Finnish writer Selja Ahava, who since the end of last year has a second novel translated into Bulgarian and, on that occasion, was a guest of the Sofia International Literary Festival. The interview and the book "Before My Husband Disappears" both begin with what happens when a woman hears from her husband that he has always wanted to be a woman. And they continue with other important questions about the human and the intimate.


It is also time for Marin's first three recommended books of the new year in his regular column "By the Letters." This time they are the poetry collections "What I Said, Let It Be" by Rutger Kopland and "It Passes – It Goes By…" by Rada Moskova, as well as the wise book "The Salt of Life" by Péter Nádas, whom we managed to interview last year.

Happy reading!

"Toest" relies solely on the financial support of its readers.

Developing enterprise application patterns with the AWS CDK

Post Syndicated from Krishnakumar Rengarajan original https://aws.amazon.com/blogs/devops/developing-application-patterns-cdk/

Enterprises often need to standardize their infrastructure as code (IaC) for governance, compliance, and quality control reasons. You also need to manage and centrally publish updates to your IaC libraries. In this post, we demonstrate how to use the AWS Cloud Development Kit (AWS CDK) to define patterns for IaC and publish them for consumption in controlled releases using AWS CodeArtifact.

AWS CDK is an open-source software development framework to model and provision cloud application resources in programming languages such as TypeScript, JavaScript, Python, Java, and C#/.Net. The basic building blocks of AWS CDK are called constructs, which map to one or more AWS resources, and can be composed of other constructs. Constructs allow high-level abstractions to be defined as patterns. You can synthesize constructs into AWS CloudFormation templates and deploy them into an AWS account.
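
As a rough sketch of what such a construct looks like (this example is illustrative and not taken from the repository used in this post; the resource names and the S3/SQS choice are hypothetical), a pattern construct is simply a class that extends cdk.Construct and wires several resources together behind one reusable abstraction:

import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';
import * as sqs from '@aws-cdk/aws-sqs';

// Hypothetical pattern construct: bundles a versioned bucket and a queue
// so consuming teams get both with a single, governed building block.
export class StorageWithQueue extends cdk.Construct {
  public readonly bucket: s3.Bucket;

  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    // The bucket is exposed as a property so consumers can reference it.
    this.bucket = new s3.Bucket(this, 'ArtifactBucket', {
      versioned: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });

    // A queue composed into the same construct.
    new sqs.Queue(this, 'ProcessingQueue', {
      visibilityTimeout: cdk.Duration.seconds(300),
    });
  }
}

A stack can then instantiate the pattern with new StorageWithQueue(this, 'Storage') and synthesize it to CloudFormation as usual.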

AWS CodeArtifact is a fully managed service for managing the lifecycle of software artifacts. You can use CodeArtifact to securely store, publish, and share software artifacts. Software artifacts are stored in repositories, which are aggregated into a domain. A CodeArtifact domain allows organizational policies to be applied across multiple repositories. You can use CodeArtifact with common build tools and package managers such as NuGet, Maven, Gradle, npm, yarn, pip, and twine.

Solution overview

In this solution, we complete the following steps:

  1. Create two AWS CDK pattern constructs in TypeScript: one for traditional three-tier web applications and a second for serverless web applications.
  2. Publish the pattern constructs to CodeArtifact as npm packages. npm is the package manager for Node.js.
  3. Consume the pattern construct npm packages from CodeArtifact and use them to provision the AWS infrastructure.

We provide more information about the pattern constructs in the following sections. The source code mentioned in this blog is available in GitHub.

Note: The code provided in this blog post is for demonstration purposes only. You must ensure that it meets your security and production readiness requirements.

Traditional three-tier web application construct

The first pattern construct is for a traditional three-tier web application running on Amazon Elastic Compute Cloud (Amazon EC2), with AWS resources consisting of an Application Load Balancer, an Auto Scaling group with an EC2 launch configuration, an Amazon Relational Database Service (Amazon RDS) or Amazon Aurora database, and AWS Secrets Manager. The following diagram illustrates this architecture.

 

Traditional stack architecture

Serverless web application construct

The second pattern construct is for a serverless application with AWS resources in AWS Lambda, Amazon API Gateway, and Amazon DynamoDB.

Serverless application architecture

Publishing and consuming pattern constructs

Both constructs are written in TypeScript and published to CodeArtifact as npm packages. A semantic versioning scheme is used to version the construct packages. After a package is published to CodeArtifact, teams can consume it to deploy AWS resources. The following diagram illustrates this architecture.

Pattern constructs

Prerequisites

Before getting started, complete the following steps:

  1. Clone the code from the GitHub repository for the traditional and serverless web application constructs:
    git clone https://github.com/aws-samples/aws-cdk-developing-application-patterns-blog.git
    cd aws-cdk-developing-application-patterns-blog
  2. Configure AWS Identity and Access Management (IAM) permissions by attaching IAM policies to the user, group, or role implementing this solution. The following policy files are in the iam folder in the root of the cloned repo:
    • BlogPublishArtifacts.json – The IAM policy to configure CodeArtifact and publish packages to it.
    • BlogConsumeTraditional.json – The IAM policy to consume the traditional three-tier web application construct from CodeArtifact and deploy it to an AWS account.
    • PublishArtifacts.json – The IAM policy to consume the serverless construct from CodeArtifact and deploy it to an AWS account.

Configuring CodeArtifact

In this step, we configure CodeArtifact for publishing the pattern constructs as npm packages. The following AWS resources are created:

  • A CodeArtifact domain named blog-domain
  • Two CodeArtifact repositories:
    • blog-npm-store – For configuring the upstream NPM repository.
    • blog-repository – For publishing custom packages.

Deploy the CodeArtifact resources with the following code:

cd prerequisites/
rm -rf package-lock.json node_modules
npm install
cdk deploy --require-approval never
cd ..

Log in to the blog-repository. This step is needed for publishing and consuming the npm packages. See the following code:

aws codeartifact login \
     --tool npm \
     --domain blog-domain \
     --domain-owner $(aws sts get-caller-identity --output text --query 'Account') \
     --repository blog-repository

Publishing the pattern constructs

  1. Change the directory to the serverless construct:
    cd serverless
  2. Install the required npm packages:
    rm package-lock.json && rm -rf node_modules
    npm install
    
  3. Build the npm project:
    npm run build
  4. Publish the construct npm package to the CodeArtifact repository:
    npm publish

    Follow the previously mentioned steps for building and publishing a traditional (classic Load Balancer plus Amazon EC2) web app by running these commands in the traditional directory.

    If the publishing is successful, you see messages like the following screenshots. The following screenshot shows the traditional infrastructure.

    Successful publishing of Traditional construct package to CodeArtifact

    The following screenshot shows the message for the serverless infrastructure.

    Successful publishing of Serverless construct package to CodeArtifact

    We just published version 1.0.1 of both the traditional and serverless web app constructs. To release a new version, we can simply update the version attribute in the package.json file in the traditional or serverless folder and repeat the last two steps (a minimal command sketch follows the snippets below).

    The following code snippet is for the traditional construct:

    {
        "name": "traditional-infrastructure",
        "main": "lib/index.js",
        "files": [
            "lib/*.js",
            "src"
        ],
        "types": "lib/index.d.ts",
        "version": "1.0.1",
    ...
    }

    The following code snippet is for the serverless construct:

    {
        "name": "serverless-infrastructure",
        "main": "lib/index.js",
        "files": [
            "lib/*.js",
            "src"
        ],
        "types": "lib/index.d.ts",
        "version": "1.0.1",
    ...
    }
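
    A minimal sketch of that release flow (assuming you are in the construct's folder and are already logged in to the CodeArtifact repository) could look like this:

    npm version patch   # bumps "version" in package.json, e.g. 1.0.1 -> 1.0.2
    npm run build
    npm publish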

Consuming the pattern constructs from CodeArtifact

In this step, we demonstrate how the pattern constructs published in the previous steps can be consumed and used to provision AWS infrastructure.

  1. From the root of the cloned repository, change to the examples directory, which contains code for consuming the traditional or serverless constructs. To consume the traditional construct, use the following code:
    cd examples/traditional

    To consume the serverless construct, use the following code:

    cd examples/serverless
  2. Open the package.json file in either directory and note that the packages we consume are listed in the dependencies section, along with their versions.
    The following code shows the traditional web app construct dependencies:

    "dependencies": {
        "@aws-cdk/core": "1.30.0",
        "traditional-infrastructure": "1.0.1",
        "aws-cdk": "1.47.0"
    }

    The following code shows the serverless web app construct dependencies:

    "dependencies": {
        "@aws-cdk/core": "1.30.0",
        "serverless-infrastructure": "1.0.1",
        "aws-cdk": "1.47.0"
    }
  3. Install the pattern artifact npm package along with the dependencies:
    rm package-lock.json && rm -rf node_modules
    npm install
    
  4. As an optional step, if you need to override the default Lambda function code, build the npm project. The following commands build the Lambda function source code:
    cd ../override-serverless
    npm run build
    cd -
  5. Bootstrap the project with the following code:
    cdk bootstrap

    This step is applicable for serverless applications only. It creates the Amazon Simple Storage Service (Amazon S3) staging bucket where the Lambda function code and artifacts are stored.

  6. Deploy the construct:
    cdk deploy --require-approval never

    If the deployment is successful, you see messages similar to the following screenshots. The following screenshot shows the traditional stack output, with the URL of the Load Balancer endpoint.

    Traditional CloudFormation stack outputs

    The following screenshot shows the serverless stack output, with the URL of the API Gateway endpoint.

    Serverless CloudFormation stack outputs

    You can test the endpoint for both constructs using a web browser or the following curl command:

    curl <endpoint output>

    The traditional web app endpoint returns a response similar to the following:

    [{"app": "traditional", "id": 1605186496, "purpose": "blog"}]

    The serverless stack returns two outputs. Use the output named ServerlessStack-v1.Api. See the following code:

    [{"purpose":"blog","app":"serverless","itemId":"1605190688947"}]

  7. Optionally, upgrade to a new version of pattern construct.
    Let’s assume that a new version of the serverless construct, version 1.0.2, has been published, and we want to upgrade our AWS infrastructure to this version. To do this, edit the package.json file and change the traditional-infrastructure or serverless-infrastructure package version in the dependencies section to 1.0.2. See the following code example:

    "dependencies": {
        "@aws-cdk/core": "1.30.0",
        "serverless-infrastructure": "1.0.2",
        "aws-cdk": "1.47.0"
    }

    To update the serverless-infrastructure package to 1.0.2, run the following command:

    npm update

    Then redeploy the CloudFormation stack:

    cdk deploy --require-approval never

Cleaning up

To avoid incurring future charges, clean up the resources you created.

  1. Delete all AWS resources that were created using the pattern constructs. We can use the AWS CDK toolkit to clean up all the resources:
    cdk destroy --force

    For more information about the AWS CDK toolkit, see Toolkit reference. Alternatively, delete the stack on the AWS CloudFormation console.

  2. Delete the CodeArtifact resources by deleting the CloudFormation stack that was deployed via AWS CDK:
    cd prerequisites
    cdk destroy --force
    

Conclusion

In this post, we demonstrated how to publish AWS CDK pattern constructs to CodeArtifact as npm packages. We also showed how teams can consume the published pattern constructs and use them to provision their AWS infrastructure.

This mechanism allows your infrastructure for AWS services to be provisioned from the configuration that has been vetted for quality control and security and governance checks. It also provides control over when new versions of the pattern constructs are released, and when the teams consuming the constructs can upgrade to the newly released versions.

About the Authors

Usman Umar

 

Usman Umar is a Sr. Applications Architect at AWS Professional Services. He is passionate about developing innovative ways to solve hard technical problems for customers. In his free time, he likes going on biking trails, doing car modifications, and spending time with his family.

 

 

Krishnakumar Rengarajan

 

Krishnakumar Rengarajan is a DevOps Consultant with AWS Professional Services. He enjoys working with customers and focuses on building and delivering automated solutions that enable customers on their AWS cloud journeys.

Metasploit Wrap-Up

Post Syndicated from Alan David Foster original https://blog.rapid7.com/2021/01/15/metasploit-wrap-up-94/

Commemorating the 2020 December Metasploit community CTF


A new commemorative banner has been added to the Metasploit console to celebrate the teams that participated in the 2020 December Metasploit community CTF and achieved 100 or more points:

Metasploit Wrap-Up

If you missed out on participating in this most recent event, be sure to follow the Metasploit Twitter and Metasploit blog posts. If there are any future Metasploit CTF events, all details will be announced there!

If the banners aren’t quite your style, you can always disable them with the quiet flag:

msfconsole -q

Windows privilege escalation via Cloud Filter driver

Our very own gwillcox-r7 has created a new module for CVE-2020-1170 Cloud Filter Arbitrary File Creation EOP, with credit to James Forshaw for the initial vulnerability discovery and proof of concept. The Cloud Filter driver, cldflt.sys, on Windows 10 v1803 and later, prior to December 2020, did not set the IO_FORCE_ACCESS_CHECK or OBJ_FORCE_ACCESS_CHECK flags when calling FltCreateFileEx() and FltCreateFileEx2() within its HsmpOpCreatePlaceholders() function with attacker-controlled input. This meant that files were created with KernelMode permissions, thereby bypassing any security checks that would otherwise prevent a normal user from creating files in directories they don't have permission to write to.

This module abuses this vulnerability to perform a DLL hijacking attack against the Microsoft Storage Spaces SMP service, which grants the attacker code execution as the NETWORK SERVICE user. Users are strongly encouraged to set the PAYLOAD option to one of the Meterpreter payloads, as doing so will allow them to subsequently escalate their new session from NETWORK SERVICE to SYSTEM by using Meterpreter’s getsystem command to perform RPCSS Named Pipe Impersonation and impersonate the SYSTEM user.

New Modules (3)

Enhancements and Features

  • #14562 from zeroSteiner Improves the readability of Meterpreter error messages by replacing the command ID with the command name
  • #14582 from zeroSteiner This adds the possibility to run post module actions as commands. This also consolidates and improves existing VSS modules into one new single module with multiple actions.
  • #14600 from zeroSteiner The FileSystem mixin has been reorganized and a number of function aliases have been added to assist developers in using the module. Additionally new YARD documentation has been added to better explain the functionality of several of the FileSystem mixin’s functions to assist developers in determining when to use these functions.
  • #14606 from bwatters-r7 This adds a banner commemorating all of the teams that participated in the Q4 2020 CTF.

Bugs Fixed

  • #14515 from timwr This fixes an issue with both the cmd/unix/reverse_awk and cmd/unix/bind_awk payloads, which were not correctly terminating after a session was closed. This was causing endless session creation and high CPU consumption on the target.
  • #14605 from zeroSteiner This PR fixes an issue where the VHOST option was not being correctly populated when the RHOST option was a domain name.
  • #14613 from adfoster-r7 Fixes a regression error with modules depending on NTLM such as cve_2019_0708_bluekeep
  • #14614 from zeroSteiner A bug within the module for CVE-2020-17136 occurred where a relative path was used instead of an absolute path when attempting to load the C# exploit exe. The code has been replaced with a call to File.expand_path() to allow the module to dynamically determine the full path to this file, allowing users to use the module regardless of which directory they are in when running msfconsole.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Stenberg: Food on the table while giving away code

Post Syndicated from original https://lwn.net/Articles/842837/rss

Daniel Stenberg writes about getting paid to work on curl — 21 years after starting the project. “I ran curl as a spare time project for decades. Over the years it became more and more common that users who submitted bug reports or asked for help about things were actually doing that during their paid work hours because they used curl in a commercial surrounding – which sometimes made the situation almost absurd. The ones who actually got paid to work with curl were asking the unpaid developers to help them out.”

Click Here to Kill Everybody Sale

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/click-here-to-kill-everybody-sale.html

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

Note that I have had occasional problems with international shipping. The book just disappears somewhere in the process. At this price, international orders are at the buyer’s risk. Also, the USPS keeps reminding us that shipping — both US and international — may be delayed during the pandemic.

I have 500 copies of the book available. When they’re gone, the sale is over and the price will revert to normal.

Order here.

EDITED TO ADD: I was able to get another 500 from the publisher, since the first 500 sold out so quickly.

Please be patient on delivery. There are already 550 orders, and that’s a lot of work to sign and mail. I’m going to be doing them a few at a time over the next several weeks. So all of you people reading this paragraph before ordering, understand that there are a lot of people ahead of you in line.

EDITED TO ADD (1/16): I am sold out. If I can get more copies, I’ll hold another sale after I sign and mail the 1,000 copies that you all purchased.

Uganda’s January 13, 2021 Internet Shut Down

Post Syndicated from Celso Martinho original https://blog.cloudflare.com/uganda-january-13-2021-internet-shut-down/


Two days ago, through its communications regulator, Uganda’s government ordered the “Suspension Of The Operation Of Internet Gateways” the day before the country’s general election. This action was confirmed by several users and journalists who got access to the letter sent to Internet providers. In other words, the government effectively cut off Internet access from the population to the rest of the world.

On Cloudflare Radar, we want to help anyone understand what happens on the Internet. We are continually monitoring our network and exposing insights, threats, and trends based on the aggregated data that we see.

Uganda’s unusual traffic patterns quickly popped up in our charts. Our 7-day change in Internet Traffic chart in Uganda shows a clear drop to near zero starting around 1900 local time, when the providers received the letter.

Uganda's January 13, 2021 Internet Shut Down

This is also obvious in the Application-level Attacks chart.

Uganda's January 13, 2021 Internet Shut Down

The traffic drop was also confirmed by the Uganda Internet eXchange point, a place where many providers exchange their data traffic, on their public statistics page.

Uganda's January 13, 2021 Internet Shut Down

We keep an eye on traffic levels and BGP routing to our edge network, and are able to see which networks carry traffic to and from Uganda and their relative traffic levels. The cutoff is clear in those statistics also. Each colored line is a different network inside Uganda (such as ISPs, mobile providers, etc.)

Uganda's January 13, 2021 Internet Shut Down

We will continue to keep an eye on traffic levels from Uganda and update the blog when we see significant changes. At the time of writing, Internet access appears to be still cut off.

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/842834/rss

Security updates have been issued by Debian (flatpak, ruby-redcarpet, and wavpack), Fedora (dia, mingw-openjpeg2, and openjpeg2), Mageia (awstats, bison, cairo, kernel, kernel-linus, krb5, nvidia-current, nvidia390, php, and thunderbird), openSUSE (cobbler, firefox, kernel, libzypp, zypper, nodejs10, nodejs12, and nodejs14), Scientific Linux (thunderbird), Slackware (wavpack), SUSE (kernel, nodejs8, open-iscsi, openldap2, php7, php72, php74, slurm_20_02, and thunderbird), and Ubuntu (ampache and linux, linux-hwe, linux-hwe-5.4, linux-hwe-5.8, linux-lts-xenial).

[$] Fast commits for ext4

Post Syndicated from original https://lwn.net/Articles/842385/rss

The Linux 5.10 release included a change that is expected to significantly increase the performance of the ext4 filesystem; it goes by the name “fast commits” and introduces a new, lighter-weight journaling method. Let us look into how the feature works, who can benefit from it, and when its use may be appropriate.

Abuses and Conflicts of Interest at Sofiyska Voda (Sofia Water)

Post Syndicated from original https://bivol.bg/veolia-sofia-water.html

Friday, January 15, 2021


Not enough water for consumers, questionable management of tender procedures, constantly rising water prices amid significant leaks from aging infrastructure. These "peculiarities" of the global water giant Veolia…

NICER Protocol Deep Dive: Internet Exposure of DNS-over-TLS

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/01/15/nicer-protocol-deep-dive-internet-exposure-of-dns-over-tls/


Welcome to the NICER Protocol Deep Dive blog series! When we started researching what all was out on the internet way back in January, we had no idea we’d end up with a hefty, 137-page tome of a research report. The sheer length of such a thing might put off folks who might otherwise learn a thing or two about the nature of internet exposure, so we figured, why not break up all the protocol studies into their own reports?

So, here we are! What follows is taken directly from our National / Industry / Cloud Exposure Report (NICER), so if you don’t want to wait around for the next installment, you can cheat and read ahead!

[Research] Read the full NICER report today

Get Started

DNS-over-TLS (DoT) (TCP/853)

Encrypting DNS is great! Unless it’s baddies doing the encrypting.

TLDR

  • WHAT IT IS: DNS over TLS is just what it says on the tin: the DNS protocol embedded in a TLS connection, ostensibly to make your DNS request more confidential.
  • HOW MANY: 3,237 discovered nodes. A hodgepodge mix of vendor/version information was discernible, but you’ll need to read the details to find out more.
  • VULNERABILITIES: Whatever is in the DNS that backs the service or in the code that presents TLS (more often than not, a plain, ol’ web server).
  • ADVICE: It’s complicated (read on to find out why!)
  • ALTERNATIVES: Plain, simple, uncomplicated, and woefully unconfidential UDP DNS; DNS over HTTPS (DoH); DNS over QUIC (DoQ); DNS over avian carriers (DoAC).
  • GETTING: Drunk with power. There are nearly twice as many as there were in April 2019.

At face value, DNS over TLS (henceforth referred to as DoT) aims to be the confidentiality solution for a legacy cleartext protocol that has managed to resist numerous other confidentiality (and integrity) fixup attempts. It is one of a handful of modern efforts to help make DNS less susceptible to eavesdropping and person-in-the-middle attacks.

Discovery details

We chose to examine DoT because web browsers have become the new operating system of the internet, and DoT and cousins all allow browsers (or any app, really) to bypass your home, ISP, or organization’s choices of DNS resolution method and resolution provider. Since it’s presented over TLS, it can also be a great way for attackers to continue to use DNS as a command-and-control channel as well as an exfiltration channel.

We chose to examine DoT versus DoH because, well, it is far easier to enumerate DoT endpoints than it is DoH endpoints. It’s getting easier to enumerate DoH since there seems to be some agreement on the standard way to query it, so that will likely make it to a future report, but for now, let’s take a look at what DoT Project Sonar found:

NICER Protocol Deep Dive: Internet Exposure of DNS-over-TLS

Yes, you read that chart correctly! Ireland is No. 1 in terms of the number of nodes running a DoT service, and it’s all thanks to a chap named Daniel Cid, who co-runs CleanBrowsing, which is a “DNS-based content filtering service that offers a safe way to browse the web without surprises.” Daniel has his name on AS205157, which is allocated to Ireland, but the CleanBrowsing service itself is run out of California. In fact, CleanBrowsing comprises almost 50% of the DoT corpus (1,612 nodes), with 563 nodes attributed to the United States and a tiny number of servers attributed to a dozen or so other country network spaces.

Both the U.S. and Germany have a cornucopia of server types and autonomous systems presenting DoT services (none really stand out besides CleanBrowsing).

Since Bulgaria rarely makes it into top 10 exposure lists, we took a look at what was there and it’s a ton (relatively, anyway: 242) of DoT servers in Fiber Optics Bulgaria OOD, which is a kind of “meta” service provider for ISPs. Given the relative scarcity of IPv4 addresses, setting aside 242 of them just for DoT is a pretty major investment.

Even though the numbers are small, Japan’s presence is interesting, as it’s nearly all due to a single ISP: Internet Initiative Japan Inc.

NICER Protocol Deep Dive: Internet Exposure of DNS-over-TLS

In case you have been left unawares, Google is a big player in the DoT space, but it tends to concentrate DNS exposure to a tiny handful of IP addresses (i.e., that bar is not Google-proper). When we filter out CleanBrowsing (yep, they're everywhere), we're left with the major exposure in Google being … a couple dozen servers running an instance of Pi-hole (dnsmasq-pi-hole-2.80, to be precise). Cut/paste that finding for OVH and DigitalOcean and yep, that same Pi-hole setup is tops in those two clouds as well.

You don’t need to get all fancy and run a Pi-hole setup to host your own DoT server. Just fire up an nginx instance, create a basic configuration, set up your own DNS behind it, and now, you too can stop your ISP from snooping your DNS queries.

Exposure information

Here is where we’d normally talk about versions and CVEs, etc., but the DoT situation is complicated by a few things. First, we have big players in this space using proprietary solutions, so version fingerprints such as  “CleanBrowsing v1.6a” are not very useful information. Second, should we focus on the version of the web server or of the back-end DNS server (or, both)? The latter might not be useful, since you can configure an nginx DoT setup to proxy to a third party, and that’s what will get picked up in the response. Lastly, even if we focus on the second-tier “big guns,” such as PowerDNS, we end up with a situation like this:

NICER Protocol Deep Dive: Internet Exposure of DNS-over-TLS

Giving you that glimpse does help to show it’s utter chaos even in PowerDNS-land, but DNS and chaos seem to go hand in hand.

Attacker’s view

There are no DoT honeypots in Project Heisenberg, but DoT is just a TLS wrapper over a traditional DNS binary-format query. When we looked for that in the TCP/853 full packet captures, we saw ourselves (!) and a couple of other researchers. Not very exciting, but with the goal of DoT being privacy, we really shouldn't see random DoT requests.

Attackers are more likely to stand up their own DoT servers or reconfigure other DoT servers to use their DNS back-ends and then use those as covert channels once they gain a foothold after a successful phishing attack. This is a big reason we enumerate/catalog DoT, and we’re starting to see more DoT in residential ISP space and traditional hosting provider IP space. It looks like more folks are experimenting with DoT with each monthly study.

Our advice

IT and IT security teams should block TCP/853, lock down DoT and DoH browser settings as much as possible so there is no way to bypass organizational IT policies, and monitor for all attempts to use DoT or DoH services internally (or externally). In other words, unless you’re the ones setting them up, disallowing rogue, internal DoT is the safest course.
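
On a Linux egress firewall, for example, one way to enforce that block is a pair of rules along these lines (a sketch; adapt to whatever firewall tooling you actually use):

# Reject DNS-over-TLS originating from this host
iptables -A OUTPUT -p tcp --dport 853 -j REJECT
# Reject DNS-over-TLS routed through this gateway
iptables -A FORWARD -p tcp --dport 853 -j REJECT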

Cloud providers should consider offering managed DoT solutions and provide patched, secure disk images for folks who want to stand up their own. (This is one of the few cases where organizational advice and cloud advice are quite nearly opposite.)

Government cybersecurity agencies should monitor for malicious use of DoT and provide timely updates to the public. These centers should also be a source of unbiased, expert information on DoT, DoH, DoQ (et al).


Cell Phone Location Privacy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/cell-phone-location-privacy.html

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication (using Chaum blind signatures) tokens for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

KEMTLS: Post-quantum TLS without signatures

Post Syndicated from Sofía Celi original https://blog.cloudflare.com/kemtls-post-quantum-tls-without-signatures/


The Transport Layer Security protocol (TLS), which secures most Internet connections, has mainly been a protocol consisting of a key exchange authenticated by digital signatures used to encrypt data at transport[1]. Even though it has undergone major changes since 1994, when SSL 1.0 was introduced by Netscape, its main mechanism has remained the same. The key exchange was first based on RSA, and later on traditional Diffie-Hellman (DH) and Elliptic-curve Diffie-Hellman (ECDH). The signatures used for authentication have almost always been RSA-based, though in recent years other kinds of signatures have been adopted, mainly ECDSA and Ed25519. This recent change to elliptic curve cryptography at both the key exchange and the signature level has resulted in considerable speed and bandwidth benefits in comparison to traditional Diffie-Hellman and RSA.

TLS is the main protocol that protects the connections we use everyday. It’s everywhere: we use it when we buy products online, when we register for a newsletter — when we access any kind of website, IoT device, API for mobile apps and more, really. But with the imminent threat of the arrival of quantum computers (a threat that seems to be getting closer and closer), we need to reconsider the future of TLS once again. A wide-scale post-quantum experiment was carried out by Cloudflare and Google: two post-quantum key exchanges were integrated into our TLS stack and deployed at our edge servers as well as in Chrome Canary clients. The goal of that experiment was to evaluate the performance and feasibility of deployment of two post-quantum key exchanges in TLS.

Similar experiments have been proposed for introducing post-quantum algorithms into the TLS handshake itself. Unfortunately, it seems infeasible to replace both the key exchange and signature with post-quantum primitives, because post-quantum cryptographic primitives are bigger, or slower (or both), than their predecessors. The proposed algorithms under consideration in the NIST post-quantum standardization process use mathematical objects that are larger than the ones used for elliptic curves, traditional Diffie-Hellman, or RSA. As a result, the overall size of public keys, signatures and key exchange material is much bigger than those from elliptic curves, Diffie-Hellman, or RSA.

How can we solve this problem? How can we use post-quantum algorithms as part of the TLS handshake without making the material too big to be transmitted? In this blogpost, we will introduce a new mechanism for making this happen. We’ll explain how it can be integrated into the handshake and we’ll cover implementation details. The key observation in this mechanism is that, while post-quantum algorithms have bigger communication size than their predecessors, post-quantum key exchanges have somewhat smaller sizes than post-quantum signatures, so we can try to replace signatures with key exchanges in some places to save space.  We will only focus on the TLS 1.3 handshake as it is the TLS version that should be currently used.

Past experiments: making the TLS 1.3 handshake post-quantum


TLS 1.3 was introduced in August 2018, and it brought many security and performance improvements (notably, having only one round-trip to complete the handshake). But TLS 1.3 is designed for a world with classical computers, and some of its functionality will be broken by quantum computers when they do arrive.

The primary goal of TLS 1.3 is to provide authentication (the server side of the channel is always authenticated, the client side is optionally authenticated), confidentiality, and integrity by using a handshake protocol and a record protocol. The handshake protocol, the one of interest for us today, establishes the cryptographic parameters for securing and authenticating a connection. It can be thought of as having three main phases, as defined in RFC8446:

–  The Parameter Negotiation phase (referred to as ‘Server Parameters’ in RFC8446), which establishes other handshake parameters (whether the client is authenticated, application-layer protocol support, etc).

–  The Key Exchange phase, which establishes shared keying material and selects the cryptographic parameters to be used. Everything after this phase will be encrypted.

–  The Authentication phase, which authenticates the server (and, optionally, the client) and provides key confirmation and handshake integrity.

The main idea of past experiments that introduced post-quantum algorithms into the handshake of TLS 1.3 was to use them in place of classical algorithms by advertising them as part of the supported groups[2] and key share[3] extensions, and, therefore, establishing with them the negotiated connection parameters. Key encapsulation mechanisms (KEMs) are an abstraction of the basic key exchange primitive, and were used to generate the shared secrets. When using a pre-shared key, its symmetric algorithms can be easily replaced by post-quantum KEMs as well; and, in the case of password-authenticated TLS, some ideas have been proposed on how to use post-quantum algorithms with them.
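
To make the abstraction concrete, a KEM exposes three operations; the sketch below is purely illustrative (the names are hypothetical and not tied to any particular library or proposal):

// A KEM lets both sides derive a shared secret without a signature:
// the encapsulator uses the peer's public key, the decapsulator its secret key.
interface KeyEncapsulationMechanism {
  keygen(): { publicKey: Uint8Array; secretKey: Uint8Array };
  encapsulate(publicKey: Uint8Array): { ciphertext: Uint8Array; sharedSecret: Uint8Array };
  decapsulate(ciphertext: Uint8Array, secretKey: Uint8Array): Uint8Array; // recovers the same shared secret
}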

Most of the above ideas only provide what is often defined as ‘transitional security’, because its main focus is to provide quantum-resistant confidentiality, and not to take quantum-resistant authentication into account. It is possible to use post-quantum signatures for TLS authentication, but the post-quantum signatures are larger than traditional ones. Furthermore, it is worth noting that using post-quantum signatures is much more expensive than using post-quantum KEMs.

We can estimate the impact of such a replacement on network traffic by simply adding up the sizes of the cryptographic objects transmitted during the handshake. A typical TLS 1.3 handshake using elliptic curve X25519 and RSA-2048 transmits 1,376 bytes, corresponding to the public keys for key exchange, the certificate, the signature over the handshake, and the certificate chain. If we were to replace X25519 with the post-quantum KEM Kyber512 and RSA with the post-quantum signature Dilithium II, two of the more efficient proposals, the size of the transmitted data would increase to 10,036 bytes[4], roughly a sevenfold increase. The growth is mostly due to the size of the post-quantum signatures.

The question then is: how can we achieve full post-quantum security while keeping the handshake efficient enough to use in practice?

A more efficient proposal: KEMTLS

There is a long history of other mechanisms, besides signatures, being used for authentication. Modern protocols, such as the Signal protocol, the Noise framework, or WireGuard, rely on key exchange mechanisms for authentication; but they are unsuitable for the TLS 1.3 case as they expect the long-term key material to be known in advance by the interested parties.

The OPTLS proposal by Krawczyk and Wee authenticates the TLS handshake without signatures by using a non-interactive key exchange (NIKE). However, the only somewhat efficient construction for a post-quantum NIKE is CSIDH, the security of which is the subject of an ongoing debate. But we can build on this idea, and use KEMs for authentication. KEMTLS, the current proposed experiment, replaces the handshake signature by a post-quantum KEM key exchange. It was designed and introduced by Peter Schwabe, Douglas Stebila and Thom Wiggers in the publication ‘Post-Quantum TLS Without Handshake Signatures’.

KEMTLS, therefore, achieves the same goals as TLS 1.3 (authentication, confidentiality and integrity) in the face of quantum computers. But there’s one small difference compared to the TLS 1.3 handshake. KEMTLS allows the client to send encrypted application data in the second client-to-server TLS message flow when client authentication is not required, and in the third client-to-server TLS message flow when mutual authentication is required. Note that with TLS 1.3, the server is able to send encrypted and authenticated application data in its first response message (although, in most uses of TLS 1.3, this feature is not actually used). With KEMTLS, when client authentication is not required, the client is able to send its first encrypted application data after the same number of handshake round trips as in TLS 1.3.
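
Sketching the server-only-authenticated KEMTLS handshake from the paper (simplified, with extensions and optional messages omitted) makes this timing easier to see; the client’s application data goes out in its second flight, together with its Finished:

    Client                                      Server
    ClientHello (ephemeral KEM public key)  -->
                                            <-- ServerHello (ciphertext for the ephemeral key),
                                                EncryptedExtensions,
                                                Certificate (long-term KEM public key)
    KEM ciphertext (for the long-term key),
    Finished, application data              -->
                                            <-- Finished, application data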

Intuitively, the handshake signature in TLS 1.3 proves possession of the private key corresponding to the public key certified in the TLS 1.3 server certificate. For a signature scheme, signing the handshake transcript is the straightforward way to prove possession; another way to prove possession is through a key exchange. By carefully considering the key derivation sequence, a server can decrypt the messages sent by the client only if it holds the private key corresponding to the certified public key. Therefore, authentication is achieved implicitly. It is worth noting that KEMTLS still relies on signatures by certificate authorities to authenticate the long-term KEM keys.
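
As a minimal sketch of this idea (not the actual KEMTLS key schedule), the following Go program uses the Kyber512 KEM from CIRCL: the client encapsulates against the server’s certified public key, and only a server holding the matching private key can recover the shared secret that protects the rest of the conversation. The package path and the GenerateKeyPair/Encapsulate/Decapsulate methods follow CIRCL’s kem.Scheme interface as we understand it; check the library for the exact signatures.

package main

import (
	"bytes"
	"fmt"

	"github.com/cloudflare/circl/kem/kyber/kyber512"
)

func main() {
	scheme := kyber512.Scheme()

	// Server's long-term KEM key pair; the public key is what gets certified
	// (in our implementation, via a delegated credential).
	serverPK, serverSK, err := scheme.GenerateKeyPair()
	if err != nil {
		panic(err)
	}

	// Client: encapsulate against the certified public key. The ciphertext is
	// sent to the server; the shared secret is mixed into the key schedule.
	ct, clientShared, err := scheme.Encapsulate(serverPK)
	if err != nil {
		panic(err)
	}

	// Server: only the holder of the private key can recover the same shared
	// secret, so being able to read the client's subsequent traffic is what
	// implicitly authenticates the server.
	serverShared, err := scheme.Decapsulate(serverSK, ct)
	if err != nil {
		panic(err)
	}

	fmt.Println("shared secrets match:", bytes.Equal(clientShared, serverShared))
}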

With KEMTLS, application data transmitted during the handshake is implicitly authenticated rather than explicitly (as in TLS 1.3), and has slightly weaker downgrade resilience and forward secrecy; but full downgrade resilience and forward secrecy are achieved once the KEMTLS handshake completes.

KEMTLS: Post-quantum TLS without signatures

By replacing the handshake signature by a KEM key exchange, we reduce the size of the data transmitted in the example handshake to 8,344 bytes, using Kyber512 and Dilithium II — a significant reduction. We can reduce the handshake size even for algorithms such as the NTRU-assumption based KEM NTRU and signature algorithm Falcon, which have a less-pronounced size gap. Typically, KEM operations are computationally much lighter than signing operations, which makes the reduction even more significant.

KEMTLS was presented at ACM CCS 2020. You can read more about its details in the paper. It was initially implemented in the Rustls library by Thom Wiggers, using optimized C/assembly implementations of the post-quantum algorithms provided by the PQClean and Open Quantum Safe projects.

Cloudflare and KEMTLS: the implementation

As part of our effort to show that TLS can be completely post-quantum safe, we implemented the full KEMTLS handshake in Golang’s TLS 1.3 suite. The implementation was done in several steps:

  1. We first needed to clone our own version of Golang, so we could add different post-quantum algorithms to it. You can find our own version here. This code gets constantly updated with every release of Golang, following these steps.
  2. We needed to implement post-quantum algorithms in Golang, which we did on our own cryptographic library, CIRCL.
  3. As we cannot force certificate authorities to use certificates with long-term post-quantum KEM keys, we decided to use Delegated Credentials. A delegated credential is a short-lived key that the certificate’s owner has delegated for use in TLS, so it can carry a post-quantum KEM public key (see the sketch after this list). See its implementation in our Golang code here.
  4. We implemented mutual auth (client and server authentication) KEMTLS by using Delegated Credentials for the authentication process. See its implementation in our Golang code here. You can also check its test for an overview of how it works.
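
Conceptually, a delegated credential (step 3 above) bundles a short-lived public key with an expiry and a signature by the certificate’s key. The Go struct below is only a descriptive sketch of that layout, not the actual types in our fork or in the delegated credentials specification:

// Descriptive sketch only; field names and types are illustrative.
type delegatedCredential struct {
	ValidTime uint32 // lifetime of the credential, in seconds, relative to the certificate
	PublicKey []byte // the delegated public key; for KEMTLS, a post-quantum KEM public key
	Scheme    uint16 // identifier of the algorithm the delegated key will be used with
	Signature []byte // signature over the above by the (classical) certificate key
}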

Implementing KEMTLS was a straightforward process, although it did require changes to the way Golang handles a TLS 1.3 handshake and how the key schedule works.

A “regular” TLS 1.3 handshake in Golang (from the server perspective) looks like this:

func (hs *serverHandshakeStateTLS13) handshake() error {
    c := hs.c

    // For an overview of the TLS 1.3 handshake, see RFC 8446, Section 2.
    if err := hs.processClientHello(); err != nil {
        return err
    }
    if err := hs.checkForResumption(); err != nil {
        return err
    }
    if err := hs.pickCertificate(); err != nil {
        return err
    }
    c.buffering = true
    if err := hs.sendServerParameters(); err != nil {
        return err
    }
    if err := hs.sendServerCertificate(); err != nil {
        return err
    }
    if err := hs.sendServerFinished(); err != nil {
        return err
    }
    // Note that at this point we could start sending application data without
    // waiting for the client's second flight, but the application might not
    // expect the lack of replay protection of the ClientHello parameters.
    if _, err := c.flush(); err != nil {
        return err
    }
    if err := hs.readClientCertificate(); err != nil {
        return err
    }
    if err := hs.readClientFinished(); err != nil {
        return err
    }

    atomic.StoreUint32(&c.handshakeStatus, 1)

    return nil
}

We had to interrupt the process when the server sends the Certificate (sendServerCertificate()) in order to send the KEMTLS-specific messages. In the same way, we had to add the appropriate KEMTLS messages to the client’s handshake. And, as we didn’t want to change the way Golang handles TLS 1.3 too much, we only added one new constant to the configuration that a server can use to ask for the client’s certificate (the constant is serverConfig.ClientAuth = RequestClientKEMCert).

The implementation is easy to work with: if a delegated credential or a certificate has a public key of a supported post-quantum KEM algorithm, the handshake will proceed with KEMTLS. If the server requests a Client KEMTLS Certificate, the handshake will use client KEMTLS authentication.
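
For instance, a server built against our fork could opt into KEMTLS client authentication with a configuration along these lines. Only the RequestClientKEMCert constant comes from the description above; the certificate paths and the remaining fields are a minimal, illustrative sketch rather than the fork’s exact API:

package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// The certificate (or the delegated credential attached to it) is expected
	// to carry a post-quantum KEM public key for the KEMTLS handshake.
	cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
	if err != nil {
		log.Fatal(err)
	}

	cfg := &tls.Config{
		MinVersion:   tls.VersionTLS13,
		Certificates: []tls.Certificate{cert},
		// New constant in our fork: ask the client for a KEMTLS certificate.
		ClientAuth: tls.RequestClientKEMCert,
	}

	ln, err := tls.Listen("tcp", ":4433", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	log.Println("KEMTLS-capable server listening on", ln.Addr())
	// Accept loop omitted.
}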

Running the Experiment

So, what’s next? We’ll take the code we have produced and run it on actual Cloudflare infrastructure to measure how efficiently it works.

Thanks

Many thanks to everyone involved in the project: Chris Wood, Armando Faz-Hernández, Thom Wiggers, Bas Westerbaan, Peter Wu, Peter Schwabe, Goutam Tamvada, Douglas Stebila, Thibault Meunier, and the whole Cloudflare Research team.

[1] It is worth noting that the RSA key transport in TLS ≤1.2 has the server authenticated only by RSA public key encryption, although the server’s RSA public key is certified using RSA signatures by Certificate Authorities.
[2] An extension used by the client to indicate which named groups (Elliptic Curve Groups, Finite Field Groups) it supports for key exchange.
[3] An extension which contains the endpoint’s cryptographic parameters.
[4] These numbers, as noted in the paper, are based on the round-2 submissions.

RetroPie booze barrel

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/retropie-booze-barrel/

What do we want? Retro gaming, adult beverages, and our favourite Spotify playlist. When do we want them? All at the same time.

Luckily, u/breadtangle took to reddit to answer our rum-soaked prayers with this beautifully crafted beer barrel-cum-arcade machine-cum-drinks cabinet.

We approve of this drink selection: the drinks cabinet doors open at the front of the barrel, with the arcade controls and screen up top

The addition of a sneaky hiding spot for your favourite tipple, plus a musical surprise, sets this build apart from the popular barrel arcade projects we’ve seen before, like this one featured a few years back on the blog.

Retro gaming

A Raspberry Pi 3 Model B+ runs RetroPie, offering all sorts of classic games to entertain you while you sample from the grownup goodies hidden away in the drinks cabinet.

The maker’s top choice is Tetris Attack for the SNES.

Such a beautiful finish

Background music

What more could you want now you’ve got retro games and an elegantly hidden drinks cabinet at your fingertips? u/breadtangle’s creation has another trick hidden inside its smooth wooden curves.

The Raspberry Pi computer used in this build also runs Raspotify, a Spotify Connect client for Raspberry Pi that allows you to stream your favourite tunes and playlists from your phone while you game.

You can set Raspotify to play via Bluetooth speakers, but if you’re using regular speakers and are after a quick install, whack this command in your Terminal:

curl -sL https://dtcooper.github.io/raspotify/install.sh | sh

Behind the scenes: the joystick and buttons panel during the making process

u/breadtangle neatly tucked a pair of Logitech z506 speakers on the sides of the barrel, where they could be protected by the overhang of the glass screen cover.

Hardware

The build’s joysticks and buttons came from Amazon, and they’re set into an off-cut piece of kitchen countertop. The glass screen protector is another Amazon find and sits on a rubber car-door edge protector.

The screen itself is lovingly tilted towards the controls, to keep players’ necks comfortable, and u/breadtangle finished off the build’s look with a barstool to sit on while gaming.

We love it, but we have one very important question left…

Can we come round and play?

The post RetroPie booze barrel appeared first on Raspberry Pi.
