Tag Archives: tv

Media freedom and pluralism

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/17/sofia_16052018/

An international conference, “Media Freedom and Pluralism: How to Restart a Key Pillar of the EU”, was held in Sofia.

Recordings of the conference can be viewed here (Session I and Session II) or here.

The full text of the closing declaration, with the recommendations at the end:

Media Freedom in Europe: Code Red

In 1997, the UNESCO Sofia Declaration on free and pluralistic media was a fervent call for progress, in a context where the emergence of new information and communication technologies was seen as a new opportunity for pluralism, economic and social development, democracy and peace. Now, 21 years later, independent media in Europe are under unprecedented pressure. A combination of factors, including the murders of journalists and physical threats against them, growing political and institutional pressure, repressive legislation targeting the media, disruptive technologies and the financial crisis, is putting the very existence of free media at risk in a number of European countries.

Free access to diverse information and opinion is not only a fundamental human right; it is essential for citizens' participation in democratic society. It is the basic framework that allows people to hold those in power to account, so as to limit crime and corruption, and it is a key factor in ensuring a functioning democracy.

The creation of controlled media is the first step towards the models of public governance known as “soft dictatorships” or “state capture”, which pose serious threats to the normal functioning of democracy, not only in the countries concerned but across the entire European Union.

Recent events in some EU member states clearly violate national and international law on the protection of media freedom, namely the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union (EU). The EU and the Council of Europe have established legal procedures for protecting freedom of expression before the Court of Justice of the European Union and the European Court of Human Rights. Nevertheless, these structures are sometimes insufficient to uphold and enforce fundamental European values and constitutional traditions.

The EU budget and the rule of law

On 2 May 2018 the European Commission proposed a budget for the period 2021-2027 and drew up a strategic plan for penalising countries it considers to have violated the EU's fundamental values. The plan ties European Union funding to rule-of-law principles, but it is too narrow and makes no mention of media freedom.

The Commission proposes a new mechanism to protect the EU budget from the financial risks associated with generalised rule-of-law deficiencies in member states. The new instruments would allow the Union to suspend, reduce or restrict access to EU funding in a manner proportionate to the nature, gravity and scope of the deficiencies. Such a decision would be proposed by the Commission and adopted by the Council by qualified-majority voting, making it impossible for one or two states to block punitive measures.

The proposal was expected, and was made in response to Poland, the first and so far only country subject to the rule-of-law mechanism, after the Commission found systematic conduct endangering democracy. Hungary, where democratic conditions have deteriorated following sustained efforts to institutionalise “illiberal democracy”, is also on the Commission's radar. Poland and Bulgaria, the latter holding the presidency of the Council of the EU, were among the first states to react negatively to the proposed new regulation.

Murders and physical threats

The erosion of the European democratic model, a trend observed in recent years, continues and is becoming ever more alarming. The region has been shaken by two murders and by threats against investigative reporters, as well as by unprecedented verbal attacks on the media. The traditionally safe environment for journalists in Europe has begun to deteriorate. Two murders in five months, the first in Malta and the second in Slovakia, point to a worrying decline for the continent's democracies. In Malta, the death of journalist and blogger Daphne Caruana Galizia, killed by a bomb deliberately planted in her car, lifted the veil on the judicial harassment and threats to which journalists in the island state are constantly subjected.

Caruana Galizia had been threatened for years and was the subject of 42 civil and five criminal cases. Slovakia is still reeling from the murder of a 27-year-old reporter who was investigating corruption and the mafia. In April, Italian law-enforcement authorities foiled a mafia plot to murder the journalist Paolo Borrometi.

Economic sustainability

Media independence and free journalism are possible only if media companies are economically independent and financially sustainable. In the 21st century, print publishers remain the main investors in journalistic content and have stepped up their efforts and investment to offer the latest innovative digital services to readers in Europe and the rest of the world. These efforts have proved fruitful, as newspaper publications are reaching unprecedented numbers of readers. This success only confirms that the future of the press is not only digital but also full of opportunities to expand readers' interest in newspaper and magazine publications.

Unfortunately, over the past decade a significant part of the economic base of independent media has eroded. Media companies have suffered a double shock, to both the business cycle and their business model.

Recovery from the global financial crisis in Europe was too slow and too costly, and it coincided with the crisis in the media business model. In addition, the emergence of digital platforms and global digital giants such as Google and Facebook deepened the imbalance between intermediaries on the one hand and investors and content creators, namely publishers, on the other. Despite growing demand for news and commentary, monetising such content has proved challenging: copyright and VAT laws dating from the pre-digital era cannot protect investment and market share, and broad access to online content hands the major platforms the lion's share of advertising revenue.

Data from the Interactive Advertising Bureau for 2016 show that 89% of online advertising spending went to Google and Facebook, with the remaining 11% shared among all other digital players.

Many digital-related EU proposals threaten heavy and unfair judicial and administrative procedures. The e-privacy proposals, for example, would favour the strongest technology players and hamper smaller ones that depend on cookies and on complex third-party partnerships to take part in the data economy.

In many EU countries, the economic difficulties that media companies have experienced have led directly to a concentration of political control over the media and increased dependence on government funding. In some cases, ruling political elites use government and EU funds to support loyal media and manipulate public opinion. In those countries, public television and radio have also lost their independence or are under growing political pressure. In practice, these processes have brought large parts of the media market under the control of governing politicians and their supporters for propaganda purposes, accompanied by heavy attacks on the few remaining independent media. In some EU countries we are close to the ultimate “goal”: unaccountable power.

Code red for media freedom in European Union countries

Poland

Nothing, it seems, can stop Law and Justice, the national-conservative party that won the October 2015 elections and that seeks to reform Poland radically as it sees fit, without regard for those who think differently. Media freedom is one of the main casualties of its project. The public media have officially been renamed “national media” and turned into mouthpieces for government propaganda. Their new managers tolerate neither opposition nor neutrality from staff, and remove those who refuse to comply.

Investigative journalist Tomasz Piątek was threatened with imprisonment over his criticism of the defence minister's ties to Russian intelligence services, and had to wait many months before the charges were finally dropped. The broadcasting council, now under government control, attempted to fine the private television channel TVN for broadcasting anti-government messages in its coverage of a wave of protests in December 2016; the fine was subsequently withdrawn under international pressure. To all calls for moderation, the government responds with the familiar arguments that brook no dissent.

Hungary

Businessmen with close ties to Prime Minister Viktor Orbán's Fidesz party managed not only to acquire new media outlets in 2017 but also to displace the foreign media companies that had invested in Hungarian media. Their biggest success was taking control of the last three regional dailies. Nevertheless, the Hungarian media landscape remains diverse, and print and online outlets do not hesitate to publish investigations into alleged corruption involving the most influential Fidesz figures and state officials. Two types of media coexist in Hungary. One consists of pro-government, pro-Fidesz outlets obsessed with migration, with “defending Hungary and its borders”, and with the smear campaign against the Hungarian-American billionaire philanthropist George Soros.

The other type of media focuses on exposing corruption scandals. The survival of government-critical media is owed in large part to Orbán's former ally Lajos Simicska, who publicly broke with the prime minister in February 2015 and continues to finance a media empire originally created to support Fidesz. The government and its business allies have now set their sights on two outlets, the largest commercial channel, RTL Klub, and the leading political news site, Index.hu. Both are critical of the government.

Bulgaria

In recent years, media freedom in Bulgaria has deteriorated at an alarming rate. According to the Reporters Without Borders World Press Freedom Index, Bulgaria has fallen 75 places in the last 12 years, from 36th in 2006 to 111th in 2018. There is growing political pressure and a growing number of physical threats against investigative journalists, publishers and independent media. The main instruments of pressure are the concentration of media ownership, economic dependencies and other forms of political control over most of the media space, and a monopoly over the channels for distributing media content. The model also includes strong influence over the government, the prosecution service and the judiciary, as well as control over most of the independent regulators. All of this amounts to a huge political and business conglomerate led by the active politician, former magistrate, businessman and media owner Delyan Slavchev Peevski.

Since 2009, with brief interruptions, Bulgaria has been governed by GERB and its leader, three-time Prime Minister Boyko Borissov, who enjoys favourable treatment from the Peevski-controlled media. Prime Minister Borissov not only persistently refuses to acknowledge that media freedom is under threat, but has played a key role in increasing Mr Peevski's access to public resources, while providing him with additional institutional instruments of repression, including legislative decisions used against independent media.

Malta

2017 was marked by the car-bomb assassination of Daphne Caruana Galizia, an investigative journalist who had exposed the “dirty secrets” of local politics and indirectly triggered the early general election of June 2017. For years she had been under mounting pressure because of the popularity of her blog and her work unravelling local connections to the so-called Panama Papers, among other things. At the time of her murder, 42 civil suits and five criminal defamation cases were already pending against her. She was also a constant target of threats and other forms of harassment. The judicial harassment was intended to drive her out of public life. Her case was a classic example of lawsuits in which powerful claimants try to use the fear of enormous legal-defence costs to silence their critics. Under threat from prominent figures or business groups, independent media are forced to back down and remove publications from their sites.

Slovakia

The murder of investigative reporter Ján Kuciak in February 2018 triggered an unprecedented political upheaval in Slovakia and startled the international community. Kuciak had been investigating, for the website Aktuality.sk, alleged links between the Italian mafia and Smer-SD (the left-populist party leading the governing coalition), and the alleged embezzlement of EU funds. In an unfinished article published after his death, he accused Prime Minister Robert Fico of direct involvement.

The ministers of culture and the interior were forced to resign, and after large street protests Fico himself had to follow suit. Like other Slovak politicians, Fico had subjected the media to intensifying attacks. In November 2016 he described journalists as “dirty anti-Slovak prostitutes” and accused them of trying to obstruct Slovakia's EU presidency; that was his response to a question about alleged irregularities in public procurement connected with the presidency. In the absence of strong institutions that could protect them, journalists in Slovakia are increasingly exposed to every kind of harassment, intimidation and insult.

Kuciak's murder revived questions about the unexplained disappearance of two journalists, one in 2008 and the other in 2015, and once again raised the issue of journalists' safety. In recent years, Slovak media outlets previously owned by leading international media companies have been acquired by local oligarchs whose main business interests lie outside journalism. The public radio and television broadcaster RTVS, which in recent years had become a symbol of journalistic integrity, is now under threat.

In August 2017 its director-general shut down the country's only investigative television programme after it aired a critical report on the junior party in the governing coalition. The right of reply to critical media coverage, which politicians obtained under the 2007 media law, was narrowed by a 2011 amendment, but defamation is still punishable by up to eight years in prison under a Criminal Code provision that politicians continue to use to file complaints against individual journalists and media outlets.

Czech Republic

It is hard to imagine a president brandishing a firearm in front of journalists, but that is what Czech President Miloš Zeman did at a press conference in October 2017, waving a Kalashnikov inscribed “for journalists”. Re-elected in January 2018, Zeman has a weakness for this kind of provocation and has repeatedly described journalists as “manure” and “hyenas”. The president and several other political leaders have recently stepped up their verbal attacks on the independence of the public media, especially Czech Television. Several new bills would also broaden the scope of criminal penalties for defamation, particularly defamation of the president. The concentration of media ownership has become critical, as a new class of oligarchs began using their wealth in 2008 to buy newspapers and expand their influence. One of them, Prime Minister Andrej Babiš, owns one of the most influential dailies in the Czech Republic.

Recommendations for future policy:

1. Journalists, publishers, NGOs and other key stakeholders should join forces to improve the effectiveness of the legal-protection mechanisms available before the Court of Justice of the European Union and the European Court of Human Rights. One practical idea would be to create an expert legal entity, a “Media Freedom Defence Fund”, to assist citizens, independent journalists, publishers and media companies in invoking international law against abuses of power by national governments. Such a fund could also initiate and support independent international investigations into cases of media pressure by high-ranking figures in EU member states;

2. The European Commission should broaden the newly proposed provision by tying the allocation of EU funds not only to the rule of law but also to media freedom in member states and candidate countries. Beyond the rule of law, the Commission should also explicitly monitor compliance with national and European law on violations of human rights, freedom of expression, civil society and the functioning of democracy. In countries such as Bulgaria, Hungary and Poland, the repression of media freedom is carried out most of the time through legal and institutional instruments created by extraordinary legislation at national level.

On 3 May 2018 the European Parliament adopted a resolution according to which the Commission should work towards an EU mechanism for democracy, the rule of law and fundamental rights, accompanied by independent monitoring mechanisms to assess the state of media freedom and pluralism and any related violations.

3. The media business model is in transition. The economic survival of media companies and independent journalism in smaller markets is very difficult. Yet free media are the cornerstone of civil society and of a functioning democracy. The EU regards the free press and freedom of expression as a “public good” and should develop public mechanisms for its sustainable financing in order to guarantee its independence. This could include EU funding directed straight to journalists and media companies in member states, bypassing national-government intermediaries;

4. Encourage innovation and support the digital transformation of media in the EU. The EU should develop a more diverse toolkit to help bridge the technological gap between European media companies and the global platforms, and should also encourage media entrepreneurship and start-ups in their search for new, sustainable business models and innovative ways of generating revenue;

5. EU and member-state stakeholders should firmly support publishers' rights in connection with the review of the Copyright Directive, so that publishers can better enforce their already existing rights and negotiate with the major platforms. In addition, VAT rates for print and online publications should be harmonised. It must also be ensured that the digital sphere is a place where all participants can thrive, with a level playing field and more balance vis-à-vis the technology giants and platforms. Supporting professional media is essential to democratic life and to the enlightenment of European citizens, and is the only long-term answer to disinformation.

Austria: regulator refuses ORF's own YouTube channel and a new ORF film service

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/14/orf-3/

News from Austria on how the public broadcaster's fulfilment of its public-service remit is supervised by the national regulator.

The regulator has not approved two ORF proposals: its own channel on YouTube, and a new subscription service offering programmes and films from ORF's channels.

Evidently, in Austria the regulator plays an ex ante role with a broader scope, which protects the audience. BNT is on YouTube, or at least an “Official BNT channel” is visible on YouTube. But this is not the only illustration of the Bulgarian regulator's limited role: it is unclear, for example, what is happening with the multi-thematic BNT2, nor is there a monitoring report to be read on how BNT fulfils its licences.

*

Here is what is happening in Austria:

According to a press release issued today, the Austrian media authority KommAustria has rejected plans by the public broadcaster ORF to create its own YouTube channel.

ORF had applied for permission to add a YouTube channel to its media activities. The channel was mainly intended to offer ORF programmes that, owing to legal restrictions, currently cannot be made available for more than seven days via the ORF TVthek catch-up service.

KommAustria argues that exclusive cooperation between ORF and YouTube would discriminate against other, comparable companies.

The regulator also takes existing services into account when approving new ORF services. KommAustria anticipates a decline in interest in ORF TVthek if an ORF channel were created on YouTube. It further argues that the overall period for which programmes are available on ORF TVthek (more than seven days) could be extended by revising the legal framework.

Also rejected was ORF's request to turn Flimmit into an Austrian Netflix-style service, offering programmes that have already been broadcast on its television channels or are intended for broadcast. ORF owns Flimmit through a subsidiary.

According to KommAustria, ORF is not in principle prohibited from offering a subscription service. In this particular case, however, the request is justified neither economically nor legally: for instance, it remains “entirely unclear” how large the licence-fee share of this service would be. As I understand it, the proportion between subscription revenue and public funding has not been clarified.

ORF may appeal.

 

Is the radio and television fee compatible with EU law?

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/13/fee_psm/

In March 2018, the Oberverwaltungsgericht Rheinland-Pfalz (Higher Administrative Court of Rhineland-Palatinate, OVG Rheinland-Pfalz) ruled that the radio and television fee in Germany is compatible with EU law (case no. 7 A 11938/17). The court rejected the argument that the fee is incompatible with EU law on the ground that it gives public-service broadcasters an unfair advantage over their private competitors.

The court noted that in 2016 the Bundesverwaltungsgericht (Federal Administrative Court, BVerwG) had already established that the fee, in the new form introduced in 2013, complies with EU law (judgment of 18 March 2016, BVerwG 6 C 6.15). According to that judgment, the introduction of the fee did not require the consent of the European Commission and is compatible with the Audiovisual Media Services Directive. Public and private broadcasters will inevitably be financed in different ways. That does not necessarily mean, however, that public broadcasters receive an unfair advantage: unlike private broadcasters, they are subject to much more restrictive advertising rules and are therefore financially dependent on the fee.

Meanwhile, the Landgericht Tübingen (Regional Court of Tübingen, decision of 3 August 2017, case no. 5 T 246/17 and others) has held that the fee violates EU law, and as a result a request for a preliminary ruling has been submitted to the Court of Justice of the EU: Case C-492/17.

 

Questions referred for a preliminary ruling:

1)

Is the national Gesetz vom 18.10.2011 zur Geltung des Rundfunkbeitragsstaatsvertrags (RdFunkBeitrStVtrBW) vom 17. Dezember 2010 (Law of 18 October 2011 implementing the State Treaty on the broadcasting contribution of 17 December 2010, “RdFunkBeitrStVtrBW”) of the Land of Baden-Württemberg, as last amended by Article 4 of the Neunzehnter Rundfunkänderungsstaatsvertrag (Nineteenth State Treaty amending the State Treaties on broadcasting) of 3 December 2015 (Law of 23 February 2016, GBl. pp. 126, 129), incompatible with EU law in that the contribution which it has levied since 1 January 2013, unconditionally and in principle from every adult residing in the German federal Land of Baden-Württemberg, for the benefit of the broadcasters SWR and ZDF, constitutes aid contrary to EU law which favours exclusively those public broadcasters over private broadcasters? Must Articles 107 and 108 TFEU be interpreted as meaning that the Law on the broadcasting contribution required the Commission's approval and, in the absence of that approval, is invalid?

2)

Must Article 107 TFEU or Article 108 TFEU be interpreted as covering the rules laid down in the national law “RdFunkBeitrStVtrBW”, under which a contribution for the benefit solely of state/public broadcasters is in principle levied unconditionally from every adult residing in Baden-Württemberg, on the ground that that contribution comprises aid which is contrary to EU law, favours those broadcasters, and serves to exclude, for technical reasons, operators from other EU states, inasmuch as the contributions are intended to be used to establish a competing means of transmission (a DVB-T2 monopoly) which is not intended to be used by foreign operators? Must Article 107 TFEU or Article 108 TFEU be interpreted as covering not only direct subsidies but also other economically relevant privileges (the power to issue enforceable instruments, the power to act both as a commercial undertaking and as a public authority, and more favourable treatment in the calculation of debts)?

3)

Is it compatible with the principle of equal treatment and with the prohibition of preferential aid that, under a national law of the Land of Baden-Württemberg, a German television broadcaster which is governed by public law and vested with the powers of a public authority, but which competes with private broadcasters on the advertising market, is privileged as against those broadcasters in that, unlike its private competitors, it does not have to apply to the ordinary courts for an enforceable instrument in respect of its claims against viewers before it can pursue enforcement, but may itself, without the involvement of a court, issue an instrument that likewise entitles it to enforcement?

4)

Is it compatible with Article 10 of the ECHR and Article [11] of the Charter of Fundamental Rights (freedom of information) for a Member State to provide, in a national law of the Land of Baden-Württemberg, that a television broadcaster vested with the powers of a public authority may demand from every adult residing in the broadcasting area a contribution for the financing of precisely that broadcaster, on pain of a fine for non-payment, irrespective of whether that person possesses a receiver at all or uses only the services of other operators, namely foreign or private ones?

5)

Is the national law “RdFunkBeitrStVtrBW”, in particular Paragraphs 2 and 3, compatible with the EU-law principles of equal treatment and non-discrimination where the contribution payable unconditionally by every resident to finance a public television broadcaster imposes on a person raising a child alone a burden many times greater than the amount owed by a person who shares a home with others? Is Directive 2004/113/EC (1) to be interpreted as meaning that the contribution at issue is also covered by it, and that indirect disadvantage suffices where, in actual fact, 90% of those bearing the heavier burden are women?

6)

Is the national law “RdFunkBeitrStVtrBW”, in particular Paragraphs 2 and 3, compatible with the EU-law principles of equal treatment and non-discrimination where the contribution payable unconditionally by every resident to finance a public television broadcaster is twice as high for persons who need a second home for work-related reasons as for other workers?

7)

Is the national law “RdFunkBeitrStVtrBW”, in particular Paragraphs 2 and 3, compatible with the EU-law principles of equal treatment and non-discrimination and with the freedom of establishment where the contribution payable unconditionally by every resident to finance a public television broadcaster is arranged in such a way that, with the same possibility of receiving the broadcasts, a German national residing immediately before the border with a neighbouring EU state owes the contribution solely on account of his place of residence, whereas a German national residing immediately beyond the border does not; and where, likewise, a national of another EU Member State who for work-related reasons must take up residence immediately beyond an internal EU border bears the contribution, but a national of the EU residing immediately before that border does not, even if neither has any interest in receiving the German broadcaster's programmes?

A comment on question 4: a question on compatibility with Article 10 of the Convention on Human Rights has been admitted. The Court of Human Rights has already ruled on this; there is an inadmissibility decision in a similar case from about a decade ago (see Faccio v Italy, about which I have written here), but let the Court of Justice of the EU rule as well.

And, once again, on the nature of the fee: if people without a receiver also pay, it is evidently not a fee in the sense of a price for a service, but a tax claim; in my view, that is the trend.

We await the judgment of the Court of Justice of the EU. Let the case-law develop and multiply.

Securing Your Cryptocurrency

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-up-your-cryptocurrency/

Securing Your Cryptocurrency

In our blog post on Tuesday, Cryptocurrency Security Challenges, we wrote about the two primary challenges faced by anyone interested in safely and profitably participating in the cryptocurrency economy: 1) make sure you’re dealing with reputable and ethical companies and services, and, 2) keep your cryptocurrency holdings safe and secure.

In this post, we’re going to focus on how to make sure you don’t lose any of your cryptocurrency holdings through accident, theft, or carelessness. You do that by backing up the keys needed to sell or trade your currencies.

$34 Billion in Lost Value

Of the 16.4 million bitcoins said to be in circulation in the middle of 2017, close to 3.8 million may have been lost because their owners are no longer able to claim their holdings. Based on today’s valuation, that could total as much as $34 billion in lost value, which implies a price of roughly $9,000 per bitcoin. And that’s just bitcoins. There are now over 1,500 different cryptocurrencies, and we don’t know how many of those have been misplaced or lost.



Now that some cryptocurrencies have reached (at least for now) staggering heights in value, it’s likely that owners will be more careful in keeping track of the keys needed to use their cryptocurrencies. For the ones already lost, however, the owners have been separated from their currencies just as surely as if they had thrown Benjamin Franklins and Grover Clevelands over the railing of a ship.

The Basics of Securing Your Cryptocurrencies

In our previous post, we reviewed how cryptocurrency keys work, and the common ways owners can keep track of them. A cryptocurrency owner needs two keys to use their currencies: a public key that can be shared with others is used to receive currency, and a private key that must be kept secure is used to spend or trade currency.

Many wallets and applications allow the user to require extra security to access them, such as a password, or iris, face, or thumb print scan. If one of these options is available in your wallets, take advantage of it. Beyond that, it’s essential to back up your wallet, either using the backup feature built into some applications and wallets, or manually backing up the data used by the wallet. When backing up, it’s a good idea to back up the entire wallet, as some wallets require additional private data to operate that might not be apparent.

No matter which backup method you use, it is important to back up often and to keep multiple backups, preferably in different locations. As with any valuable data, a 3-2-1 backup strategy is good to follow, ensuring that you’ll still have a good backup copy if anything goes wrong with one or more copies of your data.

One more caveat: don’t reuse passwords. This applies to all of your accounts, but it is especially important for something as critical as your finances. Never use the same password for more than one account. If security is breached on one of your accounts, someone could connect your name or ID with your other accounts and attempt to use the same password there as well. Consider using a password manager such as LastPass or 1Password, which makes creating and using complex, unique passwords easy no matter where you’re trying to sign in.
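If you’d rather generate passwords yourself, a few lines of Python using the standard secrets module will produce a strong, unique password for each account. This is just a minimal sketch; a password manager still wins on convenience:

import secrets
import string

def generate_password(length: int = 24) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a fresh password per account -- never reuse one.
print(generate_password())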

Approaches to Backing Up Your Cryptocurrency Keys

There are numerous ways to be sure your keys are backed up. Let’s take them one by one.

1. Automatic backups using a backup program

If you’re using a wallet program on your computer, for example, Bitcoin Core, it will store your keys, along with other information, in a file. For Bitcoin Core, that file is wallet.dat. Other currencies will use the same or a different file name and some give you the option to select a name for the wallet file.

To back up the wallet.dat or other wallet file, you might need to tell your backup program to explicitly back up that file. Users of Backblaze Backup don’t have to worry about configuring this, since by default, Backblaze Backup will back up all data files. You should determine where your particular cryptocurrency, wallet, or application stores your keys, and make sure the necessary file(s) are backed up if your backup program requires you to select which files are included in the backup.

Backblaze B2 is an option for those interested in low-cost and high security cloud storage of their cryptocurrency keys. Backblaze B2 supports 2-factor verification for account access, works with a number of apps that support automatic backups with encryption, error-recovery, and versioning, and offers an API and command-line interface (CLI), as well. The first 10GB of storage is free, which could be all one needs to store encrypted cryptocurrency keys.
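As a rough sketch of what a scripted upload could look like, here is a minimal example using the b2sdk Python client. The bucket name, key values, and file names are placeholders, and we assume the wallet file has already been encrypted locally:

from b2sdk.v2 import InMemoryAccountInfo, B2Api

# Placeholder credentials -- create an application key in your B2 account.
KEY_ID = "your-key-id"
APP_KEY = "your-application-key"

api = B2Api(InMemoryAccountInfo())
api.authorize_account("production", KEY_ID, APP_KEY)

# Upload an already-encrypted wallet backup to a pre-created bucket.
bucket = api.get_bucket_by_name("crypto-backups")
bucket.upload_local_file(local_file="wallet.dat.enc",
                         file_name="backups/wallet.dat.enc")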

2. Backing up by exporting keys to a file

Many apps and wallets let you export your keys to a file. Once exported, your keys can be stored on a local drive, USB thumb drive, DAS, NAS, or in the cloud with any cloud storage or sync service you wish. Encrypting the file is strongly encouraged; more on that later. If you use 1Password, LastPass, or another secure notes program, you could also store your keys there.
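One simple way to encrypt an exported key file before storing it anywhere is the Fernet recipe from the Python cryptography library; a minimal sketch, with example file names:

from cryptography.fernet import Fernet

# Generate an encryption key once and keep it somewhere safe,
# separate from the backup itself.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("exported_keys.txt", "rb") as f:
    encrypted = fernet.encrypt(f.read())

with open("exported_keys.enc", "wb") as f:
    f.write(encrypted)

# Later, Fernet(key).decrypt(encrypted) recovers the original bytes.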

3. Backing up by saving a mnemonic recovery seed

A mnemonic phrase, mnemonic recovery phrase, or mnemonic seed is a list of words that stores all the information needed to recover a cryptocurrency wallet. Many wallets will have the option to generate a mnemonic backup phrase, which can be written down on paper. If the user’s computer no longer works or their hard drive becomes corrupted, they can download the same wallet software again and use the mnemonic recovery phrase to restore their keys.

The phrase can be used by anyone to recover the keys, so it must be kept safe. Mnemonic phrases are an excellent way of backing up and storing cryptocurrency and so they are used by almost all wallets.

A mnemonic recovery seed is represented by a group of easy-to-remember words. For example:

eye female unfair moon genius pipe nuclear width dizzy forum cricket know expire purse laptop scale identify cube pause crucial day cigar noise receive

The above words represent the following seed:

0a5b25e1dab6039d22cd57469744499863962daba9d2844243fec9c0313c1448d1a0b2cd9e230a78775556f9b514a8be45802c2808efd449a20234e9262dfa69

These words have certain properties:

  • The first four letters are enough to unambiguously identify the word.
  • Similar words are avoided (such as: build and built).

Bitcoin and most other cryptocurrencies such as Litecoin, Ethereum, and others use mnemonic seeds that are 12 to 24 words long. Other currencies might use different length seeds.
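To make this concrete, here is a minimal sketch using the python-mnemonic package (the BIP-39 reference implementation). It generates a phrase and derives the binary seed from it; treat any real phrase as a secret, and never use one that has appeared in print:

from mnemonic import Mnemonic  # pip install mnemonic

mnemo = Mnemonic("english")

# 128 bits of entropy -> 12 words; 256 bits -> 24 words.
words = mnemo.generate(strength=128)
print(words)

# The same words (plus an optional passphrase) always derive the same seed.
seed = mnemo.to_seed(words, passphrase="")
print(seed.hex())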

4. Physical backups — Paper, Metal

Some cryptocurrency holders believe that their backup, or even all their cryptocurrency account information, should be stored entirely separately from the internet to avoid any risk of their information being compromised through hacks, exploits, or leaks. This type of storage is called “cold storage.” One method of cold storage involves printing the keys on a piece of paper and then erasing any record of the keys from all computer systems. The keys can be entered into a program from the paper when needed, or scanned from a QR code printed on the paper.

Printed public and private keys

Some who go to extremes suggest separating the mnemonic needed to access an account into individual pieces of paper and storing those pieces in different locations in the home or office, or even different geographical locations. Some say this is a bad idea since it could be possible to reconstruct the mnemonic from one or more pieces. How diligent you wish to be in protecting these codes is up to you.

Mnemonic recovery phrase booklet

There’s another option that could make you the envy of your friends. That’s the Cryptosteel wallet, a stainless steel case that comes with more than 250 stainless steel letter tiles engraved on each side. Codes and passwords are assembled manually from the supplied part-randomized set of tiles. Users can store up to 96 characters’ worth of confidential information. Cryptosteel claims to be fireproof, waterproof, and shock-proof.

Cryptosteel cold wallet

Of course, if you leave your Cryptosteel wallet in the pocket of a pair of ripped jeans that gets thrown out by the housekeeper, as happened to the character Russ Hanneman on the TV show Silicon Valley in last Sunday’s episode, then you’re out of luck. That fictional billionaire investor lost a USB drive with $300 million in cryptocoins. Let’s hope that doesn’t happen to you.

Encryption & Security

Whether you store your keys on your computer, an external disk, a USB drive, DAS, NAS, or in the cloud, you want to make sure that no one else can use those keys. The best way to handle that is to encrypt the backup.

With Backblaze Backup for Windows and Macintosh, your backups are encrypted in transmission to the cloud and on the backup server. Users have the option to add an additional level of security by adding a Personal Encryption Key (PEK), which secures their private key. Your cryptocurrency backup files are secure in the cloud. Using our web or mobile interface, previous versions of files can be accessed, as well.

Our object storage cloud offering, Backblaze B2, can be used with a variety of applications for Windows, Macintosh, and Linux. With B2, cryptocurrency users can choose whichever method of encryption they wish to use on their local computers and then upload their encrypted currency keys to the cloud. Depending on the client used, versioning and life-cycle rules can be applied to the stored files.

Other backup programs and systems provide some or all of these capabilities, as well. If you are backing up to a local drive, it is a good idea to encrypt the local backup, which is an option in some backup programs.

Address Security

Some experts recommend using a different address for each cryptocurrency transaction. Since the address is not the same as your wallet, this means that you are not creating a new wallet, but simply using a new identifier for people sending you cryptocurrency. Creating a new address is usually as easy as clicking a button in the wallet.

One of the chief advantages of using a different address for each transaction is anonymity. Each time you use an address, you put more information into the public ledger (blockchain) about where the currency came from or where it went. That means that over time, using the same address repeatedly could mean that someone could map your relationships, transactions, and incoming funds. The more you use that address, the more information someone can learn about you. For more on this topic, refer to Address reuse.

Note that a downside of using a paper wallet with a single key pair (a type-0 non-deterministic wallet) is that it has the vulnerabilities listed above: each transaction using that paper wallet adds to the public record of transactions associated with that address. Newer “deterministic” wallets, which use mnemonic code words and support multiple addresses, are now recommended.

There are other approaches to keeping your cryptocurrency transactions secure. Here are a couple of them.

Multi-signature

Multi-signature refers to requiring more than one key to authorize a transaction, much like requiring more than one key to open a safe. It is generally used to divide up responsibility for possession of cryptocurrency. Standard transactions could be called “single-signature transactions” because transfers require only one signature — from the owner of the private key associated with the currency address (public key). Some wallets and apps can be configured to require more than one signature, which means that a group of people, businesses, or other entities all must agree to trade in the cryptocurrencies.
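As a toy illustration of the m-of-n idea (a sketch of the concept only, not Bitcoin’s actual script-based multisig), here is a 2-of-3 approval check in Python using the ecdsa package:

from ecdsa import SigningKey, NIST256p  # pip install ecdsa

# Three parties each hold their own private key.
keys = [SigningKey.generate(curve=NIST256p) for _ in range(3)]
pubkeys = [k.get_verifying_key() for k in keys]

tx = b"send 1.0 coin to address X"

# Two of the three parties (here, 0 and 2) sign the transaction.
signatures = {0: keys[0].sign(tx), 2: keys[2].sign(tx)}

# verify() raises BadSignatureError if a signature is forged or tampered with.
valid = sum(1 for i, sig in signatures.items() if pubkeys[i].verify(sig, tx))
print("approved" if valid >= 2 else "rejected")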

Deep Cold Storage

Deep cold storage ensures the entire transaction process happens in an offline environment. There are typically three elements to deep cold storage.

First, the wallet and private key are generated offline, and the signing of transactions happens on a system not connected to the internet in any manner. This ensures it’s never exposed to a potentially compromised system or connection.

Second, details are secured with encryption to ensure that even if the wallet file ends up in the wrong hands, the information is protected.

Third, storage of the encrypted wallet file or paper wallet is generally at a location or facility that has restricted access, such as a safety deposit box at a bank.

Deep cold storage is used to safeguard a large individual cryptocurrency portfolio held for the long term, or for trustees holding cryptocurrency on behalf of others, and is possibly the safest method to ensure a crypto investment remains secure.

Keep Your Software Up to Date

You should always make sure that you are using the latest version of your app or wallet software, which includes important stability and security fixes. Installing updates for all other software on your computer or mobile device is also important to keep your wallet environment safer.

One Last Thing: Think About Your Testament

Your cryptocurrency funds can be lost forever if you don’t have a backup plan for your heirs and family. If no one knows the location of your wallets or your passwords when you are gone, there is no hope that your funds will ever be recovered. Taking a bit of time on these matters can make a huge difference.

To the Moon*

Are you comfortable with how you’re managing and backing up your cryptocurrency wallets and keys? Do you have a suggestion for keeping your cryptocurrencies safe that we missed above? Please let us know in the comments.


*To the Moon — Crypto slang for a currency that reaches an optimistic price projection.

The post Securing Your Cryptocurrency appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Augmented-reality projection lamp with Raspberry Pi and Android Things

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/augmented-reality-projector/

If your day has been a little fraught so far, watch this video. It opens with a tableau of methodically laid-out components and then shows them soldered, screwed, and slotted neatly into place. Everything fits perfectly; nothing needs percussive adjustment. Then it shows us glimpses of an AR future just like the one promised in the less dystopian comics and TV programmes of my 1980s childhood. It is all very soothing, and exactly what I needed.

Android Things – Lantern

Transform any surface into mixed-reality using Raspberry Pi, a laser projector, and Android Things. Android Experiments – http://experiments.withgoogle.com/android/lantern Lantern project site – http://nordprojects.co/lantern check below to make your own ↓↓↓ Get the code – https://github.com/nordprojects/lantern Build the lamp – https://www.hackster.io/nord-projects/lantern-9f0c28

Creating augmented reality with projection

We’ve seen plenty of Raspberry Pi IoT builds that are smart devices for the home; they add computing power to things like lights, door locks, or toasters to make these objects interact with humans and with their environment in new ways. Nord Projects’ Lantern takes a different approach. In their words, it:

imagines a future where projections are used to present ambient information, and relevant UI within everyday objects. Point it at a clock to show your appointments, or point it at a speaker to display the currently playing song. Unlike a screen, when Lantern’s projections are no longer needed, they simply fade away.

Lantern is set up so that you can connect your wireless device to it using Google Nearby. This means there’s no need to create an account before you can dive into augmented reality.

Lantern Raspberry Pi powered projector lamp

Your own open-source AR lamp

Nord Projects collaborated on Lantern with Google’s Android Things team. They’ve made it fully open-source, so you can find the code on GitHub and also download their parts list, which includes a Pi, an IKEA lamp, an accelerometer, and a laser projector. Build instructions are at hackster.io and on GitHub.

This is a particularly clear tutorial, very well illustrated with photos and GIFs, and once you’ve sourced and 3D-printed all of the components, you shouldn’t need a whole lot of experience to put everything together successfully. Since everything is open-source, though, if you want to adapt it — for example, if you’d like to source a less costly projector than the snazzy one used here — you can do that too.

components of Lantern Raspberry Pi powered augmented reality projector lamp

The instructions walk you through the mechanical build and the wiring, as well as installing Android Things and Nord Projects’ custom software on the Raspberry Pi. Once you’ve set everything up, an accelerometer connected to the Pi’s GPIO pins lets the lamp know which surface it is pointing at. A companion app on your mobile device lets you choose from the mini apps that work on that surface to select the projection you want.
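Lantern itself is an Android Things app, but the orientation trick is easy to sketch. Assuming a common I2C accelerometer such as the MPU-6050 (the actual sensor in the build may differ), a rough Python illustration of mapping the gravity vector to a surface could look like this:

from smbus2 import SMBus  # pip install smbus2; MPU-6050 assumed at address 0x68

ADDR = 0x68

def read_accel(bus):
    """Read raw X/Y/Z acceleration from the MPU-6050 data registers."""
    data = bus.read_i2c_block_data(ADDR, 0x3B, 6)
    def to_int16(hi, lo):
        v = (hi << 8) | lo
        return v - 65536 if v > 32767 else v
    return tuple(to_int16(data[i], data[i + 1]) for i in (0, 2, 4))

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, 0x6B, 0)  # wake the sensor from sleep
    x, y, z = read_accel(bus)
    # Whichever axis gravity dominates hints at where the lamp is pointing;
    # the surface names here are arbitrary examples.
    _, surface = max(zip((abs(x), abs(y), abs(z)), ("wall", "table", "ceiling")))
    print("projecting onto the", surface)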

The designers are making several mini apps available for Lantern, including the charmingly named Space Porthole: this uses Processing and your local longitude and latitude to project onto your ceiling the stars you’d see if you punched a hole through to the sky, if it were night-time and the weather were clear. Wouldn’t you rather look at that than deal with the ant problem in your kitchen or tackle your GitHub notifications?

What would you like to project onto your living environment? Let us know in the comments!

The post Augmented-reality projection lamp with Raspberry Pi and Android Things appeared first on Raspberry Pi.

Raspberry Pi in your favourite films and TV shows

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-films-tv/

If, like us, you’ve been bingeflixing your way through Netflix’s new show, Lost in Space, you may have noticed a Raspberry Pi being used as futuristic space tech.

Raspberry Pi Netflix Lost in Space

Danger, Will Robinson, that probably won’t work

This isn’t the first time a Pi has been used as a film or television prop. From Mr. Robot and Disney Pixar’s Big Hero 6 to Mr. Robot, Sense8, and Mr. Robot, our humble little computer has become quite the celeb.

Raspberry Pi Charlie Brooker Election Wipe
Raspberry Pi Big Hero 6
Raspberry Pi Netflix

Raspberry Pi Spy has been working hard to locate and document the appearances of the Raspberry Pi in some of our favourite shows and movies. He’s created this video covering 2012-2017:

Raspberry Pi TV and Film Appearances 2012-2017

Since 2012 the Raspberry Pi single board computer has appeared in a number of movies and TV shows. This video is a run through of those appearances where the Pi has been used as a prop.

For 2018 appearances and beyond, you can find a full list on the Raspberry Pi Spy website. If you’ve spotted an appearance that’s not on the list, tell us in the comments!

The post Raspberry Pi in your favourite films and TV shows appeared first on Raspberry Pi.

IoT Inspector Tool from Princeton

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/iot_inspector_t.html

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They’ve already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers’ servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.
  • Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest’s website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.

Their first two findings are that “Many IoT devices lack basic encryption and authentication” and that “User behavior can be inferred from encrypted IoT device traffic.” No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

Stream to Twitch with the push of a button

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-twitch-streaming/

Stream your video gaming exploits to the internet at the touch of a button with the Twitch-O-Matic. Everyone else is doing it, so you should too.

Twitch-O-Matic: Raspberry Pi Twitch Streaming Device – Weekend Hacker #1804

Some gaming consoles make it easy to stream to Twitch, some gaming consoles don’t (come on, Nintendo). So for those that don’t, I’ve made this beta version of the “Twitch-O-Matic”. No it doesn’t chop onions or fold your laundry, but what it DOES do is stream anything with HDMI output to your Twitch channel with the simple push of a button!

eSports and online game streaming

Interest in eSports has skyrocketed over the last few years, with viewership numbers in the hundreds of millions, sponsorship deals increasing in value and prestige, and tournament prize funds reaching millions of dollars. So it’s no wonder that more and more gamers are starting to stream live to online platforms in order to boost their fanbase and try to cash in on this growing industry.

Streaming to Twitch

Launched in 2011, Twitch.tv is an online live-streaming platform with a primary focus on video gaming. Users can create accounts to contribute comments and content to the site, as well as watch live-streamed gaming competitions and broadcasts. With a staggering fifteen million daily users, Twitch is accessible via smartphone and gaming console apps, smart TVs, computers, and tablets. But if you want to stream to Twitch, you may find yourself needing third-party software to do so. And with more buttons to click and more wires to plug in for older, app-less consoles, streaming can get confusing.

Enter Tinkernut.

Side note: we ❤ Tinkernut

We’ve featured Tinkernut a few times on the Raspberry Pi blog – his tutorials are clear, his projects are interesting and useful, and his live-streamed comment videos for every build are a nice touch to sharing homebrew builds on the internet.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

So, yes, we love him. [This is true. Alex never shuts up about him. – Ed.] And since he has over 500K subscribers on YouTube, we’re obviously not the only ones. We wave our Tinkernut flags with pride.

Twitch-O-Matic

With a Raspberry Pi Zero W, an HDMI to CSI adapter, and a case to fit it all in, Tinkernut’s Twitch-O-Matic allows easy connection to the Twitch streaming service. You’ll also need a button – the bigger, the better in our opinion, though Tinkernut has opted for the Adafruit 16mm Illuminated Pushbutton for his build, and not the 100mm Massive Arcade Button that, sadly, we still haven’t found a reason to use yet.

Adafruit massive button

“I’m sorry, Dave…”

For added frills and pizzazz, Tinkernut has also incorporated Adafruit’s White LED Backlight Module into the case, though you don’t have to do so unless you’re feeling super fancy.

The setup

The Raspberry Pi Zero W is connected to the HDMI to CSI adapter via the camera connector, in the same way you’d attach the camera ribbon. Tinkernut uses a standard Raspbian image on an 8GB SD card, with SSH enabled for remote access from his laptop. He uses the simple raspivid command to test the HDMI connection by recording ten seconds of video footage from his console.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

One lead is all you need

Once you have the Pi receiving video from your console, you can connect to Twitch using your Twitch stream key, which you can find by logging in to your account at Twitch.tv. Tinkernut’s tutorial gives you all the commands you need to stream from your Pi.
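Tinkernut’s tutorial has the exact commands; as a rough sketch of the idea, the pipeline pipes raspivid’s raw H.264 output into ffmpeg, which wraps it in FLV and pushes it to Twitch’s RTMP ingest. The stream key below is a placeholder, and you may need to adjust flags (we skip audio with -an, though Twitch normally expects an audio track):

import subprocess

STREAM_KEY = "live_xxxxxxxx"  # placeholder -- copy yours from the Twitch dashboard
INGEST = f"rtmp://live.twitch.tv/app/{STREAM_KEY}"

# raspivid writes raw H.264 to stdout ("-o -"); "-t 0" records indefinitely.
raspivid = subprocess.Popen(
    ["raspivid", "-t", "0", "-w", "1280", "-h", "720",
     "-fps", "30", "-b", "3000000", "-o", "-"],
    stdout=subprocess.PIPE,
)

# ffmpeg copies the H.264 stream into an FLV container and sends it to Twitch.
subprocess.run(
    ["ffmpeg", "-f", "h264", "-framerate", "30", "-i", "-",
     "-c:v", "copy", "-an", "-f", "flv", INGEST],
    stdin=raspivid.stdout,
)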

The frills

To up the aesthetic impact of your project, adding buttons and backlights is fairly straightforward.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

Pretty LED frills

To run the stream command, Tinkernut uses a button: press once to start the stream, press again to stop. Pressing the button also turns on the LED backlight, so it’s obvious when streaming is in progress.

The tutorial

For the full code and 3D-printable case STL file, head to Tinkernut’s hackster.io project page. And if you’re already using a Raspberry Pi for Twitch streaming, share your build setup with us. Cheers!

The post Stream to Twitch with the push of a button appeared first on Raspberry Pi.

MagPi 69: affordable 3D printing with a Raspberry Pi

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-69/

Hi folks, Rob from The MagPi here with the good news that The MagPi 69 is out now! Nice. Our latest issue is all about 3D printing and how you can get yourself a very affordable 3D printer that you can control with a Raspberry Pi.

Raspberry Pi MagPi 69 3D-printing

Get 3D printing from just £99!

Pi-powered 3D printing

Affordability is always a big factor when it comes to 3D printers. Like any new consumer tech, their prices are often in the thousands of pounds. Over the last decade, however, these prices have been dropping steadily. Now you can get budget 3D printers for hundreds rather than thousands – and even for £99, like the iMakr. Pairing an iMakr with a Raspberry Pi makes for a reasonably priced 3D printing solution. In issue 69, we show you how to do just that!

Portable Raspberry Pis

Looking for a way to make your Raspberry Pi portable? One of our themes this issue is portable Pis, with a feature on how to build your very own Raspberry Pi TV stick, coincidentally with a 3D-printed case. We also review the Noodle Pi kit and the RasPad, two products that can help you take your Pi out and about away from a power socket.


And of course we have a selection of other great guides, project showcases, reviews, and community news.

Get The MagPi 69

Issue 69 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days for a print copy. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

New subscription offer!

Want to support the Raspberry Pi Foundation and the magazine? We’ve launched a new way to subscribe to the print version of The MagPi: you can now take out a monthly £4 subscription to the magazine, effectively creating a rolling pre-order system that saves you money on each issue.

Raspberry Pi MagPi 69 3D-printing

You can also take out a twelve-month print subscription and get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

We hope you enjoy this issue! See you next month.

The post MagPi 69: affordable 3D printing with a Raspberry Pi appeared first on Raspberry Pi.

Continued: the answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-2/

Last week, we shared the first half of our Q&A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. Today we follow up with all your other questions, including your expectations for a Raspberry Pi 4, Eben’s dream add-ons, and whether we really could go smaller than the Zero.

Live Q&A with Eben Upton, creator of the Raspberry Pi

Get your questions to us now using #AskRaspberryPi on Twitter

With internet security becoming more necessary, will there be automated versions of VPN on an SD card?

There are already third-party tools which turn your Raspberry Pi into a VPN endpoint. Would we do it ourselves? Like the power button, it’s one of those cases where there are a million things we could do and so it’s more efficient to let the community get on with it.

Just to give a counterexample, while we don’t generally invest in optimising for particular use cases, we did invest a bunch of money into optimising Kodi to run well on Raspberry Pi, because we found that very large numbers of people were using it. So, if we find that we get half a million people a year using a Raspberry Pi as a VPN endpoint, then we’ll probably invest money into optimising it and feature it on the website as we’ve done with Kodi. But I don’t think we’re there today.

Have you ever seen any Pis running and doing important jobs in the wild, and if so, how does it feel?

It’s amazing how often you see them driving displays, for example in radio and TV studios. Of course, it feels great. There’s something wonderful about the geographic spread as well. The Raspberry Pi desktop is quite distinctive, both in its previous incarnation with the grey background and logo, and the current one where we have Greg Annandale’s road picture.

The PIXEL desktop on Raspberry Pi

And so it’s funny when you see it in places. Somebody sent me a video of them teaching in a classroom in rural Pakistan and in the background was Greg’s picture.

Raspberry Pi 4!?!

There will be a Raspberry Pi 4, obviously. We get asked about it a lot. I’m sticking to the guidance that I gave people that they shouldn’t expect to see a Raspberry Pi 4 this year. To some extent, the opportunity to do the 3B+ was a surprise: we were surprised that we’ve been able to get 200MHz more clock speed, triple the wireless and wired throughput, and better thermals, and still stick to the $35 price point.

We’re up against the wall from a silicon perspective; we’re at the end of what you can do with the 40nm process. It’s not that you couldn’t clock the processor faster, or put a larger processor which can execute more instructions per clock in there, it’s simply about the energy consumption and the fact that you can’t dissipate the heat. So we’ve got to go to a smaller process node and that’s an order of magnitude more challenging from an engineering perspective. There’s more effort, more risk, more cost, and all of those things are challenging.

With 3B+ out of the way, we’re going to start looking at this now. For the first six months or so we’re going to be figuring out exactly what people want from a Raspberry Pi 4. We’re listening to people’s comments about what they’d like to see in a new Raspberry Pi, and I’m hoping by early autumn we should have an idea of what we want to put in it and a strategy for how we might achieve that.

Could you go smaller than the Zero?

The challenge with Zero is that we’re periphery-limited. If you run your hand around the unit, there is no edge of that board that doesn’t have something there. So the question is: “If you want to go smaller than Zero, what feature are you willing to throw out?”

It’s a single-sided board, so you could certainly halve the PCB area if you fold the circuitry and use both sides, though you’d have to lose something. You could give up some GPIO and go back to 26 pins like the first Raspberry Pi. You could give up the camera connector, you could go to micro HDMI from mini HDMI. You could remove the SD card and just do USB boot. I’m inventing a product live on air! But really, you could get down to two thirds and lose a bunch of GPIO – it’s hard to imagine you could get to half the size.

What’s the one feature that you wish you could outfit on the Raspberry Pi that isn’t cost effective at this time? Your dream feature.

Well, more memory. There are obviously technical reasons why we don’t have more memory on there, but there are also market reasons. People ask “why doesn’t the Raspberry Pi have more memory?”, and my response is typically “go and Google ‘DRAM price’”. We’re used to the price of memory going down. And currently, we’re going through a phase where this has turned around and memory is getting more expensive again.

Machine learning would be interesting. There are machine learning accelerators which would be interesting to put on a piece of hardware. But again, they are not going to be used by everyone, so according to our method of pricing what we might add to a board, machine learning gets treated like a $50 chip. But that would be lovely to do.

Which citizen science projects using the Pi have most caught your attention?

I like the wildlife camera projects. We live out in the countryside in a little village, and we’re conscious of being surrounded by nature but we don’t see a lot of it on a day-to-day basis. So I like the nature cam projects, though, to my everlasting shame, I haven’t set one up yet. There’s a range of them, from very professional products to people taking a Raspberry Pi and a camera and putting them in a plastic box. So those are good fun.

Raspberry Shake seismometer

The Raspberry Shake seismometer

And there’s Meteor Pi from the Cambridge Science Centre, that’s a lot of fun. And the seismometer Raspberry Shake – that sort of thing is really nice. We missed the recent South Wales earthquake; perhaps we should set one up at our Californian office.

How does it feel to go to bed every day knowing you’ve changed the world for the better in such a massive way?

What feels really good is that when we started this in 2006 nobody else was talking about it, but now we’re part of a very broad movement.

We were in a really bad way: we’d seen a collapse in the number of applicants applying to study Computer Science at Cambridge and elsewhere. In our view, this reflected a move away from seeing technology as ‘a thing you do’ to seeing it as a ‘thing that you have done to you’. It is problematic from the point of view of the economy, industry, and academia, but most importantly it damages the life prospects of individual children, particularly those from disadvantaged backgrounds. The great thing about STEM subjects is that you can’t fake being good at them. There are a lot of industries where your Dad can get you a job based on who he knows and then you can kind of muddle along. But if your dad gets you a job building bridges and you suck at it, after the first or second bridge falls down, then you probably aren’t going to be building bridges anymore. So access to STEM education can be a great driver of social mobility.

By the time we were launching the Raspberry Pi in 2012, there was this wonderful movement going on. Code Club, for example, and CoderDojo came along. Lots of different ways of trying to solve the same problem. What feels really, really good is that we’ve been able to do this as part of an enormous community. And some parts of that community became part of the Raspberry Pi Foundation – we merged with Code Club, we merged with CoderDojo, and we continue to work alongside a lot of these other organisations. So in the two seconds it takes me to fall asleep after my face hits the pillow, that’s what I think about.

We’re currently advertising a Programme Manager role in New Delhi, India. Did you ever think that Raspberry Pi would be advertising a role like this when you were bringing together the Foundation?

No, I didn’t.

But if you told me we were going to be hiring somewhere, India probably would have been top of my list because there’s a massive IT industry in India. When we think about our interaction with emerging markets, India, in a lot of ways, is the poster child for how we would like it to work. There have already been some wonderful deployments of Raspberry Pi, for example in Kerala, without our direct involvement. And we think we’ve got something that’s useful for the Indian market. We have a product, we have clubs, we have teacher training. And we have a body of experience in how to teach people, so we have a physical commercial product as well as a charitable offering that we think are a good fit.

It’s going to be massive.

What is your favourite BBC type-in listing?

There was a game called Codename: Druid. There is a famous game called Codename: Droid which was the sequel to Stryker’s Run, which was an awesome, awesome game. And there was a type-in game called Codename: Druid, which was at the bottom end of what you would consider a commercial game.

codename druid

And I remember typing that in. And what was really cool about it was that the next month, the guy who wrote it did another article that talks about the memory map and which operating system functions used which bits of memory. So if you weren’t going to do disc access, which bits of memory could you trample on and know the operating system would survive.

babbage versus bugs Raspberry Pi annual

See the full listing for Babbage versus Bugs in the Raspberry Pi 2018 Annual

I still like type-in listings. The Raspberry Pi 2018 Annual has a type-in listing that I wrote for a Babbage versus Bugs game. I will say that’s not the last type-in listing you will see from me in the next twelve months. And if you download the PDF, you could probably copy and paste it into your favourite text editor to save yourself some time.

The post Continued: the answers to your questions for Eben Upton appeared first on Raspberry Pi.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, Slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The word “bot” was the fourth most registered domain keyword within the .COM TLD in 2016, with more than 6,000 domains per month. A .BOT domain allows customers to provide a definitive internet identity for their bots, as well as enhancing SEO performance.

At the time of this writing, .BOT domains start at $75 each and must be verified and published with a supported tool such as Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time, and if your favorite bot framework isn’t supported, feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex, so I’ll select that in the drop-down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else, I would need to follow the unique verification instructions for that particular framework.

To verify my Lex bot, I need to give the Amazon Registry permission to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read only permissions to our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it. Then I’ll click Create to create the role and be taken to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.
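
If you’d rather script these steps than click through the console, a rough boto3 equivalent is below. The account ID and external ID are placeholders for the values shown on the registration page, and the role name is simply the one chosen above:

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Amazon Registry account assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # registry account (placeholder)
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "EXTERNAL-ID-FROM-REGISTRATION"}}
    }]
}

role = iam.create_role(
    RoleName="DotBotCrossAccountVerifyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Lets the Amazon Registry verify my Lex bot",
)

iam.attach_role_policy(
    RoleName="DotBotCrossAccountVerifyRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonLexReadOnly",
)

print(role["Role"]["Arn"])  # the ARN to paste into the registration wizard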

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.


After I click Continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration, we should receive an email with some good news:

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use it to host my new domain. The first thing I’ll do is navigate to the Route 53 console and create a hosted zone with the same name as my domain.


Great! Now, I need to take the Name Server (NS) records that Route53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.
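
For the scripting-inclined, here’s a hedged boto3 sketch of the same Route 53 steps; the record value is a placeholder:

import uuid
import boto3

route53 = boto3.client("route53")

# Create the hosted zone for the new domain
zone = route53.create_hosted_zone(
    Name="whereml.bot",
    CallerReference=str(uuid.uuid4()),  # must be unique per request
)
zone_id = zone["HostedZone"]["Id"]

# These NS records go into EnCirca's portal as the authoritative nameservers
print(zone["DelegationSet"]["NameServers"])

# Add a simple A record to start serving traffic (IP is a placeholder)
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "whereml.bot",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)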

Next Steps

  • I could and should add to the security of my site by creating TLS certificates for people who intend to access my domain over TLS. Luckily with AWS Certificate Manager (ACM) this is extremely straightforward and I’ve got my subdomains and root domain verified in just a few clicks.
  • I could create a CloudFront distribution to front an S3 static single-page application to host my entire chatbot and invoke Amazon Lex with an Amazon Cognito identity right from the browser.

Randall

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly rollback and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive new requirements: your Lambda application must count only buckets that begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments. The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version: it invokes the new function and checks the results. If you’re satisfied with the tests, it instructs CodeDeploy to proceed with the rollout via an API call to codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.

The preTrafficHook function performs that validation testing. It invokes the new version of returnS3Buckets and reports the result back to CodeDeploy:
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting Enabled: false in its DeploymentPreference section.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID]. Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. Replace “sam” with “cloudformation” to use CloudFormation instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API called safeDeployStack has been set up. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function, which returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51 – the count of buckets beginning with “a” versus the count of all buckets.
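
One low-tech way to watch the shift is to poll the test endpoint in a loop and eyeball the mix of responses. Here’s a small sketch; the invoke URL is a placeholder for the one API Gateway gives you:

import time
import urllib.request

URL = "https://abc123.execute-api.us-east-1.amazonaws.com/Prod/test"  # placeholder

for _ in range(20):
    with urllib.request.urlopen(URL) as resp:
        # Prints 51 (old version) or 9 (new version)
        print(resp.read().decode())
    time.sleep(30)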

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

Backblaze at NAB 2018 in Las Vegas

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backblaze-at-nab-2018-in-las-vegas/

Backblaze B2 Cloud Storage NAB Booth

Backblaze just returned from exhibiting at NAB in Las Vegas, April 9-12, where the response to our recent announcements was tremendous. In case you missed the news, Backblaze B2 Cloud Storage continues to extend its lead as the most affordable, high performance cloud on the planet.

Backblaze’s News at NAB

Backblaze at NAB 2018 in Las Vegas

The Backblaze booth just before opening

What We Were Asked at NAB

Our booth was busy from start to finish with attendees interested in learning more about Backblaze and B2 Cloud Storage. Here are the questions we were asked most often in the booth.

Q. How long has Backblaze been in business?
A. The company was founded in 2007. Today, we have over 500 petabytes of data from customers in over 150 countries.

B2 Partners at NAB 2018

Q. Where is your data stored?
A. We have data centers in California and Arizona and expect to expand to Europe by the end of the year.

Q. How can your services be so inexpensive?
A. Backblaze’s goal from the beginning was to offer cloud backup and storage that was easy to use and affordable. All the existing options were simply too expensive to be viable, so we created our own infrastructure. Our purpose-built storage system, the Backblaze Storage Pod, is recognized as one of the most cost-efficient storage platforms available.

Q. Tell me about your hardware.
A. Backblaze’s Storage Pods hold 60 HDDs each, containing as much as 720TB of data per pod (60 drives at 12TB each), stored using Reed-Solomon error correction. Twenty Storage Pods are grouped into a Vault, with data spread across the Vault in Tomes.

Q. Where do you fit in the data workflow?
A. People typically use B2 for archiving completed projects. All data is readily available for download from B2, making it more convenient than offline storage. In addition, DAM and MAM systems such as CatDV, axle ai, Cantemo, and others have integrated with B2 to store raw images behind the proxies.

Q. Who uses B2 in the M&E business?
A. KLRU-TV, the PBS station in Austin, Texas, uses B2 to archive their entire 43-year catalog of Austin City Limits episodes and related materials. WunderVu, the production house for Pixvana, uses B2 to back up and archive the local storage systems on which they build virtual reality experiences for their customers.

Q. You’re the company that publishes the hard drive stats, right?
A. Yes, we are!

Backblaze Case Studies and Swag at NAB 2018 in Las Vegas

Were You at NAB?

If you were, we hope you stopped by the Backblaze booth to say hello. We’d like to hear what you saw at the show that was interesting or exciting. Please tell us in the comments.

The post Backblaze at NAB 2018 in Las Vegas appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight: batteries, leads, a camera and Raspberry Pi, plus an unusually themed payload. The caption reads “The Queen, The Duke of York, and my TARDIS”, and sums up Dave’s maker career in a heartbeat.

David Akerman on Twitter

The Queen, The Duke of York, and my TARDIS 🙂 #UKHAS #RaspberryPi

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave’s most recent flight broke a new record. On 13 August 2017, his HAB payload was able to send back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. The most recent record-breaking flight took place on 13 August 2017, when Dave’s Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch: 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.

Dave Akerman The MagPi Raspberry Pi Community Profile

The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seemed a natural progression for him. Throw in his coding skills from learning to program on a Teletype, and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification of receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, with the Pi, Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records.

Dave Akerman The MagPi Raspberry Pi Community Profile Morph

One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he has a few more Pis for tinkering purposes.

The post Community profile: Dave Akerman appeared first on Raspberry Pi.

American Public Television Embraces the Cloud — And the Future

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/american-public-television-embraces-the-cloud-and-the-future/

American Public Television website

American Public Television was like many organizations that have been around for a while. They were entrenched in an older technology — in their case, tape storage and distribution — that once met their needs but was limiting their productivity and preventing them from effectively collaborating with their many media partners. APT’s VP of Technology knew that he needed to move into the future and embrace cloud storage to keep APT ahead of the game.

Since 1961, American Public Television (APT) has been a leading distributor of groundbreaking, high-quality, top-rated programming to the nation’s public television stations. Gerry Field is the Vice President of Technology at APT and is responsible for delivering their extensive program catalog to 350+ public television stations nationwide.

In the time since Gerry joined APT in 2007, the industry has been in digital overdrive. During that time APT has continued to acquire and distribute the best in public television programming to their technically diverse subscribers.

This created two challenges for Gerry. First, new technology and format proliferation were driving dramatic increases in digital storage. Second, many of APT’s subscribers struggled to keep up with the rapidly changing industry. While some subscribers had state-of-the-art satellite systems to receive programming, others had to wait for the post office to drop off programs recorded on tape weeks earlier. With no slowdown on the horizon of innovation in the industry, Gerry knew that his storage and distribution systems would reach a crossroads in no time at all.

American Public Television logo

Living the tape paradigm

The digital media industry is only a few years removed from its film, and later videotape, roots. Tape was the input and the output of the industry for many years. As a consequence, the tools and workflows used by the industry were built and designed to work with tape. Over time, the “file” slowly replaced the tape as the object to be captured, edited, stored, and distributed. Trouble was, many of the systems, and more importantly the workflows, were based on processing tape, and these have proven hard to change.

At APT, Gerry realized the limits of the tape paradigm and began looking for technologies and solutions that enabled workflows based on file and object based storage and distribution.

Thinking file based storage and distribution

For data (digital media) storage, APT, like everyone else, started by installing onsite storage servers. As the amount of digital data grew, more storage was added. In addition, APT was expanding its distribution footprint by creating or partnering with distribution channels such as CreateTV and APT Worldwide. This dramatically increased the number of programming formats and the amount of data that had to be stored. As a consequence, updating, maintaining, and managing the APT storage systems was becoming a major challenge and a major resource hog.

APT Online

Knowing that his in-house storage system was only going to cost more time and money, Gerry decided it was time to look at cloud storage. But that wasn’t the only reason he looked at the cloud. While most people consider cloud storage as just a place to back up and archive files, Gerry was envisioning how the ubiquity of the cloud could help solve his distribution challenges. The trouble was that the price of cloud storage from providers like Amazon S3 and Microsoft Azure was a non-starter, especially for a non-profit. Then Gerry came across Backblaze. The B2 Cloud Storage service met all of his performance requirements, and at $0.005/GB/month for storage and $0.01/GB for downloads, it was nearly 75% less expensive than S3 or Azure.
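
To put rough numbers on that: at the rates above, storing 100 TB in B2 costs about 100,000 GB × $0.005 = $500 per month, versus somewhere over $2,000 per month at the standard S3 rates of the time (roughly $0.021–$0.023/GB), which is where the “nearly 75% less” figure comes from.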

Gerry did the math and found that he could economically incorporate B2 Cloud Storage into his IT portfolio, using it both for program submission and for active storage and archiving of the APT programs. In addition, B2 now gives him the foundation necessary to receive and distribute programming content over the Internet. This is especially useful for organizations that can’t conveniently access satellite distribution systems. Not to mention downloading from the cloud is much faster than sending a tape through the mail.

Adding B2 Cloud Storage to their infrastructure has helped American Public Television address two key challenges. First, they now have “unlimited” storage in the cloud without having to add any hardware. In addition, with B2, they only pay for the storage they use. That means they don’t have to buy storage upfront trying to match the maximum amount of storage they’ll ever need. Second, by using B2 as a distribution source for their programming APT subscribers, especially the smaller and remote ones, can get content faster and more reliably without having to perform costly upgrades to their infrastructure.

The road ahead

As APT gets used to their file-based infrastructure and workflow, there are a number of cost-saving and income-generating ideas they are now considering. Here are a few:

Program Submissions — New content can be uploaded from anywhere using a web browser, an Internet connection, and a login. For example, a producer in Cambodia can upload their film to B2. From there, the film is downloaded to an in-house system where it is processed and transcoded. The finished film is added to the APT catalog and uploaded to B2. Once there, the program is instantly available for subscribers to order and download.
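
As a sketch of what that upload step could look like in code, here’s the general shape using the b2sdk Python library; the credentials, bucket name, and file names are all placeholders rather than APT’s actual setup:

from b2sdk.v2 import InMemoryAccountInfo, B2Api

# Authorize against B2 (application key values are placeholders)
info = InMemoryAccountInfo()
api = B2Api(info)
api.authorize_account("production", "APP_KEY_ID", "APP_KEY")

# Upload the finished film into an assumed submissions bucket
bucket = api.get_bucket_by_name("apt-submissions")
bucket.upload_local_file(
    local_file="finished-film.mxf",
    file_name="submissions/2018/finished-film.mxf",
)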

“The affordability and performance of Backblaze B2 is what allowed us to make the B2 cloud part of the APT data storage and distribution strategy into the future.” — Gerry Field

Easier Previews — At any time, works in progress or finished programs can be made available for download from the B2 cloud. One place this could be useful is where a subscriber needs to review a program to comply with local policies and practices before airing. In the old system, each such “one-off” was a time-consuming manual process.

Instant Subscriptions — There are many organizations such as schools and businesses that want to use just one episode of a desired show. With an e-commerce based website, current or even archived programming kept in B2 could be available to download or stream for a minimal charge.

At APT there were multiple technologies needed to make their file-based infrastructure work, but as Gerry notes, having an affordable, trustworthy, cloud storage service like B2 is one of the critical building blocks needed to make everything work together.

The post American Public Television Embraces the Cloud — And the Future appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/

Recently, we launched AWS Secrets Manager, a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively. You can control access to your secrets by using fine-grained AWS Identity and Access Management (IAM) policies. To retrieve secrets, applications replace plaintext secrets with a call to Secrets Manager APIs, eliminating the need to hard-code secrets in source code or to update configuration files and redeploy code when secrets are rotated.

In this post, I introduce the key features of Secrets Manager. I then show you how to store a database credential for a MySQL database hosted on Amazon RDS and how your applications can access this secret. Finally, I show you how to configure Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

These features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for Amazon RDS databases for MySQL, PostgreSQL, and Amazon Aurora. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets. For example, you can create an AWS Lambda function to rotate OAuth tokens used in a mobile application. Users and applications retrieve the secret from Secrets Manager, eliminating the need to email secrets to developers or update and redeploy applications after AWS Secrets Manager rotates a secret.
  • Secure and manage secrets centrally. You can store, view, and manage all your secrets. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. Using fine-grained IAM policies, you can control access to secrets. For example, you can require developers to provide a second factor of authentication when they attempt to retrieve a production database credential. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Monitor and audit easily. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret or configure AWS CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.

Get started with Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a MySQL database hosted on Amazon RDS. To demonstrate how to retrieve and use the secret, I use a python application running on Amazon EC2 that requires this database credential to access the MySQL instance. Finally, I show how to configure Secrets Manager to rotate this database credential automatically. Let’s get started.

Phase 1: Store a secret in Secrets Manager

  1. Open the Secrets Manager console and select Store a new secret.
     
    Secrets Manager console interface
     
  2. I select Credentials for RDS database because I’m storing credentials for a MySQL database hosted on Amazon RDS. For this example, I store the credentials for the database superuser. I start by securing the superuser because it’s the most powerful database credential and has full access over the database.
     
    Store a new secret interface with Credentials for RDS database selected
     

    Note: For this example, you need permissions to store secrets in Secrets Manager. To grant these permissions, you can use the AWSSecretsManagerReadWriteAccess managed policy. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. Next, I review the encryption setting and choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey for this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS KMS.
     
    Select the encryption key interface
     
  4. Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance mysql-rds-database, and then I select Next.
     
    Select the RDS database interface
     
  5. In this step, I specify values for Secret Name and Description. For this example, I use Applications/MyApp/MySQL-RDS-Database as the name and enter a description of this secret, and then select Next.
     
    Secret Name and description interface
     
  6. For the next step, I keep the default setting Disable automatic rotation because my secret is used by my application running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. I then select Next.

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See our AWS Secrets Manager getting started guide on rotation for details.

     
    Configure automatic rotation interface
     

  7. Review the information on the next screen and, if everything looks correct, select Store. We’ve now successfully stored a secret in Secrets Manager.
  8. Next, I select See sample code.
     
    The See sample code button
     
  9. Take note of the code samples provided. I will use this code to update my application to retrieve the secret using Secrets Manager APIs.
     
    Python sample code
     
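If you’d rather script this phase, a hedged boto3 equivalent is below. The JSON keys mirror the structure Secrets Manager uses for RDS credentials, but every value here is a placeholder:

import json
import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")

client.create_secret(
    Name="Applications/MyApp/MySQL-RDS-Database",
    Description="Superuser credential for the mysql-rds-database instance",
    SecretString=json.dumps({
        "username": "superuser",
        "password": "EXAMPLE-PASSWORD",
        "engine": "mysql",
        "host": "mysql-rds-database.abc123.us-east-1.rds.amazonaws.com",
        "port": 3306,
        "dbname": "mydb",
        "dbInstanceIdentifier": "mysql-rds-database",
    }),
)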

Phase 2: Update an application to retrieve secret from Secrets Manager

Now that I have stored the secret in Secrets Manager, I update my application to retrieve the database credential from Secrets Manager instead of hard coding this information in a configuration file or source code. For this example, I show how to configure a python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from the configuration file. Below is the source code for my application.
    import MySQLdb
    import config

    def no_secrets_manager_sample():

        # Get the user name, password, and database connection information from a config file.
        database = config.database
        user_name = config.user_name
        password = config.password

        # Use the user name, password, and database connection information to connect to the database
        db = MySQLdb.connect(database.endpoint, user_name, password, database.db_name, database.port)

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client and retrieves and decrypts the secret Applications/MyApp/MySQL-RDS-Database. I’ve added comments to the code to make it easier to understand. (A sketch of wiring the retrieved secret into the database connection code follows after this list.)
    # Use the code snippet provided by Secrets Manager.
    import boto3
    from botocore.exceptions import ClientError

    def get_secret():
        # Define the secret you want to retrieve
        secret_name = "Applications/MyApp/MySQL-RDS-Database"
        # Define the Secrets Manager endpoint your code should use.
        endpoint_url = "https://secretsmanager.us-east-1.amazonaws.com"
        region_name = "us-east-1"

        # Set up the client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name,
            endpoint_url=endpoint_url
        )

        # Use the client to retrieve the secret
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        # Error handling to make it easier for your code to tolerate faults
        except ClientError as e:
            if e.response['Error']['Code'] == 'ResourceNotFoundException':
                print("The requested secret " + secret_name + " was not found")
            elif e.response['Error']['Code'] == 'InvalidRequestException':
                print("The request was invalid due to:", e)
            elif e.response['Error']['Code'] == 'InvalidParameterException':
                print("The request had invalid params:", e)
        else:
            # Decrypted secret using the associated KMS CMK
            # Depending on whether the secret was a string or binary, one of these fields will be populated
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
            else:
                binary_secret_data = get_secret_value_response['SecretBinary']

            # Your code goes here.

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permission to read the secret from Secrets Manager. It also uses the resource element to limit my application to reading only the Applications/MyApp/MySQL-RDS-Database secret. You can visit the AWS Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Sid": "RetrieveDbCredentialFromSecretsManager",
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/MySQL-RDS-Database"
      }
    }

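To close the loop on step 3, here is a minimal sketch of using the retrieved secret to open the database connection. It assumes get_secret() is changed to return the SecretString value, and that the secret stores RDS-style JSON with username, password, host, port, and dbname keys; adjust the key names to match what you actually stored.

    import json
    import MySQLdb

    # Assumes get_secret() returns the SecretString value retrieved above.
    secret = json.loads(get_secret())

    # The key names are assumptions based on the RDS-style secret structure.
    db = MySQLdb.connect(secret['host'], secret['username'], secret['password'],
                         secret['dbname'], int(secret['port']))
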
Phase 3: Enable Rotation for Your Secret

Rotating secrets periodically is a security best practice because it reduces the risk of misuse of secrets. Secrets Manager makes it easy to follow this security best practice and offers built-in integrations for rotating credentials for MySQL, PostgreSQL, and Amazon Aurora databases hosted on Amazon RDS. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions, and you should grant this access only to trusted individuals. To grant these permissions, you can use the AWS managed policy IAMFullAccess.

Next, I show you how to configure Secrets Manager to rotate the secret Applications/MyApp/MySQL-RDS-Database automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created earlier, Applications/MyApp/MySQL-RDS-Database.
     
    List of secrets in the Secrets Manager console
     
  2. I scroll to Rotation configuration, and then select Edit rotation.
     
    Rotation configuration interface
     
  3. To enable rotation, I select Enable automatic rotation. I then choose how frequently I want Secrets Manager to rotate this secret. For this example, I set the rotation interval to 60 days. (A programmatic equivalent appears after this list.)
     
    Edit rotation configuration interface
     
  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the superuser database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use the secret that I provided in step 1, and then select Next.
     
    Select which secret to use in the Edit rotation configuration interface
     
  5. The banner on the next screen confirms that I have successfully configured rotation and that the first rotation is in progress, which lets me verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
     
    Confirmation banner message
     
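The console handles all of this on my behalf. As a rough programmatic equivalent, rotation can also be enabled through boto3; in this sketch the Lambda ARN is a placeholder for the rotation function that Secrets Manager creates:

    import boto3

    client = boto3.client('secretsmanager', region_name='us-east-1')

    # Enable a 60-day rotation schedule; the Lambda ARN below is a placeholder.
    client.rotate_secret(
        SecretId='Applications/MyApp/MySQL-RDS-Database',
        RotationLambdaARN='arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation',
        RotationRules={'AutomaticallyAfterDays': 60}
    )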

Summary

I introduced AWS Secrets Manager, explained the key benefits, and showed you how to help meet your compliance requirements by configuring AWS Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, see the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

The Effects of Media Consolidation: On Script

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/04/sinclair-2/

Sinclair Broadcast Group is the largest TV group in the United States, both by number of outlets and by coverage. It is known for news content and programming that promote conservative political positions and support the Republican Party.

When Trump says the following –

– here is what follows across Sinclair’s outlets: a cascade of identical, scripted performances, as seen in the video:

“Some media outlets use their platforms to push their own personal bias. This is extremely dangerous to our democracy.”

The video has also been published by the NYT:

It shows the effects of media consolidation on the right to information. It also shows how this administration labels criticism as fake.

+ yesterday’s op-ed

 

Performing Unit Testing in an AWS CodeStar Project

Post Syndicated from Jerry Mathen Jacob original https://aws.amazon.com/blogs/devops/performing-unit-testing-in-an-aws-codestar-project/

In this blog post, I will show how you can perform unit testing as a part of your AWS CodeStar project. AWS CodeStar helps you quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can set up your continuous delivery (CD) toolchain and manage your software development from one place.

Because unit testing tests individual units of application code, it is helpful for quickly identifying and isolating issues. As a part of an automated CI/CD process, it can also be used to prevent bad code from being deployed into production.

Many of the AWS CodeStar project templates come preconfigured with a unit testing framework so that you can start deploying your code with more confidence. The unit testing is configured to run in the provided build stage so that, if the unit tests do not pass, the code is not deployed. For a list of AWS CodeStar project templates that include unit testing, see AWS CodeStar Project Templates in the AWS CodeStar User Guide.

The scenario

As a big fan of superhero movies, I decided to list my favorites and ask my friends to vote on theirs through a web service endpoint I created. The example I use is a Python web service running on AWS Lambda, with AWS CodeCommit as the code repository. CodeCommit is a fully managed source control service that hosts Git repositories and works with all Git-based tools.

Here’s how you can create the web service endpoint:

Sign in to the AWS CodeStar console. Choose Start a project, which will take you to the list of project templates.

create project

For code editing, I choose AWS Cloud9, a cloud-based integrated development environment (IDE) that you can use to write, run, and debug code.

choose cloud9

Here are the other tasks required by my scenario:

  • Create a database table where the votes can be stored and retrieved as needed.
  • Update the logic in the Lambda function that was created for posting and getting the votes.
  • Update the unit tests (of course!) to verify that the logic works as expected.

For a database table, I’ve chosen Amazon DynamoDB, which offers a fast and flexible NoSQL database.

Getting set up on AWS Cloud9

From the AWS CodeStar console, go to the AWS Cloud9 console, which should take you to your project code. I open a terminal at the top-level folder, where I set up my environment and the required libraries.

Use the following command to set the PYTHONPATH environment variable on the terminal.

export PYTHONPATH=/home/ec2-user/environment/vote-your-movie

You should now be able to use the following command to execute the unit tests in your project.

python -m unittest discover vote-your-movie/tests

cloud9 setup

Start coding

Now that you have set up your local environment and have a copy of your code, add a DynamoDB table to the project by defining it through a template file. Open template.yml, which is the Serverless Application Model (SAM) template file. This template extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables required by your serverless application.

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31
- AWS::CodeStar

Parameters:
  ProjectId:
    Type: String
    Description: CodeStar projectId used to associate new resources to team members

Resources:
  # The DB table to store the votes.
  MovieVoteTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        # "Candidate" is the partition key of the table.
        Name: Candidate
        Type: String
  # Creating a new lambda function for retrieving and storing votes.
  MovieVoteLambda:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.6
      Environment:
        # Setting environment variables for your lambda function.
        Variables:
          TABLE_NAME: !Ref "MovieVoteTable"
          TABLE_REGION: !Ref "AWS::Region"
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post

We’ll use Python’s boto3 library to connect to AWS services, and Python’s mock library to mock AWS service calls in our unit tests. Use the following command to install these libraries:

pip install --upgrade boto3 mock -t .

install dependencies

Add these libraries to buildspec.yml, the YAML file that AWS CodeBuild uses to run the build.

version: 0.2

phases:
  install:
    commands:

      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli boto3 mock

  pre_build:
    commands:

      # Discover and run unit tests in the 'tests' directory. For more information, see <https://docs.python.org/3/library/unittest.html#test-discovery>
      - python -m unittest discover tests

  build:
    commands:

      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml

artifacts:
  type: zip
  files:
    - template-export.yml

Open index.py, where we can write the simple voting logic for our Lambda function.

import json
import datetime
import boto3
import os

table_name = os.environ['TABLE_NAME']
table_region = os.environ['TABLE_REGION']

VOTES_TABLE = boto3.resource('dynamodb', region_name=table_region).Table(table_name)
CANDIDATES = {"A": "Black Panther", "B": "Captain America: Civil War", "C": "Guardians of the Galaxy", "D": "Thor: Ragnarok"}

def handler(event, context):
    if event['httpMethod'] == 'GET':
        resp = VOTES_TABLE.scan()
        return {'statusCode': 200,
                'body': json.dumps({item['Candidate']: int(item['Votes']) for item in resp['Items']}),
                'headers': {'Content-Type': 'application/json'}}

    elif event['httpMethod'] == 'POST':
        try:
            body = json.loads(event['body'])
        except (ValueError, TypeError):  # json.loads raises these for malformed input
            return {'statusCode': 400,
                    'body': 'Invalid input! Expecting a JSON.',
                    'headers': {'Content-Type': 'application/json'}}
        if 'candidate' not in body:
            return {'statusCode': 400,
                    'body': 'Missing "candidate" in request.',
                    'headers': {'Content-Type': 'application/json'}}
        if body['candidate'] not in CANDIDATES.keys():
            return {'statusCode': 400,
                    'body': 'You must vote for one of the following candidates - {}.'.format(get_allowed_candidates()),
                    'headers': {'Content-Type': 'application/json'}}

        resp = VOTES_TABLE.update_item(
            Key={'Candidate': CANDIDATES.get(body['candidate'])},
            UpdateExpression='ADD Votes :incr',
            ExpressionAttributeValues={':incr': 1},
            ReturnValues='ALL_NEW'
        )
        return {'statusCode': 200,
                'body': "{} now has {} votes".format(CANDIDATES.get(body['candidate']), resp['Attributes']['Votes']),
                'headers': {'Content-Type': 'application/json'}}

def get_allowed_candidates():
    l = []
    for key in CANDIDATES:
        l.append("'{}' for '{}'".format(key, CANDIDATES.get(key)))
    return ", ".join(l)

Our code takes the incoming HTTP request as an event. If it is an HTTP GET request, it reads the vote totals from the table. If it is an HTTP POST request, it records a vote for the candidate of choice. We also validate the inputs in the POST request to filter out requests that seem malicious, so that only valid votes are stored in the table.

In the example code provided, we use a CANDIDATES variable to store our candidates, but you could store the candidates in a JSON file and load them with Python’s json library instead, as in the sketch below.
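
A minimal sketch of that alternative, assuming a hypothetical candidates.json file deployed alongside index.py and holding the same structure as the CANDIDATES dictionary:

import json

# candidates.json is a hypothetical file, e.g. {"A": "Black Panther", "B": "..."}.
with open('candidates.json') as f:
    CANDIDATES = json.load(f)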

Let’s update the tests now. Under the tests folder, open the test_handler.py and modify it to verify the logic.

import os
# Some mock environment variables that would be used by the mock for DynamoDB
os.environ['TABLE_NAME'] = "MockHelloWorldTable"
os.environ['TABLE_REGION'] = "us-east-1"

# The library containing our logic.
import index

# Boto3's core library
import botocore
# For handling JSON.
import json
# Unit test library
import unittest
## Getting StringIO based on your setup.
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
## Python mock library
from mock import patch, call
from decimal import Decimal

@patch('botocore.client.BaseClient._make_api_call')
class TestCandidateVotes(unittest.TestCase):

    ## Test the HTTP GET request flow. 
    ## We expect to get back a successful response with results of votes from the table (mocked).
    def test_get_votes(self, boto_mock):
        # Input event to our method to test.
        expected_event = {'httpMethod': 'GET'}
        # The mocked values in our DynamoDB table.
        items_in_db = [{'Candidate': 'Black Panther', 'Votes': Decimal('3')},
                        {'Candidate': 'Captain America: Civil War', 'Votes': Decimal('8')},
                        {'Candidate': 'Guardians of the Galaxy', 'Votes': Decimal('8')},
                        {'Candidate': "Thor: Ragnarok", 'Votes': Decimal('1')}
                    ]
        # The mocked DynamoDB response.
        expected_ddb_response = {'Items': items_in_db}
        # The mocked response we expect back by calling DynamoDB through boto.
        response_body = botocore.response.StreamingBody(StringIO(str(expected_ddb_response)),
                                                        len(str(expected_ddb_response)))
        # Setting the expected value in the mock.
        boto_mock.side_effect = [expected_ddb_response]
        # Expecting that there would be a call to DynamoDB Scan function during execution with these parameters.
        expected_calls = [call('Scan', {'TableName': os.environ['TABLE_NAME']})]

        # Call the function to test.
        result = index.handler(expected_event, {})

        # Run unit test assertions to verify the expected calls to mock have occurred and verify the response.
        assert result.get('headers').get('Content-Type') == 'application/json'
        assert result.get('statusCode') == 200

        result_body = json.loads(result.get('body'))
        # Verifying that the results match to that from the table.
        assert len(result_body) == len(items_in_db)
        for i in range(len(result_body)):
            assert result_body.get(items_in_db[i].get("Candidate")) == int(items_in_db[i].get("Votes"))

        assert boto_mock.call_count == 1
        boto_mock.assert_has_calls(expected_calls)

    ## Test the HTTP POST request flow that places a vote for a selected candidate.
    ## We expect to get back a successful response with a confirmation message.
    def test_place_valid_candidate_vote(self, boto_mock):
        # Input event to our method to test.
        expected_event = {'httpMethod': 'POST', 'body': "{\"candidate\": \"D\"}"}
        # The mocked response in our DynamoDB table.
        expected_ddb_response = {'Attributes': {'Candidate': "Thor: Ragnarok", 'Votes': Decimal('2')}}
        # The mocked response we expect back by calling DynamoDB through boto.
        response_body = botocore.response.StreamingBody(StringIO(str(expected_ddb_response)),
                                                        len(str(expected_ddb_response)))
        # Setting the expected value in the mock.
        boto_mock.side_effect = [expected_ddb_response]
        # Expecting that there would be a call to DynamoDB UpdateItem function during execution with these parameters.
        expected_calls = [call('UpdateItem', {
                                                'TableName': os.environ['TABLE_NAME'], 
                                                'Key': {'Candidate': 'Thor: Ragnarok'},
                                                'UpdateExpression': 'ADD Votes :incr',
                                                'ExpressionAttributeValues': {':incr': 1},
                                                'ReturnValues': 'ALL_NEW'
                                            })]
        # Call the function to test.
        result = index.handler(expected_event, {})
        # Run unit test assertions to verify the expected calls to mock have occurred and verify the response.
        assert result.get('headers').get('Content-Type') == 'application/json'
        assert result.get('statusCode') == 200

        assert result.get('body') == "{} now has {} votes".format(
            expected_ddb_response['Attributes']['Candidate'], 
            expected_ddb_response['Attributes']['Votes'])

        assert boto_mock.call_count == 1
        boto_mock.assert_has_calls(expected_calls)

    ## Test the HTTP POST request flow that places a vote for a nonexistent candidate.
    ## We expect to get back a failed (400) response with an appropriate error message.
    def test_place_invalid_candidate_vote(self, boto_mock):
        # Input event to our method to test.
        # The valid IDs for the candidates are A, B, C, and D
        expected_event = {'httpMethod': 'POST', 'body': "{\"candidate\": \"E\"}"}
        # Call the function to test.
        result = index.handler(expected_event, {})
        # Run unit test assertions to verify the expected calls to mock have occurred and verify the response.
        assert result.get('headers').get('Content-Type') == 'application/json'
        assert result.get('statusCode') == 400
        assert result.get('body') == 'You must vote for one of the following candidates - {}.'.format(index.get_allowed_candidates())

    ## Test the HTTP POST request flow that places a vote for a selected candidate but associated with an invalid key in the POST body.
    ## We expect to get back a failed (400) response with an appropriate error message.
    def test_place_invalid_data_vote(self, boto_mock):
        # Input event to our method to test.
        # "name" is not the expected input key.
        expected_event = {'httpMethod': 'POST', 'body': "{\"name\": \"D\"}"}
        # Call the function to test.
        result = index.handler(expected_event, {})
        # Run unit test assertions to verify the expected calls to mock have occurred and verify the response.
        assert result.get('headers').get('Content-Type') == 'application/json'
        assert result.get('statusCode') == 400
        assert result.get('body') == 'Missing "candidate" in request.'

    ## Test the HTTP POST request flow that places a vote for a selected candidate but not as a JSON string which the body of the request expects.
    ## We expect to get back a failed (400) response with an appropriate error message.
    def test_place_malformed_json_vote(self, boto_mock):
        # Input event to our method to test.
        # "body" receives a string rather than a JSON string.
        expected_event = {'httpMethod': 'POST', 'body': "Thor: Ragnarok"}
        # Call the function to test.
        result = index.handler(expected_event, {})
        # Run unit test assertions to verify the expected calls to mock have occurred and verify the response.
        assert result.get('headers').get('Content-Type') == 'application/json'
        assert result.get('statusCode') == 400
        assert result.get('body') == 'Invalid input! Expecting a JSON.'

if __name__ == '__main__':
    unittest.main()

I keep the code samples well commented so that it’s clear what each unit test accomplishes. Together, the tests cover the success conditions and the failure paths handled in the logic.

In my unit tests I use the patch decorator (@patch) from the mock library. @patch mocks the function you want to intercept (in this case, the botocore library’s _make_api_call function in the BaseClient class).

Before we commit our changes, let’s run the tests locally. On the terminal, run the tests again. If all the unit tests pass, you should expect to see a result like this:

You:~/environment $ python -m unittest discover vote-your-movie/tests
.....
----------------------------------------------------------------------
Ran 5 tests in 0.003s

OK
You:~/environment $

Upload to AWS

Now that the tests have passed, it’s time to commit and push the code to the source repository!

Add your changes

From the terminal, go to the project’s folder and use the following command to verify the changes you are about to push.

git status

To add the modified files only, use the following command:

git add -u

Commit your changes

To commit the changes (with a message), use the following command:

git commit -m "Logic and tests for the voting webservice."

Push your changes to AWS CodeCommit

To push your committed changes to CodeCommit, use the following command:

git push

In the AWS CodeStar console, you can see your changes flowing through the pipeline and being deployed. There are also links in the AWS CodeStar console that take you to this project’s build runs so you can see your tests running on AWS CodeBuild. The latest link under the Build Runs table takes you to the logs.

unit tests at codebuild

After the deployment is complete, AWS CodeStar should now display the AWS Lambda function and DynamoDB table created and synced with this project. The Project link in the AWS CodeStar project’s navigation bar displays the AWS resources linked to this project.

codestar resources

Because this is a new database table, there should be no data in it. So, let’s put in some votes. You can download Postman to test your application endpoint for POST and GET calls. The endpoint you want to test is the URL displayed under Application endpoints in the AWS CodeStar console.

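If you prefer the command line to Postman, the same endpoint can be exercised with a minimal Python sketch using only the standard library; the ENDPOINT value below is a placeholder for the URL shown under Application endpoints:

import json
import urllib.request

# Placeholder; copy the real URL from Application endpoints in the AWS CodeStar console.
ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/Prod"

# Cast a vote for candidate A (sending data makes this a POST request).
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps({"candidate": "A"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())

# Fetch the current vote totals with a GET request.
print(urllib.request.urlopen(ENDPOINT).read().decode())
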
Now let’s open Postman and look at the results. Let’s create some votes through POST requests. Based on this example, a valid vote has a value of A, B, C, or D.

Here’s what a successful POST request looks like:

POST success

Here’s what it looks like if I use some value other than A, B, C, or D:

 

POST Fail

Now I am going to use a GET request to fetch the results of the votes from the database.

GET success

And that’s it! You have now created a simple voting web service using AWS Lambda, Amazon API Gateway, and DynamoDB, and used unit tests to verify your logic so that you ship good code.

Happy coding!