#savethenet: copyright: what the EP will vote on

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/03/23/savethenet/

Thanks to Julia Reda, MEP, we can keep track of the versions of the draft Directive on Copyright in the Digital Single Market. It turns out that the most controversial provisions, Article 11 (publishers' rights) and Article 13 (upload filters), are now Articles 15 and 17 of the draft, and that is how they will remain in the version put to the vote, scheduled for Tuesday, 26 March, at 12:30.

Here is the full draft of the directive.

The provision we know as Article 13 now sits (as Article 17) on pages 120-129.

The experts speak

The Mueller Report: Predictions

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2212

Robert Mueller has just officially delivered his report on the nearly two-year investigation into Russian interference in the 2016 US presidential election. Its recipient, Attorney General William Barr, says that within a day or two he will hand it over to Congress and will ultimately make public whatever of it he can.

What should we expect?

In my view: nothing much.

Unfortunately, George Conway is absolutely right about Trump. (If you don't know him: George is a prominent conservative lawyer. He is also the husband of Kellyanne Conway, the member of Trump's administration who became famous for the phrase "alternative facts".) The esteemed president is literally a living textbook case of narcissistic personality disorder: he meets 9 out of 9 criteria, where 5 suffice. He also meets enough of the psychiatric definition of malignant narcissism for a diagnosis. And on top of those two, he meets the criteria for antisocial personality disorder (psychopathy). The combination of the first and the third can be extremely dangerous when cornered, especially in someone who is commander-in-chief of the army and holds the nuclear button. (In plain terms, it means that Trump places himself far higher than mentally healthy people place themselves, while placing society and his obligations to it far lower than mentally healthy people do. Bear in mind that even among the healthy, some in his place would not hesitate to start a civil war or attempt a coup to save themselves from the courts...)

Mueller is known for being brutally principled. That quality kept him at the head of the FBI for 12 years; he is its longest-serving director after the fearsome J. Edgar Hoover. At the same time, he cannot fail to understand what would happen if his report contained information for which Trump could be impeached. No sensible person with a sense of responsibility to his country wants to see it torn apart by civil war, so he will not take that risk. So even if he has found such information, my guess is that it will be kept hidden, and that the effort will go into preventing the same danger at the moment Trump has to hand over the presidency, in two or six years. There is enough time to prepare until then, and planning many moves ahead is a weak spot of Trump's.

(Does such information even exist? Is Trump working for the Russians? There is probably no way to know with indisputable certainty. My personal opinion, though, is that he is not; that they backed his election simply because they know perfectly well how ruinous sociopaths and psychopaths are at the head of a democratic state. Georgi Parvanov and Viktor Orban, anyone?... There is no need to give Trump orders: he will be irresponsible toward his country and will wreck and compromise its democracy simply because he sees a personal advantage in doing so.)

That is why I expect the report to contain nothing much. Trump will beat his chest until the end of his term about how the investigation showed he did not collude with the Russians, and will happily go on demolishing the United States' foreign-policy standing in the world. And that will be that.

Of course, Barr was chosen by Trump precisely for his absolute loyalty. Even if there is something serious against Trump in the report, he would probably do everything possible to hide it. I don't believe he could hide it forever, but as I said, I also don't believe there is much in the report worth hiding.

I also suspect Trump may be wrecking the US economy. His tax reform, in short, says: "Hand people money by cutting their taxes, and plug the budget with loans." Something like a father who takes out loans to buy his children toys so that they'll love him. But for such cases the old proverb applies: "they sang while they drank, and wept when they paid." Those loans will have to be repaid, and there is nowhere to repay them from except taxes. Whichever future president it falls to, unless he is equally irresponsible and rolls the loans over with new, bigger ones, will be the "bad guy" and the "robber", while Trump will be the "good guy". That is about as far as the public's memory goes, even in America, and Trump knows it perfectly well.

In principle, more spending should produce more economic growth, that growth more tax revenue, and so the loans should be easier to repay. (That is Trump's thesis.) In practice, however, the federal budget that will be repaying the loans is about 20% of US GDP. Trump's tax reform roughly doubled annual US borrowing from about 500 billion to about 1,000 billion dollars; in other words, it alone borrows about 3% of GDP per year. Meanwhile, GDP growth rose by about 1.5 points relative to Obama's last year, of which the 20% that flows into the federal budget amounts to 0.3% of GDP: one tenth of the borrowing that funds the reform. For the extra taxes to repay the loans, the efficiency of the process would have to exceed 100%. That is theoretically possible (the economy is not zero-sum the way physics is), but in practice the achieved efficiency comes out at around 10%. Anyone who can do arithmetic can see that, economically, "trumponomics" is a failure on a time delay.
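The back-of-the-envelope arithmetic above can be written out as a short script. All the figures are the post's own rough estimates, not official statistics:

```python
# Back-of-the-envelope check of the "trumponomics" arithmetic above.
# All inputs are the blog post's rough estimates, not official data.

FEDERAL_BUDGET_SHARE = 0.20   # federal budget is about 20% of US GDP
EXTRA_BORROWING = 0.03        # the tax reform borrows ~3% of GDP per year
GROWTH_BUMP = 0.015           # GDP growth up ~1.5 points vs. Obama's last year

# Extra federal revenue produced by the growth bump (as a share of GDP):
extra_revenue = GROWTH_BUMP * FEDERAL_BUDGET_SHARE

# "Efficiency" of the stimulus: extra revenue per unit of extra borrowing.
# Repaying the loans out of new taxes would require this to exceed 1.0 (100%).
efficiency = extra_revenue / EXTRA_BORROWING

print(f"extra revenue: {extra_revenue:.1%} of GDP")  # 0.3%
print(f"efficiency:    {efficiency:.0%}")            # 10%
```

With these inputs the stimulus recovers only about a tenth of what it borrows, which is the post's point.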

In an economy the size of America's, the collapse may not begin for another 3-4 years, by which time Trump will probably no longer be in power. And if he happens to have won a second term, he can always blame the sinking on the Democrats in Congress. If that doesn't work, the military and a civil war remain an option (see above on the combination of narcissistic and antisocial disorders). But neither finding a scapegoat nor even installing a dictatorship fixes an economy.

And the economic picture may not even be as rosy as the official reports suggest. The main indicator of revival that they emphasize is falling unemployment. What gets overlooked is that the Trump administration has quietly cut immigration and the issuance of work visas quite severely. As a result, over the past two years roughly 1% of the US workforce went unadmitted; in other words, about 1 percentage point of the drop in unemployment is actually due to that cut, not to people being hired. And given that work visas usually go to the most capable, hard-working, and productive people, real growth may well be heading for a decline of more than 1 point; and its increase over Obama's last year, I repeat, is about 1.5 points.

I would very much like these calculations of mine to be wrong. That is entirely possible; I am no economics guru. But if the US enters an economic crisis, it cannot fail to affect the whole world, including us. Well... we shall see.

Meanwhile, I personally expect the Mueller report to contain nothing much.

Friday Squid Blogging: New Research on Squid Camouflage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/friday_squid_bl_669.html

From the New York Times:

Now, a paper published last week in Nature Communications suggests that their chromatophores, previously thought to be mainly pockets of pigment embedded in their skin, are also equipped with tiny reflectors made of proteins. These reflectors aid the squid to produce such a wide array of colors, including iridescent greens and blues, within a second of passing in front of a new background. The research reveals that by using tricks found in other parts of the animal kingdom — like shimmering butterflies and peacocks — squid are able to combine multiple approaches to produce their vivid camouflage.

Researchers studied Doryteuthis pealeii, or the longfin squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

AWS Security Profiles: Nathan Case, Senior Security Specialist, Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-nathan-case-senior-security-specialist-solutions-architect/

Leading up to the AWS Santa Clara Summit, we’re sharing our conversation with Nathan Case, who will be presenting at the event, so you can learn more about him and some of the interesting work that he’s doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for three years, and I’m a Senior Security Specialist, Solutions Architect. I started out working on our Commercial Sector team, but I moved fairly quickly to Public Sector, where I was the tech lead for our work with the U.S. Department of Defense (DOD) and the first consultant in our U.S. DOD practice. I was offered a position back on the commercial side of the company, which entailed building out how we, as AWS, think about incident response and threat detection. I took that job because it was way too interesting to pass up, and it gave me an opportunity to have more impact for our customers. I love doing incident response and threat detection, so I had that moment where I thought, “Really? You’re going to pay me to do this?” I couldn’t turn it down. It did break my heart a little to step away from the public sector, but it’s great getting to work more intimately with some of our commercial customers.

What do you wish more people understood about incident response?

Because of my role, I generally talk with customers after one of their applications has been breached or something has been broken into. The thing I wish more people knew is that it will be okay. This happens to a lot of people, and it’s not the end of the world. Life will go on.

That said, the process does work much better if you call me before an incident happens. Prevention is so much better than the cure. I’m happy to help during an incident, but there are lots of ways we can proactively make things better.

What’s your favorite part of your job?

I think people like myself, who enjoy incident response, have a slight hero complex: You get to jump in, get involved, and make a difference for somebody. I love walking away from an engagement where something bad has happened, knowing that I’ve made a difference for that person and they’re now a happy customer again.

I also enjoy getting to do the pre-work sessions. While I have to make sure that customers understand that security is something they have to do, I help them reach the point where they’re happy about that fact. I get to help them realize that it’s something they’re capable of doing and it’s not as scary as they thought.

What’s the most challenging part of your job?

It’s that moment when I get the call—maybe in the middle of the night—and somebody says this thing has just happened, and can I help them?

The hardest aspect of that conversation is working through the event with the CISO or the individual who’s in charge of the response and convincing them that all the steps they’ll need to take will still be there tomorrow and that there’s nothing else they can do in the moment. It’s difficult to watch the pain that accompanies that realization. There’s eventually a certain catharsis at the end of the conversation, as the customer starts to see the light at the end of the tunnel and to understand that everything is going to be all right. But that first moment, when the pit has dropped out of someone’s stomach and I have to watch it on their face—that’s hard.

What’s the most common misperception about cloud security that you encounter?

I used to work in data centers, so I have a background that’s steeped in building out networking switches, and stacks, and points of presence, and so on. I spent a lot of time protecting and securing these things, and doing some impressive data center implementations. But now that I work in the cloud, I look back at that whole experience and ask, “Why?”

I think the misconception still exists that it’s easier to protect a data center than the cloud. But frankly, I wouldn’t be doing this job if I thought data centers were more secure. They’re not. There are so many more things that you can see and take care of in a cloud environment. You’re able to detect more threats than you could in a data center, and there’s so much more instrumentation to enable you to keep track of all of those threats.

What does cloud security mean to you, personally?

I view my current role as a statement of my belief in cloud security; it’s a way for me to offer help to a large number of people.

When I worked for the U.S. Department of Defense, through AWS, it was really important to me to help protect the country and to make sure that we were safe. That’s still really important to me—and I believe the cloud can help achieve that. If you look at the world as a whole, I think there’s evidence of a nefarious substructure that operates in a manner similar to organized crime: It exists, but it’s hopefully not something that most people have to see or interact with. I feel a certain calling to be one of the individuals that helps shield others from these influences that they generally wouldn’t have the knowledge to protect themselves against. For example, I’ve done work that helps protect people from attacks by nation states. It’s very satisfying to be able to help defend and protect customers from things like that.

Five years from now, what changes do you think we’ll see across the cloud security landscape?

I think that cloud security will begin shifting toward the question, “How did you implement? Is your architecture correct?” Right now, I hear this statement a lot: “We built this application like we have for the last [X] years. It’s fine!” And I believe that attitude will disappear as it becomes painfully obvious that it’s not fine, and it wasn’t fine. The way we architect and build and secure applications will change dramatically. Security will be included to begin with, and designing for failure will become the norm. We’ll see more people building security and detection in layers so that attackers’ actions can be seen and responded to automatically. With the services that are coming into being now, the options for new applications are just so different. It’s very exciting to see what they are and how we can secure applications and infrastructure.

You’re hosting a session at the Santa Clara summit titled “Threat detection and mitigation at AWS.” Where did the idea for this topic come from?

There’s no incident response without the ability to detect the threat. As AWS (and, frankly, as technology professionals), we need to teach people how to detect threats. That includes teaching them appropriate habits and appropriate architectures that allow for detecting, rather than simply accepting the attitude that “whatever happens, happens.”

My talk focuses on describing how you need to architect your environment so that you’re able to see a threat when it’s present. This enables you to know that there’s an issue in advance, rather than finding out two and a half years later that a threat has been present all along and you just didn’t know about it. That’s an untenable scenario to me. If we begin to follow appropriate cloud hygiene, then that risk goes way down.

What are some of the most common challenges customers face when it comes to threat detection in the cloud?

I often see customers struggling to let go of the idea that a human has to touch production to make it work correctly. I think you can trace this back to the fact that people are used to having a rack down in the basement that they can go play with. As humans, we get locked in this “we’re used to it” concept. Change is scary! Technology is evolving and people need to change with it and move forward along the technical path. There are so many opportunities out there for someone who takes the time to learn about new technologies.

What are you hoping your audience will do differently as a result of your session?

Let me use sailing a boat as an example: If you don’t have a complex navigation system and you can’t tell exactly which course you’re on, there are times when you pick something off in the distance and steer toward that. You’ll probably have to correct course as you go. If the wind blows heavily, you might have to swing left or right before making your way back to your original course. But you have something to steer toward.

I hope that my topic gives people that end-point, that place in the distance to travel toward. I don’t think the talk will make everyone suddenly jump up and take action—although it would be great if that happens! But I’d settle for the realization that, “Gee, wouldn’t it be nice if we could get to the place Nathan is talking about?” Simply figuring out what to steer toward is the obstacle standing in the way of a lot of people.

You’re known for your excellent BBQ. Can you give us some tips on cooking a great brisket?

I generally cook brisket for about 18 hours, between 180 – 190 degrees Fahrenheit, using a homemade dry rub, heavy on salt, sugar, and paprika. I learned this technique (indirectly: https://rudysbbq.com/) from a guy named Rudy, who lived in San Antonio and opened a restaurant called Rudy’s Bar-B-Q (the “worst BBQ in Texas”) that I used to visit every summer. If you’re using a charcoal grill, maintaining 180 – 190 degrees for eighteen hours is a real pain in the butt—so I cheat and use an electric smoker. But if you do this a fair bit, you’ll notice that 180 – 190 isn’t hot enough to generate enough smoke, and you generally want brisket to be smoky. I add some smoldering embers to the smoke tray to keep it smoking. (I know that an electric smoker is cheating. I’m sure Rudy would be horribly offended.)


Nathan Case

Nathan is a Senior Security Specialist, Solutions Architect. He joined AWS in 2016.

ECtHR: a new judgment on the liability of online outlets for comments

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/03/22/echr-8-10/

The ECtHR has once again dealt with the questions discussed in Delfi v. Estonia. That was the case about the liability of a site owner for comments in its forums, in which the ECtHR held that the owner may be liable under certain circumstances.

After that judgment (confirmed, incidentally, by the Grand Chamber in 2015), the Court addressed forum liability again and, in Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (2016), held that the site owner was not liable. This was meant to make clear that there is no general rule: in some cases the owner is liable, in others not. The assessment is case-specific each time, but there are some common principles.

The third ruling, until now the most recent, came in Pihl v. Sweden (application no. 74742/14), where the application was declared inadmissible, chiefly because the comment contained no incitement to violence and was published on a small blog run by a non-profit association, which cut off access to it the day after the applicant's request and nine days after the comment had been posted.

Now there is word of a fourth judgment on the liability of a site owner for citizens' comments: the judgment of 19 March 2019 in Høiness v. Norway. Weighing the balance between Articles 8 and 10 of the Convention, the European Court of Human Rights unanimously held that there was no violation of Article 8 (right to respect for private life).

Mona Høiness, a well-known lawyer, sued the company Hegnar Media AS and Mr X, an editor working for the internet portal Hegnar Online, for defamation. She claimed that her honor had been impugned in three comments posted anonymously on the Hegnar Online forum. The defendants argued that they had not known about the comments and had removed them as soon as they learned of them. The Norwegian courts ruled in favor of the media outlet.

Mona Høiness turned to the Court in Strasbourg.

The ECtHR's judgment first recalls that, in weighing the competing interests under Articles 8 and 10, the Court has established general principles, summarized in Delfi AS v. Estonia.

In this case:

  • the statements did not amount to defamation under national law;
  • in any event they were neither hate speech nor incitement to violence;
  • the discussion forums were not particularly integrated into the presentation of news, so the comments did not appear to be a continuation of editorial articles;
  • as to the measures taken by Hegnar Online: a moderation system was in place, readers could also click a "warning" button and take part in moderation, and one of the comments was even deleted on the outlet's own initiative before any notice arrived from Ms Høiness's lawyer. The outlet acted appropriately.

In line with the principles established in Delfi AS v. Estonia, there was no ground for the Court to substitute a different view for that of the national courts.

The Court accordingly found that the national courts had acted within their margin of appreciation and that there was no violation of Article 8 ECHR.

Code of Practice on Disinformation: monitoring results

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/03/22/code/

On 20 March the European Commission published the latest monthly reports by Google, Twitter, and Facebook on the progress made in February toward fulfilling their commitments to fight disinformation. The online platforms that signed the Code of Practice on Disinformation have committed to reporting their progress ahead of the European Parliament elections in May 2019.

  • All platforms have confirmed that their transparency tools for political advertising will be operational before the European elections in May.
  • The Commission calls on the online platforms to work with researchers and fact-checkers. Such access could help produce a complete and impartial picture of disinformation patterns and trends, and should be implemented in full compliance with the General Data Protection Regulation.
  • The Commission insists that the tools being developed by the online platforms be available in all 28 EU member states, not just some of them.

Scribus team mourns the passing of Peter “mrdocs” Linnell

Post Syndicated from jake original https://lwn.net/Articles/783763/rss

The team behind the Scribus libre desktop-publishing tool is mourning the passing of Peter Linnell. “It is no understatement to say that without Peter Scribus wouldn’t be what it is today. It was Peter who spotted the potential of Franz Schmid’s initially humble Python program and, as a pre-press consultant at the time, contacted Franz to make him aware of the necessities of PostScript and PDF support, among other things. Peter also wrote the first version of the Scribus online documentation, which resulted in his nickname ‘mrdocs’ in IRC and elsewhere. Until recently, and despite his deteriorating health, Peter continued to be involved in building and releasing new Scribus versions.

Scribus was the project he helped to set on track and which marked the beginning of his journey into the world of Free Software development. While it remained at the heart of his commitments to Open Source in general and Libre Graphics software in particular, Peter contributed to Free Software in many other ways as well. For example via contributions to projects related to freedesktop.org, as a package builder of many Free programs for several Linux distributions on the openSUSE Build Service, and later as an openSUSE board member. Peter was also crucial in bringing the Libre Graphics community together by way of sharing his expertise with other graphics-oriented projects and his assistance in organizing the first Libre Graphics Meetings. In the sometimes ego-driven and often emotional world of Open Source development, Peter managed to get along very well with almost everybody and never lost his sense of humour.”

BSP asks the Constitutional Court whether the personal-data law amendments amount to censorship

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/03/22/const-zzld-journ/

The headline is Kapital's: the publication reports that 55 BSP MPs are challenging the provision of the Personal Data Protection Act (ZZLD) that implements the journalistic exemption provided for in the GDPR.

The President vetoed the provision, but the majority voted it through again.

Article 25z(2) of the ZZLD

In the event of disclosure through transmission, dissemination, or another means by which personal data collected for the purposes under paragraph 1 become accessible, the balance between freedom of expression and the right to information, on the one hand, and the right to protection of personal data, on the other, shall be assessed on the basis of the following criteria, insofar as they are relevant:
1. the nature of the personal data;
2. the impact that disclosure or public exposure of the personal data would have on the data subject's privacy and good name;
3. the circumstances in which the personal data became known to the controller;
4. the character and nature of the statement through which the rights under paragraph 1 are exercised;
5. the significance of the disclosure or public exposure of the personal data for clarifying a matter of public interest;
6. whether the data subject holds an office under Article 6 of the Act on Counteracting Corruption and Forfeiture of Illegally Acquired Property, or is a person who, owing to the nature of their activity or their role in public life, enjoys reduced protection of personal privacy or whose actions have an impact on society;
7. whether the data subject has, by their own actions, contributed to the disclosure of their personal data and/or information about their private and family life;
8. the purpose, content, form, and consequences of the statement through which the rights under paragraph 1 are exercised;
9. the conformity of the statement through which the rights under paragraph 1 are exercised with citizens' fundamental rights;
10. other circumstances relevant to the specific case.

The petition to the Constitutional Court, published on lex.bg, argues that the ZZLD amendment contradicts the rule-of-law principles of Article 4 of the Constitution, and relies extensively on the Constitutional Court's Decision 7/1996 on freedom of expression.

The CPC cleared the acquisition of Nova Broadcasting Group

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/03/22/cpc-nova/

Advance Media Group, a company connected with K. Domuschiev and G. Domuschiev, intends to acquire sole control of Nova Broadcasting Group AD.

The CPC (Commission for Protection of Competition) received a request to assess the transaction and rule either that it does not constitute a concentration; or that the concentration falls outside the scope of Article 24 of the Competition Protection Act; or to clear the concentration on the ground that it does not create or strengthen a dominant position that would significantly impede effective competition on the relevant market.

The CPC's ruling is especially intriguing in light of its recent decision refusing to let Kellner acquire Nova Broadcasting Group:

Given the characteristics of each of the relevant markets in the media sector, it was established that the group being acquired has significant financial and organizational resources, the ability to realize economies of scale and scope, and an established image. The considerable number of mass-media outlets at the merged group's disposal would give it a substantial advantage over the other participants providing media services.

In analyzing the notified transaction, the Commission took into account the target undertaking's leading positions in media services, which in turn raises justified concerns about the transaction's effect on the competitive environment in the above markets, as well as the horizontal overlap of the parties' activities in the online commerce market.

The parties to the concentration would thus have both the incentive and the real ability to change their commercial policy in various forms: restricting access, raising prices, or altering the terms of concluded contracts. In view of the above, and given the acquirer's significant experience and investment intentions, the transaction creates the preconditions for establishing or strengthening a dominant position that would significantly impede competition on the relevant markets. Such conduct would restrict and harm not only competition on the market but also the interests of end users, given the social significance of the media.

On 21 March 2019 the CPC announced its decision, this time clearing the transaction:

Nova's actual activities include:

  • creating television content for its own use, as well as acquiring rights to distribute television content;
  • distributing television content: Nova creates and distributes 7 (seven) television channels with national coverage: Nova TV, Diema, Kino Nova, Diema Family, Nova Sport, Diema Sport, and Diema Sport 2;
  • television advertising: Nova sells access to its audience by airing advertising material for advertisers and advertising agencies on the channels listed above;
  • maintaining websites that mainly provide information about the programs of the TV channels it operates, along with the option to watch some of the broadcasts online: https://nova.bg/, https://play.nova.bg/, https://diemaxtra.nova.bg/, https://play.diemaxtra.bg/, http://www.diema.bg, https://kino.nova.bg/, https://diemafamily.nova.bg/. In addition, Nova has developed the Play DiemaXtra service, a website and mobile application that allow linear viewing of the Diema Sport and Diema Sport 2 channel package, as well as pay-per-view (PPV) access to individual sports events chosen by the user;
  • providing services (…..)*.
    The company Atika Eva AD, controlled by Nova Broadcasting Group AD, specializes in publishing the monthly magazines Eva, Playboy, Esquire, Joy, Grazia, and OK, through which it carries out publishing, trade in printed works, and advertising in print publications. It also administers the websites of some of those magazines, on which it sells internet advertising on its own.
    The other undertakings controlled by Nova Broadcasting Group AD, through Net Info AD, provide the following key products and services:
  • web-based email (www.abv.bg), which allows end users to open mailboxes and exchange messages; http://www.abv.bg also lets users build an address book and communicate with other users in virtual chat rooms and online forums;
  • electronic directories: a platform for organizing, storing, and sharing files online, http://www.dox.bg;
  • search: in cooperation with the international provider of this service, Google, the company offers web search on its main pages, http://www.abv.bg and http://www.gbg.bg;
  • news and information: Net Info provides digital news and information through the news site http://www.vesti.bg, the specialized sports news site http://www.gong.bg, the weather site http://www.sinoptik.bg, the financial information site http://www.pariteni.bg, and http://www.edna.bg, a site for the modern woman;
  • car listings: http://www.carmarket.bg allows users to post and browse listings for cars for sale;
  • price checks and product comparison: via http://www.sravni.bg, internet users can compare the prices of products sold in various online stores.

Markets on which the transaction will have an impact.
The acquiring group carries out a wide range of activities in Bulgaria and participates in numerous markets in different areas. The target undertaking and the companies under its control operate on media markets (television, print and internet). A certain link between their activities exists on the market for television content, more precisely in the segment of acquiring distribution rights, on which both the target „Нова Броудкастинг Груп“ АД and „Футбол Про Медия“ ЕООД,
part of the group of the acquirer „Адванс Медиа Груп” ЕАД, operate.
„Футбол Про Медия“ ЕООД has concluded the following contracts:
(…..)* On the basis of these contracts, (…..). From the above it can be concluded that „Футбол Про Медия“ ЕООД carries out activities related to (…..) rights for television distribution. The Нова group also creates and buys television content.
Consequently, the transaction will have an impact only on the market for television content, and in particular on the acquisition of distribution rights ((…..)*), where there is a certain overlap between the activities of the parties to the concentration.

КЗК (the Bulgarian Commission for Protection of Competition) takes into account the fact that the parties to the transaction are not direct
competitors on the relevant market; their relationship is vertical, determined by the capacity in which each of them operates: „Футбол Про Медия“ ЕООД acts as a (…..)* of rights, while the target undertaking is a buyer of television rights.
By their nature, the rights to broadcast sporting events are exclusive, and it is common commercial practice for them to be held by a single undertaking for a defined period of time and for a defined territory. In the case at hand (…..). Based on the data analysed, the Commission accepts that a significant number of content traders operate on the relevant market, from which the television operators in Bulgaria buy the rights to distribute sporting events. The presence of a large number of competitors, including established foreign names, leads to the conclusion that they will be able to exert effective competitive pressure on the new economic group, which will therefore not be independent of them in its commercial behaviour. In addition, given the requirements of the Radio and Television Act and the characteristics of the product "television content", КЗК finds that the market for television content has surmountable barriers and is open to entry by new participants. In its analysis the Commission also takes into account the fact that (…..).
In view of the above, and since the undertakings participating in the concentration operate at different levels of the market for television content, the Commission finds that the notified transaction will not significantly change the market position of НБГ
on the relevant market and accordingly has no potential to harm the competitive environment on it.
Based on the assessment carried out, it can be concluded that the planned concentration will not lead to the creation or strengthening of a dominant position that would significantly restrict or impede effective competition on the analysed
relevant market. Consequently, the notified transaction could not give rise to anticompetitive effects and should be unconditionally authorised under Art. 26, para. 1 of the Competition Protection Act.

The concentration is authorised.

[$] The congestion-notification conflict

Post Syndicated from corbet original https://lwn.net/Articles/783673/rss

Most of the time, the dreary work of writing protocol standards at
organizations like the IETF and beyond happens in the background, with most
of us being blissfully unaware of what is happening. Recently, though, a
disagreement over protocols for congestion notification and latency
reduction has come to a head in a somewhat messy conflict. The outcome of
this discussion may well affect how well the Internet of the future works —
and whether Linux systems can remain first-class citizens of that net.

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/783757/rss

Security updates have been issued by CentOS (firefox), Debian (cron and ntfs-3g), Fedora (firefox, ghostscript, libzip, python2-django1.11, PyYAML, tcpflow, and xen), Mageia (ansible, firefox, and ImageMagick/GraphicsMagick), Red Hat (ghostscript), Scientific Linux (firefox and ghostscript), SUSE (libxml2, unzip, and wireshark), and Ubuntu (firefox, ghostscript, libsolv, ntfs-3g, p7zip, and snapd).

The Raspberry Pi shop, one month in

Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/the-raspberry-pi-shop-one-month-in/

Five years ago, I spent my first day working at the original Pi Towers (Starbucks in Cambridge). Since then, we’ve developed a whole host of different products and services which our customers love, but there was always one that we never got around to until now: a physical shop. (Here are opening times, directions and all that good stuff.)

Years ago, my first idea was rather simple: rent a small space for the Christmas month and then open a pop-up shop just selling Raspberry Pis. We didn’t really know why we wanted to do it, but suspected it would be fun! We didn’t expect it to take five years to organise, but last month we opened the first Raspberry Pi store in Cambridge’s Grand Arcade – and it’s a much more complete and complicated affair than that original pop-up idea.

Given that we had access to a bunch of Raspberry Pis, we thought that we should use some of them to get some timelapse footage of the shop being set up.

Raspberry Pi Shop Timelapse

Uploaded by Raspberry Pi on 2019-03-22.

The idea behind the shop is to reach an audience that wouldn’t necessarily know about Raspberry Pi, so its job is to promote and display the capabilities of the Raspberry Pi computer and ecosystem. But there’s also plenty in there for the seasoned Pi hacker: we aim to make sure there’s something for you whatever your level of experience with computing is.

Inside the shop you’ll find a set of project centres. Each one contains a Raspberry Pi project tutorial or example that will help you understand one advantage of the Raspberry Pi computer, and walk you through getting started with the device. We start with a Pi running Scratch to control a GPIO pin, turning an LED on and off. Another demos a similar project in Python, reading a push button and lighting three LEDs (can you guess what colour the three LEDs are?). You can also see project centres based around Kodi and RetroPie demonstrating our hardware (the TV HAT and the Pimoroni Picade console), and an area demonstrating the various Raspberry Pi computer options.


There is a soft seating area, where you can come along, sit and read through the Raspberry Pi books and magazines, and have a chat with the shop staff. We’ve also got shelves of stock with which you can fill yer boots: not just official Raspberry Pi products, but merchandise from all over the ecosystem, totalling nearly 300 different lines (with more to come). Finally, we’ve got the Raspberry Pi engineering desk, where we’ll try to answer even the most complex of your questions.

Come along, check out the shop, and give us your feedback. Who knows – maybe you’ll find some official merchandise you can’t buy anywhere else!

The post The Raspberry Pi shop, one month in appeared first on Raspberry Pi.

timeShift(GrafanaBuzz, 1w) Issue 82

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2019/03/22/timeshiftgrafanabuzz-1w-issue-82/

Welcome to TimeShift

This week we have a number of articles from the Grafana Labs blog we’d like to share, plus a new panel plugin and other updates. Also, take a look at our upcoming events. If you’re going to be attending any of them, please come and say hello!

See an article we missed? Contact us.


Latest Stable Release: Grafana v6.0.2

Bug Fixes
  • Alerting: Fixed issue with AlertList panel links resulting in panel not found errors. #15975, @torkelo
  • Dashboard: Improved error handling when rendering dashboard panels. #15970, @torkelo
  • LDAP: Fix allow anonymous server bind for ldap search. #15872, @marefr
  • Discord: Fix discord notifier so it doesn’t crash when no image is generated. #15833, @marefr
  • Panel Edit: Prevent search in VizPicker from stealing focus. #15802, @peterholmberg
  • Datasource admin: Fixed url of back button in datasource edit page, when root_url configured. #15759, @dprokop

Check out all the new features and enhancements in v6.0

Download Grafana v6.0.2 Now


From the Blogosphere

Everything You Need to Know About the OSS Licensing War, Part 1.: Grafana Labs Co-founder and CEO Raj Dutt describes how a new breed of commercial open source company and dominance of the cloud has set off a licensing war that calls into question the very meaning of open source, and has the potential to change how software is developed, funded, and delivered. As a primer, check out the spirited panel discussion on the same subject from GrafanaCon.

GrafanaCon L.A. Recap: Grafana 6.0, LGTM, and More!: If you didn’t get a chance to join us last month at GrafanaCon LA we wanted to put together a recap of what you missed, plus link to all the videos of the talks to check out.

Pro Tips: How Booking.com Handles Millions of Metrics Per Second with Graphite: This talk from GrafanaCon EU 2018 discusses how Booking.com has scaled up its Graphite monitoring stack to handle over 10 million unique points per second and the challenges of handling metrics at scale.

Getting Started with Application Monitoring with Prometheus on VMware Enterprise PKS: With the migration from monolith applications to microservices and Kubernetes, the way you monitor your apps is changing. But where do you start? Xiao shows you how to get started painlessly with Prometheus and Grafana.

Visualize your Azure Sentinel data with Grafana: Learn how to use Grafana and the Log Analytics connector that Microsoft provides for Grafana, to visualize your Azure Sentinel data.

Setup Prometheus/Grafana Monitoring On Azure Kubernetes Cluster (AKS): This walkthrough provides some background info on Grafana and Prometheus and takes you step by step to install and configure everything on AKS.


Grafana Plugin Update

We have a new panel plugin and a few plugin updates to share this week. To update or install any plugin on your on-prem Grafana, use the grafana-cli tool, or for Hosted Grafana, update with one-click.

NEW PLUGIN

Flowcharting Panel – Create flowcharts, diagrams, floorplans and more with this new panel.

Install

UPDATED PLUGIN

JSON Data Source – Version 0.1.3 allows you to use variables in the Additional JSON Data field.

Install

UPDATED PLUGIN

Statusmap Panel – New enhancements include Grafana 6.0 support, InfluxDB and MySQL support, initial annotation support and more.

Install

UPDATED PLUGIN

Sun and Moon Data Source – 0.1.4 of the Sun and Moon Data Source adds support for datasource provisioning, and annotations for noon and midnight.

Install

UPDATED PLUGIN

TrackMap Panel – This latest update fixes issues with Grafana v6.X and adds support for Snapshots.

Install


Upcoming Events

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

Webinar: Why Open Source Works for DevOps Monitoring | 03.26.19, 1PM EDT:

Learn how to use open source tools for performance monitoring of your stacks, systems, and sensors in a way that is faster, easier, and scalable.

In this webinar, Jacob Lisi from Grafana and Chris Churilo from InfluxData will provide you with step-by-step instructions on how to use InfluxDB with Grafana, two popular open source projects, to capture and analyze untapped data from virtual and physical assets, giving you new visibility into customer experience and business growth.

Register Now

DevOps Days Vancouver 2019 | Vancouver BC, Canada – 03.29.19-03.30.19:

Callum Styan: Grafana Loki – Log Aggregation for Incident Investigations – Get an inside look at Grafana Labs’ latest open source log aggregation project Loki, and learn how to better investigate issues using Grafana’s new Explore UI.

Register Now

KubeCon + CloudNativeCon Europe 2019 | Barcelona, Spain – 05.20.19-05.23.19:

May 21 – Tom Wilkie, Intro: Cortex
May 22 – Tom Wilkie, Deep Dive: Cortex

Cortex provides horizontally scalable, highly available, multi-tenant, long term storage for Prometheus metrics, and a horizontally scalable, Prometheus-compatible query API. Cortex allows users to deploy a centralized, globally aggregated view of all their Prometheus instances, storing data indefinitely. In this talk we will discuss the benefits of, and how to deploy, a fully disaggregated, microservice oriented Cortex architecture. We’ll also discuss some of the challenges operating Cortex at scale, and what the future holds for Cortex. Cortex is a CNCF sandbox project.

May 23 – Tom Wilkie, Grafana Loki: Like Prometheus, But for logs.
Loki is a horizontally-scalable, highly-available log aggregation system inspired by Prometheus. It is designed to be cost effective and easy to operate, as it does not index the contents of the logs, but rather labels for each log stream.

Loki initially targets Kubernetes logging, using Prometheus service discovery to gather labels for log streams. As such, Loki enables you to easily switch between metrics and logs, streamlining the incident response process – a workflow we have built into the latest version of Grafana.

In this talk we will discuss the motivation behind Loki, its design and architecture, and what the future holds. It’s early days after the launch at KubeCon Seattle, but so far the response to the project has been overwhelming, with more than 4.5k GitHub stars and over 12 hours at the top spot on Hacker News.

May 23 – David Kaltschmidt, Fool-Proof Kubernetes Dashboards for Sleep-Deprived Oncalls
Software running on Kubernetes can fail in various, but surprisingly well-defined ways. In this intermediate-level talk David Kaltschmidt shows how structuring dashboards in a particular way can be a helpful guide when you get paged in the middle of the night. Reducing cognitive load makes oncall more effective. When dashboards are organized hierarchically on both the service and the resource level, troubleshooting becomes an exercise of divide and conquer. The oncall person can quickly eliminate whole areas of problems and zone in on the real issue. At that point a single service or instance should have been identified, for which more detailed debugging can take place.

Register Now

Monitorama PDX 2019 | Portland, OR – 06.03.19-06.05.19:

Tom Wilkie: Grafana Loki – Prometheus-inspired open source logging – Imagine if you had Prometheus for log files. In this talk we’ll discuss Grafana Loki, our attempt at creating just that.

Learn More


We’re Hiring

Have fun solving real world problems building the next generation of open source tools from anywhere in the world. Check out all of our current opportunities on our careers page.

View All our Open Positions


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard or monitoring related tweet and show it off! #monitoringLove

Awesome! Having dashboards up for the whole team to see is beneficial for all, and helps democratize metrics.


How are we doing?

We’re always looking to make TimeShift better. If you have feedback, please let us know! Email or send us a tweet, or post something at our community forum.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Setting permissions to enable accounts for upcoming AWS Regions

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/setting-permissions-to-enable-accounts-for-upcoming-aws-regions/

The AWS Cloud spans 61 Availability Zones within 20 geographic regions around the world, and has announced plans to expand to 12 more Availability Zones and four more Regions: Hong Kong, Bahrain, Cape Town, and Milan. Customers have told us that they want an easier way to control the Regions where their AWS accounts operate. Based on this feedback, AWS is changing the default behavior for these four and all future Regions so customers will opt in the accounts they want to operate in each new Region. For new AWS Regions, Identity and Access Management (IAM) resources such as users and roles will only be propagated to the Regions that you enable. When the next Region launches, you can enable this Region for your account using the AWS Regions setting under My Account in the AWS Management Console. You will need to enable a new Region for your account before you can create and manage resources in that Region. At this time, there are no changes to existing AWS Regions.

We recommend that you review who in your account will have access to enable and disable AWS Regions. Additionally, you can prepare for this change by setting permissions so that only approved account administrators can enable and disable AWS Regions. Starting today, you can use IAM permissions policies to control which IAM principals (users and roles) can perform these actions.

In this post, I describe the new account permissions for enabling and disabling new AWS Regions. I also describe the updates we’ve made to deny these permissions in the AWS-managed PowerUserAccess policy that many customers use to restrict access to administrative actions. For customers who use custom policies to manage administrative access, I show how to secure access to enable and disable new AWS Regions using IAM permissions policies and Service Control Policies in AWS Organizations. Finally, I explain the compatibility of Security Token Service (STS) session tokens with Regions.

IAM Permissions to enable and disable new AWS Regions for your account

To control access to enable and disable new AWS Regions for your account, you can set IAM permissions using two new account actions. By default, IAM denies access to new actions unless you have explicitly allowed these permissions in an existing policy. You can use IAM permissions policies to allow or deny the actions to enable and disable AWS Regions to IAM principals in your account. The new actions are:

  • account:EnableRegion – Allows you to opt in an account to a new AWS Region (for Regions launched after March 20, 2019). This action propagates your IAM resources such as users and roles to the Region.
  • account:DisableRegion – Allows you to opt out an account from a new AWS Region (for Regions launched after March 20, 2019). This action removes your IAM resources such as users and roles from the Region.

When granting permissions using IAM policies, some administrators may have granted full access to AWS services except for administrative services such as IAM and Organizations. These IAM principals will automatically get access to the new administrative actions in your account to enable and disable AWS Regions. If you prefer not to provide account permissions to enable or disable AWS Regions to these principals, we recommend that you add a statement to your policies to deny access to account permissions. To do this, you can add a deny statement for account:*. As new Regions launch, you will be able to specify the Regions where these permissions are granted or denied.
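As a minimal sketch of that recommendation, a deny statement covering all account actions might look like the following policy fragment (the Sid is an arbitrary label; attach the statement alongside the principal's existing permissions):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccountRegionActions",
            "Effect": "Deny",
            "Action": "account:*",
            "Resource": "*"
        }
    ]
}
```

Because an explicit deny overrides any allow, this statement blocks the Region enable/disable actions even for principals that otherwise have broad access.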

At this time, the account actions to enable and disable AWS Regions apply to all upcoming AWS Regions launched after March 20, 2019. To learn more about managing access to existing AWS Regions, review my post, Easier way to control access to AWS regions using IAM policies.

Updates to AWS managed PowerUserAccess Policy

If you’re using the AWS managed PowerUserAccess policy to grant permissions to AWS services without granting access to administrative actions for IAM and Organizations, we have updated this policy as shown below to exclude access to account actions to enable and disable new AWS Regions. You do not need to take further action to restrict these actions for any IAM principals for which this policy applies. We updated the first policy statement, which now allows access to all existing and future AWS service actions except for IAM, AWS Organizations, and account. We also updated the second policy statement to allow the read-only action for listing Regions. The rest of the policy remains unchanged.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": [
                "iam:*",
                "organizations:*",
				"account:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:DeleteServiceLinkedRole",
                "iam:ListRoles",
                "organizations:DescribeOrganization",
	  			"account:ListRegions"
            ],
            "Resource": "*"
        }
    ]
}

Restrict Region permissions across multiple accounts using Service Control Policies in AWS Organizations

You can also centrally restrict access to enable and disable Regions for all principals across all accounts in AWS Organizations using Service Control Policies (SCPs). You would use SCPs to restrict this access if you do not anticipate using new Regions. SCPs enable administrators to set permission guardrails that apply to accounts in your organization or an organization unit. To learn more about SCPs and how to create and attach them, read About Service Control Policies.

Next, I show how to restrict the Region enable and disable actions for accounts in an AWS organization using an SCP. In the policy below, the Effect block of the policy statement is set to explicitly deny the actions. In the Action block, you add the new permissions account:EnableRegion and account:DisableRegion.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "account:EnableRegion",
                "account:DisableRegion"
            ],
            "Resource": "*"
        }
    ]
}

Once you create the policy, you can attach this policy to the root of your organization. This will restrict permissions across all accounts in your organization.

Check if users have permissions to enable or disable new AWS Regions in my account

You can use the IAM Policy Simulator to check if any IAM principal in your account has access to the new account actions for enabling and disabling Regions. The simulator evaluates the policies that you choose for a user or role and determines the effective permissions for each of the actions that you specify. Learn more about using the IAM Policy Simulator.

Region compatibility of AWS STS session tokens

For new AWS Regions, we’re also changing region compatibility for session tokens from the AWS Security Token Service (STS) global endpoint. As a best practice, we recommend using the regional STS endpoints to reduce latency. If you’re using regional STS endpoints or don’t plan to operate in new AWS Regions, then the following change doesn’t apply to you and no action is required.

If you’re using the global STS endpoint (https://sts.amazonaws.com) for session tokens and plan to operate in new AWS Regions, the session token size is going to increase. This may impact functionality if you store session tokens in any of your systems. To ensure your systems work with this change, we recommend that you update your existing systems to use regional STS endpoints using the AWS SDK.
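Regional STS endpoints follow the pattern https://sts.{region}.amazonaws.com. The helper below is a sketch of my own (not part of any SDK), with an illustrative boto3 call shown in comments:

```python
def regional_sts_endpoint(region):
    """Build the regional STS endpoint URL for a given AWS Region."""
    return "https://sts.{}.amazonaws.com".format(region)

# Illustrative boto3 usage (requires boto3 and AWS credentials):
# import boto3
# sts = boto3.client("sts", region_name="us-east-1",
#                    endpoint_url=regional_sts_endpoint("us-east-1"))
# token = sts.get_session_token()

print(regional_sts_endpoint("eu-west-1"))
```

Passing region_name alone is usually enough for modern SDKs to pick the regional endpoint; the explicit endpoint_url simply makes the choice visible.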

Summary

AWS is changing the default behavior for all new Regions going forward. For new AWS Regions, you will opt in to enable your account to operate in those Regions. This makes it easier for you to select the regions where you can create and manage AWS resources. To prepare for upcoming Region launches, we recommend that you validate the capability to enable and disable AWS Regions to ensure only approved IAM principals can enable and disable AWS Regions for your account.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for the Identity and Access Management service at AWS. He strongly believes in the customer-first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

Amazon Cognito for Alexa Skills User Management

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/amazon-cognito-for-alexa-skills-user-management/

This post is courtesy of Tom Moore, Solutions Architect – AWS

If your Alexa skill is a general information skill, such as a random facts skill or a news feed, you can provide information to any user who has an Alexa enabled device with your skill turned on. However, sometimes you need to know who the user is before you can provide information to them. You can fulfill this user management scenario with Amazon Cognito user pools.

This blog post will show you how to set up an Amazon Cognito user pool and how to use it to perform authentication for both your Alexa skill and a webpage.

Getting started

In order to complete the steps in this blog post you will need the following:

  • An AWS account
  • An Amazon developer account
  • A basic understanding of Amazon Alexa skill development

This example will use a sample Alexa skill deployed from one of the available skill templates. To fully develop your own Alexa skill, you will need a professional code editor or IDE, as well as knowledge of Alexa skill development. It is beyond the scope of this blog post to cover these details.

Before you begin, consider the set of services that you will use and their availability. To implement this solution, you will use Amazon Cognito for user accounts and AWS Lambda for the Alexa function.

Today, AWS Lambda supports calls from Alexa in the following regions:

  • Asia Pacific (Tokyo)
  • EU (Ireland)
  • US East (N. Virginia)
  • US West (Oregon)

These four regions also support Amazon Cognito. While it is possible to use Amazon Cognito in a different region than your Lambda function, I recommend choosing one of the four listed regions to deploy your entire solution for simplicity.

Setting up Amazon Cognito

To set up Amazon Cognito, you’ll need to create a user pool, create an Alexa client, and set up your authentication UI.

Create your Amazon Cognito user pool

  1. Sign in to the Amazon Cognito console. You might be prompted for your AWS credentials.
  2. From the console navigation bar, choose one of the four regions listed above. For the purposes of this blog, I’ll use US East (N. Virginia).
  3. Choose Manage User Pools.
  4. Choose Create a user pool, and provide a name for your user pool. Remember that user pools may be used across multiple applications and platforms including web, mobile, and Alexa. The pool name does not have to be globally unique, but it should be unique in your account so you can easily find the pool when needed. I have named my user pool “Alexa Demo.”
  5. After you name your pool, choose Step through settings. You can accept the defaults for the remaining steps to set up your user pool, with the following exceptions:
    • Choose email address or phone number as the sign-in method, and then choose Allow both email addresses and phone numbers.
    • Enable Multi-Factor Authentication (MFA). You can use Amazon Cognito to enforce Multi-Factor Authentication for your users. Amazon Cognito also allows you to validate email and phone numbers when the user is created. The verification process for phone numbers requires that Amazon Cognito is able to access the Amazon Simple Notification Service (SNS) service in order to dispatch the SMS message for phone number verification. This access is granted through the use of an AWS Identity and Access Management (IAM) service role. The Amazon Cognito setup process can automatically create this role for you.
  6. To set up Multi-Factor Authentication:
    • Under Do you want to enable Multi-Factor Authentication (MFA), choose Optional.
    • Choose SMS text message as a second authentication factor, and then choose the options you want to be verified.
    • Choose Create Role, and then choose Next Step.
    • For more information, see Adding Multi-Factor Authentication (MFA) to a User Pool. Because the verification process sends SMS messages, some costs will be incurred on your account. If you have not already done so, you will need to request a spending increase on your account to accommodate those charges. To learn more about costs for SMS messages, see SMS Text Messages MFA.
  7. Review the selections that you have made. If you are happy with the settings that you have selected, choose Create Pool.
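The console choices above can also be expressed as an API call. The parameter names below come from the Cognito CreateUserPool API, but treat the payload as an illustrative sketch rather than a complete replacement for the console wizard; the SNS role ARN is a made-up placeholder:

```python
# Sketch of the console selections as a CreateUserPool payload.
user_pool_params = {
    "PoolName": "Alexa Demo",
    # Sign in with email address or phone number:
    "UsernameAttributes": ["email", "phone_number"],
    "AutoVerifiedAttributes": ["email", "phone_number"],
    # MFA set to Optional, delivered by SMS:
    "MfaConfiguration": "OPTIONAL",
    "SmsConfiguration": {
        # Placeholder ARN -- in practice, use the IAM role Cognito creates.
        "SnsCallerArn": "arn:aws:iam::123456789012:role/cognito-sns-role"
    },
}

# Illustrative call (requires boto3 and AWS credentials):
# import boto3
# cognito = boto3.client("cognito-idp", region_name="us-east-1")
# pool = cognito.create_user_pool(**user_pool_params)
```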

Create the Alexa client

By completing the steps above, you will have created an Amazon Cognito user pool. The next task in setting up account linking is to create the Alexa client definition inside the Amazon Cognito user pool.

  1. From the Amazon Cognito console, choose Manage User Pools. Select the user pool you just created.
  2. From the General settings menu, choose App Clients to set up applications that will connect to your Amazon Cognito user pool.
  3. Choose Add an App Client, and provide the App client name. In this example, I have chosen “Alexa.” Leave the rest of the options set to default and choose Create App Client to generate the client record for Alexa to use. This process creates an app client ID and a secret.
    To learn more, see Configuring a User Pool App Client.
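The app client step maps to the CreateUserPoolClient API. The sketch below uses parameter names from that API; the user pool ID is a made-up placeholder:

```python
# Sketch of the console step as a CreateUserPoolClient payload.
app_client_params = {
    "UserPoolId": "us-east-1_EXAMPLE",   # placeholder pool ID
    "ClientName": "Alexa",
    # Alexa account linking uses the client secret generated here:
    "GenerateSecret": True,
}

# Illustrative call (requires boto3 and AWS credentials):
# import boto3
# cognito = boto3.client("cognito-idp", region_name="us-east-1")
# client = cognito.create_user_pool_client(**app_client_params)
```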

Set up your Authentication UI

Amazon Cognito can set up and manage the Authentication UI for your application so that you don’t have to host your own sign-in and sign-up UI for your Alexa application.

  1. From the App integration menu, choose Domain name.
  2. For this example, I will use an Amazon Cognito domain. Provide a subdomain name and choose Check Availability. If the option is available, choose Save Changes.

Setting up the Alexa skill

Now you can create the Alexa skill and link it back to the Amazon Cognito user pool that you created.

For step-by-step instructions for creating a new Alexa skill, see Create a New Skill in the Alexa documentation. Follow those instructions, with the following specific selections:

Under Choose a model to add to your skill, keep the default option of Custom.


Under Choose a method to host your skill’s back end resources, keep the default selection of Self Hosted.

For a custom skill, you can choose a predefined skill template for the back end code for your skill. For this example, I’ll use a Fact Skill template as a starting point. The skill template prepopulates the Lambda function that your Alexa skill uses.

After you create your sample skill, you’ll need to complete a few basic operations:

  • Set the invocation name of the skill
  • Prepare a Lambda function to handle the skill invocation
  • Connect the Alexa skill to your Lambda function
  • Test your skill

A full description of these steps is beyond the scope of this blog post. To learn more, see Manage Skills in the Developer Console. Once you have completed these steps, return to this post to continue linking your skill with Amazon Cognito.
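To give a sense of what the skill back end looks like, here is a minimal handler of the kind the Fact Skill template generates. This is an illustrative sketch, not the actual template code; the function names and fact list are my own:

```python
# Sketch of a fact-skill Lambda handler. Illustrative only; the real
# Fact Skill template uses the ASK SDK rather than raw dictionaries.
import random

FACTS = [
    "A year on Mercury is just 88 days long.",
    "Jupiter has the shortest day of all the planets.",
]

def build_response(speech_text):
    """Wrap speech text in the Alexa skills response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    """Entry point Alexa invokes; returns a random fact for launch or intent requests."""
    request_type = event.get("request", {}).get("type", "")
    if request_type in ("LaunchRequest", "IntentRequest"):
        return build_response(random.choice(FACTS))
    return build_response("Sorry, I didn't understand that request.")
```
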

Linking Alexa with Amazon Cognito

To link your Alexa skill with Amazon Cognito user pools, you’ll need to update both the Amazon Cognito and Alexa interfaces with data from the other service. I recommend that you have both interfaces open in different tabs of your web browser to make it easy to move back and forth between the two services.

  1. In Amazon Cognito, open the user pool that you created. Under General Settings, choose App Clients. Next, choose Show Details in the section for the Alexa client that you set up earlier. Make a note of the App client ID and the App client secret. These will be needed to configure Alexa skill account linking.
  2. Switch over to your Alexa developer account and open the skill that you are linking to Amazon Cognito. Choose Account Linking.
  3. Select the option to allow users to link accounts. Leave the default option of Auth Code Grant selected.
    The Authorization URI will be made up of the following template:

    https://{Sub-Domain}.auth.{Region}.amazoncognito.com/oauth2/authorize?response_type=code&redirect_uri=https://pitangui.amazon.com/api/skill/link/{Vendor ID}

  4. Replace {Sub-Domain} with the subdomain that you selected when you set up your Amazon Cognito user pool. In my example, it was “mooretom-alexademo”.
  5. Replace {Vendor ID} with your specific vendor ID for your Alexa development account. The easiest way to find this is to scroll down to the bottom of the account linking page. Your Vendor ID will be the final piece of information in the Redirect URLs.
  6. Replace {Region} with the name of the region you are deploying your resources into. In my example, it was us-east-1.
  7. The Access Token URI will be made up of the following template:
    https://{Sub-Domain}.auth.{region}.amazoncognito.com/oauth2/token

  8. Enter the app client ID and the app client secret that you noted above, or return to the Amazon Cognito tab to copy and paste them.
  9. Choose Save at the top of the page. Make a note of the redirect URLs at the bottom of the page, as these will be required to finish the Amazon Cognito configuration in the next step.
  10. Switch back to your Amazon Cognito user pool. Under App Integration, choose App Client Settings. You will see the integration settings for the Alexa client in the details panel on the right.
  11. Under Enabled Identity Providers, choose Cognito User Pool.
  12. Under Callback URL(s), enter the three callback URLs from your Alexa skill page, separated by commas. For example:
    https://alexa.amazon.co.jp/api/skill/link/{Vendor ID},
    https://layla.amazon.com/api/skill/link/{Vendor ID},
    https://pitangui.amazon.com/api/skill/link/{Vendor ID}

    The Sign Out URL will follow this template:

    https://{SubDomain}.auth.us-east-1.amazoncognito.com/logout?response_type=code

  13. Under Allowed OAuth Flows, select Authorization code grant.
  14. Under Allowed OAuth Scopes, select phone, email, and openid.
  15. Choose Save Changes.
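The URL templates in the steps above can be assembled mechanically from three values: your Cognito subdomain, your region, and your Alexa vendor ID. A small sketch (the subdomain matches my example; the vendor ID is a placeholder):

```python
# Sketch: assembling the account-linking URLs from the values in the steps above.
# The vendor ID below is a placeholder; substitute your own.

def linking_urls(sub_domain, region, vendor_id):
    """Return the URLs needed on both the Alexa and Cognito sides of linking."""
    base = f"https://{sub_domain}.auth.{region}.amazoncognito.com"
    redirect = f"https://pitangui.amazon.com/api/skill/link/{vendor_id}"
    return {
        "authorization_uri": (
            f"{base}/oauth2/authorize?response_type=code&redirect_uri={redirect}"
        ),
        "access_token_uri": f"{base}/oauth2/token",
        "sign_out_url": f"{base}/logout?response_type=code",
        "callback_urls": [
            f"https://alexa.amazon.co.jp/api/skill/link/{vendor_id}",
            f"https://layla.amazon.com/api/skill/link/{vendor_id}",
            redirect,
        ],
    }

urls = linking_urls("mooretom-alexademo", "us-east-1", "M1EXAMPLE")
```
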

Testing your Alexa skill

After you have linked Alexa with Amazon Cognito, return to the Alexa developer console and build your model. Then log into the Alexa application on your mobile phone and enable the skill. When the skill is enabled, you will be able to configure access and create a new user with phone number authentication included automatically.

After going through the account creation steps, you can return to your Amazon Cognito user pool and see the new user you created.


Conclusion

By completing the steps in this post, you have leveraged Amazon Cognito as a source of authentication for your Amazon Alexa skill. Amazon Cognito provides user authentication as well as sign-in and sign-up functionality without requiring you to write any code. You can now use the Amazon Cognito user ID to personalize the user experience for your Alexa skill. You can also use Amazon Cognito to authenticate your users to a companion application or website.

[$] Building header files into the kernel

Post Syndicated from corbet original https://lwn.net/Articles/783578/rss

Kernel developers learn, one way or another, to be careful about memory
use; any memory taken by the kernel is not available for use by the actual
applications that people keep the computer around to run. So it is
unsurprising that eyebrows went up when Joel Fernandes proposed building
the source for all of the kernel’s header files into the
kernel itself, at a cost of nearly 4MB of unswappable, kernel-space memory.
The discussion is
ongoing, but it has already highlighted some pain points felt by Android
developers in particular.

Our Blog Redesign — Coming Soon!

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/our-blog-redesign-coming-soon/

Our new blog is coming.

Software and user experience design (UX) that stands still never moves ahead, and that goes for our 11-year-old blog.

Backblaze’s blog launched in 2008 with a short post by our founder, Gleb Budman, about how a billion PCs are at risk of losing data. Since that first blog post, Backblaze has published 800 more posts. Last year we had over 2.6 million pageviews from an audience that continues to grow month over month.

The blog was updated in 2014 with a responsive design to serve our increasing number of mobile-based readers, new categories, a new commenting system (Disqus), and the ability to sign up for a blog email list. We also improved our site search function in 2017.

We’ve been feeling for a while that the blog is overdue for more improvements. We want to expose more of the content we’ve created to more readers and help readers find what interests them from among those hundreds of posts. Searching for content can work, but readers have to know or guess a search term to find out if we’ve written on that topic. That’s not good.

As the blog gets more and more posts, the challenge is to help readers find all the content on the blog that they might be interested in. Most of our readers come to the blog through organic search, but many are returning readers who are checking on what’s new, or perhaps they learned of a post that interests them through one of our newsletters. Another option is that a reader signed up for our blog mailing list (see the top of this page) and received an email about a post that sounds interesting.

For the next iteration of our blog, we wanted to surface more of that content, help readers find related posts they might be interested in, and make it much easier to navigate our site.

These Are the Goals for Our Blog Update

  • Present a friendlier user interface
  • Make it easier for the reader to find content related to what they’re reading
  • Introduce the reader to content they might not know we wrote about
  • Make everything work faster

We hope the new design fulfills these goals, and we invite you — once the new design has launched — to tell us how we did. We made the design much more flexible so if it turns out that something doesn’t work as well as we hoped, or we get great suggestions from our readers, we can easily change the blog to incorporate those ideas.

We’re putting the finishing touches on the new design, so it won’t be long until it’s live. After it’s launched, we’ll write more about what we changed, and why.

We’re looking forward to showing the latest version of our blog to our readers. Stay tuned!

The post Our Blog Redesign — Coming Soon! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A cybersecurity strategy to thwart advanced attackers

Post Syndicated from Tim Rains original https://aws.amazon.com/blogs/security/a-cybersecurity-strategy-to-thwart-advanced-attackers/

Today, many Chief Information Security Officers and cybersecurity practitioners are looking for an effective cybersecurity strategy that will help them achieve measurably better security for their organizations. AWS has released two new whitepapers to help customers plan and implement a strategy that has helped many organizations protect, detect, and respond to modern-day attacks.

  • Breaking Intrusion Kill Chains with AWS provides context and shows you, in detail, how to mitigate advanced attackers’ favorite strategies and tactics using the AWS cloud platform. It also offers advice on how to measure the effectiveness of this approach.
  • Breaking Intrusion Kill Chains with AWS Reference Material contains a detailed example of how AWS services, features, functionality, and AWS Partner offerings can be used together to safeguard your organization’s data and cloud infrastructure. This paper will save you time and effort by providing you with a comprehensive AWS security control mapping to each phase of advanced attacks, which you’d otherwise have to do on your own.

    This document provides a list of some of the key AWS security controls, organized in an easy-to-understand format, and it includes a mapping to the AWS Cloud Adoption Framework (CAF). Many organizations use the CAF to build a comprehensive approach to cloud computing across their organization. If your organization uses the CAF, and you decide to implement some or all of the controls described in Breaking Intrusion Kill Chains with AWS, then the reference material in this whitepaper can be used to cross-reference with your other CAF efforts, potentially increasing your ROI.

For a high-level introduction, check out my webinar recording. In it, I discuss cybersecurity strategies, explain the framework that’s used, and talk about how to implement it on AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Rains

Tim is the Regional Leader for Security and Compliance in Europe, Africa, and the Middle East for AWS. He helps federal, regional and local governments, in addition to non-profit organizations, education and health care customers with their security and compliance needs. Tim is a frequent speaker at AWS and industry events. Prior to joining AWS, Tim held a variety of executive-level cybersecurity strategy positions at global companies.
