Deeper Connection with the Local Tech Community in India

Post Syndicated from Tingting (Teresa) Huang original https://blog.cloudflare.com/deeper-connection-with-the-local-tech-community-in-india/

On June 6th 2019, Cloudflare hosted its first-ever customer event in a beautiful and green district of Bangalore, India. More than 60 people, including executives, developers, engineers, and even university students, attended the half-day forum.

The forum kicked off with a series of presentations on the current DDoS landscape, cyber security trends, serverless computing, and Cloudflare Workers. Trey Quinn, Cloudflare Global Head of Solution Engineering, gave a brief introduction to the evolution of edge computing.

We also invited business and thought leaders across various industries to share their insights and best practices on cyber security and performance strategy. Some of the keynote and panel sessions included live demos from our customers.

At this event, guests gained first-hand knowledge of the latest technology. They also learned insider tactics to help them protect their business, accelerate performance, and identify quick wins in a complex internet environment.

To conclude the event, we arranged a dinner for the guests to network and enjoy a cool summer night.

Through this event, Cloudflare has strengthened its connection with the local tech community. The event's success is inseparable from Cloudflare's constant improvement and the continuous support of our customers in India.

As the old saying goes, भारत महान है (India is great). India is an important market in the region, and Cloudflare will increase its investment and engagement to provide better services and user experience for customers in India.

New – VPC Traffic Mirroring – Capture & Inspect Network Traffic

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-vpc-traffic-mirroring/

Running a complex network is not an easy job. In addition to simply keeping it up and running, you need to keep an ever-watchful eye out for unusual traffic patterns or content that could signify a network intrusion, a compromised instance, or some other anomaly.

VPC Traffic Mirroring
Today we are launching VPC Traffic Mirroring. This is a new feature that you can use with your existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale. This will allow you to:

Detect Network & Security Anomalies – You can extract traffic of interest from any workload in a VPC and route it to the detection tools of your choice. You can detect and respond to attacks more quickly than is possible with traditional log-based tools.

Gain Operational Insights – You can use VPC Traffic Mirroring to get the network visibility and control that will let you make security decisions that are better informed.

Implement Compliance & Security Controls – You can meet regulatory & compliance requirements that mandate monitoring, logging, and so forth.

Troubleshoot Issues – You can mirror application traffic internally for testing and troubleshooting. You can analyze traffic patterns and proactively locate choke points that will impair the performance of your applications.

You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC. As you will soon see, you can choose to capture all traffic or you can use filters to capture the packets that are of particular interest to you, with an option to limit the number of bytes captured per packet. You can use VPC Traffic Mirroring in a multi-account AWS environment, capturing traffic from VPCs spread across many AWS accounts and then routing it to a central VPC for inspection.

You can mirror traffic from any EC2 instance that is powered by the AWS Nitro system (A1, C5, C5d, M5, M5a, M5d, R5, R5a, R5d, T3, and z1d as I write this).

Getting Started with VPC Traffic Mirroring
Let’s review the key elements of VPC Traffic Mirroring and then set it up:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.

You can set this up using the VPC Console, EC2 CLI, or the EC2 API, with CloudFormation support in the works. I’ll use the Console.

I already have two ENIs that I will use as my mirror source and destination (in a real-world use case I would probably use an NLB destination):

The MirrorTestENI_Source and MirrorTestENI_Destination ENIs are already attached to suitable EC2 instances. I open the VPC Console and scroll down to the Traffic Mirroring items, then click Mirror Targets:

I click Create traffic mirror target:

I enter a name and description, choose the Network Interface target type, and select my ENI from the menu. I add a Blog tag to my target, as is my practice, and click Create:

My target is created and ready to use:

Now I click Mirror Filters and Create traffic mirror filter. I create a simple filter that captures inbound traffic on three ports (22, 80, and 443), and click Create:

Again, it is created and ready to use in seconds:

Next, I click Mirror Sessions and Create traffic mirror session. I create a session that uses MirrorTestENI_Source, MainTarget, and MyFilter, allow AWS to choose the VXLAN network identifier, and indicate that I want the entire packet mirrored:

And I am all set. Traffic from my mirror source that matches my filter is encapsulated as specified in RFC 7348 and delivered to my mirror target. I can then use tools like Suricata to capture, analyze, and visualize it.
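If you'd rather script the same setup than click through the console, here is a rough AWS CLI sketch. The resource IDs (eni-..., tmf-..., tmt-...) are placeholders, and the flags should be double-checked against your CLI version:

# Create a mirror target that points at the destination ENI (an NLB ARN works too)
aws ec2 create-traffic-mirror-target \
    --network-interface-id eni-0destination0000001 \
    --description "Central inspection ENI"

# Create a filter and add an inbound accept rule for HTTPS traffic
aws ec2 create-traffic-mirror-filter --description "Inbound web traffic"
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --traffic-direction ingress \
    --rule-number 100 \
    --rule-action accept \
    --protocol 6 \
    --destination-port-range FromPort=443,ToPort=443 \
    --source-cidr-block 0.0.0.0/0 \
    --destination-cidr-block 0.0.0.0/0

# Tie source, target, and filter together; session-number sets evaluation order
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0source00000000001 \
    --traffic-mirror-target-id tmt-0123456789abcdef0 \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --session-number 1

On the target instance, the mirrored packets arrive VXLAN-encapsulated on UDP port 4789, so something as simple as sudo tcpdump -i eth0 udp port 4789 -w mirrored.pcap will capture them for later analysis.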

Things to Know
Here are a couple of things to keep in mind:

Sessions Per ENI – You can have up to three active sessions on each ENI.

Cross-VPC – The source and target ENIs can be in distinct VPCs as long as they are peered to each other or connected through Transit Gateway.

Scaling & HA – In most cases you should plan to mirror traffic to a Network Load Balancer and then run your capture & analysis tools on an Auto Scaled fleet of EC2 instances behind it.

Bandwidth – The replicated traffic generated by each instance will count against the overall bandwidth available to the instance. If traffic congestion occurs, mirrored traffic will be dropped first.

Now Available
VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

Jeff;

 

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/792006/rss

Security updates have been issued by CentOS (python), Debian (bzip2, libvirt, python2.7, python3.4, rdesktop, and thunderbird), Fedora (thunderbird and tomcat), openSUSE (aubio, docker, enigmail, GraphicsMagick, and python-Jinja2), SUSE (kernel, libvirt, postgresql96, and tomcat), and Ubuntu (ceph, firefox, imagemagick, libmysofa, linux, linux-hwe, neutron, and policykit-desktop-privileges).

Introducing people.kernel.org

Post Syndicated from corbet original https://lwn.net/Articles/791975/rss

Konstantin Ryabitsev has announced a new public blogging platform for kernel developers. “Ever since the demise of Google+, many developers have expressed a desire to have a service that would provide a way to create and manage content in a format that would be more rich and easier to access than email messages sent to LKML.

Today, we would like to introduce people.kernel.org, which is an ActivityPub-enabled federated platform powered by WriteFreely and hosted by very nice and accommodating folks at write.as.” (LWN looked at WriteFreely back in March).

Changes at the Apache Software Foundation

Post Syndicated from corbet original https://lwn.net/Articles/791973/rss

Here’s a statement from the Apache Software Foundation regarding changes in its leadership: “It is with a mix of sadness and appreciation that the ASF Board accepted the resignations of Board Member Jim Jagielski, Chairman Phil Steitz, and Executive Vice President Ross Gardler last month.”

There is no indication of why all these people decided to leave at the same time.

Get Cloudflare insights in your preferred analytics provider

Post Syndicated from Simon Steiner original https://blog.cloudflare.com/cloudflare-partners-with-analytics-providers/

Today, we’re excited to announce our partnerships with Chronicle Security, Datadog, Elastic, Looker, Splunk, and Sumo Logic to make it easy for our customers to analyze Cloudflare logs and metrics using their analytics provider of choice. In a joint effort, we have developed pre-built dashboards that are available as a Cloudflare App in each partner’s platform. These dashboards help customers better understand events and trends from their websites and applications on our network.


Cloudflare insights in the tools you’re already using

Data analytics is a frequent theme in conversations with Cloudflare customers. Our customers want to understand how Cloudflare speeds up their websites and saves them bandwidth, to see which of their pages are fastest and slowest, and to be alerted if they are under attack. While providing insights is a core tenet of Cloudflare’s offering, the data analytics market has matured and many of our customers have started using third-party providers to analyze data—including Cloudflare logs and metrics. By aggregating data from multiple applications, infrastructure, and cloud platforms in one dedicated analytics platform, customers can create a single pane of glass and benefit from better end-to-end visibility over their entire stack.

While these analytics platforms provide great benefits in terms of functionality and flexibility, they can take significant time to configure: from ingesting logs, to specifying data models that make data searchable, all the way to building dashboards to get the right insights out of the raw data. We see this as an opportunity to partner with the companies our customers are already using to offer a better and more integrated solution.

Providing flexibility through easy-to-use integrations

To address these complexities of aggregating, managing, and displaying data, we have developed a number of product features and partnerships to make it easier to get insights out of Cloudflare logs and metrics. In February we announced Logpush, which allows customers to automatically push Cloudflare logs to Google Cloud Storage and Amazon S3. Both of these cloud storage solutions are supported by the major analytics providers as a source for collecting logs, making it possible to get Cloudflare logs into an analytics platform with just a few clicks. With today’s announcement of Cloudflare’s Analytics Partnerships, we’re releasing a Cloudflare App—a set of pre-built and fully customizable dashboards—in each partner’s app store or integrations catalogue to make the experience even more seamless.
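To give a feel for how small that glue layer is, here is a hedged sketch of creating a Logpush job for a zone with a single API call. The bucket, credentials, and field list are placeholders, and the exact option names should be checked against the Logpush API documentation:

curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
     -H "X-Auth-Email: $CF_EMAIL" \
     -H "X-Auth-Key: $CF_API_KEY" \
     -H "Content-Type: application/json" \
     --data '{
       "name": "http-requests-to-s3",
       "destination_conf": "s3://my-log-bucket/cloudflare?region=us-east-1",
       "logpull_options": "fields=ClientIP,ClientRequestHost,EdgeResponseStatus&timestamps=rfc3339",
       "enabled": true
     }'

From there, the analytics platform picks the files up from the bucket and the pre-built dashboards do the rest.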

By using these dashboards, customers can immediately analyze events and trends of their websites and applications without first needing to wade through individual log files and build custom searches. The dashboards feature all 55+ fields available in Cloudflare logs and include 90+ panels with information about the performance, security, and reliability of customers’ websites and applications.

Ultimately, we want to provide flexibility to our customers and make it easier to use Cloudflare with the analytics tools they already use. Improving our customers’ ability to get better data and insights continues to be a focus for us, so we’d love to hear about what tools you’re using—tell us via this brief survey. To learn more about each of our partnerships and how to get access to the dashboards, please visit our developer documentation or contact your Customer Success Manager. Similarly, if you’re an analytics provider who is interested in partnering with us, use the contact form on our analytics partnerships page to get in touch.

Apparently we're once again waiting for a child to die

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2019/qvno-chakame-da-umre-dete/

Why do we always wait for a child to die before realizing something is wrong? Perhaps because our children are dear to us. That is probably why people were so scandalized by the cheap real estate of those in power – apartments are dear to us too. Yet we quickly forgot, and we don't seem to care that the prosecution has put up an umbrella over politicians and senior officials who laundered their money this way.

Similarly, our outrage over the latest death of a child caused directly or indirectly by institutional impotence or bureaucracy lasts from morning till noon. We see it regularly with the Children's Treatment Fund, now merged with the National Health Insurance Fund – a little girl died, and although the fund's sluggishness was not the direct cause, it was for others and will be for more. So we're just waiting for the next coffin, aren't we?

Here, however, I won't be talking about the scandal at the health fund, but about another problem – vaccines. Specifically, the one against measles, mumps, and rubella. In Bulgaria, as in the entire developed world, it is given around the first year of life, with a second dose a few months later. Quite a few countries also recommend a booster in one's thirties, which I recently got myself.

Although I follow the topic of anti-vaxxers and their negative effect not only on trust in science but on public health in general and children in particular, this text is not about them. It is, once again, about the Ministry of Health. I've apparently had them in my sights this month.

Before I begin, though, I should mention that while my previous few articles about the health system, as well as my other pieces on vaccines, discuss publicly available data and conclusions from doctors, here such data simply doesn't exist or isn't reliable enough even to mention. That is precisely the aspect I will address.

There is a problem with immunizations

Anti-vaxxers are only part of the problem. Another serious one lies in the tracking of immunizations, coverage, and preventive campaigns in general. We see signals about this not only from individual doctors but also from people working within the system. The truth is that such an important aspect of public health has been left almost entirely in the hands of general practitioners and pediatricians. The system relies on their integrity, responsibility, and often their inventiveness. They track who needs an immunization, they have to argue with "mommies" who read something on the internet, and they have to reassure those parents who, quite understandably, lend an ear to the wave of pseudoscientific bile flooding us from all sides. They get fined when someone misses an immunization, and sometimes even when someone refuses one.

Don't get me wrong – some GPs cheat not only with clinical pathways and sick notes but also with immunization records. The health system is broken in many respects. In my view, however, the gaps in preventive medicine are one of the most serious long-term risks to public health. It is enough to look at what is happening in Ukraine, where a measles epidemic of 54 thousand cases is already claiming lives, while they are also fighting a number of other returning diseases, including increased incidence of tuberculosis. A major role in this is played precisely by their GPs, who have refused to take responsibility and to a large extent do everything only on paper.

In our country this phenomenon has not been studied. There are rumors, there are statements from experts, but we have no idea how many people are actually left unimmunized this way. The problem with tracking the vaccinated goes beyond documentary fraud. According to immunologists, the regional health inspectorates (RZIs) have inconsistent methodologies for collecting coverage data. Broadly speaking, they ask the GPs in their region how many people each has vaccinated. A major shortcoming here is how they estimate how many should have been vaccinated, how many received the vaccines late, and so on. Again, the pediatricians hold the information, but they cover only the children registered with them. Although it is hard to get a straight answer, it appears that families without a chosen GP or without health insurance are not counted. According to estimates, the latter group alone numbers 400 thousand.

The 1,000th measles case in Bulgaria has been reported

I focus on measles mainly because it is an extremely contagious disease that is returning across Europe, largely because of refused vaccines. The second reason is that today the 1,000th case of measles in Bulgaria since the beginning of the year was registered. While countries like Ukraine and the Philippines register that many per week, here it shows how quickly the disease flared up among the population compared to previous epidemics.

The last big epidemic was in 2010, with the first cases registered as early as 2009. Back then 9,000 children were hospitalized, 24 died, and according to a report by the National Center of Infectious and Parasitic Diseases (NCIPD), around 36 more showed lasting brain damage in the years that followed. During that epidemic the worst affected were Roma villages and neighborhoods. There were immunization campaigns covering tens of thousands. A report by the Bulgarian Helsinki Committee noted the gaps in health services, and in particular immunizations, in those communities.

Ten years later we are on the verge of another such epidemic. On average, mortality in the developed world is 1 in 1,000 cases, though this varies a lot with age distribution and other factors. In Romania's last epidemic it was 1 in 600. In other words, we are apparently waiting for the first death. We and the Ministry of Health alike.

Measures are undoubtedly being taken, both then and now. Cases were tracked, there were mobile immunization points, information campaigns, and so on. All of this once the epidemic had started. What I was interested in, however, is what they have done in the last 10 years to prevent it. By law immunizations are mandatory, although, as I have written many times, the obligation is in practice only a formality.

An inquiry to the Ministry of Health about the measures taken

I filed a freedom-of-information request under the Access to Public Information Act and, after some delay, they responded. In short, almost nothing has been done in the last 10 years. They excuse themselves with Art. 59 of the Health Act, saying they could run immunization campaigns only during an epidemic or a recorded drop in coverage. The same article, incidentally, is the one a nationalist party – and above all one of its MPs, who is now a deputy minister in the social ministry – tried to gut entirely, following a proposal and draft from the anti-vaxxers.

Here, however, comes a key point – who measures that drop, how, and at what scale. The Ministry now admits that many children in the Roma neighborhoods and villages affected in recent weeks are not immunized. And we are talking about thousands of children. That is probably why in 2013-2014 the Blagoevgrad RZI reported 95-98% coverage with the MMR vaccine, yet now it is precisely the 5-year-olds who are among the hardest hit by the disease.

To be fair, there was one immunization campaign covering 4,700 children in 5 districts plus Sofia (excluding the Blagoevgrad region). The reason was 14 cases of measles. For the rest of the time there were no attempts at all to reach these risk groups because, as the Ministry replies, there were no cases and coverage at the national level was good.

If there is one thing we learn from epidemics in Bulgaria and worldwide, it is that they start in pockets with low immunization levels. While these are often the kindergartens and schools of anti-vaxxers, in Europe they are not infrequently marginalized groups with poor access to healthcare. That is why the US and Germany are increasingly introducing obligations and fines for vaccine refusal, but also working directly with the communities.

In our country the ministry replies, almost with pride, that its measures are not targeted at any particular region, demographic, or ethnicity – but they should be. It is easy to say, as they do, that they have formally provided a way for those parents to immunize their children at the RZIs. That is harder than it sounds, though. Health literacy is a problem, and shifting the blame solely onto the parents is an abdication of responsibility.

True, they say they work with health mediators to improve that education, and although the mediators do a lot, it is far from enough. Nor is it serious to boast that, as part of the "European Immunization Week", they held a round table in the National Assembly, a few meetings here and there, and posted a news item on the RZIs' websites. That is going through the motions and spending a budget, not an attempt to improve public health.

Prevention versus crisis management

I admit it is easy to demand additional health and social services for minorities, but this actually makes economic sense. So far, within the current epidemic, the cost to the budget has been greater than it would have been over the last 10 years if the neighborhoods had been visited every year and the children immunized. And that is without counting the human lives, the disabilities, and the additional burden on those families, deepening their already difficult situation. Prevention is always cheaper and brings incomparable economic benefit.

One solution to traceability is an immunization registry. A little-known fact is that Bulgaria has two. The first is part of the electronic health records and is practically unused. The deliberately stalled electronic identification is largely to blame for that. The second was developed by the NCIPD and has been tested in real conditions. The problem there, however, is that they lack the resources to maintain it and no one in the ministry wants to decide what comes next.

In fact, it is telling that the human and financial resources allocated to tracking, studying, and fighting infectious diseases and epidemics in Bulgaria are comparable to those for the cleaning staff at the ministry. I am not trying to belittle the cleaners' work – they evidently do considerably more work than some officials at the Health Insurance Fund. But it gives us an idea of the priorities.

The problems are not limited to a registry and staffing. There are objective obstacles, such as the large-scale movement of families around the EU and the lack of traceability of children – whether and where they have been immunized. There is also a shortage of vaccines both here and worldwide.

Certain measures concerning organization and bureaucracy, however, can help a great deal in preventing epidemics and child deaths. The first and hardest is a change in mentality. It is not serious for the Ministry to boast to me that they held a round table in the National Assembly and have formally created opportunities for someone, somewhere, somehow, to get their children immunized.

The other important thing is not just to write strategies but to actually work on them. When such pockets of unimmunized people are known – and they are perfectly well known – work with them. The excuse with Art. 59 is not serious, since it does not account for local drops in immunization. If they believe the Health Act hinders the implementation of their own measles eradication program, then steps should be taken to improve the legislation.

Critical public health data is missing

Here too, however, there are gaps. Among my questions were some about the fines imposed and the penal decrees overturned by the courts. On both points they replied that they keep no data. That raises the question of where the figures for parents fined for refusing vaccines, which various Ministry of Health officials cite in the media, come from. More important, however, is the fact that the Ministry apparently has no idea how many fines are imposed and whether they have any effect. And it is not at all rare for courts to overturn, on procedural grounds, decrees following such an act by an RZI. Searching the Supreme Judicial Council's portal, we find dozens of them. The lack of data on how many and why means they cannot correct their practices or propose changes to the regulations so that these fines are actually enforced.

As I have pointed out here many times, data is simply a tool that helps us find where to look for answers. In the field of public health, data is crucial. On immunizations right now, the health authorities – judging by their own answers and accounts from people working in the field – are acting blindly, chaotically, and reactively.

The fault of parents who refuse or, through ignorance or negligence, skip their children's vaccines is indisputable. When they are a handful of people, it is not such a problem, but clearly that is far from the case. The current epidemic shows it well.

Today the measles cases reached 1,000 and there is no sign of a slowdown. Although immunologists speak of the cyclical nature of these epidemics, the factors are mostly social and institutional. This epidemic will come and go like the previous ones. It will take with it a few children and a few million from the budget. The question is whether in the next 10 years we will just hold round tables in parliament, or work with persistence and understanding where the risk is elevated.

Why should you care?

If you think none of this affects you, don't forget the following five things:

  • Measles is extremely contagious – 9 out of 10 people in contact get infected
  • 1 in several hundred infected children will die. Just as many will be left with disabilities
  • Babies have no protection after the first month of life, and the vaccine is only given at around a year. That means they are vulnerable to anyone around them who is sick. Epidemics are raging across Europe
  • No vaccine is 100% effective. That is why multiple doses and boosters are given over time. Some people even lose immunity after a few years
  • The other diseases we vaccinate against are just as frightening, though less contagious

That is why when we talk about vaccines, we are talking about public health. Because you may feel invulnerable, but diseases like measles knock down even the healthiest. Besides, the people around you may not be in nearly as good shape. Above all babies, the elderly, and those with compromised immune systems.

A libertarian rethinks immigration

Post Syndicated from esr original http://esr.ibiblio.org/?p=8342

Instapundit recently linked to an article at the libertarian Reason magazine with a premise I found – considering the authors and the magazine – surprisingly dimwitted. No, a border wall is not necessarily morally equivalent to the Berlin Wall, or anywhere near it. Consider Hadrian’s Wall, or the Great Wall of China. Sometimes there are actual barbarians on the other side of it.

But this does motivate me to try to clarify my own thoughts about libertarianism and immigration. Is there, in fact, any libertarian defense of border and immigration controls?

Let’s dispose of a red herring first. The fact that immigration controls are enforced by a government is not dispositive for at least two reasons. One is that one may be a minarchist libertarian, holding that governments have a legitimate but small and rigidly constrained set of duties including national defense; to the extent that border and immigration controls are construed as national defense, there’s no problem in principle with them. That’s the easy case, which I’m going to ignore for the rest of this essay except to note that I think this is how the founders of the U.S. would have conceived the matter.

Even for anarcho-capitalists like myself, government enforcement of law may be regarded as a historical accident that in itself doesn’t tell us much about which laws arise from the natural rights of individuals. The question to be addressed here is whether any system of law founded on those natural rights could include border controls on a defined territory.

The first question on the way to answering that is what “natural right” could border controls possibly be a defense of? The obvious one is that they might be justified as a form of collective self-defense. If you’ve got a peaceful, prosperous libertopia going, you’d really prefer not to have a bunch of people who haven’t signed on to your social contract walking in. Because you’re likely to have to kill or expel a lot of them in self-defense, and who wants that aggravation? Better to keep them out in the first place, allowing in only those who are willing to contract. Or who are sponsored by a citizen who is willing to post a bond against their behavior for the first N years.

(I’m being vague about how the process of binding oneself to the libertopian social contract works because there are a couple of different theories about that. None of the differences among these theories is relevant to the present essay. I will note that under any of them, “libertopia enforces the law” would cash out to “insurance companies pay security agencies to do it because the alternative is profiting less on those crime-insurance premiums”.)

Generally speaking libertarians don’t have a problem with border controls when the people trying to cross them are organized invaders, or individual criminals. The problem case, related to why immigration has become a hot-button issue in today’s politics, is whether border controls that keep out peaceful immigrants protect any natural right of the libertopians.

Libertarians like to avoid making nebulous ethical claims about groups, so let’s reframe this. J. Random Foreigner shows up at the border of libertopia, claiming he wants to become a member in good standing. What policy should the insurance companies tell their security contractors to have in order to optimize the expected change in payout on their crime-insurance policies?

Notice how this helpfully concretizes the problem. Instead of having abstract arguments about rights, defense of the rights of libertopians is priced into the insurance company’s decisions by people with skin in the game. Notice also that this gives the insurance companies an incentive not only to keep out bad actors, but to let in good ones. Criminals are loss generators; people who genuinely want to join the libertopian social contract, and are capable of doing so, are profit generators.

Let’s start with some obvious extreme cases. The guy has MS-13 tattoos? Nope, nope, nope. Obvious high risk. The guy is wearing Amish plain clothing and has a Pennsylvania Dutch accent? Let him in – those people are famously law-abiding and we can always use good farmers. In both cases one could in the extreme be wrong; Amish guy could be a sociopath and MS-13 guy could have given up gang life. But no rational person would bet on this and the insurance company won’t if it wants to maximize its profits.

Let’s continue by disposing of some obvious objections. Will the insurance companies exclude black- or brown-skinned people? I don’t think so. And if you think so, you’re probably a racist I want nothing to do with.

Why do I say that? Remember, the insurance companies are trying to optimize the effect of immigration on their profits. If you believe that having a black or brown skin is a sufficiently reliable predictor of being a loss generator for the insurance companies to use it, there are only two possibilities. Either you are wrong, in which case you have an irrational fixation about race and should be deeply ashamed of yourself. Or you are right, in which case the entire objection to “racism” as a belief system pretty much vanishes. I think the former is much more likely.

On the other hand, screening for a minimum IQ threshold would make a lot of sense from what we know about the correlation between IQ, time preference, and criminality. Set at any reasonable level, almost all Ashkenazic Jews will pass that screen, while many Australian aborigines and sub-Saharan Africans will fail it. This looks like racism, but isn’t; the only ethical question here is how predictive your tests are of the qualities required for an individual to function as a libertopian.

(Which also disposes of the usual nonsense about cultural bias in IQ tests. Cultural bias is actually part of the point here; you want immigrants who can function, speak your language or at least learn it rapidly, assimilate. A bit of cultural bias in the tests might be a good thing, though I’d myself be inclined to try to tune it out.)

Since you probably don’t want a repeat of the Rotherham/Cologne/Malmo rape-gang atrocities, there are some combinations of age, religion and country of origin that should be a crash landing. Anyone you have good reason to suspect of believing infidel girls are fair game to be “taken with the right hand” (as the Koran puts it) should be turned away. Worst case there’ll be a rape or murder victim, best case somebody will have to shoot him.

The predicate for this isn’t as simple as “Muslim” or even “Muslim male”. The university-educated 40-something Persian engineer I used to have as a downstairs neighbor would have been a good bet; anyone aged 13 to 35 from the back county of Afghanistan or the Tribal Areas of Pakistan, on the other hand…

Now let’s talk about the subtler aspect of the screening problem, which our hypothetical tribesman is a good lead-in to. This is the part I didn’t understand until recently, and why I’m more sympathetic to immigration restrictionists than I used to be.

Libertopia has both tangible and intangible assets. The intangible ones include, for example, the intelligence and pro-social traits of its people. Another is its voluntary consensus about how things ought to be done – and (which is not quite the same thing) the social contract itself. If I am a member of the contract network of security professionals and arbitrators that enforce libertopia’s norms, I’m not going to think my job ends with defending the tangible assets of libertopians. In fact, I’d consider identifying and defending the intangible assets more important, because they’re more fragile.

Again, let’s concretize this. One of the intangible assets I benefit from as an American – and which I would expect libertopia to have – is that in my society, I can usually make handshake deals with strangers and expect them to be honored. I live in a context of what people who study this sort of thing call “high social trust”. (In part because I avoid the places in the U.S. where social trust levels are low.)

This is more important than anyone who has never lived outside a high-trust society really understands. In low-trust societies, you can’t count on anyone outside your family or tribe not to betray an agreement for short-term advantage. Large-scale cooperation is difficult. Rates of crime and violence are high, the law is unreliable, and at the extreme blood feuds are a common way of pursuing disputes.

The sociologist Robert Putnam is now (in)famous for noticing that diversity – whether it’s linguistic, ethno-racial, or religious – erodes social trust. This is why in “diverse” societies people tend to self-segregate into groups of like kind; they want to deal with neighbors whose behavior they can predict. But what Putnam found is that diversity does not merely erode trust across groups; it erodes trust within them as well.

If I’m a citizen of libertopia, one of the things I want defended with my crime-insurance premiums is the high trust level of my society.

This is why my position about immigration policy in the real world is different than it used to be. I started with the usual libertarian disposition in favor of open borders. I also started with – I’m now ashamed to admit – the usual Blue-Tribe presumption that opposition to unrestricted immigration is at best vulgar and plebeian, at worst narrow-minded if not actually racist.

I should have listened more and reflected the class prejudices of my birth SES less. I now understand that the core complaint of the anti-immigration Trump voters isn’t even about illegals low-balling them out of jobs, although that’s certainly a factor. It’s “I want to keep the high level of social trust I grew up with, and I see mass immigration – especially mass illegal immigration – eroding that.” They think the political elites of both parties, and corporations profit-taking in the labor market, are throwing away that intangible asset to plump up a bit more power and profit.

I now think that is a serious – and justified – complaint.

In the short term, the willful denial of this problem by our soi-disant “elites” is probably Donald Trump’s best hope for reelection in 2020. And no, I’m not excluding the booming economy; I think this matters more to his base, even if they have trouble articulating it. And I don’t think that priority is wrong.

In the longer term, what is to be done about it?

I think I’ve already shown that the contingent fact that real-world border controls would have to be enforced by a government is not really a bar to designing them. Americans made choices over generations to build the asset called “high social trust”; the fact that they must now, practically speaking, use government to defend it is no more problematic than are government-enforced laws against theft, rape, and murder. How we transition from the current system to libertopia is an orthogonally different question.

To begin with, I’d have the Border Patrol and ICE do what libertopians would do. Screen by individual merit and by culture of origin, deliberately excluding people from barbaric low-trust milieux, people who don’t speak English, people with seriously subnormal IQs.

Because I think I know what policies are ethically proper for libertopians to do to defend themselves, I think I know what is ethically proper for Americans to do. And it all has to begin with the premise that coming to the U.S. is not a right, it is a privilege you earn from the expectation that adding you will be good for the health and future of America.

iPhone Apps Surreptitiously Communicated with Unknown Servers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/iphone_apps_sur.html

Long news article (alternate source) on iPhone privacy, specifically the enormous amount of data your apps are collecting without your knowledge. A lot of this happens in the middle of the night, when you’re probably not otherwise using your phone:

IPhone apps I discovered tracking me by passing information to third parties – just while I was asleep – include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic.

Buster – the new version of Raspbian

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/buster-the-new-version-of-raspbian/

Amid all the furore about the release of a certain new piece of hardware, some people may have missed that we have also released a new version of Raspbian. While this is required for Raspberry Pi 4, we’ve always tried to maintain software backwards-compatibility with older hardware, and so the standard Raspbian image for all models of Raspberry Pi is now based on Buster, the latest version of Debian Linux.

Why Buster?

The first thing to mention about Buster (who was the actual dog in Pixar’s “Toy Story” films, as opposed to the toy one made out of a Slinky…) is that we are actually releasing it slightly in advance of the official Debian release date. The reason for this is that one of the important new features of Raspberry Pi 4 is that the open-source OpenGL video driver is now being used by default, and this was developed using the most recent version of Debian. It would have been a lot of work to port everything required for it back on to Raspbian Stretch, so we decided that we would launch on Raspbian Buster – the only question was whether Buster would be ready before the hardware was!

As it turns out, it wasn’t – not quite. The official launch date for Buster is July 7, so we are a couple of weeks ahead. That said, Buster has been in a “frozen” state for a couple of months now, with only minor changes being made to it, so the version we are releasing is pretty much identical to that which will be officially released by Debian on July 7.

We started using Buster internally in January this year, so it has had a lot of testing on Pi – while we may be releasing it a bit early, you need have no concerns about using it; it’s stable and robust, and you can use apt to update with any changes that do happen between now and July 7 without needing to reinstall everything.

What’s new?

There are no huge differences between Debian Stretch and Debian Buster. In a sad reflection of the way the world is nowadays, most of the differences are security changes designed to make Buster harder to hack. Any other differences are mostly small incremental changes that most people won’t notice, and this got us thinking…

When we moved from Jessie to Stretch, many people commented that they couldn’t actually see any difference between the two – as most of the changes were “under the hood”, the desktop and applications all looked the same. So we told people “you’ve now got Stretch!” and they said “so what?”

The overall appearance of the desktop hasn’t changed significantly for a few years, and was starting to look a bit dated, so we thought it would be nice to give the appearance a mild refresh for Buster. Then people would at least be able to see that their shiny new operating system looked different from the old one!

The new appearance

There has been a definite trend in the design of most computer graphical user interfaces over recent years to simplify and declutter; to reduce the amount of decoration, so that a button becomes a plain box rather than something that resembles a physical button. You can see this in both desktop OSes like Windows, and in mobile OSes like iOS – so we decided it was time to do something similar.

The overall appearance of most of the interface elements has been simplified; we’ve reduced things like the curvature of corners and the shading gradients which were used to give a pseudo-3D effect to things like buttons. This “flatter” design looks cleaner and more modern, but it’s a bit of a juggling act; it’s very easy to go too far and to make things look totally flat and boring, so we’ve tried to avoid that. Eben and I have had a mild tussle over this – he wanted as much flatness as possible, and I wanted to retain at least a bit of curvature, so we’ve met somewhere in the middle and produced something we both like!

We’ve also changed the default desktop picture to one of Greg Annandale’s gorgeous photographs, and we’ve moved to a grey highlight colour.

(If you really don’t like the new appearance, it is easy enough to restore the former appearance – the old desktop picture is still installed, as is the old UI theme.)

Other changes

We’ve been including the excellent Thonny Python development environment in Raspbian for some time now. In this release, it’s now our default Python editor, and to that end, we are no longer including IDLE by default. IDLE has always felt dated and not very pleasant to use, and Thonny is so much nicer that we’d strongly recommend moving to it, if you haven’t already!

(If you’d like an alternative to Thonny, the Mu Python IDE is also still available in Recommended Software.)

We’ve made some small tweaks to the taskbar. The ‘eject’ icon for removing USB devices is now only shown if you have devices to eject; it’s hidden the rest of the time. Similarly, if you are using one of the earlier Pis without Bluetooth support, the Bluetooth icon is now hidden rather than being greyed out. Also, the CPU activity gauge is no longer shown on the taskbar by default, because this has become less necessary on the more powerful recent Raspberry Pi models. If you’d still like to use it, you can add it back – right-click the taskbar and choose ‘Add / Remove Panel Items’. Press the ‘Add’ button and you’ll find it listed as ‘CPU Usage Monitor’. While you are in there, you’ll also find the new ‘CPU Temperature Monitor’, which you can add if you’re interested in knowing more about what the CPU is up to.

One program which is currently missing from Buster is Mathematica. Don’t worry – this is only a temporary removal! Wolfram are working on getting Mathematica to work properly with Buster, and as soon as it is ready, it’ll be available for installation from Recommended Software.

A few features of the old non-OpenGL video driver (such as pixel doubling and underscan) are not currently supported by the new OpenGL driver, so the settings for these are hidden in Raspberry Pi Configuration if the GL driver is in use. (The GL driver is the default on Raspberry Pi 4 – older Pis will still use the non-GL driver by default. Also, if using a Raspberry Pi 4 headless, we recommend switching back to the non-GL driver – choose ‘Legacy’ under the ‘GL Driver’ setting in ‘Advanced Options’ in raspi-config.)

If the GL driver is in use, there’s a new ‘Screen Configuration’ tool – this enables you to set up the arrangement of multiple monitors on a Raspberry Pi 4. It can also be used to set custom monitor resolutions, which can be used to simulate the effect of pixel doubling.

Finally, there are a couple of new buttons in ‘Raspberry Pi Configuration’ which control video output options for Raspberry Pi 4. (These are not shown when running on earlier models of Raspberry Pi.) It is not possible on the Raspberry Pi 4 to have both analogue composite video (over the 3.5mm jack) and HDMI output simultaneously, so the analogue video output is disabled by default. 4Kp60 resolution over HDMI is also disabled by default, as this requires faster clock speeds resulting in a higher operating temperature and greater power consumption. The new buttons enable either of these options to be enabled as desired.

How do I get it?

As ever with major version changes, our recommendation is that you download a new clean image from the usual place on our site – this will ensure that you are starting from a clean, working Buster system.

We do not recommend upgrading an existing Stretch (or earlier) system to Buster – we can’t know what changes everyone has made to their system, and so have no idea what may break when you move to Buster. However, we have tested the following procedure for upgrading, and it works on a clean version of the last Stretch image we released. That does not guarantee it will work on your system, and we cannot provide support (or be held responsible) for any problems that arise if you try it. You have been warned – make a backup!

1. In the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list, change every use of the word “stretch” to “buster” (a one-line shortcut for this is sketched after these steps).
2. In a terminal,

sudo apt update

and then

sudo apt dist-upgrade

3. Wait for the upgrade to complete, answering ‘yes’ to any prompt. There may also be a point at which the install pauses while a page of information is shown on the screen – hold the ‘space’ key to scroll through all of this and then hit ‘q’ to continue.
4. The update will take anywhere from half an hour to several hours, depending on your network speed. When it completes, reboot your Raspberry Pi.
5. When the Pi has rebooted, launch ‘Appearance Settings’ from the main menu, go to the ‘Defaults’ tab, and press whichever ‘Set Defaults’ button is appropriate for your screen size in order to load the new UI theme.
6. Buster will have installed several new applications which we do not support. To remove these, open a terminal window and

sudo apt purge timidity lxmusic gnome-disk-utility deluge-gtk evince wicd wicd-gtk clipit usermode gucharmap gnome-system-tools pavucontrol
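As a convenience for step 1 above, a single sed invocation makes the substitution in both files; this is just a sketch of the manual edit described in that step, so back up the files first if you're unsure:

sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list /etc/apt/sources.list.d/raspi.list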

We hope that Buster gives a little hint of shiny newness for those of you who aren’t able to get your hands on a Raspberry Pi 4 immediately! As ever, your feedback is welcome – please leave your comments below.

The post Buster – the new version of Raspbian appeared first on Raspberry Pi.

The Pirate Bay Faces Massive ISP Blocks in Spain

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-faces-massive-isp-blocks-in-spain-190625/

While no longer the most visited ‘pirate’ site on the Internet, The Pirate Bay arguably remains the most recognizable brand. As a result, the platform has been at the center of dozens of legal processes around the world, each designed to make the platform less accessible to the public.

Aside from direct actions to take the site down, all of which have failed, the torrent index regularly finds itself listed in lawsuits and complaints which aim to force Internet service providers to block consumer access to the site.

Back in 2015, Spain was added to that expanding list when local ISP Vodafone admitted that following a government complaint, it had rendered the site’s main domain inaccessible.

According to the Ministry of Culture and Sport, procedures took place between June 2014 and November 2018 to block several associated domains, including those ending in .se, .org, .net, and .com. It now appears the government is attempting to finish the job.

Following a procedure initiated by rights holders represented by the Association of Intellectual Rights Management (AGEDI) and music group Promusicae (Productores de Música de España) and a subsequent request from the Commission of Intellectual Property (also known as the Anti-Piracy Commission) the central courts of the Contentious-Administrative Chamber of the National Court have authorized additional blocking.

The Ministry of Culture and Sport hasn’t detailed the precise targets but describes them as more than 60 additional domains/sites that are allegedly linked to the notorious torrent site. The site itself isn’t believed to operate that many alternative domains so it’s likely they’re proxies, mirrors and clones that utilize The Pirate Bay’s familiar branding.

“This massive blocking of web pages that, under the brand ThePirateBay, were illegally using the rights of our artists and creators, has an exemplary value for us because it shows that even with the greatest pirates who try repeatedly to circumvent the mechanisms of defense of copyright, the system of the Anti-Piracy Commission works,” says Adriana Moscoso del Prado, general director of Cultural Industries and Cooperation at the Ministry of Culture and Sports.

The government department says the order requires local Internet service providers to block subscriber access to the domains within 72 hours of being notified of the court order. Notifications were sent out yesterday meaning that ISPs should have blockades in place well before the end of the week. It is not yet clear which ISPs have been notified.

The Ministry of Culture notes that the Anti-Piracy Commission has to date targeted 479 sites but the overwhelming majority (92.69%) have removed infringing content once they’ve been officially notified. The Pirate Bay never removes any infringing content so faced with that intransigence, the authorities targeted it with judicial blocking orders, including the one handed down yesterday.

Following a complaint by several Hollywood studios, a similar order was handed down just a few days ago targeting Spanish-language sites Exvagos1.com, Seriesdanko.to, Seriespapaya.com, Cinecalidad.to, Repelis.live, Pelispedia.tv, Cliver.tv, Descagasdd.com and Pepecine.me.

Earlier this year, a local court ordered the country’s largest Internet service providers to begin blocking seven torrent sites including 1337x and LimeTorrents.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

The Serverlist: Serverless makes a splash at JSConf EU and JSConf Asia

Post Syndicated from Connor Peshek original https://blog.cloudflare.com/the-serverlist-newsletter-6/

Check out our sixth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.

Sign up below to have The Serverlist sent directly to your mailbox.



AWS Security Hub Now Generally Available

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/aws-security-hub-now-generally-available/

I’m a developer, or at least that’s what I tell myself while coming to terms with being a manager. I’m definitely not an infosec expert. I’ve been paged more than once in my career because something I wrote or configured caused a security concern. When systems enable frequent deploys and remove gatekeepers for experimentation, sometimes a non-compliant resource is going to sneak by. That’s why I love tools like AWS Security Hub, a service that enables automated compliance checks and aggregated insights from a variety of services. With guardrails like these in place to make sure things stay on track, I can experiment more confidently. And with a single place to view compliance findings from multiple systems, infosec feels better about letting me self-serve.

With cloud computing, we have a shared responsibility model when it comes to compliance and security. AWS handles the security of the cloud: everything from the security of our data centers up to the virtualization layer and host operating system. Customers handle security in the cloud: the guest operating system, configuration of systems, and secure software development practices.

Today, AWS Security Hub is out of preview and available for general use to help you understand the state of your security in the cloud. It works across AWS accounts and integrates with many AWS services and third-party products. You can also use the Security Hub API to create your own integrations.

Getting Started

When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. These checks and their rules are defined by compliance standards. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year.

The results of these compliance checks are called findings. Each finding tells you the severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days that should be revoked.

Findings can be grouped into insights using aggregation statements and filters.
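As a concrete (if hypothetical) example, a custom insight that groups all active, high-severity findings by resource could be created with the AWS CLI roughly like this; the filter attribute names should be verified against the Security Hub API reference:

aws securityhub create-insight \
    --name "Active high-severity findings by resource" \
    --group-by-attribute "ResourceId" \
    --filters '{
        "SeverityNormalized": [{"Gte": 70}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]
    }'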

Integrations

In addition to the compliance standards findings, AWS Security Hub also aggregates and normalizes data from a variety of services. It is a central resource for findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and from 30 AWS partner security solutions.

AWS Security Hub also supports importing findings from custom or proprietary systems. Findings must be formatted as AWS Security Finding Format JSON objects. Here’s an example of an object I created that meets the minimum requirements for the format. To make it work for your account, switch out the AwsAccountId and the ProductArn. To get your ProductArn for custom findings, replace REGION and ACCOUNT_ID in the following string: arn:aws:securityhub:REGION:ACCOUNT_ID:product/ACCOUNT_ID/default.

{
    "Findings": [{
        "AwsAccountId": "12345678912",
        "CreatedAt": "2019-06-13T22:22:58Z",
        "Description": "This is a custom finding from the API",
        "GeneratorId": "api-test",
        "Id": "us-east-1/12345678912/98aebb2207407c87f51e89943f12b1ef",
        "ProductArn": "arn:aws:securityhub:us-east-1:12345678912:product/12345678912/default",
        "Resources": [{
            "Type": "Other",
            "Id": "i-decafbad"
        }],
        "SchemaVersion": "2018-10-08",
        "Severity": {
            "Product": 2.5,
            "Normalized": 11
        },
        "Title": "Security Finding from Custom Software",
        "Types": [
            "Software and Configuration Checks/Vulnerabilities/CVE"
        ],
        "UpdatedAt": "2019-06-13T22:22:58Z"
    }]
}

Then I wrote a quick node.js script that I named importFindings.js to read this JSON file and send it off to AWS Security Hub via the AWS JavaScript SDK.

const fs    = require('fs');        // For file system interactions
const util  = require('util');      // To wrap fs API with promises
const AWS   = require('aws-sdk');   // Load the AWS SDK

AWS.config.update({region: 'us-east-1'});

// Create our Security Hub client
const sh = new AWS.SecurityHub();

// Wrap readFile so it returns a promise and can be awaited 
const readFile = util.promisify(fs.readFile);

async function getFindings(path) {
    try {
        // wait for the file to be read...
        let fileData = await readFile(path);

        // ...then parse it as JSON and return it
        return JSON.parse(fileData);
    }
    catch (error) {
        console.error(error);
    }
}

async function importFindings() {
    // load the findings from our file
    const findings = await getFindings('./findings.json');

    try {
        // call the AWS Security Hub BatchImportFindings endpoint
        const response = await sh.batchImportFindings(findings).promise();
        console.log(response);
    }
    catch (error) {
        console.error(error);
    }
}

// Engage!
importFindings();

A quick run of node importFindings.js results in { FailedCount: 0, SuccessCount: 1, FailedFindings: [] }. And now I can see my custom finding in the Security Hub console:

Custom Actions

AWS Security Hub can integrate with response and remediation workflows through the use of custom actions. With custom actions, a batch of selected findings is used to generate CloudWatch events. With CloudWatch Rules, these events can trigger other actions such as sending notifications via a chat system or paging tool, or sending events to a visualization service.
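
Under the hood, a custom action is just an action target resource. As a hedged sketch (the name, description, and ID are illustrative assumptions), one could also create it via the CreateActionTarget API; the returned ARN is what the event pattern shown below matches on.

const AWS = require('aws-sdk');
const sh = new AWS.SecurityHub({region: 'us-west-2'});

// Register a custom action; findings sent to it will generate CloudWatch events
sh.createActionTarget({
    Name: 'DoThing',                                    // shows up in the Actions dropdown
    Description: 'Send selected findings to our workflow',
    Id: 'DoThing'                                       // becomes the last segment of the action ARN
}).promise()
    .then(data => console.log('Action target ARN:', data.ActionTargetArn))
    .catch(err => console.error(err));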

First, we open Settings in the AWS Security Hub console and select Custom Actions. Add a custom action and note its ARN.

Then we create a CloudWatch Rule using the custom action we created as a resource in the event pattern, like this:

{
  "source": [
    "aws.securityhub"
  ],
  "detail-type": [
    "Security Hub Findings - Custom Action"
  ],
  "resources": [
    "arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"
  ]
}

Our CloudWatch Rule can have many different kinds of targets, such as Amazon Simple Notification Service (SNS) Topics, Amazon Simple Queue Service (SQS) Queues, and AWS Lambda functions. Once our action and rule are in place, we can select findings, and then choose our action from the Actions dropdown list. This will send the selected findings to Amazon CloudWatch Events. Those events will match our rule, and the event targets will be invoked.
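
The same wiring can be scripted. Here is a rough sketch with the CloudWatch Events API; the rule name, SNS topic ARN, and target ID are assumptions made for illustration.

const AWS = require('aws-sdk');
const cwe = new AWS.CloudWatchEvents({region: 'us-west-2'});

// The same event pattern shown above, serialized as a string
const eventPattern = JSON.stringify({
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": ["arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"]
});

async function wireUpRule() {
    // Create (or update) the rule that matches our custom action events
    await cwe.putRule({Name: 'SecurityHubDoThing', EventPattern: eventPattern}).promise();

    // Point the rule at a target -- here, a hypothetical SNS topic
    await cwe.putTargets({
        Rule: 'SecurityHubDoThing',
        Targets: [{Id: 'notify', Arn: 'arn:aws:sns:us-west-2:123456789012:security-alerts'}]
    }).promise();
}

wireUpRule().catch(console.error);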

Important Notes

  • AWS Config must be enabled for Security Hub compliance checks to run.
  • AWS Security Hub is available in 15 regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
  • AWS Security Hub does not transfer data outside of the regions where it was generated. Data is not consolidated across multiple regions.

AWS Security Hub is already the type of service that I’ll enable on the majority of the AWS accounts I operate. As more compliance standards become available this year, I expect it will become a standard tool in many toolboxes. A 30-day free trial is available so you can try it out and get an estimate of what your costs would be. As always, we want to hear your feedback and understand how you’re using AWS Security Hub. Stay in touch, and happy building!

— Brandon

AWS Control Tower – Set up & Govern a Multi-Account AWS Environment

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-control-tower-set-up-govern-a-multi-account-aws-environment/

Earlier this month I met with an enterprise-scale AWS customer. They told me that they are planning to go all-in on AWS, and want to benefit from all that we have learned about setting up and running AWS at scale. In addition to setting up a Cloud Center of Excellence, they want to set up a secure environment for teams to provision development and production accounts in alignment with our recommendations and best practices.

AWS Control Tower
Today we are announcing general availability of AWS Control Tower. This service automates the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use. Control Tower incorporates the knowledge that AWS Professional Services has gained over the course of thousands of successful customer engagements, and also draws from the recommendations found in our whitepapers, documentation, the Well-Architected Framework, and training. The guidance offered by Control Tower is opinionated and prescriptive, and is designed to accelerate your cloud journey!

AWS Control Tower builds on multiple AWS services including AWS Organizations, AWS Identity and Access Management (IAM) (including Service Control Policies), AWS Config, AWS CloudTrail, and AWS Service Catalog. You get a unified experience built around a collection of workflows, dashboards, and setup steps. AWS Control Tower automates a landing zone to set up a baseline environment that includes:

  • A multi-account environment using AWS Organizations.
  • Identity management using AWS Single Sign-On (SSO).
  • Federated access to accounts using AWS SSO.
  • Centralized logging from AWS CloudTrail and AWS Config, stored in Amazon S3.
  • Cross-account security audits using AWS IAM and AWS SSO.

Before diving in, let’s review a couple of key Control Tower terms:

Landing Zone – The overall multi-account environment that Control Tower sets up for you, starting from a fresh AWS account.

Guardrails – Automated implementations of policy controls, with a focus on security, compliance, and cost management. Guardrails can be preventive (blocking actions that are deemed as risky), or detective (raising an alert on non-conformant actions).

Blueprints – Well-architected design patterns that are used to set up the Landing Zone.

Environment – An AWS account and the resources within it, configured to run an application. Users make requests (via Service Catalog) for new environments and Control Tower uses automated workflows to provision them.

Using Control Tower
Starting from a brand new AWS account that is both Master Payer and Organization Master, I open the Control Tower Console and click Set up landing zone to get started:

AWS Control Tower will create AWS accounts for log archiving and for auditing, and requires email addresses that are not already associated with an AWS account. I enter two addresses, review the information within Service permissions, give Control Tower permission to administer AWS resources and services, and click Set up landing zone:

The setup process runs for about an hour, and provides status updates along the way:

Early in the process, Control Tower sends a handful of email requests to verify ownership of the account, to invite the account to participate in AWS SSO, and to subscribe to some SNS topics. The requests contain links that I must click in order for the setup process to proceed. The second email also asks me to create an AWS SSO password for the account. After the setup is complete, AWS Control Tower displays a status report:

The console offers some recommended actions:

At this point, the mandatory guardrails have been applied and the optional guardrails can be enabled:

I can see the Organizational Units (OUs) and accounts, and the compliance status of each one (with respect to the guardrails):

 

Using the Account Factory
The navigation on the left lets me access all of the AWS resources created and managed by Control Tower. Now that my baseline environment is set up, I can click Account factory to provision AWS accounts for my teams, applications, and so forth.

The Account factory displays my network configuration (I’ll show you how to edit it later), and gives me the option to Edit the account factory network configuration or to Provision new account:

I can control the VPC configuration that is used for new accounts, including the regions where VPCs are created when an account is provisioned:

The account factory is published to AWS Service Catalog automatically. I can provision managed accounts as needed, as can the developers in my organization. I click AWS Control Tower Account Factory to proceed:

I review the details and click LAUNCH PRODUCT to provision a new account:

Working with Guardrails
As I mentioned earlier, Control Tower’s guardrails provide guidance that is either Mandatory or Strongly Recommended:

Guardrails are implemented via an IAM Service Control Policy (SCP) or an AWS Config rule, and can be enabled on an OU-by-OU basis:
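
To give a flavor of what a preventive guardrail looks like under the hood, here is a hedged sketch of an SCP-style policy. It is illustrative only, not the actual policy that Control Tower deploys; it simply blocks member accounts from stopping or deleting CloudTrail trails.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyDisablingCloudTrail",
    "Effect": "Deny",
    "Action": [
      "cloudtrail:StopLogging",
      "cloudtrail:DeleteTrail"
    ],
    "Resource": "*"
  }]
}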

Now Available
AWS Control Tower is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions, with more to follow. There is no charge for the Control Tower service; you pay only for the AWS resources that it creates on your behalf.

In addition to adding support for more AWS regions, we are working to allow you to set up a parallel landing zone next to an existing AWS account, and to give you the ability to build and use custom guardrails.

Jeff;

 

[$] Lockdown as a security module

Post Syndicated from corbet original https://lwn.net/Articles/791863/rss

Technologies like UEFI secure boot are intended to guarantee that a locked-down system is running the software intended by its owner (for a definition of “owner” as “whoever holds the signing key recognized by the firmware”). That guarantee is hard to uphold, though, if a program run on the system in question is able to modify the running kernel somehow. Thus, proponents of secure-boot technologies have been trying for years to provide the ability to lock down many types of kernel functionality on secure systems. The latest attempt, posted by Matthew Garrett at an eyebrow-raising version 34, tries to address previous concerns by putting lockdown under the control of a Linux security module (LSM).

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today

Post Syndicated from Tom Strickx original https://blog.cloudflare.com/how-verizon-and-a-bgp-optimizer-knocked-large-parts-of-the-internet-offline-today/

Massive route leak impacts major parts of the Internet, including Cloudflare

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today

What happened?

Today at 10:30 UTC, the Internet had a small heart attack. A small company in Northern Pennsylvania became a preferred path for many Internet routes through Verizon (AS701), a major Internet transit provider. This was the equivalent of Waze routing an entire freeway down a neighborhood street, resulting in many websites on Cloudflare, and many other providers, being unavailable from large parts of the Internet. This should never have happened, because Verizon should never have forwarded those routes to the rest of the Internet. To understand why, read on.

We have blogged about these unfortunate events in the past, as they are not uncommon. This time, the damage was seen worldwide. What exacerbated the problem today was the involvement of a “BGP Optimizer” product from Noction. This product has a feature that splits up received IP prefixes into smaller, contributing parts (called more-specifics). For example, our own IPv4 route 104.20.0.0/20 was turned into 104.20.0.0/21 and 104.20.8.0/21. It’s as if the road sign directing traffic to “Pennsylvania” was replaced by two road signs, one for “Pittsburgh, PA” and one for “Philadelphia, PA”. By splitting these major IP blocks into smaller parts, a network has a mechanism to steer traffic within its own network, but that split should never have been announced to the world at large. When it was, it caused today’s outage.
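
To make the splitting mechanics concrete, here is a small illustrative sketch in JavaScript; it is just the prefix arithmetic, not Noction's code.

// Split an IPv4 prefix into its two immediate more-specifics (a /n becomes two /(n+1)s)
function splitPrefix(prefix) {
    const [addr, lenStr] = prefix.split('/');
    const len = parseInt(lenStr, 10);
    const octets = addr.split('.').map(Number);

    // Convert the dotted quad to an unsigned 32-bit integer
    const asInt = ((octets[0] << 24) >>> 0) + (octets[1] << 16) + (octets[2] << 8) + octets[3];

    // Each half of the original prefix covers 2^(32 - (len + 1)) addresses
    const step = 2 ** (32 - (len + 1));
    const toDotted = n => [24, 16, 8, 0].map(s => (n >>> s) & 255).join('.');

    return [`${toDotted(asInt)}/${len + 1}`, `${toDotted(asInt + step)}/${len + 1}`];
}

console.log(splitPrefix('104.20.0.0/20'));
// [ '104.20.0.0/21', '104.20.8.0/21' ]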

To explain what happened next, here’s a quick summary of how the underlying “map” of the Internet works. “Internet” literally means a network of networks, and it is made up of networks called Autonomous Systems (AS), each of which has a unique identifier, its AS number. All of these networks are interconnected using a protocol called Border Gateway Protocol (BGP). BGP joins these networks together and builds the Internet “map” that enables traffic to travel from, say, your ISP to a popular website on the other side of the globe.

Using BGP, networks exchange route information: how to get to them from wherever you are. These routes can either be specific, similar to finding a specific city on your GPS, or very general, like pointing your GPS to a state. This is where things went wrong today.

An Internet Service Provider in Pennsylvania (AS33154 – DQE Communications) was using a BGP optimizer in their network, which meant there were a lot of more-specific routes in their network. Specific routes override more general routes (in the Waze analogy, a route to, say, Buckingham Palace is more specific than a route to London).

DQE announced these specific routes to their customer (AS396531 – Allegheny Technologies Inc). All of this routing information was then sent to their other transit provider (AS701 – Verizon), who proceeded to tell the entire Internet about these “better” routes. These routes were supposedly “better” because they were more granular, more specific.

The leak should have stopped at Verizon. However, against numerous best practices outlined below, Verizon’s lack of filtering turned this into a major incident that affected many Internet services such as Amazon, Fastly, Linode, and Cloudflare.

What this means is that suddenly Verizon, Allegheny, and DQE had to deal with a stampede of Internet users trying to access those services through their networks. None of these networks were suitably equipped to deal with this drastic increase in traffic, causing disruption in service. Even if they had sufficient capacity, DQE, Allegheny, and Verizon were not allowed to say they had the best route to Cloudflare, Amazon, Fastly, Linode, etc…

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today
BGP leak process with a BGP optimizer

At the worst of the incident, we observed a loss of about 15% of our global traffic.

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today
Traffic levels at Cloudflare during the incident.

How could this leak have been prevented?

There are multiple ways this leak could have been avoided:

A BGP session can be configured with a hard limit on the number of prefixes to be received. This means a router can decide to shut down a session if the number of prefixes goes above the threshold. Had Verizon had such a prefix limit in place, this would not have occurred. It is a best practice to have such limits in place. It doesn’t cost a provider like Verizon anything to do so, and there’s no good reason, other than sloppiness or laziness, not to have them.

A different way network operators can prevent leaks like this one is by implementing IRR-based filtering. IRR is the Internet Routing Registry, and networks can add entries to these distributed databases. Other network operators can then use these IRR records to generate specific prefix lists for the BGP sessions with their peers. If IRR filtering had been used, none of the networks involved would have accepted the faulty more-specifics. What’s quite shocking is that it appears that Verizon didn’t implement any of this filtering in their BGP session with Allegheny Technologies, even though IRR filtering has been around (and well documented) for over 24 years. IRR filtering would not have increased Verizon’s costs or limited their service in any way. Again, the only explanation we can conceive of why it wasn’t in place is sloppiness or laziness.

The RPKI framework that we implemented and deployed globally last year is designed to prevent this type of leak. It enables filtering on origin network and prefix size. The prefixes Cloudflare announces are signed for a maximum size of 20. RPKI then indicates any more-specific prefix should not be accepted, no matter what the path is. In order for this mechanism to take action, a network needs to enable BGP Origin Validation. Many providers like AT&T have already enabled it successfully in their network.

If Verizon had used RPKI, they would have seen that the advertised routes were not valid, and the routes could have been automatically dropped by the router.
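
As a rough illustration of the check involved, here is a simplified origin-validation sketch, not a real RPKI validator. The ROA values come from this post (104.20.0.0/20, maximum length 20) together with Cloudflare's origin AS13335; the leaked /21 fails because it is longer than the maximum length.

// Simplified sketch of RPKI route origin validation against a single ROA
function prefixToInt(addr) {
    const o = addr.split('.').map(Number);
    return ((o[0] << 24) >>> 0) + (o[1] << 16) + (o[2] << 8) + o[3];
}

// Does the announced prefix fall inside the ROA prefix?
function covers(roaAddr, roaLen, addr, len) {
    if (len < roaLen) return false;
    const mask = roaLen === 0 ? 0 : (~0 << (32 - roaLen)) >>> 0;
    return (prefixToInt(roaAddr) & mask) === (prefixToInt(addr) & mask);
}

function validate(announcement, roa) {
    const [addr, lenStr] = announcement.prefix.split('/');
    const len = parseInt(lenStr, 10);
    const [roaAddr, roaLenStr] = roa.prefix.split('/');
    const roaLen = parseInt(roaLenStr, 10);

    if (!covers(roaAddr, roaLen, addr, len)) return 'not-found';
    if (len > roa.maxLength) return 'invalid';            // more specific than the ROA allows
    if (announcement.originAsn !== roa.asn) return 'invalid';
    return 'valid';
}

const roa = {prefix: '104.20.0.0/20', maxLength: 20, asn: 13335};
console.log(validate({prefix: '104.20.8.0/21', originAsn: 13335}, roa));  // invalid
console.log(validate({prefix: '104.20.0.0/20', originAsn: 13335}, roa));  // valid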

Cloudflare encourages all network operators to deploy RPKI now!

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today
Route leak prevention using IRR, RPKI, and prefix limits

All of the above suggestions are nicely condensed into MANRS (Mutually Agreed Norms for Routing Security).

How it was resolved

The network team at Cloudflare reached out to the networks involved, AS33154 (DQE Communications) and AS701 (Verizon). We had difficulty reaching either network; this may have been due to the timing of the incident, as it was still early morning on the East Coast of the US when the route leak started.

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today
Screenshot of the email sent to Verizon

One of our network engineers made contact with DQE Communications quickly, and after a little delay they were able to put us in contact with someone who could fix the problem. DQE worked with us on the phone to stop advertising these “optimized” routes to Allegheny Technologies Inc. We’re grateful for their help. Once this was done, the Internet stabilized and things went back to normal.

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today
Screenshot of attempts to communicate with the support for DQE and Verizon

It is unfortunate that, while we tried both e-mail and phone calls to reach Verizon, at the time of writing this article (over 8 hours after the incident) we have not heard back from them, nor are we aware of them taking any action to resolve the issue.

At Cloudflare, we wish that events like this never take place, but unfortunately the current state of the Internet does very little to prevent incidents such as this one from occurring. It’s time for the industry to adopt better routing security through systems like RPKI. We hope that major providers will follow the lead of Cloudflare, Amazon, and AT&T and start validating routes. And, in particular, we’re looking at you Verizon — and still waiting on your reply.

Despite this being caused by events outside our control, we’re sorry for the disruption. Our team cares deeply about our service and we had engineers in the US, UK, Australia, and Singapore online minutes after this problem was identified.

Canonical backtracks on i386 packages

Post Syndicated from corbet original https://lwn.net/Articles/791936/rss

Canonical has let it be known that minds have been changed about removing all 32-bit x86 support from the Ubuntu distribution. “Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.

We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can add to that list post-release if we miss something that is needed.”
