[$] Python 3.9 is around the corner

Post Syndicated from coogle original https://lwn.net/Articles/831783/rss

Python
3.9.0rc2
was released on September 17, with the final version scheduled
for October 5, roughly a year after the release of Python 3.8. Python 3.9
will come with new operators for dictionary unions, a new parser, two string
operations meant to eliminate some longstanding confusion, as well as
improved time-zone handling and type hinting. Developers may need to do some
porting for code coming from Python 3.8 or earlier, as the new release has
removed several previously-deprecated features still lingering from Python
2.7.
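
As a brief illustration of what those features look like in practice, here is a small Python 3.9 sketch (the variable names are invented for the example) covering the dictionary union operators (PEP 584), the new removeprefix()/removesuffix() string methods (PEP 616), standard-library time zones via zoneinfo (PEP 615), and generic type hints on built-in collections (PEP 585):

from datetime import datetime
from zoneinfo import ZoneInfo   # new in Python 3.9 (PEP 615)

# Dictionary union operators (PEP 584)
defaults = {"theme": "light", "lang": "en"}
overrides = {"lang": "de"}
merged = defaults | overrides        # {'theme': 'light', 'lang': 'de'}
defaults |= overrides                # in-place union

# New string methods (PEP 616), replacing common misuse of str.strip()
print("py39-release".removeprefix("py39-"))   # 'release'
print("notes.txt".removesuffix(".txt"))       # 'notes'

# Time-zone-aware datetimes from the standard library (PEP 615)
print(datetime(2020, 10, 5, 12, 0, tzinfo=ZoneInfo("Europe/Sofia")))

# Built-in collection types can be used directly in type hints (PEP 585)
def positive(counts: dict[str, int]) -> list[str]:
    return [name for name, n in counts.items() if n > 0]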

The FinCEN Files: The state-owned Bulgarian Development Bank services accounts linked to organized crime and arms for Islamic State

Post Syndicated from Atanas Chobanov original https://bivol.bg/fincen-files-bbr.html

Wednesday, September 23, 2020


An offshore company with an account at Postbank paid 31 million dollars to another offshore company with an account at the state-owned Bulgarian Development Bank (BDB). The first offshore company belongs to a person linked to arms trafficking to Islamic State, and the other to a notorious Bulgarian arms dealer, a partner of Boyan Petrakiev, "the Baron". These details can be found in the FinCEN Files: leaked top-secret reports on suspicious financial transactions collected by the US Treasury Department's Financial Crimes Enforcement Network.

The transfer in question was made on November 1, 2016. The company with the account at Postbank is called Heptagon Global Trading and is registered in Dubai. It sent 31 million dollars to an account of the Seychelles company Osiris Global Trade at the Bulgarian Development Bank. Because the transfer was in dollars, it passed through the correspondent bank, The Bank of New York Mellon, which reported the suspicious transfer to FinCEN a month and a half later, at the beginning of 2017.

The part of the SAR report that describes the 31-million-dollar transfer between the two Bulgarian banks.

The detailed report on the deal notes that Heptagon Global Trading is registered in Dubai and has a website, which, however, gives no information about its owner. Osiris Global Trade, registered in the Seychelles, is presumed to be a shell company. No information about its owners was found.

An investigation by Bivol, together with partners from the International Consortium of Investigative Journalists (ICIJ), established who stands behind these two companies.

From a dealer whose weapons ended up with Islamic State…

Heptagon Global Trading belongs to the American arms dealer Helmut G. Mertens. It had a branch in the US, which was closed in September 2017.

Helmut Mertens. Photo from his public WhatsApp profile

Helmut Mertens has an interesting biography. He is the son of the German SS officer Gerhard Mertens, who in 1943 took part in the operation by Otto Skorzeny's Nazi commandos that freed Benito Mussolini from captivity. After the war, Mertens senior was recruited by the CIA together with Otto Skorzeny, and the two founded the arms-trading company Merex AG, used for covert deliveries of German weapons with the blessing of the German services. The former head of Chilean intelligence, Manuel Contreras, claims in his memoirs that Mertens supplied the Pinochet regime with weapons and helicopters.

Mertens senior died in Florida in 1993. His son Helmut Mertens inherited the arms business and has also been involved in scandalous deliveries. His company United International Supplies, founded back in 1987, trades in weapons from Eastern Europe. A 2017 report by the EU-funded research group Conflict Armament Research identified Romanian grenade launchers bought by this company in 2003 and subsequently used by Islamic State.

Mertens's telephone number can be found in public registers, and he maintains a profile on WhatsApp. Bivol's attempts to reach him and put questions to him about the Bulgarian deal were unsuccessful. Postbank, where the Heptagon Global Trading account was opened, answered our questions laconically: "facts and circumstances concerning account balances and transactions constitute bank secrecy within the meaning of Art. 62, para. 2 of the Credit Institutions Act, and such information may be disclosed only in compliance with the specific requirements of the law."

It is a fact that there is no legal restriction in Bulgaria on opening accounts for foreign companies registered in offshore destinations. But in any case, a company with this line of business, operating with tens of millions of dollars and with an owner linked to arms trafficking, should have been seriously vetted both by the bank and by our security services.

…to an arms dealer linked to "the Baron"

Osiris Global Trade, the company that received the 31 million dollars from Mertens, holds its account at the Bulgarian Development Bank and is owned by Alexander Lyubomirov Dimitrov.

Until the end of 2019, "Osiris Global Trade Limited" was the owner of the Bulgarian company "Oro Investments". Alexander Dimitrov is named in that company's declaration of beneficial owners.

The same Alexander Dimitrov also owns the company "Algans", which holds an arms-trading license. Algans's specialty is buying up aging stocks of Soviet-model weapons and ammunition and placing them on the international market. This business flourished alongside the Pentagon's program for arming the Syrian rebels.

Bulgaria is one of the major suppliers of weapons under the program of the US Special Operations Command (SOCOM), as shown by the investigation "Убийствен удар" ("Deadly Blow"), which received several international awards for investigative journalism.

But in June 2015, one of the grenades supplied by Algans exploded at the Anevo test range, which the company had rented. The blast killed an American instructor, Francis Norwillo of Texas, a father of two.

Following the trail of that incident, the American site BuzzFeed revealed that Algans was a partner of the American company Purple Shovel, which had received US government contracts worth 26.7 million dollars to supply weapons for Syria.

The investigation also revealed that Dimitrov has ties to Boyan Petrakiev, "the Baron", one of the emblematic figures of organized crime in Bulgaria. Before getting into the arms business, Dimitrov was a partner of the Baron in the scrap-metal company "SIB Metal".

 

 

The publication examines in detail the connection between "the Baron" and Dimitrov and raises the question of why the American government spends public funds on deals with dubious individuals and on weapons of dubious quality.

Petrakiev told BuzzFeed that he was the one who introduced Dimitrov to "influential people" and that it was precisely his connections that helped Dimitrov become a successful arms businessman.

"Before I helped him, Dimitrov was nothing," says Boyan Petrakiev, "the Baron".

"Petrakiev is a well-known criminal with a long record going back decades," Commissioner Yavor Kolev, head of the Transborder Organized Crime unit at the Bulgarian Ministry of Interior, told BuzzFeed. "In law-enforcement circles he is considered a notorious figure of organized crime in Bulgaria."

Osiris Global Trade also appears in the scheme of Pentagon-funded arms purchases, as a court document from a case filed in the US shows. From it, it emerges that Purple Shovel owed millions of dollars both to Algans and to Osiris Global Trade.

 

The scandal over the dead American and the 2015 coverage in the US press evidently did not disturb Dimitrov's business. At the end of 2016 he received 31 million dollars from another American arms dealer.

Bivol tried to contact Alexander Dimitrov, but the questions sent to Algans's company email went unanswered.

Our reporter also visited the address at which Algans and Oro Investments are registered, 6 Pop Bogomil St., 1st floor, apartment 3. The building is modest, and there is no sign of an office for either company. Dimitrov himself was absent, but it became clear that the neighbors often see him there. The head of this successful arms company has not invested in a representative office, but he gets around in a luxury Maserati.

The fact that tens of millions of dollars belonging to companies with questionable reputations move through the state bank that is supposed to support small and medium-sized business is obviously of great public interest. The Bulgarian Development Bank, however, did not answer the questions we sent by email.

In search of answers, Bivol contacted the press office of the Ministry of Finance, which exercises the state's ownership in BDB, directly. But all attempts to find out how an offshore company like Osiris Global Trade can hold a dollar account at the state bank were unsuccessful. The finance minister at the time was Vladislav Goranov.

A bank transfer of nearly 50 million leva is unlikely to go unnoticed by the services responsible for national security, all the more so in the strictly regulated and monitored arms business. But our questions to the State Agency for National Security (DANS), addressed specifically to its Financial Intelligence Directorate, went unanswered.

Is this discretion due to the "influential people" who stand behind Dimitrov, and to whom he was recommended by none other than an emblematic figure of Bulgarian organized crime? Whoever they are, their influence evidently reaches the top of the state, given that the Baron's associate has been allowed to bank at the Bulgarian Development Bank. And, of course, it is important to find out how many more such offshore companies and dubious entities are moving dollar transfers worth millions through the state bank.

The author, Atanas Chobanov, is part of the group of 400 journalists from 88 countries who worked for 16 months on the secret FinCEN documents provided to BuzzFeed and the International Consortium of Investigative Journalists (ICIJ).

[$] Accurate timestamps for the ftrace ring buffer

Post Syndicated from original https://lwn.net/Articles/831207/rss

The function
tracer (ftrace) subsystem
has become an essential part of the kernel’s
introspection tooling. Like many kernel subsystems, ftrace uses a ring buffer to
quickly
communicate events to user space; those events include a timestamp to
indicate when they occurred. Until recently, the design of the ring buffer
has led to the creation of inaccurate timestamps when events are generated
from interrupt handlers. That problem has now been solved; read on for an
in-depth discussion of how this issue came about and the form of its
solution.


Linux Journal is Back

Post Syndicated from original https://lwn.net/Articles/832184/rss

Linux Journal has returned
under the ownership of Slashdot Media. “As Linux enthusiasts and long-time fans of Linux Journal, we were disappointed to hear about Linux Journal closing its doors last year. It took some time, but fortunately we were able to get a deal done that allows us to keep Linux Journal alive now and indefinitely. It’s important that amazing resources like Linux Journal never disappear.”

Improving security as part of accelerated data center migrations

Post Syndicated from Stephen Bowie original https://aws.amazon.com/blogs/security/improving-security-as-part-of-accelerated-data-center-migrations/

Approached correctly, cloud migrations are a great opportunity to improve the security and stability of your applications. Many organizations are looking for guidance on how to meet their security requirements while moving at the speed that the cloud enables. They often try to configure everything perfectly in the data center before they migrate their first application. At AWS Managed Services (AMS), we’ve observed that successful migrations establish a secure foundation in the cloud landing zone and then iterate from there: refine and improve your security as you grow.

Customers who take a pragmatic, risk-based approach are able to innovate and move workloads more quickly to the cloud. The organizations that migrate fastest start by understanding the shared responsibility model. In the shared responsibility model, Amazon Web Services (AWS) takes responsibility for delivering security controls that might have been the responsibility of customers operating within their legacy data center. Customers can concentrate their activities on the security controls they remain responsible for. The modern security capabilities provided by AWS make this easier.

The most efficient way to migrate is to move workloads to the cloud as early as possible. After the workloads are moved, you can experiment with security upgrades and new security capabilities available in the cloud. This lets you migrate faster and consistently evolve your security approach. The sooner you focus on applying foundational security in the cloud, the sooner you can begin refining and getting comfortable with cloud security and making improvements to your existing workloads.

For example, we recently helped a customer migrate servers that weren’t sufficiently hardened to the Center for Internet Security (CIS) benchmarks. The customer could have attempted hardening on premises before their migration, but that would have required spinning up dedicated infrastructure resources in their data center, a complex, costly, and resource-intensive proposition.

Instead, we migrated their application to the cloud as it was, took snapshots of the servers, and ran the snapshots on an easy-to-deploy, low-cost instance of Amazon Elastic Compute Cloud (Amazon EC2). Using the snapshots, we ran scripts to harden those servers and brought their security scores up to over 90 percent against the CIS benchmarks.

Migrating this way let the customer move their existing system to the cloud quickly and then test hardening methods against the snapshots. If the application hadn’t run properly after hardening, the customer could have continued running on the legacy OS while fixing the issues at their own pace. Fortunately, the application ran seamlessly on the hardened snapshot of the OS. The customer switched to the hardened infrastructure without incurring downtime and with none of the risks or costs of trying to do it in their data center.
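
As a rough sketch of that pattern, the AWS CLI steps might look something like the following. The instance IDs, AMI ID, and hardening script path are placeholders for this illustration, not the customer's actual values, and the exact workflow will vary.

# Create an AMI from the migrated server so tests run on a copy, not the original
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "app-server-pre-hardening" --no-reboot

# Launch a low-cost test instance from that image
aws ec2 run-instances --image-id ami-0abc1234def567890 \
    --instance-type t3.medium --key-name migration-test-key

# Apply the CIS hardening script on the test instance through AWS Systems Manager
aws ssm send-command --document-name "AWS-RunShellScript" \
    --targets "Key=instanceids,Values=i-0fedcba9876543210" \
    --parameters 'commands=["sudo /opt/hardening/apply-cis-benchmark.sh"]'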

Migrations are great opportunities to uplift the security of your infrastructure and applications. It’s often more efficient to try migrating and break something rather than attempting to get everything right before starting. For example, dependence on legacy protocols, such as Server Message Block (SMB) v1, should be fixed by the customer or their migration partner as part of the initial migration. The same is true for servers missing required endpoint security agents. AWS Professional Services and AMS help customers identify these risks during migrations, and help them to isolate and mitigate them as an integral part of the migration.

The key is to set priorities appropriately. Reviewing control objectives early in the process is essential. Many on-premises data centers operate on security policies that are 20 years old or more. Legacy policies often clash with current security best practices, or lack the ability to take advantage of security capabilities that are native to the cloud. Mapping objectives to cloud capabilities can provide opportunities to meet or exceed existing security policies by using new controls and tools. It can also help identify what’s critical to fix right away.

In many cases, controls can be retired because cloud security makes them irrelevant. For example, in AMS, privileged credentials, such as Local Administrator and sudo passwords, are either randomized or made unusable via policy. This removes the need to manage and control those types of credentials. Using AWS Directory Service for Microsoft Active Directory reduces the risk exposure of domain controllers for the resource forest and automates activities, such as patching, that would otherwise require privileged access. By using AWS Systems Manager to automate common operational tasks, 96 percent of our operations are performed via automation. This significantly reduced the need for humans to access infrastructure, which reflects one of the Well-Architected design principles.

It’s also important to address the people and process aspects of security. Although the cloud can improve your security posture, you should implement current security best practices to help mitigate new risks that might emerge in the future. Migration is a great opportunity to refresh and practice your security response process, and take advantage of the increased agility and automation of security capabilities in the cloud. At AMS, we welcome every opportunity to simulate security events with our customers as part of a joint game day, allowing our teams to practice responding to security events together.

Or as John Brigden, Vice President of AMS, recently said in a blog post, “Traditional, centralized IT prioritized security and control over speed and flexibility. Outsourced IT could exacerbate this problem by adding layers of bureaucracy to the system. The predictable result was massive growth in shadow IT. Cloud-native, role-based solutions such as AWS Identity and Access Manager (IAM), Amazon CloudWatch, and AWS CloudTrail work together to enable enterprise governance and security with appropriate flexibility and control for users.”

In most cases, if it’s possible to migrate even a small application to the cloud early, it will be more efficient and less costly than waiting until all security issues have been addressed before migrating. To learn how using AMS to operate in the cloud can deliver a 243 percent return on investment, download the Forrester Total Economic Impact™ study.

You can use native AWS and third-party security services to inspect and harden your infrastructure. Most importantly, you can get a feel for security operations in the cloud—how things change, how they stay the same, and what is no longer a concern. When it comes to accelerating your migration securely, let the cloud do the heavy lifting.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Migration & Transfer forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Stephen Bowie

Based in Seattle, Stephen leads the AMS Security team, a global team of engineers who live and breathe security, striving around the clock to keep our customers safe. Stephen’s 20-year career in security includes time with Deloitte, Microsoft, and Cutter & Buck. Outside of work, he is happiest sailing, travelling, or watching football with his family.

Interview with the Author of the 2000 Love Bug Virus

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/interview-with-the-author-of-the-2000-love-bug-virus.html

No real surprises, but we finally have the story.

The story he went on to tell is strikingly straightforward. De Guzman was poor, and internet access was expensive. He felt that getting online was almost akin to a human right (a view that was ahead of its time). Getting access required a password, so his solution was to steal the passwords from those who’d paid for them. Not that de Guzman regarded this as stealing: He argued that the password holder would get no less access as a result of having their password unknowingly “shared.” (Of course, his logic conveniently ignored the fact that the internet access provider would have to serve two people for the price of one.)

De Guzman came up with a solution: a password-stealing program. In hindsight, perhaps his guilt should have been obvious, because this was almost exactly the scheme he’d mapped out in a thesis proposal that had been rejected by his college the previous year.

Creating an EC2 instance in the AWS Wavelength Zone

Post Syndicated from Bala Thekkedath original https://aws.amazon.com/blogs/compute/creating-an-ec2-instance-in-the-aws-wavelength-zone/


This blog post is contributed by Saravanan Shanmugam, Lead Solution Architect, AWS Wavelength

AWS announced Wavelength at re:Invent 2019 in partnership with Verizon in the US, SK Telecom in South Korea, KDDI in Japan, and Vodafone in the UK and Europe. Following the re:Invent 2019 announcement, on August 6, 2020, AWS announced general availability of one Wavelength Zone with Verizon in Boston, connected to the US East (N. Virginia) Region, and one in San Francisco, connected to the US West (Oregon) Region.

In this blog, I walk you through the steps required to create an Amazon EC2 instance in an AWS Wavelength Zone from the AWS Management Console. We also address the questions asked by our customers regarding the different protocol traffic allowed into and out of AWS Wavelength Zones.

Customers who want to access AWS Wavelength Zones and deploy their applications to a Wavelength Zone can sign up using this link. Customers who have opted in to access the AWS Wavelength Zone can confirm the status in the Account Attributes section of the EC2 console, as shown in the following image.

 Services and features

AWS Wavelength Zones are Availability Zones inside the carrier service provider's network, closer to the edge of the mobile network. Wavelength Zones bring core AWS compute and storage services such as Amazon EC2 and Amazon EBS, which can in turn be used by services such as Amazon EKS and Amazon ECS. We look at Wavelength Zones as part of a hub-and-spoke model, where developers can deploy latency-sensitive, high-bandwidth applications at the edge and non-latency-sensitive, data-persistent applications in the Region.

Wavelength Zones support three Nitro-based Amazon EC2 instance families: t3 (t3.medium, t3.xlarge), r5 (r5.2xlarge), and g4 (g4dn.2xlarge), with the gp2 EBS volume type. Customers can also use Amazon ECS and Amazon EKS to deploy container applications at the edge. Other AWS services, like AWS CloudFormation templates, CloudWatch, IAM resources, and Organizations, continue to work as expected, providing you a consistent experience. You can also leverage the full suite of services like Amazon S3 in the parent Region over AWS's private network backbone. Now that we have reviewed AWS Wavelength and the services and features associated with it, let's talk about the steps to launch an EC2 instance in an AWS Wavelength Zone.

Creating a Subnet in the Wavelength Zone

Once the Wavelength Zone is enabled for your AWS account, you can extend your existing VPC from the parent Region to a Wavelength Zone by creating a new VPC subnet assigned to the AWS Wavelength Zone. Customers can also create a new VPC and then a subnet to deploy their applications in the Wavelength Zone. The following image shows the subnet creation step, where you pick the Wavelength Zone as the Availability Zone for the subnet.
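
For readers who prefer the AWS CLI over the console, a minimal sketch of the same step is shown below. The VPC ID and CIDR block are placeholders, and the Wavelength Zone name shown is the San Francisco zone in us-west-2; confirm the exact name available to your account with describe-availability-zones.

# List the Wavelength Zones your account has opted in to
aws ec2 describe-availability-zones --region us-west-2 \
    --all-availability-zones \
    --filters Name=zone-type,Values=wavelength-zone

# Create a subnet in the VPC and place it in the Wavelength Zone
aws ec2 create-subnet --region us-west-2 \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.10.0/24 \
    --availability-zone us-west-2-wl1-sfo-wlz-1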

Carrier Gateway

We have introduced a new gateway type called the Carrier Gateway, which allows you to route traffic from the Wavelength Zone subnet to the CSP network and to the internet. A Carrier Gateway is similar to an internet gateway in the Region. The Carrier Gateway is also responsible for NATing the traffic to and from the Wavelength Zone subnets, mapping it to the carrier IP addresses assigned to the instances.

Creating a Carrier Gateway

In the VPC console, you can now create a Carrier Gateway and attach it to your VPC.

You select the VPC to which the Carrier Gateway must be attached. There is also an option to select “Route subnet traffic to the Carrier Gateway” in the Carrier Gateway creation step. By selecting this option, you can pick the Wavelength subnets whose default route should point to the Carrier Gateway. This option automatically deletes the existing route table for those subnets, creates a new route table with a default route entry, and attaches the new route table to the subnets you selected. The following picture captures the input required while creating a Carrier Gateway.
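
The same steps can be scripted with the AWS CLI. The following is a minimal sketch with placeholder VPC and route table IDs; the subnet ID reuses the example value from the run-instances command later in this post.

# Create the Carrier Gateway and attach it to the VPC
aws ec2 create-carrier-gateway --region us-west-2 \
    --vpc-id vpc-0123456789abcdef0

# Create a route table with a default route pointing at the Carrier Gateway,
# then associate it with the Wavelength Zone subnet
aws ec2 create-route-table --region us-west-2 --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --region us-west-2 \
    --route-table-id rtb-0aaaabbbbcccc1111 \
    --destination-cidr-block 0.0.0.0/0 \
    --carrier-gateway-id cagw-0123456789abcdef0
aws ec2 associate-route-table --region us-west-2 \
    --route-table-id rtb-0aaaabbbbcccc1111 \
    --subnet-id subnet-0d3c2c317ac4a262a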

 

Creating an EC2 instance in a Wavelength Zone with Private IP Address

Once a VPC subnet is created for the AWS Wavelength Zone, you can launch an EC2 instance with a private IP address using the EC2 launch wizard. In the Configure Instance Details step, you can select the Wavelength Zone subnet that you created in the “Creating a Subnet” section.

Attach an IAM instance profile that includes the SSM role, which allows you to open a shell on the instance through AWS Systems Manager Session Manager. This is a recommended practice for Wavelength Zone instances, as no direct SSH access is allowed from the public internet.

 Creating an EC2 instance in a Wavelength Zone with Carrier IP Address

The instances running in the Wavelength Zone subnets can obtain a Carrier IP address, which is allocated from a pool of IP addresses called a network border group (NBG). To create an EC2 instance in the Wavelength Zone with a carrier-routable IP address, you can use the AWS CLI. You can use the following command to create an EC2 instance in a Wavelength Zone subnet. Note the additional network interface (NIC) option “AssociateCarrierIpAddress” as part of the EC2 run-instances command, as shown below.

aws ec2 --region us-west-2 run-instances --network-interfaces '[{"DeviceIndex":0, "AssociateCarrierIpAddress": true, "SubnetId": "<subnet-0d3c2c317ac4a262a>"}]' --image-id <ami-0a07be880014c7b8e> --instance-type t3.medium --key-name <san-francisco-wavelength-sample-key>

*To use the “AssociateCarrierIpAddress” option in the ec2 run-instances command, use the latest AWS CLI v2.

The carrier IP assigned to the EC2 instance can be obtained by running the following command.

 aws ec2 describe-instances --instance-ids <replace-with-your-instance-id> --region us-west-2

Make the necessary changes to the default security group that is attached to the EC2 instance after running the run-instances command, to allow the required protocol traffic. If you allow ICMP traffic to your EC2 instance, you can test ICMP connectivity to your instance from the public internet.
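
For example, a rule allowing inbound ICMP echo from anywhere could be added with the CLI as sketched below; the security group ID is a placeholder.

# Allow inbound ICMP (ping) from the internet for connectivity testing
aws ec2 authorize-security-group-ingress --region us-west-2 \
    --group-id sg-0123456789abcdef0 \
    --protocol icmp --port -1 --cidr 0.0.0.0/0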

The different protocols allowed in and out of the Wavelength Zone are captured in the following table.

 

TCP connection from   | TCP connection to | Result*
Region                | WL Zones          | Allowed
Wavelength Zones      | Region            | Allowed
Wavelength Zones      | Internet          | Allowed
Internet (TCP SYN)    | WL Zones          | Blocked
Internet (TCP EST)    | WL Zones          | Allowed
Wavelength Zones      | UE (Radio)        | Allowed
UE (Radio)            | WL Zones          | Allowed

 

UDP packets from      | UDP packets to    | Result*
Wavelength Zones      | WL Zones          | Allowed
Wavelength Zones      | Region            | Allowed
Wavelength Zones      | Internet          | Allowed
Internet              | WL Zones          | Blocked
Wavelength Zones      | UE (Radio)        | Allowed
UE (Radio)            | WL Zones          | Allowed

 

ICMP from             | ICMP to           | Result*
Wavelength Zones      | WL Zones          | Allowed
Wavelength Zones      | Region            | Allowed
Wavelength Zones      | Internet          | Allowed
Internet              | WL Zones          | Allowed
Wavelength Zones      | UE (Radio)        | Allowed
UE (Radio)            | WL Zones          | Allowed

Conclusion

We have covered how to create and run an EC2 instance in an AWS Wavelength Zone, the core foundation for application deployments. We will continue to publish blogs to help customers create ECS and EKS clusters in AWS Wavelength Zones and deploy container applications at the mobile carrier edge. We are really looking forward to seeing what you can do with them. AWS would love to get your advice on additional local services/features or other interesting use cases, so feel free to leave us your comments!

 

Alexei Navalny, the poisoning and… love

Post Syndicated from original http://www.gatchev.info/blog/?p=2323

Some day, when we no longer remember this, perhaps someone will read this post. Or perhaps I will reread it, looking for a meeting with my former self… So let me record the circumstances:

A few weeks ago the Russian dissident Alexei Navalny was poisoned with Novichok, a chemical weapon, the same one with which Sergei and Yulia Skripal were poisoned in Salisbury by GRU agents. Thanks to his timely transfer to the Charité university hospital in Berlin he survived, and he is now returning to life, more determined than ever to return to Russia and its politics… But that is not what I am writing about now.

Until recently I considered him just another politician. Yes, incomparably more useful to Russia (and to the world) than Putin, but still merely a somewhat bolder adventurer with political ambitions… Yesterday he published a text that changed my mind. A text that made me think he is cut out to be a politician, and in fact even president of Russia.

(Could all this be a supremely inventive deception by the Russian secret services, with Navalny actually their agent, dedicated to bringing "more of the same"? It is a fact that you cannot be paranoid enough to be a match for those services. But for now I will simplify my life and treat that as pure paranoia. I will change my mind if evidence accumulates that it is true… And there is one more thing. If that were so, someone must have written this text for Navalny. And that someone would probably stand behind him in the future as well, and he is fit to be president. From that point on, whether it is another person or Navalny himself makes no difference.)

Here is the text, shamelessly pirated from Navalny's Facebook account, so that one more copy of it exists on the internet; it is worth it. The Russian original, followed by a translation:

—-

Пост про любовь ❤️.

У нас с Юлей 26 августа была годовщина – 20 лет свадьбы, но я даже рад, что пропустил и могу это написать сегодня, когда знаю о любви немного больше, чем месяц назад.

Вы, конечно, сто раз видели такое в фильмах и читали в книжках: один любящий человек лежит в коме, а другой своей любовью и беспрестанной заботой возвращает его к жизни. Мы, конечно, тоже так действовали. По канонам классических фильмов о любви и коме. Я спал, и спал, и спал. Юля @yulia_navalnaya приходила, говорила со мной, пела меня песенки, включала музыку. Врать не буду – ничего не помню.

Зато расскажу вам, что точно помню сам. Вернее, вряд ли это можно назвать «воспоминание», скорее, набор самых первых ощущений и эмоций. Однако он был для меня так важен, что навсегда отпечатался в голове. Я лежу. Меня уже вывели из комы, но я никого не узнаю, не понимаю, что происходит. Не говорю и не знаю, что такое говорить. И все мое времяпрепровождение заключается в том, что я жду, когда придёт Она. Кто она – неясно. Как она выглядит – тоже не знаю. Даже если мне удаётся разглядеть что-то расфокусированным взглядом, то я просто не в состоянии запомнить картинку. Но Она другая, мне это понятно, поэтому я все время лежу и ее жду. Она приходит и становится главной в палате. Она очень удобно поправляет мне подушку. У неё нет тихого сочувственного тона. Она говорит весело и смеётся. Она рассказывает мне что-то. Когда она рядом, идиотские галлюцинации отступают. С ней очень хорошо. Потом она уходит, мне становится грустно, и я снова начинаю ее ждать.

Ни одну секунду не сомневаюсь, что у этого есть научное объяснение. Ну, типа, я улавливал тембр голоса жены, мозг выделял дофамины, мне становилось легче. Каждый приход становился буквально лечебным, а эффект ожидания усиливал дофаминовое вознаграждение. Но как бы ни звучало классное научно-медицинское объяснение, теперь я точно знаю просто на своём опыте: любовь исцеляет и возвращает к жизни. Юля, ты меня спасла, и пусть это впишут в учебники по нейробиологии.

—-

A post about love ❤️.

On August 26, Yulia and I had an anniversary, 20 years of marriage, but I am actually glad I missed it and can write this today, when I know a little more about love than I did a month ago.

You have, of course, seen it a hundred times in films and read it in books: one loving person lies in a coma, and the other brings them back to life with their love and constant care. We, of course, did the same. By the canons of the classic films about love and comas. I slept, and slept, and slept. Yulia @yulia_navalnaya came, talked to me, sang me songs, played music for me. I will not lie: I remember none of it.

But I will tell you what I definitely do remember myself. Or rather, it can hardly be called a "memory"; it is more a set of the very first sensations and emotions. Yet it was so important to me that it is imprinted in my mind forever. I am lying there. I have already been brought out of the coma, but I recognize no one and do not understand what is happening. I do not speak and do not know what speaking is. And all I do is wait for Her to come. Who she is, I do not know. What she looks like, I do not know either. Even if I manage to make something out with my unfocused gaze, I simply cannot retain the picture. But She is different, I understand that much, so I lie there all the time and wait for her. She comes and becomes the most important person in the room. She fixes my pillow so it is just right. She has no quiet, sympathetic tone. She talks cheerfully and laughs. She tells me things. When she is near, the idiotic hallucinations recede. With her everything is good. Then she leaves, I grow sad, and I start waiting for her again.

I do not doubt for a second that there is a scientific explanation for this. Something like: I picked up the timbre of my wife's voice, my brain released dopamine, and I felt better. Every visit became literally therapeutic, and the effect of anticipation amplified the dopamine reward. But however fine the scientific and medical explanation may sound, I now know from my own experience: love heals and brings you back to life. Yulia, you saved me, and let that be written into the neurobiology textbooks.

Architecture Patterns for Red Hat OpenShift on AWS

Post Syndicated from Ryan Niksch original https://aws.amazon.com/blogs/architecture/architecture-patterns-for-red-hat-openshift-on-aws/

Editor’s note: Although this blog post and its accompanying code make use of the word “Master,” Red Hat is making open source code more inclusive by eradicating “problematic language.” Read more about this.

Introduction

Red Hat OpenShift is a turnkey application platform that provides customers with much more than simple Kubernetes orchestration.

OpenShift customers choose AWS as their cloud of choice because of the efficiency, security, reliability, scalability, and elasticity it provides. Customers seeking to modernize their business, processes, and application stacks are drawn to the rich AWS service and feature sets.

As such, we see some customers migrate from on-premises to AWS or exist in a hybrid context with application workloads running in various locations. For OpenShift customers, this poses a few questions and considerations:

  • What are the recommendations for the best way to deploy OpenShift on AWS?
  • How is this different from what customers were used to on-premises?
  • How does this ensure resilience and availability?
  • Do customers need a multi-region, multi-account approach?

For hybrid customers, there are assumptions and misconceptions:

  • Where does the control plane exist?
  •  Is there replication, and if so, what are the considerations and ramifications?

In this post I will run through some of the more common questions and patterns for OpenShift on AWS, while looking at some of the terminology and conceptual differences of AWS. I’ll explore migration and hybrid use cases and address some misconceptions.

OpenShift building blocks

On AWS, OpenShift 4.x is the norm. To that effect, I will focus on OpenShift 4, but many of the considerations apply to both OpenShift 3 and OpenShift 4.

Let’s unpack some of the OpenShift building blocks. An OpenShift cluster consists of Master, infrastructure, and worker nodes. The Master nodes form the control plane, and the infrastructure nodes provide a routing layer and additional functions, such as logging and monitoring. Worker nodes are the nodes on which customer application container workloads run.

When deployed on-premises, OpenShift nodes are placed in separate network subnets. Depending on distance, latency, and similar factors, a single OpenShift cluster may span two data centers, with some nodes in a subnet in one data center and others in subnets in a different data center. This applies to customers with data centers within a few miles of each other and high-speed connectivity between them. An alternative would be an OpenShift cluster in each data center.

AWS concepts and terminology

At AWS, the concept of “region” is a geolocation, such as EMEA (Europe, Middle East, and Africa) or APAC (Asia Pacific), rather than a data center or specific building. An Availability Zone (AZ) is the closest construct on AWS that maps to a physical data center. Within each region you will find multiple (typically three or more) AZs. Note that a single AZ will contain multiple physical data centers, but we treat it as a single point of failure. For example, an event that impacts an AZ would be expected to impact all the data centers within that AZ. To this effect, customers should deploy workloads spanning multiple AZs to protect against any event that would impact a single AZ.

Read more about Regions, Availability Zones, and Edge Locations.

Deploying OpenShift

When deploying an OpenShift cluster on AWS, we recommend starting with three Master nodes spread across three AWS AZs and three worker nodes spread across three AZs. This allows for the combination of resilience and availability constructs provided by AWS as well as Red Hat OpenShift. The OpenShift installer provides a means of deploying the underlying AWS infrastructure in two ways: installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). Both Red Hat and AWS collect customer feedback and use this to drive recommended patterns that are then included in the OpenShift installer. As such, the OpenShift installer’s IPI mode becomes a living reference architecture for deploying OpenShift on AWS.

Deploying OpenShift

The installer will require inputs for the environment on which it’s being deployed. In this case, since I am deploying on AWS, I will need to provide the AWS region, the AZs or the subnets that relate to those AZs, and the EC2 instance types. The installer will then generate a set of ignition files that will be used during the deployment of OpenShift:

apiVersion: v1
baseDomain: example.com 
controlPlane: 
  hyperthreading: Enabled   
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 
  replicas: 3
compute: 
- hyperthreading: Enabled 
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 
      type: m5.xlarge
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 
fips: false 
sshKey: ssh-ed25519 AAAA... 

What does this look like at scale?

For larger implementations, we would see additional worker nodes spread across three or more AZs. As more worker nodes are added, use of the control plane increases. Initially, scaling the Amazon Elastic Compute Cloud (EC2) instances up to a larger instance type is an effective way of addressing this. It’s possible to add more Master nodes, and we recommend that an odd number of nodes is maintained. It is more common to see the infrastructure nodes scaled out before there is a need to scale Masters. For large-scale implementations, infrastructure functions such as the router, monitoring, and logging can be moved to EC2 instances separate from the Master nodes, as well as from each other. It is important to spread the routing layer across multiple AZs, which is critical to maintaining availability and resilience.

The process of resource separation is now controlled by infrastructure machine sets within OpenShift. An infrastructure machine set needs to be defined, and the infrastructure role then edited so that it moves from the default machine sets to this new infrastructure machine set. Read about this in greater detail.
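
To make that concrete, the following is a minimal, illustrative machine set manifest for infrastructure nodes on AWS. The cluster ID, AMI, and subnet values are placeholders that would come from your own cluster, and several provider fields (IAM instance profile, security groups, user data, and credentials secrets) are omitted for brevity, so treat this as a sketch rather than a complete manifest.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: test-cluster-abc12-infra-us-west-2a      # <cluster-id>-infra-<az>
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: test-cluster-abc12
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: test-cluster-abc12
      machine.openshift.io/cluster-api-machineset: test-cluster-abc12-infra-us-west-2a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: test-cluster-abc12
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: test-cluster-abc12-infra-us-west-2a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""      # nodes join with the infra role
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          ami:
            id: ami-0123456789abcdef0            # placeholder RHCOS AMI for the region
          instanceType: m5.xlarge
          placement:
            availabilityZone: us-west-2a
            region: us-west-2
          subnet:
            filters:
              - name: tag:Name
                values:
                  - test-cluster-abc12-private-us-west-2a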

OpenShift in a multi-account context

Using AWS accounts as a means of separation is a common well-architected pattern. AWS Organizations and AWS Control Tower are services that are commonly adopted as part of a multi-account strategy. This is very much the case when looking to enable teams to use their own accounts and when an account vending process is needed to cater for self-service account provisioning.

OpenShift in a multi-account context

OpenShift clusters are deployed into multiple accounts. An OpenShift dev cluster is deployed into an AWS Dev account. This account would typically have AWS Developer Support associated with it. A separate production OpenShift cluster would be provisioned into an AWS production account with AWS Enterprise Support. Enterprise support provides for faster support case response times, and you get the benefit of dedicated resources such as a technical account manager and solutions architect.

CICD pipelines and processes are then used to control the application life cycle from code to dev to production. The pipelines would push the code to different OpenShift cluster end points at different stages of the life cycle.

Hybrid use case implementation

A common misconception of hybrid implementations is that there is a single cluster or control plane that has worker nodes in various locations. For example, there could be a cluster where the Master and infrastructure nodes are deployed in one location, but also worker nodes registered with this cluster that exist on-premises as well as in the cloud.

Having a single customer control plane for a hybrid implementation, even if technically possible, introduces undesired risks.

There is the potential to take multiple environments with very different resilience characteristics and make them interdependent. This can result in performance and reliability issues, which may increase not only the likelihood of the risk manifesting, but also its impact, or blast radius.

Instead, hybrid implementations will see separate OpenShift clusters deployed into various locations. A customer may deploy clusters on-premises to cater for a workload that can’t be migrated to the cloud in the short term. Separate OpenShift clusters can then be deployed into accounts in AWS for workloads on the cloud. Customers can also deploy separate OpenShift clusters in different AWS regions to cater for proximity to the consuming customer.

Though adding multiple clusters doesn’t add significant administrative overhead, there is a desire to be able to gain visibility and telemetry into all the deployed clusters from a central location. This may see the OpenShift clusters registered with Red Hat Advanced Cluster Management for Kubernetes.

Summary

Take advantage of the IPI model, not only as a guide but also to save time. Make AWS Organizations, AWS Control Tower, and the AWS Service Catalog part of your cloud and hybrid strategies. These will not only speed up migrations but also form building blocks for a modernized business with a focus on enabling prescriptive self-service. Consider Red Hat Advanced Cluster Management for multi-cluster management.

Simplifying Complex: A Multi-Cloud Approach to Scaling Production

Post Syndicated from Lora Maslenitsyna original https://www.backblaze.com/blog/simplifying-complex-a-multi-cloud-approach-to-scaling-production/

How do you grow your production process without missing a beat as you evolve over 20 years from a single magazine to a multichannel media powerhouse? Since there are some cool learnings for many of you, here’s a summary of our recent case study deep dive into Verizon’s Complex Networks.

Founders Marc Eckō of Eckō Unlimited and Rich Antoniello started Complex in 2002 as a bi-monthly print magazine. Over almost 20 years, they’ve grown to produce nearly 50 episodic series in addition to monetizing more than 100 websites. They have a huge audience reaching 21 billion lifetime views and 52.2 million YouTube subscribers with premium distributors including Netflix, Hulu, Corus, Facebook, Snap, MSG, Fuse, Pluto TV, Roku, and more. Their team of creatives produce new content constantly—covering everything from music to movies, sports to video games, and fashion to food—which means that production workflows are the pulse of what they do.

Looking for Data Storage During Constant Production

In 2016, the Complex production team was expanding rapidly, with recent acquisitions bringing on multiple new groups that all had their own workflows. They used a Terrablock by Facilis and a few “homebrewed solutions,” but there was no unified, central storage location, and they were starting to run out of space. As many organizations with tons of data and no space do, they turned to Amazon Glacier.

There were problems:

  • Visibility: They started out with Glacier Vault, but with countless hours of good content, they constantly needed to access their archive—which required accessing the whole thing just to see what was in there.
  • Accessibility: An upgrade to S3 Glacier made their assets more visible, but retrieving those assets still involved multiple steps, various tools, and long retrieval times—sometimes ranging to 12 hours.
  • Complexity: S3 has multiple storage classes, each with its own associated costs, fees, and wait times.
  • Expense: The worst of the issue was that this glacial process didn’t just slow down production, it also incurred huge expenses through egress charges.

The worst thing was, staff would wade through this process only to sometimes realize that the content sent back to them wasn’t what they were looking for. The main issue for the team was that they struggled to see all of their storage systems clearly.

Organizing Storage With Transparent Asset Management

They resolved to fix the problem once and for all by investing in three areas:

  • Empower their team to collaborate and share at the speed of their work.
  • Identify tools that would scale with their team instantaneously.
  • Incorporate off-site storage that mimicked their on-site solutions’ scaling and simplicity.

To remedy their first issue, they set up a centralized SAN—a Quantum StorNext—that allowed the entire team to work on projects simultaneously.

Second, they found iconik, which moved them away from the inflexible on-prem integration philosophies of legacy MAM systems. Even better, they could test-run iconik before committing.

Finally, because iconik is integrated with Backblaze B2 Cloud Storage, the team at Complex decided to experiment with a B2 Bucket. Backblaze B2’s pay-as-you-go service with no upload fees, no deletion fees, and no minimum data size requirements fit the philosophy of their approach.

There was one problem: It was easy enough to point new projects toward Backblaze B2, but they still had petabytes of data they’d need to move to fully enable this new workflow.

Setting Up Active Archive Storage

The post and studio operations and media infrastructure and technology teams estimated that they would have to copy at least 550TB of 1.5PB of data from cold storage for future distribution purposes in 2020. Backblaze partners were able to help solve the problem.

Flexify.IO uses cloud internet connections to achieve significantly faster migrations for large data transfers. Pairing Flexify with a bare-metal cloud services platform to set up metadata ingest servers in the cloud, Complex was able to migrate to B2 Cloud Storage directly with their files and file structure intact. This allowed them to avoid the need to pull 550TB of assets into local storage just to ingest assets and make proxy files.

More Creative Possibilities With a Flexible Workflow

Now, Complex Networks is free to focus on creating new content with lightning-fast distribution. Their creative team can quickly access 550TB of archived content via proxies that are organized and scannable in iconik. They can retrieve entire projects and begin fresh production without any delays. “Hot Ones,” “Sneaker Shopping,” and “The Burger Show”—the content their customers like to consume, literally and figuratively, is flowing.

Is your business facing a similar challenge?

The post Simplifying Complex: A Multi-Cloud Approach to Scaling Production appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

On-Demand SCIM provisioning of Azure AD to AWS SSO with PowerShell

Post Syndicated from Natalie Doerr original https://aws.amazon.com/blogs/security/on-demand-scim-provisioning-of-azure-ad-to-aws-sso-with-powershell/

In this post, I will demonstrate how you can use a PowerShell script to initiate an on-demand synchronization between Azure Active Directory and AWS Single Sign-On (AWS SSO) and avoid the default 40-minute synchronization schedule between both identity providers. This solution helps enterprises quickly synchronize changes made to users, groups, or permissions within Azure AD with AWS SSO. This allows user or permission changes to be quickly reflected in associated AWS accounts.

Prerequisites

You need the following to complete this session:

This post focuses on the steps needed to set up the on-demand sync solution. You can find specifics on how to set up and use PowerShell and the Azure PowerShell modules at Installing Azure PowerShell.
 

Figure 1: Triggering the SCIM Endpoint to sync all users and groups


Grant permission to the Graph API to access the Default Directory in Azure AD

To get started, grant the permissions needed for the application to have access to the directory endpoint.

To grant permissions

  1. Sign in to the Azure Portal and navigate to the Azure AD dashboard.
  2. From the left navigation pane, select App registrations. If you don’t see your application listed, select the All applications tab.
    For this example, I’m using an application named AWS.
     
    Figure 2: Select the AWS app registration


  3. Choose API permissions from the navigation pane.
  4. Choose the Add a permission option.
     
    Figure 3: Select the Add API permission

    Figure 3: Select the Add API permission

  5. From the settings page that opens, choose the Microsoft Graph option.
     
    Figure 4: Request API permissions


    Under What type of permissions does your application require, select Delegated permissions and enter directory.readwrite.all in the permissions search field. Select Directory.ReadWrite.All and choose Add permissions at the bottom of the page.
     

    Figure 5: Request API permissions - Add permissions


  6. On the API permissions page, choose Grant admin consent for Default Directory and select Yes.
     
    Figure 6: Grant permission for the account to have administrator permissions


Create a certificate and secret to access the application

To get started, create a client secret, which grants secure access to the AWS application.

To create a certificate and secret

  1. Choose Certificates & secrets from the left navigation menu and then choose New client secret.
     
    Figure 7: Creating a client secret for 1 year


  2. Select the desired expiration for the client secret.
  3. Provide a description and choose Add.
    1. Copy the value of the secret that’s generated and save it to use later in this process.
    2. After you’ve saved the value to use later, select Home from the top left corner of the screen.
    Figure 8: Make sure you click Copy to clipboard to store the value of the secret


Create a user with permissions to run the code

Now that you’ve given your application access to the directory, let’s create a user and assign the proper permissions to run the code.

To create a user and assign permissions

  1. Choose Azure Active Directory from the Azure services list.
  2. Choose Users and select New user. The User name, First name, and Last name fields are required. In this example, I set the User name and First name to Auth and the Last name to User.
    1. Take note of the password that is set for this user and save it to use later.
    2. Once completed, choose Create.
    Figure 9: Create a user in Azure AD


  3. Select the newly created user from the list.
    1. On the left navigation pane, select Assigned roles.
    2. Choose Add assignments.
    3. Choose Hybrid identity administrator and select Add.
    Figure 10: Assign the user the role to trigger the API


  4. Select Default Directory from the top of the navigation pane.
    1. Choose Enterprise applications.
    2. Choose the AWS application.
    3. Select Assign users and groups.
    Figure 11: Azure Enterprise applications - Assign users and groups


  5. Choose + Add user at the top of the window.
    1. Select the user you created earlier. I select Auth as that was the user I created earlier.
    2. Choose Select and then Assign.
    Figure 12: Select the user we created earlier from Figure 9


     

    Figure 13: Assign the user to the application


  6. Now that you’ve added the user, you can see that the user is assigned to the application.
     
    Figure 14: Screen now showing that the user has been assigned to the application


  7. It’s recommended to log in to the Azure portal as the user you just created in a new incognito or private browser session. As part of the first sign-in, you’ll be prompted to change the password.

Prerequisites to trigger the SCIM endpoint

You need the following items to run the PowerShell code that triggers the endpoint.

  1. From the application registration, retrieve the items shown below. Note that you must use the client secret value that you saved earlier when you created it.
    • Tenant ID
    • Display name
    • Application ID
    • Client secret
    • User name
    • Password
  2. Copy the items to a notepad in the preceding order so you can enter all of them through a single copy and paste action while running the script.
  3. From the menu, select Azure Active Directory.
  4. Choose App registrations and select the AWS App that was set up.
  5. Copy the Application (client) ID and the Directory (tenant) ID.
Figure 15: App registration contains all the items needed for the PowerShell script


Trigger the SCIM endpoint with PowerShell

Now that you’ve completed all of the previous steps, you need to copy the code from the GitHub repository to your local machine and run it. We’ve configured the code to run manually, but you can also automate it to trigger an Azure Automation runbook when users are added to Azure through Alerts. You can also configure CloudWatch Events to run a Lambda function at periodic intervals.
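
The repository script itself isn't reproduced here, but at its core it obtains a token for the user created above and calls the Microsoft Graph synchronization API to start the provisioning job. The following is a minimal, hedged sketch of those calls; the IDs and variable names are placeholders rather than the exact contents of the GitHub script, and the synchronization endpoints first shipped under the /beta Graph endpoint, so substitute /beta for /v1.0 if your tenant does not expose them there.

# Acquire a Graph token for the Auth user (resource owner password flow)
$tenantId = "<directory-tenant-id>"
$body = @{
    client_id     = "<application-client-id>"
    client_secret = "<client-secret-value>"
    grant_type    = "password"
    username      = "auth@yourtenant.onmicrosoft.com"
    password      = "<auth-user-password>"
    scope         = "https://graph.microsoft.com/.default"
}
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body).access_token

# Look up the service principal for the AWS enterprise application
$headers = @{ Authorization = "Bearer $token" }
$sp = (Invoke-RestMethod -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals?`$filter=displayName eq 'AWS'").value[0]

# Find its provisioning (synchronization) job and start it on demand
$job = (Invoke-RestMethod -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$($sp.id)/synchronization/jobs").value[0]
Invoke-RestMethod -Method Post -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$($sp.id)/synchronization/jobs/$($job.id)/start"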

To trigger the SCIM endpoint

  1. Copy the code from the GitHub repository.
  2. Save the code using the code editor of your choice, or you can download Visual Studio Code. Give the file a user-friendly name, such as Sync.ps1.
  3. Navigate to the location where you saved the file and run ./sync.ps1.
  4. When prompted, enter the values from the notepad. You can paste these all at one time so you don’t have to copy and paste each individual item.

    Note: When copying and pasting in Windows, choose the PowerShell icon, then Edit > Paste.

     

    Figure 16: Windows Command Prompt – Select Paste to copy all items needed to trigger the sync


After you paste the values into the PowerShell window, you see the script input as shown in the following screenshot. The client secret and password are secure values and are masked for security purposes.
 

Figure 17: PowerShell script with input values pasted in


After the job has started in PowerShell, two messages are displayed: one indicating that synchronization is starting, followed by a message when synchronization has completed. Both are shown in the following figure.
 

Figure 18: Output from a successful run of the PowerShell script


View the synchronization status and logs

To verify that the job ran successfully, you can check the completed time from the Azure portal. You can verify the time the script ran by viewing the completion time along with the current status.

To view the status and logs

  1. From the menu, choose Azure Active Directory.
  2. Choose Enterprise applications and select the AWS App.
  3. From the left navigation menu, choose Provisioning and then choose View provisioning details. This displays the last time the sync completed.
     
    Figure 19: View the Provisioning details about the job


Summary

In this post, I demonstrated how you can use a PowerShell script to trigger the SCIM endpoint and synchronize Azure AD with AWS Single Sign-On on demand. You can find the code in this GitHub repository and use it to synchronize user and group changes whenever you need to.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Aidan Keane

Aidan is a Senior Technical Account Manager for AWS Enterprise Support. He has been working with Cloud technologies for more than 5 years. Outside of technology, he is a sports enthusiast who enjoys golf, biking, and watching Liverpool FC. He spends his free time with his family and enjoys traveling to Ireland and South America.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/832164/rss

Security updates have been issued by Mageia (mysql-connector-java), openSUSE (chromium, curl, libqt4, and singularity), Red Hat (bash and kernel), SUSE (python-pip and python3), and Ubuntu (busybox, ceph, freeimage, libofx, libpam-tacplus, linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-gcp, linux-gcp-4.15, linux-gke-4.15, linux-hwe, linux-oem, linux-oracle, linux-raspi2, linux-snapdragon, linux, linux-azure, linux-gcp, linux-oracle, novnc, and tnef).

Cook: Security things in Linux v5.7

Post Syndicated from original https://lwn.net/Articles/832132/rss

Kees Cook catches
up with the security-related changes
in the 5.7 kernel.
The kernel’s Linux Security Module (LSM) API provides a way to write
security modules that have traditionally implemented various Mandatory
Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are
numerous and no one LSM uses them all, as some hooks are much more
specialized (like those used by IMA, Yama, LoadPin, etc). There was not,
however, any way to externally attach to these hooks (not even through a
regular loadable kernel module) nor build fully dynamic security policy,
until KP Singh landed the API for building LSM policy using BPF. With this,
it is possible (for a privileged process) to write kernel LSM hooks in BPF,
allowing for totally custom security policy (and reporting).
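
As a rough illustration of what a BPF LSM program looks like, here is a minimal sketch following the pattern used in the kernel's BPF selftests (not code from Cook's article); it assumes a kernel built with CONFIG_BPF_LSM, "bpf" included in the lsm= list, and a generated vmlinux.h.

// SPDX-License-Identifier: GPL-2.0
// Minimal BPF LSM sketch: attach a program to the file_open LSM hook.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm/file_open")
int BPF_PROG(restrict_file_open, struct file *file, int ret)
{
    /* An earlier LSM already denied the open; don't override that decision. */
    if (ret != 0)
        return ret;

    /* Custom policy would go here; returning 0 allows the open,
     * while returning -EPERM would deny it. */
    return 0;
}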
