[$] Maintainers don’t scale

Post Syndicated from original https://lwn.net/Articles/896918/

In something of a grab-bag session, Josef Bacik led a discussion about
various challenges that Linux kernel maintainers face, some of which lead to
burnout. The session was originally
going to be led by Darrick Wong, but he was unable to come to LSFMM, so
Bacik gathered some of Wong’s concerns and combined them with his own in a
joint storage and filesystem session at the
2022 Linux Storage,
Filesystem, Memory-management and BPF Summit
(LSFMM). As part of the
discussion, Bacik presented
his view on what the role of a kernel maintainer should be, which seemed to
resonate with those present.

Evaluating the Security of an Enterprise IoT Deployment at Domino’s Pizza

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/06/06/evaluating-the-security-of-an-enterprise-iot-deployment-at-dominos-pizza/

Recently, I had a great opportunity to work with Domino’s Pizza to evaluate an internally conceived Internet of Things (IoT)-based business solution they had designed and deployed throughout their US store locations. The goal of this research project was to understand the security implications around a large-scale enterprise IoT project and processes related to:

  • Acquisition, implementation, and deployment
  • Technology and functionality
  • Management and support

Laying the groundwork

I sat down with each of the internal teams involved with this project, and we discussed those key areas and how security was defined and applied within each. I gained valuable new insight into how security should play into the design and construction of a large IoT business solution, especially within the planning and acquisition phases. This opportunity allowed me to see how a security-driven organization like Domino’s approaches a large-scale project like this.

I walked away from this phase of the project with some great takeaways that should be considered on all similar projects:

  • Always consider vendor security in your risk planning and modeling
  • Security “must-haves” should map to your organization’s internal security policies

Assessing the security status quo

Also, as part of this research project, I conducted a full ecosystem security assessment, examining all the critical hardware components, operational software, and associated network communications. As with any large-scale enterprise implementation, we did find a few security problems. This is the main reason all projects, even those with security built in from the start, should go through a wide-ranging security assessment to flush out any shortcomings that could be lurking under the hood. Once completed, I delivered a comprehensive report, which the security teams and project developers then used to quickly create solutions for fixing the identified issues.

This also gave me the chance to observe and discuss the processes and methodologies this enterprise organization uses to build fixes and deploy them to production safely, without impacting operations.

During a typical security assessment of an enterprise-wide business solution like this, we are reminded of a couple of key best-practice items that should always be considered:

  • When testing the security of a new technology, use a holistic approach that targets the entire solution's ecosystem.
  • Conduct regular testing of documented security procedures — security is a moving target, and testing these procedures regularly can help identify deficiencies.

Bringing the idea to life

Once an idea is designed, built, and deployed into production, we have to make sure the deployed solution remains fully functional and secure. To accomplish that, we moved the deployed enterprise IoT solution under a structured management and support plan at Domino’s. As you would expect, this support structure was designed to help prevent outages and security incidents that could impact production or lead to loss of services or data.

Again, it was nice to sit down with the various teams involved in the support infrastructure and talk security, and to see not only how it was applied to this specific project but also how the organization applies the same security methodologies across the whole enterprise.

During this final evaluation phase of this project, I was reminded of one of the most critical takeaways that many organizations — unlike Domino’s, who did it correctly — fail to apply: When deploying new embedded technology within your enterprise environment, make sure the technology is properly integrated into your organization’s patch management.

At the conclusion of this research project, I took away a greatly improved understanding of the complexity, difficulties, and security best-practice challenges a large enterprise IoT project could demand. I was pleased to see, and work with, an organization that was up to that challenge and who successfully delivered this project to their business.

If you’d like to read more detail on this security research project, check out my report here.

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/897163/

Security updates have been issued by Debian (clamav, firefox-esr, pidgin, and thunderbird), Fedora (dotnet3.1, firefox, kernel, vim, and webkit2gtk3), Mageia (firefox/nss/nspr, gimp, logrotate, mariadb, thunderbird, trojita, webkit2, and webmin), Oracle (thunderbird), Red Hat (compat-openssl11, postgresql:10, postgresql:12, and thunderbird), Slackware (pidgin), and SUSE (openvpn).

Cloudflare observations of Confluence zero day (CVE-2022-26134)

Post Syndicated from Vaibhav Singhal original https://blog.cloudflare.com/cloudflare-observations-of-confluence-zero-day-cve-2022-26134/

On 2022-06-02 at 20:00 UTC, Atlassian released a Security Advisory relating to a remote code execution (RCE) vulnerability affecting Confluence Server and Confluence Data Center products. This post covers our current analysis of this vulnerability.

When we learned about the vulnerability, Cloudflare’s internal teams immediately engaged to ensure all our customers and our own infrastructure were protected:

  • Our Web Application Firewall (WAF) teams started work on our first mitigation rules that were deployed on 2022-06-02 at 23:38 UTC for all customers.
  • Our internal security team started reviewing our Confluence instances to ensure Cloudflare itself was not impacted.

What is the impact of this vulnerability?

According to Volexity, the vulnerability results in full unauthenticated RCE, allowing an attacker to fully take over the target application.

Active exploits of this vulnerability leverage command injections using specially crafted strings to load a malicious class file in memory, allowing attackers to subsequently plant a webshell on the target machine that they can interact with.

Once the vulnerability is exploited, attackers can implant additional malicious code such as Behinder; a custom webshell called noop.jsp, which replaces the legitimate noop.jsp file located at <Confluence root>/confluence/noop.jsp; and another open source webshell called Chopper.

Our observations of exploit attempts in the wild

Once we learned of the vulnerability, we began reviewing our WAF data to identify activity related to its exploitation. We identified requests matching potentially malicious payloads as early as 2022-05-26 00:33 UTC, indicating that some attackers had knowledge of the exploit before the Atlassian security advisory.

Since our mitigation rules were put in place, we have seen a large spike in activity starting from 2022-06-03 10:30 UTC — a little more than 10 hours after the new WAF rules were first deployed. This large spike coincides with the increased awareness of the vulnerability and the release of public proofs of concept. Attackers are actively scanning for vulnerable applications at the time of writing.

[Image: graph of WAF rule matches over time, showing the spike in exploit attempts after 2022-06-03 10:30 UTC]

Although we have seen valid attack payloads since 2022-05-26, many of the payloads that began matching our initial WAF mitigation rules once the advisory was released were not valid exploits of this specific vulnerability. Examples are shown below:

[Image: examples of invalid payloads that matched the initial WAF mitigation rules]

The activity above indicates that actors were using scanning tools to try to identify the attack vectors. Exact knowledge of how to exploit the vulnerability may have been concentrated among a select group of attackers rather than widespread.

The decline in WAF rule matches in the graph above after 2022-06-03 23:00 UTC is due to our release of improved WAF rules. The updated rules greatly improved accuracy, reducing the number of false positives such as the examples above.

A valid malicious URL targeting a vulnerable Confluence application is shown below:

[Image: example of a valid malicious URL targeting a vulnerable Confluence application]

(Where $HOSTNAME is the host of the target application.)

The URL above runs the contents of the HTTP request POST body via eval(#parameters.data[0]). Normally this is a script that downloads a web shell to the local server, allowing the attacker to run arbitrary code on demand.

Other example URLs, omitting the scheme and hostname, include:

[Image: additional example exploit URLs]

Some of the activity we are observing is indicative of malware campaigns and botnet behavior. It is important to note that, given the payload structure, other WAF rules have also been effective at mitigating particular variations of the attack. These include rules PHP100011 and PLONE0002.

Cloudflare’s response to CVE-2022-26134

We take a defense-in-depth approach in which we use Cloudflare to protect Cloudflare. We had high confidence that we were not impacted by this vulnerability because of the security measures in place. We confirmed this by leveraging our detection and response capabilities to sweep all of our internal assets and logs for signs of attempted compromise.

The main actions we took in response to this incident were:

  1. Gathered as much information as possible about the attack.
  2. Engaged our WAF team to start working on mitigation rules for this CVE.
  3. Searched our logs for any signs of compromise.
  4. Searched the logs from our internal Confluence instances for any signs of attempted exploits, supplementing our assessment with the pattern string provided by Atlassian: "${".
  5. Reviewed any matches to determine whether they were actual exploits. We found no signs that our systems were targeted.
  6. As soon as the WAF team was confident of the quality of the new rules, we started deploying them to all our servers to start protecting our customers as soon as possible. As we also use the WAF for our internal systems, our Confluence instances are also protected by the new WAF rules.
  7. We scrutinized our Confluence servers for signs of compromise and the presence of malicious implants. No signs of compromise were detected.
  8. We deployed rules to our SIEM and monitoring systems to detect any new exploitation attempt against our Confluence instances.

How Cloudflare uses Confluence

Cloudflare uses Confluence internally as our main wiki platform. Many of our teams use Confluence as their main knowledge-sharing platform. Our internal instances are protected by Cloudflare Access. In previous blog posts, we described how we use Access to protect internal resources. This means that every request sent to our Confluence servers must be authenticated and validated in accordance with our Access policies. No unauthenticated access is allowed.

This allowed us to be confident that only Cloudflare users are able to submit requests to our Confluence instances, thus reducing the risk of external exploitation attempts.

What to do if you are using Confluence on-prem

If you run Atlassian's on-premises products, you should upgrade to the latest fixed versions. We advise the following actions:

  1. Add Cloudflare Access as an extra protection layer for all your websites. Easy-to-follow instructions to enable Cloudflare Access are available here.
  2. Enable a WAF that includes protection for CVE-2022-26134 in front of your Confluence instances. For more information on how to enable Cloudflare’s WAF and other security products, check here.
  3. Check the logs from your Confluence instances for signs of exploitation attempts. Look for the strings /wiki/ and ${ in the same request.
  4. Use forensic tools and check for signs of post-exploitation tools such as webshells or other malicious implants.

Indicators of compromise and attack

The following indicators are associated with activity observed in the wild by Cloudflare, as described above. These indicators can be searched against logs to determine whether an environment has been compromised via the Confluence vulnerability.

Indicators of Compromise (IOC)

The following are SHA-256 hashes of malicious files associated with the exploit:

  • 45.64.json (SHA-256: 50f4595d90173fbe8b85bd78a460375d8d5a869f1fef190f72ef993c73534276)
  • 45.64.rar (SHA-256: b85c16a7a0826edbcddbd2c17078472169f8d9ecaa7209a2d3976264eb3da0cc)
  • 45.640.txt (SHA-256: 90e3331f6dd780979d22f5eb339dadde3d9bcf51d8cb6bfdc40c43d147ecdc8c)
  • 45.647.txt (SHA-256: 1905fc63a9490533dc4f854d47c7cb317a5f485218173892eafa31d7864e2043)
  • lan, a Perl script (SHA-256: 5add63588480287d1aee01e8dd267340426df322fe7a33129d588415fd6551fc)
  • jui.sh (SHA-256: 67c2bae1d5df19f5f1ac07f76adbb63d5163ec2564c4a8310e78bcb77d25c988)
  • conf (SHA-256: 281a348223a517c9ca13f34a4454a6fdf835b9cb13d0eb3ce25a76097acbe3fb)

Indicators of Attack (IOA)

  • URL string ${ – used to craft the malicious payload
  • URL string javax.script.ScriptEngineManager – indicative of the ScriptEngineManager class being used to craft malicious payloads

The Week (May 30 – June 4)

Post Syndicated from Йовко Ламбрев original https://toest.bg/editorial-30-may-4-june-2022/

Only when Marin left us did we realize the full scale of his self-giving. We knew he was doing it, but we had no idea how many people, and how many causes and projects, he had taken it upon himself to help.

This week we lost another friend of Toest. At the very beginning, before we had even worked out some of the details for ourselves, we had the luck to share our plans and ideas with Neri Terzieva. It will probably surprise no one that she embraced everything with open arms, without reservations, and gave us the most important thing: courage and support. And we had only just met. Over the years she kept embracing us whenever we ran into each other by chance on the streets of Plovdiv. Our last meeting was exactly like that, unplanned; we crossed paths on Maria Luiza Blvd. She was in a hurry, but hugged us warmly before moving on. We did not suspect that we were saying goodbye.

Emilia Milcheva

The week was not particularly memorable, unless we insist on remembering a whole pile of unfortunate public statements and appearances by our politicians. Emilia Milcheva comments on transport minister Nikolay Sabev's sexually tinged retelling of the relations within the governing coalition. She reminds us that the grass on this terrain was already worn thin by the previous government, and that from the current rulers we continue to expect policy and results instead of clumsy metaphors.

Kalina Konstantinova is the other name from the government that provoked a storm of reactions. According to Svetla Encheva, however, personifying the problem in the deputy prime minister means overlooking the fact that it is rooted in a social system that has been dysfunctional for years, and, of course, in the local, immature attitude toward human rights in general. Read her article "Don't Shoot Kalina Konstantinova."

Yoanna Elmi

"How We Teach and Talk About Communism" is the title of Yoanna Elmi's interview with Luiza Slavkova of the Sofia Platform. The conversation focuses on the summer school in the town of Belene organized by the civic initiative, the memory of the communist dictatorship, and the importance of civic education for young and old. "For us, conversations about the history of communism are not an end in themselves; they are part of our effort to strengthen the importance of democratic culture in our society. The war in Ukraine is the most radical proof of why knowing the past matters," says Slavkova.

Neva Micheva

This week in the "Talk to Neva" column we discuss the pressure that family holidays put on people without families who are not in a festive mood; the tension created by other people's expectations, and our own, on particular dates; and opportunities turned into obligations that bring sadness instead of satisfaction. And a little about World Bicycle Day, Saint Kevin of Ireland, and Walt Whitman, with "clarity and tenderness."

Sevda Semer

In our "On Second Reading" column, this time Sevda Semer has picked a title by a writer born in the middle of the eighteenth century. The book was initially published anonymously because of the social attitudes of the time toward the phenomenon of the woman writer. The epistolary novel Evelina is, on top of that, a social satire, written by Frances Burney when she was 26. It tells of a young and beautiful woman's collision with society and with men. According to Sevda, besides being an entertaining read, the book is valuable as a marker of the development of literature, and of the novel in particular, as well as of the relations between men and women.

And finally, a recommendation from me, particularly fitting given some local events of the week. If you missed New Zealand Prime Minister Jacinda Ardern's speech to Harvard University's graduating class, I advise you to treat yourself to that pleasure, especially if you suffer from that chronic thirst to hear from politicians what needs to be said, the way it needs to be said.

Happy reading and watching!

Source

Happy Blackbirds

Post Syndicated from Нева Мичева original https://toest.bg/govori-s-neva-chestiti-kosove/

Around Easter and Christmas I always think I have a problem with the label "family holidays." I am hardly the only one: there are so many people who have no family, who have been rejected by their families, and so many complicated configurations of human relationships. Those people also feel pain when they are bombarded with messages that a holiday is only real when you spend it with your family.

I grew up in a family where the most spectacular fights happened right on those so-called family holidays. It was something of a tradition. You look at the tree, you smell tangerines, presents lie scattered around you, and your parents' shouting echoes through the stairwell. You want to hide, but there is nowhere to go.

For years I have spent the "family holidays" alone, the result of my personal decision to be myself even at the cost of rejection and discrimination, and of my unwillingness to celebrate with people who love me but among whom I feel I do not belong, or who invite me out of pity. Over time I got used to being alone at Christmas and Easter; I even started to like it. I used to cook myself something nice and plan a film program as consolation. Now I no longer need even that. To me these are simply days like any other. Well, almost.

Dear Neva, how do you see the label "family holidays"? Do you think it is possible for people to celebrate in such a way that those who do not fit the holiday cliché do not feel lonelier and more rejected than they really are?

S.

I leaf through the UN's list of "world days of…" and see a day of light and a day of pulses, a day of jazz and a day of widows, a day of wetlands and a day of asteroids, a day of solidarity and a day of the post: there is a day for every purpose under heaven, as the saying goes, even for toilets. Or for bicycles, June 3, the very day on which I am writing to you, dear S., with gratitude for your letter. A day which, incidentally, Catholics dedicate to the Irish monk Kevin of Glendalough; the story goes that once, as he was praying, a blackbird landed in his palm and laid an egg, so Kevin had to wait until the chick hatched (which is why he is now not only a saint but also the patron of blackbirds). This same June 3 is someone's birthday, another's name day, and a third person's wedding anniversary, a jubilee of cities, institutions, revolutions, and constitutions. In other words, like the remaining 364 days of the year, it is an occasion for celebration for one person or for many.

Just look at what an amusing human paradox: out of horror vacui we have crammed our calendar full of empty days. Because that, after all, is the idea of a holiday: to be a gap, a niche, free space and time for honoring, for reflection, for stepping back from the everyday and the typical. On the other hand, the second we find ourselves facing yet another "day of…", again in a fit of fear of the void, we rush, in the most everyday and typical way, to fill it with rituals, food, and noise. Festivals and carnivals, parades and award ceremonies, premieres and closing receptions are so far from the "non-working, non-attendance" days the dictionary speaks of that they are a more accurate synonym for "throng," "din," and "oversaturation." (Which can be wonderful, terrible, or nothing at all: holidays are containers, not essences.)

There is often something feverish and not quite logical in holiday-making; something not so pleasant in the designated zones of pleasantness; something more crushingly mundane in prescribed festivity than in the most crushing weekday. And it is felt especially strongly on the big holidays, whose possibility has long since hardened into obligation. In leaving possibilities unused there is a measure of joy, insofar as it is a choice, that is, a field for the personality to express itself. In failing to meet obligations, however, there is inevitably guilt. And so, right where things are supposed to be bright and light, traumas sprout. Your letter is about holidays, yet in it I read about a childhood with no way out, about "rejection and discrimination," about "pity," "consolation," and loneliness, and I am not surprised. My impression is that sooner or later all of us could sign a similar letter, if for different reasons.

You are not the problem; the template is hopelessly outdated. It has warped and now produces more pain and scars than meaning and beauty. And not only in our provincial, thoughtless society, clinging to a mishmash of worn-out ideas out of fear, inertia, or calculation; almost manically maintaining a lowered quality of life just so it does not have to take on additional responsibilities; unused to dreaming on a grand scale, yet hugely experienced in disrespecting dreamers. Everywhere, for a vast share of people, mass holidays are not so much rest, or communion beyond the banal, as a package of obstacles to overcome. The characteristic thing about happiness is that it cannot be staged from the outside. And what kind of holiday is there without happiness?

I think that in "family holiday" two old templates are at work, creaking and at other people's expense: one for "holiday," the other for "family." Two notions of the "correct" way to inhabit the world, which almost no one can live up to but almost everyone can imitate. The reality of most of us deviates from the ideal of a close-knit, harmonious, always functioning group of loved ones ready to burst into merriment on command; it deviates so often and so much that in the end the ideal becomes a burden. You ask me whether I think there is a way to celebrate on a large scale, but in such a way that those who do not wish to or cannot share in the holiday are not forced to pretend, to grieve, to feel ashamed. Of course there is. When, after enough conversations and good examples, the ideals become more humane and better suited to reality. It is wonderful to have a family, friends, a beloved profession, a hobby, energy, inspiration. But if you happen to lack one of these, or all of them: a) it may very well still be wonderful, and b) it means you have something else. That's all.

I think our systems of communication, existence, thinking, and so on can be roughly divided into systems of exclusion and systems of wholeness. The former set out lures and drive you to chase them, and punish you when you fail to reach certain minimums. (I am not talking about matters like decency or solidarity, that is, the basic needs of living together, but about those in which there is no real reason for a person to accept outside dictates: what you wear, whom you love, where you feel good being…) The latter use as their backbone a few firm rules without which coexistence would not work, and in everything else they make room for human unusualness and embrace it; they accumulate diversity and encourage it; they branch out as much as possible, like a root system that draws strength through every one of its tips, even the finest.

My friend Silke invented a holiday of her own. She has been running it for some twenty years. It is called Nikabra, and it works like this: on a certain date in December, Silke sends all her friends a group email with a question and an invitation. The question is a riddle to rack your brain over. The invitation is to offer a creative interpretation of a given theme. At the end, a raffle picks one of those who answered the question correctly, and one of those who responded with a poem, a drawing, a video, or whatever other artistic thing occurred to them. The prize is a book or chocolate. I do not remember the exact day, or the origin of the name. Sometimes I take part, sometimes not. I once won (it was something about the black dog of depression), but I do not remember what. That is why I like Nikabra: because I am always invited and never guilty.

I think there are systems whose entire design tells you "you are alone," and systems that radiate exactly the opposite: "you are not alone." The former are not merely harmful but naturally repugnant to human beings, it seems to me. A good system will allow you both to dye eggs at Easter and deck yourself out with as many martenitsi as you can wear, and to forget about them entirely. Both to go to bed early on New Year's Eve, without being woken by the general commotion or by worrying how to justify your absence from it, and to go to the party at your friends' place without analyzing why they invited you. A holiday is real only when you feel good. And without taking account of who you are and what you feel, you cannot be adequately present with others.

"I celebrate myself": that is how a gigantic ode to life by Walt Whitman begins (quoted in the Bulgarian translation by Vladimir Svintila), an ode without a gram of selfishness in it, with a small piece of which I will close.

I merely stir, press, feel with my fingers, and am happy,
To touch my person to some one else's is about as much as I can stand.

Title image: A blackbird's nest © Przemek Pietrak, 2019, CC-BY-3.0 / Wikimedia
P.S. Blackbird eggs are blue, as I learned while looking up how long they need to hatch. Two weeks. Well done, Saint Kevin!
"Talk to Neva" is a column for letters from readers. I have always dreamed of keeping one and having an address where strangers could write to tell me something important about themselves so that we could discuss it, the way a conversation gets going on a train. An incident to mull over, a small puzzle to pick apart a little further, an observation to which I can add another. I am sure that just as I have always wanted to answer letters, there are people who have always wanted to write them. You are welcome.

Source

On Second Reading: "Evelina" by Frances Burney

Post Syndicated from Севда Семер original https://toest.bg/na-vtoro-chetene-evelina-frances-byrney/

None of us reads only the newest books. So why do only those get written about? "On Second Reading" is a column in which we open the lists of books published at least a year ago, read them, and recommend our favorites among them. The column is part of the Toest Readers' Club partner program. The choice of titles, however, belongs solely to the authors, Stefan Ivanov and Sevda Semer, who would recommend these books to you even if there were a way to stroll through the bookstore with them once every two weeks.

"Evelina" by Frances Burney

subtitled "The History of a Young Lady's Entrance into the World," translated from English by Zlatina Sakalova, published by ZHAR – Janet Argirova, 2018

Frances Burney, born in 1752, published the social satire Evelina anonymously. It was her debut, but not her first attempt at writing. The story of her previous book is emblematic of its time: Burney burned her first novel, feeling the pressure of a society that held that writing was no occupation for a lady. She also worried about her father's reaction, yet she kept writing. Thus Evelina appeared, unsigned by its author, and achieved serious success. Shortly afterward her true authorship was revealed. Burney's father never fully accepted her career; he also opposed, for instance, the staging of her plays.

Burney wrote Evelina at 26. Her achievement is not as striking as that of Mary Shelley, who wrote Frankenstein and invented science fiction at 18. But without Burney's strong influence, authors like Jane Austen would hardly have existed. While reading, I thought several times about the writer's age when she produced this social satire. Some of the characters in the novel have a sharp wit, not dulled by rust even when you read it so many centuries later. Of course, that is expected of the genre, and yet some of the observations are so fine that they manage to surprise.

The heroine who gives the novel its name is the 17-year-old unacknowledged daughter of an English aristocrat. She has been raised in the countryside by the Reverend Mr. Villars, a friend of her late mother, who has given her the best possible education and guidance. When Evelina is invited to join a family of friends setting off for London, however, the city proves exactly as exciting to her as its customs are confusing. The refined manners required of her are entirely new to Evelina.

The story is told in the form of letters, most often from Evelina to Mr. Villars. The epistolary form was quite popular at the time, and I was reminded why it is so persuasive: you feel as though you personally are peeking in to learn more.

A fair part of the novel is driven precisely by Evelina's ignorance of the social rules. The biggest mess, however, is made when she runs into her maternal grandmother, who has only recently learned that she has a granddaughter and has come to England from France to look for her. A series of events, some comic, others quite serious, lead the girl to meet part of her family that belongs to a lower class. Here England's cruel class prejudices come into view. The plot continues with Evelina's request that her father acknowledge her as his daughter, which would secure her considerable means, besides clearing her mother's name.

Evelina herself is a typical heroine of the novels of that era. In other words, a perfect angel, a true lady woven from virtues, and on top of that so beautiful that every man who crosses her path falls in love with her. She herself falls in love with only one of them, Lord Orville, who supplies the book's obligatory romantic element. Evelina's language, however, is sharper than one would expect from an angel. What turns her from a shy girl into a young woman with character is precisely the unwanted attention of the men around her. As she herself says:

In fact, this man's boundless vanity provoked me to show a strength of character I had never suspected I possessed. I could not bear for him to imagine that I was at his disposal.

Each of these men wants to seize her by the hand or force her into his carriage. While at first she stays silent in embarrassment, toward the end of the novel she answers one of her admirers like this:

Your opinion, sir, concerning married or single life does not concern me in the least. So do not trouble yourself to discuss their respective merits.

The plot is driven by her unwillingness to be an "instrument, however innocent" when the people around her take the liberty of using her name, or Evelina herself, however they see fit. This brings me back to the author's biography, and not only because of her domineering father. Burney agreed to enter the English court because she realized that, as an unmarried woman of 33 who wanted to keep doing that strange thing called writing, she needed independence and an income (roughly what Virginia Woolf, who called Burney the mother of the English novel, named as the necessary conditions for a woman to write). Thus she created for herself a life unusual for its time, even before her late marriage.

I think about the connections between these stories, the real one and the invented one, and about women seeking their path and their independence; at least insofar as that was possible then, of course (in the novel, some of the most important matters concerning Evelina are decided while she is not in the room). Meanwhile, one of the men in the novel remarks in conversation: "I don't see why the devil women should live past thirty; they only get in other people's way."

The relations between men and women are sometimes oppressive; at other times they become an occasion for amusement at prejudice's expense. Burney, just like Austen, speaks with irony about the notion that reading is a women's pastime. As one gentleman, embarrassed by his ignorance of Horace, explains: "Well, what with the riding aaand so on, one really doesn't have much time left for reading, even at university."

I think the book is valuable as a marker of the development of literature, and of the novel in particular, but also of the relations between men and women. I hope that with more readers it will get a new edition, and with it the additional round of proofreading it needs. This novel, however, is more than a historical document, and in fact the most important thing, at least for me, is that it was written for its readers' entertainment.

Title image: A collage of the cover of the novel Evelina (published by ZHAR – Janet Argirova) and a photo by Amber Faust / Pexels
Active donors to Toest receive a standing 20% discount off the cover price of all titles in the catalog of ZHAR – Janet Argirova, as well as of several other Bulgarian publishers, as part of the Toest Readers' Club partner program. For more information, see toest.bg/club.

Source

IAM policy types: How and when to use them

Post Syndicated from Matt Luttrell original https://aws.amazon.com/blogs/security/iam-policy-types-how-and-when-to-use-them/

You manage access in AWS by creating policies and attaching them to AWS Identity and Access Management (IAM) principals (roles, users, or groups of users) or AWS resources. AWS evaluates these policies when an IAM principal makes a request, such as uploading an object to an Amazon Simple Storage Service (Amazon S3) bucket. Permissions in the policies determine whether the request is allowed or denied.

In this blog post, we will walk you through a scenario and explain when you should use which policy type, and who should own and manage the policy. You will learn when to use the more common policy types: identity-based policies, resource-based policies, permissions boundaries, and AWS Organizations service control policies (SCPs).

Different policy types and when to use them

AWS has different policy types that provide you with powerful flexibility, and it’s important to know how and when to use each policy type. It’s also important for you to understand how to structure your IAM policy ownership to prevent a centralized team from becoming a bottleneck. Explicit policy ownership can allow your teams to move more quickly, while staying within the secure guardrails that are defined centrally.

Service control policies overview

Service control policies (SCPs) are a feature of AWS Organizations. AWS Organizations is a service for grouping and centrally managing the AWS accounts that your business owns. SCPs are policies that specify the maximum permissions for an organization, organizational unit (OU), or an individual account. An SCP can limit permissions for principals in member accounts, including the AWS account root user.

SCPs are meant to be used as coarse-grained guardrails, and they don’t directly grant access. The primary function of SCPs is to enforce security invariants across AWS accounts and OUs in an organization. Security invariants are control objectives or configurations that you apply to multiple accounts, OUs, or the whole AWS organization. For example, you can use an SCP to prevent member accounts from leaving your organization or to enforce that AWS resources can only be deployed to certain Regions.
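
As an illustration of the second example, a minimal sketch of a Region-restriction SCP is shown below. The statement denies requests outside an illustrative pair of approved Regions while exempting a few global services through NotAction; the Region list and the exempted services are assumptions that you would adjust for your own organization.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRequestsOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": [
            "cloudfront:*",
            "iam:*",
            "route53:*",
            "support:*",
            "sts:*"
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
            }
        }
    }]
}

Because SCPs filter permissions rather than grant them, principals in the member accounts still need identity-based policies that allow the actions they use in the approved Regions.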

Permissions boundaries overview

Permissions boundaries are an advanced IAM feature in which you set the maximum permissions that an identity-based policy can grant to an IAM principal. When you set a permissions boundary for a principal, the principal can perform only the actions that are allowed by both its identity-based policies and its permissions boundaries.

A permissions boundary is a type of identity-based policy that doesn’t directly grant access. Instead, like an SCP, a permissions boundary acts as a guardrail for your IAM principals that allows you to set coarse-grained access controls. A permissions boundary is typically used to delegate the creation of IAM principals. Delegation enables other individuals in your accounts to create new IAM principals, but limits the permissions that can be granted to the new IAM principals.

Identity-based policies overview

Identity-based policies are policy documents that you attach to a principal (roles, users, and groups of users) to control what actions a principal can perform, on which resources, and under what conditions. Identity-based policies can be further categorized into AWS managed policies, customer managed policies, and inline policies. AWS managed policies are reusable identity-based policies that are created and managed by AWS. You can use AWS managed policies as a starting point for building your own identity-based policies that are specific to your organization. Customer managed policies are reusable identity-based policies that can be attached to multiple identities. Customer managed policies are useful when you have multiple principals with identical access requirements. Inline policies are identity-based policies that are attached to a single principal. Use inline policies when you want to create least-privilege permissions that are specific to a particular principal.

You will have many identity-based policies in your AWS account that are used to enable access in scenarios such as human access, application access, machine learning workloads, and deployment pipelines. These policies should be fine-grained. You use these policies to directly apply least privilege permissions to your IAM principals. You should write the policies with permissions for the specific task that the principal needs to accomplish.

Resource-based policies overview

Resource-based policies are policy documents that you attach to a resource such as an S3 bucket. These policies grant the specified principal permission to perform specific actions on that resource and define under what conditions this permission applies. Resource-based policies are inline policies. For a list of AWS services that support resource-based policies, see AWS services that work with IAM.

Resource-based policies are optional for many workloads that don’t span multiple AWS accounts. Fine-grained access within a single AWS account is typically granted with identity-based policies. AWS Key Management Service (AWS KMS) keys and IAM role trust policies are two exceptions, and both of these resources must have a resource-based policy even when the principal and the KMS key or IAM role are in the same account. IAM roles and KMS keys behave this way as an extra layer of protection that requires the owner of the resource (key or role) to explicitly allow or deny principals from using the resource. For other resources that support resource-based policies, here are some use cases where they are most commonly used:

  1. Granting cross-account access to your AWS resource.
  2. Granting an AWS service access to your resource when the AWS service uses an AWS service principal. For example, when using AWS CloudTrail, you must explicitly grant the CloudTrail service principal access to write files to an Amazon S3 bucket.
  3. Applying broad access guardrails to your AWS resources. You can see some examples in the blog post IAM makes it easier for you to manage permissions for AWS services accessing your resources.
  4. Applying an additional layer of protection for resources that store sensitive data, such as AWS Secrets Manager secrets or an S3 bucket with sensitive data. You can use a resource-based policy to deny access to IAM principals that shouldn’t have access to sensitive data, even if granted access by an identity-based policy. An explicit deny in an IAM policy always overrides an allow.
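
To make the fourth use case concrete, here is a minimal sketch of such a deny guardrail attached to a bucket that stores sensitive data; the bucket name, account ID, and the exempted role ARN are hypothetical. Because an explicit deny overrides any allow, a principal whose identity-based policy grants s3:GetObject is still blocked unless it is the exempted role.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyReadExceptDataAdmins",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-SENSITIVE-BUCKET/*",
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::123456789012:role/DataAdminRole"
            }
        }
    }]
}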

How to implement different policy types

In this section, we will walk you through an example of a design that includes all four of the policy types explained in this post.

The example that follows shows an application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance and needs to read from and write files to an S3 bucket in the same account. The application also reads (but doesn’t write) files from an S3 bucket in a different account. The company in this example, Example Corp, uses a multi-account strategy, and each application has its own AWS account. The architecture of the application is shown in Figure 1.

Figure 1: Sample application architecture that needs to access S3 buckets in two different AWS accounts

There are three teams that participate in this example: the Central Cloud Team, the Application Team, and the Data Lake Team. The Central Cloud Team is responsible for the overall security and governance of the AWS environment across all AWS accounts at Example Corp. The Application Team is responsible for building, deploying, and running their application within the application account (111111111111) that they own and manage. Likewise, the Data Lake Team owns and manages the data lake account (222222222222) that hosts a data lake at Example Corp.

With that background in mind, we will walk you through an implementation for each of the four policy types and include an explanation of which team we recommend own each policy. The policy owner is the team that is responsible for creating and maintaining the policy.

Service control policies

The Central Cloud Team owns the implementation of the security controls that should apply broadly to all of Example Corp’s AWS accounts. At Example Corp, the Central Cloud Team has two security requirements that they want to apply to all accounts in their organization:

  1. All AWS API calls must be encrypted in transit.
  2. Accounts can’t leave the organization on their own.

The Central Cloud Team chooses to implement these security invariants using SCPs and applies the SCPs to the root of the organization. The first statement in Policy 1 denies all requests that are not sent using SSL (TLS). The second statement in Policy 1 prevents an account from leaving the organization.

This is only a subset of the SCP statements that Example Corp uses. Example Corp uses a deny list strategy, so there must also be an accompanying statement with an Effect of Allow at every level of the organization; that Allow statement isn't shown in Policy 1.

Policy 1: SCP attached to AWS Organizations organization root

{
    "Id": "ServiceControlPolicy",
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyIfRequestIsNotUsingSSL",    
        "Effect": "Deny",    
        "Action": "*",    
        "Resource": "*",    
        "Condition": {
            "BoolIfExists": {
                "aws:SecureTransport": "false"        
            }
        }
    },
    {
        "Sid": "PreventLeavingTheOrganization",
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*"
    }]
}

Permissions boundary policies

The Central Cloud Team wants to make sure that they don’t become a bottleneck for the Application Team. They want to allow the Application Team to deploy their own IAM principals and policies for their applications. The Central Cloud Team also wants to make sure that any principals created by the Application Team can only use AWS APIs that the Central Cloud Team has approved.

At Example Corp, the Application Team deploys to their production AWS environment through a continuous integration/continuous deployment (CI/CD) pipeline. The pipeline itself has broad access to create AWS resources needed to run applications, including permissions to create additional IAM roles. The Central Cloud Team implements a control that requires that all IAM roles created by the pipeline must have a permissions boundary attached. This allows the pipeline to create additional IAM roles, but limits the permissions that the newly created roles can have to what is allowed by the permissions boundary. This delegation strikes a balance for the Central Cloud Team. They can avoid becoming a bottleneck to the Application Team by allowing the Application Team to create their own IAM roles and policies, while ensuring that those IAM roles and policies are not overly privileged.

An example of the permissions boundary policy that the Central Cloud Team attaches to IAM roles created by the CI/CD pipeline is shown below. This same permissions boundary policy can be centrally managed and attached to IAM roles created by other pipelines at Example Corp. The policy describes the maximum possible permissions that additional roles created by the Application Team are allowed to have, and it limits those permissions to some Amazon S3 and Amazon Simple Queue Service (Amazon SQS) data access actions. It’s common for a permissions boundary policy to include data access actions when used to delegate role creation. This is because most applications only need permissions to read and write data (for example, writing an object to an S3 bucket or reading a message from an SQS queue) and only sometimes need permission to modify infrastructure (for example, creating an S3 bucket or deleting an SQS queue). As Example Corp adopts additional AWS services, the Central Cloud Team updates this permissions boundary with actions from those services.

Policy 2: Permissions boundary policy attached to IAM roles created by the CI/CD pipeline

{
    "Id": "PermissionsBoundaryPolicy",
    "Version": "2012-10-17",
    "Statement": [{   
        "Effect": "Allow",    
        "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "sqs:ChangeMessageVisibility",
            "sqs:DeleteMessage",
            "sqs:ReceiveMessage",
            "sqs:SendMessage",
            "sqs:PurgeQueue",
            "sqs:GetQueueUrl",
            "logs:PutLogEvents"        
         ],    
        "Resource": "*"
    }]
}

In the next section, you will learn how to enforce that this permissions boundary is attached to IAM roles created by your CI/CD pipeline.

Identity-based policies

In this example, teams at Example Corp are only allowed to modify the production AWS environment through their CI/CD pipeline. Write access to the production environment is not allowed otherwise. To support the different personas that need to have access to an application account in Example Corp, three baseline IAM roles with identity-based policies are created in the application accounts:

  • A role for the CI/CD pipeline to use to deploy application resources.
  • A read-only role for the Central Cloud Team, with a process for temporary elevated access.
  • A read-only role for members of the Application Team.

All three of these baseline roles are owned, managed, and deployed by the Central Cloud Team.

The Central Cloud Team is given a default read-only role (CentralCloudTeamReadonlyRole) that allows read access to all resources within the account. This is accomplished by attaching the AWS managed ReadOnlyAccess policy to the Central Cloud Team role. You can use the IAM console to attach the ReadOnlyAccess policy, which grants read-only access to all services. When a member of the team needs to perform an action that is not covered by this policy, they follow a temporary elevated access process to make sure that this access is valid and recorded.

A read-only role is also given to developers in the Application Team (DeveloperReadOnlyRole) for analysis and troubleshooting. At Example Corp, developers are allowed to have read-only access to Amazon EC2, Amazon S3, Amazon SQS, AWS CloudFormation, and Amazon CloudWatch. Your requirements for read-only access might differ. Several AWS services offer their own read-only managed policies, and there is also the previously mentioned AWS managed ReadOnlyAccess policy that grants read only access to all services. To customize read-only access in an identity-based policy, you can use the AWS managed policies as a starting point and limit the actions to the services that your organization uses. The customized identity-based policy for Example Corp’s DeveloperReadOnlyRole role is shown below.

Policy 3: Identity-based policy attached to a developer read-only role to support human access and troubleshooting

{
    "Id": "DeveloperRoleBaselinePolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:Describe*",
                "cloudformation:Get*",
                "cloudformation:List*",
                "cloudwatch:Describe*",
                "cloudwatch:Get*",
                "cloudwatch:List*",
                "ec2:Describe*",
                "ec2:Get*",
                "ec2:List*",
                "ec2:Search*",
                "s3:Describe*",
                "s3:Get*",
                "s3:List*",
                "sqs:Get*",
                "sqs:List*",
                "logs:Describe*",
                "logs:FilterLogEvents",
                "logs:Get*",
                "logs:List*",
                "logs:StartQuery",
                "logs:StopQuery"
            ],
            "Resource": "*"
        }
    ]
}

The CI/CD pipeline role has broad access to the account to create resources. Access to deploy through the CI/CD pipeline should be tightly controlled and monitored. The CI/CD pipeline is allowed to create new IAM roles for use with the application, but those roles are limited to only the actions allowed by the previously discussed permissions boundary. The roles, policies, and EC2 instance profiles that the pipeline creates should also be restricted to specific role paths. This enables you to enforce that the pipeline can only modify roles and policies or pass roles that it has created. This helps prevent the pipeline, and roles created by the pipeline, from elevating privileges by modifying or passing a more privileged role. Pay careful attention to the role and policy paths in the Resource element of the following CI/CD pipeline role policy (Policy 4). The CI/CD pipeline role policy also provides some example statements that allow the passing and creation of a limited set of service-linked roles (which are created in the path /aws-service-role/). You can add other service-linked roles to these statements as your organization adopts additional AWS services.

Policy 4: Identity-based policy attached to CI/CD pipeline role

{
    "Id": "CICDPipelineBaselinePolicy",
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",    
        "Action": [
            "ec2:*",
            "sqs:*",
            "s3:*",
            "cloudwatch:*",
            "cloudformation:*",
            "logs:*",
            "autoscaling:*"           
        ],
        "Resource": "*"
    },
    {
        "Effect": "Allow",
        "Action": "ssm:GetParameter*",
        "Resource": "arn:aws:ssm:*::parameter/aws/service/*"
    },
    {
        "Effect": "Allow",
        "Action": [
            "iam:CreateRole",
            "iam:PutRolePolicy",
            "iam:DeleteRolePolicy"
        ],
        "Resource": "arn:aws:iam::111111111111:role/application-roles/*",
        "Condition": {
            "ArnEquals": {
                "iam:PermissionsBoundary": "arn:aws:iam::111111111111:policy/PermissionsBoundary"
            }            
        }
    }, 
    {
        "Effect": "Allow",
        "Action": [
            "iam:AttachRolePolicy",
            "iam:DetachRolePolicy"
        ],
        "Resource": "arn:aws:iam::111111111111:role/application-roles/*",
        "Condition": {
            "ArnEquals": {
                "iam:PermissionsBoundary": "arn:aws:iam::111111111111:policy/PermissionsBoundary"
            },
            "ArnLike": {
                "iam:PolicyARN": "arn:aws:iam::111111111111:policy/application-role-policies/*"
            }          
        }
    }, 
    {
        "Effect": "Allow",
        "Action": [
            "iam:DeleteRole",
            "iam:TagRole",
            "iam:UntagRole",
            "iam:GetRole",
            "iam:GetRolePolicy"
        ],
        "Resource": "arn:aws:iam::111111111111:role/application-roles/*"
    },
      
    {
        "Effect": "Allow",
        "Action": [
            "iam:CreatePolicy",
            "iam:DeletePolicy",
            "iam:CreatePolicyVersion",            
            "iam:DeletePolicyVersion",
            "iam:GetPolicy",
            "iam:TagPolicy",
            "iam:UntagPolicy",
            "iam:SetDefaultPolicyVersion",
            "iam:ListPolicyVersions"
         ],
        "Resource": "arn:aws:iam::111111111111:policy/application-role-policies/*"
    },
    {
        "Effect": "Allow",
        "Action": [
            "iam:CreateInstanceProfile",
            "iam:AddRoleToInstanceProfile",
            "iam:RemoveRoleFromInstanceProfile",
            "iam:DeleteInstanceProfile"
        ],
        "Resource": "arn:aws:iam::111111111111:instance-profile/application-instance-profiles/*"
    },
    {
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": [
            "arn:aws:iam::111111111111:role/application-roles/*",
            "arn:aws:iam::111111111111:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling*"
        ]
    },
    {
        "Effect": "Allow",
        "Action": "iam:CreateServiceLinkedRole",
        "Resource": "arn:aws:iam::111111111111:role/aws-service-role/*",
        "Condition": {
            "StringEquals": {
                "iam:AWSServiceName": "autoscaling.amazonaws.com"
            }
        }
    },
    {
        "Effect": "Allow",
        "Action": [
            "iam:DeleteServiceLinkedRole",
            "iam:GetServiceLinkedRoleDeletionStatus"
        ],
        "Resource": "arn:aws:iam::111111111111:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling*"
    },
    {
        "Effect": "Allow",
        "Action": "iam:ListRoles",
        "Resource": "*"
    },
    {
        "Effect": "Allow",
        "Action": "iam:GetRole",
        "Resource": [
            "arn:aws:iam::111111111111:role/application-roles/*",
            "arn:aws:iam::111111111111:role/aws-service-role/*"
        ]
    }]
}

In addition to the three baseline roles with identity-based policies in place that you’ve seen so far, there’s one additional IAM role that the Application Team creates using the CI/CD pipeline. This is the role that the application running on the EC2 instance will use to get and put objects from the S3 buckets in Figure 1. Explicit ownership allows the Application Team to create this identity-based policy that fits their needs without having to wait and depend on the Central Cloud Team. Because the CI/CD pipeline can only create roles that have the permissions boundary policy attached, Policy 5 cannot grant more access than the permissions boundary policy allows (Policy 2).

If you compare the identity-based policy attached to the EC2 instance’s role (Policy 5, shown first below) with the permissions boundary policy described previously (Policy 2, repeated after it), you can see that the actions allowed by the EC2 instance’s role are also allowed by the permissions boundary policy. Actions must be allowed by both policies for the EC2 instance to perform the s3:GetObject and s3:PutObject actions. Access to create a bucket would be denied even if the role attached to the EC2 instance were given permission to perform the s3:CreateBucket action, because s3:CreateBucket exceeds the permissions allowed by the permissions boundary.

Policy 5: Identity-based policy bound by permissions boundary and attached to the application’s EC2 instance

{
    "Id": "ApplicationRolePolicy",
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:PutObject",
            "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*"
    },
    {
        "Effect": "Allow",
        "Action": [
            "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"
    }]
}

Policy 2: Permissions boundary policy attached to IAM roles created by the CI/CD pipeline.

{
    "Id": "PermissionsBoundaryPolicy",
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "sqs:ChangeMessageVisibility",
            "sqs:DeleteMessage",
            "sqs:ReceiveMessage",
            "sqs:SendMessage",
            "sqs:PurgeQueue",
            "sqs:GetQueueUrl",
            "logs:PutLogEvents"
        ],
        "Resource": "*"
    }]
}
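If you want to verify this intersection programmatically, the IAM policy simulator can evaluate an identity-based policy together with a permissions boundary. The sketch below is illustrative only; it assumes Policy 5 and Policy 2 have been saved to local files, and the object ARN is a sample value.

import boto3

iam = boto3.client("iam")

with open("policy5-identity.json") as f:
    identity_policy = f.read()
with open("policy2-boundary.json") as f:
    boundary_policy = f.read()

# Simulate the identity-based policy together with the permissions boundary.
response = iam.simulate_custom_policy(
    PolicyInputList=[identity_policy],
    PermissionsBoundaryPolicyInputList=[boundary_policy],
    ActionNames=["s3:GetObject", "s3:CreateBucket"],
    ResourceArns=["arn:aws:s3:::DOC-EXAMPLE-BUCKET1/example-object"]
)

# Expect s3:GetObject to be allowed (both policies allow it) and
# s3:CreateBucket to be implicitly denied (neither policy allows it).
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])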

Resource-based policies

The only resource-based policy needed in this example is attached to the bucket in the account external to the application account (DOC-EXAMPLE-BUCKET2 in the data lake account in Figure 1). Both the identity-based policy and resource-based policy must grant access to an action on the S3 bucket for access to be allowed in a cross-account scenario. The bucket policy below only allows the GetObject action to be performed on the bucket, regardless of what permissions the application’s role (ApplicationRole) is granted from its identity-based policy (Policy 5).

This resource-based policy is owned by the Data Lake Team that owns and manages the data lake account (222222222222) and the policy (Policy 6). This allows the Data Lake Team to have complete control over what teams external to their AWS account can access their S3 bucket.

Policy 6: Resource-based policy attached to S3 bucket in external data lake account (222222222222)

{
    "Version": "2012-10-17",
    "Statement": [{
        "Principal": {
            "AWS": "arn:aws:iam::111111111111:role/application-roles/ApplicationRole"
        },
        "Effect": "Allow",    
        "Action": [
            "s3:GetObject"
        ],    
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"
    }]
}
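As a rough sketch of how the Data Lake Team might apply Policy 6, the call below attaches the bucket policy with boto3. It would be run with credentials from the data lake account (222222222222); the bucket name and role ARN are the placeholders used throughout this post.

import json

import boto3

s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Principal": {
            "AWS": "arn:aws:iam::111111111111:role/application-roles/ApplicationRole"
        },
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"
    }]
}

# Attach the resource-based policy to the data lake bucket.
s3.put_bucket_policy(
    Bucket="DOC-EXAMPLE-BUCKET2",
    Policy=json.dumps(bucket_policy)
)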

No resource-based policy is needed on the S3 bucket in the application account (DOC-EXAMPLE-BUCKET1 in Figure 1). Access for the application is granted to the S3 bucket in the application account by the identity-based policy on its own. Access can be granted by either an identity-based policy or a resource-based policy when access is within the same AWS account.

Putting it all together

Figure 2 shows the architecture and includes the seven different policies and the resources they are attached to. The table that follows summarizes the various IAM policies that are deployed to the Example Corp AWS environment, and specifies what team is responsible for each of the policies.

Figure 2: Sample application architecture with CI/CD pipeline used to deploy infrastructure

The numbered policies in Figure 2 correspond to the policy numbers in the following table.

Policy number | Policy description | Policy type | Policy owner | Attached to
1 | Enforce SSL and prevent member accounts from leaving the organization for all principals in the organization | Service control policy (SCP) | Central Cloud Team | Organization root
2 | Restrict maximum permissions for roles created by CI/CD pipeline | Permissions boundary | Central Cloud Team | All roles created by the pipeline (ApplicationRole)
3 | Scoped read-only policy | Identity-based policy | Central Cloud Team | DeveloperReadOnlyRole IAM role
4 | CI/CD pipeline policy | Identity-based policy | Central Cloud Team | CICDPipelineRole IAM role
5 | Policy used by running application to read and write to S3 buckets | Identity-based policy | Application Team | ApplicationRole on EC2 instance
6 | Bucket policy in data lake account that grants access to a role in application account | Resource-based policy | Data Lake Team | S3 Bucket in data lake account
7 | Broad read-only policy | Identity-based policy | Central Cloud Team | CentralCloudTeamReadonlyRole IAM role

Conclusion

In this blog post, you learned about four different policy types: identity-based policies, resource-based policies, service control policies (SCPs), and permissions boundary policies. You saw examples of situations where each policy type is commonly applied. Then, you walked through a real-life example that describes an implementation that uses these policy types.

You can use this blog post as a starting point for developing your organization’s IAM strategy. You might decide that you don’t need all of the policy types explained in this post, and that’s OK. Not every organization needs to use every policy type. You might need to implement policies differently in a production environment than a sandbox environment. The important concepts to take away from this post are the situations where each policy type is applicable, and the importance of explicit policy ownership. We also recommend taking advantage of policy validation in AWS IAM Access Analyzer when writing IAM policies to validate your policies against IAM policy grammar and best practices.
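As an illustration of that last recommendation, IAM Access Analyzer exposes policy validation through the ValidatePolicy API. A minimal boto3 sketch follows, assuming the policy document you want to check is stored in a local file.

import boto3

access_analyzer = boto3.client("accessanalyzer")

with open("policy5-identity.json") as f:
    policy_document = f.read()

# Validate the policy against IAM policy grammar and best practices.
response = access_analyzer.validate_policy(
    policyDocument=policy_document,
    policyType="IDENTITY_POLICY"
)

# Each finding includes a type (ERROR, SECURITY_WARNING, SUGGESTION, WARNING),
# an issue code, and details describing the problem.
for finding in response["findings"]:
    print(finding["findingType"], finding["issueCode"], finding["findingDetails"])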

For more information, including the policies described in this solution and the sample application, see the how-and-when-to-use-aws-iam-policy-blog-samples GitHub repository. The repository walks through an example implementation using a CI/CD pipeline with AWS CodePipeline.

If you have any questions, please post them in the AWS Identity and Access Management re:Post topic or reach out to AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Matt Luttrell

Matt is a Sr. Solutions Architect on the AWS Identity Solutions team. When he’s not spending time chasing his kids around, he enjoys skiing, cycling, and the occasional video game.

Josh Joy

Josh is a Senior Identity Security Engineer with AWS Identity helping to ensure the safety and security of AWS Auth integration points. Josh enjoys diving deep and working backwards in order to help customers achieve positive outcomes. 

Metasploit Weekly Wrap-Up

Post Syndicated from Jeffrey Martin original https://blog.rapid7.com/2022/06/03/metasploit-weekly-wrap-up-160/

Ask and you may receive

Module suggestions for the win: this week we see a new module written by jheysel-r7, based on CVE-2022-26352, that happens to have been suggested by jvoisin in the issue queue last month. This module targets an arbitrary file upload in dotCMS versions before 22.03, 5.3.8.10, and 21.06.7 to obtain shells. Make sure you have permission to target this vulnerability before testing, as one blog post suggests some banking sites may rely on this software.

Everything comes full circle

As the GSoC 2022 program starts to ramp up, red0xff, a contributor who participated in 2020, contributed an enhancement to the SQLi library to give module writers a quicker path to injection on Microsoft SQL Server. The enhancement updates the auxiliary/gather/billquick_txtid_sqli module to showcase the library's utility and can significantly reduce the logic code required in modules, saving about 20% in this one instance.

New module content (2)

  • DotCMS RCE via Arbitrary File Upload by Hussein Daher, Shubham Shah, and jheysel-r7, which exploits CVE-2022-26352 – Adds an exploit module that leverages CVE-2022-26352, an arbitrary file upload vulnerability in dotCMS versions before 22.03, 5.3.8.10, and 21.06.7, that allows an attacker to execute arbitrary code remotely in the context of the user running the application. The module uploads a .jsp payload to the tomcat ROOT directory and accesses it to trigger its execution.
  • MyBB Admin Control Code Injection RCE by Altelus, Christophe De La Fuente, and Cillian Collins, which exploits CVE-2022-24734 – Adds an exploit module that leverages an improper input validation vulnerability in MyBB prior to 1.8.30 to execute arbitrary code in the context of the user running the application. Authentication to the MyBB Admin Control is required for this exploit to work and the account must have rights to add or update settings.

Enhancements and features (2)

  • #16435 from red0xff – This adds support for Microsoft SQL Server to the SQL injection library. Additionally, this updates the auxiliary/gather/billquick_txtid_sqli module to leverage the new library features for exploitation.
  • #16492 from h00die – Improves the nfs_mount scanner module by detecting whether an NFS network share is mountable based on the provided IP address and hostname.

Bugs fixed (2)

  • #16621 from sjanusz-r7 – Fixes a bug where running multi/manage/shell_to_meterpreter to upgrade from a Python Meterpreter session to a Native Meterpreter session would kill the original Meterpreter session.
  • #16640 from zeroSteiner – A bug has been fixed where the Net::LDAP library would fail due to the socket returning less data than was requested. This was addressed by introducing a custom read() method to appropriately handle cases where the socket may return less data than was expected.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

  • If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
  • To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Me on Public-Interest Tech

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/06/me-on-public-interest-tech.html

Back in November 2020, in the middle of the COVID-19 pandemic, I gave a virtual talk at the International Symposium on Technology and Society: “The Story of the Internet and How it Broke Bad: A Call for Public-Interest Technologists.” It was something I was really proud of, and it’s finally up on the net.

NixOS 22.05 released

Post Syndicated from original https://lwn.net/Articles/897045/

Version 22.05 of the NixOS distribution is out. “NixOS is already known as the most up to date distribution and is the distribution with the most packages. This release saw 9345 new packages and 10666 updated packages”. Significant changes include an update to version 2.8.0 of the Nix package manager with experimental support for flakes, GNOME 42, and many new services; see the release notes for details.

Understand resiliency patterns and trade-offs to architect efficiently in the cloud

Post Syndicated from Haresh Nandwani original https://aws.amazon.com/blogs/architecture/understand-resiliency-patterns-and-trade-offs-to-architect-efficiently-in-the-cloud/

This post was originally published in June 2022 and is now updated with more information on efficiently architecting resilient patterns in the cloud.


Teams architecting workloads for resilience in the cloud often need to evaluate multiple factors before they can decide on the most appropriate architecture for their workloads.

Example Corp has multiple applications with varying criticality, and each of their applications have different needs in terms of resiliency, complexity, and cost. They have many choices to architect their workloads for resiliency and cost, but which option suits their needs best? What should they consider when choosing the patterns most appropriate for the needs of their applications?

To help answer these questions, we’ll discuss the five resilience patterns in Figure 1 and the trade-offs to consider when implementing them: 1) design complexity, 2) cost to implement, 3) operational effort, 4) effort to secure, and 5) environmental impact. This will help you achieve varying levels of resiliency and make decisions about the most appropriate architecture for your needs. Our intent is to provide a high-level approach to structure conversations on trade-offs associated with each of these patterns. For a deeper dive on each pattern, please navigate to the Further reading section at the end of this post.

Note: these patterns are not mutually exclusive; you may decide to implement a combination of one or more patterns.

Figure 1. Resilience patterns and trade-offs

What is resiliency? Why does it matter?

The AWS Well-Architected Framework defines resilience as having “the capability to recover when stressed by load (more requests for service), attacks (either accidental through a bug, or deliberate through intention), and failure of any component in the workload’s components.”

To meet your business’ resilience requirements, consider the following core factors as you design your workloads:

  • Design complexity – An increase in system complexity typically increases the emergent behaviors of that system. Each individual workload component has to be resilient, and you’ll need to eliminate single points of failure across people, process, and technology elements. Customers should consider their resilience requirements and decide if increasing system complexity is an effective approach, or if keeping the system simple and using a disaster recovery (DR) plan is more appropriate.
  • Cost to implement – Costs often significantly increase when you implement higher resilience because there are new software and infrastructure components to operate. It’s important for such costs to be offset by the potential costs of future loss.
  • Operational effort – Deploying and supporting highly resilient systems requires complex operational processes and advanced technical skills. For example, customers might need to improve their operational processes using the Operational Readiness Review (ORR) approach. Before you decide to implement higher resilience, evaluate your operational competency to confirm you have the required level of process maturity and skillsets.
  • Effort to secure – Security complexity is less directly correlated with resilience. However, there are generally more components to secure for highly resilient systems. Using security best practices for cloud deployments can achieve security objectives without adding significant complexity even with a higher deployment footprint.
  • Environmental impact – An increased deployment footprint for resilient systems may increase your consumption of cloud resources. However, you can use trade-offs, like approximate computing and deliberately implementing slower response times to reduce resource consumption. The AWS Well-Architected Sustainability Pillar describes these patterns and provides guidance on sustainability best practices.

Pattern 1 (P1): Multi-AZ

P1 is a cloud-based architecture pattern (Figure 2) that introduces Availability Zones (AZs) into your architecture to increase your system’s resilience. The P1 pattern uses a Multi-AZ architecture where applications operate in multiple AZs within a single AWS Region. This allows your application to withstand AZ-level impacts.

As shown in Figure 2, Example Corp deploys their internal employee applications using the P1 pattern. These applications are low business impact and therefore have lower requirements for resiliency.

Example Corp deploys their low-business-impact applications as a single Amazon Elastic Compute Cloud (Amazon EC2) instance managed by an Auto Scaling group. Amazon EC2 health checks automatically detect faults. If an AZ fails, the Amazon EC2 Auto Scaling group recreates the application in another, unaffected AZ.
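As a rough boto3 sketch (not Example Corp's actual configuration), a single-instance Auto Scaling group that spans two AZs might look like the following; the launch template name and subnet IDs are placeholders, and in practice this would usually be defined as infrastructure as code rather than direct API calls.

import boto3

autoscaling = boto3.client("autoscaling")

# One instance, stretched across two AZs: if the instance or its AZ becomes
# impaired, the Auto Scaling group replaces it in a healthy AZ.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="internal-app-asg",
    LaunchTemplate={
        "LaunchTemplateName": "internal-app-template",
        "Version": "$Latest"
    },
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # subnets in two AZs
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300
)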

Figure 2. Multi-AZ deployment pattern (P1)

Trade-offs

P1 is low in several of the trade-off categories and mitigates a disruption to the AZ hosting the application, but this comes at the expense of application recovery time. If an AZ is down, end users’ access to the application is disrupted while replacement resources are provisioned in another AZ. This is known as bimodal behavior.

Pattern 2 (P2): Multi-AZ with static stability

P2 uses multiple instances across multiple AZs within a Region to increase resilience. The pattern uses static stability to prevent bimodal behavior. Statically stable systems remain stable and operate in one mode, irrespective of changes to their operating environment. A key benefit of a statically stable system on AWS is it reduces complexity of recovery during a disruption thanks to pre-provisioned resource capacity. Any resources needed to maintain operations during a disruption, such as the loss of resources in an AZ, already exist and AWS service control planes do not need to be available for recovery to be successful. To learn more about static stability, data planes and control planes read the builder’s library article Static stability using Availability Zones.

As shown in Figure 3, Example Corp has a customer-facing website that has a lower tolerance for downtime. Any time the website is down, it could result in lost revenue. Because of this, the website requires two EC2 instances that are provisioned within two AZs. Using health checks, when an AZ becomes impaired, the website continues to operate as the Elastic Load Balancer diverts traffic away from the impacted AZ. For more on using health checks, see the Implementing health checks article in The Amazon Builder’s Library.

Figure 3. Multi-AZ with static stability pattern (P2)

Trade-offs

P2 mitigates an AZ disruption without downtime to application clients but must be weighed against cost concerns. P1 is less expensive from an infrastructure cost perspective, as it provisions less compute capacity and relies on launching new instances in case of a failure. However, P1’s bimodal behavior can affect your customers during large-scale events.

Implementing P2 requires your application to support distributed operation across multiple instances. If your application can support this pattern, you can deploy your workload to all available AZs (usually 3 or more) across the Region. This will reduce costs associated with over-provisioning because you only have to provision 150% of your capacity across three AZs compared with the 200% in two AZs (as mentioned in our earlier example).
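To make the over-provisioning arithmetic explicit, here is a small illustrative calculation; it simply assumes you size each AZ so that the surviving AZs can absorb the load of a failed one.

def overprovision_factor(num_azs: int) -> float:
    """Total provisioned capacity, as a multiple of peak load, needed so the
    remaining AZs can serve full load after losing one AZ."""
    per_az_share = 1 / (num_azs - 1)  # fraction of peak each AZ must hold
    return per_az_share * num_azs     # total capacity across all AZs

print(overprovision_factor(2))  # 2.0 -> 200% of peak across two AZs
print(overprovision_factor(3))  # 1.5 -> 150% of peak across three AZs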

Pattern 3 (P3): Application portfolio distribution

P3 uses a Multi-Region pattern to increase functional resilience, as demonstrated in Figure 4. It distributes different critical applications in multiple Regions.

Example Corp provides banking services, like credit balance checks, to consumers on multiple digital channels. These services are available to consumers via a mobile application, contact center, and web-based applications. Each digital channel is deployed to a separate Region, which mitigates against a regional service disruption.

For example, a Region with the customers’ mobile application may have a disruption that causes the mobile app to be unavailable, but customers can still access banking services via online banking deployed in an alternate Region. Regional service disruptions are rare, but implementing a pattern like this ensures your users retain access to business-critical services during disruptions.

Figure 4. Application portfolio distribution pattern (P3)

Trade-offs

P3 mitigates the possibility of a regional service disruption impacting a multitude of systems at the same time. Operating an application portfolio that spans multiple Regions requires significant operational planning and management. Isolated functional elements may depend on common downstream systems and data sources that are deployed in a single Region. Therefore, Region-wide events may still cause disruption, but the impact surface area should be reduced.

Pattern 4 (P4): Multi-AZ deployment (multi-Region DR)

Example Corp operates several business-critical services that have a very low tolerance for disruption, such as the ability for consumers to make bank payments. Example Corp reviewed the four common patterns for DR (as defined in Disaster Recovery of Workloads on AWS: Recovery in the Cloud) and decided to use the following sub-patterns for their multi-Region applications:

  • Pilot Light – This pattern works for applications that require RTO/RPO of 10s of minutes. Data is actively replicated and application infrastructure is pre-provisioned in the DR Region. Cost optimization is a key driver here, as the application infrastructure is kept switched-off and only switched-on during the restore event.
  • Warm Standby – This pattern improves restore times significantly compared with pilot light by keeping your applications running in the DR Region but with a reduced capacity. Application infrastructure will be scaled up during a DR event, but this can typically be automated with minimal manual effort. This pattern can achieve RTO/RPO of minutes if implemented correctly.

Trade-offs

P4 mitigates a disruption to a regional service while reducing mitigation costs. Regional DR patterns increase deployment complexity because infrastructure changes need to be synchronized across Regions. Testing resilience is also significantly more complex and includes simulating regional disruptions. Using Infrastructure as Code to automate deployments can help alleviate these issues.

Pattern 5 (P5): Multi-Region active-active

Example Corp’s core banking and Customer Relationship Management applications have zero tolerance for disruption. They use the P5 pattern for deploying these applications because it provides a near-real-time RTO and near-zero data loss (RPO). They run their workloads simultaneously in multiple Regions and can serve traffic from all of them at once. This pattern not only mitigates against regional disruptions but also addresses their zero-tolerance requirements (Figure 5).

Figure 5. Multi-Region active-active pattern (P5)

Trade-offs

P5 mitigates the disruption of a regional service, but it requires additional cost and complexity to deliver an RTO of near zero. Multi-active deployments are generally complex, as they include multiple applications that collaborate to deliver the required business services. If you implement this pattern, you’ll need to account for the asynchronous replication of data across Regions and its impact on data consistency.

Operating this pattern requires a very high level of process maturity, so we recommend customers gradually build towards this pattern by starting with the deployment patterns described earlier.

Conclusion

In this blog post, we introduced five resilience patterns and trade-offs to consider when implementing them. In an effort to help you find the most efficient architecture for your use case, we demonstrated how Example Corp evaluated these options and how they applied them to their business needs.

Further reading

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Correlate IAM Access Analyzer findings with Amazon Macie

Post Syndicated from Nihar Das original https://aws.amazon.com/blogs/security/correlate-iam-access-analyzer-findings-with-amazon-macie/

In this blog post, you’ll learn how to detect when unintended access has been granted to sensitive data in Amazon Simple Storage Service (Amazon S3) buckets in your Amazon Web Services (AWS) accounts.

It’s critical for your enterprise to understand where sensitive data is stored in your organization and how and why it is shared. The ability to efficiently find data that is shared with entities outside your account and the contents of that data is paramount. You need a process to quickly detect and report which accounts have access to sensitive data. Amazon Macie is an AWS service that can detect many sensitive data types. Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and help protect your sensitive data in AWS.

AWS Identity and Access Management (IAM) Access Analyzer helps to identify resources in your organization and accounts, such as S3 buckets or IAM roles, that are shared with an external entity. When you enable IAM Access Analyzer, you create an analyzer for your entire organization or your account. The organization or account you choose is known as the zone of trust for the analyzer. The analyzer monitors the supported resources within your zone of trust. This analyzer enables IAM Access Analyzer to detect each instance of a resource shared outside the zone of trust and generates a finding about the resource and the external principals that have access to it.

Currently, you can use IAM Access Analyzer and Macie to detect external access and discover sensitive data as separate processes. You can join the findings from both to best evaluate the risk. The solution in this post integrates IAM Access Analyzer, Macie, and AWS Security Hub to automate the process of correlating findings between the services and presenting them in Security Hub.

How does the solution work?

First, IAM Access Analyzer discovers S3 buckets that are shared outside the zone of trust. Next, the solution schedules a Macie sensitive data discovery job for each of these buckets to determine if the bucket contains sensitive data. Upon discovery of shared sensitive data in S3, a custom high severity finding is created in Security Hub for review and incident response.
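As a hedged illustration of the middle step (the deployed solution's sda-macie-submit-scan function performs the real work), a one-time Macie classification job for a single flagged bucket can be started with boto3 roughly as follows; the account ID and bucket name are placeholders.

import uuid

import boto3

macie = boto3.client("macie2")

def submit_macie_scan(account_id: str, bucket_name: str) -> str:
    """Start a one-time Macie sensitive data discovery job for one bucket."""
    response = macie.create_classification_job(
        jobType="ONE_TIME",
        name=f"sda-scan-{bucket_name}-{uuid.uuid4().hex[:8]}",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": [bucket_name]}
            ]
        }
    )
    return response["jobId"]

# Example: scan a bucket that IAM Access Analyzer reported as shared externally.
job_id = submit_macie_scan("111122223333", "example-shared-bucket")
print("Started Macie classification job:", job_id)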

Solution architecture

This solution is based on a serverless architecture, and uses the following services:

Figure 1: Architecture diagram

Figure 1 depicts the following process flow:

  1. IAM Access Analyzer detects shared S3 buckets outside of the zone of trust—the organization or account you choose is known as a zone of trust for the analyzer—and creates the event Access Analyzer Finding in EventBridge.
  2. EventBridge triggers the Lambda function sda-aa-save-findings.
  3. The sda-aa-save-findings function records each finding in DynamoDB.
  4. An EventBridge scheduled event periodically starts a new cycle of the Step Function state machine, which immediately runs the Lambda function sda-macie-submit-scan. The template sets a 15-minute interval, but this is configurable.
  5. The sda-macie-submit-scan function reads the IAM Access Analyzer findings that were created by sda-aa-save-findings from DynamoDB.
  6. sda-macie-submit-scan launches a Macie classification job for each distinct S3 bucket that is related to one or more recent IAM Access Analyzer findings.
  7. Macie performs a sensitive data discovery scan on each requested S3 bucket.
  8. The sda-macie-submit-scan function initiates the Lambda function sda-macie-check-status.
  9. sda-macie-check-status periodically checks the status of each Macie classification job, waiting for all the Macie jobs initiated by this solution to complete.
  10. Upon completion of the sda-macie-check-status function, the step function runs the Lambda function sda-sh-create-findings.
  11. sda-sh-create-findings joins the resulting IAM Access Analyzer and Macie datasets for each S3 bucket.
  12. sda-sh-create-findings publishes a finding to Security Hub for each bucket that has both external access and sensitive data (a minimal sketch of this call follows the list).

    Note: The Macie scan is skipped if the S3 bucket is tagged to be excluded or if it was recently scanned by Macie. See the Cost considerations section for more information on custom configurations.

  13. Information security can review and act on the findings shown in Security Hub.
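For step 12, a custom finding can be published to Security Hub with the BatchImportFindings API. The sketch below is illustrative rather than the solution's actual code; the product ARN, account ID, Region, and resource values are placeholders, and the finding follows the AWS Security Finding Format (ASFF).

import datetime

import boto3

securityhub = boto3.client("securityhub")

now = datetime.datetime.utcnow().isoformat() + "Z"

# Minimal ASFF finding for a bucket that is shared externally and contains
# sensitive data (placeholder identifiers throughout).
finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "sda-example-finding-001",
    "ProductArn": "arn:aws:securityhub:us-east-1:111122223333:product/111122223333/default",
    "GeneratorId": "sda-sh-create-findings",
    "AwsAccountId": "111122223333",
    "Types": ["Sensitive Data Identifications"],
    "CreatedAt": now,
    "UpdatedAt": now,
    "Severity": {"Label": "HIGH"},
    "Title": "Sensitive data in S3 bucket shared outside the zone of trust",
    "Description": "Macie found sensitive data in a bucket that IAM Access Analyzer "
                   "reports as accessible from outside the zone of trust.",
    "Resources": [{
        "Type": "AwsS3Bucket",
        "Id": "arn:aws:s3:::example-shared-bucket",
        "Region": "us-east-1"
    }]
}

response = securityhub.batch_import_findings(Findings=[finding])
print("Imported:", response["SuccessCount"], "Failed:", response["FailedCount"])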

Sample Security Hub output

Figure 2 shows the sample findings that Security Hub will present. Each finding includes:

  • Severity
  • Workflow status
  • Record state
  • Company
  • Product
  • Title
  • Resource
Figure 2: Sample Security Hub findings

The output to Security Hub will display a severity of HIGH with workflow NEW, because this is the first time the event has been observed. The record state is ACTIVE because the workflow state is NEW. The title explains the reason for the event.

For example, if potentially sensitive data is discovered in a bucket that is shared outside a zone of trust, selecting an event will display the resources involved in the finding so you can investigate. For more information, see the Security Hub User Guide.

Notes:

  • Detection of public S3 buckets by IAM Access Analyzer will still occur through Security Hub and will be marked as critical severity. This solution does not add to or augment this finding in Security Hub.
  • If a finding in IAM Access Analyzer is archived, the solution does not update the related finding in Security Hub.

Prerequisites

To use this solution, you need the following:

  • Permission to run AWS CloudFormation
  • Permission to create Lambda functions
  • Permission to create DynamoDB tables
  • Permission to create Step Function state machines
  • Permission to create EventBridge event rules
  • Permission to enable IAM Access Analyzer on the account where sensitive discovery is required
  • Permission to enable Macie on the account
  • Permission to enable Security Hub on the account

Deploy the solution

The solution is deployed through AWS CloudFormation, and you can review the template for options to best suit your specific needs.

  1. Sign in to your AWS account at https://aws.amazon.com/console/.
  2. In the AWS Management Console, navigate to the AWS CloudFormation service, and then choose Create stack.
  3. Under Prerequisite – Prepare template, choose Template is ready.
  4. Under Specify template, choose Amazon S3 URL and provide the following URL:
    https://awsiammedia.s3.amazonaws.com/public/sample/936-correlating-aa-findings-macie/sda-cfn.yml
  5. Choose Next.
  6. Enter the stack name.
  7. The Application code location, S3 Bucket and S3 Key fields will be pre-filled.
  8. Under Service Activations, modify the activations based on the services you presently have running in your account.
  9. Modify the Logging and Monitoring settings if required.
  10. (Optional) Set an alert email address for errors.
  11. Choose Next, then choose Next again.
  12. Under Capabilities, select the check box.
  13. Choose Create Stack. The solution will begin deploying; watch for the CREATE_COMPLETE message.
Figure 3: Sample CloudFormation deployment status

The solution is now deployed and will start monitoring for sensitive data that is being shared. It will send the findings to Security Hub for your teams to investigate.

Cost considerations

When you scan large S3 buckets with sensitive data, remember that Macie cost is based on the amount of data scanned. For more information on Macie costs, see Amazon Macie pricing.

This solution allows the following options, which you can use to help manage costs:

  • Use environment variables in Lambda to skip specific tagged buckets
  • Skip recently scanned S3 buckets and reuse prior findings
Figure 4: Screen shot of configurable environment variable

Conclusion

In this post, we discussed how the solution uses Lambda, Step Functions and EventBridge to integrate IAM Access Analyzer with Macie discovery jobs. We reviewed the components of the application, deployed it by using CloudFormation, and reviewed the output a security team would use to take the appropriate actions. We also provided two ways that you can manage the costs associated with the solution.

After you deploy this project, you can modify it to meet your organization’s needs. For example, you can modify the tags to skip specific S3 buckets your organization has already classified to hold sensitive data. Customers who use multiple AWS accounts can designate a centralized Security Hub administrator account to receive the solution alerts from each member account. For more information on this option, see Designating a Security Hub administrator account.

If you have feedback about this post, please submit it in the Comments section below. If you have questions about this post, please start a new thread on the AWS Identity and Access Management forum.

Other resources

For more information on correlating security findings with AWS Security Hub and Amazon EventBridge, refer to this blog post.

Want more AWS Security news? Follow us on Twitter.

Nihar Das

Nihar has over 20 years of experience in various business domains including financial services. As an AWS Senior Solutions Architect, he is passionate about solving challenges in the cloud and helps financial services customers migrate to AWS and support their continued innovation.

Joe Dunn

Joe is an AWS Senior Solutions Architect in Financial Services with over 20 years of experience in infrastructure architecture and migration of business-critical loads to AWS. He helps financial services customers to innovate on the AWS Cloud by providing solutions using AWS products and services.

Armand Aquino

Armand is a solutions architect helping financial services organizations design their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and learning Korean.
